Andrew Mayo
As with all demand-paged virtual memory operating systems, Windows
uses an LRU algorithm to determine when a memory page can be stolen.
Even on systems where the virtual memory commit is not substantially
greater than the actual physical memory, this has the side-effect that
pages which have not been touched recently are likely to be stolen by
routine system tasks, such as services, which run continuously.
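(For what it's worth, you can watch this happening: something along the
lines of the rough, untested sketch below will report the process's
resident working set and cumulative page fault count via
GetProcessMemoryInfo, and you can see the working set shrink while the
app sits idle. The once-a-minute polling interval is just an arbitrary
choice of mine.)

    /* Rough, untested sketch: poll this process's working set and fault
       count so the trimming is visible.  Link with psapi.lib; the
       one-minute interval is arbitrary. */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        PROCESS_MEMORY_COUNTERS pmc;

        for (;;)
        {
            if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
            {
                /* WorkingSetSize = pages currently resident in RAM;
                   PageFaultCount = cumulative faults (hard and soft). */
                printf("working set: %lu KB, page faults: %lu\n",
                       (unsigned long)(pmc.WorkingSetSize / 1024),
                       (unsigned long)pmc.PageFaultCount);
            }
            Sleep(60 * 1000);   /* sample once a minute while the app idles */
        }
        return 0;
    }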
Now, the issue here is that we have some software - a large and fairly
complex app that uses MSDE - where the user might be interacting
with it and then go off to lunch. When they come back, some of its code
and data has been stolen by other tasks, and consequently there will be
a flurry of page faults as soon as the user starts interacting with the
program again.
They perceive this as poor and erratic performance, even though once
they've done something, it'll be quick again. Alas, there are lots of
'corners' they can go into before things are fast again.
Unfortunately, adding a reasonably generous amount of physical memory
doesn't seem to help a great deal. There are enough processes running
all the time that even with quite a large amount of physical memory,
sooner or later - and an hour is a long time in scheduling terms -
the application's memory is going to get stolen (SQL Server is probably a
prime culprit here, I suspect, even with its memory upper limit
configured to a fixed amount).
My question is: is there any way to indicate to the OS that a
particular process is to have priority, in the sense that its memory
pages are not to be stolen UNLESS absolutely necessary? Does running
the process with a higher scheduling priority (using 'start') affect
the paging algorithm in any way, such that the process's pages are less
likely to be stolen? Or is the only cure to add so much physical
memory that the virtual commit is less than physical memory?
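(As an aside, the Win32 calls that look relevant here are
SetProcessWorkingSetSize and VirtualLock - a rough, untested sketch
follows, where the 64/128 MB figures and 'hotBuffer' are purely
placeholders of mine - but I don't know whether locking pages like this
is really advisable, or whether it just pushes the memory pressure onto
everything else on the box.)

    /* Rough, untested sketch: raise the minimum working set and lock the
       hottest buffer into physical memory.  The 64/128 MB figures and
       'hotBuffer' are placeholders; the calls may also need suitable
       privileges for the account running the app. */
    #include <windows.h>

    BOOL PinHotPages(void *hotBuffer, SIZE_T hotSize)
    {
        /* Ask for a larger guaranteed-resident working set for this process. */
        if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                      64 * 1024 * 1024,     /* minimum */
                                      128 * 1024 * 1024))   /* maximum */
            return FALSE;

        /* Lock the critical region; it has to fit inside the minimum
           working set requested above. */
        return VirtualLock(hotBuffer, hotSize);
    }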
NB: you can see this phenomenon in standard Windows apps - e.g. Word.
Use Word and then leave it idle for a while (but don't do anything
else on the PC). After you return, try invoking some less-used
function - options dialogues, spell checking, etc. - and note that the
system will fault the code and data in from disk. After this the
function is quite fast until you leave the system idle for an hour or
so, after which you will observe that re-using the function will again
fault pages in.
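(The only crude workaround I can think of - and it is a hack, sketched
below and untested - is a background thread that walks the hot data
once a minute so its pages stay recently used; the 16 MB buffer size,
4 KB page size and one-minute interval are just assumptions of mine.)

    /* Untested sketch of a "keep warm" hack: touch one byte per page of a
       hot buffer periodically so the trimmer sees the pages as recently
       used.  Buffer size, page size and interval are assumptions. */
    #include <windows.h>

    static volatile LONG g_keepWarm = 1;

    static DWORD WINAPI KeepWarmThread(LPVOID param)
    {
        volatile BYTE *hot = (volatile BYTE *)param;  /* hot data region */
        SIZE_T size = 16 * 1024 * 1024;
        SIZE_T page = 4096;
        SIZE_T i;

        while (g_keepWarm)
        {
            for (i = 0; i < size; i += page)
                (void)hot[i];                 /* touch one byte per page */
            Sleep(60 * 1000);
        }
        return 0;
    }

    /* usage: CreateThread(NULL, 0, KeepWarmThread, hotBuffer, 0, NULL); */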