Page File discussion

G

Guest

Great! So any clue as to why Task Manager in W2K shows "memory usage
history" but in XP it was renamed to "page file usage history"? I have to
deduce that the functionality is the same and only the label changed in XP.

What I find stranger yet is that, with no pagefile.sys file present,
downloadable system reporting utilities will still report that I have a
pagefile, but each reports a different pagefile size.

The more I look into this aspect of MS memory management, the more I find
that doesn't add up. Clearly it works well, and I am not disputing how it
works; it is what I find on the web, and even some aspects of MS's own
recommendations, that don't add up. Wouldn't you agree for home/office pc
users [average folks now - no CAD users] that "optimizing" your system around
a disk dump is pointless? Why aren't the recommendations made specific to
servers vs workstations? They certainly shouldn't be treated the same.

I learned under NT how to optimize my servers' pagefiles by "right sizing".
You leave the server up for at least a month, run every app and put the server
through its paces, then look at max mem used in NT Diagnostics. Add 10% to
this max used as a 'just in case' and set min and max the same. I have run one
server for almost 5 years now with a 250 meg pagefile [2 gig of RAM]. It runs
SQL Server as the backend to a financial app for our business office. I have
freed up a gig plus of disk space, my pagefile will never fragment, the system
is very stable, and I will have no dump file. I figured if I needed to I could
always configure the pagefile back to the boot drive so I can get a dump.
But having worked with dumps before, I wouldn't waste my time.
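The "right sizing" procedure described above boils down to simple arithmetic. A minimal sketch (the function name, the 10% margin default, and the sample figure are illustrative, not from any MS tool):

```python
def right_size_pagefile(max_observed_mb, safety_margin=0.10):
    """Compute a fixed pagefile size from the peak memory usage
    observed over a long window (e.g. a month of normal server load)."""
    size_mb = int(max_observed_mb * (1 + safety_margin))
    # Setting initial and maximum size to the same value means the
    # pagefile is allocated once and never grows, so it cannot fragment.
    return {"min_mb": size_mb, "max_mb": size_mb}

# Example: ~227 MB peak observed -> ~250 MB fixed pagefile
print(right_size_pagefile(227))  # {'min_mb': 249, 'max_mb': 249}
```

The key point is that the size is driven by measured peak demand, not by installed RAM.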

This proven configuration is contrary to one MS recommendation and complies
with another. Most sites on the web, though, pooh-pooh setting min and max
the same, yet will suggest a separate partition is optimal without
specifying that it needs to be on a different drive than the one the OS is on.

So where is the optimization? Where is the source of these recommendations,
and what are they based upon? Web sites like
http://www.aumha.org/win5/a/xpvm.htm
worsen the problem. He dismisses as myths standard MS recommendations: the
recommended size [1.5 x RAM] and setting min and max the same [another MS
recommendation]. Yet folks quote this site as factual.

Your thoughts?
 
G

Guest

Thanks David for your time. I do understand the mechanics of paging and the
difference between paging and swapping. I worked with VMS for 4 years,
which btw was designed by the same guy, Dave Cutler, who architected NT and
is on the team for Server 2003.

I have two interests. One is that some of what is being recommended as
optimization techniques does not, in fact, optimize the system at all. For
example, MS's recommendation to configure the system around a disk dump, or
the aforementioned web site's recommendation not to set min and max the same.
The second is: what are these recommendations based upon? Where is the meat
of their reasoning?

A number of Microsoft recommendations for optimization have proven their
worth. Setting min and max the same and putting the pagefile on a different
drive than the OS have good reasoning behind them. Setting min and max the
same prevents pagefile fragmentation and avoids wasting CPU cycles on
resizing. Putting the pagefile on a different drive than the OS avoids disk
I/O contention between system requests and pagefile requests. From my
experience, these are proven optimization techniques.

The others, as I have mentioned, don't seem to have a basis. For example,
with XP, forcing the system to "waste" memory by running with no pagefile
appears to make the OS more stable, and in my experience certainly much
faster, when you have plenty of RAM. Yet there are folks with workstations
holding 2 gigs of RAM who insist they must have a 3-gig pagefile. I don't
see the reasoning. This is why I am asking the questions.
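For contrast, the blanket 1.5 × RAM rule questioned here scales the pagefile with installed memory rather than with observed demand. A quick illustration (function name hypothetical; the 2-gig/3-gig figures are the ones from the discussion above):

```python
def pagefile_1_5x_rule(ram_mb):
    """Blanket rule of thumb: pagefile size = 1.5 x installed RAM,
    regardless of what the workload actually commits."""
    return int(ram_mb * 1.5)

# A 2048 MB workstation gets a 3072 MB pagefile whether its real
# peak usage is 200 MB or 2 GB -- which is the point of contention.
print(pagefile_1_5x_rule(2048))  # 3072
```

Note how more installed RAM, which should reduce paging pressure, paradoxically produces a larger recommended pagefile under this rule.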

Do you think it is reasonable for the average home user wanting to optimize
their system to do it around a disk dump?
Do you see any advantage to oversizing the pagefile on a server system?
 
J

JimWae±

I think it may be useful to point out that "optimization" is being used
ambiguously here: as RAM I/O optimization and as disk-space optimization.

For anyone with a big enough HD, using more space than needed on the HD is
not an issue -- as long as it does not impact other optimization.

Thus using HD space for a disk dump, or oversizing the pagefile, are not
deciding issues for RAM I/O optimization (as long as, as I said, there is
plenty of room on the HD and HD issues do not impact RAM I/O).

The main issue would then be if having no pagefile detracts from RAM I/O in
any way.

In my own case at least, even 1 GB of RAM is not always enough for my
laptop, so I need at least some pagefile.
Last night I was USING 448 MB of pagefile space, and I have had over 1 GB
of space allocated.
I have begun experimenting by setting the memory usage adjustment away from
SYSTEM CACHE and toward PROGRAMS to see if the same conclusion holds. So
far, I do not notice any perceptible speed improvement.
 
