Guest
Is there anyone hanging out here who feels they understand pagefile
operations and Microsoft's recommendations for pagefile configuration? Or
perhaps you have a lead on where I can get some questions answered?
For example, in XP, when you bring up Task Manager, you have Page File Usage
History. Yet if I set my pagefile to NONE, Task Manager still shows pagefile
usage. With no pagefile.sys file, what is it showing? And why was Memory Usage
History in W2K renamed PF Usage History in XP? After all, with no pagefile,
the title PF History is misleading.
MS Knowledge Base article 314482 [How to configure paging files for
optimization and recovery in Windows XP]
http://support.microsoft.com/default.aspx?scid=kb;en-us;314482
It talks about how you should configure your pagefile around a disk dump.
IMHO this is nonsense for 99.9% of home PC users and most office users of XP.
These folks have no idea how to use the dump utilities. Getting MS tech
support to read their dump file is a last (or never) resort when
troubleshooting their system.
So where is the reasoning in configuring a system around a disk dump that is
never going to be used? This configuration is far from optimal, especially if
both pagefiles end up on the same disk. Two pagefiles on a single disk
consume resources; they don't optimize them.
The Windows 2000 Server Resource Kit Operations Guide talks about
configuring multiple pagefiles across multiple disks. Other places I have
read say an algorithm will determine which pagefile is fastest and use that
one. In my limited testing this appears to be the case, with all the other
unused pagefiles being "checked" routinely.
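A toy sketch of that "use the fastest, check the rest" behavior: if the
pager has an observed access latency for each pagefile, it would simply
prefer the lowest-latency device. (This is my illustration only, not the
actual NT memory-manager algorithm; the paths and latency figures are made
up.)

```python
def pick_pagefile(latencies_ms):
    """Return the pagefile path with the lowest measured access latency.

    latencies_ms: dict mapping pagefile path -> latency in milliseconds.
    (Hypothetical selection rule, not the real NT pager.)
    """
    return min(latencies_ms, key=latencies_ms.get)

latencies = {
    r"C:\pagefile.sys": 9.5,   # 7200 rpm IDE system disk (invented number)
    r"D:\pagefile.sys": 8.8,   # second IDE disk (invented number)
    r"E:\pagefile.sys": 3.2,   # 15K rpm U320 SCSI disk (invented number)
}

print(pick_pagefile(latencies))  # the SCSI pagefile wins
```

Under this rule the IDE pagefiles exist but never win the comparison, which
matches what I saw: they sit idle apart from the routine checks.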
So is this a "dumbed down" procedure for system administrators who don't
have the knowledge to properly place and configure the system's pagefile? If
the OS is only going to use the pagefile on the 15K rpm U320 SCSI drive and
only check the ones on the IDE drives, what is the point of wasting the disk
space and CPU cycles checking the unused pagefiles? The Operations Guide
says something to the effect that multiple pagefiles across multiple disks
and controllers improve performance, since modern disk subsystems can
process I/O concurrently in a round-robin fashion.
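For clarity, here is what the Operations Guide's round-robin claim amounts
to: page-out operations dealt in turn across the pagefiles so separate
disks can service them concurrently. (Again my own illustration; the paths
and the count of nine writes are made up.)

```python
from itertools import cycle

pagefiles = [r"C:\pagefile.sys", r"D:\pagefile.sys", r"E:\pagefile.sys"]

def distribute_writes(n_pages, targets):
    """Assign n_pages page-out operations to targets in round-robin order
    and return how many each target received."""
    counts = {t: 0 for t in targets}
    for _, target in zip(range(n_pages), cycle(targets)):
        counts[target] += 1
    return counts

print(distribute_writes(9, pagefiles))
# each of the three pagefiles receives 3 of the 9 writes
```

Note that this even spread is exactly what my objection below is about: the
round-robin scheme sends a third of the traffic to each disk regardless of
whether one of them is far faster than the others.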
I believe this to be debatable. To start, disk pagefiles don't improve
performance; system RAM improves performance. Balancing applications across
multiple servers improves performance. There would have to be additional
system overhead as the OS tracks what it wrote where, not to mention the
time it takes to read back from multiple disks, even if the reads can be
issued at exactly the same time.
Last but not least is the recommendation to put the pagefile on its own
partition to prevent fragmentation. You can accomplish EXACTLY the same
thing by setting the min and max entries to the same value, as recommended
by MS: a fixed-size pagefile never grows, so it never fragments. So why
waste an entire partition on this? Especially since most users read this as
another partition on the same disk as the OS, which defeats the objective.
The goal of pagefile optimization is to eliminate disk I/O contention
between reads of the OS system files and pagefile operations.
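For anyone who wants to see where the min/max settings actually live: they
are stored in the REG_MULTI_SZ value PagingFiles under
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management,
with each entry of the form "path initial_MB maximum_MB". A small parser as
a sketch (the 1536 MB size below is just an example value, not a
recommendation):

```python
def parse_pagingfiles(entry):
    """Split one PagingFiles registry entry into (path, initial_mb, maximum_mb).

    Entries look like r"C:\pagefile.sys 1536 1536"; rsplit from the right
    so a path containing spaces would still parse.
    """
    path, initial, maximum = entry.rsplit(" ", 2)
    return path, int(initial), int(maximum)

entry = r"C:\pagefile.sys 1536 1536"   # min == max: the file never grows
path, lo, hi = parse_pagingfiles(entry)
assert lo == hi  # fixed size, so no growth and no growth-induced fragmentation
print(path, lo, hi)
```

Setting both numbers equal in the GUI writes exactly this kind of entry,
which is why the dedicated-partition advice buys you nothing extra on the
fragmentation front.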
Anyone want to jump in?