Page File discussion

Guest

Is there anyone hanging out here who feels they understand pagefile
operations and Microsoft recommendations for pagefile configuration? Or
perhaps you may have a lead as to where I can get some questions answered?

For example, in XP, when you bring up Task Manager you have Page File
History. Yet if I set my pagefile to NONE, Task Manager still shows pagefile
usage. With no pagefile.sys file, what is it showing? And why was Memory
Usage History in W2K renamed PF History in XP? After all, with no pagefile
the title PF History is misleading.

MS Knowledge Base article 314482 [How to configure paging files for
optimization and recovery in Windows XP]
http://support.microsoft.com/default.aspx?scid=kb;en-us;314482

It talks about configuring your pagefile around a disk dump. IMHO this is
nonsense for 99.9% of home PC users and most office users of XP. These
folks have no idea how to use the dump utilities. Getting MS tech support to
read their dump file is a last (or never) resort in troubleshooting their
system.

So where is the reasoning in configuring a system around a disk dump that is
never going to be used? This configuration is far from optimal, especially if
everything is on the same disk. Two pagefiles on a single disk consume
resources; they don't optimize them.

The Windows 2000 Server Resource Kit Operations Guide talks about
configuring multiple page files across multiple disks. Other places I have
read say an algorithm will determine which one is the fastest and use that
one. In my limited testing this appears to be the case, with all the other
unused pagefiles being "checked" routinely.

So is this a "dumbed down" procedure for system administrators who don't
have the knowledge to properly place and configure the system's pagefile? If
the OS is only going to use the pagefile on the 15K RPM U320 SCSI drive and
only check the ones on the IDE drives, what is the point of wasting the disk
space and CPU cycles checking the unused pagefiles? The Operations Guide
says something to the effect that multiple pagefiles across multiple disks and
controllers improve performance, since modern disk subsystems can process I/O
concurrently in round-robin fashion.

I believe this to be debatable. To start, disk pagefiles don't improve
performance. System RAM improves performance. Balancing applications across
multiple servers improves performance. There would have to be additional
system overhead as the OS tracks what it wrote where, not to mention the time
it takes to read from multiple disks even if that can be done at exactly the
same time.

Last but not least is the recommendation to put the pagefile on its own
partition to prevent fragmentation. You can accomplish EXACTLY the same
thing by setting the min and max entries the same, as recommended by MS. So
why waste an entire partition on this? Especially since most users read this
as another partition on the same disk as the OS, which defeats the objective.
The goal of pagefile optimization is to eliminate disk I/O contention between
OS system file reads and pagefile operations.
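(For reference, the min=max setting lives in a single registry value. Here is
a minimal C sketch of where it is stored - illustrative only, since the System
control panel is the supported way to change it, and the 1536 MB figures are
just example sizes:)

```c
/* Illustrative sketch only: shows where the pagefile min/max settings
   live. The supported route is the System control panel; the 1536 MB
   figures are example sizes, not a recommendation. Run as Administrator;
   a reboot is required for the change to take effect. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    /* REG_MULTI_SZ entry: "path minMB maxMB", double-NUL terminated. */
    const char data[] = "C:\\pagefile.sys 1536 1536\0";
    LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
        "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
        0, KEY_SET_VALUE, &hKey);
    if (rc != ERROR_SUCCESS) {
        printf("open failed: %ld\n", rc);
        return 1;
    }
    rc = RegSetValueExA(hKey, "PagingFiles", 0, REG_MULTI_SZ,
                        (const BYTE *)data, sizeof(data));
    printf(rc == ERROR_SUCCESS ? "PagingFiles set (reboot required)\n"
                               : "set failed\n");
    RegCloseKey(hKey);
    return 0;
}
```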

Anyone want to jump in?
 
Ted Zieglar

Virtual Memory in Windows XP
http://www.aumha.org/win5/a/xpvm.htm

The advice in "How to configure paging files for optimization..." works
great for me. Perhaps with a better attitude it will work for you, too.
--
Ted Zieglar


Wanderer said:
<original post snipped>
 
JimWae±

While RAM is of course faster than the pagefile, XP is designed to use a
pagefile no matter what you do. Setting the OS to no pagefile will not
improve performance & may detract.

I agree that some KB articles need to be more specific that the "separate
partition" advice is only of value if the partition is on a separate disk.

There are some utility programs which unintentionally result in the pagefile
being deleted & rebuilt on boot-up. If it IS on a (nondedicated) partition
with other files, it will get moved further from the outside of the disk,
AND then, even if MIN=MAX, it CAN become fragmented.

I also agree that there are benefits to making MIN=MAX, as long as MIN is as
big as will ever be needed.
I have 1 GB of RAM & my commit charge peak often exceeds 1.3 GB, and actual
pagefile usage reaches 400 MB often enough.
I have enough HD space, so I keep the pagefile at 1535 MB.


Wanderer said:
<original post snipped>
 
JimWae±

I think the algorithm that determines which pagefile to use is ongoing - not
just run at boot-up.

JimWae± said:
<quoted text snipped>
 
David Candy

Part of an exe file that is running effectively becomes part of the swap file. As code doesn't change, it is thrown away (unlike data, which is written out to pagefile.sys) and just reloaded from the exe. You can't stop XP paging; you can only cripple its paging performance. Also, programs may use their own swap files (although most don't).

--
----------------------------------------------------------
http://www.uscricket.com
Wanderer said:
<original post snipped>
 
Guest

Thanks JimWae for the response.

"While RAM is of course faster than pagefile, XP is designed to use a
pagefile no matter what you do. Setting OS to no pagefile will not improve
performance & may detract. "

This is not true. XP allows you to set the pagefile to NONE. This means NO
pagefile.sys file. I have been running XP on 768 MB of RAM with no pagefile
for over a year. Performance is outstanding. So with no pagefile.sys file,
what is Task Manager showing for page file history?
 
David Candy

If you have more than one pagefile, it uses the one on a drive that isn't being used at that moment.

--
----------------------------------------------------------
http://www.uscricket.com
JimWae± said:
<quoted text snipped>
 
Guest

Thanks David. An executable is loaded into memory. If the memory manager
feels it doesn't need to supply that particular code, it is written to the
pagefile.sys file on the hard drive. Only when the program exits, like when
you close it, does it go out of memory [unless it's a bad app :)]

This difference in where code is found, either in RAM or on the disk
[pagefile or program file], is the difference between a soft page fault
[found in RAM] and a hard page fault [found on disk].
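(For anyone who wants to watch this, a process's cumulative page fault count
is readable from code. A minimal C sketch using PSAPI - note this counter
lumps soft and hard faults together; Performance Monitor's Memory counters
are the usual way to isolate hard faults:)

```c
/* Minimal sketch: read the current process's cumulative page fault
   count. PageFaultCount includes both soft and hard faults; the API
   does not distinguish them. Link with psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        printf("Page faults so far: %lu\n",
               (unsigned long)pmc.PageFaultCount);
        printf("Working set size:   %lu KB\n",
               (unsigned long)(pmc.WorkingSetSize / 1024));
    }
    return 0;
}
```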
 
David Candy

You can't turn off paging, only cripple it. What you are doing is forcing code to be swapped out rather than unused data.

The other consequences are:
1. Disk caching is reduced. I presume registry caching is too.
2. There will be a limit on how many programs can be started.
3. If programs request extra memory they may crash or hang.
 
Guest

Thanks everyone who has responded so far.

Please understand I hear what you are saying. But have you ever thought
about the SOURCE of what you are saying?

For example, it's common knowledge to say XP is a paging OS and must have a
pagefile, and that without a pagefile you cripple performance.

So why does MS give you the option of setting the pagefile to NONE if this
were true? Why do those of us who have set our pagefiles to none have great
performance?
Where is the root of this "common knowledge"? Or is it simply that if you
repeat something often enough it becomes fact even when it is not?

Maybe it would be better to focus on one question:
If I set my pagefile to NONE in XP, what is being shown under Page File
History? If there is no pagefile.sys, then how can there be history? :)
 
David Candy

Code NEVER goes into the pagefile unless the linker marks the code segments as changeable, which is rare. Code is dumped from memory and not written to disk, because the contents are the same in memory and on disk; it is simply reread from the exe when needed. This is why you can't delete a running file but can rename it.

A soft page fault is where the page exists in memory but is not currently assigned to the application.
 
Guest

David Candy said:
You can't turn off paging, only cripple it. What you are doing is forcing code to be swapped out rather than unused data.

The other consequences are:
1. Disk caching is reduced. I presume registry caching is too.
2. There will be a limit on how many programs can be started.
3. If programs request extra memory they may crash or hang.

Hi David: This is where we need to agree on vocabulary. Paging is a memory
operation done in RAM; it is how the memory manager moves things around.
Paging also refers to the operation of taking information out of the working
set and placing it on the hard drive, i.e. pagefile.sys.

Your statement "You can't turn off paging" is a bit vague to me. Setting the
pagefile to none removes pagefile.sys from the drive. So yes, there is no
paging to disk taking place, but this is not the same as "turning off
paging", which still occurs in RAM.

Unless you mean that even with no pagefile.sys file the OS is still paging
to the hard disk???
 
JimWae±

I think I understood him to say that with no pagefile, contents of RAM never
get written to disk - they either 1) stay there or 2) get dumped & need to be
reread from disk (if there) when needed. Thus important contents that are
not already somewhere else on disk could potentially get dumped (unless the
OS refuses to proceed with the request for new data).

Having no pagefile.sys would then have the advantage of never writing to
disk, but no advantage (& a potential disadvantage) when data needs to be
read.
 
Guest

I know this reference is for NT and a bit old, but if you have more modern
links I would love to have them.

David, in response to your saying code is not paged to disk, I have this
quote: "NT's Memory Manager will move parts of the application to a file on
disk called a paging file."
This is from Mark Russinovich's online article Inside Memory Management, Part 1,
found here: http://www.winntmag.com/Windows/Articles/ArticleID/3686/pg/2/2.html

The quote is from the 4th paragraph under Paging.
 
JimWae±

Windows Task Manager displays the commit charge in its Performance tab.
There are three memory readings, measured in kilobytes:

a. Total: refers to the total amount of physical and virtual memory the
computer is using at that moment.
b. Limit: refers to the combined limit of both the physical memory and the
allocated virtual memory.
c. Peak: refers to the highest total system memory usage during the session
in which you are using the computer.

The commit charge will increase when applications are opened and used and
decrease when applications are closed.



Since my Commit Charge often exceeds my RAM (1 GB), it seems I need a paging
file.
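(Those same three figures can be read programmatically, which also answers
what Task Manager is graphing. A minimal C sketch using GetPerformanceInfo
from PSAPI - the values come back in pages, so they are converted using the
reported page size:)

```c
/* Sketch: read the system commit charge - the same Total/Limit/Peak
   figures Task Manager shows. Link with psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    pi.cb = sizeof(pi);
    if (GetPerformanceInfo(&pi, sizeof(pi))) {
        SIZE_T kbPerPage = pi.PageSize / 1024;  /* values are in pages */
        printf("Commit Total: %lu KB\n",
               (unsigned long)(pi.CommitTotal * kbPerPage));
        printf("Commit Limit: %lu KB\n",
               (unsigned long)(pi.CommitLimit * kbPerPage));
        printf("Commit Peak:  %lu KB\n",
               (unsigned long)(pi.CommitPeak * kbPerPage));
    }
    return 0;
}
```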
 
Guest

I think, JimWae [how do you get that ± :) in your name], you are correct
about what David means. I just want to get clarification, since I have seen
it posted on the web that even with no pagefile.sys present the OS memory
manager is still going to page to disk [which I dispute].
 
David Candy

Firstly, any application may make its own swap file. Also, any application can treat any file as a swap file (instead of reading and writing the file directly, you read/write memory and the VMM makes the file saving automatic).
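(What David is describing is a memory-mapped file. A minimal C sketch, with a
made-up filename, of an application paging its data through its own file
instead of pagefile.sys:)

```c
/* Sketch of an application "treating a file as a swap file": map the
   file into memory, write through the pointer, and let the VMM do the
   disk I/O. The filename is just an example. */
#include <windows.h>
#include <string.h>

int main(void)
{
    HANDLE hFile, hMap;
    char *p;

    hFile = CreateFileA("C:\\temp\\appswap.dat",
                        GENERIC_READ | GENERIC_WRITE, 0, NULL,
                        OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    hMap = CreateFileMappingA(hFile, NULL, PAGE_READWRITE,
                              0, 1024 * 1024, NULL);    /* 1 MB mapping */
    if (hMap) {
        p = (char *)MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0, 0);
        if (p) {
            /* Ordinary memory writes; the VMM pages them to the file. */
            strcpy(p, "paged to appswap.dat, not pagefile.sys");
            UnmapViewOfFile(p);
        }
        CloseHandle(hMap);
    }
    CloseHandle(hFile);
    return 0;
}
```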

Windows won't dump anything it can't reload. So that means fonts, exes, dlls, and probably the registry all act as page files. But it's important to note that only part of an exe or dll is swappable in this way - the code segments - as they don't normally change between disk and memory.

What this means is that Windows can only swap a small subsection of memory - which means all swapping will be focused there. Plus, Windows periodically reduces the memory given to a process and sees if a page fault is generated - if not, that working set is reduced (most programs are set to have a 2 MB max of physical memory, which is the default).
"A process has an associated minimum working set size and maximum working set size. Each time you call CreateProcess, it reserves the minimum working set size for the process. The virtual memory manager attempts to keep enough memory for the minimum working set resident when the process is active, but keeps no more than the maximum size." (Process Working Set page from the Platform SDK) [Translated, this means it tries to set the working set as low as possible - it starts at 512 KB, subtracts 4 KB at a time, and will give more than the max if the memory is free]
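(The working set bounds David quotes are visible to any process. A small C
sketch that queries its own working set and then asks the memory manager to
trim it - illustrative only:)

```c
/* Sketch: query this process's working set bounds, then ask Windows
   to trim the working set (the (SIZE_T)-1, (SIZE_T)-1 form). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T wsMin = 0, wsMax = 0;
    HANDLE self = GetCurrentProcess();

    if (GetProcessWorkingSetSize(self, &wsMin, &wsMax))
        printf("working set min %lu KB, max %lu KB\n",
               (unsigned long)(wsMin / 1024),
               (unsigned long)(wsMax / 1024));

    /* Passing -1 for both asks the memory manager to trim this
       process's working set as far as it can right now. */
    SetProcessWorkingSetSize(self, (SIZE_T)-1, (SIZE_T)-1);
    return 0;
}
```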


What this means is that one swaps code, not code and data. As data is likely to be bigger than its working set, this will force a lot of code paging, as that's all XP's got to work with.

Assume you have the MS timezone clock (http://www.microsoft.com/globaldev/outreach/dnloads/downloads.mspx) in the System Notification Area but don't actually click it for a week. As it hasn't executed a line of code in a week, the whole thing will be swapped out: its data to the swap file and its code to the exe (which it throws away from memory, as it knows the code is already written on the disk). Its minimum working set will be zero (it started off at 512 KB but got reduced 4 KB at a time, and as it's got nothing to do it didn't generate a page fault, so the memory manager reduced its working set).

If the page file is turned off, most of the program stays in memory.
 
David Candy

I am saying applications are paged. Windows will dump code segments, relying on the code in the exe to reload them. It will write data segments to the page file. While data segments are in exe files too, they are changeable, so it can't reload them from the exe.
 
Ron Martell

Joshua Bolton said:
<quoted text snipped>

Running without a pagefile is technically possible, just not
advisable.

Having a pagefile actually makes your RAM usage more efficient. This is
because Windows must provide memory address space for all memory requests
that are issued, and pretty much everything - Windows components, device
drivers, and application programs included - issues memory allocation
requests that are larger than what is normally needed. Windows handles this
by using actual RAM addresses only for those parts of the requests that are
actually used and maps the unused portions to locations in the pagefile.
Note that this mapping of unused portions to pagefile locations does not
require any disk activity, just a notation in the memory mapping tables
maintained by the CPU.

In the absence of a pagefile, all requested memory must be mapped to actual
locations in physical RAM, and therefore a considerable quantity of RAM will
be tied up for this basically useless purpose. This could even have a
negative impact on overall performance, as Windows will reduce the disk
cache size in order to provide RAM addresses once all available RAM is fully
committed.
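(Ron's distinction between address space that is merely promised and memory
that actually needs backing shows up directly in the Win32 allocator. A small
C sketch - the sizes are arbitrary:)

```c
/* Sketch: reserving address space costs no commit charge; committing
   pages does, because committed pages must be backed by RAM or the
   pagefile. Sizes are arbitrary. */
#include <windows.h>

int main(void)
{
    /* Reserve 100 MB of address space: no RAM or pagefile backing yet. */
    char *base = (char *)VirtualAlloc(NULL, 100 * 1024 * 1024,
                                      MEM_RESERVE, PAGE_NOACCESS);
    if (!base) return 1;

    /* Commit only the first 64 KB: only this much counts against the
       commit limit (RAM + pagefile). */
    if (VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE))
        base[0] = 1;        /* touching committed memory is fine */
    /* Touching base[64 * 1024] here would crash: reserved, not committed. */

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```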


Additionally, there are other uses for the pagefile that are worth
considering:

1. If you have multiple user names configured on the computer and have the
"fast user switching" option enabled, then when the user is changed the
memory content relating to the previous user will be rolled out to the
pagefile.

2. In the event of a system failure error, the pagefile on the boot drive is
used to receive the contents of the system failure memory dump, unless the
memory dump option is set to "none". The pagefile is then renamed. It is
faster to use an existing file and then rename it than it is to create a new
file, and speed is essential in these error situations.

Hope this clarifies the situation.

Good luck


Ron Martell Duncan B.C. Canada
--
Microsoft MVP
On-Line Help Computer Service
http://onlinehelp.bc.ca

"The reason computer chips are so small is computers don't eat much."
 
