Why does XP reset the size of my virtual memory?

Guest

We've observed this on all our XP Pro computers; regardless of the virtual
memory custom size setting, upon reboot the pagefile is reset to 1.5 times my
physical memory.

To change the size of an existing pagefile, I must:
* Set XP to "No paging file" and reboot. After reboot, there is no
pagefile, as expected.
* Set a custom pagefile at 2048 (initial & maximum size boxes) & reboot.
Upon restart, there is a 2048M pagefile, as expected.
* When rebooted again (without doing anything else), lo and behold, the
pagefile is 1534M, 1.5 times my physical memory (1G of RAM).
* If I open the Virtual Memory dialog it still shows 2048 in the two custom
size text boxes, but the currently allocated size is 1534M, and nothing I can
do will increase the size aside from the procedure listed (which is lost
after the next reboot).
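For what it's worth, the mystery number itself lines up with XP's 1.5×RAM default. A quick back-of-the-envelope sketch (an editorial illustration, not part of the original report):

```python
# Illustrative check: XP's default initial pagefile size is 1.5 x installed
# RAM, which is exactly the size these machines keep reverting to.

def default_initial_pagefile_mb(ram_mb: int) -> int:
    """Return XP's default initial pagefile size (1.5 x RAM), in MB."""
    return ram_mb * 3 // 2

# A "1 GB" machine is often reported as slightly under 1024 MB because a
# little RAM is reserved by the BIOS/hardware -- which could explain why
# the poster sees ~1534 MB rather than a round 1536 MB.
print(default_initial_pagefile_mb(1024))   # nominal 1 GB -> 1536
print(default_initial_pagefile_mb(1023))   # RAM as reported -> 1534
```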


As stated, this is observed on all our XP Pro computers (not our 2000
machines), laptops or desktops, Dells, Toshibas, HPs, etc.

I've seen other people complain about this (thanks google.com), but can't
find a solution.

Any ideas?

Ken
 
Ron Martell

Ratboy Ken said:
We've observed this on all our XP Pro computers; regardless of the virtual
memory custom size setting, upon reboot the pagefile is reset to 1.5 times my
physical memory.

<snip>

See MVP Alex Nichol's article on Virtual Memory in Windows XP at
http://aumha.org/win5/a/xpvm.htm especially the section dealing with
problems. And note the section dealing with problems caused by Norton
Antivirus if your computers happen to be infested with that
virus///product.

Good luck


Ron Martell Duncan B.C. Canada
--
Microsoft MVP
On-Line Help Computer Service
http://onlinehelp.bc.ca

"The reason computer chips are so small is computers don't eat much."
 
Alex Nichol

Ratboy Ken said:
We've observed this on all our XP Pro computers; regardless of the virtual
memory custom size setting, upon reboot the pagefile is reset to 1.5 times my
physical memory.

To change the size of an existing pagefile, I must:
* Set XP to "No paging file" and reboot. After reboot, there is no
pagefile, as expected.
* Set a custom pagefile at 2048 (initial & maximum size boxes) & reboot.
Upon restart, there is a 2048M pagefile, as expected.
* When rebooted again (without doing anything else), lo and behold, the
pagefile is 1534M, 1.5 times my physical memory (1G of RAM).

First: do not try to waste that much disk space on the page file with
1 GB of RAM.

When you change placement to a different drive, you need to leave a
token file - init 2 max 50 is fine - on C *and always click Set before
going on*. For your RAM size I would just set the initial at 100, max
maybe 1000 or 2000. The file is unlikely to grow beyond the 100 unless
you are running programs that make *very* heavy demands on memory. The
n×RAM rule is plain bad advice. See more at my page
www.aumha.org/win5/a/xpvm.htm
 
Guest

Thanks Alex,

We are running very heavy memory demanding scientific programs that need
virtual memory in excess of 1G. If a computer has only 512M of RAM, the
FORTRAN program will not load unless the virtual memory is set to 1G.
However, under XP, every time they reboot their pagefile is reset to ~750M,
and the program will not load. The complaints from our processors in the
field are becoming annoying! This never happened under Win2000.

We only have one drive per computer, so the pagefile must sit on C drive.
And I can change the size once without problem, but then a subsequent reboot
and it goes to 1.5xRAM... always.

As for Norton Antivirus, I am running SystemWorks 2004 Pro with all the
updates as posted by Symantec. Everybody in this newsgroup seems to be
pointing the finger of blame on them... maybe so. But searching Symantec's
website, I couldn't find anything related to this problem. Fortunately I will
be taking receipt of a brand new virgin laptop from Dell in the next few days
(no antivirus solution preinstalled) and I am going to test this hypothesis.

The only way to know is to experiment.

Ken
 
Alex Nichol

perris said:
Ken, of course you need a bigger pagefile than what Alex is suggesting;
everybody does, and the settings he suggests will always cause
performance hits.

That I flatly deny. *Everybody* does not, though this particular
workload probably does.
 
Ron Martell

deny it all you like, it is a fact... data in memory has to be charged
against the hard drive, where the memory manager will release and reclaim
as a user's needs indicate; private writeable address space is charged against
the pagefile, and your settings eliminate the area on the hard drive that's
necessary for the memory manager to be efficient.

Balderdash. Hogwash. Malarkey.

The pagefile in Windows XP is used for the following specific
functions:

1. To compensate for the lack of sufficient physical RAM in the
computer to meet the total memory load requirements.

2. To fulfill the memory address space requirements for the unused
portions of memory allocation requests. And all that is required in
Windows XP is that the maximum size limit be large enough so that the
page file could be increased if these memory items were to be actually
used.

3. To hold the memory content for other users if multiple users are
configured on the computer and if the "fast user switching" option is
in effect.

4. To receive the contents of the "system failure memory dumps" if a
memory dump option has been configured. This requires that the
pagefile be located on the boot drive.
you seem to think that the pagefile only needs to be as big as the
amount of info that's likely to be written to it.

no, all data in memory NEEDS its OWN area on the hard drive for the
memory manager

Wrong. All requested memory must be allocated memory address space.
These addresses may be either in RAM or in the page file. There is no
requirement for the same items to be allocated space in both. The
memory manager decides which items will be in RAM and which will be in
the pagefile on a dynamic basis and swaps them back and forth as
requirements change.
you also seem quite willing to invite expansion, and more then willing
to have users invite expansion

for this, I am amazed

For meeting the memory address requirements of the unused portion of
memory allocation requests, all that is required is that the potential
to enlarge the pagefile exist. It does not have to actually be
enlarged for these items. The unused portions of requested memory
can easily aggregate to several hundred megabytes even on a lightly
used system. For example, on my own system these items currently total
208 MB. Task Manager tells me that the Page File Usage is 308 MB
while another utility tells me that there is only 94 MB of active
memory content residing in the page file. And the actual size of the
pagefile is 160 MB, which is the minimum that I have set for it.


Ron Martell Duncan B.C. Canada
--
Microsoft MVP
On-Line Help Computer Service
http://onlinehelp.bc.ca

"The reason computer chips are so small is computers don't eat much."
 
Ron Martell

perris said:
incorrect, just about your entire response to my post

No. It is your opinions about how memory management works in Windows
that are incorrect.
quote;

The pagefile in Windows XP is used for the following specific
functions in Windows XP:

1. To compensate for the lack of sufficient physical RAM in the
computer to meet the total memory load requirements. unquote

incorrect, the pagefile is only to provide backing store for modified
pages so they can be considered by the memory manager

That is totally incorrect. Modified memory pages are not backed up
anywhere, except when done so by the program. And I know of no
application program or Windows component which does this.

everything that's not modified gets backed to the hard drive from whence
it came... the exe, dll, whatever


That is also incorrect. Items loaded from the hard drive which have
not been modified have no need for backup because the original is
still there intact on the hard drive.
you would need over 2 gigs to run xp without backing store, and
everything in memory needs a place, its OWN place on the hard drive so
the memory manager will be able to consider it in the memory management
model

That does not make sense.
as far as memory dumps, ya, that's a good purpose of it too...you do
have that one right

you also got the following right;

quote;

The
memory manager decides which items will be in RAM and which will be in
the pagefile on a dynamic basis and swaps them back and forth as
requirements change. unquote

memory is addressed first and allocated second, the memory manager
needs an area to perform these "swaps" you're speaking about, the
"swap" space is not shared

The memory manager has its own area in RAM which is specifically
marked as not to be paged out.

and your claim

quote


For meeting the memory address requirements of the unused portion of
memory allocation requests all that is requires is that the potential
to enlarge the pagefile exist. It does not have to actually be
enlarged for these items. The unused portions of requested memory
can easily aggregate to several hundred megabytes even on a lightly
used system. For example, on my own system these items currently total
208 MB. Task Manager tells me that the Page File Usage is 308 MB
while another utility tells me that there is only 94 MB of active
memory content residing in the page file. And the actual size of the
pagefile is 160 MB, which is the minimum that I have set for it.

unquote

ridiculous... you think that just because only 94 MB of information is
actually in the pagefile, that's all that the memory manager is
charging to it?

Yup. All there is is all there is.
absurd... Task Manager is exactly correct in what is charged to the
pagefile, yet you want to circumvent this strategy.

Task Manager is including the *unused* portions of requested memory in
the pagefile count because that is where these unused addresses have
been mapped to.
the kernel team IS EXTREMELY happy with the memory management model of
the NT kernel, and yes, they do know how much memory is available on
modern systems

they've continued to raise, not lower, the recommendations for pagefile;
they continue their recommendations in Server 2003, and in Longhorn

how you can defend circumventing the recommendation of the kernel team
when as a fact you KNOW there is no performance to gain for the effort,
and quite a bit to lose for some users (as the very poster of this
thread clearly demonstrates), is irresponsible in every sense

What recommendations of the Kernel team are you talking about? Where
are they published? The 1.5 times RAM figure was arrived at for two
reasons:
1. to satisfy the marketing types, who wanted a simplistic value even
if it was largely bogus.
2. to ensure that the pagefile was always big enough to hold a complete
memory dump in the event of a system failure class error. The actual
truth is that the complete memory dump is usable in something like
.0000000000000000001% of the system failure memory errors that occur.
For the overwhelming majority of these errors the STOP code is fully
adequate for diagnosing the problem, and for the remainder the 64kb
small memory dump is sufficient. The complete memory dump was
instituted as an aid for testing and development and for a few large
corporate and government users where this information might actually
be used on occasion.

just because YOU don't put your memory under pressure doesn't mean I
don't, or my customers, or the people that work for me, and those that
mess with these machines because of the irresponsible papers that
"recommend" lowering the default for absolutely NO reason whatsoever

in case you didn't know it, Microsoft even wrote hacks for users to
overcome the 4 gig threshold for page files


The fundamental basic fact regarding the pagefile is that the size
requirements are *inversely* related to the amount of RAM.

More RAM means less pagefile and less RAM means more pagefile.

RAM plus pagefile equals a constant value for any given system
provided all other factors (application and data file load in
particular) are held constant.

Any formula that relates pagefile size to some multiple of the amount
of RAM only proves that the author of that formula does not understand
how memory management works.
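Ron's inverse relationship can be put as a toy model (the workload figure below is made up purely for illustration, not measured from any system in this thread):

```python
# Toy model of the claim: for a fixed workload, RAM + pagefile need is
# roughly constant, so pagefile requirements fall as RAM rises.

def pagefile_needed_mb(workload_mb: int, ram_mb: int, floor_mb: int = 0) -> int:
    """Pagefile space a fixed workload still needs once RAM is subtracted."""
    return max(workload_mb - ram_mb, floor_mb)

WORKLOAD = 1536  # hypothetical total memory load, in MB

for ram in (256, 512, 1024, 2048):
    print(ram, pagefile_needed_mb(WORKLOAD, ram))
# More RAM -> less pagefile needed; a 1.5 x RAM formula would instead
# make the pagefile *grow* with RAM, which is Ron's objection.
```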


Ron Martell Duncan B.C. Canada
--
Microsoft MVP
On-Line Help Computer Service
http://onlinehelp.bc.ca

"The reason computer chips are so small is computers don't eat much."
 
Ron Martell

perris said:
now I'd like to pose the same question to you that I pose to ANYONE that
suggests lowering the default memory management settings on systems that
don't have a storage issue.

give me ONE reason you think your "recommendations" are better than the
kernel team's... give me one

any will do

further, you obviously have an incorrect understanding of what the
pagefile is, what it does, and what it's there for...a cut and paste of
something I recently wrote;


<snip>

That material is so totally incorrect it is almost ludicrous.


Ron Martell Duncan B.C. Canada
--
Microsoft MVP
On-Line Help Computer Service
http://onlinehelp.bc.ca

"The reason computer chips are so small is computers don't eat much."
 
Alex Nichol

Ron said:
Balderdash. Hogwash. Malarkey.

The pagefile in Windows XP is used for the following specific
functions:

1. To compensate for the lack of sufficient physical RAM in the
computer to meet the total memory load requirements.

This guy has completely wrong ideas and refuses to even consider he
might be wrong. In particular he misses the point that the hundreds of
MB that get allocated to programs but never taken up need no physical
file presence at all; merely the potential for it to grow to the Max
size.

It is worth referring to 'Windows XP Inside Out' - Ed Bott and Carl
Siechert, p. 217 - which gives much the advice I do (though I think they
could suggest a larger setting for 128 MB machines). Anyone is of
course entitled to waste disk space if he wants - but *not* to make it a
general 'Essential'

Also, while Microsoft cannot of course formally endorse a third party
page, they did make mine a featured one in the Expert Zone a year ago,
and conversations with their people suggest it is well regarded in
developer circles
 
Harry Ohrn

perris said:
this guy has wrong ideas, meaning me? or Mark Russinovich?

from mark again;

"Memory allocation in NT is a two-step process--virtual memory
addresses are reserved first, and committed second...The reservation
process is simply a way NT tells the Memory Manager to reserve a block
of virtual memory pages to satisfy other memory requests by the
process...There are many cases in which an application will want to
reserve a large block of its address space for a particular purpose
(keeping data in a contiguous block makes the data easy to manage) but
might not want to use all of the space."

potential is all that matters Alex
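The two-step allocation Russinovich describes can be sketched as toy bookkeeping (a model of the idea only, not the real NT memory manager or the actual VirtualAlloc MEM_RESERVE/MEM_COMMIT API):

```python
# Toy bookkeeping model of NT's two-step memory allocation: reserving
# address space costs nothing against the commit charge; only the
# committed portion counts against RAM + pagefile.

class ToyAddressSpace:
    def __init__(self):
        self.reserved_mb = 0   # address range set aside, no backing yet
        self.committed_mb = 0  # portion actually charged to RAM/pagefile

    def reserve(self, mb: int) -> None:
        self.reserved_mb += mb

    def commit(self, mb: int) -> None:
        if self.committed_mb + mb > self.reserved_mb:
            raise MemoryError("commit exceeds reservation")
        self.committed_mb += mb

space = ToyAddressSpace()
space.reserve(512)   # big contiguous block claimed up front...
space.commit(96)     # ...but only the touched portion is ever charged
print(space.reserved_mb, space.committed_mb)
```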

once again

anybody, anybody at all

give me ONE reason you think your settings are better than the
default.

any reason will do

now, this conversation is obviously going nowhere, so Alex, continue to
give the irresponsible advice you gave the original poster on this
thread, I know you will... and people like him will definitely suffer
performance, and nobody will gain

wonderful

A quick question perris. Microsoft has placed a link at Expert Zone to Alex
Nichol's information demonstrating respect for his advice. Can you cite
anywhere that your personal position on this subject is widely accepted? I'm
not talking about you citing work done by others that reinforces your
position, but rather, where others agree with work you have done yourself.
It would be helpful if you could do that.

TIA
 
Will Denny

perris said:
this guy has wrong ideas, meaning me? or Mark Russinovich?

<snip>

Hi Perris

You seem to be disputing Alex's knowledge of VM. Your *own independent*
view with proof would be greatly appreciated on this matter.


--


Will Denny
MS-MVP Windows Shell/User
Please reply to the News Groups
 
Harry Ohrn

[snip]
now, I have surgery and will be somewhat out of action for a little bit, so
I retire from this thread

good night

Thanks for retiring the thread and good luck on the surgery.
 
Guest

Hi Guys,

As regards this problem with pagefile.sys resetting, I have been
experimenting with this for about 2 months now. I have made and used more
restore points than I care to remember, and been about as frustrated as it is
possible to get.
I am a reasonably knowledgeable user of XP and have been messing with the
Windows OS for about 15 years. This is the only problem I have ever been
unable to solve.
I have installs for all 60-ish of the programs I use stored on a second
partition on my split HD. I went through all 60 installs one by one, creating a
manual restore point after each, and checked the pagefile after every reboot.
Sure enough, at times it had magically decided to ignore the registry settings
in Memory Management under CurrentControlSet. Also sure enough, NAV 2004 and
2005 are culprits that cause this.
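For reference, the setting the poster means is the REG_MULTI_SZ value `PagingFiles` under `HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management`; each entry is `<path> <initial MB> <maximum MB>`. A small decoding sketch (the sample entry is illustrative, not read from the poster's machine):

```python
# Decode one entry of the PagingFiles multi-string registry value.
# Format per entry: "<path> <initial MB> <maximum MB>".

def parse_paging_file_entry(entry: str):
    """Split one PagingFiles entry into (path, initial_mb, maximum_mb)."""
    path, initial, maximum = entry.rsplit(" ", 2)
    return path, int(initial), int(maximum)

# Example entry only -- matches the custom size the original poster set.
print(parse_paging_file_entry(r"C:\pagefile.sys 2048 2048"))
```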

BUT HERE'S THE THING!!! So are about a dozen of my programs, ranging from
the newest version of EasyCleaner, to Java 5.0, the Kazaa codec pack, and .NET
Framework 1.1. And the reason the registry setting is affected is that it won't
allow full control even when logged in with full admin, as I'm the only user.

Now the problem I have is that I have upgraded my PC and, although it's a
Compaq XP disk, about only the motherboard is as it was when I bought it. But
because of this Microsoft says see your manufacturer. If I'm running XP in 10
years' time still with this problem, will they finally take responsibility
then for the vulnerability of their operating system?

As far as I am concerned, they need to stop packaging gimmicks into software
to get us to buy music online, or buy stupid nudges over MSN, and stop treating
us all like idiots by blaming other companies, when the fact of the matter is
that the OS registry settings should in fact be impervious to all third party
software, unless specifically changed by a user/administrator. If I can see
in the registry 1024 2048 for pagefile size, it should create one that size,
not whatever it feels like.

Very annoyed at having to reinstall 60GB of software all over again because not
one single person can explain how to reverse this damn thing!!!
 
cquirke (MVP Windows shell/user)

On Sat, 30 Apr 2005 22:48:01 -0400, "Mike Hall (MS-MVP)" wrote:
To understand the XP pagefile better, go to this website and read through
the article..

The problem with this pagefile business is that it's not used purely
as a page file, but also as a dumping ground for memory dumps and fast
user switching workspace. Ideally, both of these usages should be
directed to use their own redirectable workspace.

Here's why...

For the same amount of work, you should need less pagefile the more
RAM you have. But the more RAM you have, the more "pagefile" you need
for memory dumps and user switching.

The speed requirements are different too. True page file access has
to be as fast as possible, but fast user switching doesn't have to be
*that* fast (it's a dump-once, read-once access at a moment that some
slowdown is expected anyway) and memory dumps don't have to be fast at
all (the system's about to stop working altogether, duh).

With a lot of RAM, you end up with huge lumps of seldom-used material
stuck in the middle of the fastest disk space, bloating up head
travel. This is as counter-productive as allocating x% of the volume
for IE web cache; both are poor scalability design, born of a time
when RAM and HD were of a certain size and ASSumed to remain
appropriate even when this hardware expands in capacity.

A nice way to set up systems is with a small C: that has all the
bloated, seldom-accessed stuff moved off it - Service Pack undo, MSI
pre-install material, collections of videos, pictures and music etc.
When combined with large RAM, this should work even better, as true
pagefile use is reduced and so should the size of the page file.

Instead, you end up with an enormous amount of prime real estate being
hogged by what will only be used if using fast user switching, or if
you are in the process of crashing into unusability.

It's like a company insisting on having not only its admin offices in
the center of the city for fast access, but also its 30 square miles
of warehousing and manufacturing facilities too.


---------- ----- ---- --- -- - - - -
Gone to bloggery: http://cquirke.blogspot.com
 
DSG

This is an interesting piece. I have used a certain graphics program for a
few years, in which, when I've used all the RAM and have forgotten to save
and restart, the program (with my wonderful work) just closes without
warning. Last week - with me now a new XP user - I got the Windows message
that XP was increasing the amount of virtual memory in pagefile.sys. Which
was a nice reminder for me to save, shutdown, and restart to get a bigger
playground.
 
