perris said:
incorrect, just about your entire response to my post
No. It is your opinions about how memory management works in Windows
that are incorrect.
quote;
The pagefile in Windows XP is used for the following specific
functions:
1. To compensate for the lack of sufficient physical RAM in the
computer to meet the total memory load requirements. unquote
incorrect, the pagefile is only to provide backing store for modified
pages so they can be considered by the memory manager
That is totally incorrect. Modified memory pages are not backed up
anywhere, except when done so by the program. And I know of no
application program or Windows component which does this.
everything that's not modified gets backed to the hard drive from whence
it came...the exe, dll, whatever
That is also incorrect. Items loaded from the hard drive which have
not been modified have no need for backup because the original is
still there intact on the hard drive.
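
To make that concrete, here is a rough sketch (assuming the third-party
psutil package is available; the field names are psutil's own, nothing
from Windows itself) that lists which file on disk backs each mapped
region of a process. Code and data mapped in from an .exe or .dll are
backed by that original file on the hard drive; only private, modified
pages would ever need the pagefile.

import psutil

proc = psutil.Process()          # the current Python process
for m in proc.memory_maps(grouped=True):
    # m.path names the backing file for file-backed regions; m.rss is resident bytes
    print(f"{m.rss // 1024:>8} KB  {m.path or '<no backing file on disk>'}")
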
you would need over 2 gigs to run XP without backing store, and
everything in memory needs a place, its OWN place on the hard drive so
the memory manager will be able to consider it in the memory management
model
That does not make sense.
as far as memory dumps, ya, that's a good purpose of it too...you do
have that one right
you also got the following right;
quote;
The
memory manager decides which items will be in RAM and which will be in
the pagefile on a dynamic basis and swaps them back and forth as
requirements change. unquote
memory is addressed first and allocated second, the memory manager
needs an area to perform these "swaps" you're speaking about, the
"swap" space is not shared
The memory manager has its own area in RAM which is specifically
marked as not to be paged out.
and your claim
quote
For meeting the memory address requirements of the unused portion of
memory allocation requests all that is required is that the potential
to enlarge the pagefile exist. It does not have to actually be
enlarged for these items. The unused portions of requested memory
can easily aggregate to several hundred megabytes even on a lightly
used system. For example on my own system these items currently total
208 mb. Task Manager tells me that the Page File Usage is 308 mb
while another utility tells me that there is only 94 mb of active
memory content residing in the page file. And the actual size of the
pagefile is 160 mb, which is the minimum that I have set for it.
unquote
ridiculous...you think that just because only 94 mb of information is
actually in the pagefile, that's all that the memory manager is
charging to it?
Yup. All there is is all there is.
absurd...Task Manager is exactly correct in what is charged to the
pagefile, yet you want to circumvent this strategy.
Task Manager is including the *unused* portions of requested memory in
the pagefile count because that is where these unused addresses have
been mapped to.
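
A rough way to see this for yourself (Windows only; the 256 MB figure
and the variable names are just my own example) is to commit a block of
address space without ever touching it and watch the commit charge that
Task Manager reports climb, even though nothing has been written to RAM
or to the pagefile:

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
MEM_RESERVE, MEM_COMMIT, MEM_RELEASE = 0x2000, 0x1000, 0x8000
PAGE_READWRITE = 0x04
SIZE = 256 * 1024 * 1024         # 256 MB, an arbitrary example amount

VirtualAlloc = kernel32.VirtualAlloc
VirtualAlloc.restype = wintypes.LPVOID
VirtualAlloc.argtypes = [wintypes.LPVOID, ctypes.c_size_t,
                         wintypes.DWORD, wintypes.DWORD]
VirtualFree = kernel32.VirtualFree
VirtualFree.restype = wintypes.BOOL
VirtualFree.argtypes = [wintypes.LPVOID, ctypes.c_size_t, wintypes.DWORD]

# Reserve only: address space is set aside, the commit charge is unchanged.
reserved = VirtualAlloc(None, SIZE, MEM_RESERVE, PAGE_READWRITE)

# Reserve and commit: the commit charge rises by 256 MB, yet nothing has
# been written, so nothing needs to occupy the pagefile yet.
committed = VirtualAlloc(None, SIZE, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE)

input("Check the commit / PF Usage figure now, then press Enter...")
VirtualFree(reserved, 0, MEM_RELEASE)
VirtualFree(committed, 0, MEM_RELEASE)
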
the kernel team IS EXTREMELY happy with the memory management model of
the NT kernel, and yes, they do know how much memory is available on
modern systems
they've continued to raise, not lower, the recommendations for pagefile;
they continue their recommendations in Server 2003, and in Longhorn
how you can defend circumventing the recommendation of the kernel team
when as a fact you KNOW there is no performance to gain for the effort,
and quite a bit to lose for some users (as the very poster of this
thread clearly demonstrates), is irresponsible in every sense
What recommendations of the Kernel team are you talking about? Where
are they published? The 1.5 times RAM figure was arrived at for two
reasons:
1. to satisfy the marketing types, who wanted a simplistic value even
if it was largely bogus.
2. to ensure that the pagefile was always big enough to hold a complete
memory dump in the event of a system failure class error. The actual
truth is that the complete memory dump is usable in something like
0.0000000000000000001% of the system failure memory errors that occur.
For the overwhelming majority of these errors the STOP code is fully
adequate for diagnosing the problem, and for the remainder the 64kb
small memory dump is sufficient. The complete memory dump was
instituted as an aid for testing and development and for a few large
corporate and government users where this information might actually
be used on occasion.
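
To put rough numbers on that (the 512 MB of RAM is just an example
value; the RAM-plus-about-1-MB rule for a complete dump is the
documented minimum for the boot volume pagefile):

# Back-of-envelope arithmetic for the dump-file argument.
ram_mb = 512                        # example machine with 512 MB of RAM

small_dump_kb    = 64               # small memory dump (minidump)
complete_dump_mb = ram_mb + 1       # complete dump needs RAM + ~1 MB of pagefile

print(f"Small dump needs only {small_dump_kb} KB of pagefile")
print(f"Complete dump needs about a {complete_dump_mb} MB pagefile on the boot volume")
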
whether or not YOU put your memory under pressure, that doesn't mean I
don't, or my customers, or the people that work for me, and those that
mess with these machines because of the irresponsible papers that
"recommend" lowering the default for absolutely NO reason whatsoever
in case you didn't know it, Microsoft even wrote hacks for users to
overcome the 4 gig threshold for page files
The fundamental fact regarding the pagefile is that the size
requirements are *inversely* related to the amount of RAM.
More RAM means less pagefile and less RAM means more pagefile.
RAM plus pagefile equals a constant value for any given system
provided all other factors (application and data file load in
particular) are held constant.
Any formula that relates pagefile size to some multiple of the amount
of RAM only proves that the author of that formula does not understand
how memory management works.
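
As a sketch of what that means in practice, one could read the peak
commit charge and the installed RAM through the documented
GetPerformanceInfo call and size the pagefile from the difference
between them. The 1.25 safety factor below is purely my own assumption,
not anything published by the kernel team:

import ctypes
from ctypes import wintypes

class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [("cb", wintypes.DWORD),
                ("CommitTotal", ctypes.c_size_t),
                ("CommitLimit", ctypes.c_size_t),
                ("CommitPeak", ctypes.c_size_t),
                ("PhysicalTotal", ctypes.c_size_t),
                ("PhysicalAvailable", ctypes.c_size_t),
                ("SystemCache", ctypes.c_size_t),
                ("KernelTotal", ctypes.c_size_t),
                ("KernelPaged", ctypes.c_size_t),
                ("KernelNonpaged", ctypes.c_size_t),
                ("PageSize", ctypes.c_size_t),
                ("HandleCount", wintypes.DWORD),
                ("ProcessCount", wintypes.DWORD),
                ("ThreadCount", wintypes.DWORD)]

pi = PERFORMANCE_INFORMATION()
pi.cb = ctypes.sizeof(pi)
ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb)

page = pi.PageSize                  # the counters above are in pages
peak_commit_mb = pi.CommitPeak * page // (1024 * 1024)
ram_mb         = pi.PhysicalTotal * page // (1024 * 1024)

# More RAM -> less pagefile; the workload's peak commit is the constant.
suggested_mb = max(0, int(peak_commit_mb * 1.25) - ram_mb)
print(f"Peak commit: {peak_commit_mb} MB, installed RAM: {ram_mb} MB")
print(f"Pagefile needed for this workload: roughly {suggested_mb} MB")
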
Ron Martell Duncan B.C. Canada
--
Microsoft MVP
On-Line Help Computer Service
http://onlinehelp.bc.ca
"The reason computer chips are so small is computers don't eat much."