James said:
This is complete nonsense. Are you making this up as you go along?
no
A memory page doesn't have to be modified to be eligible for paging.
You are exactly correct, and this repeats what I've said, so you've
misinterpreted my writing.
Paging happens within every .dll and .exe whether or not they are
dirty. When the OS "releases a .dll", or portions of it, this is
paging; the data is simply released, not written to the pagefile,
because there's already a clean image on disk to retrieve that data
from. Pages are part of the memory management strategy regardless of
being dirty or clean.
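The clean-versus-dirty distinction above can be sketched as a toy model (Python, with made-up names; this is not the real NT memory manager, just the bookkeeping idea):

```python
# Toy model of the point above: evicting a clean page is just a release,
# because its backing copy already exists in the .exe/.dll image on disk;
# only a dirty (modified) page must be imaged to the pagefile first.
# All names here are illustrative, not real NT APIs.

pagefile = []  # stands in for the pagefile on disk

def evict_page(page):
    """Evict one page from RAM; return where its data can be found later."""
    if page["dirty"]:
        pagefile.append(page["data"])  # dirty: must be written to the pagefile
        return "pagefile"
    return page["backing_image"]       # clean: just re-read the original image

code_page = {"dirty": False, "data": b"\x90" * 4096, "backing_image": "kernel32.dll"}
heap_page = {"dirty": True, "data": b"user data", "backing_image": None}

print(evict_page(code_page))  # -> kernel32.dll
print(evict_page(heap_page))  # -> pagefile
```

Evicting the clean code page costs no pagefile I/O at all; only the modified heap page consumes pagefile space.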
It's quite possible to have enough memory to run the programs you have.
Very few desktops have gigs of memory.
What you might be referring to is that it's not possible to prevent XP
from attempting to use the page file, which is a completely different
concept.
You have this backward, and it's not what I'm insinuating either.
Plus, it is possible to keep XP from paging to the pagefile; it
isn't possible to keep XP from paging to the other images on the disk
that are not the pagefile.
Sophisticated programs aren't written to any 90/10 rule.
Yes, most of them are.
(they are)...written to a design that makes it likely that a very
significant part of the code will never get executed (such as error
handling)
Umm...you just made the point...thanks for not making me explain it
again.
You haven't indicated why this comment is relevant.
You mean on this thread I haven't *yet*; I have on other threads and I
will on this one. Here's why it's relevant:
Just about all programs reserve more virtual memory than they need.
The reservation process is simply a way NT tells the Memory Manager to
reserve a block of virtual memory pages to satisfy other memory requests
by the process. There are many cases in which an application will want
to reserve a large block of its address space for a particular purpose
(keeping data in a contiguous block makes the data easy to manage) but
might not want to use all of the space.
The virtual memory a program requests is the "commit charge" for that
process. Trimming the pagefile lower than the total commit charge
circumvents this strategy, and is ALWAYS a performance liability.
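The reserve-versus-commit split described above can be sketched as a toy model. (The real mechanism on NT is VirtualAlloc with the MEM_RESERVE and MEM_COMMIT flags; the class and names below are invented purely for illustration.)

```python
# Toy model of NT's two-step reserve/commit strategy: reserving claims
# contiguous address space for free; only committing charges against
# the commit limit (RAM + pagefile). Not the real VirtualAlloc API.

class ToyAddressSpace:
    def __init__(self):
        self.reserved = {}   # region -> bytes of address space claimed
        self.committed = {}  # region -> bytes actually charged

    def reserve(self, region, size):
        # Reserving costs no backing store: it only sets aside a
        # contiguous range of virtual addresses for future use.
        self.reserved[region] = size
        self.committed[region] = 0

    def commit(self, region, size):
        # Committing is what actually counts toward the commit charge.
        if size > self.reserved[region]:
            raise ValueError("cannot commit beyond the reservation")
        self.committed[region] = size

    @property
    def commit_charge(self):
        return sum(self.committed.values())

vm = ToyAddressSpace()
vm.reserve("heap", 64 * 1024 * 1024)  # reserve 64 MB contiguous...
vm.commit("heap", 4 * 1024 * 1024)    # ...but commit (use) only 4 MB
print(vm.commit_charge // (1024 * 1024))  # -> 4
```

Only the committed 4 MB counts toward the charge that the pagefile must be able to back; the other 60 MB of reservation is free.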
The more RAM you have the less page file is likely to be used.
First of all, what I said was the more the computer has in use. And
yes, you're correct: the more memory a person has, the LESS OFTEN the
pagefile will be used, but the more room the pagefile will need in
order to provide an image area for the memory being used.
Yes, if a user has a gig of memory but his commit charge is only 250
MB, he doesn't "need" a gig pagefile; but he suffers nothing for having
a gig pagefile, and he suffers nothing for keeping his box ready to use
it to its capacity.
Now, I'll make the numbers smaller so it's easier to follow. And yes,
this analogy holds true for whatever amount of memory you have:
You are assuming that if I have 3 MB of memory yet only 1 MB is ever
written to the pagefile, all I need is 1 MB of pagefile.
no
This is what most people assume, but no: all three MB need their own
area to image. They don't share the area they get imaged to just
because nobody else will ever be there when they are.
This is simplest to explain using the following analogy:
If you were to look at any 100% populated apartment building in
Manhattan, you would see that at any given time throughout the day,
fewer than 25% of the residents are in the building at once!
Does this mean the apartment building can be 75% smaller?
Of course not, you could do it, but man would that make things tough. For
best efficiency, every resident in this building needs their own address.
Even those that have never shown up at all need their own address, don't
they? We can't assume that they never will show up, and we need to keep
space available for everybody.
Here's an internal quote, if you'd care:
There are quite a few pages that are very rarely used, but you may not
know that there are a lot of unreferenced but committed pages in the
system which reserve virtual space (physmem+pagefile) but don't have
physical memory committed for them. If you don't have a pagefile then
that commitment is taken out of physical memory. An example of that is
committed stack pages which are never or extremely rarely touched. These
are committed.
In other words, if you run without a pagefile your system will actually
use MORE memory.
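A back-of-the-envelope sketch of that internal quote's point (all numbers invented; the rule of thumb is that the system-wide commit limit is roughly physical RAM plus the pagefile size):

```python
# Invented numbers to illustrate the quote above: the commit limit is
# roughly physical RAM plus the pagefile, so removing the pagefile
# shrinks the budget that every commitment is charged against.
ram_mb = 512
pagefile_mb = 768
commit_limit_with_pagefile = ram_mb + pagefile_mb  # 1280 MB committable
commit_limit_without_pagefile = ram_mb             # 512 MB committable

# Committed-but-rarely-touched pages (e.g. thread stacks) still count
# against the commit limit. Without a pagefile, that charge comes
# straight out of the RAM-only budget:
rarely_touched_mb = 100
headroom_without_pagefile = commit_limit_without_pagefile - rarely_touched_mb
print(headroom_without_pagefile)  # -> 412
```

With the pagefile, those 100 MB of never-touched commitments could have been backed by cheap disk space instead of eating into RAM-backed headroom.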
As far as code and data that's not meant to get modified: these simply
get released; no pagefile is used for this paging at all.
However, a page on the modified page list moves to the standby page
list after the Memory Manager's modified page writer writes the page's
data to disk (usually the pagefile, sometimes a data file). While the
modified page writer is writing its data to disk, the page makes a stop
in the transition state (which has no list).
This doesn't happen until the number of pages on the modified page list
exceeds a threshold, or free memory drops below a threshold (which is
based on the amount of physical memory on the system and determined
during boot); then one of two modified page writer threads wakes up to
perform disk I/O that sends the data to a paging file (or a data file,
but the page can't simply be released back to the original file since
it's been modified).
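Those list transitions can be sketched as a simplified simulation (the threshold, names, and trigger logic below are invented for illustration; the real modified page writer is far more involved):

```python
# Simplified simulation of the list transitions described above: pages
# sit on the modified list until a threshold is exceeded, then the
# modified page writer flushes them -- passing briefly through the
# "transition" state -- onto the standby list.

MODIFIED_THRESHOLD = 3  # made-up threshold, not the real NT value

modified_list, standby_list, pagefile = [], [], []

def modified_page_writer():
    """Flush dirty pages to the pagefile; they end up on the standby list."""
    while modified_list:
        page = modified_list.pop(0)
        page["state"] = "transition"   # brief stop while the I/O is in flight
        pagefile.append(page["data"])  # write the data to the paging file
        page["state"] = "standby"      # now clean: backed by the pagefile
        standby_list.append(page)

def mark_dirty(page):
    modified_list.append(page)
    if len(modified_list) > MODIFIED_THRESHOLD:
        modified_page_writer()         # threshold exceeded: writer wakes up

for i in range(4):
    mark_dirty({"id": i, "data": f"page-{i}", "state": "modified"})

print(len(modified_list), len(standby_list))  # -> 0 4
```

The first three dirty pages just accumulate; the fourth pushes the list over the threshold, waking the writer, which drains all four onto the standby list with their data now imaged in the pagefile.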
perris said:
snip <
The pagefile is a place to image modified pages, because there is no
image on disk for the memory manager to back the info with. When a page
is modified, it's marked as dirty, and the area for dirty pages is the
pagefile.
The pagefile is not there for "when you don't have enough memory to run
the programs you have".
NOBODY has enough RAM to run the programs they have.
Sophisticated programs are written with the "90/10" rule: they spend
90% of the time accessing 10% of their code.
The more RAM you have in use, the bigger the image area needs to be for
those pages so they can be part of the memory management strategy. The
more RAM in use, the bigger the pagefile needs to be.