Optimum page file size for 1 GB?


Terry Pinnell

I just upgraded my Athlon 1800 512 MB to 1 GB. Is there any general
consensus on the 'best' setting I should use for page file please? I
recall a few years ago much debate/controversy over this, but wonder
if a consensus has now emerged? My CPU is now slow by today's
standards (runs at 1533 MHz), so I naturally want to get the most out
of this extra RAM.
 

Bob Willard

Terry said:
I just upgraded my Athlon 1800 512 MB to 1 GB. Is there any general
consensus on the 'best' setting I should use for page file please? I
recall a few years ago much debate/controversy over this, but wonder
if a consensus has now emerged? My CPU is now slow by today's
standards (runs at 1533 MHz), so I naturally want to get the most out
of this extra RAM.

I think the consensus is, for almost all XP desktop PCs, to let the OS
control the size dynamically.
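
For reference, the setting being discussed here is stored under the Memory Management registry key; below is a minimal C sketch for printing it. The PagingFiles value name and its "path min max" layout are as I remember them from XP, so treat this as illustrative only, not as a definitive description of the format.

/* Hedged sketch: print the configured pagefile entries.
 * On XP the PagingFiles value is (as far as I recall) a REG_MULTI_SZ of
 * strings of the form "C:\pagefile.sys <min-MB> <max-MB>"; "0 0" typically
 * indicates a system-managed size. Link with advapi32. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    HKEY key;
    char buf[1024];
    char *p;
    DWORD type = 0, size = sizeof(buf);

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
            0, KEY_READ, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "cannot open Memory Management key\n");
        return 1;
    }
    if (RegQueryValueExA(key, "PagingFiles", NULL, &type,
                         (LPBYTE)buf, &size) == ERROR_SUCCESS
        && type == REG_MULTI_SZ) {
        /* REG_MULTI_SZ: walk the NUL-separated strings until an empty one */
        for (p = buf; *p; p += strlen(p) + 1)
            printf("%s\n", p);
    }
    RegCloseKey(key);
    return 0;
}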
 

Arno Wagner

Previously Bob Willard said:
I think the consensus is, for almost all XP desktop PCs, to let the OS
control the size dynamically.

Not at all. Good performance requires a static size. No control
by the OS then.

Arno
 

Terry Pinnell

Arno Wagner said:
Not at all. Good performance requires a static size. No control
by the OS then.

Arno

Thanks both. Hmm, so I suspect things haven't changed then - still no
consensus!
 

Alexander Grigoriev

Not quite. It doesn't hurt much to have a static PF larger than necessary,
but it's no better than having Windows extend the PF as needed. A pagefile
is not shrunk back when the extra space is no longer needed, so there is no
penalty from changing the size back and forth.
The only issue is that the PF may get slightly fragmented. As long as it's
not in tens of pieces, it should not hurt.

One issue (important mostly to kernel component developers) is that the
pagefile should be at least as big as the RAM size, to allow a full crash dump.
Even for normal users, some crappy video drivers sometimes crash, and to
allow automated post-mortem analysis, the crash dump needs to be saved.
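
For anyone who wants to see which kind of dump their own box is set up for, the mode lives under the CrashControl key. A minimal C sketch follows; the key and value names are the standard XP/2003 ones as far as I know, and the meaning of the numbers in the comment is my recollection of the documentation, so take it as a hedge rather than gospel.

/* Hedged sketch: read the crash-dump setting mentioned above.
 * Assumes the standard CrashControl key; on XP/2003 the documented
 * values are 0 = none, 1 = complete dump (needs pagefile >= RAM),
 * 2 = kernel dump, 3 = small (64 KB) dump. Link with advapi32. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD type = 0, value = 0, size = sizeof(value);

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SYSTEM\\CurrentControlSet\\Control\\CrashControl",
                      0, KEY_READ, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "cannot open CrashControl key\n");
        return 1;
    }
    if (RegQueryValueExA(key, "CrashDumpEnabled", NULL, &type,
                         (LPBYTE)&value, &size) == ERROR_SUCCESS
        && type == REG_DWORD)
        printf("CrashDumpEnabled = %lu\n", (unsigned long)value);
    RegCloseKey(key);
    return 0;
}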
 

Eric Gisin

Annie Wagner said:
Not at all. Good performance requires a static size. No control
by the OS then.

Annie

What a ****ing maroon.

The only version of Windows this was valid for was Win 3,
where a permanent, contiguous pagefile bypassed FAT and
had lower overhead than the temporary pagefile.
 

Arno Wagner

Thanks both. Hmm, so I suspect things haven't changed then - still no
consensus!

Hehe. Here is one rule of thumb that I use: Swap should be the same size
as the main memory, but not larger than 256MB, since it then starts to
take forever to actually use it.

My Linux currently runs swapless without issue. For XP, I think
I have a 250MB static size.

Arno
 

Arno Wagner

Previously Alexander Grigoriev said:
Not quite. It doesn't hurt much to have a static PF larger than necessary,
but it's no better than having Windows extend the PF as needed. A pagefile
is not shrunk back when the extra space is no longer needed, so there is no
penalty from changing the size back and forth.
The only issue is that the PF may get slightly fragmented. As long as it's
not in tens of pieces, it should not hurt.
One issue (important mostly to kernel component developers) is that the
pagefile should be at least as big as the RAM size, to allow a full crash dump.

Well, if you are a developer...
Even for normal users, some crappy video drivers sometimes crash, and to
allow automated post-mortem analysis, the crash dump needs to be saved.

I don't think that is relevant for most users.

Arno
 

Michael Cecil

What a ****ing maroon.

The only version of Windows this was valid for was Win 3,
where a permanent, contiguous pagefile bypassed FAT and
had lower overhead than the temporary pagefile.
Exactly. A pagefile of 250MB? Perhaps, if you only have that much RAM.
How can memory swap to the pagefile, if the pagefile is less than the
amount of RAM? You need at least as much pagefile as you have RAM.

Damn. There needs to be a way to killfile morons off the entire Internet.
 

Frazer Jolly Goodfellow

Exactly. A pagefile of 250MB? Perhaps, if you only have that
much RAM. How can memory swap to the pagefile, if the pagefile
is less than the amount of RAM? You need at least as much
pagefile as you have RAM.
Nonsense.


Damn. There needs to be a way to killfile morons off the entire
Internet.
Indeed.
 

Alexander Grigoriev

Then it's like you don't have a swapfile at all.
Windows uses a different physical-to-pagefile mapping. First, it doesn't
allow overcommit, unlike *ix OSs. All committed pageable RAM pages map to
pagefile pages. As soon as a page needs to be swapped out, it's written to
the PF. Then the RAM page becomes free to read in another page.

Bottom line, total virtual memory size is NOT RAM+PF. It's max(RAM, PF).
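
These figures are easy to check directly rather than argue about. A minimal C sketch using GlobalMemoryStatusEx(), which reports both the physical RAM and the commit limit the system enforces; whether that limit behaves like RAM+PF or max(RAM, PF) is exactly what is in dispute here, so run it and see.

/* Minimal sketch: print physical RAM and the commit limit the kernel
 * actually reports, so the RAM+PF vs. max(RAM, PF) question can be
 * checked on a real system. Values are printed in MB to avoid 64-bit
 * printf portability issues on older C runtimes. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }
    printf("Physical RAM     : %lu MB\n",
           (unsigned long)(ms.ullTotalPhys / (1024 * 1024)));
    printf("Commit limit     : %lu MB\n",
           (unsigned long)(ms.ullTotalPageFile / (1024 * 1024)));
    printf("Commit available : %lu MB\n",
           (unsigned long)(ms.ullAvailPageFile / (1024 * 1024)));
    return 0;
}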
 

Arno Wagner

Previously Alexander Grigoriev said:
Then it's like you don't have a swapfile at all.
Windows uses a different physical-to-pagefile mapping. First, it doesn't
allow overcommit, unlike *ix OSs. All committed pageable RAM pages map to
pagefile pages. As soon as a page needs to be swapped out, it's written to
the PF. Then the RAM page becomes free to read in another page.

Ok, that is plain stupid. They should have read some introductory
text on OS memory management! If MS really does not know how to do this
properly and implemented it this way, then you are correct, of course.
Bottom line, total virtual memory size is NOT RAM+PF. It's max(RAM, PF).

So in Windows, you cannot use more memory than you have physically
available? Unbelievable! That was the reason paging was invented in the
first place!

Well, I usually only run single tasks in XP (games), so I will probably
not be too much affected. But I now understand why a web browser run in
parallel can lead to memory exhaustion. Why does MS always have to
ignore technology proven over decades and do their own inferior and
botched thing? Does nobody there study first what already exists?
Extremely incompetent or extremely arrogant. Maybe both.

Arno
 

timeOday

Arno said:
Hehe. Here is one rule of thumb that I use: Swap should be the same size
as the main memory, but not larger than 256MB, since it then starts to
take forever to actually use it.

My Linux currently runs swapless without issue. For XP, I think
I have a 250MB static size.

Arno

I've run without swap space on Linux for years now. I think swap is an
idea whose time has passed.
 

Aidan Karley

So in Windows, you cannot use more memory than you have physically
available? Unbelievable! That was the reason paging was invented in the
first place!
I'm no fan of Mr Gates, but I have to disagree. Say you have
a 128MB memory machine (more than adequate for normal purposes) and a 256MB
swap file ... you have 256 MB of memory available to processes and the OS, not
384MB as you might expect.
Why does MS always have to
ignore technology proven over decades and do their own inferior and
botched thing? Does nobody there study first what already exists?
Extremely incompetent or extremely arrogant.
Probably their patent lawyers have a special killing pen for any
programmers who might be infected with prior art. Sorry, "infested", not
"infected". Well, actually, both.
Maybe that would explain the high hiring rates of programmers - once
a programmer has been exposed to other programmers' work, there's the
possibility of a patent violation and the programmer can't be used again.
Single-shot disposable programmers. Now there's an idea.
 

Aidan Karley

I've run without swap space on Linux for years now. I think swap is an
idea whose time has passed.
Swap is a place for dumping the contents of memory to when there's
a serious application fault, without fear of overwriting work by any other
process. As far as that goes, the idea is very much alive and well. In
fact, making swap a distinct partition is a very good idea. (It can also
be used for dumping core to in the event of a kernel fault.)
Have you checked that your kernel isn't creating a swap *file*
without you noticing - the Linux virtual memory manager has the ability to
dynamically add swap as either partitions *or* as files in an existing
filesystem. I recall that this used to be the case in the 1.x series of
kernels, but I've not bothered learning the ins and outs in 2.x series.
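
An easy way to answer that question is to look at /proc/swaps, which lists the active swap areas, partitions and files alike. A minimal C sketch, assuming a 2.x kernel with procfs mounted; nothing below the header line means no swap is in use.

/* Minimal sketch: print the active swap areas. /proc/swaps has one
 * header line; every further line is a swap partition or swap file
 * currently in use, so no further lines means the system runs swapless. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/swaps", "r");
    char line[256];

    if (f == NULL) {
        perror("/proc/swaps");
        return 1;
    }
    while (fgets(line, sizeof line, f) != NULL)
        fputs(line, stdout);
    fclose(f);
    return 0;
}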
 

Folkert Rienstra

Arno Wagner said:
Ok, that is plain stupid. They should have read some introductory
text on OS memory management!
If MS really does not know how to do this
properly and implemented it this way, then you are correct, of course.

And you obviously have been talking from your arse again, as usual.
So in Windows, you cannot use more memory than you have physically available?

Brainfarct again, babblebot? The howmaniest this week?
Unbelievable! That was the reason paging was invented in the first place!

Well, I usually only run single tasks in XP (games), so I will probably
not be too much affected. But I now understand why a web browser run in
parallel can lead to memory exhaustion. Why does MS always have to
ignore technology proven over decades and do their own inferior and
botched thing? Does nobody there study first what already exists?
Extremely incompetent or extremely arrogant.

Like yourself, babblebot? No wonder that you don't work there.
 

Arno Wagner

Previously Aidan Karley said:
Swap is a place for dumping the contents of memory to when there's
a serious application fault, without fear of overwriting work by any other
process.

Under UNIX the kernel dumps to a file. Swap is not used for that.
No idea why Windows is incapable of producing a dump file (if you
are right that it is).
As far as that goes, the idea is very much alive and well. In
fact, making swap a distinct partition is a very good idea. (It can also
be used for dumping core to in the event of a kernel fault.)
Have you checked that your kernel isn't creating a swap *file*
without you noticing - the Linux virtual memory manager has the ability to
dynamically add swap as either partitions *or* as files in an existing
filesystem.

But it does not do so unless you tell it to. And at least in
Debian nobody has scripted in some hidden ''magic''. Most likely
users would be quite offended if somebody did.
I recall that this used to be the case in the 1.x series of
kernels, but I've not bothered learning the ins and outs in 2.x series.

The kernel never did this. It may have activated swap automatically
if started with the right options, but it never created swap space
on its own.

Arno
 

Alexander Grigoriev

What would you say of an OS that randomly kills a process when it needs
to get a VM page for an allocation that was already reported as successful?
This is what *ix OSs do in case of overcommit. They allocate virtual memory
without caring whether there is enough pagefile to support it. This is done to
make fork() succeed. For a forked process pair, all pages become shared and
marked as copy-on-write. They don't get separate pages in the PF allocated yet.
Thus, the amount of committed memory may exceed the amount of available VM. When
there is no need to actually use the PF (no copy-on-write has happened yet), it's
OK. When COW happens, a PF page gets allocated. If it is impossible to
allocate a PF page, the OS just kills some process. Talk to me about
introductory texts on good design.

Regarding Windows, I said max(RAM, PF), not min(RAM, PF).
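
To make the fork()/copy-on-write point above concrete, here is a minimal POSIX C sketch, illustrative only and not tied to any particular kernel. At fork() time the child shares the parent's pages copy-on-write, so no extra backing store is needed yet; the child's first writes are what force private copies, and under overcommit that is where an out-of-memory kill could happen.

/* Minimal POSIX sketch of fork() plus copy-on-write. */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define SIZE (64 * 1024 * 1024)       /* 64 MB of writable memory */

int main(void)
{
    char *buf = malloc(SIZE);
    if (buf == NULL)
        return 1;
    memset(buf, 1, SIZE);             /* parent touches every page */

    pid_t pid = fork();               /* pages now shared copy-on-write */
    if (pid == 0) {
        memset(buf, 2, SIZE);         /* child writes: private copies made here */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}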
 

Alexander Grigoriev

When a kernel bugcheck occurs, it's not a good idea to go to the filesystem and
create a file. After all, your filesystem driver may be screwed all over by
the faulty driver that caused the crash. If you call it, you might as well say
goodbye to the whole partition.

In Windows, the crash dump is written by a special part of the disk driver,
which is normally not even mapped into kernel space, to avoid its corruption.
The position at which to write the dump is known beforehand, from pagefile
initialization. When the crash dump starts, the bugcheck handler maps the dump
writer, which then does the job.
 
