Optimum page file size for 1 GB?


timeOday

Alexander said:
When a kernel bugcheck occurs, it's not a good idea to go to the filesystem and
create a file. After all, your filesystem driver may have been screwed all over
by the faulty driver that caused the crash. If you call it, you might as well
say goodbye to the whole partition.

In Windows, the crash dump is written by a special part of the disk driver,
which is normally not even mapped into kernel space, to avoid its corruption.
The position where the dump is to be written is known beforehand, determined
during pagefile initialization. When the crash dump starts, the bugcheck
handler maps the dump writer, which then does the job.

Kernel dumps are certainly something I'm willing to sacrifice. I have
no use for them.
 

Alexander Grigoriev

OK. Suppose your favorite video card vendor produced a driver that
occasionally crashes. You may be blaming darn winblows for that wonderful
BSOD. But if you give it the opportunity to save the crash dump (even a 64 KB
minidump) and to do an automated postmortem analysis, MS will 1) tell the
video card vendor that their driver is causing crashes, and 2) suggest that an
update for the driver is available.
 

Arno Wagner

Previously Alexander Grigoriev said:
What would you say of an OS that randomly kills a process when it needs a VM
page for an allocation that was already reported as successful?
This is what *ix OSes do in case of overcommit.

That is one of the possible strategies, and not the best one. But going into a
deep slowdown under massive swapping is typically no better at all.
They allocate virtual memory
without caring whether there is enough pagefile to back it. This is done to
make fork() succeed. For a forked process pair, all pages become shared and
are marked copy-on-write. They don't get separate pages allocated in the PF yet.
Thus, the amount of committed memory may exceed the amount of available VM. As
long as there is no need to actually use the PF (no copy-on-write has happened
yet), it's OK. When COW happens, a PF page gets allocated. If it is impossible
to allocate a PF page, the OS just kills some process.

Still these systems work and get uptimes in the year range.
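If you want to see the effect for yourself, here is a minimal sketch (it
assumes a Linux box with the default overcommit setting; the 64 GiB figure is
arbitrary, just pick something larger than your RAM plus swap):

    /* Demonstration only: with overcommit, the allocation can be reported as
     * successful even though RAM + swap could never back it. Pages are only
     * committed when first written; if backing store runs out at that point,
     * the OOM killer picks a victim instead of returning an error. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t size = (size_t)64 * 1024 * 1024 * 1024;   /* larger than RAM + swap */

        char *p = malloc(size);       /* may well succeed with nothing behind it,
                                         depending on vm.overcommit_memory */
        if (p == NULL) {
            puts("refused up front (strict accounting or heuristic limit)");
            return 1;
        }
        puts("allocation reported as successful; now touching every page...");
        for (size_t i = 0; i < size; i += 4096)
            p[i] = 1;                 /* faults pages in one by one; this is where
                                         the OOM killer may strike, with no error
                                         ever returned to this process */
        puts("survived: there was enough backing store after all");
        free(p);
        return 0;
    }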

Arno
 

Arno Wagner

Previously Alexander Grigoriev said:
When a kernel bugcheck occurs, it's not a good idea to go to the filesystem and
create a file. After all, your filesystem driver may have been screwed all over
by the faulty driver that caused the crash. If you call it, you might as well
say goodbye to the whole partition.

Ah, you are talking about kernel crashes!
In Windows, the crash dump is written by a special part of the disk driver,
which is normally not even mapped into kernel space, to avoid its corruption.

This sounds very much like wishful thinking to me. It could explain some
of the disk corruption people have experienced after Windows crashes.
The right way to do this is to not touch the disk at all after a kernel
problem. You can allow remote debugging over a serial line, for example.
The position where the dump is to be written is known beforehand, determined
during pagefile initialization. When the crash dump starts, the bugcheck
handler maps the dump writer, which then does the job.

And why in this universe would anybody want a Windows system crash dump
on their disk? Is there anything at all you can do with it?

Arno
 

Folkert Rienstra

Ah, you are talking about kernel crashes!

No, really?
This sounds very much like wishful thinking to me. It could explain some
of the disk corruption people have experienced after Windows crashes.
The right way to do this is to not touch the disk at all after a kernel
problem. You can allow remote debugging over a serial line, for example.

How useful.
And why in this universe would anybody want a Windows system crash dump
on their disk? Is there anything at all you can do with it?

Wotanidjut. Wotamoron.
 

Alexander Grigoriev

Arno Wagner said:
Ah, you are talking about kernel crashes!

You said "Under UNIX the kernel dumps to file". This usually happens during a
kernel panic (AKA kernel crash). If an application blows chunks, it's usually
called a core dump. By the way, in Windows you can do a similar thing and save
an application dump for postmortem debugging. This is used by ISVs to analyse
customer-site crashes.
This sounds very much like wishful thinking to me. It could explain some
of the disk corruption people have experienced after Windows crashes.
The right way to do this is to not touch the disk at all after a kernel
problem.

Especially creating a file for a kernel dump, like you say UNIX does.

I've seen my share of Windows crashes. Some happened when I was debugging my
own driver; I lost some files on a FAT32 partition because of that. Others
happened because of a faulty video driver. Others happened because RAM sticks
went sour (Crucial, if you're curious). Others happened because the motherboard
went south (a particular bit on the memory bus was unstable _only_ when disk
I/O was active). I never had NTFS corruption because of that, which is amazing.
On the other hand, I've never run any Norton/Symantec crapware. That might
explain it.
You can allow remote debugging over a serial line, for example.

If Windows is run with the /crashdebug boot option, it will try to connect to a
remote debugger (which can be serial, FireWire, or, in Vista, over Ethernet)
in case of a bugcheck. If the /debug option is used, the debugger will always
be active; it will break in even on an application crash, and DivX will hit a
breakpoint, too. This allows debugging applications in a headless
configuration, and debugging system services.
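For reference, these are boot.ini switches on XP; a typical kernel-debug entry
looks roughly like the line below (the ARC path and COM settings are just
placeholders, adjust to taste):

    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /debug /debugport=COM1 /baudrate=115200

Use /crashdebug in place of /debug if you only want the debugger engaged on a
bugcheck.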
And why in this universe would anybody want a Windows system crash dump
on their disk? Is there anything at all you can do with it?

Postmortem crash analysis and reporting, implemented in XP, allowed MS to
identify many problematic in-house and third-party drivers. Third-party driver
crashes are reported to the vendors. This is better than users suffering in
silence, pulling hair from all places.
 

Arno Wagner

You said "Under UNIX the kernel dumps to file". This usually happens during a
kernel panic (AKA kernel crash).

Not at all. Under UNIX the kernel dumps the memory of a crashed
application to a file. Dumping the kernel memory is such a bad
idea that I did not even consider you were talking about that.
If an application blows chunks, it's usually called a core dump.

And it is performed by the kernel...
By the way, in Windows you can do a similar thing and save an application dump
for postmortem debugging. This is used by ISVs to analyse customer-site
crashes.
Especially creating a file for a kernel dump, like you say UNIX does.

Let me repeat: there are no kernel dumps under UNIX. The only thing
I know of is the Linux kernel debugger, where you can dump over a serial
line, i.e. to a different system. Accessing the disk after a kernel problem
is not a sane thing to do.
I've seen my share of Windows crashes. Some happened when I was
debugging my own driver; I lost some files on a FAT32 partition
because of that. Others happened because of a faulty video
driver. Others happened because RAM sticks went sour (Crucial, if
you're curious). Others happened because the motherboard went south (a
particular bit on the memory bus was unstable _only_ when disk I/O was
active). I never had NTFS corruption because of that, which is
amazing. On the other hand, I've never run any Norton/Symantec
crapware. That might explain it.
If Windows is run with the /crashdebug boot option, it will try to
connect to a remote debugger (which can be serial, FireWire, or, in
Vista, over Ethernet) in case of a bugcheck. If the /debug option is used,
the debugger will always be active; it will break in even on an
application crash, and DivX will hit a breakpoint, too. This
allows debugging applications in a headless configuration, and
debugging system services.

And that is how it should be done.
Postmortem crash analysis and reporting, implemented in XP, allowed MS to
identify many problematic in-house and third-party drivers. Third-party driver
crashes are reported to the vendors. This is better than users suffering in
silence, pulling hair from all places.

Well, this has some uses for developers, but for ordinary users it
sounds a) mostly useless and b) dangerous, because there are disk accesses
after the kernel (and hence possibly the disk and controller) is in
an undefined state. This should be turned off by default.

Arno
 

Aidan Karley

If it is impossible to allocate a PF page, the OS just kills some process.
You've got the source. If you don't like it, change it.
Talk to me about an introductory text on good design.
Put a suitable quote (with reference) into your patch as a comment.
Regarding Windows, I said max(RAM, PF), not min(RAM, PF).
As I understood, and elaborated to Arno. Well, actually I elaborated on
max(RAM,PF) != sum(RAM,PF).

Arno points out that systems that use this allocation strategy
do, however, work, and routinely achieve year-plus uptimes. Certainly the
last *nix system that I had running in any seriousness at home achieved
~475 days of uptime with only 64 MB of physical memory and an off-the-shelf
kernel. In short, the reliability of the domestic mains supply is lower
than that of the computer system.
Which is part of the reason that I spent a couple of hours yesterday
building a low-power mini-ITX system with RAIDed hard drives,
to act as a file server for the rest of the house. But I'll also need to
reclaim my old UPS from Peet before I consider it ready for prime time.
 

Pipboy


Seeing as none of you can agree on what the best static size is, it's quite
obvious that the smartest method is to just let the OS handle it. I've
tried the pagefile in many different configs and never noticed any
performance improvement over letting the OS handle it. Not saying there
isn't some minuscule benefit, but if it is not noticeable then it is not worth
getting anal over. Just let XP handle it and forget about it.
 

Arno Wagner

Seeing as none of you can agree on what the best static size is, it's quite
obvious that the smartest method is to just let the OS handle it. I've
tried the pagefile in many different configs and never noticed any
performance improvement over letting the OS handle it. Not saying there
isn't some minuscule benefit, but if it is not noticeable then it is not worth
getting anal over. Just let XP handle it and forget about it.

Under UNIX the page-file size is something that can be planned and optimized,
especially since you typically don't use a pagefile but a complete partition,
for performance reasons. It seems that under XP, letting the OS handle it is
indeed the best choice. "No user-serviceable parts inside", it seems.

Arno
 

Alexander Grigoriev

Arno Wagner said:
Under UNIX the page-file size is something that can be planned and optimized,
especially since you typically don't use a pagefile but a complete partition,
for performance reasons. It seems that under XP, letting the OS handle it is
indeed the best choice. "No user-serviceable parts inside", it seems.

Arno

I wonder what the performance difference would be between a dedicated paging
partition and a non-fragmented (or little-fragmented) pagefile. My guess is
that it won't be noticeable. I think that older Unix required a dedicated
partition simply because it was not able to use a file, not because of any
performance gain.
 

Folkert Rienstra

Arno Wagner said:
Under UNIX the page-file size is something that can be planned and optimized,
especially since you typically don't use a pagefile but a complete partition,
for performance reasons. It seems that under XP, letting the OS handle it is
indeed the best choice. "No user-serviceable parts inside", it seems.

And the babblebot makes a 180 degree turn, once again.
 

Arno Wagner

I wonder what the performance difference would be between a dedicated paging
partition and a non-fragmented (or little-fragmented) pagefile. My guess is
that it won't be noticeable. I think that older Unix required a dedicated
partition simply because it was not able to use a file, not because of any
performance gain.

AFAIK the primary difference is that a paging file needs one more level
of indirection. In UNIX, if you use a paging file, you get
additional disk accesses in order to figure out where sector n
is; with a swap partition, that is simple arithmetic. It is likely
that Windows just burns some additional memory for that mapping, and
that a swap file is not actually several times slower than a swap
partition in Windows; you just waste that additional memory. If
you keep in mind that, e.g., Linux was initially designed to run
in 4 MB of memory (8 MB with X11), you can understand why they did
not want to waste that.
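Purely as an illustration (the structures and constants below are invented for
the sketch, not taken from any real kernel), the difference looks like this:

    #include <stdint.h>

    #define PAGE_SIZE   4096u
    #define SECTOR_SIZE  512u

    /* Swap partition: page n lives at a fixed sector, found by arithmetic alone. */
    static uint64_t partition_sector(uint64_t part_start_sector, uint64_t page_index)
    {
        return part_start_sector + page_index * (PAGE_SIZE / SECTOR_SIZE);
    }

    /* Swap file: the filesystem may scatter the file across the disk, so you
     * first have to walk a block/extent map -- the extra level of indirection.
     * Keeping that map in memory trades RAM for the extra disk accesses. */
    struct extent { uint64_t start_sector; uint64_t length_pages; };

    static uint64_t file_sector(const struct extent *map, unsigned n_extents,
                                uint64_t page_index)
    {
        for (unsigned i = 0; i < n_extents; i++) {
            if (page_index < map[i].length_pages)
                return map[i].start_sector + page_index * (PAGE_SIZE / SECTOR_SIZE);
            page_index -= map[i].length_pages;
        }
        return (uint64_t)-1;          /* page index beyond the end of the swap file */
    }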

In addition, I expect that for a typical Unix system administrator,
creating a swap partition is not a difficult operation. Very likely
Microsoft did not expect their users to be able to do that and went with
the "one large partition for everything" approach, a thing which
is frowned upon heavily in the UNIX world for several reasons.

Arno
 

Folkert Rienstra

Really? How'd you figure that?

Yeah, I can see your point with the level of snipping that you applied to the quoting.

Ok, here's what stares you in the face:

"Not at all. Good performance requires a static size. No control by the OS then."
and
"It seems that under XP, letting the OS handle it is indeed the best choice.
'No user-serviceable parts inside', it seems."

What exactly did you not understand?
 

Alexander Grigoriev

NTFS describes file fragment positions in terms of "extents", each having a
starting cluster (sector) offset and a length. For an open file, it keeps a
certain number (like up to 32) of extent descriptors in memory, without
needing to re-read them. This is different from FAT-style filesystems, which
were not designed to handle big files efficiently, though the FAT FS driver can
optimize the accesses by creating extent descriptors from the FAT chain on the
fly, as accessed. The few cycles necessary to walk the extent table
wouldn't be any more noticeable than the influence of the phase of the moon.
The paging subsystem can even request the pagefile's extents from the NTFS
driver (see FSCTL_GET_RETRIEVAL_POINTERS) and use direct sector offsets for
each page descriptor, thus removing even that tiny cost of indirection.
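For the curious, that same extent list can be queried from user mode with the
control code mentioned above. A rough sketch (error handling trimmed; the file
name is a placeholder, and the in-use pagefile itself is normally locked, so
try it on an ordinary file):

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Placeholder path: any ordinary file on an NTFS volume will do. */
        HANDLE h = CreateFileA("C:\\somefile.dat", FILE_READ_ATTRIBUTES,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        STARTING_VCN_INPUT_BUFFER in = { 0 };   /* start from the first cluster */
        BYTE buf[4096];
        RETRIEVAL_POINTERS_BUFFER *out = (RETRIEVAL_POINTERS_BUFFER *)buf;
        DWORD bytes;

        if (DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                            &in, sizeof(in), out, sizeof(buf), &bytes, NULL)) {
            LONGLONG vcn = out->StartingVcn.QuadPart;
            for (DWORD i = 0; i < out->ExtentCount; i++) {   /* one entry per fragment */
                printf("extent %lu: %lld clusters at LCN %lld\n",
                       (unsigned long)i,
                       out->Extents[i].NextVcn.QuadPart - vcn,
                       out->Extents[i].Lcn.QuadPart);
                vcn = out->Extents[i].NextVcn.QuadPart;
            }
        }
        CloseHandle(h);
        return 0;
    }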
 

Arno Wagner

Previously Alexander Grigoriev said:
NTFS describes file fragment positions in terms of "extents", each having a
starting cluster (sector) offset and a length. For an open file, it keeps a
certain number (like up to 32) of extent descriptors in memory, without
needing to re-read them. This is different from FAT-style filesystems, which
were not designed to handle big files efficiently, though the FAT FS driver can
optimize the accesses by creating extent descriptors from the FAT chain on the
fly, as accessed. The few cycles necessary to walk the extent table
wouldn't be any more noticeable than the influence of the phase of the moon.
The paging subsystem can even request the pagefile's extents from the NTFS
driver (see FSCTL_GET_RETRIEVAL_POINTERS) and use direct sector offsets for
each page descriptor, thus removing even that tiny cost of indirection.

Just as I suspected: The penalty is in extra memory needs, not speed.

Arno
 

Alexander Grigoriev

Arno Wagner said:
Just as I suspected: The penalty is in extra memory needs, not speed.

Arno

Yes, if you want to call 8 bytes per pagefile fragment ("extent") a
"penalty".
 

chrisv

Folkert said:
Yeah, I can see your point with the level of snipping that you applied to the quoting.

Ok, here's what stares you in the face:

"Not at all. Good performance requires a static size. No control by the OS then."
and
"It seems that under XP, letting the OS handle it is indeed the best choice.
'No user-serviceable parts inside', it seems."

What exactly did you not understand?

Maybe where he's from, that would be called a pi turn. 8)
 

Folkert Rienstra

Alexander Grigoriev said:
Yes, if you want to call 8 bytes per pagefile fragment ("extent") a "penalty".

Just one more of babblebot's clueless bailouts.
Is there some reason why you must encourage the babblebot?
 
