Page File size - I've been thinking.


kony

No, I don't really agree. I think 90% of the time it will just be swapping a load of crap you will never need again to disk, and disk writes are slow. I would say my computer is a lot more responsive since I ditched the page file.

I don't wish to be rude, but the idea that you have over 1 gig of frequently used data is ludicrous - cloud cuckoo land. If you are reading in files that large you might as well read the original file.

If you read in 100 meg then you would have to write 100 meg to disk, which is a much slower process than simply reading in the original file.

Pointless.


Suppose you have a system with 1GB in it. If your system has a pagefile and swaps out unneeded data once, it then has more real memory available for other tasks for the rest of the time it is running. What can you do with this additional real memory? Not just run large jobs; it serves as a filecache so the things you ARE doing don't have to be reread from the HDD each time you start an application.

Do you see? By paging something you never use out to the swapfile, it can allow things you do use to be read from the HDD only once instead of several times in between reboots. This tradeoff means that some may have a minor performance improvement disabling the pagefile, but others won't and would have a significantly larger benefit keeping it enabled and set to a reasonable figure.
 

Lord Turkey Cough

Yes, was just a joke really, you might recognise it from the end of this :O)

That 10 gig didn't seem a lot when I added my new 250 gig drive ;O)

 

GT

startrap said:
Don't disable the paging file unless you are really, really short of
disk space. If Windows needs the paging file and cannot find it, it may
BSOD or crash.

Never happened to me in over 3 years of no paging file!
 

GT

kony said:
Depends entirely on how much memory you have and what you're
running.

Absolutely - as I said, monitor the memory usage before turning off the
swapfile. If you fill the RAM and have no swapfile 'overflow', then things
will hang/crash. Personally I find my 1.5GB of RAM can come close to full,
but I simply close one or two of the applications I am no longer using and
usage drops down again. I prefer it this way, so when I load a large image
or work with hundreds of small files, the hard disk access is not competing
with swapfile access, which I was experiencing previously.
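
As GT suggests, it is worth watching actual RAM and swap usage for a while before touching the swapfile setting. A minimal sketch of that kind of check, assuming Python with the third-party psutil library is available (the one-minute interval is just illustrative; stop it with Ctrl+C):

import time
import psutil  # third-party: pip install psutil

# Poll overall memory and swap usage once a minute while you work.
# If 'available' RAM regularly approaches zero, or swap usage keeps
# growing, running this machine without a page file would be risky.
while True:
    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()
    print(f"RAM : {vm.used / 2**20:6.0f} MB used, "
          f"{vm.available / 2**20:6.0f} MB available ({vm.percent:.0f}%)")
    print(f"Swap: {sw.used / 2**20:6.0f} MB used of {sw.total / 2**20:.0f} MB")
    time.sleep(60)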
 
K

kony

Absolutely - as I said, monitor the memory usage before turning off the
swapfile. If you fill the RAM and have no swapfile 'overflow', then things
will hang/crash.

I agree with this, and I agree that having a pagefile is in
itself a performance loss if the system and its use allow doing
without.

However, if there is no pagefile, application memory
allocation practically always exceeds real memory usage.
This means less memory remaining for not only other apps,
but that the portion of memory used for file caching is
flushed to the extent necessary to provide the memory space
requested by each app.

If there is a pagefile, the extra but unused memory
allocation can be virtualized without flushing the file
cache. It means that (like most things), even in cases
where a system's running apps remain within the amount of
physical memory installed, there can still be detriments as
well as benefits to doing without a pagefile/virtual memory.

Essentially I'm saying that beyond what memory usage is
reported, there are still other issues.


Personally I find my 1.5GB of RAM can come close to full,
but I simply close one or two of the applications I am no longer using and
usage drops down again. I prefer it this way, so when I load a large image
or work with hundreds of small files, the hard disk access is not competing
with swapfile access, which I was experiencing previously.

IMO, if your real memory is close to full on a regular
basis, you'd benefit from at least 256-512MB more memory to
allow a more persistent filecache.
 

GT

kony said:
I agree with this, and I agree that having a pagefile is in
itself a performance loss if the system and its use allow doing
without.

However, if there is no pagefile, application memory
allocation practically always exceeds real memory usage.
This means less memory remaining for not only other apps,
but that the portion of memory used for file caching is
flushed to the extent necessary to provide the memory space
requested by each app.

If there is a pagefile, the extra but unused memory
allocation can be virtualized without flushing the file
cache. It means that (like most things), even in cases
where a system's running apps remain within the amount of
physical memory installed, there can still be detriments as
well as benefits to doing without a pagefile/virtual memory.

Essentially I'm saying that beyond what memory usage is
reported, there are still other issues.




IMO, if your real memory is close to full on a regular
basis, you'd benefit from at least 256-512MB more memory to
allow a more persistent filecache.

Of my 1.5GB, I am normally using around 400-600MB during the day. One
particular game takes this easily to around 1 GB on its own, but performance
is otherwise excellent. My experimentation with swap files was a while
back, so maybe I should re-assess, but I have been considering a little more
RAM for a while - trouble is that it means scrapping what I have now. Such
is life!
 

DevilsPGD

In message <[email protected]> "GT"
My experimentation with swap files was a while
back, so maybe I should re-assess, but have been considering a little more
RAM for a while - trouble is that it means scrapping what I have now. Such
is life!

That's why god invented eBay.
 

Lord Turkey Cough

startrap said:
Defragging makes a difference if you use programs that access the disk
frequently. Reading a contiguous file is faster than reading a
fragmented file although it does depend on the actual size of the file
and degree of fragmentation (eg 2 fragments = not a big deal. 50
fragments = bad). A fragmented MFT (Master File Table) is particularly
bad for performance.

This is a pretty negligible amount of time. After all, your computer arrives defragged, most of the programs you add to it will be added in the early days, and hence to a 'clean' drive, and hence defragged. After loading, everything is run from memory anyway, so it does not matter if the original file was fragmented.

Furthermore, the files associated with a program will be put in the most convenient place on the drive initially; when you then defrag the drive those files will be scattered pretty randomly over the drive. The defragger cannot know which programs those files are normally accessed by - impossible.
Defragmentation also reduces drive wear because reading/writing a file
contiguously stresses it relatively less (mechanically, that is), since the
actuator arm does not have to go nuts trying to pick up fragments from
around the platter. Finally, this last point saves battery power on
laptops, though it is not a factor for desktops.

The last point I made about files associated with programs nullifies
your point. Just think about the wear and tear your background program
causes looking for fragmented files too!!!!

The first time I ever defragged my computer, to speed up the start-up time,
I timed it to see 'how much faster' it was. If anything it appeared to be
slower!!! (honest!!). I have not really bothered much with it after that; it
seems to make little or no difference. The six hours or so of constant disk
activity didn't really endear me to the idea either!!

Like virus scanning it is by and large a waste of time. It never finds
anything barring red herrings.

As for automatic defragmentation, it's not that the drive runs all the
time during idle... only when necessary, i.e. a few minutes a day. It's
hardly a bother.


Anyway, all this fragmentation stuff is OT lol. Coming back to the
paging file: is there any demonstrable performance increase in NOT
having a paging file, or is it just a 'feeling' that everything is
faster? (Though that's good enough for me lol) Any hard performance
data? I am really curious about this, but too chicken to actually try it
out on my rig.

Well I have taken the plunge and tried it out on my 'rig' and whilst I have
no data to prove it is better, it certainly feels no worse. I think my
machine is quieter, but I can't prove that really; all I can say is I have
540 meg of free memory, so I can't really see much reason for any disk
activity, and indeed there does not appear to be much if any of that, even
though I have a few programs running which are connected to advertising
stuff etc.
My rig at the moment is a C2D E6550 on a Gigabyte P35-DS3R mobo with 2
GB of RAM, 2x160GB + 1x250GB HDDs, 7800GT 256MB and XP Pro. It's a
decent rig with sufficient RAM, but I still leave the paging file on (in
a small, separate partition) because, frankly, I am very scared of
crashes in the middle of something important (I use Photoshop and huge
DSLR RAW files a lot) and losing my data.

I think mine locked up once early on, but that could have been due to a
number of reasons as it has done that before with paging on. Since then it
has been fine, and that's about 5 days now.

Also, before, it could get into a state where it ran incredibly slowly due
to constant disk activity, so I had to wait until that stopped, and quite
frankly it would be quicker and better to reboot!

Personally I would just give it a go, otherwise you will never know. 2 gig
is a lot; it's not that long ago I only had 2 gig of drive space!!!! I am
sure you have run into problems even with paging on, so what have you got
to lose?

If you consider the massive difference between the access time of RAM and
the hard drive, then quite frankly it is counterproductive.
For example, I sometimes run a statistical program on tens of thousands of
poker hand history files. The first run takes ages as all the files are on
disk; after that, when the files are cached in memory, it is much faster,
by a factor of at least 20, maybe 50 or 100. So paging to my mind is rather
pointless; if you get to the stage where you are paging a lot you would
probably be better off rebooting!!

No harm in trying it.
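
The cache effect described above is easy to see for yourself. A rough sketch, assuming Python and a made-up folder of hand-history text files (the path is purely illustrative): with enough free RAM, the second pass is served largely from the OS file cache rather than the disk.

import glob
import time

# Hypothetical location of the hand history files - adjust to suit.
files = glob.glob(r"C:\PokerHistories\*.txt")

def read_all(paths):
    total = 0
    for p in paths:
        with open(p, "rb") as f:
            total += len(f.read())
    return total

for run in (1, 2):
    start = time.perf_counter()
    nbytes = read_all(files)
    print(f"Run {run}: {nbytes} bytes in {time.perf_counter() - start:.2f} s")

# On the first run the drive has to seek to every file; on the second,
# if the files still fit in the file cache, most reads never touch the disk.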
 

Lord Turkey Cough

startrap said:
Not for a HDD. Even a few millisecs is a long time -for the system-.
When you consider track seek time, rotational latency, settle time etc
for each fragment that the drive has to pick up sequentially, it can
have substantial impact on performance. But as I mentioned, the degree
of fragmentation of the files is key. Apart from the optical drives, the
hard drive is the slowest component of the PC because of its mechanical
operation, so if it runs even slower due to heavy fragmentation, it's
not good.

Well it has to do that anyway, for even if the whole file is unfragmented it
has to find the file. When a file is written to a fragmented disk I would
imagine it puts it in the places which are quickest to access (seems
sensible), so I doubt the access overhead would be that much.
Not really. I keep adding and deleting programs quite often, and with
the size of today's files fragmentation can build up quickly. And
fragmentation affects all files, not just 'programs'. Modify any file and
it may get fragmented or cause free space fragmentation if it is flanked
by other files.

But it's not a great overhead all things considered.
Er....'the loading into memory' is what is affected by fragmentation.
As is writing to the drive. Once it's in RAM, it shouldn't matter unless
you are *gasp* paging it to the HDD :D and the page file itself is
fragmented.

I don't use a page file anymore. I think it is better to ensure you never
need a page file by not overloading your system.
Not necessary at all. In NTFS, it gets put into the first bits of free
space available, which might or might not be fragmented free space.

However it's likely to be on the same track or the nearest track to the read
head, so not too much work. Next time you use that file the read head is
also likely to be in a similar position, unless of course you have defragged,
in which case it will likely be in some random position on the disk.
Not at all. Defragmenters consolidate files and directories.

And that is what I am saying could be the cause of the problem.
A file will have been moved from what was a convenient place to access into
a different place based upon directory structures. Initially the required
files might have been written on the same track; now they will be pretty
much scattered randomly all over the drive.
Actually, they can. At least most of the new ones offer sequencing and
placement options based on a number of file attributes.

I don't think that will be helpful.
Once the files are defragmented, the head can pick them up sequentially
so no wear and tear. A defragmented drive with well consolidated free
space suffers from lesser fragmentation during future file writes.

Whilst the files themselves may be defragmented, a set of files used
as a functional group is likely scattered all over the drive.

It's a bit like an untidy desk: it may look untidy, but things tend to be
automatically grouped together by usage. Everything for a particular
function will tend to be grouped together by last usage, which is likely the
most convenient grouping for their next usage. When you tidy up that desk
you destroy that 'natural grouping'. Things become grouped by other things
unrelated to their most likely usage.
And the auto defraggers don't go to work 24x7; as I said, only when
necessary, and using the barest minimum of resources. Usually, they
would run for a few minutes a day at the most.

Better than the head going crazy *each time* it has to pick up a
fragmented file.

I don't think that would happen; the fragments would initially be
written to the most convenient space and hence be in a convenient space
when it comes to reading them again.
The first time I ever defragged my computer, to speed up the start-up time,
I timed it to see 'how much faster' it was. If anything it appeared to be
slower!!! (honest!!). I have not really bothered much with it after that; it
seems to make little or no difference. The six hours or so of constant disk
activity didn't really endear me to the idea either!!

You are right, that's quite a departure from the norm. It has never
been the case in my experience. Usually, manual defragmentation ought to
be as follows: [defragmentation of files] -> [boot-time defrag to defrag
the MFT, paging file :D etc] -> [final file defrag]. Once this is done,
you are all set.

Well, whatever the case, I don't find fragmentation an issue for me.
My disk does not go crazy in general, and if it does I am pretty sure it is
nothing to do with fragmented files. More likely to do with excessive
paging; my view is once it starts trying to use your hard drive as RAM you
may as well give up, the difference in access times is colossal.
Not a waste of time at all, since it is completely automatic in nature.
And it is useful for those who use their systems heavily. I game, use
Photoshop, and my PC is my main entertainment device in my room, so
defragging definitely helps me.

Well, in my experience it makes no noticeable difference; I did it several
times on my old system and it seemed exactly the same, if not worse. Even if
you defrag individual files you will often be working with hundreds of small
files anyway, which is the same as one file in a hundred fragments.
Defragging may well put these 100 files in less convenient places than those
which they were initially in, so it's swings and roundabouts. I certainly
have no intention whatsoever of defragging any of my drives at the moment. I
think it would more likely make things worse than better, and as it is fine
at the moment it is not a risk I am prepared to take.
As for AV scans, if your AV setup is good in the first place, no
viruses will get through the net; but fragmentation is an inherent trait
(er, 'feature', thanks Microsoft!) of the FAT and NTFS file systems.
Others such as ext3 don't suffer as much from this.

Anything Microsoft produces is rubbish; it takes 2 seconds to pop up my
volume control, from RAM. No amount of defragging will make a silk
purse out of a cow's ear. Enough said.
If you say there is no drawback or benefit from disabling the paging
file apart from the relative lack of HDD activity, then it does not seem
to be necessary to take the risk. Maybe I can try it out on my office PC
which is er..'expendable' and ironically contains no important data.

Well I was a little worried at first - "Will it crash?" I thought - but it
has been fine for about a week now, and considerably quieter I would say.
Certainly no noisier.
That slowdown could have been due to a number of reasons including
fragmentation or a fragmented paging file, background processes/programs
accessing the disk etc.

Actually, I've never had any problems with the paging file being
enabled since it sits inside its own little partition on the outer edge
of the platter. In fact, I can't remember when was the last time my
system BSODed or hard crashed. It's always been running smoothly since I
first built it 2 years ago with an A64/1GB RAM as the starting point. I
upgraded the system to Intel only recently.

Never had a BSOD on mine yet; it seemed to have locked up a couple
of times, but generally I just reboot pretty quickly rather than wait to
see if it 'sorts itself out' and then have to reboot anyway. Better to
reboot in a couple of minutes than wait 5 hoping it will cure itself!!
You do have a point, that RAM is always much faster than the HDD, but
it still has to get the poker files from the HDD to the RAM, and that's
where the bottleneck comes in. I doubt paging has much to do with it.


No, it can't really.
Actually, another poker site puts all the poker hand histories into one big
file, or several big files, and I think this is a much better approach -
much less disk activity with, say, one 40 meg file than with 40,000 1KB
files, and I do mean much less; I would say at least 50 times faster.
It was a bit of a pain modifying the program, though, especially as I was
not 100% sure of the structure of the history files initially. I do now, so
the second program is structured better. I think I would also be better off
bunging other sites' files into one big file too. Mind you, the statistics
it gathers on a player are not of much use; they don't tell you what cards
he holds, and it is easier to guess from how he plays his current hand than
from statistics on how he played his previous hands. So counterproductive
in a way, but it sharpened up my programming skills.
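
For anyone curious what the one-big-file approach might look like, here is a minimal sketch, assuming Python; the folder name, output file names and the idea of keeping a small offset index beside the data are all illustrative, not how any particular poker client actually stores its histories.

import glob
import json
import os

SRC_DIR = r"C:\PokerHistories"      # hypothetical folder of small files
OUT_DATA = "histories.dat"          # one big concatenated file
OUT_INDEX = "histories_index.json"  # where each original file starts and its length

index = {}
with open(OUT_DATA, "wb") as out:
    for path in sorted(glob.glob(os.path.join(SRC_DIR, "*.txt"))):
        start = out.tell()
        with open(path, "rb") as f:
            out.write(f.read())
        index[os.path.basename(path)] = (start, out.tell() - start)

with open(OUT_INDEX, "w") as f:
    json.dump(index, f)

# Reading one history back is then a single seek + read in the big file
# instead of opening one of 40,000 tiny files:
if index:
    name, (offset, length) = next(iter(index.items()))
    with open(OUT_DATA, "rb") as f:
        f.seek(offset)
        data = f.read(length)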
 

John

Noozer said:
That is an old wives' tale.

Does it really make sense to have a 128meg swapfile if you only have 128meg,
but have a 1gig swapfile if you have 1gig?

Let Windows manage the size... If you have lots of RAM you won't be hitting
it very often anyhow.



Letting Windows manage the size is a good idea. The only other
suggestion I would make is to put the swap/page file in its own
partition to prevent fragmentation. An even better solution is to put it
in its own partition on a different hard drive than the drive where the
OS is located - obviously the faster the drive the better. If running
Windows, turn off indexing and System Restore on a dedicated page file
partition, as they serve no purpose and just slow down access to the page
file.

John
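
If you want to verify where Windows currently keeps its page file (for example, after moving it to a dedicated partition as suggested above), the configured locations and sizes live in the registry. A small read-only sketch, assuming Python on Windows; the entry format shown in the comment is typical, but treat the whole thing as illustrative:

import winreg  # standard library, Windows only

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    # PagingFiles is a multi-string value; each entry usually looks like
    # "D:\pagefile.sys 1536 3072" (path, initial size MB, maximum size MB),
    # with "0 0" meaning the size is system-managed.
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")

for entry in paging_files:
    print(entry)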
 

Lord Turkey Cough

Lord Turkey Cough said:
Yes I suppose they would, I think I might have had that once when I opened
up a window on an application, sounds plausible. Typically I have around
1/2 gig available. I have just put my machine up to what I would call 'max'
usage, 4 poker applications, OE, several IE and a digital TV application
running and I have 300 meg free. I would not normally run with that kind of
load as it is quite a load on the CPU, especially the TV app.
Anyway I will keep an eye on things in the task manager and see how I get
on.
It was fine yesterday and has been fine so far today. Generally I would
prefer to run without a pagefile.

OK then, 10 days on from the above post and all has been fine, not one crash
or lock-up. I can go down to my last 250 meg of RAM sometimes, but there
is so much stuff running that I would need a faster CPU before I needed
more RAM.
 

kony

OK then, 10 days on from the above post and all has been fine, not one crash
or lock-up. I can go down to my last 250 meg of RAM sometimes, but there
is so much stuff running that I would need a faster CPU before I needed
more RAM.


Again I remind you that used memory <> total allocated
memory. When you open an app it reserves a certain amount
more than used, so when you use that app if you do something
demanding it will exceed your real memory expectations.

Maybe your use isn't so demanding and thus you can get away
with running without a pagefile, but many can't.
Mainly I suggest that if/when you receive an out-of-memory
type message, an abrupt app termination, or a bluescreen,
the first attempt at resolution should be to re-enable the VM
pagefile.

I write this having traveled down that road; things do seem to work fine
until you run something that needs this VM allocation, and when it can't
get it, it crashes.
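
kony's distinction between used and allocated memory can be made visible with a rough, Windows-only sketch, assuming Python with the third-party psutil library; the 200 MB figure and the use of VirtualAlloc here are purely to illustrate the idea, not a model of what any real application does.

import ctypes
import psutil  # third-party: pip install psutil

MEM_COMMIT = 0x1000
MEM_RESERVE = 0x2000
PAGE_READWRITE = 0x04
SIZE = 200 * 2**20  # 200 MB

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p
kernel32.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                                  ctypes.c_ulong, ctypes.c_ulong]

proc = psutil.Process()

def show(label):
    m = proc.memory_info()
    # On Windows psutil reports rss as the working set (memory actually in
    # RAM) and vms as committed memory (what has been allocated/charged).
    print(f"{label}: working set {m.rss / 2**20:.0f} MB, "
          f"committed {m.vms / 2**20:.0f} MB")

show("at start")

addr = kernel32.VirtualAlloc(None, SIZE, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE)
show("after committing 200 MB (untouched)")  # committed jumps, working set barely moves

if addr:
    ctypes.memset(addr, 0, SIZE)  # actually touch the pages
    show("after touching the 200 MB")  # now the working set catches up

Without a page file, that committed-but-untouched memory still has to be backed by RAM, which is the hidden cost being described above.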
 

Lord Turkey Cough

startrap said:
To Lord Turkey Cough,
Dude, no offence, but I don't think you have a clear understanding of
fragmentation and defragmentation and how it relates to file
read/writes. When you say that your performance is better with one large
40MB file than 40,000 1KB files, that's analogous to a defragmented
file.

I do understand it; what fragmentation there is is not noticeable.
I know if I defrag I won't notice any difference, so why bother?

Anyway, let me not drag this off-topic conversation any further.
Let us agree to disagree :)

I disabled the paging file on my office HP desktop, and after running
it for a few hours, I have found zero difference in performance or disk
activity. I don't think disabling the paging file helps in any way
whatsoever in my case, so I re-enabled it and left it on.

I think this is because it is not paging anyway, as I say below; it's
not used, so it does not matter if it is on or off.


It ran fine until a few days ago when I got a warning to increase my page
file; I just closed down some stuff instead, but I have now set it to a
Windows-managed page file. So you will get a warning before any major
problem. It does not seem to make much difference either way.
I think now it does not matter how I set it, because it is never or rarely
used anyway. Basically, once in a month it got low enough to give a warning
(I think I had over 100 meg spare at the time).

As for your case, where loading those poker files from the HDD to the
RAM takes a long time, I am curious as to your memory usage before and
after loading those files.

Not sure what you mean.
I know all the files fit in memory because when I run it a second time
I do not hear the same disk access, unless I have run something in between
to overwrite them.
If I do it on a lot of files there might not be enough memory, so it would
have to do the whole lot again. Thus I don't do it on a huge pile of files
as it takes ages.
Have you checked? Also, I am not sure why
your OS would page the drive when initially loading those files into
the RAM, if you have sufficient RAM in the first place. Me thinks you
have some other disk, hardware or OS related problems... run chkdsk on
the drive to see if it comes up with errors, and also run a
fragmentation analysis and let us know (yeah I know you don't believe in
defragging, but never hurts to check). Also if you have an IDE drive,
check if it has downgraded itself to PIO mode from UDMA.

I don't think I have a problem with my drive; it works fine, I am sure of
that. I have benchmarked it before and it is 'normal' enough. Just did
another and its figures look OK considering I have a lot of other apps
running:
12 ms random access
Drive Index 42 MB/s

I don't really have a problem with my computer's performance generally.
You also ought to download and run this freeware program called HDtach
(google for it) to check if your disk performance is normal.

I ran the SiSoft benchmark and it looked comparable to similar drives.
 
