Win XP Fragmentation on 160 GB drive


Seawulf

I run Win XP Home SP1 on my desktop machine with about 40GB of the 160GB
drive in use. It seems to fragment rapidly, but that, I suppose, is
relative. What I cannot understand is why, with 120GB of free space on
my drive, the OS insists on writing to available fragments rather than
large areas of contiguous space on the disk.

Is there any way to change this?
Thanks,
Charlie
 
Consider purchasing and using an "industrial strength"
disk defrag program, such as:

PerfectDisk Exclusive Features
http://www.raxco.com/products/perfectdisk2k/

--
Carey Frisch
Microsoft MVP
Windows XP - Shell/User

Be Smart! Protect Your PC!
http://www.microsoft.com/athome/security/protect/default.aspx

----------------------------------------------------------------------------------------

"Seawulf" wrote:

| I run Win XP Home SP1 on my desktop machine with about 40GB of the 160GB
| drive in use. It seems to fragment rapidly, but that, I suppose, is
| relative. What I cannot understand is why, with 120GB of free space on
| my drive, the OS insists on writing to available fragments rather than
| large areas of contiguous space on the disk.
|
| Is there any way to change this?
| Thanks,
| Charlie
 
| I run Win XP Home SP1 on my desktop machine with about 40GB of the 160GB
| drive in use. It seems to fragment rapidly.

Let me guess; each user account has its own 250M+ of web cache?
| What I cannot understand is why, with 120GB of free space on
| my drive, the OS insists on writing to available fragments rather than
| large areas of contiguous space on the disk.

I think it is trying to do automatically what some of us have been
doing manually for years - store "cold" seldom-used material far away
where access is slower, so that frequently-used or arbitrarily-used
material can be faster to access.

If anyone has URL on defrag logic, I'd love to read it!

I fix the issue by partitioning (which in itself doesn't speed things
up unless you assert control over what goes where):

C: FAT32 7.99G OS, core apps, page file, temp, 20M TIF
D: FAT16 2G Small crucial user data files
E: FAT32 Massive Nearly everything else
F: FAT16 2G Autobackups of data, cold storage crucials

On a 120G HD, this means that no matter how dumb Defrag's strategy may
be, most disk activity stays within the first 10% of the HD - and
those post-bad-exit file system checks generally only check less than
10% of the HD. Even if E: gets chipped, you can avoid using E: at all
until such time as you can check and (hopefully) fix the file system there.
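A quick back-of-envelope check of that "first 10%" claim (a sketch, assuming the partitions sit on the platter in C:, D:, E:, F: order, using the sizes listed above on a 120G HD):

```python
# Rough check of the "most activity stays in the first 10%" claim,
# assuming partitions are laid out in drive order C:, D:, E:, F:.
# Sizes (GB) are the "hot" everyday volumes from the scheme above.
partitions = [("C:", 7.99), ("D:", 2.0)]
drive_gb = 120.0

hot_span = sum(size for _name, size in partitions)
fraction = hot_span / drive_gb
print(f"Everyday head travel spans {hot_span:.2f} GB = {fraction:.1%} of the drive")
```

So C: plus D: together span just under 10 GB, i.e. roughly 8% of the disk, which is where the worst-case head travel stays as long as E: and F: are rarely touched.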


------------ ----- ---- --- -- - - - -
The most accurate diagnostic instrument
in medicine is the Retrospectoscope
 
Chris

You can place files which constantly fragment in their own partitions.
The page file and temporary internet files are examples. If you keep
these out of the system folder then you don't need to defragment the
system files so often. In practice I never defragment the page file and
temporary internet files. Of course the inbuilt Defragmenter will not
defragment the pagefile. I also only occasionally defragment the system
and application folders. Excluding files which do not fragment from the
Defragmentation process naturally reduces the time spent on this
housekeeping chore.

Another point I would make is that System Restore creates unnecessarily
large files daily, so their regular removal using the Disk Cleanup option
is good practice to ensure that a healthy amount of free space is
maintained in the system partition.
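To put a rough number on how much space System Restore can quietly claim (a sketch, assuming the commonly cited XP default cap of 12% of the monitored volume):

```python
# Rough illustration of System Restore's appetite, assuming the
# commonly cited XP default cap of 12% of the monitored volume.
def sr_default_cap_gb(volume_gb, cap_fraction=0.12):
    """Approximate default System Restore ceiling for a volume."""
    return volume_gb * cap_fraction

drive_gb = 160
print(f"Default System Restore cap on a {drive_gb} GB drive: "
      f"~{sr_default_cap_gb(drive_gb):.1f} GB")
```

On a 160 GB drive that works out to roughly 19 GB of restore points before anything gets purged, which is why trimming them with Disk Cleanup matters.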

If you use the My Documents folder, you can redirect it to a separate
data folder, which removes constantly changing files from the default
location on the system partition, where the system files are located.


~~~~~~

Regards.

Gerry

~~~~~~~~~~~~~~~~~~~~~~~~
FCA

Stourport, Worcs, England
Enquire, plan and execute.
~~~~~~~~~~~~~~~~~~~~~~~~
 
Actually, I am the only user of the machine. I use Firefox and have
20MB set up for Web Cache...
 
The System Restore suggestion was good for me. I checked and found out
I had MAX set up for System Restore File space, which amounted to 18GB!!

I also use a system backup/restore utility, ERUNT, which does a save on
startup. Each day's folder is about 30 MB and I had a few months worth.
So, I have added that to my periodic cleanup routine.
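That periodic cleanup is easy to script. A hypothetical sketch in Python (ERUNT's folder naming varies by setup, so this goes by each subfolder's modification time rather than parsing folder names):

```python
# Hypothetical cleanup sketch for dated backup folders such as
# ERUNT's autobackups: delete any subfolder of `root` that has not
# been modified within the last `keep_days` days.
import os
import shutil
import time

def prune_old_backups(root, keep_days=30, now=None):
    """Remove stale backup subfolders; return the names removed."""
    now = time.time() if now is None else now
    cutoff = now - keep_days * 86400
    removed = []
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)
            removed.append(name)
    return removed
```

Run it from a scheduled task with `root` pointing at the autobackup folder; at 30 MB per day's folder, keeping 30 days bounds the backups at around 1 GB.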

Thanks for your help,
Charlie
 
On Mon, 13 Dec 2004 19:05:46 -0000, "Gerry Cornell" wrote:

| Chris

Hi!

| You can place files which constantly fragment in their own partitions.
| The page file and temporary internet files are examples.

Yes, but because these are so often accessed, I take a reverse
approach; I keep them on C: in the interests of less head travel, but
then I get most everything off C: and keep C: small, so that no matter
how bad it gets, max head travel is still a small % of HD's "length".

Also makes for faster C: defrags :-)
| In practice I never defragment the page file

In Win9x, I left swap file to be managed automatically by the OS. If
I felt there was a need to stop swap from fragmenting or resizing, I'd
set a minimum that was above the max I thought I'd ever need, but
leave the maximum "open" in case I'd thought wrong.

In XP, I see the OS defaults to setting a page file maximum size, and
stupidly chooses these sizes based on RAM size. As if a 128M PC was
going to get by with less swap space than a 512M PC - yeah, right.

So my practice on XP is to set min=max=512M, which freezes whatever
fragmentation you already have, into aspic forever. To reduce that
frag, I set the pagefile to None and defrag until free space is
consolidated, then create the 512M pagefile and re-defrag to see that
it's not stuck at the far end of the volume for some reason.

I do this early in the build process, so pagefile becomes embedded
midway between the first-installed code files and the new-file edge.
Because the whole volume is only 8G (< 10% of HD size), no matter how
wrong I get this, it will be better than one big C:
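The objection to RAM-based sizing is easy to see in numbers (a sketch, assuming the commonly cited XP default of initial pagefile size = 1.5 x RAM):

```python
# Sketch of the RAM-based sizing objected to above, assuming the
# commonly cited XP default of initial pagefile = 1.5 x RAM.
# A fixed min = max = 512M scheme ignores RAM and never resizes.
def default_initial_pagefile_mb(ram_mb):
    """Approximate XP default initial pagefile size (assumed 1.5 x RAM)."""
    return 1.5 * ram_mb

for ram_mb in (128, 512):
    print(f"{ram_mb} MB RAM -> default initial pagefile "
          f"{default_initial_pagefile_mb(ram_mb):.0f} MB (fixed scheme: 512 MB)")
```

So the machine with the least RAM, which will lean on swap the hardest, gets the smallest default pagefile, which is exactly backwards.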
| temporary internet files.

Best thing to do with TIF is set a sensible capacity limit, e.g. 20M.
Be prepared to chase after newly-created accounts to fix this, as new
accounts always start with MS duhfaults and each has its own TIF.
| Another point I would make is that System Restore creates unnecessarily
| large files daily so their regular removal using the disk cleanup option
| is good practice to ensure that a healthy amount of free space is
| maintained in the system partition.

Because I have core code on C: only, I disable SR on all other
volumes. That alone reduces the SR bulk factor; I reduce that further
by setting a lower max capacity, say 433M.
| If you use the My Documents folder you can change the default to a data
| folder to remove constantly changing files from the default location,
| which is where the system files are located.

I want the "My Docs" object to be small and clean, so I can back it up
easily, and restore it without fear of embedded malware.

So I override the default shell folder locations, as follows:
- "My Docs" to D:
- "My Pics", "My Music", "My Videos" out of "My Docs" to E:
- all incoming material OUT of "My Docs" to E:\Incoming
- all desktops to E:\Incoming\Desktops

Part of keeping incoming material, and thus malware, out of the data
and backup sets is to avoid apps that mix your data and other people's
incoming junk. No Outlook .PST, no OE mailboxes - instead, I use
Eudora, keep the mail data in "My Docs" and attachments in E:\Incoming

And if folks dump on the desktop, it doesn't matter anymore, because
they are no longer stinking up C: - and you can bet E:\Incoming is
permanently under the cross-hairs of my antivirus apps :-)

Now it's easy; a Task runs a .bat that automates an archiver to crunch
"My Docs" into a .ZIP that fits easily on USB stick or CDRW.
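A hypothetical cross-platform stand-in for that scheduled .bat (the folder paths and archive name are illustrative, not the poster's actual setup):

```python
# Hypothetical stand-in for the scheduled .bat: crunch a documents
# folder into a dated .zip small enough for a USB stick or CDRW.
import os
import zipfile
from datetime import date

def backup_docs(docs_dir, dest_dir):
    """Zip everything under docs_dir into dest_dir/mydocs-YYYY-MM-DD.zip."""
    out_path = os.path.join(dest_dir, f"mydocs-{date.today():%Y-%m-%d}.zip")
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for folder, _dirs, files in os.walk(docs_dir):
            for name in files:
                full = os.path.join(folder, name)
                # Store paths relative to docs_dir so the archive restores cleanly
                zf.write(full, os.path.relpath(full, docs_dir))
    return out_path

# e.g. backup_docs(r"D:\My Documents", r"F:\Autobackups")
```

Because "My Docs" holds only small, crucial files under this scheme, the resulting archive stays small enough to copy off-machine daily.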

Trouble is, every time someone creates a new user account, that
account will screw this up completely until it's manually fixed.

Forced with a choice between risk management and the safety benefits
of limited user accounts, I chuck user accounts overboard first.


---------- ----- ---- --- -- - - - -
On the 'net, *everyone* can hear you scream
 