"The poster formerly known as Nina DiBoy"
AusLogics Defrag: Quote "It's not as good as the built-in Windows
Defragmenter. Considering that Auslogics uses this freebie program
to tout the virtues of its PC performance optimization package, it
only succeeded in making me extremely skeptical of "
For some better choices see:
http://www.openaccess.co.za/BlackAndWhiteInc/Defrag.htm
Good links; I like this one's coverage of MFT issues...
http://donnedwards.openaccess.co.za/2007/06/great-defrag-shootout-xv-paragon-total.html
There are some dubious claims made in some of this writing:
1) That fragmentation is the #1 reason for PC slowdown
Unless you are waiting for the network, Internet or peripherals, when
you wait for the PC you are nearly always waiting for the hard drive.
But there are three different reasons why the HD is in use:
- your working set is too large for RAM, so paging to disk
- you have "underfootware" hogging RAM, accessing disk etc.
- you really are writing to or reading from disk
So before you get to making disk use less "expensive" - of which
defragging is merely a part - you'd want to reduce disk traffic by
killing off unwanted "underfootware" (especially underfootware that
hammers the disk, like indexing or System Restore) and by adding RAM.
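Before tuning the disk itself, it helps to see who is generating the
traffic. A minimal sketch, assuming the third-party psutil library as
one convenient way to read per-process I/O counters (the figures are
cumulative since boot, not a live rate):

    import psutil

    rows = []
    for proc in psutil.process_iter(["name"]):
        try:
            io = proc.io_counters()  # cumulative bytes read and written
        except (psutil.AccessDenied, psutil.NoSuchProcess, AttributeError):
            continue
        rows.append((io.read_bytes + io.write_bytes, proc.info["name"]))

    # The top ten disk users are good "underfootware" suspects
    for total, name in sorted(rows, reverse=True)[:10]:
        print(f"{total / 2**20:10.1f} MiB  {name}")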
2) Implied: that transfer speed is more important than seek time
There are three aspects to HD speed at the raw platters level:
- head travel from one track to another
- latency on the right track, waiting for the right sector(s)
- rate of data to or from disk
The middle item can be discounted in various ways, such as having the
HD read everything from a track (or cylinder) into the HD's own RAM in
whatever order is fastest, then sending the requested sectors over the
interface to the PC. Buffering in HD's own RAM also unlinks the raw
disk transfer speed from that of the interface, which may vary in
speed (e.g. native S-ATA vs. an external housing's USB) and which
imposes its own latencies if there are multiple devices on the bus, etc.
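To put a number on that middle item, here is some back-of-envelope
arithmetic; the spin speed is an assumed, typical figure:

    # Rough rotational-latency arithmetic; 7200 RPM is an assumed figure
    rpm = 7200
    ms_per_rev = 60_000 / rpm        # one full revolution: ~8.33 ms
    avg_wait_ms = ms_per_rev / 2     # on average, half a turn: ~4.17 ms
    print(f"revolution {ms_per_rev:.2f} ms, average wait {avg_wait_ms:.2f} ms")
    # Reading the whole track into the HD's cache within one revolution
    # means later requests for that track's sectors skip this wait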
So that leaves moving the heads, typically reported as seek time, and
the raw data rate. Which is more important?
I'd pick moving the heads, for two reasons (see the arithmetic below):
- it takes a long time, in inside-the-PC terms
- no data can flow to/from disk while heads are moving
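Some illustrative arithmetic, with all figures assumed (typical for a
7200 RPM desktop drive) rather than taken from any datasheet:

    # Assumed figures; adjust for your own drive
    seek_ms = 9.0          # average seek
    latency_ms = 4.17      # average rotational latency at 7200 RPM
    transfer_mb_s = 60.0   # sustained media rate
    read_kb = 4.0          # one small random read

    transfer_ms = read_kb / 1024 / transfer_mb_s * 1000  # ~0.07 ms
    total_ms = seek_ms + latency_ms + transfer_ms
    print(f"transfer is {transfer_ms:.3f} ms of {total_ms:.2f} ms total")
    # The mechanical wait (seek plus latency) is over 99% of the cost
    # of a small random read; doubling the transfer rate barely registers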
The article speaks of locating data on tracks that are "faster" in
terms of the number of sectors per track (thus a higher raw
off-the-disk rate at the HD's fixed spin speed), but this will be
counter-productive if it increases the head travel required.
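Again with assumed zone rates, the trade-off looks like this:

    # Assumed inner vs. outer zone rates; real drives vary
    inner_mb_s, outer_mb_s = 40.0, 60.0
    read_kb = 4.0
    saved_ms = read_kb / 1024 * 1000 * (1 / inner_mb_s - 1 / outer_mb_s)
    print(f"saved per 4 KB read: {saved_ms:.4f} ms")  # ~0.03 ms
    # One extra 9 ms seek to reach the "fast" zone costs as much as the
    # transfer saving on a few hundred such reads
    print(f"reads needed to repay one extra seek: {9.0 / saved_ms:.0f}")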
The most effective ways to make disk use "cheaper" are:
- pick a fast and large HD, with high capacity per cylinder
- use a fast interface in a mode that minimizes CPU use
- favor HDs with sequencing intelligence (e.g. native command queuing)?
- concentrate most-used material in a small number of cylinders
By "cylinder", I mean all tracks on all disk platter surfaces that can
be read without moving the heads to other tracks.
The last item is what defragging attempts to address (there may also
be caching and HD look-ahead efficiencies if data is in a contiguous
run of sectors too). But the problem is that the files most often
accessed may be both the oldest ones, stored at the "front" of the
volume, and the newest ones, stored at the far "edge" of the file mass.
Defragging can help squeeze out all the free space to the end of the
volume, so that these two ends of the file mass are closer together -
but they will still span the file mass, especially if the "edge" of
this mass is far from needed file system structural items.
This last issue applies both to FATxx's "FAT at the front" strategy
and to post-FATxx strategies that locate this material in the center
of the volume, as NTFS may do.
A more effective way of reducing head travel is to split the physical
HD into partitions and volumes, such that most activity is
concentrated in a small volume. Then, no matter how fragged things
get, head travel will never be worse than the small number of
cylinders that the volume occupies.
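A rough sketch of that bound, using a common approximation (seek time
grows about as the square root of cylinder distance; the constants are
illustrative, not from any datasheet):

    import math

    def seek_ms(distance_cyls, settle_ms=1.0, k=0.04):
        # Very rough model: fixed settle time plus sqrt-of-distance travel
        return settle_ms + k * math.sqrt(distance_cyls)

    total_cyls = 60_000  # whole drive (assumed figure)
    small_vol = 3_000    # a 5% slice holding the most-used files
    print(f"worst case across whole drive : {seek_ms(total_cyls):.1f} ms")
    print(f"worst case within small volume: {seek_ms(small_vol):.1f} ms")
    # However fragmented the small volume gets, the heads never travel
    # farther than its few thousand cylinders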
-------------------- ----- ---- --- -- - - - -
"If I'd known it was harmless, I'd have
killed it myself" (PKD)