Defrag - best way?


louise

For several years I've been using Diskeeper Prof to defrag
my Win XP Pro hard drive.

A month or so ago, I used the defrag option to defrag at
bootup and, for some unexplained reason, it went berserk and
changed my Windows theme from Classic to XP, along with
realigning the icons on my desktop, etc. Since then,
booting up the computer has been slow and sometimes it won't
work.

When I tried to defrag again, it said the drive was set to
run chkdsk first, but chkdsk wouldn't run no matter how many
times I booted, even in Safe Mode - things were pretty messy.

I uninstalled Diskeeper completely, and finally called
Microsoft to find the registry entry and change it so that
the computer would not think it was supposed to run chkdsk
at every bootup, even though it wouldn't do it :).

Eventually this was solved and I was able to run chkdsk from
the Windows CD. I can use the built-in Windows defragger, but
it takes a long time - otherwise everything seems OK.

I could just continue to use MS to defrag or I could
reinstall Diskeeper and try again.

Is there any advantage to defragging with Diskeeper rather
than MS Win XP Pro?

TIA

Louise
 

Timothy Daniels

louise said:
I could just continue to use MS to defrag or I could
reinstall Diskeeper and try again.

Is there any advantage to defragging with Diskeeper
rather than MS Win XP Pro?


One of the computer magazines did an evaluation
of the various defraggers on the market about 2 years
ago, and their conclusion regarding WinXP was that
defragging wasn't needed, and that if you did have to
defrag, the WinXP built-in defragger was good enough.
It seems that WinXP does its own defragging in the
background, and the observed performance didn't improve
with defragging, regardless of whose defragger was used.
Of course, seeing the fragmented files in the defragger's
GUI drives people nuts (myself included), so I defrag
about once a month, but I've never seen a performance
increase as a result of it. OTOH, WinXP's defragger has
never trashed my OS. If I had Diskeeper, I'd uninstall
it just for the freed-up space.

*TimDaniels*
 

Rod Speed

The best way is not to bother. There aren't many
situations where it's worth the trouble anymore.

louise said:
For several years I've been using Diskeeper Prof to defrag my Win XP Pro
hard drive.
A month or so ago, I used the defrag option to defrag at bootup and, for
some unexplained reason, it went berserk and changed my Windows theme
from Classic to XP, along with realigning the icons on my desktop, etc.
Since then, booting up the computer has been slow and sometimes it won't
work.
An effort to defrag said the drive was set to chkdsk first,
but it wouldn't run chkdsk no matter how many times I
booted, even in safe mode - things were pretty messy.
I uninstalled Diskeeper completely, and finally called
Microsoft to find the registry entry and change it so that
the computer would not think it was supposed to run chkdsk at every
bootup, even though it wouldn't do it :).
Eventually this was solved and I was able to run chkdsk from
the Windows CD. I can use MS to defrag but it takes a long
time - otherwise it seems ok.
I could just continue to use MS to defrag or I could reinstall Diskeeper
and try again.
Is there any advantage to defragging with Diskeeper rather than MS Win XP
Pro?

There isn't usually any point in defragging with anything anymore.
 

Arno Wagner

Previously louise said:
For several years I've been using Diskeeper Prof to defrag
my Win XP Pro hard drive.
A month or so ago, I used the defrag option to defrag at
bootup and, for some unexplained reason, it went berserk and
changed my Windows theme from Classic to XP, along with
realigning the icons on my desktop, etc. Since then,
booting up the computer has been slow and sometimes it won't
work.
An effort to defrag said the drive was set to chkdsk first,
but it wouldn't run chkdsk no matter how many times I
booted, even in safe mode - things were pretty messy.
I uninstalled Diskeeper completely, and finally called
Microsoft to find the registry entry and change it so that
the computer would not think it was supposed to run chkdsk
at every bootup, even though it wouldn't do it :).
Eventually this was solved and I was able to run chkdsk from
the Windows CD. I can use MS to defrag but it takes a long
time - otherwise it seems ok.
I could just continue to use MS to defrag or I could
reinstall Diskeeper and try again.
Is there any advantage to defragging with Diskeeper rather
than MS Win XP Pro?

With a modern FS, defragging should be mostly unnecessary.
If you have old FAT32 partitions (I do, for Linux compatibility),
you may want to defrag them occasionally, but even there the
OSes do better today than they used to. At least, that is
my subjective impression.

Arno
 

Jon Forrest

Here's a paper I wrote about this recently.

---
Why PC Disk Fragmentation Doesn't Matter (much)
Jon Forrest ([email protected])

[The following is a hypothesis. I don't have
any real data to back it up. I'd like to know
if I'm overlooking any technical details.]

Disk fragmentation can mean several things.
On one hand, it can mean that the disk blocks
that a file occupies aren't right next to each
other; the more pieces that make up a file, the
more fragmented the file is. On the other, it can
mean that the unused blocks on a disk aren't all
right next to each other. Win9x, Windows 2000, and
Windows XP come with defragmentation programs. Such
programs are also available for other Microsoft and
non-Microsoft operating systems from commercial vendors.

The question of whether a fragmented disk really
results in anything bad has always been a topic
of heated discussion. On one side of the issue
the vendors of disk defragmentation programs can
always be found. The other side is usually occupied
by sceptical system managers, such as yours truly.

For example, the following claim is made by the
vendor of one commercial product:

"Disk fragmentation can cripple performance even worse
than running with insufficient memory. Eliminate it
and you've eliminated the primary performance bottleneck
plaguing even the best-equipped systems." But can it, and
does it? The user's guide for this product spends some 60 pages
describing how to run the product but never justifies this
claim.

I'm not saying that fragmentation is good. That's one reason
why you can't buy a product whose purpose is to fragment a disk.
But it's hard to imagine how fragmentation can cause any noticeable
performance problems. Here's why:

1) The greatest benefit from having a contiguous file would
be when the whole file is read (or written) in one I/O operation.
This would result in the minimal amount of disk arm movement,
which is the slowest part of a disk I/O operation. But this
isn't the way most I/Os are. Instead, most I/Os are fairly small.
Plus, and this is the kicker, on a modern multitasking operating
system, those small I/Os are coming from different processes
reading from different files. Assuming that the data to be read
isn't in a memory cache, this means that the disk arm is
going to be flying all over the place, trying to satisfy all
the seek operations being issued by the operating system.
Sure, the operating system, and maybe even the disk controller,
might be trying to re-order I/Os, but there's only so much of
this that can be done. A contiguous file doesn't really help
much because there's a very good chance that the disk arm is
going to have to move elsewhere on the disk between reads of
the pieces of a file.
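
To put rough numbers on point 1 (purely illustrative figures for seek
time, rotational latency, and transfer rate - these are assumptions, not
measurements of any particular drive), here is a small back-of-envelope
sketch in Python:

# Back-of-envelope estimate of what fragmentation costs on one large read.
# All figures below are assumed, typical-looking desktop-drive values.
SEEK_MS = 9.0          # average seek time per fragment, in milliseconds
ROTATE_MS = 4.2        # average rotational latency (half a 7200 rpm turn)
TRANSFER_MB_S = 50.0   # sustained sequential transfer rate, MB/s

def read_time_ms(file_mb, fragments):
    """Rough time to read a file that is split into `fragments` pieces."""
    transfer = file_mb / TRANSFER_MB_S * 1000.0       # pure transfer time
    positioning = fragments * (SEEK_MS + ROTATE_MS)   # one seek per piece
    return transfer + positioning

for frags in (1, 10, 100):
    print(f"10 MB file in {frags:3d} fragment(s): ~{read_time_ms(10.0, frags):6.0f} ms")

# With these assumptions a contiguous 10 MB read takes ~213 ms, the same
# file in 10 pieces ~332 ms, and it only becomes painful at hundreds of
# pieces - and that ignores the seeks other processes force anyway.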

2) The metadata for managing a filesystem is probably
cached in RAM. This means when a file is created, or
extended, the necessary metadata updates are done at memory
speed, not at disk speed. So, the overhead of allocating
multiple pieces for a new file is probably in the noise.
Of course, the in-memory metadata eventually has to be flushed
to disk but this is usually done after the original I/O completes,
so there won't be any visible slowdown in the program that issued
the I/O.
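
As a small illustration of that write-behind behaviour (a sketch only;
the file names and sizes here are arbitrary), compare a plain buffered
write with one that is forced out to disk with fsync:

# A buffered write returns at memory speed; os.fsync() makes the program
# wait until the data (and its metadata) has actually reached the disk,
# which is when the mechanical cost gets paid. Names/sizes are arbitrary.
import os
import time

PAYLOAD = b"x" * (4 * 1024 * 1024)   # 4 MB of junk data

def timed_write_ms(path, force_to_disk):
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(PAYLOAD)
        if force_to_disk:
            f.flush()
            os.fsync(f.fileno())     # block until it really hits the platters
    return (time.perf_counter() - start) * 1000.0

print(f"buffered write : {timed_write_ms('lazy.bin', False):7.1f} ms")
print(f"write + fsync  : {timed_write_ms('synced.bin', True):7.1f} ms")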

3) Modern disks do all kinds of internal block remapping, so there's
no guarantee that what appears to be contiguous to the operating
system is really and truly contiguous on the disk. I have
no idea how often this occurs, or how bad the skew is
between "fake" blocks and "real" blocks. But it could happen.

So, go ahead and run your favorite disk defragmenter. I know I do.
Now that W2K and later have an official API for moving files in an atomic
operation, such programs probably can't cause any harm. But
don't be surprised if you don't see any noticeable performance
improvements.

The mystery that really puzzles and sometimes frightens me is
why an NTFS file system becomes fragmented so easily in the first
place. Let's say I'm installing Windows 2000 on a newly formatted
20GB disk. Let's say that the total amount of space used by the
new installation is 600MB. Why should I see any fragmented files,
other than registry files, after such an installation? I have no
idea. My thinking is that all files that aren't created and then
later extended should be able to be created contiguously to begin with.
----

Jon Forrest
(e-mail address removed)
Computer Resources Manager
Civil and Environmental Engineering Dept.
305 Davis Hall
Univ. of Calif., Berkeley
Berkeley, CA 94720-1710
510-642-0904
 

Rod Speed

Jon Forrest said:
Here's a paper I wrote about this recently.
[The following is a hypothesis. I don't have any real data to back this
up.

It isn't hard to get that real data; just do proper double-blind trials.
I'd like to know if I'm overlooking any technical details.]
Disk fragmentation can mean several things.
On one hand it can mean that the disk blocks
that a file occupies aren't right next to each
other. The more pieces that make up a file, the
more fragmented the file is. Or, it can mean
that the unused blocks on a disk aren't all right
next to each other. Win9X, Windows 2000, and Windows XP
come with defragmentation programs. Such programs
are also available for other Microsoft and non-Microsoft
operating systems from commercial vendors.
The question of whether a fragmented disk really
results in anything bad has always been a topic
of heated discussion. On one side of the issue
the vendors of disk defragmentation programs can
always be found. The other side is usually occupied
by sceptical system managers, such as yours truly.
For example, the following claim is made by the
vendor of one commercial product:
"Disk fragmentation can cripple performance even worse
than running with insufficient memory. Eliminate it
and you've eliminated the primary performance bottleneck
plaguing even the best-equipped systems." But can it, and
does it? The user's guide for this product spends some 60 pages
describing how to run the product but never justifies this claim.
I'm not saying that fragmentation is good. That's one reason
why you can't buy a product whose purpose is to fragment a disk. But
it's hard to imagine how fragmentation can cause any noticeable
performance problems. Here's why:
1) The greatest benefit from having a contiguous file would
be when the whole file is read (or written) in one I/O operation.

And with many of the contiguous file uses, the speed of
movement through the file is irrelevant as long as it's fast
enough, most obviously when playing mp3 and video files.

Same with video capture: as long as the file is written
fast enough, it doesn't matter if it could be written faster.
And that's always the case now with digital video capture.

Most personal desktop systems don't do much
else in the way of contiguous file use now.
This would result in the minimal amount of disk arm movement,
which is the slowest part of a disk I/O operation. But this
isn't the way most I/Os are. Instead, most I/Os are fairly small.
Plus, and this is the kicker, on a modern multitasking operating
system, those small I/Os are coming from different processes
reading from different files. Assuming that the data to be read
isn't in a memory cache, this means that the disk arm is
going to be flying all over the place, trying to satisfy all
the seek operations being issued by the operating system.
Sure, the operating system, and maybe even the disk controller,
might be trying to re-order I/Os, but there's only so much of
this that can be done. A contiguous file doesn't really help
much because there's a very good chance that the disk arm is
going to have to move elsewhere on the disk between reads of
the pieces of a file.

And modern hard drives seek fast enough to handle that stuff basically.
2) The metadata for managing a filesystem is probably
cached in RAM. This means when a file is created, or
extended, the necessary metadata updates are done at memory
speed, not at disk speed. So, the overhead of allocating
multiple pieces for a new file is probably in the noise.
Of course, the in-memory metadata eventually has to be flushed
to disk but this is usually done after the original I/O completes,
so there won't be any visible slowdown in the program that issued
the I/O.

And most personal desktop systems spend the
vast bulk of their time with the HDD light off too.
3) Modern disks do all kinds of internal block remapping, so there's no
guarantee that what appears to be contiguous to the operating system is
really and truly contiguous on the disk. I have no idea how
often this occurs,

Not that often in practice, just with reallocated bad sectors,
and most modern drives have very few, if any, of those.

The original bad sectors seen at drive manufacture time
don't involve any head movement; they disappear in
the drive's internal mapping from LBA to CHS values.
or how bad the skew is between "fake" blocks and "real" blocks.
None.

But, it could happen.

Sure, but it's a much less important effect than multitasking
and very basic stuff like the internet temporary files/cache.
So, go ahead and run your favorite disk defragmenter. I know I do.

I haven't bothered in years now. I don't even bother with the
drive that's in the PVR, which does get significantly fragmented
because of the nature of how files get written on that drive.

Defragging it would make absolutely no difference
whatsoever to anything at all in practice.
Now that W2K and later have an official API for moving files in an atomic
operation, such programs probably can't cause any harm. But don't be
surprised if you don't see any noticeable performance improvements.

And those OSs do some file optimisation themselves anyway.
The mystery that really puzzles and sometimes frightens me is
why an NTFS file system becomes fragmented so easily in the first
place. Let's say I'm installing Windows 2000 on a newly formatted
20GB disk. Let's say that the total amount of space used by the
new installation is 600MB. Why should I see any fragmented files,
other than registry files, after such an installation? I have no idea.

Basically because a full install is a surprisingly
complex process with an OS that complex.
My thinking is that all files that aren't created and then later extended
should be able to be created contiguously to begin with.

Those OSs basically reoptimise the files once
the install has been used for a while instead.
 

Jon Forrest

Rod Speed wrote:

And modern hard drives seek fast enough to handle that stuff basically.

I don't agree. Any kind of mechanical motion (spinning, seeking)
is so much slower than CPU and memory speeds. The proof of this is
the speedups that a RAM drive can give you if you're I/O bound.
Of course, if you're not I/O bound then this is a moot point.
Not that often in practice, just with reallocated bad sectors
and most modern drives have very few if any of those.

How do you know this? I'm not saying you're wrong but I've
never seen any data describing this.
Sure, but it's a much less important effect than multitasking
and very basic stuff like the internet temporary files/cache.

Not according to some of the file system designers I've talked to.
They told me it was very frustrating to try to add code to the
operating system to try to minimize seeking when there's
code inside the disk drive trying to do the same thing.
Basically because a full install is a surprisingly
complex process with an OS that complex.

This statement doesn't explain anything. Although I can see why the free space might
be fragmented, since files are created and then deleted during an install,
I don't see why any files that aren't modified during an install should
be fragmented. (I suppose before shooting off my mouth I should do
a true analysis to see what files are fragmented to see if they're
written once or modified multiple times).

Jon
 

Rod Speed

Jon Forrest said:
Rod Speed wrote
I don't agree.

You're confusing two entirely separate issues here.
Any kind of mechanical motion (spinning, seeking)
is so much slower than CPU and memory speeds.

Yes, but we are discussing the difference between
a fragmented and a defragged drive here, not the
difference between a hard drive and RAM.

And no personal desktop system uses RAM
for large files used sequentially anyway.
The proof of this is the speedups that a RAM drive can give you if you're
I/O bound. Of course, if you're not I/O bound then this is a moot point.

All completely irrelevant to what was being discussed, whether
fragmentation of files is even noticeable with personal desktop systems.
How do you know this?

It's fundamental to how bad sectors are handled.

It's also the obvious way to implement the mapping between
LBA numbers and CHS values in the physical drive.
I'm not saying you're wrong but I've
never seen any data describing this.

You can measure it by using something like HDTach
on a new drive. You won't be able to see any dips in the
chart where the bad sectors mapped out at manufacturing time
are skipped on a long sequential read of the entire physical drive.
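
If you want to try that measurement without HDTach, a rough sketch along
these lines works (the device path is an assumption, the chunk size is
arbitrary, and on Windows it has to be run as administrator):

# Rough HDTach-style sweep: read the raw drive sequentially in big chunks
# and print each chunk's throughput. A remapped factory defect would have
# to show up as a dip; in practice the curve just steps down smoothly
# from the outer zones to the inner ones.
# Assumptions: administrator rights; the device path below is an example.
import time

DEVICE = r"\\.\PhysicalDrive0"      # raw physical drive (Windows syntax)
CHUNK = 8 * 1024 * 1024             # 8 MB per sample (sector-aligned)
SAMPLES = 200                       # only sweep the first ~1.6 GB here

with open(DEVICE, "rb", buffering=0) as disk:
    for i in range(SAMPLES):
        start = time.perf_counter()
        data = disk.read(CHUNK)
        if not data:
            break                   # reached the end of the drive
        elapsed = time.perf_counter() - start
        mb_s = (len(data) / (1024 * 1024)) / elapsed
        print(f"chunk {i:4d}: {mb_s:6.1f} MB/s")
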
Not according to some of the file system designers I've talked to.
They told me it was very frustrating to try to add code to the
operating system to try to minimize seeking when there's
code inside the disk drive trying to do the same thing.

Again, you are confusing two entirely separate issues.
Most bad sectors won't actually involve any seeking at all;
the bad sectors in a particular cylinder just make the drive
wait a little longer for them to pass under the heads.
This statement doesn't explain anything.

Yes it does.
Although I can see why the free space might be fragmented,

It doesn't in fact get fragmented much. Have a look
at a drive just after the install has been completed.
since files are created and then deleted during an install, I don't see
why any files that aren't modified during an install should be
fragmented.

Basically because files written to the drive are
normally written onto the fragmented free space
produced by the deletions you just referred to.
(I suppose before shooting off my mouth I should do
a true analysis to see what files are fragmented to see if they're
written once or modified multiple times).

Some, like the registry, obviously have to be modified multiple times.
 

Jon Forrest

Rod said:
You're confusing two entirely separate issues here.


Yes, but we are discussing the difference between
a fragmented and defragged drive here, not the
difference between a hard drive and ram.

Right, but the effect of a fragmented hard drive is
excessive seeking. Seeking slows down I/Os. Defragging
is an attempt to reduce seeking.
It's fundamental to how bad sectors are handled.

Please expand on this because I sure don't see it.
It's also the obvious way to implement the mapping between
LBA numbers and CHS values in the physical drive.

I don't think so because bad sectors become inaccessible
whether you're doing LBA or CHS accesses.
You can measure it by using something like HDTach
on a new drive. You won't be able to see any dips in the
chart where the bad sectors mapped out at manufacturing time are
skipped on a long sequential read of the entire physical drive.

I suspect, but can't prove, that with 8MB and 16MB of caching,
it's very difficult to see what's really going on at the
physical sector level, especially for a small number of bad
blocks. I bet that disk vendors don't ship drives with
more than a certain number of mapped bad blocks.
Again, you are confusing two entirely separate issues.
Most bad sectors won't actually involve any seeking at all;
the bad sectors in a particular cylinder just make the drive
wait a little longer for them to pass under the heads.

In this case, I wasn't talking about the effects of bad
blocks. I was talking about seek optimization.

Jon
 

Rod Speed

Jon Forrest said:
Rod Speed wrote
Right, but the effect of a fragmented hard drive is excessive seeking.

Nope. As you correctly pointed out, modern OSs do FAR
more seeking due to multitasking and other basic stuff like
caching the internet than they ever do due to fragmented files.

And modern hard drives seek so quickly that the occasional
extra seek due to fragments has no effect on actual work,
because modern desktop systems hardly ever do much
sequential reading or writing of large files except when
the speed is determined by other factors, like the speed
at which an mp3 or video file needs to be played.
Seeking slows down IOs.

No, it doesn't, when modern desktop systems hardly ever
do much sequential reading or writing of large files except
when the speed is determined by other factors, like the speed
at which an mp3 or video file needs to be played.

The files involved in most disk activity on
modern desktop systems are quite small, mostly
in the internet cache etc., and reasonably sized documents,
which aren't generally very fragmented at all. And even
when, say, a Word file is in two pieces instead of one, the
extra seek won't be visible at all in the time required to
load that document on a modern hard drive.
Defragging is an attempt to reduce seeking.

Duh. What is being discussed is whether it reduces the
seeking much. It doesn't, because the OS is doing a lot
more seeking due to other stuff like the internet cache
etc., and that stuff isn't time critical anyway. It's done
in the background and isn't even visible to the user.
Please expand on this because I sure don't see it.

Modern OSs issue read and write commands to hard drives in
terms of logical block numbers. The drive has to turn those into
cylinder/head/sector (CHS) numbers before it can actually do what it's
just been told to do. The conversion between a logical block
number and CHS can allow for bad blocks mathematically,
just by an algorithm that effectively says
'there are x sectors per track in tracks y to z, w tracks
per cylinder, etc.' plus a table of bad-block CHS values.
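
A toy version of that idea, purely to illustrate (the geometry and the
defect list below are made up, not taken from any real drive):

# Toy model of a drive's internal LBA -> CHS translation with a factory
# defect list. The geometry and bad-sector positions are invented; the
# point is only that skipping a mapped-out defect is arithmetic plus a
# table lookup - no extra head movement is involved.
SECTORS_PER_TRACK = 63
HEADS = 16

# Physical sector numbers marked bad at the factory, kept sorted.
FACTORY_DEFECTS = [1000, 1001, 250000]

def lba_to_chs(lba):
    """Map a logical block to (cylinder, head, sector), skipping defects."""
    physical = lba
    # Every defect at or below the current physical position pushes the
    # mapping one sector further along.
    for bad in FACTORY_DEFECTS:
        if bad <= physical:
            physical += 1
        else:
            break
    cylinder = physical // (HEADS * SECTORS_PER_TRACK)
    head = (physical // SECTORS_PER_TRACK) % HEADS
    sector = physical % SECTORS_PER_TRACK + 1   # CHS sectors start at 1
    return cylinder, head, sector

# Logical blocks on either side of the defects land a couple of sectors
# further along the same track - the bad sectors simply vanish from the
# logical view, with no seek involved.
for lba in (998, 999, 1000, 1001):
    print(lba, "->", lba_to_chs(lba))
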
I don't think so because bad sectors become inaccessible
whether you're doing LBA or CHS accesses.

'Fraid so. The conversion between a logical block number
and a CHS set is an entirely separate issue from what
physical sectors the OS level has access to.

Yes, bad sectors are mapped away in the
mathematical mapping between LBAs and CHS values.
I suspect, but can't prove, that with 8MB and 16MB of caching,
it's very difficult to see what's really going on at the physical sector
level, especially for a small number of bad blocks.

That caching is completely irrelevant when you read
the drive end to end using increasing LBA numbers.
I bet that disk vendors don't ship drives with more than a certain number
of mapped bad blocks.

Sure, but that's an entirely separate issue from
whether they involve extra head seeks. They don't.
In this case, I wasn't talking about the effects of bad
blocks. I was talking about seek optimization.

That was obvious. And you're still confusing entirely
separate issues, because there is no seek optimisation
at all with the bad sectors identified at manufacturing time.
 

Gerhard Fiedler

Rod said:
Nope, as you correctly pointed out, modern OSs do FAR
more seeking due to multitasking and other basic stuff like
caching the internet than they ever do due to fragmented files.

Could that be different for the case of application load times? There are
usually sequential reads of whole files involved. But I'm not sure whether
loading the files from disk is usually the determining factor.

If this is the case, this could make at least an occasional defrag
worthwhile -- to get the recently installed applications defragged.

Gerhard
 

Rod Speed

Gerhard Fiedler said:
Rod Speed wrote
Could that be different for the case of application load times?

Nope, XP does reorganise files after an install to minimise app load
times, and ensuring that those files are defragged is part of that.
There are usually sequential reads of whole files involved.

Sure, but they aren't necessarily that large with the modern approach
of DLLs, most of which are loaded at boot time for various reasons.
But I'm not sure whether loading the files
from disk is usually the determining factor.

Especially when XP doesn't necessarily unload the app when you
close it, so at most it only affects the first app start since a reboot.
If this is the case, this could make at least an occasional defrag
worthwhile -- to get the recently installed applications defragged.

Most people don't actually install apps very often, so the most that
might make sense is a defrag after an install. Though I bet you wouldn't
be able to pick it in a proper double-blind trial with modern hard drives.
 

Arno Wagner

Previously Gerhard Fiedler said:
Rod Speed wrote:
Could that be different for the case of application load times? There are
usually sequential reads of whole files involved. But I'm not sure whether
loading the files from disk is usually the determining factor.

Not really. Today many applications use a lot of files. Unless those
are also grouped together in the right order, you do not get anything
resembling linear reads.

Arno
 

Stevo

You all have created a maze of posts... so I decided to throw my two
cents in too. :D One of my home computers was working fine until I
ran a defrag with the XP defragger. The defrag got confused for some
unknown reason and moved a file that should not have been moved. The
file moved was a system file. My computer crashed, died, and never
came back (although I sent it to some company in California that got
all of my files back for me, for $1900). I talked to some guys at
Microsoft and they said to always back up before running any disk
utility.

Stevo
 
