Vista defrag, why so slow?

DevilsPGD

In message <[email protected]>, Juarez wrote:
Why would I bother going through all that hassle when Linux doesn't need
to be defragged?

NTFS doesn't need to be defragmented either.

There isn't a filesystem on the planet that can avoid fragmentation at
all times without a performance hit.

There are three possibilities:

1) You can refuse to write fragments, meaning that you'll need to move
files around on the fly, shifting the performance hit from reads to
writes. This works well for filesystems that are written occasionally
but read often, especially where a large portion of the volume is
available as free space (see the sketch after this list).

2) Defragment occasionally, either as a scheduled task or in the
background constantly.

3) Accept fragmentation. This is often the best route if you have a
multi-disk array with individual drives supporting NCQ, or are using
copy-on-write or other techniques that make fragmentation harder to
avoid.
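
To make option 1 concrete, here's a minimal sketch in Python (my own
toy cluster map and function names, not any real filesystem's
allocator): a write that won't fit contiguously triggers an on-the-fly
compaction, so the cost lands on the write path rather than on later
reads.

# Toy model: the disk is a list of cluster owners; '.' means free.
# Strategy 1 never splits a file: if no contiguous free run is big
# enough, it compacts (slides files left) before writing, so the
# cost lands on writes instead of reads.

def free_runs(disk):
    """Yield (start, length) for each contiguous run of free clusters."""
    start = None
    for i, c in enumerate(disk + ['#']):   # sentinel closes a trailing run
        if c == '.' and start is None:
            start = i
        elif c != '.' and start is not None:
            yield start, i - start
            start = None

def write_contiguous(disk, name, size):
    """Write `size` clusters of `name` without fragmenting, compacting if needed."""
    if size > disk.count('.'):
        raise ValueError("not enough free space")
    for start, length in free_runs(disk):
        if length >= size:
            disk[start:start + size] = [name] * size
            return
    # No single run is big enough: compact (the expensive part), then retry.
    in_use = [c for c in disk if c != '.']
    disk[:] = in_use + ['.'] * (len(disk) - len(in_use))
    write_contiguous(disk, name, size)

disk = list("AAA.BBB.CCC.DD..EE....F")
write_contiguous(disk, 'G', 5)             # no 5-cluster run: forces compaction
print(''.join(disk))                       # -> AAABBBCCCDDEEFGGGGG....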

There is absolutely no filesystem that can overcome the physical reality
that moving the head takes time. There is also no possible way to
guarantee that a contiguous run of free space will exist whenever you
need one.
Consider this as a cluster allocation:

AAA.BBB.CCC.DD..EE....F

What happens when you want to write a file that will take three blocks?
Where do you put it?

Most modern filesystems, NTFS included, can figure out that a file
needing only one block should go into one of the gaps between A, B, C,
or D rather than breaking up a larger free run. (DOS under FAT12/FAT16
wouldn't take advantage of this, nor did it have the memory available
to keep track of small and large gaps.)

If you're writing a file that needs five blocks, where do you put it?
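
Here's a rough Python sketch of that placement logic (a toy best-fit
allocator of my own; NTFS's real allocation strategy is considerably
more involved): the one-block file slots into a one-block gap, while
the five-block file has no contiguous home and must be split.

# Toy best-fit allocator over the cluster map above ('.' = free).
# A 1-cluster file goes into the smallest gap that fits instead of
# carving up the big free run. A 5-cluster file has no contiguous
# home (the largest run is 4), so it must be split: the fragmentation
# is forced by the free-space layout, not by a lazy filesystem.

def free_runs(disk):
    """Yield (start, length) for each contiguous run of free clusters."""
    start = None
    for i, c in enumerate(disk + ['#']):   # sentinel closes a trailing run
        if c == '.' and start is None:
            start = i
        elif c != '.' and start is not None:
            yield start, i - start
            start = None

def best_fit_write(disk, name, size):
    """Place a file in the smallest adequate run; if none fits, split it
    across runs largest-first. Returns the number of fragments written."""
    fits = sorted((length, start) for start, length in free_runs(disk)
                  if length >= size)
    if fits:
        length, start = fits[0]
        disk[start:start + size] = [name] * size
        return 1
    fragments = 0
    for length, start in sorted(((l, s) for s, l in free_runs(disk)),
                                reverse=True):
        take = min(length, size)
        disk[start:start + take] = [name] * take
        size -= take
        fragments += 1
        if size == 0:
            return fragments
    raise ValueError("not enough free space")

layout = "AAA.BBB.CCC.DD..EE....F"
print(best_fit_write(list(layout), 'X', 1))   # -> 1  (fills a 1-cluster gap)
print(best_fit_write(list(layout), 'Y', 5))   # -> 2  (4-cluster run + 1 more)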

Now, all that being said, there is one other consideration -- there is
more to a modern defragmentation tool than just defragmentation. There
is a difference between defragmentation and optimization.

An entirely defragmented filesystem may not perform as well as one with
some fragmentation that is better optimized. Optimization cannot be done
on the fly as easily as writing without fragmentation, although it's
certainly not impossible.
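
As a toy illustration (Python again; the layouts, workload, and seek
model are mine, purely to show the distinction): a layout with zero
fragmentation but hot files at opposite ends of the disk can lose to a
layout that leaves a cold file split in two while packing the hot
files together.

# Toy seek model: cost = total head travel, in clusters, while reading
# files in order. "defragged" has zero fragmentation but puts the two
# hot files at opposite ends; "optimized" leaves the cold file C split
# in two yet packs the hot files side by side.

def seek_cost(disk, workload):
    pos = cost = 0
    for name in workload:
        for cluster in (i for i, c in enumerate(disk) if c == name):
            cost += abs(cluster - pos)
            pos = cluster
    return cost

defragged = list("111CCCCCCCCCCCCCCCCCC222..")   # everything contiguous
optimized = list("111222CCCCCCCCC..CCCCCCCCC")   # C fragmented, hot files adjacent

workload = ["1", "2"] * 50 + ["C"]               # hot files dominate; C read once
print(seek_cost(defragged, workload))            # large: head swings end to end
print(seek_cost(optimized, workload))            # far smaller, despite the split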
 
dennis@home

ray said:
Easy enough to defrag an ext3 file system with the ext2 defragger which
someone wrote several years ago - simply not beneficial to do so. Not
worth the effort, as Linux file systems do not slow down as a result of
fragmentation - due to the design.

If they are on hard disks, they slow down when files are fragmented;
there isn't much you can do about it other than rearrange the files.
 
dennis@home

ray said:
Sure, from time to time I've done that - probably a half dozen times -
it never made any noticeable difference on a Linux machine.

It doesn't make much difference on Windows either, but it does make a
difference on both.
 
Charles W Davis

I have set the system to defrag automatically. I couldn't care less how
long it takes. It runs in the background.
 
JethroUK

DevilsPGD said:
In message <[email protected]>, Juarez wrote:


Consider this as a cluster allocation:

AAA.BBB.CCC.DD..EE....F

What happens when you want to write a file that will take three blocks?
Where do you put it?

Most modern filesystems, NTFS included, can figure out

I'll need to stop you right there - NTFS is probably the only filing
system that does not figure anything out. I used to have an 80 GB NTFS
drive that was never more than half full; by your logic it should never
have fragmented a file and therefore never needed defragmenting, yet it
was always fragmented. I put this down to the fact that NTFS didn't
care whether a file was fragmented or not.

that a file needing only one block should go into one of the gaps
between A, B, C, or D rather than breaking up a larger free run.
(DOS under FAT12/FAT16 wouldn't take advantage of this, nor did it
have the memory available to keep track of small and large gaps.)

Totally contrary - Win98 did not fragment files as long as there was a
big enough contiguous space on the drive.
 
DevilsPGD

In message <[email protected]>, "JethroUK" wrote:
I'll need to stop you right there - NTFS is probably the only filing
system that does not figure anything out. I used to have an 80 GB NTFS
drive that was never more than half full; by your logic it should never
have fragmented a file and therefore never needed defragmenting, yet it
was always fragmented. I put this down to the fact that NTFS didn't
care whether a file was fragmented or not.

Certain methods of creating files will also cause fragmentation. One
fairly easy one to understand is when a file is created without any
specific size and is constantly appended to.

For example, create four 1-cluster files and the drive will look like this:
ABCD................................

Now append one cluster's worth of data to each file, and you'll end up
with this:

ABCDABCD............................

No filesystem in the world can avoid that unless it reallocates/defrags
on the fly.

This is a fairly common scenario when writing multiple logfiles
simultaneously: since you don't know the maximum size, you cannot
pre-allocate the space.
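
A quick Python sketch of that effect (a toy next-free allocator of my
own, nothing more): appending to four files in round-robin interleaves
their extents exactly as shown above.

# Toy next-free allocator: create four 1-cluster logfiles, then keep
# appending to each in turn. Every append grabs the next free cluster,
# so the files' extents interleave -- the ABCDABCD... pattern above,
# with no "mistake" on the filesystem's part.

def append_cluster(disk, name):
    disk[disk.index('.')] = name      # next-free allocation

disk = list('.' * 16)
for name in "ABCD":                   # create the four files
    append_cluster(disk, name)
for _ in range(2):                    # two rounds of appends, round-robin
    for name in "ABCD":
        append_cluster(disk, name)
print(''.join(disk))                  # -> ABCDABCDABCD....
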
Totally contrary - Win98 did not fragment files as long as there was a
big enough contiguous space on the drive.

I didn't mention Win98 at all there, now did I?

I said, and I quote "DOS" and "under FAT12/FAT16" -- Note the distinct
lack of "Windows" in that quote?
 
