DevilsPGD
In message <[email protected]>, Juarez wrote:
> Why would I bother going through all that hassle when Linux doesn't
> need to be defragged?
NTFS doesn't need to be defragmented either.
There isn't a filesystem on the planet that can avoid fragmentation at
all times without a performance hit.
There are three possibilities:
1) You can refuse to write fragments, meaning that you'll need to move
files around on the fly, moving the performance hit to writes instead of
reads. This works great for occasionally written, but often read
filesystems, especially where you have a large portion of the volume
available as free space.
2) Defragment occasionally, either as a scheduled task or in the
background constantly.
3) Accept fragmentation. This is often the best route to go if you have
a multi-disk array with individual drives supporting NCQ, are using
copy-on-write, or other techniques which make fragmentation harder to
avoid.
There is absolutely no filesystem that can overcome the physical reality
that moving the head takes time, and no allocation policy can avoid
fragmentation in every workload. Consider this cluster allocation, where
each letter is a cluster belonging to the named file and each dot is a
free cluster:
AAA.BBB.CCC.DD..EE....F
What happens when you want to write a file that will take three blocks?
Where do you put it?
Most modern filesystems, NTFS included, will figure out that a file
needing only one block should go into one of the single-block gaps
between A, B, C, and D rather than breaking up a larger free run. (DOS
under FAT12/FAT16 wouldn't take advantage of this, nor did it have the
memory available to keep track of small and large gaps.)
If you're writing a file that needs five blocks, where do you put it?
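To make the puzzle concrete, here's a minimal sketch of a best-fit
allocator over that exact cluster map. This is my own toy model, not
NTFS's actual allocation code; the policy shown (pick the smallest free
run that fits) is just one reasonable answer to the question above.

```python
# Toy cluster map: letters are clusters owned by files A..F, '.' is free.
# This models the allocation question, not any real filesystem's internals.

def free_runs(disk):
    """Yield (start, length) for every run of free ('.') clusters."""
    start = None
    for i, c in enumerate(disk):
        if c == '.' and start is None:
            start = i
        elif c != '.' and start is not None:
            yield (start, i - start)
            start = None
    if start is not None:
        yield (start, len(disk) - start)

def best_fit(disk, size):
    """Return the start of the smallest free run >= size, or None if the
    file cannot be stored contiguously and must fragment."""
    candidates = [(length, start)
                  for start, length in free_runs(disk) if length >= size]
    if not candidates:
        return None
    _, start = min(candidates)
    return start

disk = list("AAA.BBB.CCC.DD..EE....F")
# One block: goes in the first single-cluster gap, no large run broken up.
print(best_fit(disk, 1))  # -> 3
# Three blocks: only the four-cluster run (offset 18) is big enough.
print(best_fit(disk, 3))  # -> 18
# Five blocks: no free run fits -- the file must fragment.
print(best_fit(disk, 5))  # -> None
```

Note that the five-block case has no good answer at all: whatever policy
you pick, the write either fragments now or forces data to be moved
first, which is exactly the trade-off in the three options above.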
Now, all that being said, there is one other consideration -- there is
more to a modern defragmentation tool than just defragmentation. There
is a difference between defragmentation and optimization.
An entirely defragmented filesystem may not perform as well as one with
some fragmentation but a better-optimized layout. Optimization cannot be
done on the fly as easily as writing without fragmentation, although
it's certainly not impossible.