Greg said:
If a file is
in 1000 logical fragments, then 1000 logical requests have to be made to the
hard drive controller.
I admit I don't know how NT and its successors work inside, but I am certain
that DOS did I/O sector-at-a-time. Later versions had a queue of pending
write operations, but nothing coalesced the pending operations.
Conversely, if a file is logically contiguous, only 1 logical request has
to be made to the hard drive controller. The performance improvement comes
from making 1 request instead of 1000.
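To make that concrete, here is a minimal C sketch of the difference (the
extent list is a hypothetical stand-in for whatever the filesystem reports
per fragment): the fragmented case issues one pread() per fragment, the
contiguous case issues exactly one.

    #include <sys/types.h>
    #include <unistd.h>

    struct extent { off_t disk_off; size_t len; };  /* one fragment */

    /* Fragmented file: n fragments -> n separate requests go down. */
    ssize_t read_fragmented(int fd, const struct extent *ext, int n,
                            char *buf)
    {
        ssize_t total = 0;
        for (int i = 0; i < n; i++) {
            ssize_t got = pread(fd, buf + total, ext[i].len,
                                ext[i].disk_off);
            if (got < 0)
                return -1;
            total += got;
        }
        return total;
    }

    /* Contiguous file: the same data moves in a single request. */
    ssize_t read_contiguous(int fd, off_t off, size_t len, char *buf)
    {
        return pread(fd, buf, len, off);
    }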
The cost of initiating a transfer is very low - insignificant. The cost
of actually executing a transfer is high because you need to wait for
the arm to seek to the right cylinder and for the disk to turn to
the right place. Of course the overhead due to physical movement is
non-existent on a solid-state disk. This is what prompted my original post.
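As a rough worked example (the figures are assumptions typical of a
commodity drive, not measurements): at about 8 ms average seek plus 4 ms
average rotational latency, 1000 fragments cost on the order of
1000 x 12 ms = 12 seconds in positioning alone, while a single contiguous
request pays that 12 ms only once.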
If you are correct, current disk controllers must be able to start a
transfer, wait until it is almost ready to begin, look at the disk queue
to see if more contiguous sectors have been added, and extend the number
of sectors in the transfer if possible. Somehow I don't think they are
that clever, but I could easily be wrong. But that is what it would
take for defragmenting to have an effect on efficiency.
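For what it's worth, the coalescing you describe does exist, but in
software rather than in the controller: the OS block layer keeps the
pending queue and tries to merge a new request into an adjacent queued
one before anything is issued (Linux calls these front and back merges).
A simplified sketch of the idea, not any particular driver's code:

    #include <stdbool.h>
    #include <stdint.h>

    struct request {
        uint64_t sector;  /* first sector of the transfer */
        uint32_t count;   /* number of sectors */
    };

    /* Try to fold an incoming request into one already queued.
       Returns true if merged, i.e. no extra request reaches the disk. */
    bool try_merge(struct request *queued, const struct request *in)
    {
        if (in->sector == queued->sector + queued->count) {
            queued->count += in->count;        /* back merge */
            return true;
        }
        if (in->sector + in->count == queued->sector) {
            queued->sector = in->sector;       /* front merge */
            queued->count += in->count;
            return true;
        }
        return false;  /* not adjacent; must queue separately */
    }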
If you could post a URL for how disk controllers work these days, I sure
would like to see it. I have some ideas for performance improvements
that are guaranteed to work, but I need to understand all the low-level
stuff as it exists today. Thanks.