Yes! My fast fingers got the better of me. I didn't even notice that I
did a 'kb' instead of just a 'b'. That would explain the confusion.
However, I did want to address something else you brought up.
Hang on - you don't "lose" any space due to file fragmentation! A contiguous
file consumes exactly the same amount of disk space as the same file
split into a dozen fragments.
That's both correct and incorrect, depending on how you look at it.
When we think about a file, what we are really focused on is the file's
$DATA:"" stream. That's what we see. That's what we edit. And a file
that is listed as 100 KB has a $DATA:"" stream of exactly the same size
whether it is stored contiguously or in pieces. And that's where you have it right.
But a file is more than its $DATA attribute. A file also includes all the
structures used to define it. So as fragmentation grows, the overhead of
storing the $DATA attribute grows with it. A truly contiguous file needs
only a single mapping pair (a virtual cluster number and a length) to record
where the data lives. As fragmentation increases, we have to add more
mapping pairs to the file to track the location of every piece of it.
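To make that concrete, here is a minimal Python sketch - not NTFS's real on-disk
encoding, just the idea. One mapping pair describes one contiguous run of clusters,
so a contiguous file needs a single pair while the same data split into ten fragments
needs ten, even though the amount of data is identical. The cluster numbers and the
per-pair byte cost are made-up illustration values.

    BYTES_PER_PAIR = 8  # assumed average cost of one mapping pair, illustration only

    def describe(runs):
        """runs: a list of (starting_cluster, length_in_clusters) fragments."""
        clusters = sum(length for _, length in runs)
        pairs = len(runs)                  # one mapping pair per fragment
        overhead = pairs * BYTES_PER_PAIR  # rough metadata cost of describing them
        return clusters, pairs, overhead

    # The same 100 clusters of data, stored two different ways.
    contiguous = [(1000, 100)]                               # a single run
    fragmented = [(1000 + i * 500, 10) for i in range(10)]   # ten scattered runs

    for name, runs in (("contiguous", contiguous), ("fragmented", fragmented)):
        clusters, pairs, overhead = describe(runs)
        print(f"{name}: {clusters} clusters of data, {pairs} mapping pair(s), "
              f"~{overhead} bytes spent just describing where they are")

The data stream is 100 clusters either way; only the bookkeeping needed to find it grows.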
See my blog entries to find out just how complex that can get as we add
additional structures to track all the parts of the file.
Also keep in mind that when the shell displays file information, it is
really an estimate based on the $DATA attributes and doesn't take into
account any of the storage overhead.
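As a quick illustration of that last point (a sketch, not how the shell itself works):
a plain size query such as os.stat reports the logical length of the data stream, much
like the Size column you see in the shell, so it reads the same however the file is laid
out on disk and says nothing about the metadata overhead.

    import os
    import tempfile

    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"x" * 100 * 1024)   # a 100 KB data stream
        path = f.name

    print(os.stat(path).st_size)     # 102400 bytes, regardless of fragmentation
    os.unlink(path)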
So yes, you do lose space due to fragmentation. And the worse it is, the
more you lose.
Robert Mitchell
Microsoft PSS