Calculating NTFS overhead

Troels Ringsmose

Is there a way to calculate the overhead in NTFS? I understand that the
MFT zone gets 12.5% initially (cf. <[email protected]>), but
is there some way of calculating the entire usage?
 
Dan Lovinger [MSFT]

The MFT zone is not overhead. It's just the separation between the initial
MFT and where the filesystem starts allocating for files. It will be used
for file data if the disk fills. The reason the separation is there
initially is to allow the MFT to grow in a contiguous allocation
(unfragmented).
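
For a sense of scale, here is a quick sketch using the 12.5% figure from
the question above; the volume size is hypothetical, and the actual
reservation is tunable via the NtfsMftZoneReservation registry value:

    volume_bytes = 100 * 2**30        # hypothetical 100 GiB NTFS volume
    mft_zone = volume_bytes * 0.125   # default 12.5% initial MFT zone
    print(f"{mft_zone / 2**30:.1f} GiB held back for contiguous MFT growth")
    # prints: 12.5 GiB held back for contiguous MFT growth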
 
Troels Ringsmose

> See tip 697 in the 'Tips & Tricks' at http://www.jsiinc.com
> See tip 5351
> See tip 7891 for an example of scripting the extraction of the data
> you want.
> <SNIP>

Well, what I was really looking for was some kind of formula for
calculating the overhead. Extremely useful site, though; one for the
bookmarks.
 
Troels Ringsmose

> The MFT zone is not overhead. It's just the separation between the
> initial MFT and where the filesystem starts allocating for files. It
> will be used for file data if the disk fills. The reason the
> separation is there initially is to allow the MFT to grow in a
> contiguous allocation (unfragmented).

Thank you for correcting me; I hadn't thought of the difference. The reason
I called it overhead was my urge to calculate the entire non-free part of
a volume. Is there some kind of formula, or am I completely daft?
 
Al Dykes

> <SNIP>
>
> Well, what I was really looking for was some kind of formula for
> calculating the overhead. Extremely useful site, though; one for the
> bookmarks.


The 12% figure isn't really overhead (i.e., added to your data) because
files that are small enough to fit into an MFT record are stored there,
eliminating the waste of a full cluster and a seek to get from the
index to the first block of the cluster. For someone with
lots and lots of small files it might be a wash.
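
A small illustration of that trade-off in Python, assuming a 4 KB cluster
and a rough ~700-byte residency cutoff (the real cutoff varies with the
file's other attributes, as noted below):

    CLUSTER = 4096       # assumed cluster size (a common NTFS default)
    RESIDENT_MAX = 700   # rough cutoff for data fitting in the 1 KB MFT record

    def data_bytes_allocated(size):
        """Data-cluster bytes a file occupies, assuming small files stay
        resident in their MFT record and allocate no clusters at all."""
        if size <= RESIDENT_MAX:
            return 0                          # resident: stored in the MFT record
        return -(-size // CLUSTER) * CLUSTER  # else round up to whole clusters

    print(data_bytes_allocated(600))    # 0    -- lives in the MFT record
    print(data_bytes_allocated(2000))   # 4096 -- one full cluster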
 
Dan Lovinger [MSFT]

If you're just asking what the overhead is on a volume you have in your
hand:

VolumeSize - SumOverAllFiles(filesize rounded up to cluster size)
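
Here is a minimal sketch of that calculation, assuming a 4 KB cluster size
and measuring against the volume's used space so that free clusters don't
count as overhead; everything the directory walk can't see (the MFT itself,
directory indexes, the USN journal, alternate streams) lands in the
difference:

    import os
    import shutil

    def ntfs_overhead(root, cluster_size=4096):
        """Used space minus the cluster-rounded sum of all visible file
        sizes. cluster_size is an assumption; the real value is reported
        by `fsutil fsinfo ntfsinfo <drive>`."""
        _total, used, _free = shutil.disk_usage(root)
        rounded = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    size = os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    continue  # locked or vanished file: skip it
                # round the logical size up to a whole number of clusters
                rounded += -(-size // cluster_size) * cluster_size
        # the remainder is metadata invisible to the walk, plus slack from
        # compressed/sparse files whose logical size differs from allocation
        return used - rounded

    overhead = ntfs_overhead("C:\\")
    print(f"approximate overhead: {overhead:,} bytes")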

Trying to go at it the other way, counting all of the bytes in this or that
stream of metadata (USN journal, MFT, directory index, security index, etc.),
will suffer from a lot of imprecision and just be a headache. File
fragmentation affects how many MFT file records are required to store a
file's allocation information; you can't precisely predict the cutoff
for small files being embedded in their MFT record (as Al pointed out); and
so forth.

You can roughly, but reasonably, predict overhead at 1 KB/file by just
accounting for the MFT record. Since all kinds of specific details factor
into this, it could just be easier for you to empirically derive the average
overhead given your volume's usage pattern.
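
As a quick worked example of that rule of thumb (the file count here is
hypothetical):

    file_count = 250_000               # hypothetical number of files on the volume
    est_overhead = file_count * 1024   # ~1 KB MFT record per file
    print(f"~{est_overhead / 2**20:.0f} MiB estimated metadata overhead")
    # prints: ~244 MiB estimated metadata overhead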
 
