Andrew Mayo
I am astonished that, search as I might, all I can find about the
overhead of NTFS compression are informal comments where the authors
surmise, without a shred of evidence, that the overhead is
'substantial'.
Now, I remember when DoubleSpace was first released for DOS 6, that
Microsoft at that point said that the overhead of decompressing the
data was approximately 5% of the processing power of a 486/66.
I conclude from this that unless NTFS compression is incredibly
inefficient, the overhead of decompressing on the fly must
be vanishingly small. This implies that in most cases,
compression would actually benefit, not hinder, performance, because
hard drive access times have not improved much since the days of the
486 but processor speed certainly has!
I assume that NTFS compression must operate at the block level, since
random access to a compressed file would otherwise require that the
entire file be decompressed first; given the kind of performance I
have experienced with it, I do not believe that is what happens. There will be
some write overhead, and I do understand that the algorithm used is
asymmetrically tilted in favour of decompression speed - a wise
decision.
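As a rough sanity check on that asymmetry claim, here is a small sketch that times compression against decompression of the same buffer. It uses Python's zlib as a stand-in codec, since NTFS's actual LZNT1 algorithm is not exposed in any standard library; the absolute numbers will differ, but the compress/decompress asymmetry is typical of LZ-family codecs.

```python
import time
import zlib

def best_time(func, arg, runs=5):
    """Run func(arg) several times; return the best elapsed seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        func(arg)
        best = min(best, time.perf_counter() - start)
    return best

# Compressible test data, roughly 5 MB of repetitive content.
data = b"SQL Server row data, highly repetitive filler. " * 110000
compressed = zlib.compress(data, 6)

# Rates are expressed against the uncompressed size for a fair comparison.
mb = len(data) / (1024 * 1024)
comp_rate = mb / best_time(lambda d: zlib.compress(d, 6), data)
decomp_rate = mb / best_time(zlib.decompress, compressed)

print(f"ratio:      {len(data) / len(compressed):.1f}:1")
print(f"compress:   {comp_rate:.0f} MB/s")
print(f"decompress: {decomp_rate:.0f} MB/s")
```

On typical hardware the decompression rate comes out several times higher than the compression rate, which is consistent with the design trade-off described above.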
I am astonished that no-one seems to have ever scientifically measured
the overhead of NTFS compression. Given that it tends to reduce the
size of a SQL Server database by a factor of 10, you would suppose
that the gain in effective disk bandwidth would considerably overcome
the compression overhead. Indeed, on my laptop at home, I have NTFS
compression on all folders and have never noticed any overhead at all,
even running SQL Server.
Does anyone know of any tests that have actually been made to measure
the overhead?
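For what it is worth, a measurement along these lines does not need much machinery. Below is a minimal, hypothetical harness: point it at two copies of the same large file, one in an NTFS-compressed folder and one in an uncompressed folder (the paths shown are placeholders), and compare sequential read rates. The one real pitfall is the OS file cache, noted in the comments.

```python
import time

def read_rate_mb_s(path, block=1 << 20):
    """Sequentially read `path` in 1 MB chunks; return MB/s.

    NOTE: for a fair test the file must not already be in the OS file
    cache -- e.g. measure after a reboot, or use a file larger than RAM.
    """
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

# Hypothetical paths: the same file stored compressed and uncompressed.
# print("compressed:  ", read_rate_mb_s(r"C:\compressed\big.mdf"))
# print("uncompressed:", read_rate_mb_s(r"C:\uncompressed\big.mdf"))
```

If the argument in this post is right, the compressed copy should read faster whenever the disk, not the CPU, is the bottleneck, since fewer physical bytes cross the bus per logical byte delivered.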