Adding compression

D. Yates said:
I'm interested in why you would use a BufferedStream for reading data in
and then writing the data back to a file. I can see the benefit if you
don't know how much data is coming down the pipe (the MSDN example uses
a NetworkStream with sockets...I get that...) and you want to gradually
feed data into the BufferedStream until it hits its preset size limit and
then flushes, but in a case like this are there any advantages?

A few comments.

I have not seen anyone explain why the buffering affects
the compression.

GZip uses DEFLATE, which is a combination of LZ77 and Huffman coding.

If you compress 1 byte at a time, the algorithm degenerates into
pure Huffman coding with 1/8 overhead added.

GZipStream does not override WriteByte, so we are actually
calling Stream.WriteByte, which just does:

byte[] buffer = new byte[] { value };
this.Write(buffer, 0, 1);

That is why this is happening.
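
To see the effect, something like this can be used to compare one big
Write call against byte-wise WriteByte calls (just a sketch - the file
name is an example, and how big the difference is depends on the
framework version):

using System;
using System.IO;
using System.IO.Compression;

class CompressComparison
{
    static long CompressedSize(byte[] data, bool byteAtATime)
    {
        using (MemoryStream output = new MemoryStream())
        {
            // leaveOpen = true so the MemoryStream can be inspected afterwards
            using (GZipStream gzip = new GZipStream(output,
                       CompressionMode.Compress, true))
            {
                if (byteAtATime)
                {
                    // every call ends up as Write(buffer, 0, 1) on the compressor
                    foreach (byte b in data)
                        gzip.WriteByte(b);
                }
                else
                {
                    gzip.Write(data, 0, data.Length);
                }
            }
            return output.Length;
        }
    }

    static void Main()
    {
        byte[] data = File.ReadAllBytes("data.txt");  // any reasonably large test file
        Console.WriteLine("One Write call: {0} bytes", CompressedSize(data, false));
        Console.WriteLine("Byte at a time: {0} bytes", CompressedSize(data, true));
    }
}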

Could it be fixed? Yes!

GZipStream could override WriteByte and use an internal
buffer.

Should it do that? I am not sure!

The actual bytes written would then depend on the buffer size. I do
not think it is nice to have functionality depend on an internal
const. OK - then we could make it a property in the class and
have a constructor with an argument.
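
Just to illustrate the idea (purely a hypothetical sketch - the class
name, the buffer handling and the constructor argument are invented,
this is not how GZipStream actually behaves), such an override could
look something like this:

using System.IO;
using System.IO.Compression;

// Hypothetical sketch: collect WriteByte calls in an internal buffer
// and hand them to the compressor in larger chunks.
class BufferedWriteGZipStream : GZipStream
{
    private readonly byte[] buf;
    private int count;

    public BufferedWriteGZipStream(Stream output, int bufferSize)
        : base(output, CompressionMode.Compress)
    {
        buf = new byte[bufferSize];
    }

    public override void WriteByte(byte value)
    {
        buf[count++] = value;
        if (count == buf.Length)
            FlushBuffer();
    }

    public override void Write(byte[] buffer, int offset, int length)
    {
        FlushBuffer();                  // keep bytes in order: pending bytes go first
        base.Write(buffer, offset, length);
    }

    private void FlushBuffer()
    {
        if (count > 0)
        {
            base.Write(buf, 0, count);
            count = 0;
        }
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            FlushBuffer();              // do not lose buffered bytes on close
        base.Dispose(disposing);
    }
}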

But then I think it is just as simple to have the programmer
wrap with a BufferedStream.
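
For example (again just a sketch - the file names and the 64 KB buffer
size are arbitrary):

using System.IO;
using System.IO.Compression;

// The BufferedStream collects the byte-wise writes into larger chunks
// before they reach GZipStream. Disposal runs innermost first, so the
// buffer is flushed before the gzip trailer is written.
using (FileStream output = File.Create("data.gz"))
using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
using (BufferedStream buffered = new BufferedStream(gzip, 64 * 1024))
using (FileStream input = File.OpenRead("data.txt"))
{
    int b;
    while ((b = input.ReadByte()) != -1)
    {
        buffered.WriteByte((byte)b);   // lands in the buffer, not directly in GZipStream
    }
}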

Nobody should be using ReadByte and WriteByte on big files
anyway.
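
Reading and writing in blocks does the same job without the extra
wrapper (sketch again - the 64 KB buffer is an arbitrary size):

using System.IO;
using System.IO.Compression;

using (FileStream input = File.OpenRead("data.txt"))
using (FileStream output = File.Create("data.gz"))
using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
{
    byte[] buffer = new byte[64 * 1024];
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        gzip.Write(buffer, 0, read);   // big writes give LZ77 something to match against
    }
}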

A note in the docs would definitely be nice though!

Arne
 