Robert said:
yeah, just got a lossless compression from 4.1MB down to 23kB using
IrfanView 4.25, operating on a simple .bmp
There are various ways to reduce the amount of information in a picture.
1) Reduce the pixel dimensions. Take a 1600 by 1200 image (2 million pixels,
6 million bytes in 24 bit color) and reduce it to 320 by 200 pixels.
That ruins the visual appearance of the picture, so I guess I'd still
class such a method as lossy: there is less info in the picture than
there was originally. But no compression code is involved (no LZW, for example).
So if that's what you're doing, changing the resolution, that can help
achieve more than 3:1 improvement. As long as the image is viewed on
a small screen, or printed on wallet sized photo paper, it might still
be acceptable.
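A minimal sketch of option 1 in Python, using the Pillow library (my
choice for illustration, not something anyone here mentioned); the
file names are made up:

    from PIL import Image

    img = Image.open("photo_1600x1200.bmp")      # 1600x1200, 24 bit color
    print(img.size, len(img.tobytes()))          # about 5.8 million bytes of raw pixels

    # 320x200 has about 30x fewer pixels than 1600x1200, so about 30x
    # less raw information, with no compression codec involved at all.
    small = img.resize((320, 200), Image.LANCZOS)
    small.save("photo_320x200.bmp")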
2) Actual compression. If you start with the 1600 by 1200 image and
compress it losslessly, the file might be reduced from 6MB to 2MB.
You can get roughly 3:1 compression, without changing the pixel dimensions
of the picture. If you seek a higher compression ratio, more information is
discarded. JPEG does this, in the frequency domain, tossing higher
frequency (sharper edges in image) content. Once the lossy compression
reaches 144:1, it's getting pretty useless. Video compression works the
same way, using "macroblocks" and DCT to work in the frequency domain.
The compression method throws away information that doesn't bother
the human eye too much. Especially in video, you can throw away a lot
of info, and still have usable content. (In video, 100:1 would not be
unusual.) With still pictures, your eye and brain are less willing to
compromise, because there's plenty of time to detect the problems.
See the examples (JPEG "tombstone" images) about 70% of the way down
this web page. The samples show what happens when the resolution is
held constant, and the "Q factor" is varied. Q=100 is a compression
ratio of 2.6:1, and is still slightly lossy.
http://en.wikipedia.org/wiki/JPEG
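If it helps, here is a rough sketch of point 2, again in Python with
Pillow: first a lossless save as PNG, then a sweep of the JPEG quality
knob (Pillow's stand-in for the Q factor those Wikipedia samples vary).
The file names are invented, and the exact ratios depend entirely on
the image content:

    import os
    from PIL import Image

    img = Image.open("photo_1600x1200.bmp").convert("RGB")
    raw = len(img.tobytes())                     # about 5.8 MB uncompressed

    img.save("photo.png")                        # lossless; often roughly 3:1 on photos
    print("PNG  %7d bytes (%.1f:1)" % (os.path.getsize("photo.png"),
                                       raw / os.path.getsize("photo.png")))

    # Same pixel dimensions, progressively more information discarded.
    for q in (95, 75, 50, 10):
        name = "photo_q%d.jpg" % q
        img.save(name, quality=q)
        print("JPEG q=%2d %7d bytes (%.0f:1)"
              % (q, os.path.getsize(name), raw / os.path.getsize(name)))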
One kind of error in JPEG is roundoff error in the color space.
You can start with a 256 color image and run it through JPEG. Later,
when viewing the JPEG, your viewer application will request the
screen operate in 24 bit or 32 bit mode, for best results. If you
use a program which counts "color palette entries", the output
from JPEG might take 100,000 colors to represent. What you'll
find is there are a number of colors which are very close to
one another. They should all have been the same color, but due
to math roundoff errors, the pixels come out as slightly
different colors. So that's another kind of degradation
which is not apparent to the human eye, and is only of theoretical
interest. If you need to convert the 100,000 color image back
to 256 colors, it's not hard to do with the appropriate quantizer
code, so it's really not a big deal. It's just an example of
another aspect of lossy operation that isn't widely considered:
the colors were not preserved with good accuracy, and the human
eye simply isn't good enough to detect the difference.
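For what it's worth, the color effect is easy to demonstrate in
Python with Pillow; the input file name is hypothetical, and
quantize() here stands in for the "appropriate quantizer code":

    from PIL import Image

    img = Image.open("indexed_256.png").convert("RGB")   # started life as 256 colors
    print(len(img.getcolors(maxcolors=1 << 24)))         # about 256

    img.save("roundtrip.jpg", quality=90)
    jpg = Image.open("roundtrip.jpg")
    # After the DCT round trip, roundoff spreads each color into a
    # cluster of near-duplicates; the count can reach tens of thousands.
    print(len(jpg.getcolors(maxcolors=1 << 24)))

    # Median-cut quantization collapses the clusters back to 256 palette entries.
    restored = jpg.quantize(colors=256)
    restored.save("restored_256.png")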
Paul