Gamma correction question


Mike Engles

Kennedy said:
That is merely part of the effect, and only the obvious part which
completely misses the subtlety of the encode-decode effect. Linearity
is *NOT* the only metric of image quality.

To understand this consider what would happen if the CRT had the inverse
gamma (ie. 0.45 instead of 2.2) - then you would have to apply a gamma
compensation of 2.2 to the image. This would have the effect of
darkening the image, which would then be brightened by the CRT. You
would *still* "see the image correctly" in terms of its brightness
(because you have perfectly compensated the CRT non-linearity) but it
would look very poor in terms of shadow posterisation.

This is trivial to demonstrate. Take a 16-bit linear gradient from
black to white. Apply a gamma of 2.2 which will darken the image. Then
reduce the image to 8-bits, which would be the state it would appear in
prior to being sent to the CRT. Then apply a gamma of 0.45 to simulate
how such a CRT would display the image. It is still apparently the
correct brightness and is perfectly linear. However, it is now severely
posterised in the shadows and a visibly poor gradient.
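
A minimal numpy sketch of that experiment (illustrative only - it assumes a
pure 2.2 power law and values scaled to 0..1, since the post doesn't name a
particular tool):

```python
import numpy as np

# 16-bit linear gradient from black to white, scaled to 0..1.
linear = np.linspace(0.0, 1.0, 2**16)

# Pre-compensate for the hypothetical 0.45-gamma CRT by applying a gamma
# of 2.2 (raising to the power 2.2), which darkens the data.
encoded = linear ** 2.2

# Reduce to 8 bits - the state the data would be in before reaching the CRT.
encoded_8bit = np.round(encoded * 255.0) / 255.0

# Simulate the hypothetical CRT's 0.45 gamma, which undoes the darkening.
displayed = encoded_8bit ** (1.0 / 2.2)

# Brightness and linearity are restored overall, but the shadows are
# severely posterised: only about 3 distinct levels survive in the darkest
# tenth of the gradient, against ~26 for a plain 8-bit linear encoding.
print(len(np.unique(displayed[linear < 0.1])))   # ~3
print(len(np.unique(displayed)))                 # 256
```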

This exercise should demonstrate clearly that simply precompensating for
the non-linearity of the display is not enough. It is important that
the display non-linearity itself is the opposite of the perceptual
non-linearity, otherwise you need far more bits to achieve tonal
continuity and inevitably waste most of the available levels.


On the contrary, since the gamma compensated image is in a perceptually
evenly quantised state, you have equalised the probability of losing
data by making the lighter parts lighter with that of losing it by
making darker parts darker, whatever processing you wish to apply. In the linear state
there are insufficient levels to adequately describe the shadows with
8-bit data, and consequently processing in *that* state results in lost
information - in the shadows.
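
As an illustrative count (assuming a pure 2.2 power law and 8-bit codes
normalised to 0..1):

```python
import numpy as np

codes = np.arange(256) / 255.0

# How many of the 256 codes land in the darkest 1% of linear luminance?
print(np.sum(codes <= 0.01))          # linear encoding:    3 codes
print(np.sum(codes ** 2.2 <= 0.01))   # gamma 2.2 encoding: 32 codes
```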

Editing any image will cause image degradation irrespective of the
number of bits. The issue is whether that degradation, or loss of
information, is perceptible. Editing 8-bit images in the linear state
will produce much more perceptible degradations, particularly in the
shadows, than editing in 8-bit gamma compensated data.


And hence your edits are applied with a perceptual weighting to the
available levels.


With 16-bits it is much less of an issue, but the same rules apply - you
have a higher probability of your processing causing loss of details in
the shadows than you have in the highlights, and processing in
"perceptual space" (ie. gamma compensated data) equalises the
probability of data loss throughout the image range so that you do not
damage the shadows any more than the highlights or the mid-tones by the
application of the same process.
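
One way to picture this (a rough sketch at 8 bits, where the effect is
easiest to see - 16-bit behaves the same way but far more subtly; the
one-stop darken-and-restore edit and the pure 2.2 power law are my own
illustrative choices):

```python
import numpy as np

def quant8(x):
    """Quantise values in 0..1 to 8 bits."""
    return np.round(np.clip(x, 0.0, 1.0) * 255.0) / 255.0

scene = np.linspace(0.0, 1.0, 2**16)            # "true" linear luminances

# Workflow 1: store and edit the image as 8-bit *linear* data.
lin = quant8(scene)
lin = quant8(lin * 0.5)                         # darken by one stop
lin = quant8(lin * 2.0)                         # restore the brightness
result_linear = lin                             # already linear light

# Workflow 2: store and edit the image as 8-bit *gamma 2.2* data.
gam = quant8(scene ** (1 / 2.2))
gam = quant8(gam * 0.5 ** (1 / 2.2))            # the same one-stop darken
gam = quant8(gam * 2.0 ** (1 / 2.2))            # and the same restore
result_gamma = gam ** 2.2                       # decode to linear light

# Distinct levels surviving in the darkest tenth of the tonal range:
print(len(np.unique(result_linear[scene < 0.1])))   # roughly a dozen
print(len(np.unique(result_gamma[scene < 0.1])))    # sixty or more
```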

Seeing the image in a linear state is only part of the solution, and
whilst you continue to focus on linearity at the expense of the other
issues you will never understand the reason why gamma is necessary.

A binary (1-bit) image is perfectly linear, but isn't a very good
representation of the image; neither is 2, 3 or 4 bits, and so on. 6-bits
is adequate (and 8-bits conveniently gives additional headroom for
necessary colour management functions) *if* the available levels
produced by those 8-bits are distributed optimally throughout the
luminance range, which is such that the discrete levels are equally
distributed throughout the perceptual response range. As soon as you
depart from *that* criterion you increase the risk of discernible
degradation in those regions of the perceptual response range which have
fewest levels. This is irrespective of how many bits you have in your
image although, obviously, the more bits you have the less likely the
problem is to become visible. Less likely doesn't mean never though!
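
To put rough numbers on the distribution argument (an illustrative
calculation assuming a pure 2.2 power law, and taking the relative
luminance jump between adjacent codes as the quantity that matters
perceptually):

```python
def relative_step(L, bits, gamma):
    """Relative luminance jump between adjacent codes at linear luminance L,
    for an encoding that stores L ** (1/gamma) with the given bit depth."""
    levels = 2 ** bits - 1
    code = (L ** (1 / gamma)) * levels
    next_L = ((code + 1) / levels) ** gamma
    return (next_L - L) / L

for L in (0.01, 0.1, 0.5):
    print(L,
          relative_step(L, 8, 1.0),    # 8-bit linear
          relative_step(L, 8, 2.2))    # 8-bit gamma 2.2

# In the shadows (L = 0.01) the linear step is ~39% of the signal while the
# gamma-encoded step is ~7%; at mid-grey and above the two encodings are
# within a factor of about 1.5 of each other.
```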
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)



Hello
I was not quite arguing that we should not apply a gamma to 16-bit
linear data, but that we should decode it in the image rather than in
the display, and then treat it linearly rather than in a gamma space.
Presumably this encode-decode process meets the demand of gamma to
maximise the bits.

Mike Engles
 

Kennedy McEwen

Mike Engles said:
Hello
I was not quite arguing that we should not apply a gamma to 16-bit
linear data, but that we should decode it in the image rather than in
the display, and then treat it linearly rather than in a gamma space.
Presumably this encode-decode process meets the demand of gamma to
maximise the bits.
No it doesn't, which is why it isn't done that way with 8-bits. The
fact that you have 16-bit data doesn't change the rules for
optimisation, it just changes the percentage of lost information
necessary for it to become discernible.
 

Chris Cox

Mike Engles said:
Hello

Applying a gamma to an image brightens a linear image.

Using a gamma encoding doesn't change the appearance of the image
(except due to quantization issues).
It only brightens or darkens if you apply a gamma adjustment and fail
to take that into account when displaying the image.
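
A tiny check of that round trip (a sketch assuming a pure 2.2 power law):

```python
import numpy as np

image = np.random.default_rng(0).random((4, 4))   # some linear pixel values

encoded = image ** (1 / 2.2)       # gamma-encode the image
decoded = encoded ** 2.2           # the display/decoder applies the inverse

# The round trip is the identity (up to floating-point error), so the
# encoding by itself neither brightens nor darkens the picture; only
# failing to decode it, or quantising too coarsely, changes what you see.
print(np.allclose(decoded, image))   # True
```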

That image is fed
to a CRT which dulls the image. Now this is convenient.

We see the image correctly because the CRT has the opposite
non-linearity from that applied as the gamma.

This would seem to be fine if we did nothing else to the image. If we
edit this in 8 bits, with the image in a brightened state, there is a
danger of making the already brighter bits brighter and losing
information. Editing any image in 8 bits will cause image degradation.

Yes, editing in any bit depth can cause degradation.
But the gamma (or any other) encoding has little to do with it.


If we were using 16 bits and applied the gamma-encoded image to a linear
display, we would have to apply the effect of a CRT to the display, but
we are still editing in a gamma state.

Uh, what?


I still cannot see why 16 bit images need not be edited in a linear state,

Because if you want 16 bit per channel image quality in a linear
encoding you'd need something on the order of 20 bits per channel.
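
One rough way to arrive at a figure of that order (my own back-of-envelope
estimate, not necessarily how the figure above was reached - it assumes a
pure 2.2 power law and asks the linear encoding to match the 16-bit gamma
encoding's step size down to a deep-shadow luminance of 0.1% of white):

```python
import math

gamma, gamma_bits, L = 2.2, 16, 0.001   # L: deep-shadow luminance, 0.1% of white

# Linear-luminance step between adjacent codes of the 16-bit gamma encoding at L.
levels = 2 ** gamma_bits - 1
code = (L ** (1 / gamma)) * levels
gamma_step = ((code + 1) / levels) ** gamma - L

# A linear encoding has a constant step of 1 / (2**bits - 1), so matching the
# gamma encoding's step size in these shadows needs roughly:
linear_bits = math.log2(1.0 / gamma_step + 1)
print(round(linear_bits, 1))   # ~20 bits
```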


and apply the gamma
correction to the image rather than the display,

Again: the display and the gamma encoding of the image are unrelated.
Why do you keep trying to make them related?
How many ways can I explain the concept of "unrelated" before it sinks
in?

Chris
 

Mike Engles

Chris said:
Using a gamma encoding doesn't change the appearance of the image
(except due to quantization issues).
It only brightens or darkens if you apply a gamma adjustment and fail
to take that into account when displaying the image.


Yes, editing in any bit depth can cause degradation.
But the gamma (or any other) encoding has little to do with it.


Uh, what?


Because if you want 16 bit per channel image quality in a linear
encoding you'd need something on the order of 20 bits per channel.


Again: the display and the gamma encoding of the image are unrelated.
Why do you keep trying to make them related?
How many ways can I explain the concept of "unrelated" before it sinks
in?

Chris


Hello

It does strike me that the display transfer function is as important as
the gamma encoding. A CRT with its transfer curve decodes the gamma
applied to the image. At some point, with a linear display system and
gamma encoding, some sort of decoding would have to take place.
For an image to look correct the display would have to apply the inverse
coding. So the gamma encoding and the display function do have a
relationship.

Mike Engles
 
