How to calc 8-bit version of 16-bit scan value

Kennedy McEwen

Jens-Michael Gross said:
Well, to prove your statement wrong just go the opposite way and
calculate a 16 bit value from the calculated 8 bit value:

65535/256 is 255. Well, right.

Well, considerably wrong - and completely irrelevant!

65535 / 256 = 255.99609375, which is hardly surprising since you have
mixed up the number of states in one range with the peak in another!

65536 / 256 = 256
but 255*256 = 65280.

but 256 * 256 = 65536
You're 255/65536 off
your original value.

So there is no error, if you are consistent with the information that is
used.
65535/257 (with rounding) is 255 too. But 255*257 is 65535 again. Zero
error.
Zero error because you IGNORE the zero state!
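[For concreteness, here is a minimal sketch in Python of the two round trips being argued over; the function names are illustrative only.]

    # 16 -> 8 via the 257 scale factor, then back by multiplying by 257.
    def to8_by_257(v16):
        return (v16 + 128) // 257      # integer rounding of v16 / 257

    def to16_by_257(v8):
        return v8 * 257                # 255 * 257 = 65535: the end point survives

    # 16 -> 8 by truncation, then back by multiplying by 256.
    def to8_by_256(v16):
        return v16 // 256              # equivalent to v16 >> 8

    def to16_by_256(v8):
        return v8 * 256                # 255 * 256 = 65280: 255 short of 65535

    assert to16_by_257(to8_by_257(65535)) == 65535
    assert to16_by_256(to8_by_256(65535)) == 65280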
 
SJS

The other solution would be adding 8 and then dividing by 17.

While this may reduce the error at the extremes, it increases the average
error for values not at the extremes. Unless you have images that
consist of extremely white samples, and black samples just below the
first step, with very little in between, I can't see the value of this
approach.

I still think dropping the last 8 bits is the correct way to convert
16 bits to 8 bits (we are talking fractions here). Converting 65535 to
255 doesn't really cause an error of 255/65536, because 255/256 actually
refers to a range starting at 255/256 and extending to just before
256/256. This is 65280/65536 to 65535/65536, and since the centre of
this area is 65407/65536, the maximum error we get by dropping the
rightmost bits is 128. So no rounding (e.g. +127) is needed.

However, if I had to go from 8-bits to 16-bits (heaven forbid) I would
add 127 to the answer (e.g. 255/256 would convert to 65407/65536).
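[In code, Steven's two conversions look like this; a minimal sketch with illustrative names.]

    def to8(v16):
        return v16 >> 8            # drop the low byte; error is at most 128/65536

    def to16(v8):
        return (v8 << 8) + 127     # near-centre of the 256-wide bucket: 255 -> 65407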

-- Steven
 
Philip Homburg

All of that depends on where your baseline is drawn from - the centre of
the range or the transition points.

If you set the baseline at the center, we are back to rounding (but
the computations can be done with truncation). Is that what you
are proposing?

Basically you are giving up representations for pure black and white for
a slightly easier computation and a nicer histogram. Sounds like a
bad trade-off to me. But if you document it, it should not cause too
much harm.
Of course there is a reason - symmetry when dealing with negative
numbers. Truncation rounds positive numbers down in magnitude but
negative numbers up. Images are coded in positive only integers though,
so the reason for rounding is irrelevant.

So when you are measuring distances, mass, time, etc., you always truncate
because there are no negative values? Interesting argument.
 
Kennedy McEwen

If you set the baseline at the center, we are back to rounding (but
the computations can be done with truncation). Is that what you
are proposing?

Basically you are giving up representations for pure black and white for
a slightly easier computation and a nicer histogram. Sounds like a
bad trade-off to me.

Since when has truncation resulted in loss of black? Quite
specifically, all of the values between 0 and 255 inclusive convert to
zero, which is as pure black as it gets - well, unless you extend the
discussion to an invention of one of my colleagues, negative
luminescence, but I don't think that is relevant in this context.
But if you document it, it should not cause too
much harm.
By stating it openly on a public forum such as this it has been as
widely published as it could ever be by any other means. Does that
imply that you are happy with it now?
So when you are measuring distances, mass, time, etc., you always truncate
because there are no negative values? Interesting argument.
You measure these continuous parameters in positive-only integers, do
you? That is what real numbers were invented for!
 
Philip Homburg

You measure these continuous parameters in positive-only integers, do
you? That is what real numbers were invented for!

Okay, that is more than enough. Sheesh.
 
Wayne Fulton

I'd like to point out an obvious fact before everyone gives up.

That being that those arguing that the so-called +127/257 "method" is
best (for 16 bit to 8 bit conversion) don't seem to realize that their
only point, their only detail, their only "proof", their only concern,
is ONLY about conversion of 8 bits back to 16 bits. Which is a
different, artificial problem, and is NOT the current thread's subject.
Nevertheless nothing else seems important to them except to map it back
to 16 bits. They seem to have the false notion that this can somehow
measure 8 bit accuracy. Every word they utter is about converting back
to 16 bits. Every notion of accuracy is after converting back to 16
bits.

They do have a 16 bit problem, but there is no 8 bit problem, except
the one they made. It is the wrong idea for this goal.

That conversion back to 16 bits must map one single value to one of 256
possible values, which is indeed an impossible problem. After we
discard 255/256 of the data, regardless of method, there is no clue
left telling how to get the exact original value back. We can only
approximate. There are far greater problems regarding this.

Amazingly, those arguers want to distort the desired 8-bit results for
the ONLY reason to slightly improve the average error of that
back-conversion goal (which wasn't even the goal, it won't even be done,
and the original data simply isn't that accurate anyway). They seem
willing to mess up the 8-bit conversion for a false goal, which seems a
really pointless reason to distort the useful 8-bit results which we
actually seek.

This thread's topic is instead about converting 16 bits to 8 bits. The
goal is NOT to go back. We'd be really stupid to convert to 8 bits if
the goal was to retain 16 bit data. The typical usage of 8-bit
conversion never considers going back (I doubt 1 case in millions ever
considers it). Going back to recover exact values is going to be a
huge problem regardless. But it is a very different problem than
addressed here.

This thread is instead about converting to 8 bits, presumably in the
best way giving the best 8-bit results. The arguers should instead try
to discuss this topic of 8 bit conversion. And for this purpose, the
so-called truncation method (divide by 256) is clearly and obviously
far superior to any other method, not only because of its ideal and
perfect results, but also because that is simply how our numbering
system actually works.

By number system, I refer to the concept that example value 254 = 2x100
+ 5x10 + 4x1 (decimal). This is fundamental above all, and it also
specifies the obvious way to simply use the high byte hexadecimal value
for the 8-bit value (THE 8-bit value). Divide by 256 is one way to
access that high byte in hex, due to our number system's design.

Probably we either know or don't care, but hexadecimal is just an easy
way to visually see numbers in binary groupings, and in this case:

0..255 decimal 0000..00ff hex maps to 0
256..511 decimal 0001..01ff hex maps to 1
512..767 decimal 0002..02ff hex maps to 2
...
65024..65279 decimal fe00..feff hex maps to fe which is 254
65280..65535 decimal ff00..ffff hex maps to ff which is 255

It simply cannot get prettier than that! Dividing by 256 gives exactly
the same result in every case, just another way to think of it.

So the very best reason is that this "truncation" method obviously does
perfectly distribute 65536 possible values 0..65535 into exactly 256
groups, each with exactly 256 values 0..255, precisely so. Its results
begin with 256 values 0..255 (hex 0000..00ff) perfectly mapped to 8-bit
zero. It ends with 256 values 65280..65535 (hex ff00..ffff) perfectly
mapped to 8-bit 255 (the other method fails poorly at this, it doesn't
know the rules).

Yes, it is truncation, but intentionally so; its purpose is to group
the data evenly. Obviously there is no possible concept of "rounding"
those results differently: 0 and 255 are the only possible result
values these two end groups can have, and these results are full range
and linear and precisely accurate as is. Every point between these end
points is also equally good, perfectly distributed. It is all
beautifully and ideally and optimally and theoretically and exactly
perfect, in every possible respect, and by definition (because this is
simply how numbers work).

It simply doesn't get any better, and frankly this is obviously true;
fundamentals are more a matter of definition than of computation.

This so-called truncation method (dividing by 256) for converting 16
bit to 8 bit data has no possible concept of error, no more than
counting 1,2,3 has possibility of computational error. It is simply
how numbers work, by definition, and it gives clearly ideal results.
Ideal results really should be the primary concern.
 
Wayne Fulton

Oops! I was duplicating similar lines, and didn't edit it right.
That high byte was a major point too. <g>
Sorry, it should have been:

0..255 decimal 0000..00ff hex maps to 0
256..511 decimal 0100..01ff hex maps to 1
512..767 decimal 0200..02ff hex maps to 2
...
65024..65279 decimal fe00..feff hex maps to fe which is 254
65280..65535 decimal ff00..ffff hex maps to ff which is 255
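[The corrected table is easy to verify mechanically; a quick Python sketch:]

    for v in (0, 255, 256, 511, 512, 767, 65024, 65279, 65280, 65535):
        print("%5d  %04x  ->  %3d (%02x)" % (v, v, v >> 8, v >> 8))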
 
Philip Homburg

Yes, it is truncation, but intentionally so; its purpose is to group
the data evenly. Obviously there is no possible concept of "rounding"
those results differently: 0 and 255 are the only possible result
values these two end groups can have, and these results are full range
and linear and precisely accurate as is. Every point between these end
points is also equally good, perfectly distributed. It is all
beautifully and ideally and optimally and theoretically and exactly
perfect, in every possible respect, and by definition (because this is
simply how numbers work).

The starting point is a function that maps numbers in the range [0.0 ... 1.0]
onto a set of integers. There are at least three methods:
1) divide by 255 and round. 0.0 maps to 0, 1.0 maps to 255 and everything
else has an average error of 0.25/255 or 1/1020.
2) divide by 256 and truncate. 0.0 maps to 0, 255/256 maps to 255.
Average error is 0.5/256 or 1/512
3) subtract 0.5/256, divide by 256 and round, 1/512 maps to 0, 511/512
maps to 255, and the average error is 0.25/256 or 1/1024.

Of course you can recompute these three methods for 16-bit values.

If you use method 1) throughout, you convert from a 16-bit value to an
8-bit value using the 'divide by 257 and round' method.
For methods 2 and 3 you convert by dividing by 256 followed by truncation.
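[A minimal Python sketch of the three mappings, written with multiplication as presumably intended; Kennedy points out below, and Philip later concedes, that "divide" should read "multiply".]

    import math

    def round_half_up(y):                  # avoid Python's round-half-to-even
        return math.floor(y + 0.5)

    def method1(x):                        # x in [0.0, 1.0]
        return round_half_up(x * 255)      # 0.0 -> 0, 1.0 -> 255

    def method2(x):
        return min(int(x * 256), 255)      # truncate; clamp added for the 1.0 end point

    def method3(x):
        return round_half_up((x - 0.5/256) * 256)   # 1/512 -> 0, 511/512 -> 255

    # The matching 16-to-8-bit conversions:
    def from16_method1(v):
        return (v + 128) // 257            # 'divide by 257 and round'

    def from16_method23(v):
        return v >> 8                      # 'divide by 256 and truncate'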

The appropriate conversion method from 16 to 8 bits is completely determined
by the conversion method from fractional numbers to integers.

I think method 2 is no good because the average error is twice as
high as with the other methods and you cannot represent 1.0.
I think that method 3 is far too complex when you want to do other things
with your data (such as masking, scaling, etc). And you cannot represent
0.0 and 1.0.

Which leaves method number 1.
This so-called truncation method (dividing by 256) for converting 16
bit to 8 bit data has no possible concept of error, no more than
counting 1,2,3 has possibility of computational error. It is simply
how numbers work, by definition, and it gives clearly ideal results.
Ideal results really should be the primary concern.

If I take a value for pi, for example 3.14159265358979323846264338327950288.
Now I convert to a 9 digit integer: 314159265. Now I want to reduce the
number of digits to 5. So I divide by 10000 and round, and the result is
31416.

Using your argument (divide by 10000 and truncate) I would end up with
31415.

(I am done with this thread. Unless I made a serious error, you can
keep counting your fractions, I don't care).
 
Don

Just an observation (a reality check, really) but both/all sides seem
to take it as a given that the 16-bit image will have values in all
256 buckets.

This is rarely the case so if ultimate 8-bit accuracy is desired a far
superior method would be adaptive and limit the conversion only to
values actually present in the 16-bit image.

Easiest explained with an example: Let's assume the 16-bit image only
occupies 75% of the histogram. Using either method would waste 25% of
the available dynamic range in order to implement a theoretically
perfect algorithm.

The difference between the two methods (truncation or +128/257) pales
into insignificance next to the waste of 25% of dynamic range! A far superior
routine (using either method) would convert only the 75% actually
used.

In other words, instead of blindly squeezing each of the 256 16-bit
values into a single 8-bit value (whether those 16-bit values exist or
not) an intelligent, adaptive algorithm - in the above example - would
squeeze "only" 192 16-bit values into a single 8-bit value. A
considerable improvement.

Don.

P.S. Yes, one can play with levels and friends in order to "expand"
the dynamic range of the original 16-bit image before "blind"
conversions, but all that does is create a "comb" histogram with
missing values (blurring the difference between the two methods even
more). Not to mention that some range expansion is not even
proportional, introducing a potentially even larger margin of error in
some areas of the image (again, blurring the distinction between
truncation and +128/256).

Anyway, not really important, but since "nits are being picked" I just
thought I'd throw this "reality check" in the mix...
 
Kennedy McEwen

Yes, it is truncation, but intentionally so; its purpose is to group
the data evenly. Obviously there is no possible concept of "rounding"
those results differently: 0 and 255 are the only possible result
values these two end groups can have, and these results are full range
and linear and precisely accurate as is. Every point between these end
points is also equally good, perfectly distributed. It is all
beautifully and ideally and optimally and theoretically and exactly
perfect, in every possible respect, and by definition (because this is
simply how numbers work).

The starting point is a function that maps numbers in the range [0.0 ... 1.0]
onto a set of integers. There are at least three methods:
1) divide by 255 and round. 0.0 maps to 0, 1.0 maps to 255 and everything
else has an average error of 0.25/255 or 1/1020.
2) divide by 256 and truncate. 0.0 maps to 0, 255/256 maps to 255.
Average error is 0.5/256 or 1/512
3) subtract 0.5/256, divide by 256 and round, 1/512 maps to 0, 511/512
maps to 255, and the average error is 0.25/256 or 1/1024.
Now you are getting ridiculous, Philip. It is impossible to map
numbers, even real numbers, from the range [0.0 .. 1.0] to a set of
integers by any of those methods!

1) divide by 255 results in real numbers in the range of [0.0 ..
0.00'3921568627450980'] and no amount of rounding will shift this range
to [0 .. 255]. Perhaps you mean multiply rather than divide.

2) divide by 256 and truncate has similar results, so again perhaps you
mean multiply.

3) subtract 0.5/256 (= 0.001953125) and divide by 256 results in the
range becoming [-0.00000762939453125 .. 0.00389862060546875] and, again,
no amount of rounding will map this to the range you refer.

Since none of the three methods that you describe actually performs the
operation you claim for it, the errors you compute are irrelevant.

Furthermore, even if you had specified methods which performed the
claimed operations the error you compute would still be irrelevant since
at no point in your argument do you define what that error is based on -
i.e. what is ZERO error or, in simple terms, what are you claiming to be
a perfect computation. Before you even begin to convince anyone that
your definition of perfect really is perfect, you must explain why that
is the case.

For the situation in the subject thread, I contend that integer division
by 256 (or a shift right by 8 bits) is perfect because it scales the
original number of states into the target number of states with equal
distribution and weighting throughout the full range. No other method
suggested so far achieves this property and the only arguments put
forward for their alleged superiority is a reference to some undefined -
and apparently undefinable - error magnitude.

Using the comparative conversion suggested by Jens-Michael of an 8-bit
image to 4-bit for simplicity, the perfection of the integer division is
immediately apparent. Simply create a ramp from peak black to peak
white across an image 256 pixels wide. Then convert the image to 4-bit
data using either integer division by 16 or the equivalent of the
alternative method you argue. Ignoring your obvious computational
errors above, I suspect that this reduces to the function int((source +
8)/17) as suggested by Jens-Michael.

The two images are significantly different. Using simple integer
division by 16 and truncation, the full range from black to white is
produced with an even population - as would be expected from a linear
ramp original, which also has an even population. Each colour in the
range from 0 to 15 is represented by a band exactly 16 pixels wide. The "add 8
and divide by 17" method again results in the full range from black to
white being produced but, contrary to what Jens-Michael suggested, looks
much less natural, because each colour is now 17 pixels
wide except for peak black and white, which are only 9 pixels wide each.
In short, a linear ramp has been transformed into an "S" curve!
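[Kennedy's ramp experiment is easy to reproduce; a minimal sketch counting how many of the 256 ramp values land on each 4-bit level:]

    from collections import Counter

    ramp = range(256)                             # linear 8-bit ramp
    trunc = Counter(v >> 4 for v in ramp)         # integer division by 16
    alt   = Counter((v + 8) // 17 for v in ramp)  # the 'add 8, divide by 17' method

    print([trunc[k] for k in range(16)])  # [16, 16, ..., 16]      even population
    print([alt[k] for k in range(16)])    # [9, 17, ..., 17, 9]    end levels squeezed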

By examining the resulting data from exactly this test it is very clear
that the reduced error argument for the alternative to simple integer
division is false, because it ignores one basic fact:
as the number of colours reduces, the value which represents peak white
reduces as a proportion of the number of states available in
the range. In other words, for 16-bit data, peak white is 65535 of
65536 available colours. For 8 bit data it is only 255 of 256 states,
whilst for 4-bit data it is only 15 of 16 states. In short, reducing
the number of available colours ALSO reduces the range threshold
required to achieve peak white. Consequently, the "error" that you
estimate based on the difference between the integer result and real
number divisions is completely erroneous in imaging terms. Quite
simply, your "constant" or average error estimates given above and in
previous posts are complete bunkum - they may be accurate in numerical
terms for computation of the minimum difference between integers and
real numbers, but in terms of image luminance they are completely false.

In fact, if you compute the luminance error correctly (taking account of
the change in thresholds across the range) both methods have *exactly*
the same average error across the entire range. I therefore invoke
Einstein's universal rule - nothing should be more complex than it needs
to be. Simply right shift the data by the required number of bits. For
an x86 processor this is around 2-5 clock cycles per instruction,
depending on whether the data is in cache or not, compared to 3-5 clock
cycles for an add and 26-28 cycles for an integer division, again
depending on whether the data is a cache hit or not. In short, shifting
the data is around 6 to 15 times faster than the alternative, has exactly
the same mean luminance error, and retains histogram integrity.
(I am done with this thread. Unless I made a serious error, you can
keep counting your fractions, I don't care).
You have made numerous serious errors in that, and previous, posts!
 
Kennedy McEwen

Don said:
Just an observation (a reality check, really) but both/all sides seem
to take it as a given that the 16-bit image will have values in all
256 buckets.

This is rarely the case so if ultimate 8-bit accuracy is desired a far
superior method would be adaptive and limit the conversion only to
values actually present in the 16-bit image.

Easiest explained with an example: Let's assume the 16-bit image only
occupies 75% of the histogram. Using either method would waste 25% of
the available dynamic range in order to implement a theoretically
perfect algorithm.

The difference between the two methods (truncation or +128/257) pales
into insignificance next to the waste of 25% of dynamic range! A far superior
routine (using either method) would convert only the 75% actually
used.

In other words, instead of blindly squeezing each of the 256 16-bit
values into a single 8-bit value (whether those 16-bit values exist or
not) an intelligent, adaptive algorithm - in the above example - would
squeeze "only" 192 16-bit values into a single 8-bit value. A
considerable improvement.
You make a valid point that any ideal conversion should adaptively
weight the conversion to the actual levels used; however, the
implementation that you suggest would result in an uncontrolled level
and gain shift with every image. Rather, such an adaptive algorithm
would implement the transformation on a histogram level priority such
that the highest populated source levels would be transformed to the
closest corresponding luminance level (in each RGB or HSI parameter) of
the lower bit depth. The result would still waste 25% of the available
dynamic range, but that is just a fact of life - garbage in garbage out!

To do as you propose would require a new image file format, such that
all possible levels could be occupied and a lookup table mapped these to
the actual data fed to the display DAC or printer jets. Whilst such a
format already exists for 2, 4, and 8-bit colours (e.g. gif or paletted
bmp and tif, amongst others) I am not aware of such a format for higher
bit depths, such as 16, 24 or 48 bit colour. In either case, all you
would be doing is shifting the problem from the data itself to the
lookup table.
 
Wayne Fulton

The starting point is a function that maps numbers in the range [0.0 ... 1.0]
onto a set of integers. There are at least three methods:
1) divide by 255 and round. 0.0 maps to 0, 1.0 maps to 255 and everything
else has an average error of 0.25/255 or 1/1020.
2) divide by 256 and truncate. 0.0 maps to 0, 255/256 maps to 255.
Average error is 0.5/256 or 1/512
3) subtract 0.5/256, divide by 256 and round, 1/512 maps to 0, 511/512
maps to 255, and the average error is 0.25/256 or 1/1024.

Of course you can recompute these three methods for 16-bit values.


I guess I wasted my breath. <g> And it is NOT at all the same concept as
rounding PI. The 8-bit conversion task only involves dividing 65536 possible
16-bit values linearly into 256 equal groups, each group representing one
8-bit value 0..255. The main requirement is that there must be exactly 256
equal groups, equally divided, because there are 256 possible 8-bit values,
and we have high regard for unskewed results. Any other result is wrong. I
didn't follow your 0..1 range, but as best I understand your 1) and 3),
involving rounding and dividing by 255 or 256, both are wrong, giving
incorrect results. They don't produce 256 equal unskewed groups.

The right answer is that this problem is just a simple regrouping. It is as
simple as counting; just count off the first 256 16-bit values and map each
value in that group to 8-bit value 0, and count the next group of 256 and map
each in it to value 1, and keep counting until the last group of 256 which is
mapped to value 255. Since there are 65536 values, it comes out exactly
perfect, as required. There is no computation, and no concept of error, only
counting. Any correct method must give only this same result. Any different
result is obviously wrong.

But we do need a better automatic algorithm, and because of the binary nature
of this problem, and the factor of 256 relationship between 8 bits and 16
bits, your 2) divide by 256 with no rounding, accomplishes that same
result more easily, without actually counting (division is like counting by 256 at
a time). The result is still perfect, still the exact same result, with
absolutely zero possibility of any error, same as no error is possible when
counting 1,2,3. I suppose one might count WRONG, but by definition, there is
no possible error percentage, the answer is simply 1,2,3, and it is either
correct or not.
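[The counting argument can be checked directly; a quick sketch confirming that integer division by 256 partitions the 65536 values into exactly 256 groups of exactly 256 each:]

    from collections import Counter

    groups = Counter(v // 256 for v in range(65536))
    assert len(groups) == 256
    assert all(n == 256 for n in groups.values())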

Since there is no concept of error percentage when counting, you totally
lose me when you imagine some error percentage. I'd guess you may also be
concerned with converting back to 16 bits, which as mentioned before is a
different difficult problem, and a silly thing to do in this case. I am
certain that any error you perceive only involves that later 16-bit
conversion, and does not involve the 8-bit conversion in any way (assuming it
was done via the perfect /256 conversion method, which is error free).

This is a very easy problem, don't make it hard.
 
Wayne Fulton

Just an observation (a reality check, really) but both/all sides seem
to take it as a given that the 16-bit image will have values in all
256 buckets.

This is rarely the case so if ultimate 8-bit accuracy is desired a far
superior method would be adaptive and limit the conversion only to
values actually present in the 16-bit image.

Easiest explained with an example: Let's assume the 16-bit image only
occupies 75% of the histogram. Using either method would waste 25% of
the available dynamic range in order to implement a theoretically
perfect algorithm.


Right, but this conversion subject must be about All Possible Values
that might conceivably exist in any image. All Possible Values do in
fact exist in the realm of possibility, but you're right, probably not in
any one image. So the entire discussion is without exception about All
Possible Values in any possible file, and not about actual values in
some one file.

There are 65536 possible 16 bit values, and 256 possible 8 bit values.
These two numbers are 2 to power of 16, and 2 to power of 8, and those
two numbers are simply how many different or unique values can
physically be stored in that number of bits.... All Possible Values,
ready for anything that shows up.

So for this purpose, it doesn't matter what actual values do exist in
some image, as we are not speaking of any specific image. The next image
might be of a black cat in a coal mine, or a polar bear in a snow storm,
and they will be different, but 8 bit conversion simply maps whatever 16
bit values that might actually be there to the new 8-bit equivalents,
using this general rule developed for All Possible Values which it may
or may not encounter in any one image. But the general rule applies to
any and all images equally, so it obviously must necessarily include All
Possible Values, ready for anything that shows up.

Yes, we certainly might improve the image with histogram adjustments, or
not, which probably would be good for this image if we did, but it
doesn't really matter regarding the subject of 8 bit conversion. The
8-bit conversion is a conversion process, not an editing process.
 
Don

Yes, we certainly might improve the image with histogram adjustments, or
not, which probably would be good for this image if we did, but it
doesn't really matter regarding the subject of 8 bit conversion. The
8-bit conversion is a conversion process, not an editing process.

I realize that, but the difference between the two methods is now down
to half a pixel, I think. So, I was just trying to "re-calibrate" the
discussion and put it into context, where this minute difference - no
matter how theoretically right or wrong - becomes less important.

I do, however, agree that the orthogonal nature of truncation
(technically, a division by 256) somehow "feels" theoretically more
correct even if the difference between the two methods is only
noticeable in the rare situation where the full 16-bit dynamic range
is present. It's probably the programmer in me talking...

Don.
 
Don

You make a valid point that any ideal conversion should adaptively
weight the conversion to the actual levels used; however, the
implementation that you suggest would result in an uncontrolled level
and gain shift with every image. Rather, such an adaptive algorithm
would implement the transformation on a histogram level priority such
that the highest populated source levels would be transformed to the
closest corresponding luminance level (in each RGB or HSI parameter) of
the lower bit depth. The result would still waste 25% of the available
dynamic range, but that is just a fact of life - garbage in garbage out!

Always true: GIGO never fails.

The above example was just a throwaway illustration to indicate other
aspects of the context which, in most cases, override the difference
of half a pixel on which the discussion seems to have settled.

Don.
 
CSM1

Don said:
I realize that, but the difference between the two methods is now down
to half a pixel, I think. So, I was just trying to "re-calibrate" the
discussion and put it into context, where this minute difference - no
matter how theoretically right or wrong - becomes less important.

I do, however, agree that the orthogonal nature of truncation
(technically, a division by 256) somehow "feels" theoretically more
correct even if the difference between the two methods is only
noticeable in the rare situation where the full 16-bit dynamic range
is present. It's probably the programmer in me talking...

Don.

Just remember, when converting 16 bits to 8 bits, you are truncating 65536
possible steps to 256 steps, so for each 255 values of the 16 bits you get 1
value of 8 bits. 16 bit values 65280 to 65535 will all map to 8 bit value of
255. hex FF00-FFFF to hex FF.

Earlier, I had said Left shift 8, that is a multiply by 256, when I meant
Right shift 8 which is a divide by 256.
(I don't know my right from my left<g>)
 
Wayne Fulton

I realize that, but the difference between the two methods is now down
to half a pixel, I think. So, I was just trying to "re-calibrate" the
discussion and put it into context, where this minute difference - no
matter how theoretically right or wrong - becomes less important.

I do, however, agree that the orthogonal nature of truncation
(technically, a division by 256) somehow "feels" theoretically more
correct even if the difference between the two methods is only
noticeable in the rare situation where the full 16-bit dynamic range
is present. It's probably the programmer in me talking...


The two operations (8 bit and histogram levels) could of course be
combined into a histogram operation that outputs 8 bits; it seems trivial
to do, just one more subroutine call, but in practice this is not done.

It might be OK for many average images, but to me, the separate steps seem
better to give me choice. The problem with automation is that it is
normally pretty dumb. For example, the images of black cat in a coal mine
or the polar bear in a snow storm are surely much better if not
automatically manipulated to full range, otherwise we probably just have
a couple of drab gray pictures. I'd rather it be my choice.

8 bit conversion doesn't need to know about subject content (data
distribution), there are no modification choices present so it runs fine
unattended. Histograms can too, and do in many cases, but results are
often better with human visual input to judge and guide it.
 
Philip Homburg

The starting point is a function that maps numbers in the range [0.0 ... 1.0]
onto a set of integers. There are at least three methods:
1) divide by 255 and round. 0.0 maps to 0, 1.0 maps to 255 and everything
else has an average error of 0.25/255 or 1/1020.
2) divide by 256 and truncate. 0.0 maps to 0, 255/256 maps to 255.
Average error is 0.5/256 or 1/512
3) subtract 0.5/256, divide by 256 and round, 1/512 maps to 0, 511/512
maps to 255, and the average error is 0.25/256 or 1/1024.
Now you are getting ridiculous, Philip. It is impossible to map
numbers, even real numbers, from the range [0.0 .. 1.0] to a set of
integers by any of those methods!

1) divide by 255 results in real numbers in the range of [0.0 ..
0.00'3921568627450980'] and no amount of rounding will shift this range
to [0 .. 255]. Perhaps you mean multiply rather than divide.

Sorry. Of course you multiply when converting from small fractions to
integers. I guess I was still thinking about going from 16-bit to 8-bit.
Furthermore, even if you had specified methods which performed the
claimed operations the error you compute would still be irrelevant since
at no point in your argument do you define what that error is based on -
i.e. what is ZERO error or, in simple terms, what are you claiming to be
a perfect computation. Before you even begin to convince anyone that
your definition of perfect really is perfect, you must explain why that
is the case.

Given a set of integers s, an interval i, and a function f that maps
from s to i, I can define a function g that maps from i to s such that for
every v in i, |f(g(v)) - v| is minimal.

Given a probability distribution of the values in i, I can compute the
average error.

The 'perfect computation' is the function f.
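[As a sketch of that definition, take method 1 (f maps code n to n/255; g rounds back) and estimate the average error over uniformly distributed values; illustrative code only:]

    f = lambda n: n / 255          # integer code -> interval [0.0, 1.0]
    g = lambda v: round(v * 255)   # interval -> nearest integer code

    N = 100_000
    avg = sum(abs(f(g(k / N)) - k / N) for k in range(N + 1)) / (N + 1)
    print(avg, 0.25 / 255)         # both are close to 0.00098, i.e. 1/1020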
 
Don

Just remember, when converting 16 bits to 8 bits, you are truncating 65536
possible steps to 256 steps, so for each 255 values of the 16 bits you get 1
....

Actually, for each *256* 16-bit values you get 1 8-bit value... :)

....
value of 8 bits. 16 bit values 65280 to 65535 will all map to 8 bit value of
255. hex FF00-FFFF to hex FF.

Earlier, I had said Left shift 8, that is a multiply by 256, when I meant
Right shift 8 which is a divide by 256.
(I don't know my right from my left<g>)

Not a problem, I know what you meant.

I still get confused which is little-endian and which is big-endian
but I do know I hate the Intel-way and that includes segmentation...
;o)

Don.
 
Don

The two operations (8 bit and histogram levels) could of course be
combined into a histogram operation that outputs 8 bits; it seems trivial
to do, just one more subroutine call, but in practice this is not done.

It might be OK for many average images, but to me, the separate steps seem
better to give me choice. The problem with automation is that it is
normally pretty dumb. For example, the images of black cat in a coal mine
or the polar bear in a snow storm are surely much better if not
automatically manipulated to full range, otherwise we probably just have
a couple of drab gray pictures. I'd rather it be my choice.

8 bit conversion doesn't need to know about subject content (data
distribution), there are no modification choices present so it runs fine
unattended. Histograms can too, and do in many cases, but results are
often better with human visual input to judge and guide it.

I totally agree about the choice. Whenever I hear about some "new,
improved" feature that will change my life, my first question is
inevitably: "How do I turn it off?"... It draws some really
interesting blank stares by the sales drones rendering them speechless
for a split second... ;o)

Anyway, the gist of my message was to make use of most of the dynamic range
available, i.e. have each 8-bit value represent as few 16-bit values as
possible, thereby reducing conversion artifacts. As a side
effect this would, indeed, perform a sort of Auto-Levels. However, if
this is not desired one can always reduce the dynamic range of the
8-bit image after the conversion.

The advantage of that workflow over fixing the histogram in 16-bit
first and then converting to 8-bit is a marginal improvement, i.e. the
fixed 16-bit histogram would exhibit a "comb" appearance with missing
values.

But that's really "picking nits" now...

Don.
 
