How to calculate the 8-bit version of a 16-bit scan value

SJS

The 'reduced error argument' is not false, it is interpreted wrongly by
you. The error range in case of truncation is 0..1, while in case of
rounding it is -0.5..0.5. With rounding, the error is NEVER bigger than
0.5. This is a mathematical fact (as you seem to believe mathematics
more than your own eyes).

I disagree with this in this application. The number 255 (for instance)
refers not only to the point 255/256 on the number line but also all
values from this up to 256/256. The average value of all samples
recorded as 255 will actually be 511/512 (middle of the 255/256 to
256/256 interval) so the rounding has already been done.

We can't round up as we only have the intervals labelled 0 to 255
available so if we rounded 511/512 up we couldn't record it. The labels
are the lower end (not the middle) of their respective range on the
number line from 0 to 1.

Knowing that we can't round up, we can safely assume that 255/256
actually means 511/512 +- 1/512. Considering this, the error due to
truncation will never be greater than 1/512.
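A quick numeric check of this interval argument (my sketch, not part of the original post): every 16-bit value that truncates to the 8-bit code 255 lies in 0xFF00..0xFFFF, and the midpoint of that interval on the 0..1 number line is exactly 511/512.

```python
# All 16-bit values that truncate (>> 8) to the 8-bit code 255.
bucket = [v for v in range(65536) if v >> 8 == 255]

lo, hi = min(bucket), max(bucket)      # 0xFF00 .. 0xFFFF
midpoint = (lo + hi + 1) / 2 / 65536   # centre of the interval on the 0..1 line

print(hex(lo), hex(hi))      # 0xff00 0xffff
print(midpoint == 511 / 512) # True: the '255' label sits at 511/512 on average
```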

-- Steven
 
Roger Riordan

CSM1 said:
To convert 16 bit to 8 bit drop the upper 8 bits or divide by 256.
Do an integer divide to drop the fractions.

There are three 16 bit values in color, 16 bits of red, 16 bits of green and
16 bits of blue. For a total of 48 bits.

So, you would divide the Red value by 256, the Green Value by 256 and the
Blue value by 256, leaving 24 bit color.

This whole discussion reminds me very much of the learned discussions in the
Middle Ages about precisely how many angels would fit on the head of pin.

The first cause of confusion is the fact that the eye is nominally (more or
less) logarithmic in its response, while all digital sensors are linear. In his
equation Steven has attempted to take the nonlinearity of the eye into account
with his exponential term, but I don't think anyone has noticed this, and in any
case it is quite inappropriate, as both the sensor in the camera, and the
printer or screen used to examine the resultant picture, are (more or less)
linear.

Then we have had endless discussion as to whether we should round up, simply
truncate or divide by 257 to get from 16 bits to eight bits. I think all the
contributors in favour of rounding up have overlooked one small point, which is
that if you use integer arithmetic and round up by adding 80h and then shifting 8
bits right, any number of 0FF80h or above will be converted to zero. The
correct 8086 code for doing this is:

Add AX,80h   ; FF80 + 80 -> 0000 + Carry
Sbb AH,0     ;   00 -  1 -> FF
Shr AX,8     ; (NOT Sar AX,8)

If the important second line is omitted, rounding will have the unfortunate
effect of converting pure white into black.
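The same guard can be sketched in Python (my translation, not Roger's code): saturate after adding the rounding constant; without the guard, 16-bit wraparound turns near-white into black exactly as described.

```python
def round16to8(v):
    """Round a 16-bit value to 8 bits; the min() mirrors the Sbb carry guard."""
    return min(v + 0x80, 0xFFFF) >> 8

def round16to8_noguard(v):
    """Broken version: 16-bit wraparound, no carry correction."""
    return ((v + 0x80) & 0xFFFF) >> 8

print(round16to8(0xFFC0))          # 255 - pure white stays white
print(round16to8_noguard(0xFFC0))  # 0   - pure white becomes black
print(round16to8(0x0180))          # 2   - ordinary values round as expected
```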

Apart from this the whole discussion has been totally irrelevant from a
practical point of view. The only difference between the two methods is that if
you had a picture of the sky, grading evenly from pure black to pure white (or
pure blue), the pure black band would be slightly narrower, and the pure white
band slightly wider if you used rounding. From a purely theoretical point of
view, I would round, because it seems more logical that 0.6 should be converted
to one, rather than to zero. But I cannot imagine any practical circumstances
in which the difference could be detected by examining the resulting image by
eye.

Apart from anything else, the photographer will always try to ensure that the
scene does not cover the whole range of the sensor, so that he does not lose
detail at either end. And then, in 99% of cases, he will use Photoshop 'Curves'
or the like to distort the dynamic range of the image to give a result which is
more to his taste. After this any discussion as to the relative merits of
rounding or truncating is totally academic. The only proviso is that the
rounding/truncating should be the last thing done to the image, after all the
manipulating has been done in 16 bits.

Roger Riordan AM
 
CSM1

The minimum non-zero 16-bit scan value my scanner can produce is 4. Applying
the above formula gives me an 8-bit value of 3 so I should never get an
8-bit value of 1 or 2 in a raw file. These are colour files not greyscale.
CSM1: I am going to correct my statement above. You do not drop the bits,
you shift them right. The correct method is to Right shift 8, which is the
same as divide by 256.
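CSM1's corrected recipe, as a sketch (the function name is mine): shift each 16-bit channel right by 8, which is the same as an integer divide by 256.

```python
def to24bit(r16, g16, b16):
    """Reduce 48-bit RGB to 24-bit by truncating each 16-bit channel to 8 bits."""
    return (r16 >> 8, g16 >> 8, b16 >> 8)

print(to24bit(0xFFFF, 0x8000, 0x0000))  # (255, 128, 0)
```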
 
Jens-Michael Gross

Don said:
Let me try one last time with another example. I'll use hex because
it's easier.

Is it? Well, if you think so... It's all numbers - and after all, it's
brightness levels we're talking about. Steps between dark and light,
black and white. No matter how you express them in numbers.
Assume a 16-bit image where only values from $0000 to $7FFF are
present. All values above i.e. $8000-$FFFF have *zero* pixel count.

Converting this image to 8-bits in the conventional (orthogonal) way
would squeeze 256 16-bit values in one single 8-bit value.

An adaptive, dynamic algorithm would realize there are no values in
the upper range and would ignore it. In other words, instead of
mapping values $0000-$FFFF to $00-FF it would only map values
$0000-$7FFF to $00-FF. The result of this is twofold:

Nice. Since our 0..ff range means no brightness to full brightness (as
0000 and ffff meant black and bright white), this dynamic adaptation
would turn a half-bright gray into bright white.
It might please a programmer's heart, but since our 256 8-bit levels are
a fixed brightness range, and so are the original 64K values, an adaptive
algorithm would lead to a data format that only the programmer of this
algorithm would be able to interpret.
The converted data would require a translation table that tells the
interpreting application that the 256 values are just the lower half of
a 9-bit brightness value.
1. Smoother conversion because "only" 128 16-bit values are now mapped
to a single 8-bit value, resulting in less banding.
2. Increased contrast because, in effect, auto-contrast was performed.

No color shift takes place.

I think what you may be missing is that the *input* is reduced, not
output. Also, the reduced input "pool" of data is *contiguous*.

Yes, it is - and it needs its own lookup table of 256 16-bit values to
be interpreted. Which is supported by - let me guess - about zero
applications worldwide (maybe one, if you write it).
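For concreteness, the adaptive scheme under discussion might look like this (my sketch; Don's post contains no code). It rescales by the largest value actually present, which is precisely what turns half-bright gray into full white:

```python
def adaptive_16to8(pixels):
    """Map the occupied 16-bit range 0..max(pixels) onto 0..255."""
    peak = max(pixels)
    return [p * 255 // peak for p in pixels]

# Image values stop at 0x7FFF: half of full scale comes out as pure white.
print(adaptive_16to8([0x0000, 0x4000, 0x7FFF]))  # [0, 127, 255]
```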

Grossibaer
 
Jens-Michael Gross

Kennedy said:
Yes we are, and whether you have seen 65536 in a 16-bit level really
depends on what the 65536 refers to. If it refers to a peak level then
certainly it cannot exist in only a 16-bit scale. However, if it refers
to the number of available states (as it *does* in this case) then 65536
discrete states certainly do exist in the data range described by a 16-bit
number. You are using the limited mathematics of Ancient Mesopotamia -
prior to the time when zero was recognised as being important. Without
zero the development of mathematics could not have progressed beyond
your primitive argument.

Anyone who is capable of understanding the difference between the number
of discrete states or levels and the peak value of any number would
care.
I count it twice because it exists twice - in the original data set
*and* in the final data set!

No, you count it twice in each one with your argumentation.
You talk about both, the ZERO state and the 65536 value. If you talk
about 65536 states, you're correct. Zero to 65535. Or one to 65536 (this
makes every formula more complex). But NOT BOTH.
If you want your mapping function to give the number of states in the
range as a result, then the very same mapping function may never give
zero as result. As this would mean it is mapping to one state too many.
If we were building an aqueduct the ignorance of the significance of
zero would be irrelevant. However we are defining a conversion from one
range of luminance descriptors to another - and since zero exists in
each range it cannot be ignored, despite your persistent failure to
recognise its significance.

I do not deny its significance - I only do not count the zero state more
significant than all others. In opposition to you who seem to find it so
significant that you count it twice, once at the beginning, then again
to define the end of your calculation range.
Please use either 0..255 or 1..256 but not 0..256 in your calculations
and examples (and your reasoning).
And do not constantly ignore the fact that the visible result of one or
the other way is way more important than any mathematical arguing.

Sorry to say so, but if I knew that it was you who wrote a particular
program, I would maybe admire the mathematical beauty of your solution -
and buy a different product.

Grossibaer
 
Jens-Michael Gross

Kennedy said:
The human visual experience is applicable to both coding schemes in an
identical manner, black is just as black on the 16bit colour as it is in
8 bit colour scheme and white is equally white. Consequently the human
visual experience and its linearity or non-linearity is completely
irrelevant to this argument. What matters is the actual luminance range
and the consistency of luminance distributions in images as expressed in
each range. The corruption of histograms in one conversion is a clear
indication of its inferiority with respect to others.

Obviously you still didn't try it with a real image and stick with your
mathematical model, which will not give the visually best result.
Maybe you'll be able to talk death away from your dying bed, but this
makes no difference in this matter.
I don't want to watch a histogram, I want to have an image that looks
best.
And experience tells me that I'm right and you're wrong (and believe me,
at first I tried it your way. I REALLY TRIED it and did not just think
that it has to be wrong)
Try it and *count* the pixels in each range. Try some basic arithmetic
too. 14 levels contain 17 pixels each, which takes a total of 238
pixels, leaving a total of 18 pixels in black and white.

And that's exactly what is desired in this case.
Since you
*argue* (rather than experiment, which demonstrates that you are wrong)

And here you're completely wrong. I tried. And I didn't just try once. I
tried with several image sources and different conditions.
And the difference is just piercing your eyes. So it's obvious, who of
us does argue instead of trying.
that only 8 pixels exist in black and white, you have now a total of 254
pixels, the remaining two having somehow evaporated into the digital
aether.

You can obviously not even _count_ right. It's 9 for black and white.

Maybe you can grab the idea (and the truth behind) if you lift your
value range from its massive bottom of being 0..255 only.

Take 16 levels of 17 values. In the middle of each of these 17 values
per level is the point where the color is 100% correct.
At the edges we have the ranges for black and white. Both 17 values
wide. With pitch black and bright white in the middle.
Since there's nothing blacker than black or whiter than white, we can
cut off the 8 values below black and above white.
Giving us 16 values (of the original 256) being exactly identical and
240 values which are shifted by 1 to 8 positions.

With the /16 method, we would still get 16 values which are correct, but
240 which are shifted by 1 to 15 positions from their original values.
Maybe the histogram doesn't look as linear as you'd prefer, but I prefer
colors that are -8 to 8 from their 'real' values instead of those which
are 0 to 15 off.
On the contrary, I *did* implement the 8-4 conversion (and the 8-1
conversion as well) with both real images and a linear ramp. Had you
done so then you might have discovered that the pixels/levels you claim
have evaporated into the aether are actually there in the black and
white levels.

The linear ramp is completely useless. It tells your mathematical mind
what it expects to see, but it does not help at all to judge the (real
life) quality of the result.
And for the rest... I wonder when you last met your doctor to check your
eyes.

Following your former argumentation, I guess your 8-1 conversion is by
dividing by 128 (taking the MSB as the value for black or white). Right?
Well, mathematically correct and practically useless. I wonder why a
value of 127 should be mapped to black and 128 mapped to white while
both are originally almost identical.


Maybe I'm a moron, but when there are two wires on a powerline and one
does nothing and the other one shocks me, you can tell me as much as you
want that this is AC and the polarity of the wires doesn't matter. This
is technically right, but if I had to touch one of these wires, I would
always choose the one that's not shocking me.

And I will always prefer the conversion method that gives the better
visual result, no matter what method gives the nicer histogram or
whatever.


Grossibaer
 
Jens-Michael Gross

SJS said:
I disagree with this in this application. The number 255 (for instance)
refers not only to the point 255/256 on the number line but also all
values from this up to 256/256.

No. There's no 256 if we start at 0.
Either 255 is the highest value possible (and there's no 255.0000000001)
or 255 refers to everything above 255 (including, but not limited to
1000).

We're not dealing with absolute numbers here, we're dealing with
brightness values. And they range from 0 photons to unlimited photons.
And since we cannot measure both limits, we have defined 0 being the
lowest brightness we can detect and 255 the highest one.

I admit it would be easier if we could count from 0 to 256, but this
would give us 257 different values, and the limitation of our
one-byte-per-value storage (the only reason why we do not store real
values) forbids this.
We can't round up as we only have the intervals labelled 0 to 255
available so if we rounded 511/512 up we couldn't record it. The labels
are the lower end (not the middle) of their respective range on the
number line from 0 to 1.

Now you're mixing up source and destination.
For an 8-4 conversion, we have 16 labels.
With truncation, we have each label sit on the bottom of its
representation range.

0 stands for 0..15 and 1 stands for 16..31 and so forth.

After conversion, a brightness of 15 is considered black (which is 15
original steps off its real meaning) and 16 is considered 1 (which fits
exactly).

By adding 8 and dividing by 17 (rounding) we also have 16 labels, but 14
of them are placed in the middle of their representing range and the
other two, well, are doing so too if you imagine that values darker than
black and brighter than white are also part of their range. At least
they are placed at their exact representation and the distances between
all labels are equal.

With rounding, a brightness of 8 is mapped to 0 (which is 8 off), 9 is
mapped to 1 (which is also 8 off) and 15 is also mapped to 1 (which is
only 1 off)

So while with truncation the error increases from 0 to 15 and then
suddenly jumps to 0 again, being worst where it is closest to the next
label, you'll get with rounding an error that grows from 0 to 8 and then
shrinks back to 0 again, being worst where it is equally distant from
both nearest labels. Which should be more logical. And which gives a way
more pleasant result. WAY more.
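The two error profiles can be tabulated directly (my sketch; like the post, it takes 17*label as the 8-bit value each 4-bit label stands for):

```python
def trunc_err(v):
    """Truncation: label = v // 16, so v sits up to 15 above the ideal 17*label."""
    return abs(v - 17 * (v // 16))

def round_err(v):
    """Rounding: label = (v + 8) // 17, centred on the ideal 17*label."""
    return abs(v - 17 * ((v + 8) // 17))

print(max(trunc_err(v) for v in range(256)))  # 15 - worst case for truncation
print(max(round_err(v) for v in range(256)))  # 8  - worst case for rounding
```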

Grossibaer
 
Kennedy McEwen

Jens-Michael Gross said:
Kennedy McEwen schrieb:



And that's exactly what is desired in this case.
Really, so why did you state that "black and white are represented by 8
values each (not 9!)"?


You can obviously not even _count_ right. It's 9 for black and white.

Precisely. I refer you to your *OWN* posting of 18th June at precisely
16:28.24GMT where you stated categorically that "black and white are
represented by 8 values each (not 9!)". In short, you disputed my
original statement that these tones were represented by 9 values, and
now have the temerity to claim that *I* have difficulty counting!

I suggest you view the record of your statements as recorded by any of
the many archiving systems - such as Google. There is no doubt that not
only did you claim there were 8 levels of black and white using this
technique but actually disputed the statement of the truth that there are
in fact 9 levels!
Maybe you can grab the idea (and the truth behind) if you lift your
value range from its massive bottom of being 0..255 only.
Maybe you can stop trying to convince us that your argument is black and
white when you can't even remain consistent from one day of the week to
the next!
Take 16 levels of 17 values. In the middle of each of these 17 values
per level is the point where the color is 100% correct.

No - that is the complete fallacy of the argument. The central level is
*not* where the colour is 100% correct! It is true for some colours but
certainly not for all. For example, peak white is peak white at level
15 on the 4 bit scale, but level 255 in the 8 bit scale. The "100%
correct" level in that case certainly is *not* the central level, it is
the upper extremity. Similarly, peak black is represented by 0 in both
scales, again this is not the central level, but the lower extremity.
Maybe the histogram doesn't look as linear as you'd prefer, but I prefer
colors that are -8 to 8 from their 'real' values instead of those which
are 0 to 15 off.
That is because the levels that you refer to as "real" are the wrong
levels! You assume that the central value is correct, when in fact the
correct level in true luminance terms is only the central value for the
central colour in the range - for those below that it gradually tends
towards the lower extremity of the range and for those above it tends to
the upper extremity, reaching those extremities at the ends of the
range.
The linear ramp is completely useless. It tells your mathematical mind
what it expects to see, but it does not help at all to judge the (real
life) quality of the result.

What drivel - it is as useful a test as any real image. Why should the
transform work for a ramp and not for real images? Why should it work
for real images and not a ramp? What if the real image *is* a ramp?
And for the rest... I wonder when you last met your doctor to check your
eyes.
Obviously more recently than you - and I checked my calculator works and
gives consistent results, rather than having to contradict myself every
two days, like yourself!
Following your former argumentation, I guess your 8-1 conversion is by
dividing by 128 (taking the MSB as the value for black or white). Right?
Well, mathematically correct and practically useless. I wonder why a
value of 127 should be mapped to black and 128 mapped to white while
both are originally almost identical.
It's called thresholding - the differential has to be drawn somewhere,
rather than by your equation which results in the entire range being
converted to one level when two exist. If you weren't such a pigheaded
moron then you would see that!
Maybe I'm a moron, but when there are two wires on a powerline and one
does nothing and the other one shocks me, you can tell me as much as you
want that this is AC and the polarity of the wires doesn't matter. This
is technically right, but if I had to touch one of these wires, I would
always choose the one that's not shocking me.

And I will always prefer the conversion method that gives the better
visual result, no matter what method gives the nicer histogram or
whatever.
So why do you choose the conversion you suggest when it gives neither a
better image nor a correct histogram?
 
Jens-Michael Gross

Roger said:
This whole discussion reminds me very much of the learned discussions in the
Middle Ages about precisely how many angels would fit on the head of pin.

Only, if you do not intend to interact with these angels - or use the
result of this discussion to convert any image.
The first cause of confusion is the fact that the eye is nominally (more or
less) logarithmic in its response, while all digital sensors are linear. In his
equation Steven has attempted to take the nonlinearity of the eye into account
with his exponential term, but I don't think anyone has noticed this, and in any
case it is quite inappropriate, as both the sensor in the camera, and the
printer or screen used to examine the resultant picture, are (more or less)
linear.

I noticed this and indeed it is irrelevant since we're talking about
converting digital data to digital data.
And the resulting data (24 bit RGB) uses a fixed range of colors as well
as the original 48 bit data (and therefore things like adaptive
conversion or whatever are irrelevant too).
Then we have had endless discussion as to whether we should round up, simply
truncate or divide by 257 to get from 16 bits to eight bits. I think all the
contributors in favour of rounding up have overlooked one small point, which is
that if you use integer arithmetic and round up by adding 80h and then shifting 8
bits right, any number of 0FF80h or above will be converted to zero. The
correct 8086 code for doing this is:

Add AX,80h   ; FF80 + 80 -> 0000 + Carry
Sbb AH,0     ;   00 -  1 -> FF
Shr AX,8     ; (NOT Sar AX,8)

If the important second line is omitted, rounding will have the unfortunate
effect of converting pure white into black.

Indeed, but the code implementation was never part of this discussion.
On current processors (and I wouldn't try to do a conversion of a 128MB
image on an XT :) ) and with current compilers you'd use a 32-bit
register (and 32-bit integers) and all is well. The penalty is small.
After all, 'int' on current compilers means 32 bit by default anyway;
you need to declare variables as 'short int' to get 16-bit values.
And your code above does not divide by 257 anyway ;)

Apart from this the whole discussion has been totally irrelevant from a
practical point of view. The only difference between the two methods is that if
you had a picture of the sky, grading evenly from pure black to pure white (or
pure blue), the pure black band would be slightly narrower, and the pure white
band slightly wider if you used rounding.

No, black AND white would be narrower - or at least they would seem to
be, as the original real world sky might have had white brighter than
what was taken as white, or black darker than what the sensor has
considered black.
From a purely theoretical point of
view, I would round, because it seems more logical that 0.6 should be converted
to one, rather than to zero. But I cannot imagine any practical circumstances
in which the difference could be detected by examining the resulting image by
eye.

Well, for a 48 to 24 bit conversion this might be mostly true.
For an 8 to 4 bit conversion the difference is easy to detect and clearly
visible - even for the untrained eye.

As a matter of fact, I have tried both methods on an 8-4 conversion and
showed the original and the resulting images to several persons.
ALL of them (no exception) told me that the 'rounded' version was by far
the better one - without knowing of any mathematical formula or even
knowing why I asked. All they've seen is the original image and two
converted images and they had to tell me which of the two looks closer
to the original. No further explanation. Just this data and this simple
question. And a clear vote towards the 'rounding' method.

Apart from anything else, the photographer will always try to ensure that the
scene does not cover the whole range of the sensor, so that he does not lose
detail at either end.

Indeed. But what the photographer has done or not is unimportant, as we
only see the result of his work.
And then, in 99% of cases, he will use Photoshop 'Curves'
or the like to distort the dynamic range of the image to give a result which is
more to his taste.

And it's up to him to do so if he wants (give people a toy and they will
play with it. This is how Windows got its market dominance)
After this any discussion as to the relative merits of
rounding or truncating is totally academic. The only proviso is that the
rounding/truncating should be the last thing done to the image, after all the
manipulating has been done in 16 bits.

And then it should preserve the original as well as possible, so all the
toying with the curves isn't nullified by a bad conversion.
Well, maybe a bad conversion is one of the reasons why the photographer
used the curves - to precompensate for the quality loss of the bad
conversion. ;)

Grossibaer
 
Kennedy McEwen

Jens-Michael Gross said:
No, you count it twice in each one with your argumentation.
You talk about both, the ZERO state and the 65536 value.

Liar!
I have never referred to a 65536 value and I challenge you to cite a
post in this thread where I have!
If you talk
about 65536 states, you're correct.

Precisely what I have done throughout!
So, having admitted that I am correct, I accept your intrinsic apology
for being wrong and wasting everyone's time!
Zero to 65535. Or one to 65536 (this
makes every formula more complex). But NOT BOTH.

I have *never* used both - you clearly have some problems reading Usenet
posts and assigning them to specific authors, although I grant you that
this may be because the argument is not in your native language. You
are doing better than I would if the discussion were in German, but I
would not be so stupid as to try!

The number of states is what determines the divisor in the conversion.
In one case we have 65536 states, in the other 256 states. The divisor
is thus 65536/256=256, *NOT* 257, which is a premise based entirely on
the wrong information, the *peak* levels in each of the ranges, and as
such would depend on exactly what the peak level actually is in the 8
and 16-bit number ranges (eg, are the ranges both positive integers,
ones complement, twos complement, or one in one range whilst the other
is in a different one - all of which yield different divisors, proving
that the peak level is completely the wrong value to use).
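A quick endpoint check of the two candidate divisors (my sketch, from neither post): both send 0 to 0 and 65535 to 255; they only disagree in between, for example at mid-scale.

```python
# Compare shift-by-8 (divide by 256) against divide-by-257 at a few points.
for v in (0x0000, 0x8000, 0xFFFF):
    print(f"{v:#06x}: >>8 gives {v >> 8:3d}, //257 gives {v // 257:3d}")
# Endpoints agree (0 -> 0, 0xFFFF -> 255); mid-scale differs (128 vs 127).
```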
If you want your mapping function to give the number of states in the
range as a result, then the very same mapping function may never give
zero as result. As this would mean it is mapping to one state too many.
Rubbish! Simple truncation maps the number of states correctly *and*
gives zero as a result.

Proof by contradiction - your premise is null and void. End of story!
 
Kennedy McEwen

Jens-Michael said:
Yes, black and white are
represented by 8 values each (not 9!)
Jens-Michael said:
You can obviously not even _count_ right. It's 9 for black and white.

9 days to calculate an arithmetic sum correctly and then he tries to
blame someone else for his original error! Or perhaps he hoped that his
original message had expired before he changed his story - just as well
archive services exist!

Jens-Michael Gross *YOU ARE A TROLL!!*
 
Jens-Michael Gross

Kennedy said:
Liar!
I have never referred to a 65536 value and I challenge you to cite a
post in this thread where I have!

You should read twice what you write before you post it.
In your formula, you calculate zero-based and then work with 65536 as
part of the function range.
I'm really too lazy to search back through the posts. And I have better
things to do than trying to convince someone of the truth who doesn't
want to listen.
Precisely what I have done throughout!
So, having admitted that I am correct, I accept your intrinsic apology
for being wrong and wasting everyone's time!


I have *never* used both - you clearly have some problems reading Usenet
posts and assigning them to specific authors,

Not at all. Your argumentation was about me ignoring the number (!)
65536 in a formula that is zero-based and may therefore never use 65536
as a value.
So either I misunderstood you and you were agreeing with me all the time
(then I'm sorry, but I really doubt that), or, well, maybe you lost
yourself in your mathematical reasoning. ;)
although I grant you that
this may be because the argument is not in your native language. You
are doing better than I would if the discussion were in German, but I
would not be so stupid as to try!

Well, thanks for the compliment part of this statement. But after all, 0
or 256 or 257, as well as other parts of the mathematical world, are
language independent, so there cannot be any misunderstandings in the
numbers, be it in my native language or yours. ;)
The number of states is what determines the divisor in the conversion.
In one case we have 65536 states, in the other 256 states. The divisor
is thus 65536/256=256

.... and back again to reality-removed, plain mathematical programmer's
thinking.

This is why I'm proud that I'm not a graduated programmer but a
graduated engineer.
One nice example why I think that programmers shouldn't talk about real
world issues is the robot arm in our university. The firmware has been
programmed by a 'real' programmer.
Its base joint range was 360 degrees, with a contact switch at zero
degrees to determine the zero position.
Based on these plain, simple and clear mathematical facts, the firmware
causes the robot arm to turn clockwise at power up until it triggers the
zero switch. Sounds reasonable, simple and effective.
Well, power up the arm twice and it will rip its own power cable off the
wall. Oh, power cable - well, this was not part of the description of the
base joint. So the firmware was mathematically correct and maybe the
source code was beautiful to read and I don't doubt that the programmer
got the best grade for his work - but when we had to work with his
product, the robot arm was more in repair than in use. Due to his
'mathematically correct' firmware.

So let's stop this discussion here, anything else is completely
fruitless.
Continue writing programs that are mathematically correct and I continue
writing software (and designing hardware) which actually works in a real
world environment.
We're both happy then.
The losers are your customers, but well, Microsoft does it the same way
and perhaps you'll get rich in future. ;)

Grossibaer
 
Jens-Michael Gross

Kennedy said:
9 days to calculate an arithmetic sum correctly and then he tries to
blame someone else for his original error! Or perhaps he hoped that his
original message had expired before he changed his story - just as well
archive services exist!

30 minutes to generate another insult you can add to your previous post.
Great. Practice a bit more and you'll get it under 10 minutes. And
perhaps in the distant future you'll be able to put it into the same post ;)

If everything fails, a good insult might help. And discrediting someone
is always more effective than having the better argument.

Grossibaer
 
Jens-Michael Gross

Kennedy said:
Really, so why did you state that
?

Maybe I have been weary after 10 hours of programming and circuit
design. And (despite your later insulting statements) I took the time
now to find that old posting (no archive required) you are referring
to.
Yes, I have been wrong, and it was of no importance at all at this
point of the discussion (I could have omitted any value without changing
the statement).
But you adopted this mistake later to prove your own argumentation.
Well, maybe this was meant ironically (see, I give you a hole to escape.
Ain't I nice?) and it escaped my notice.
No - that is the complete fallacy of the argument. The central level is
*not* where the colour is 100% correct! It is true for some colours but
certainly not for all. For example, peak white is peak white at level
15 on the 4 bit scale, but level 255 in the 8 bit scale. The "100%
correct" level in that case certainly is *not* the central level, it is
the upper extremity. Similarly, peak black is represented by 0 in both
scales, again this is not the central level, but the lower extremity.

Once again: if you look at the plain numbers alone, you're
mathematically correct. But looking at the purpose of the whole thing,
you're wrong, because numbers are just a snippet of reality.
You're ASSuming that black (represented by 0) is the bottom of all and
white (represented by 255) is the peak of all. This is true as long as
your world is the world of numbers alone. But in this particular case,
the numbers are just representations of brightness levels. And a number
of zero does not mean zero photons, and a number of 255 does not mean
infinite photons.
And the results I got with real world data prove me right, no matter
how often you repeat that I am mathematically wrong.
This is about scanners and images and image data conversion. This is not
comp.mathematics or something like that.
That is because the levels that you refer to as "real" are the wrong
levels!

Depending on the gamma and the calibration and the color temperature of
the monitor used, or the settings of the printer, or whatever, none of the
colors is 'correct' or 'wrong'. But what looks best IS best - no matter
how mathematically correct the calculation was.
And that's all what interests people who want to convert images.
What drivel - it is as useful a test as any real image. Why should the
transform work for a ramp and not for real images?

Because your eyes are adaptive and easy to fool?
Can you tell light gray from bright white or very dark gray from black
if you only see one of them?
If so, you'd be an enigma.
And your color ramp only tells you that there are a number of colors
which seem to be equidistant in brightness.

I bet I can give you any color ramp with 256 equidistant levels and if
you only see one at a time you couldn't tell one from the other.

That's life, that's human.
Why should it work
for real images and not a ramp? What if the real image *is* a ramp?

Ramps are generated and do not need to be converted. If you need a ramp,
generate it.
There are so many examples of generated images (ramps, checker boards,
crossing lines) where your eyes will tell you that the lines are bent,
the white has black dots and whatever. Or images where the lines look
linear and they are not. Put them through a mathematically correct
conversion and you'll swear that the original and the result are two
completely different things.
Obviously more recently than you

What's obvious and what not, there we disagree obviously (or will you
deny even that?) ;)
and I checked my calculator works and
gives consistent results, rather than having to contradict myself every
two days, like yourself!

I never suspected your calculator being wrong, only your use of it.

Its called thresholding

And gives more than lousy results.
So why do you choose the conversion you suggest when it gives neither a
better image nor a correct histogram?

It gives the better image (and that's not just my opinion, I asked quite
a few people, only giving them the original image and the two converted
ones, asking them which of the conversions is the better one - all of
them preferred 'my' version). And I don't care for the nice histogram if
this gives the worse result.

I don't stick to mathematics if reality gives me a better result.

If all people would stick to linear mathematics, Newton's mechanics would
still be state-of-the-art and Einstein would never have written his far
superior theories (which proved much better at explaining some but not all
things that cannot be explained at all with linear math)

I bet if you find that reality doesn't fit your mathematics, you'd try
to change the reality - or convince the people to ignore the reality.

Grossibaer
 
D

Don

Yes, it is - and it needs its own lookup table of 256 16 bit values to
be interpreted. Which is supported by - let me guess - about zero
applications worldwide (maybe one, if you write it).

No, it doesn't need its own lookup table and if you think it does, it
shows you don't really understand.

Like I said last time, you just can't seem to grasp the concept.

I explained it quite clearly already and there's no point in repeating
as it doesn't seem to be getting through.

Don.
 
K

Kennedy McEwen

Jens-Michael Gross said:
Maybe I have been weary after 10 hours of programming and circuit
design. And (despite your later insulting statements) I took the time
now to find that old posting (no archive required) you are referring
to.
Yes, I have been wrong, and it was of completely no importance at this
point of the discussion (I could have omitted any value without changing
the statement).

You could only have omitted it by *not* attempting to correct what was
already a correct statement, that the black and white tones were
represented by 9 levels in your conversion!
But you adopted this mistake later to prove your own argument.

No, I explained why your mistake was a mistake - at no time did I ever
"adopt" it.
Once again: if you look at the plain numbers alone, you're
mathematically correct. But looking at the purpose of the whole thing,
you're wrong, because numbers are just a snippet of reality.
You're ASSuming that black (represented by 0) is the bottom of all and
white (represented by 255) is the peak of all. This is true as long as
your world is the world of numbers alone.

No, it is true in the output level from your video card. 255 and 15
represent the same identical peak white output in their relevant scales
- there is no whiter level. Similarly there is nothing blacker than 0
and this is identical in both scales.
But in this particular case,
the numbers are just representations of brightness levels. And a number
of zero does not mean zero photons and a number of 255 does not mean
infinite photons.

Perhaps you will enlighten us all with your interpretation of an output
level which is darker than 0, in either scale, together with outputs
which are lighter than 15 and 255 in their relevant scales!
And the results I got with real world data prove me right, no matter
how often you repeat that I am mathematically wrong.

No, the results that you got prove that your video card and screen gamma
are incorrectly set! Recall that I have also implemented both
conversions and found the difference to be negligible other than
a slightly coarser conversion in the mid tones using your preferred
method. Of course, you prefer to accuse me of *not* doing this, which
amounts to no more than an accusation of lying! Still, there is little
hope of you actually reading and understanding my posts when you are
patently unable to read and maintain consistency in your own!
This is about scanners and images and image data conversion. This is not
comp.mathematics or something like that.
Precisely, which is why luminance output and *not* which conversion
produces minimum error against an arbitrary mathematical rounding scheme
is what matters. You are the correspondent who has been continually
referring to one conversion producing a lower error than the other, yet
you have only been able to quantify that error in mathematical terms
against some arbitrary numerical reference, rather than in luminance
terms. Even though this is what occurs when a luminance histogram is
examined, you still claim that the mathematical error is the better
assessment! (If in doubt, see your post of 18th June - requoted here
for your benefit:
"The error range in case of truncation is 0..1, while in case of
rounding it is -0.5..0.5. The error is NEVER bigger than 0.5. This is a
mathematical fact...")

Now who was citing differences of mathematical errors between the
techniques? I seem to recall stating some time back in this thread that
the average error of both methods cancelled out in luminance terms, but
I shall leave that to you to find, and read your own ludicrous responses
to those very words.
Depending on the gamma and the calibration and the color temperature of
the monitor used, or the settings of the printer, or whatever, none of the
colors is 'correct' or 'wrong'. But what looks best IS best - no matter
how mathematically correct the calculation was.

Yes, and with a properly calibrated screen you will find that the even
distribution, giving equal weighting to all of the final tones (instead
of slightly over half the weighting to the extreme tones) results in a
matching image.
And that's all what interests people who want to convert images.


Because your eyes are adaptive and easy to fool?
Can you tell light gray from bright white or very dark gray from black
if you only see one of them?

We are not discussing a single tone conversion, so seeing "only one of
them" is irrelevant in this context. You see the complete tonal range
in both an image and a greyscale ramp, so my question still stands - why
do you consider a ramp to be an unsuitable test, particularly when many
real images actually contain full scale and near full scale ramps, with
and without embedded texture?
Ramps are generated and do not need to be converted. If you need a ramp,
generate it.

By coincidence, I have in front of me a Fuji Velvia image, shot in
Havana last year, of a model standing in front of a doorway in a
whitewashed wall. The dark doorway behind the model, a white walled
hallway, shows a near perfect grey ramp from white just behind the model
to deep black in the building interior. No synthetic generation
involved, a totally natural and not uncommon type of image. So I ask
again, why you find a ramp an unsuitable test and what happens when the
image *is* a ramp?
There are so many examples of generated images (ramps, checker boards,
crossing lines) where your eyes will tell you that the lines are bent,
the white has black dots and whatever. Or images where the lines look
linear and they are not. Put them through a mathematically correct
conversion and you'll swear that the original and the result are two
completely different things.
We are not discussing optical illusions. That is why a histogram is
important - optical illusions only affect one aspect of the view and a
histogram gives another perspective.

Answer the question - this test is *not* an optical illusion.
What's obvious and what not, there we disagree obviously (or will you
deny even that?) ;)


I never suspected your calculator being wrong, only your use of it.
Liar! I quote (from your message of 27th June):
"You can obviously not even _count_ right."

However, as you have already demonstrated, you cannot get a consistent
result from a calculator from one day to the next - or, for that matter,
understand text which clearly states the paradox that "your argument"
produces.
And gives more than lousy results.

Less lousy than an all black or all white? We can only thank God that
Messrs Floyd and Steinberg arrived on the scene before you did and
managed to develop their method of producing an image when only two
output tones are present!
I don't stick to mathematics if reality gives me a better result.
So why has this been the crutch your argument leans on?
If all people would stick to linear mathematics, Newton's mechanics would
still be state-of-the-art and Einstein would never have written his far
superior theories (which proved much better at explaining some but not all
things that cannot be explained at all with linear math)
It's obviously news to you that both special and general relativity *are*
linear mathematics! I suggest you read Einstein's 1905 paper in which
he states that the entire theory depends on the known laws of physics
(and hence the mathematics which underpins them) applying in all
reference frames.

Take some free advice: when you are out of your depth, stop digging!
If you don't know a subject then don't introduce it as an analogy.
 
K

Kennedy McEwen

Jens-Michael Gross said:
30 minutes to generate another insult you can add to your previous post.
Great. Practice a bit more and you'll get it under 10 minutes. And
perhaps in the distant future you'll be able to put it into the same post ;)

If everything fails, a good insult might help. And discrediting someone
is always more effective than having the better argument.
The discredit was of your own making. I merely documented it as
evidence for future generations who would lose interest due to your
inconsistency.
 
K

Kennedy McEwen

Jens-Michael Gross said:
You should read twice what you write before you post it.
In your formula, you calculate zero-based and then work with 65536 as
part of the function range.

Indeed I did, because that is exactly the case. The zero exists and
defines the base level. There are still 65536 states in the range,
which is the number that I used.
I'm really too lazy to search back through the posts. And I have better
things to do than trying to convince someone of the truth who doesn't
want to listen.
<snip intervening text to highlight the contradiction you make by
juxtaposition of your own statements>
Not at all. Your argument was about me ignoring the number (!)
65536 in a formula that is zero based and may therefore never use 65536
as a value.

The words I used were "You ignore the zero state" - at no time did I
refer to the number 65536 except in reference to the total number of
states in the range.
So either I misunderstood you
Clearly!

and you were agreeing to me all the time

Clearly not!
(then I'm sorry, but I really doubt that), or, well, maybe you lost
yourself in your mathematical reasoning.

Unlikely given the first statement.
But after all, 0
or 256 or 257 as well as other parts of the mathematical world are
language independent, so there cannot be any misunderstandings in the
numbers, may it be my native language or yours. ;)
The numbers are the same, but what they refer to requires language other
than mathematics. You have repeatedly accused me of using 65536 in
reference to a peak level, which I certainly have not done, suggesting
your misunderstanding of my reference to the number of unique states in
the range.
... and back again to reality-removed plain mathematical programmer's
thinking.
Nothing whatsoever to do with programming or mathematics. Just logic
and visible assessment of the luminance output on a properly setup
display. Irrespective of how you argue it, deviation from that simple
division of the number of states eventually leads to a paradox when
extended to the limits - as demonstrated by the case of reduction to
1-bit output. You might not like the fact that 127 goes to black and
128 to white in such a reduction, but you have yet to suggest an
alternative which does not result in the loss of all image content as a
single level.
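For concreteness, the even split Kennedy describes can be sketched in a few lines of Python (my own illustration, not code from any poster): the 256 input states are divided evenly, so 0..127 go to black and 128..255 go to white.

```python
def to_1bit(value: int) -> int:
    # Reduce an 8-bit grey level (0..255) to 1 bit by splitting the
    # 256 input states evenly: 0..127 -> 0 (black), 128..255 -> 1 (white).
    return value // 128

print(to_1bit(127), to_1bit(128))  # 0 1 - exactly the split described above
```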
This is why I'm proud that I'm not a graduated programmer but a
graduated engineer.

And clearly a fresh graduate at that. When you have some experience of
the real world, come back and discuss it.
One nice example of why I think that programmers shouldn't talk about real
world issues is the robot arm in our university.

Since I am not a programmer but a design engineer for real world
electro-optical imaging systems your limited academic experience is of
little relevance to me or the discussion at hand. I could equally well
describe numerous situations to indicate why academics "shouldn't talk
about real world issues" - they would be equally irrelevant and proof of
nothing.
So let's stop this discussion here, anything else is completely
fruitless.
Continue writing programs that are mathematically correct and I continue
writing software (and designing hardware) which actually works in a real
world environment.

As we have seen, your definition of "work" requires a fertile
imagination.
 
K

Kennedy McEwen

Don said:
No, it doesn't need its own lookup table and if you think it does, it
shows you don't really understand.

Like I said last time, you just can't seem to grasp the concept.

I explained it quite clearly already and there's no point in repeating
as it doesn't seem to be getting through.
Unfortunately, Don, very little does seem to be getting through. Like
most kids, he still hasn't learned that he doesn't know it all.
 
S

SJS

By adding 8 and dividing by 17 (rounding) we also have 16 labels, but 14
of them are placed in the middle of their representing range and the
other two, well, are doing so too if you imagine that values darker than
black and brighter than white are also part of their range. At least
they are placed at their exact representation and the distances between
all labels are equal.

I can see the sense in your approach here (add 8, divide by 17). This
does allow you to record the intensity extremes (0 and 1) correctly at
the expense of slightly increasing the width of each step in between.
So while with truncation the error increases from 0 to 15 and then
suddenly jumps to 0 again, being worst where it is closest to the next
label, with rounding you get an error that grows from 0 to 8 and then
shrinks back to 0 again, being worst where it is equally distant from
both nearest labels. Which should be more logical. And which gives a way
more pleasant result. WAY more.
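The two error behaviours are easy to verify numerically. The sketch below is my own, and it assumes a 4-bit label k stands for the 8-bit level k*16 under truncation and k*17 under the add-8/divide-by-17 scheme; with those reconstructions it confirms the 0..15 versus -8..+8 ranges:

```python
def truncate_4bit(v: int) -> int:
    return v >> 4              # drop the low 4 bits

def round_4bit(v: int) -> int:
    return (v + 8) // 17       # labels 0..15, evenly spaced over 0..255

# Error of each 8-bit input against its label's nominal 8-bit value.
err_trunc = [v - truncate_4bit(v) * 16 for v in range(256)]
err_round = [v - round_4bit(v) * 17 for v in range(256)]

print(min(err_trunc), max(err_trunc))  # 0 15  - always positive, worst just below a boundary
print(min(err_round), max(err_round))  # -8 8  - centered, worst midway between labels
```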

Whether the error goes from 0 to 15 or from -8 to +7 depends on how the
output is interpreted. If I had a display system (video card / CRT)
running in 4-bit mode I would expect a 0x0 pixel to be black. But which
value in 8-bit mode would give me the same display ? Would it be 0x00
or 0x07 or 0x0F ? Similarly, in 4-bit mode I would expect 0xF to be
white. But would the equivalent in 8-bit mode be 0xF0 or 0xF7 or 0xFF ?
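As far as I know, the usual answer in graphics hardware (and in, for example, 3-digit CSS hex colours) is bit replication: copy the nibble into both halves of the byte, which is the same as multiplying by 17. That maps both extremes exactly, 0x0 -> 0x00 and 0xF -> 0xFF:

```python
def expand_4_to_8(v: int) -> int:
    # Replicate the 4-bit value into both nibbles: 0xA -> 0xAA, etc.
    return (v << 4) | v        # identical to v * 17

print(hex(expand_4_to_8(0x0)), hex(expand_4_to_8(0xF)))  # 0x0 0xff
```

Note this is the exact inverse of the add-8/divide-by-17 reduction discussed above, which is one argument in its favour.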

I have tried to see how Photoshop converts say from 4-bit to 8-bit but
my junior version doesn't seem to support such things. I hope that
experts who write mainstream image software have developed a 'standard'
that covers this. I can see pros and cons to both approaches (divide by
256 versus divide by 257) but if one is in common use then I would also
use it as it is probably at least as good as the alternatives and also
it is usually better to follow standards rather than fight the entire
world. If there are lots of approaches in common use then I guess we
have a problem.

Thanks for your input. I can finally see the sense in dividing by 257
as suggested by others in this thread (Bart was first I believe).
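For anyone wanting to compare the two 16-to-8-bit conversions directly, here is a small sketch of my own (not taken from any application). Divide-by-257 works out neatly because 65535 = 255 * 257, so with rounding both endpoints map exactly:

```python
def div256(v: int) -> int:
    return v >> 8              # truncate: drop the low byte

def div257(v: int) -> int:
    return (v + 128) // 257    # round(v / 257); note 255 * 257 == 65535

print(div256(65535), div257(65535))  # 255 255 - both hit peak white
print(div256(65280), div257(65280))  # 255 254 - the bins differ away from the ends
```

Which one is "right" is exactly what this thread disputes; the sketch only shows that the two schemes place their bin boundaries differently.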

Do you know how software (e.g. Photoshop) converts data from one format
to another ? Is there a popular standard ?

-- Steven
 
