How to calculate the 8-bit version of a 16-bit scan value


CSM1

I still get confused which is little-endian and which is big-endian
but I do know I hate the Intel-way and that includes segmentation...
;o)

Don.
The new Intel 32 bit processors do not use segmentation. The old segment and
offset are stored in one 32 bit register. The segment goes in the high 16
bits and the offset goes in the low 16 bits. One 32 bit word.
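
For reference, an illustrative aside (not from the original posts): in the classic real-mode scheme, the 16-bit segment is shifted left four bits and added to the 16-bit offset, giving a 20-bit physical address. A few lines of C++ show the arithmetic:

#include <cstdio>
#include <cstdint>

int main()
{
    /* Classic 8086 real-mode translation: physical = segment * 16 + offset.
       0xB800:0x0010 lands at physical address 0xB8010. */
    uint16_t segment = 0xB800, offset = 0x0010;
    uint32_t physical = ((uint32_t)segment << 4) + offset;
    printf("%04X:%04X -> %05X\n", segment, offset, physical);
    return 0;
}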

Little-Endian is Intel and Big-Endian is Motorola.

Little-Endian:
Describes a computer architecture in which, within a given 16- or 32-bit
word, bytes at lower addresses have lower significance (the word is stored
'little-end-first'). The PDP-11 and VAX families of computers and Intel
microprocessors and a lot of communications and networking hardware are
little-endian.

Big-Endian:
Describes a computer architecture in which, within a given multi-byte
numeric representation, the most significant byte has the lowest address
(the word is stored 'big-end-first'). Most processors, including the IBM 370
family, the PDP-10, the Motorola microprocessor families, and most of the
various RISC designs are big-endian.
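
A quick way to check which kind of machine you are on (an illustrative sketch, not from the original posts):

#include <cstdio>
#include <cstdint>

int main()
{
    uint32_t word = 0x01020304;
    unsigned char *bytes = (unsigned char *)&word;

    /* On a little-endian machine the byte at the lowest address is the
       least significant one (0x04); on a big-endian machine it is 0x01. */
    if (bytes[0] == 0x04)
        printf("little-endian\n");
    else
        printf("big-endian\n");
    return 0;
}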
 

Wayne Fulton

I still get confused which is little-endian and which is big-endian
but I do know I hate the Intel-way and that includes segmentation...
;o)


I may be the only one that actually liked the Intel segmentation.
Other than the 64KB limit, which did require some heroic programming at
times, it was otherwise generally invisible in languages like C.

In Assy, one did have to continually load the segment register, but this
quickly becomes automatic, no problem. But it did allow some really small
and tight and fast code, essential back then at lower levels. Plus all 16
bit code was simply a lot smaller than 32 bit code.

Little-endian isn't without advantages either; one could access the low
order bytes as char, int, long, whatever, directly at the one specified
address. The only real issue is the need to convert numeric data to other
machine formats.
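
To illustrate that point (a sketch, not from the original post; memcpy is used to keep modern compilers happy about aliasing): on a little-endian machine, narrower reads taken at the same starting address all see the same low-order value.

#include <cstdio>
#include <cstdint>
#include <cstring>

int main()
{
    uint32_t value = 65;    /* small enough to fit in one byte */
    uint8_t  as_byte;
    uint16_t as_short;

    /* The low-order data sits at the lowest address, so byte-sized and
       short-sized reads from &value both yield 65 on little-endian CPUs. */
    memcpy(&as_byte,  &value, 1);
    memcpy(&as_short, &value, 2);
    printf("%u %u %u\n", (unsigned)as_byte, (unsigned)as_short,
           (unsigned)value);
    return 0;
}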

I found some example-only code to reverse the endian order in either
direction (not at all efficient, merely descriptive of the problem):

Function Reverse (N : LongInt) : LongInt ;
Var B0, B1, B2, B3 : Byte ;
Begin
  B0 := N Mod 256 ;
  N  := N Div 256 ;
  B1 := N Mod 256 ;
  N  := N Div 256 ;
  B2 := N Mod 256 ;
  N  := N Div 256 ;
  B3 := N Mod 256 ;
  Reverse := (((B0 * 256 + B1) * 256 + B2) * 256 + B3) ;
End ;


I couldn't resist making the point that one is very strongly advised NOT to
try to improve any notions of so-called accuracy by attempting rounding, or
using 257. <g>
 

Kennedy McEwen

Given a set of integers s, an interval i, and a function f that maps
from s to i, I can define a function g that maps from i to s such that for
every v in i, f(g(v)) - v is minimal.

Given a probability distribution of the values in i, I can compute the
average error.

The 'perfect computation' is the function f.
But, as I pointed out (and I think it was Bart who originally made the
point whilst arguing the alternative) we are not just computing numbers
here! We are converting luminance in one data range to luminance in
another. Since the same luminance levels correspond to a different
proportion of the available range, the value that you term error above,
f(g(v))-v is not the error at all, nor is the ideal transfer function
one which minimises it.

It has been a strange thread this one, where some regular subscribers
who have spent years discussing how best to maintain faithful histogram
distribution from one process to another, have argued the case for a
conversion process which seriously distorts it on the grounds of some
undefined numerical accuracy rather than luminance accuracy.
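
For concreteness, a sketch (illustrative only, not Kennedy's code) that computes the purely numerical error being argued about, for both candidate mappings - bearing in mind that Kennedy's point is precisely that this number is not the whole story:

#include <cstdio>
#include <cmath>

int main()
{
    double err_trunc = 0.0, err_round = 0.0;

    for (int v = 0; v <= 65535; ++v) {
        double ideal = v * 255.0 / 65535.0;   /* exact real-valued target */
        int by_trunc = v / 256;               /* truncating division      */
        int by_round = (v + 128) / 257;       /* the +128 / 257 variant   */
        err_trunc += fabs(by_trunc - ideal);
        err_round += fabs(by_round - ideal);
    }
    /* Prints roughly 0.5 for truncation and 0.25 for the /257 variant. */
    printf("mean |error| div 256: %f\n", err_trunc / 65536.0);
    printf("mean |error| /257:    %f\n", err_round / 65536.0);
    return 0;
}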
 

Don

The new Intel 32 bit processors do not use segmentation. The old segment and
offset are stored in one 32 bit register. The segment goes in the high 16
bits and the offset goes in the low 16 bits. One 32 bit word.

I know, I was just being facetious.

I've got close to 20 years (fx: sigh) of assembler programming
experience on more processors than I care to mention.

6502 and 68k being my favorites and that's without ever touching an
Apple, I hasten to add to avoid any misunderstanding.

Don.
 

Don

I may be the only one that actually liked the Intel segmentation.

Segmentation is easily "emulated" with any processor with unsegmented
address model by any address mode using an offset register, while it's
not (always or easily) possible to do it the other way around. Indeed,
it's easier to scan Kodachromes with an LS-30 than it is trying to
cajole some older Intels into a "flat" model... ;o)

Using an arbitrary address register to establish a base address is
very handy and I use it all the time, but you have a choice whether to
use it and which registers to use. Segmentation, on the other hand, is
a rigid imposition whether you want it or not, and that's what I
object to.

Now, I understand that when Intel introduced segmentation it was to
extend the address space, and (like bank switching, which I prefer) it
was a clever trick. But, with migration to 32-bit, segmentation became
an albatross carried around for "backward compatibility".

Whenever I'm told something is being done for "historical" reasons, I
hear that as "hysterical" reasons (both meanings: neurotic and
funny)... ;o)
Other than the 64KB limit, which did require some heroic programming at
times, it was otherwise generally invisible in languages like C.

I better be quiet now... ;o) You see, I'm not exactly a fan of C - to
put it very mildly.
Little-endian isn't without advantages either; one could access the low
order bytes as char, int, long, whatever, directly at the one specified
address.

Yes, that's the only advantage, although most (all?) modern processors
take as much time to load a long as they do a byte, so that becomes a
moot point. Actually, in most cases they simply load a long even when
asked for a byte and throw away the rest.
I couldn't resist making the point that one is very strongly advised NOT to
try to improve any notions of so-called accuracy by attempting rounding, or
using 257. <g>

Delightfully put... :)

Don.
 

Wayne Fulton

It has been a strange thread this one, where some regular subscribers
who have spent years discussing how best to maintain faithful histogram
distribution from one process to another, have argued the case for a
conversion process which seriously distorts it on the grounds of some
undefined numerical accuracy rather than luminance accuracy.


It must be the twiddle factor. Years ago when I was into ham radio,
the saying was that some users needed a "twiddle knob" on their equipment,
which was just a panel knob not connected to anything, so they could twiddle
without screwing anything up. We thought this was pretty funny.

In software programming, twiddling is also a small pointless change that
often creates unexpected bugs. Then the programmer says "but all I changed
was...". This is usually not very funny.
 

Mike Engles

Wayne said:
Right, but this conversion subject must be about All Possible Values
that might conceivably exist in any image. All Possible Values do in
fact exist in the realm of possibility, but you're right, probably not in
any one image. So the entire discussion is without exception about All
Possible Values in any possible file, and not about actual values in
some one file.

There are 65536 possible 16 bit values, and 256 possible 8 bit values.
These two numbers are 2 to the power of 16 and 2 to the power of 8, and those
two numbers are simply how many different or unique values can
physically be stored in that number of bits.... All Possible Values,
ready for anything that shows up.

So for this purpose, it doesn't matter what actual values do exist in
some image, as we are not speaking of any specific image. The next image
might be of a black cat in a coal mine, or a polar bear in a snow storm,
and they will be different, but 8 bit conversion simply maps whatever 16
bit values that might actually be there to the new 8-bit equivalents,
using this general rule developed for All Possible Values which it may
or may not encounter in any one image. But the general rule applies to
any and all images equally, so it obviously must necessarily include All
Possible Values, ready for anything that shows up.

Yes, we certainly might improve the image with histogram adjustments, or
not, which probably would be good for this image if we did, but it
doesn't really matter regarding the subject of 8 bit conversion. The
8-bit conversion is a conversion process, not an editing process.


Hello

It strikes me in Don's argument that it does not matter if some values
in a 16 bit range are missing, because they will be translated to a real
8 bit value. That is unless 256 contiguous values are missing, which is
somewhat unlikely. It is why we prefer to do histogram adjustments in 16
bit. This will result in some combing in 16 bit, but none when converted
to 8 bit.

Mike Engles
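
Mike's combing point can be checked with a short sketch (illustrative only, with made-up numbers): stretch a 16-bit ramp the way a levels adjustment would, then reduce to 8 bits and look for empty 8-bit levels.

#include <cstdio>
#include <cstdint>

int main()
{
    /* Stretch a ramp occupying 0..52428 (about 80% of the 16-bit range)
       to the full range - this combs the 16-bit histogram - then reduce
       to 8 bits and count the empty 8-bit levels. */
    int hist8[256] = {0};
    for (uint32_t v = 0; v <= 52428; ++v) {
        uint32_t stretched = v * 65535u / 52428u;
        hist8[stretched / 256]++;
    }
    int gaps = 0;
    for (int i = 0; i < 256; ++i)
        if (hist8[i] == 0)
            ++gaps;
    printf("empty 8-bit levels: %d\n", gaps);   /* prints 0 */
    return 0;
}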
 

Mike Engles

Kennedy said:
I do hope they keep you well away from coding these days, Chris!

For data= 129, your method gives the WRONG answer!
(129+128)/257 = 1.000, and floor(1.000) =1, but the correct answer is 0!

However,
129 div 256 = 0, the correct answer.
255 div 256 = 0, the correct answer.
256 div 256 =1, the correct answer.
65535 div 256 =255, the correct answer.
32768 div 256 = 128, the correct answer.
32767 div 256 = 127, the correct answer!

In the programming language you used, this is simply floor(x / 256).

For reference, all methods of converting 16-bit data to 8-bit should
result in *exactly* 256 sequential 16-bit numbers resulting in the same
8-bit result for *every* 8-bit value. Your proposed formula results in
257 16-bit numbers mapping to every 8-bit value except for 0 and 255,
which get 129 each. Why do you want to discriminate against these levels?

Don't they teach basic arithmetic in schools any more?
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
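
Kennedy's bucket counts are easy to verify with a short sketch (illustrative only, not his code):

#include <cstdio>

int main()
{
    int div256[256] = {0};
    int div257[256] = {0};

    for (int v = 0; v <= 65535; ++v) {
        div256[v / 256]++;           /* plain integer division       */
        div257[(v + 128) / 257]++;   /* the rounded /257 alternative */
    }
    /* div256: every level receives exactly 256 source values.
       div257: levels 1..254 receive 257 each, while levels 0 and 255
       receive only 129 each. */
    printf("div256: %d ... %d\n", div256[0], div256[255]);
    printf("div257: %d, %d ... %d, %d\n",
           div257[0], div257[1], div257[254], div257[255]);
    return 0;
}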


Hello

I guess that Chris' method is how Photoshop works.

Mike Engles
 

Jens-Michael Gross

Don said:
Just an observation (a reality check, really) but both/all sides seem
to take it as a given that the 16-bit image will have values in all
256 buckets.

This is rarely the case so if ultimate 8-bit accuracy is desired a far
superior method would be adaptive and limit the conversion only to
values actually present in the 16-bit image.

You forget that the 256 levels of 8 bit are FIXED and identical-distance
values.
If it were a 256-value lookup table with each entry representing a
16 bit value, then you'd be right. But it isn't. It is a range mapping
from x of 65536 different values to y of 256 different values with equal
distance between each value here and there. So the dynamic range is
completely unimportant.

There are, however, other applications where there IS a difference. E.g.
the ISDN voltage-to-digital value mapping. There the majority of the
values is assigned to the lower voltages and the minority to the higher
voltage levels (giving a gamma-like diagram). This is the reason why the
upper limit for analog modems is 56K and not 64K - the lower digital
values are too tight to be recognised properly.

But in our case, the mapping is 65536 to 256 levels. Plain and simple. If
there is a histogram gap in the original data, it will be in the
destination too. Everything else would use the full dynamic range by
altering the colors so much that the result wouldn't have any similarity
to the original ;)
The difference between the two methods (truncation or +128/257) pales
into insignificance beside the waste of 25% of dynamic range! A far superior
routine (using either method) would convert only the 75% actually
used.

Plain stupid for a 16 to 8 bit per color reduction.
For a 24 bit to 8 bit palette mapping, this would be true (and is
usually done) but this isn't the subject of this discussion.
Anyway, not really important, but since "nits are being picked" I just
thought I'd throw this "reality check" in the mix...

This is what I did in my other post.
Port the problem from 16/8 conversion to 8/4 bit conversion. Do both
(truncation and rounding) and look at the result. And the rounding gives
BY FAR the better result.
But some people cannot be convinced even by reality.

'Don't tell me facts! I have my prejudice!'

BTW: the only reason why scanners have more than 8 bit per color is to
apply gamma correction to the scan data before going down from 16 to 8
bit again.
No normal human can distinguish more than 1000 red levels - even if they
are side-by-side.
And no normal user can afford a reproduction engine (monitor/printer)
which could produce more than 24 bit RGB data. (usually _much_ less)
So the whole question for converting 16 to 8 bit is mostly academic. It
isn't, however, if you go further down.

Grossibaer
 

Jens-Michael Gross

Wayne said:
I may be the only one that actually liked the Intel segmentation.

You're not the only one.
Other than the 64KB limit, which did require some heroic programming at
times, it was otherwise generally invisible in languages like C.

And this happened only in two cases: when your code block was bigger
than 64K or you had to access monolithic data with more than 64K size.
OTOH it allows executing code without knowing the absolute physical
address. No need to relocate every jump, every data access. Well, in
a flat memory model, your program can be statically compiled to start at
address zero, but well...
In Assy, one did have to continually load the segment register, but this
quickly becomes automatic, no problem.

Only for bigger amounts of data. The smaller a program and its data
requirements, the easier the programming. There is a reason why even
small device drivers are sooo huge in Wondoze.

And, well, I don't like the idea of writing a 1MB exe file in assembly
;)
But it did allow some really small
and tight and fast code, essential back then at lower levels. Plus all 16
bit code was simply a lot smaller than 32 bit code.
Indeed.

Little-endian isnt without advantages either, one could access the low
order bytes as char, int, long, whatever, directly at the one specified
address. The only real issue is the need to convert numeric data to other
machine formats.

Yep. Being able to get 8 to 32 bit data types from one source with one
address is the biggest advantage of little-endian.
And the existence of other models is the only drawback.
It also makes the coexistence of assembly codes with 8, 16 or 32 bit
arguments easier (depending on the argument size, the microcode stops
after reading the first, second or fourth byte).

The only reason for big-endian was the shortsightedness (or laziness) of
the RISC model designers. Their VLIW instructions had the processor
command in the most significant bits and the arguments in the least
significant bits. And since they wrote the codes MSB to LSB (left to
right) in their whitepapers, they designed the processor to expect the
instruction in this order (so the first byte would contain the MSB with
the most significant (or complete) part of the instruction). Since all
RISC systems I know read all four bytes of a 32 bit word at once, there
is no real reason for this.
I found some example-only code to reverse the endian order in either
direction (not at all efficient, merely descriptive of the problem)

Function Reverse (N : LongInt) : LongInt ;
Var B0, B1, B2, B3 : Byte ;
Begin
  B0 := N Mod 256 ;
  N  := N Div 256 ;
  B1 := N Mod 256 ;
  N  := N Div 256 ;
  B2 := N Mod 256 ;
  N  := N Div 256 ;
  B3 := N Mod 256 ;
  Reverse := (((B0 * 256 + B1) * 256 + B2) * 256 + B3) ;
End ;

'not at all efficient' describes it ;)

In C++:

long int reverse (long int in)   /* assumes a 32-bit long */
{
    long int out;
    ((char *)&out)[0] = ((char *)&in)[3];
    ((char *)&out)[1] = ((char *)&in)[2];
    ((char *)&out)[2] = ((char *)&in)[1];
    ((char *)&out)[3] = ((char *)&in)[0];
    return out;
}

Compiles to just a few bytes of code.
I couldn't resist making the point that one is very strongly advised NOT to
try to improve any notions of so-called accuracy by attempting rounding, or
using 257. <g>

Well, since this is a conversion from 32 to 32 bit, all three methods
(right-shifting by 0 bits, dividing by 1 with rounding, or adding 0.5
before dividing by 1.0000000002328306437) would have the very same
result. <fg>

Grossibaer
 

Don

You forget that the 256 levels of 8 bit are FIXED and identical-distance
values.
If it were a 256-value lookup table with each entry representing a
16 bit value, then you'd be right. But it isn't. It is a range mapping
from x of 65536 different values to y of 256 different values with equal
distance between each value here and there. So the dynamic range is
completely unimportant.

I don't think you really grasped the concept. Please re-read
carefully.
But in our case, the mapping is 65536 to 256 levels. Plain and simple. If
there is a histogram gap in the original data, it will be in the
destination too. Everything else would use the full dynamic range by
altering the colors so much that the result wouldn't have any similarity
to the original ;)

Let me try one last time with another example. I'll use hex because
it's easier.

Assume a 16-bit image where only values from $0000 to $7FFF are
present. All values above, i.e. $8000-$FFFF, have *zero* pixel count.

Converting this image to 8-bits in the conventional (orthogonal) way
would squeeze 256 16-bit values into one single 8-bit value.

An adaptive, dynamic algorithm would realize there are no values in
the upper range and would ignore it. In other words, instead of
mapping values $0000-$FFFF to $00-$FF, it would only map values
$0000-$7FFF to $00-$FF. The result of this is twofold:

1. Smoother conversion because "only" 128 16-bit values are now mapped
to a single 8-bit value, resulting in less banding.
2. Increased contrast because, in effect, auto-contrast was performed.

No color shift takes place.

I think what you may be missing is that the *input* is reduced, not
output. Also, the reduced input "pool" of data is *contiguous*.

Don.
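
A minimal sketch of the adaptive idea Don describes (illustrative only, not Don's actual code; the helper name is invented): find the occupied part of the 16-bit range first, then stretch only that part onto 0..255.

#include <cstdio>
#include <cstdint>

static void adaptive_to_8bit(const uint16_t *src, uint8_t *dst, int n)
{
    uint16_t lo = 0xFFFF, hi = 0;
    for (int i = 0; i < n; ++i) {        /* find the occupied range */
        if (src[i] < lo) lo = src[i];
        if (src[i] > hi) hi = src[i];
    }
    uint32_t range = (uint32_t)hi - lo;
    for (int i = 0; i < n; ++i)          /* stretch [lo, hi] onto [0, 255] */
        dst[i] = range ? (uint8_t)(((uint32_t)(src[i] - lo) * 255u
                                    + range / 2) / range)
                       : 0;
}

int main()
{
    /* Don's example: only $0000..$7FFF are present in the image. */
    uint16_t src[4] = { 0x0000, 0x3FFF, 0x7FFE, 0x7FFF };
    uint8_t  dst[4];
    adaptive_to_8bit(src, dst, 4);
    printf("%u %u %u %u\n", (unsigned)dst[0], (unsigned)dst[1],
           (unsigned)dst[2], (unsigned)dst[3]);   /* 0 127 255 255 */
    return 0;
}

As Don says, this is auto-contrast by another name: each 8-bit level now covers roughly 128 rather than 256 of the 16-bit values.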
 

Jens-Michael Gross

Kennedy said:
For the situation in the subject thread, I contend that integer division
by 256 (or a shift right by 8 bits) is perfect because it scales the
original number of states into the target number of states with equal
distribution and weighting throughout the full range. No other method
suggested so far achieves this property and the only arguments put
forward for their alleged superiority is a reference to some undefined -
and apparently undefinable - error magnitude.

While from a programmer's view (this is why I don't like people who are
plain programmers) you're right. 256 values here, 16 there, equal
distribution and a linear histogram. That's all that counts for a
programmer.
Unfortunately there's something called 'reality'. And real life isn't
linear at all.
Not to mention the fact that human visual perception is far from being
anything like linear at all.
Using the comparative conversion suggested by Jens-Michael of an 8-bit
image to 4-bit for simplicity, the perfection of the integer division is
immediately apparent. Simply create a ramp from peak black to peak
white across an image 256 pixels wide. Then convert the image to 4-bit
data using either integer division by 16 or the equivalent of the
alternative method you argue. Ignoring your obvious computational
errors above, I suspect that this reduces to the function int((source +
8)/17) as suggested by Jens-Michael.

The two images are significantly different. Using simple integer
division by 16 and truncation, the full range from black to white is
produced with an even population - as would be expected from a linear
ramp original, which also has an even population. Each colour in the
range from 0 to 15 is represented by exactly 16 pixels.

And here the 'programmer's intellectual limitation' kicks in.

Yes, the result is not an equal distribution. Yes, black and white are
represented by 8 values each (not 9!) and the rest with 17 instead of
the 'informatically correct' 16 values, BUT...

With an equal distribution, zero (black) represents the values 0..15;
very dark gray would be 16..31.
Looking at the original value, 15 isn't black at all anymore. It is far
more a very dark gray than black. It is very close to 16 and very
distant from 0. But it is treated as if it had been 0 all the time.

With a non-linear distribution, 15 is mapped to 1 (and therefore much
closer to the original 16 than with just truncation).
This is why the typical error is half as big as with truncation.
With truncation you jump from one level to the next at the point of its
best fit. So when a value appears first, it matches 100% and then
matches worse and worse and reaches worst match the moment it jumps to
the next level where it then fits best again.
With the '/257 and round' method, a value jumps to the next level when it
crosses the border where the original value is equally far away from
both possible mappings.

The results are indeed very different.
And after staring for a couple of hours at the results of both (and other
even more complex) methods when writing my own scanner software, I
decided that the '/257 and round' method gives by far the best results -
as far as my personal opinion is allowed.

And since the result of any image operation is intended to please the
user and not the programmer (or the computer on which the conversion
takes place), I prefer the method which gives the (by far) better result
over a method that gives a linear histogram or an equal distribution of
values or just pleases a programmer's heart.

By examining the resulting data from exactly this test it is very clear
that the reduced error argument for the alternative to simple integer
division is false, because it ignores one basic fact:

By examining the resulting data from exactly this test it is very clear
that it ignores one basic fact:
The resulting data is irrelevant; the resulting image is relevant. And
how it is perceived by the user who wants to do the conversion.
The 'reduced error argument' is not false, it is interpreted wrongly by
you. The error range in case of truncation is 0..1, while in case of
rounding it is -0.5..0.5. The error is NEVER bigger than 0.5. This is a
mathematical fact (as you seem to believe mathematics more than your own
eyes). The average value alone is unimportant because all values being
correct and one being completely off (which cannot happen in this case)
would still give a good average error value but an extremely poor visual
result.
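
Those two error ranges can be checked mechanically (an illustrative sketch, not from the original post), using the 8-to-4-bit example and the usual expansion back to 8 bits by multiplying by 17:

#include <cstdio>
#include <cstdlib>

int main()
{
    int worst_trunc = 0, worst_round = 0;

    for (int v = 0; v <= 255; ++v) {
        /* Expand the 4-bit result back to 8 bits with q * 17
           (0 -> 0, 15 -> 255) and measure the round-trip error. */
        int e_trunc = abs(v - (v / 16) * 17);
        int e_round = abs(v - ((v + 8) / 17) * 17);
        if (e_trunc > worst_trunc) worst_trunc = e_trunc;
        if (e_round > worst_round) worst_round = e_round;
    }
    /* Prints 15 for truncation and 8 for rounding: on the scale of one
       output step (17), that is the 0..1 versus -0.5..0.5 range above. */
    printf("worst error: truncation %d, rounding %d\n",
           worst_trunc, worst_round);
    return 0;
}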
I guess you never did the 8->4 conversion with a real image, only on
paper. If you'd done it with a real image, you could really judge what
I'm talking about.
And there is only one result: the linear distribution of values leads to
a much worse optical result than the rounding method. Ask Einstein why.

Period. (And wonder why there is no Nobel prize for Mathematics - Nobel
knew what he did.)


Grossibaer
 

Jens-Michael Gross

Kennedy said:
Well, considerably wrong - and completely irrelevant!
65535 / 256 = 255.99609375, which is hardly surprising since you have
mixed up the number of states in one range with the peak in another!

It was you who proposed truncation - and we deal with integer values here
as the destination range is only 0..255 and no space to store the
fractions.
65536 / 256 = 256

I have never seen 65536 in a 16 bit value. And we are talking about 16
to 8 bit conversion here, aren't we?
but 256 * 256 = 65536

Who cares? Neither is 256 in our 8 bit range, nor is 65536 in our 16 bit
range.
So there is no error, if you are consistent with the information that is
used.

Are we discussing the best way to be consistent with the used
information or are we discussing the best way to convert an image?
You seem to forget...
Zero error because you IGNORE the zero state!

Why? (0+128)/257 is as much zero as 0/256. There you have your
zero-state. But you are counting zero twice in your argumentation.

Well, the Romans didn't even know the concept of zero - and their
mathematical skills were good enough to build buildings worthy of
being copied by the Americans 2000 years later (the Colosseum). And they
built aqueducts which still provide all the fresh water for Rome - even
2000 years after construction and after considerable city growth. With
exactly the right fall to avoid bacterial or algae development on one
side and material erosion on the other. Everything without a zero, but
with its practical use in mind.

Grossibaer
 

Mike Engles

Kennedy said:
I guess it is, and probably explains why it is so slow and bloated!
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)


Hello

Is not the 128 added in effect 'dither noise'? We do need to add dither
for any truncation. It certainly should happen for audio.

Mike Engles
 

Kennedy McEwen

Mike Engles said:
Is not the 128 added in effect 'dither noise'? We do need to add dither
for any truncation. It certainly should happen for audio.
If it were dither noise then it would be a random function. A fixed
offset is not dither.
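
For contrast, a sketch of what actual dither would look like here (illustrative only, nobody in the thread proposes this; the function name is invented): the offset added before the divide must be random, not fixed.

#include <cstdio>
#include <cstdint>
#include <cstdlib>

static uint8_t to_8bit_dithered(uint16_t v)
{
    /* A random offset in 0..255 before the divide decorrelates the
       quantisation error from the signal; the fixed +128 of the /257
       method is a constant bias instead. */
    uint32_t sum = (uint32_t)v + (uint32_t)(rand() % 256);
    if (sum > 65535)
        sum = 65535;            /* clamp at peak white */
    return (uint8_t)(sum / 256);
}

int main()
{
    /* 32700 sits about 73% of the way between levels 127 and 128, so
       dithering makes it come out as 128 roughly 73% of the time. */
    for (int i = 0; i < 8; ++i)
        printf("%u ", (unsigned)to_8bit_dithered(32700));
    printf("\n");
    return 0;
}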
 

Kennedy McEwen

Jens-Michael Gross said:
While from a programmer's view (this is why I don't like people who are
plain programmers) you're right. 256 values here, 16 there, equal
distribution and a linear histogram. That's all that counts for a
programmer.
Unfortunately there's something called 'reality'. And real life isn't
linear at all.
Not to mention the fact that human visual perception is far from being
anything like linear at all.
The human visual experience is applicable to both coding schemes in an
identical manner: black is just as black in the 16 bit colour scheme as
it is in the 8 bit colour scheme, and white is equally white.
Consequently the human visual experience and its linearity or
non-linearity is completely irrelevant to this argument. What matters is
the actual luminance range
and the consistency of luminance distributions in images as expressed in
each range. The corruption of histograms in one conversion is a clear
indication of its inferiority with respect to others.
And here the 'programmer's intellectual limitation' kicks in.

Yes, the result is not an equal distribution. Yes, black and white are
represented by 8 values each (not 9!) and the rest with 17 instead of
the 'informatically correct' 16 values, BUT...
Try it and *count* the pixels in each range. Try some basic arithmetic
too. 14 levels contain 17 pixels each, which takes a total of 238
pixels, leaving a total of 18 pixels in black and white. Since you
*argue* (rather than experiment, which demonstrates that you are wrong)
that only 8 pixels exist in black and white, you have now a total of 254
pixels, the remaining two having somehow evaporated into the digital
aether. If you had even bothered to conduct either the test or the
basic arithmetic then you would have a lot more credibility in your
argument, since it would not be the one you are making!
I guess you never did the 8->4 conversion with a real image, only on
paper. If you'd done it with a real image, you could really judge what
I'm talking about.

On the contrary, I *did* implement the 8-4 conversion (and the 8-1
conversion as well) with both real images and a linear ramp. Had you
done so then you might have discovered that the pixels/levels you claim
have evaporated into the aether are actually there in the black and
white levels.
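
Kennedy's pixel counts can be reproduced directly (an illustrative sketch, not his code): build the 256-pixel ramp and tally the populations under both conversions.

#include <cstdio>

int main()
{
    int pop_div[16] = {0};
    int pop_rnd[16] = {0};

    /* A 256-pixel ramp from peak black (0) to peak white (255). */
    for (int v = 0; v <= 255; ++v) {
        pop_div[v / 16]++;         /* truncating division by 16   */
        pop_rnd[(v + 8) / 17]++;   /* the rounded /17 alternative */
    }
    /* pop_div: 16 pixels in every level.
       pop_rnd: 9 in level 0, 17 in levels 1..14, and 9 in level 15 -
       the nines (not eights) described above. */
    for (int level = 0; level < 16; ++level)
        printf("level %2d: div16 = %2d  rnd17 = %2d\n",
               level, pop_div[level], pop_rnd[level]);
    return 0;
}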
 

Kennedy McEwen

Jens-Michael Gross said:
It was you who proposed truncation - and w deal with integer values here
as the destination range is only 0..255 and no space to store the
fractions.
No, I would not be so arrogant as to claim that the truncation
conversion was a method I proposed - it was a method that existed long
before I did! However, truncation applies to the final data - not to
the scaling factor! To truncate the scale factor is complete stupidity.
I have never seen 65536 in a 16 bit value. And we are talking about 16
to 8 bit conversion here, aren't we?
Yes we are, and whether you have seen 65536 in a 16-bit level really
depends on what the 65536 refers to. If it refers to a peak level then
certainly it cannot exist in only a 16-bit scale. However, if it refers
to the number of available states (as it *does* in this case) then 65536
discrete states certainly do exist in the data range described by a 16-bit
number. You are using the limited mathematics of Ancient Mesopotamia -
prior to the time when zero was recognised as being important. Without
zero the development of mathematics could not have progressed beyond
your primitive argument.
Who cares? Neither is 256 in our 8 bit range, nor is 65536 in our 16 bit
range.
Anyone who is capable of understanding the difference between the number
of discrete states or levels and the peak value of any number would
care.

16-bits can be used to describe *any* range of numbers; it is merely
convention that causes us to define black in images as zero. The 16-bit
range could (and incidentally *does* in Photoshop, for example) describe
the two's complement data set (-32768 .. +32767). Clearly peak level is
completely irrelevant to the conversion scale since, implementing your
method, the value that should be used in such a conversion in Photoshop's
16-bit defined range is 32767, resulting in a conversion scale factor of
32767/255=128.498, which you would truncate to 128.
Are we discussing the best way to be consistent with the used
information or are we discussing the best way to convert an image?
You seem to forget...
On the contrary, it would appear that you have forgotten - assuming that
you ever knew in the first place. Zero is a value with as much
significance as any other in the data range.
Why? (0+128)/257 is as much zero as 0/256. There you have your
zero-state. But you are counting zero twice in your argumentation.
I count it twice because it exists twice - in the original data set
*and* in the final data set!
Well, the Romans didn't even know the concept of zero - and their
mathematical skills were good enough to build buildings worthy of
being copied by the Americans 2000 years later (the Colosseum). And they
built aqueducts which still provide all the fresh water for Rome - even
2000 years after construction and after considerable city growth. With
exactly the right fall to avoid bacterial or algae development on one
side and material erosion on the other. Everything without a zero, but
with its practical use in mind.
If we were building an aqueduct the ignorance of the significance of
zero would be irrelevant. However we are defining a conversion from one
range of luminance descriptors to another - and since zero exists in
each range it cannot be ignored, despite your persistent failure to
recognise its significance.
 

Don

Then don't use Photoshop. Not only does it map 65535 to 255, it also is only
15-bit/channel :-O

Non sequitur.

Besides, I was commenting on the beautifully understated nature of his
comment and how well crafted it was.

You also seem to have missed 2 smileys... Oh wait, here comes
another... ;o)

Don.
 
