Is the EPSON 4990 really 16 bit?

K

Kennedy McEwen

Oliver Kunze said:
I think there have been misunderstandings due to my initial posting. I did
not mean that I want to see a difference between 8 and 16 bit by simply
looking at unedited raw scans on the screen. I meant that there is no
difference between 4990 8 or 16 bit scans even after sophisticated tonal
correction, in cases where a wider bit-depth should bring benefits. I scan 6x6
transparencies and, as Sarah Brown has already mentioned, scans of these
images which include the complete tonal range of the film image usually
require (depending on the contrast of the image) a certain amount of
curve and/or level corrections in order to achieve a good result suitable
for screen or print output with good overall contrast.

My question concerned the 16-bit capability of the Epson 4990 flatbed scanner,
and whether anyone has made the same or a different observation than mine:
that image acquisition by this scanner has only 8-bit quality.
More likely that the scanner is acquiring the data at 16-bits all the
time and then truncating this to 8-bit output *after* the conversion
from linear CCD working space to the gamma-compensated space of its
output images. This is how the other Epsons operate, although my
flatbed is only 14-bits in any case, and indeed how most scanners work.

If the data was only acquired to 8-bit precision then you would have
highly posterised shadows and mid-tones in all of your output images,
whether in 16-bit or 8-bit, due to the need to implement gamma
compensation in the digital domain. The 16-bit output will contain a
little more precision in mid-tones and highlights, but I doubt that you
will see it due to the noise that is also present in those levels.
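A quick numerical sketch of that point (my own Python example, assuming a pure gamma-2.2 power law with no sRGB-style linear toe): the gamma curve is steepest near black, so the lowest 8-bit linear codes land far apart after compensation, while 16-bit linear codes fill the intervening levels.

```python
# Toy illustration of why 8-bit linear acquisition posterises shadows:
# gamma compensation spreads the lowest linear codes far apart.

def gamma_compensate(code, in_levels, gamma=2.2):
    """Map a linear sensor code to an 8-bit gamma-compensated value."""
    return round(255 * (code / (in_levels - 1)) ** (1 / gamma))

# First few codes of an 8-bit linear capture after gamma compensation:
shadows_8 = [gamma_compensate(c, 256) for c in range(5)]
# The same scene values captured with 16-bit linear precision:
shadows_16 = [gamma_compensate(c, 65536) for c in range(5)]

print(shadows_8)   # big jumps between adjacent shadow levels
print(shadows_16)  # sub-level steps: no posterisation
```

The 8-bit chain jumps from output level 0 straight to about 21, leaving twenty shadow levels permanently empty; the 16-bit chain steps through them smoothly.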

There has been a long-running debate as to how useful 16-bit processing
is in a gamma-compensated workspace, and I am not aware of anyone proving
conclusively that it has real benefit. Your observation is just another
input to the majority view of that same debate. ;-)
 
R

random user 12987

As an aside to this Epson-specific discussion: my Nikon Coolscan 5000 ED has
a RAW data capability. I have a continuous-tone printer capable of printing
16-bit images at close to 16 bits. Certainly at greater bit depth than an
inkjet printer. The Epson does truncate its 16-bit acquisition into 8-bit
output. There is little or no difference between an 8-bit scan and a 16-bit
scan when printed on this machine.

The Nikon can also capture 16 bit and output 16 bit TIFF files. I don't see
a great deal of difference (although there is some) between 8 bit and 16 bit
(Nikon) files, but when I use the RAW capture of the Nikon scanner and decode
(develop) the file into an image and print it, I see a fairly noticeable
difference.

Whether this is because I can gain control over the image creation or whether
it is a true 16-bit capture, I can't say. All I do know is that the Epson
4870 produces disappointing 35mm scans at any bit depth.

--
Having climaxed... She turned on her
mate and began to devour him.
Not a lot changes, eh Spiderwoman?

 
K

Ken Weitzel

Kennedy said:
More likely that the scanner is acquiring the data at 16-bits all the
time and then truncating this to 8-bit output *after* the conversion
from linear CCD working space to the gamma compensated space of its
output images. This is how the other Epson's operate, although my
flatbed is only 14-bits in any case, and indeed how most scanners work.

If the data was only acquired to 8-bit precision then you would have
highly posterised shadows and mid-tones in all of your output images,
whether in 16-bit or 8-bit, due to the need to implement gamma
compensation in the digital domain. The 16-bit output will contain a
little more precision in mid-tones and highlights, but I doubt that you
will see it due to the noise that is also present in those levels.

There has been a long running debate as to how useful 16-bit processing
is in gamma compensated workspace and I am not aware of anyone proving
conclusively that it has real benefit. Your observation is just another
input to the majority view of that same debate. ;-)


Hi...

I haven't a 4990; just a 3200 photo.

I invite those debating to try this experiment. It will be
perhaps of interest, though how valuable the results will be
in the real world I have no idea... :)

First, scan a neg or slide at (3200/4800) at 8 bits. This scan
should intentionally have the colour a bit off, the brightness a
bit low, the gamma off just a bit, etc.

Immediately after this scan, save it; then scan again at 16 bits
without changing any of the intentional mis-adjustments.

Now open each, correct them as best you are able, bringing
colour, intensity, contrast, etc. into line, and save them again.

Finally, open each. Don't bother looking at the picture, just
take a look at the histogram. There will be (should be) no
comparison between the two. Last, reduce the 16 bit image to
8 bits, and compare once more... the difference will be even
more apparent.
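Ken's experiment can be simulated numerically. The sketch below (plain Python; the gain and gamma numbers are my own arbitrary stand-ins for his intentional mis-adjustments) applies the same curve correction to an 8-bit and a 16-bit capture of a smooth tonal ramp, reduces both to 8 bits, and counts the empty comb-like gaps in the resulting histogram.

```python
# Simulate correcting an 8-bit vs 16-bit scan of the same smooth ramp,
# then count empty histogram bins after reducing both to 8 bits.

def correct_and_reduce(codes, in_levels, gain=1.6, gamma=0.8):
    """Apply a gain + gamma 'curve correction', then quantise to 8 bits."""
    out = []
    for c in codes:
        x = min(1.0, gain * (c / (in_levels - 1)) ** gamma)
        out.append(round(255 * x))
    return out

def empty_bins(codes, lo=1, hi=160):
    """Count unused 8-bit levels in the populated part of the range."""
    used = set(codes)
    return sum(1 for level in range(lo, hi) if level not in used)

scene = [i / 9999 for i in range(10000)]      # smooth tonal ramp 0..1
as8 = [round(255 * s) for s in scene]         # 8-bit acquisition
as16 = [round(65535 * s) for s in scene]      # 16-bit acquisition

gaps8 = empty_bins(correct_and_reduce(as8, 256))
gaps16 = empty_bins(correct_and_reduce(as16, 65536))
print(gaps8, gaps16)  # the 8-bit chain leaves far more empty levels
```

The 8-bit capture has only 256 input codes to stretch across the corrected range, so dozens of output levels end up empty; the 16-bit capture's intermediate values fill essentially all of them.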

I once again suggest saving 16; one day our grandkids may
look at one and say "darn, only 48 bits" :)

Take care.

Ken
 
R

rafe b

Finally, open each. Don't bother looking at the picture, just
take a look at the histogram. There will be (should be) no
comparison between the two. Last, reduce the 16 bit image to
8 bits, and compare once more... the difference will be even
more apparent.


Right. Now take that image with the "gappy"
histogram and add the slightest touch of
gaussian noise. Voila. Gaps gone.

Besides: you haven't shown that an image
with a "gappy" histogram looks better or
worse than one with a "smooth" histogram.

Here's an alternative experiment which
takes about 30 seconds and which anyone
can try on any image in Photoshop.

Open the image and drill down to Image->
Adjustments->Posterize. Now enter a
value, say 128, 64, or 32. For most images,
you will be surprised at how low you can
go before there's a visible effect --
and the effect is not always bad.
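The Posterize test can be approximated numerically. The quantisation below is one common formulation of what Posterize does, not necessarily Adobe's exact code:

```python
# A rough numeric stand-in for Image > Adjustments > Posterize:
# quantise an 8-bit value to `levels` evenly spaced output values.

def posterize(value, levels, maxval=255):
    """Snap an 8-bit value onto a grid of `levels` output values."""
    step = (levels - 1) / maxval
    return round(round(value * step) / step)

# A full 0..255 gradient survives with exactly `levels` distinct values:
for levels in (128, 64, 32):
    survivors = {posterize(v, levels) for v in range(256)}
    print(levels, len(survivors))
```

Applying this to a photo's channels (rather than a clean gradient) reproduces rafe's observation: on busy image content the reduced level count is surprisingly hard to see.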

Where limited bit-depth will bite you is
in regions of near-monochrome, for example
in clear blue skies.

My take on all this: scan at 16 bits,
do your major color moves/corrections,
save at 8-bit. OR, simply do your major
color moves in the scanner driver, and
go 8-bit the rest of the way.

Or if you don't mind the consequences
(bigger files, more processing time)
by all means go 16-bit.


rafe b
www.terrapinphoto.com
 
K

Kennedy McEwen

random said:
A an aside to this Epson specific discussion. My Nikon 5000 ED coolscan has
a RAW data capability.

Not directly, you have to consciously go into Preferences and select a
gamma of 1.00, otherwise it will just gamma compensate like every other
scanner.
I have a continuous tone printer capable of printing
16 bit images at close to 16 bits.

How do you know? You can't visually tell the difference between 8-bits
gamma compensated and 16-bits linear. Do you know the printer is taking
16-bit linear working space and printing *in* that working space (very
unlikely, but possible)? How do your colour controls operate if you are
printing linear images, since you can't edit in linear space and get
anything like the final output on the screen? The most obvious thing
you would notice on the linear image is that it is very dark on screen,
but there are other issues.
The Epson does truncate its 16-bit acquisition into 8-bit
output. There is little or no difference between an 8-bit scan and a 16-bit
scan when printed on this machine.
In which case you are NOT printing or scanning in true RAW format,
because 8-bit linear would result in severe posterisation of the mid and
shadow tones.
The Nikon can also capture 16 bit and output 16 bit TIFF files. I don't see
a great deal of difference (Although there is some) between 8 bit and 16 bit
(Nikon) files but when I use the RAW capture of the Nikon scanner and decode
(develop) the file into an image and print it, I see a fairly noticeable
difference.

Whether this is because I can gain control over the image creation or whether
it is a true 16-bit capture, I can't say.

The former. It also indicates that although you *think* you are
scanning raw, you aren't, you are applying gamma compensation, since the
Nikon would produce a vast difference between the two if captured truly
raw.
 
K

Kennedy McEwen

Ken Weitzel said:
I once again suggest saving 16; one day our grandkids may
look at one and say "darn, only 48 bits" :)
If 16-bits is so necessary, how come *all* the dSLRs are working with
12-bits and producing better results than a 35mm film scan?
 
K

Kennedy McEwen

rafe b said:
Right. Now take that image with the "gappy"
histogram and add the slightest touch of
gaussian noise. Voila. Gaps gone.
That is true in terms of the final image Rafe, but Ken's suggestion is a
valid way of showing the difference between the two scan depths and
determining whether the Epson is scanning at 16-bits or not when it says
it is. Processing an 8-bit image will result in obvious histogram
deficiencies, 16-bit much less so.

I don't agree with Ken's comments about storing in 16-bits just for his
grandkids to view the images - it will take a lot more than two
generations for his offspring to evolve the necessary improvements in
eyesight. ;-)
 
S

Sarah Brown

Right. Now take that image with the "gappy"
histogram and add the slightest touch of
gaussian noise. Voila. Gaps gone.

That's along the lines I was thinking as well - I'm pretty sure that the
low-order bits on the 4870 scans where I experience posterisation (and this
isn't universal - it depends on the slide) just contain noise, potentially
even in 8 bit mode. Can't tell from looking at the histogram though.
 
D

Don

Right. Now take that image with the "gappy"
histogram and add the slightest touch of
gaussian noise. Voila. Gaps gone.

And so has the quality! (BTW, another "fix" is to change image size,
even by a pixel, because interpolation will also eliminate gaps.)

But the proper way to correct such problems is not by *corrupting* the
image further but at source i.e. scanning at higher bit depth.
Besides: you haven't shown that an image
with a "gappy" histogram looks better or
worse than one with a "smooth" histogram.

That doesn't really prove anything because it depends on image
content.

Therefore, there are two options: either only shoot/process images
whose content does not expose those gaps (not a realistic option), or
make sure "gaps" are not present in the first place (use 16-bit) and
shoot/process anything you like with one less thing to worry about.
Where limited bit-depth will bite you is
in regions of near-monochrome, for example
in clear blue skies.

Exactly! In other words: Image content.
My take on all this: scan at 16 bits,
do your major color moves/corrections,
save at 8-bit.

On that we agree 100%!
OR, simply do your major
color moves in the scanner driver, and
go 8-bit the rest of the way.

The only caveat is to make sure the driver works on 16-bit originals.
Or if you don't mind the consequences
(bigger files, more processing time)
by all means go 16-bit.

It's certainly a good idea, if for no other reason than to be able to
archive such images and be "future-proof". Not only will the lossless
digital format freeze any further film deterioration, but saving at
16-bit will enable the image to be reprocessed later once high
dynamic range monitors become commonplace.

Don.
 
M

Marjolein Katsma

Don ([email protected]) wrote in
Although NikonScan can be cranky e.g. if you want to turn auto
exposure off, in regular use it's very reliable.

I'm also hearing noises elsewhere that Nikon Scan is unreliable - but this
was in reference to the Mac OS X version. It's possible that version of
Nikon Scan is not as reliable as the Windows version.
 
R

rafe b

It's certainly a good idea, if for no other reason than to be able to
archive such images and be "future-proof". Not only will the lossless
digital format freeze any further film deterioration, but saving at
16-bit will enable the image to be reprocessed later once high
dynamic range monitors become commonplace.


Ok, here's the thing. After about eight
years of film scanning, three years of
digicam captures, I now have "archives"
consisting of around 180 CDs and 95 DVDs.

A large-format (4x5") film scan at 2500
spi is ~330 Mbytes with 24-bit color, or
~660 MBytes with 48-bit color.

Memory may be "cheap" in terms of media,
but these days I spend a third of my
time just burning DVDs and managing the
archives.

Like Kennedy, I am very skeptical of
the benefits of the 48-bit color workflow.
Maybe I'm just lucky.

Lately I have been doing more scanning
and saving in 48-bit color though it's
really just caving in to peer pressure.
I still don't really see the point.



rafe b
www.terrapinphoto.com
 
D

Don

I'm also hearing noises elsewhere that Nikon Scan is unreliable - but this
was in reference to the Mac OS X version. It's possible that version of
Nikon Scan is not as reliable as the Windows version.

I can't speak for the Mac version but 4.2 on Windows has one bug as
far as I know. Something to do with using the film strip adapter i.e.
multiple images, as far as I remember. Since I don't use that adapter
(or NS, for that matter) I didn't really pay closer attention.

Don.
 
D

Don

Ok, here's the thing. After about eight
years of film scanning, three years of
digicam captures, I now have "archives"
consisting of around 180 CDs and 95 DVDs.

Tell me about it! If I'm lucky, my estimate is that I will just about
be able to fit all of my digital images onto 100 DVDs (slides,
negatives and photos).

But that's the way the cookie crumbles. Assuming one wants to keep the
originals there's no way around it. Of course, such level of quality
is a personal preference and I'm sure there are many people who are
perfectly happy with 8-bit or even JPGs.
A large-format (4x5") film scan at 2500
spi is ~330 Mbytes with 24-bit color, or
~660 MBytes with 48-bit color.

I know. When I got my first scanner, the LS-30, I thought the ~30 MB images
it produced were huge. Now, with the LS-50, we're talking about ~125 MB
a pop! In my case it's even worse because I scan each slide twice
(twin scans) so it's close to 250 MB per image!
Memory may be "cheap" in terms of media,
but these days I spend a third of my
time just burning DVDs and managing the
archives.

I agree! Same here. And that's just the images. I still have all of my
paper documents (letters, diaries, etc) ahead of me, as I try to
"digitize my life".
Like Kennedy, I am very skeptical of
the benefits of the 48-bit color workflow.
Maybe I'm just lucky.

I think it's really personal preference, to a large extent. As we all
know, our eyes have severe shortcomings, and the hardware itself is not
all it's cracked up to be, either. To name a couple of things: the
actual resolution is less than what manufacturers proclaim, and one
only needs to scan an image twice to see how inaccurate the scanners
really are.

It was a huge revelation to me when I went all the way down to 50
(that's fifty!) dpi on my flatbed and still two subsequent scans were
different. Now, I know there are tons of reasons why that is, but it
does point to how "unreliable" scanners are.
Lately I have been doing more scanning
and saving in 48-bit color though it's
really just caving in to peer pressure.
I still don't really see the point.

Especially when one adds what I wrote just above. But I see it as
"insurance". I mean, in spite of everything there is no denying that
there are factual advantages to 16-bit, i.e. it's not just an urban
legend. Now, whether we can see or make use of those advantages is
another story. But I for one prefer to err on the side of caution.

I look at it this way. You can always go down to 8-bit later if you so
choose, and a few years down the road all these "huge" files will
appear insignificant. Let's just take the next DVD generation. Those
100 DVDs of mine will shrink down to 10 so it's all relative.

Don.
 
D

Don

If 16-bits is so necessary, how come *all* the dSLRs are working with
12-bits and producing better results than a 35mm film scan?

I was under the impression that such direct comparison based solely on
bit-depth was not really applicable because of the complexity of both
sides of the equation.

In other words, a certain bit-depth on one side does not directly
translate into the same bit-depth on the other side due to the
differences in technology (e.g. Bayer pattern, interpolated pixel
pitch, etc).

Can you throw any more light on this (if you have time)?

Don.
 
R

rafe b

In other words, a certain bit-depth on one side does not directly
translate into the same bit-depth on the other side due to the
differences in technology (e.g. Bayer pattern, interpolated pixel
pitch, etc).

Can you throw any more light on this (if you have time)?


The practical success of Bayer based CCD and
CMOS imaging rather gives the lie to the need
for 16-bit per pixel color depth.

This is also, BTW, part of what goes on in JPG
encoding -- ie., decimation of chroma info,
but preservation of luminosity info.

Both of these take advantage of the fact that
in human perception, detail and color don't
really mix -- we perceive detail from
luminosity, but not much detail in color.
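The storage saving behind that chroma decimation is easy to quantify. A minimal sketch (my own function name; 4:2:0 is the subsampling scheme most JPEG encoders use, storing one chroma pair per 2x2 block of pixels):

```python
# Back-of-envelope for chroma subsampling: full-resolution luma plus
# one pair of chroma samples per h_sub x v_sub block of pixels.

def samples_per_pixel(h_sub=2, v_sub=2):
    """Average stored samples per pixel with chroma subsampled
    by a factor of h_sub horizontally and v_sub vertically."""
    return 1 + 2 / (h_sub * v_sub)

full = samples_per_pixel(1, 1)  # 4:4:4 -> 3.0 samples per pixel
sub = samples_per_pixel(2, 2)   # 4:2:0 -> 1.5 samples per pixel
print(full, sub)
```

Half the colour data is simply thrown away before compression even starts, and viewers rarely notice, which is rafe's point about detail living in luminosity.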

Muck around a bit in Lab color space and
you'll soon see this.

Think about it: your LS-50 or my LS-8000
capture 20 million *real* (non-interpolated)
RGB triplets from 35 mm color film.

My Canon 10D captures a mere 6 million
*interpolated* RGB triplets. And yet,
images from the 10D often produce the
better print.


rafe b
www.terrapinphoto.com
 
B

Bart van der Wolf

SNIP
If 16-bits is so necessary, how come *all* the dSLRs are working
with 12-bits and producing better results than a 35mm film scan?

Maybe because:
- It's cheaper
- It's faster
- There's no competitive pressure to give in
- Raw storage requirements are lower.

Bart
 
B

Bart van der Wolf

SNIP
There has been a long running debate as to how useful 16-bit
processing is in gamma compensated workspace and I am not aware of
anyone proving conclusively that it has real benefit.

Mostly because it depends on the subsequent workflow, and also because
of limitations in our eyesight. There is also a difference between
"first generation" image data, with little if any processing done to
it except for gamma adjustment, and film+scan, which I consider "second
generation" image data.

Any cumulative integer-rounding errors (e.g. due to post-processing
steps) will increase the chance of posterization becoming visible.

<http://www.xs4all.nl/~bvdwolf/main/downloads/8vs16-bpch processing.png>
is a small crop from a digicam which, due to the low noise, is extra
sensitive to posterization.

It's just from a Pixmantec RSP raw conversion, a Levels color-balancing
adjustment (the sky was a bit too pink for my taste), some profile
conversions (e.g. from a wide gamut to the final sRGB for Web
publishing), and RGB to Lab (for potential(!) Luminosity adjustment)
and back, with CS2 small-radius Smart Sharpening at output size, shown
at 200% zoom.
Whether the posterization is visible in output, depends on
magnification.

It is also a warning against choosing too wide a colorspace.
Colorspace conversions are lossy especially if done in integer value
accuracy. Rounding errors will accumulate with each step.
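A toy model of that loss, sketched in Python: the gamma pair below stands in for a real profile conversion (which would use 3x3 matrices but round to integers the same way). Converting to another encoding and back loses whole levels at 8-bit precision but almost nothing at 16-bit.

```python
# Toy profile conversion: re-encode a gamma-2.2 code as gamma 1.8 and
# back, rounding to integer precision at each step, as an integer
# pipeline would.

def convert_roundtrip(v, maxval, g_src=2.2, g_dst=1.8):
    """One conversion + inverse, quantised to integers at each step."""
    mid = round(maxval * (v / maxval) ** (g_src / g_dst))
    return round(maxval * (mid / maxval) ** (g_dst / g_src))

# Worst-case round-trip error as a fraction of full scale:
err8 = max(abs(convert_roundtrip(v, 255) - v) for v in range(256)) / 255
err16 = max(abs(convert_roundtrip(v, 65535) - v)
            for v in range(0, 65536, 257)) / 65535
print(err8, err16)  # 8-bit loses whole levels; 16-bit barely moves
```

Repeating the round trip, as each extra editing or profile step effectively does, widens the 8-bit damage while the 16-bit error stays far below one 8-bit level.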

Bart
 
B

Bart van der Wolf

no... it's not! said:
History Bart, tells me you are more comfortable with software that
needs fiddling than out of the box, functionality that just works.

Maybe the out-of-the-box software result doesn't satisfy my
requirements :). I really don't like to complicate the process for
the sake of it. There has to be a purpose, and there usually is.

Maybe I also tend to err on the side of caution. Once the losses
become visible, there's no recovering without loss of further data and
time.

SNIP
In the mean time I had bought vuescan. I couldn't obtain a colour
balance when scanning Fuji ISO 200 negative film. I could not easily
get a colour balance when scanning Kodak ISO 400 negative film
either and the solution of developing a "profile" for a bought
application just to make it work, is distasteful to me.

Each and every individual scanner requires its specific profile for a
particular film dye-set (because of manufacturing tolerances and
different rates of light-source aging). Color negative film is
subject to many more variables, so color profiling may be more
trouble than help.
If you are going to sell something that needs further development
before it works, you really need to look closely at your business
ethics.

It has little to do with scanner drivers, unless they clip or
otherwise destroy data. It's more about basic profiling, whether we
like it or not.

Bart
 
K

Kennedy McEwen

Don said:
I was under the impression that such direct comparison based solely on
bit-depth was not really applicable because of the complexity of both
sides of the equation.
I don't see why the source of the information should enter into it,
after all, the bit depth issue is one of perception - what level of
quantisation you can see in the final image, not its origin. Indeed the
very low noise of recent digital cameras would make posterisation, when
it occurs, much more visible than the relatively noisy image on film.
In other words, a certain bit-depth on one side does not directly
translate into the same bit-depth on the other side due to the
differences in technology (e.g. Bayer pattern, interpolated pixel
pitch, etc).
The Bayer filter really only affects the resolution of the sensor and,
as Bart has mentioned, just takes advantage of a limitation of our
visual system that has been exploited since at least 1953 with the
invention of NTSC colour TV encoding, and possibly before that. It
shouldn't have any effect on the quantisation.

Assuming each final pixel in the image is composed of one real colour
and two interpolated ones, the real colour has at best 12-bits, whilst
the interpolated ones may have 13-bit precision. If rendered directly
to 2.2 gamma compensation, ignoring any colour balance issues, then we
would expect the extreme shadows to have posterisation. For example, if
0 linear data maps to 0 after gamma compensation then 1 on the 12-bit
scale maps to 6 on an 8-bit scale, with 2 mapping to 8 etc.
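That shadow arithmetic checks out numerically. A quick verification in Python (assuming a pure power-law gamma 2.2, with no sRGB-style linear toe):

```python
# Check of the shadow-step arithmetic: mapping 12-bit linear codes
# straight to 8-bit gamma-2.2 output.

def linear12_to_gamma8(code, gamma=2.2):
    """Map a 12-bit linear code to an 8-bit gamma-compensated value."""
    return round(255 * (code / 4095) ** (1 / gamma))

print(linear12_to_gamma8(0))  # 0
print(linear12_to_gamma8(1))  # 6
print(linear12_to_gamma8(2))  # 8
```

So the first two non-zero 12-bit codes land on output levels 6 and 8, leaving levels 1 to 5 and level 7 unreachable: exactly the extreme-shadow posterisation described above.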

So the 12-bit linear encoding of the CCD output would result in visible
posterisation of the shadows from digicams if they encoded the gamma
directly as per sRGB or other working spaces require. I suspect, that
they deviate from that, thus avoiding the problem but rendering the
shadows somewhat darker than they should be. Of course, with slide
scans there is a readily available reference for comparison, the
original, which would prevent such fudges.
 
K

Kennedy McEwen

Bart van der Wolf said:
SNIP

Maybe because:
- It's cheaper
- It's faster
- There's no competitive pressure to give in
- Raw storage requirements are lower.
All true, of course, but they still make it *appear* better, which is
why your 3rd point exists. You simply can't provide a proper perceptual
representation of the original luminance scale with 12-bit linear
encoding of the source data without making visible posterisation of the
shadows. You can add noise to conceal it, you can artificially clip the
blacks, but you can't do the proper job. Sooner or later someone will
"blow the whistle" as it were.
 
