NOT scanning negatives as positives


false_dmitrii

(e-mail address removed) wrote:

3) Expose so that mask color registers as (clipped) white, and the
brightest color in the frame barely reaches 255 (say, maximum color
value = R254 G95 B79, which is the previously mentioned value
multiplied by 3.18, so that red touches value 255)

<snip>

Why does it clip? I'm still not clear about what's getting clipped to
white/eventual black using this approach.

false_dmitrii
 

false_dmitrii

May I scream and write in caps and use some words that are not
considered civil in the context of a newsgroup? Please?
Last night I replied with an awfully lengthy message containing a lot
of measurements I made while writing it (and recorded nowhere else),
and GOOGLE DID NOT SAVE THE "#¤/&%#¤" MESSAGE!
Why ain't I getting a newsgroup program? Even Outlook, heck, at least
it saves my messages!
<snip>

Every so often, select all and CTRL-C. Do so again when finished
typing. Then you can CTRL-V paste it back if it vanishes. For an even
lower chance of accidental text loss, compose the message in a word
processor (with all the additional editing benefits) and copy-paste it
into the web browser when finished.

false_dmitrii
 

ljlbox

(e-mail address removed) wrote:
(e-mail address removed) wrote:



<snip>

Why does it clip? I'm still not clear about what's getting clipped to
white/eventual black using this approach.

Well, notice that I said that the brightest color in the *frame* (i.e.
the picture, the photo) barely reaches 255.
This means that if the picture's whitepoint (blackpoint) is anything
darker than the film mask, the film mask's color itself will clip.

Now, I suppose that scanning any overexposed negative (or underexposed
slide) will benefit from an exposure that optimizes the *picture*'s
whitepoint rather than the film base color, which can then freely clip
above 255 -- who cares.


by LjL
(e-mail address removed)
 

ljlbox

Don wrote:
As someone who runs Linux occasionally, and just out of curiosity,
did you have to download this driver separately?

Yes. There is however an Epson driver in the standard SANE
distribution, and in theory it appears to support my RX500, but I
haven't gotten it to work.
The reason I ask is
because some distributions refuse to include packages which have a
component which is not GNU.

Namely, Debian. Which is what I'm using.
I actually like the concept of knowing that the software I get from
Debian repositories is 100% free; after all, I can still get the rest
from other sources.
That's very strange, indeed!

Note that I was talking about the Epson backend, not the whole of the
SANE sources. There is no mention of exposure *there*.
[snip exposure and gamma, see below]
"--film-type Negative" does just about what it's supposed to do: it
adjusts the exposure in order to remove the orange mask, and... well,
nothing else that I can see, since it doesn't even invert.

That, I don't understand. It appears to be something peculiar to SANE.
Negative mode usually means both, removing the orange mask and
inverting the image.

Actually, there is some code in the driver to invert the image (well,
it looks like that), but it's commented out.
God knows why, but who cares, it's easy enough to invert the image
(less easy is to do it *well*, but I guess the driver wouldn't really
help with that anyway).
[snip exposure and gamma again, still see below]
Why do you say it's impossible? I can set up that table to anything I
like, for example

Yes, but then it's not gamma anymore, but just a plain lookup table.

Sure. Sorry for being misleading by using the term "gamma", but I did
it simply because that's the word my driver uses.
But, for all intents and purposes, the "--something-gamma-table"
options allow me to set up any lookup table I like.
If you do that and set an arbitrary lookup table it's the same as
using Curves in Photoshop. Now, you can set gamma using Curves, but
you can also do much more. Gamma, however, is a very specific term.

Yes, sorry.
But, as an aside, note that it is *not* the same as using Curves in
Photoshop (at least when scanning at 8 bpc), since the scanner driver's
tables work at 16 bpc internally, while Photoshop's cannot.

But, see below as to why it is *really really not* the same as using
Curves in Photoshop, with my scanner.
scanimage [...] --red-gamma-table
0,32,64,96,128,160,192,224,255,255,[...],255

Sure, it still ends with 255, but I don't see any problem with that.
It's *not* a gamma table, it's a linear table. It's just like moving
the whitepoint, for all I can see.

It appears SANE uses the term gamma unconventionally. What SANE calls
"gamma table" is just a plain lookup table. It can be used for gamma,
of course, but it can also be used for anything else as well.

You got the point.
However, "gamma" has a very specific definition. It's not just *any*
lookup table. Gamma curves are created using a very specific formula:

output = input ^ gamma

Another important thing to understand is that using a lookup table is
*image editing*. That's post-processing and has nothing to do with
exposure. Exposure is something which happens as the CCD array passes
across the image. Anything one does to the image after that is image
editing or post-processing.
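To make the distinction concrete: a true gamma table follows the formula above, while SANE's "--*-gamma-table" options accept any 256-entry lookup table whatsoever. Here's a small sketch in Python (the helper names are mine, purely illustrative):

```python
import math

def gamma_table(gamma, size=256, maximum=255):
    """A genuine gamma curve: output = input ^ gamma, scaled to 0..255."""
    return [round(maximum * (i / (size - 1)) ** gamma) for i in range(size)]

def is_gamma_table(table, tol=1.0):
    """Guess whether an arbitrary LUT is actually a gamma curve, by
    fitting gamma from the midpoint and then checking every entry."""
    mid = table[128] / 255.0
    if not 0.0 < mid < 1.0:
        return False  # a clipped table can't be a pure gamma curve
    g = math.log(mid) / math.log(128 / 255.0)
    return all(abs(v - 255 * (i / 255.0) ** g) <= tol
               for i, v in enumerate(table))

print(is_gamma_table(gamma_table(2.2)))                       # True
print(is_gamma_table([min(255, 2 * i) for i in range(256)]))  # False: a whitepoint move
```

A whitepoint-moving table like the 0,32,64,...,255,255,... one quoted earlier in this thread fails the check: it's a perfectly valid LUT, just not a gamma curve.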

But do note that the scanner driver *could* in theory (and on my
scanner, I think, does) use the lookup tables' contents - somehow - to
influence the actual scan -- at scantime, when the CCD array passes.
Although the time is one indicator, it can also be very ambiguous.
There are many reasons why a scan may take longer.

For example, your bus throughput. If scans > 600 dpi take longer it's
an indicator that your bus is choked with data.

Uh, why? Scanning at 1200 dpi takes longer than scanning at 600 dpi,
but that's because the scanner's motor is moving at half the speed...
Does the scanner "stop-and-go" or is the scan performed in a single,
uninterrupted flow?

The short answer is "uninterrupted flow".

The long answer is that I do have bus problems (USB 1.1), and
stop-and-go's happen, but *only* if I scan at 16 bpc and/or if I scan
an area wider than a 35mm film.

I can assure you there are no stop-and-go's in the cases I am talking
about, as well as in those I'll present below.
That's very unlikely because it would cause banding. Stretching an
underexposed scan (which is what you describe) is something software is
very unlikely to do, because it would produce very poor results.

Sure... but I meant that I suspect that the Windows software just sets
tables with the appropriate whitepoint, and then the scanner
automagically does the exposure based on those tables.
Just like I suspect is happening with the Linux software.

But let's forget about the Windows software now, the open-source Linux
counterpart is already complicated enough.
Anyway, that can also be tested but you would have to use 16-bit depth
and some heavy math.


OK, that looks more and more to be a bus question (or some other
throughput problem) and not an exposure question. You're simply
getting too much data, and the reason it takes longer is that the
system has to catch up.

And why would that change depending on the specific lookup tables
provided, all the rest being equal?
Yes, so we have to "divide and conquer" which means back to basics and
not make any rash assumptions. Instead confirm each assertion before
going further. Otherwise we just compound the problem by basing
conclusions on incorrect data.

Ok. Let's start with the real thing :)
The first test I would do is the above "stop-and-go".

Well, I've answered this. Yes, but no -- in the sense that I'll take
care that this does not happen.

In the previous article that Google did not post, I did some
measurements with a stopwatch.
That had the advantage that I could measure the actual scan time,
ignoring the time needed for calibration and for moving the CCD to the
initial position.

This time I decided to let the computer do the measurements, so I wrote
a script -- but I can assure you that the current results are very
consistent with the ones taken manually with the stopwatch.

I took 33 scans of an unexposed area of developed negative Kodak film.
Each scan was made at 1200 dpi, with the "--film-type Positive" option
(not "--film-type Negative", below I'll explain why).
Each scan is of the same 5mm x 5mm area of the film.

Each scan was taken using a different lookup table (only one lookup
table for red, green and blue was used, even though the software
provides for three separate tables).

Let's call the "cutoff" the index of the last element of the lookup
table that is smaller than 255.
All elements before the cutoff are between 0 and 255, and all elements
after the cutoff are 255.

The elements before the cutoff are linear, so that, for example, a
table with cutoff=8 would look like

0,32,64,96,128,160,192,224,255,255,255,...,255

The tables I really used, however, do not follow exactly this pattern,
because of... well, errors in the program I wrote to generate them :)
But anyway.
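For reference, a cutoff table of the intended (error-free) shape can be generated in a few lines. This is a Python sketch, not the original generator program, which isn't shown here:

```python
def cutoff_table(cutoff, size=256, maximum=255):
    """Linear ramp up to the cutoff index, then 255 from there on.
    cutoff=8 gives 0,32,64,96,128,160,192,224,255,255,...,255."""
    table = []
    for i in range(size):
        if cutoff <= 0 or i >= cutoff:
            table.append(maximum)
        else:
            # ramp scaled so that index `cutoff` would land exactly on 256
            table.append(min(maximum, round(i * (maximum + 1) / cutoff)))
    return table

# Comma-separated, the way the --red-gamma-table option expects it:
print(",".join(map(str, cutoff_table(8)[:10])))  # 0,32,64,96,128,160,192,224,255,255
```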

In the following table, "cutoff" refers to the cutoff used for each
scan's lookup table.
"Red", "Green" and "Blue" refer to the mean color values in the
resulting scans.
"Time" refers to the time taken by each scan in seconds, measured as I
described above.


CUTOFF RED GREEN BLUE TIME
0 1.0 1.0 1.0 12.42
8 1.0 1.0 1.0 24.38
16 1.0 1.0 1.0 24.38
24 255.0 255.0 254.9 24.39
32 255.0 255.0 235.9 24.40
40 255.0 254.9 189.6 24.36
48 255.0 254.9 158.7 24.40
56 255.0 234.4 136.2 24.40
64 255.0 205.6 119.3 24.38
72 254.9 182.7 106.3 24.39
80 254.9 164.6 95.7 24.41
88 254.9 149.7 86.7 23.56
96 254.9 137.4 79.7 22.16
104 254.2 126.6 73.6 20.96
112 239.1 117.5 68.0 19.83
120 222.8 109.4 63.2 18.93
128 208.6 102.3 59.8 18.26
136 196.3 96.2 55.7 17.59
144 185.3 91.0 52.6 16.84
152 175.5 86.0 49.8 16.39
160 166.8 81.8 47.3 15.90
168 159.0 77.8 44.8 15.43
176 151.5 74.3 42.8 15.00
184 144.9 71.0 41.2 14.63
192 138.7 67.9 39.1 14.28
200 133.0 65.2 37.4 14.01
208 127.8 62.5 36.3 13.67
216 123.1 60.1 34.6 13.41
224 118.4 57.9 33.3 13.17
232 114.2 55.8 32.3 12.93
240 110.4 53.8 31.3 12.72
248 106.8 51.8 30.2 12.53
255 103.8 50.8 29.2 12.40


You can see that there is something strange with the low cutoff values
(probably due to errors in my generator program!), but the situation
stabilizes after 24.

At 88, exposure time seems to actually start changing, while before 88
it appears to remain constant (even though the mean colors do change in
the resulting image - so there definitely seems to be a mixture of
lookup table application *and* exposure changes).

When measuring the actual scan time (that is, minus calibration time
and stuff) with a stopwatch, scanning with cutoff=128 appears to take
just about exactly twice the time taken with cutoff=255.
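The measurement loop itself can be sketched like this. The option names mirror the scanimage invocations quoted in this thread (exact options vary by backend), the cutoff_table helper is the ramp generator described above, and actually running timed_scan of course requires the scanner:

```python
import subprocess
import time

def cutoff_table(cutoff, size=256, maximum=255):
    """Same ramp-then-255 table described above."""
    return [maximum if cutoff <= 0 or i >= cutoff
            else min(maximum, round(i * (maximum + 1) / cutoff))
            for i in range(size)]

def scan_command(cutoff):
    """Build the scanimage invocation for one test scan."""
    table = ",".join(map(str, cutoff_table(cutoff)))
    return ["scanimage", "--film-type", "Positive",
            "--resolution", "1200",
            "--red-gamma-table", table,
            "--green-gamma-table", table,
            "--blue-gamma-table", table]

def timed_scan(cutoff):
    """Run one scan and return its wall-clock duration in seconds."""
    start = time.monotonic()
    subprocess.run(scan_command(cutoff), stdout=subprocess.DEVNULL, check=True)
    return time.monotonic() - start

# e.g.:  for c in range(0, 256, 8): print(c, round(timed_scan(c), 2))
```

Note that wall-clock timing still includes calibration and carriage return, which is why the stopwatch measurements of the scan pass alone are the better comparison.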

By the way, perhaps I should mention that, one or two days ago, I
discovered that among the information that Epson only gives under NDA
is how to control auto-exposure. Oops. ;-)

Are you convinced now that I haven't been dreaming about this exposure
thing? <g>

But the bad parts come when scanning with "--film-type Negative" and
when using different lookup tables for each of the channels.

When using three lookup tables, strange things happen, and scan time
apparently changes depending on which channel uses which table.

When scanning with "--film-type Negative", scan time remains constant.
I suppose this is because, in negative mode, the green channel is
already automatically set to the longest possible exposure time (3x,
I'd judge).
But it's hard to know whether the exposures for the other two channels
change; they should, but given the strange behaviors mentioned above,
I'm not so sure they do.

When scanning with *both* "--film-type Negative" *and* three separate
lookup tables, well... you can guess... it actually rhymes with
"guess"... :)
Yes and no... Nominally it is but the film response curve is not
(strictly speaking). So while you can get acceptable results,
theoretically, it's two different things.


Yes and no... ;o) It depends on your requirements and your
environment. But that would be another digression...

I don't have fixed requirements, since I don't do this for a job or
anything.

My main requirement is to take the most from the scanner I have, with
as little per-scan manual intervention as possible.

I mean... if I have to spend two weeks writing a program to automate a
process, I'll probably do it.
But if I have to spend five minutes for *every* scan instead of
automating it, I probably won't.
(Which is why I wouldn't even consider your
multi-scan-and-manual-alignment-in-Photoshop, by the way!)

Also, I want to do things the right way. A good final scan doesn't
satisfy me as long as there's a doubt that it might have been chance.
That's why I don't care if I just "seem" to be able to remove the
orange mask from a photo: it might not work for the next one, unless
it's the "right" way to do it.
It's also the reason for all the questions about how to handle
"pseudo-multisampling" 4800dpi scans: I don't *care* if bicubic resize
in Photoshop "looks like" working; I want to know it's the
mathematically correct way of doing it.
[snip]

Try googling for it and if that fails, get this:

http://www.color.org/membersonly/profileinspector.html

Next, locate the profile for your film and the above program will dump
the characteristic curves in that profile as data! However, it's raw
data and will need some massaging before it can be used. The Profile
Inspector does dump all the other data you need to do that (white
point, etc).

Thanks, that will be useful as a last resort if I can't find something
easier, or am not satisfied with it.
[snip]

The official definition involves 100 ASA film and I don't know it off
hand. However, the actual technical definition is not really
important. What is important is what it means: Each step doubles the
amount. So, ev=2 is twice the exposure of ev=1. And ev=3 is four times
the exposure of ev=1, etc.
Ok.

However, this only applies in linear gamma (gamma = 1.0)! That's why I
included gamma in the above formula, just in case.
Now, you know that I can't give my scanner the ev value directly: I
must directly map every possible OldPix value to every possible NewPix
value. Am I right in assuming that, using your formula, I could set an
ev value by solving
NewPix = 2^ev * OldPix (with gamma=1.0)
or
NewPix = 2^(ev/Gamma) * (OldPix^Gamma)^(1/Gamma) (with gamma<>1.0)
?
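Numerically, both forms of the question's formula amount to the same thing: decode to linear, multiply by 2^ev, re-encode, which collapses to multiplying the encoded value by 2^(ev/gamma). A small sketch (illustrative only):

```python
def ev_shift(old_pix, ev, gamma=1.0, maximum=255):
    """Apply an ev-step exposure change to a gamma-encoded value:
    decode to linear, multiply by 2^ev, re-encode, clip."""
    linear = (old_pix / maximum) ** gamma        # decode
    linear *= 2.0 ** ev                          # each ev step doubles exposure
    new_pix = maximum * linear ** (1.0 / gamma)  # re-encode
    return min(maximum, round(new_pix))

print(ev_shift(64, 1))             # 128: at gamma 1.0, one step doubles the value
print(ev_shift(64, 1, gamma=2.2))  # 88: the same step is only a 2^(1/2.2)x multiply
print(ev_shift(200, 1))            # 255: clips
```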

If you're working in linear gamma it's real simple. You just
double/halve the value for each ev step. If you plot these values they
will be straight lines.

Yes, I asked because I am considering switching to gamma<>1.0 sooner or
later.
Currently, I'm working with gamma=1.0 simply because that's easier to
handle.

But since I'm getting 8 bpc from my scanner (16 bpc hogs the USB port
and the images get too big), perhaps gamma=1.0 is not the best choice.

But don't jump on me for working at 8 bpc straight off the scanner! I
have every intention to perform as much processing as possible using
the scanner's lookup tables, which still have 16 bits per channel to
work with.


by LjL
(e-mail address removed)
 

false_dmitrii

(e-mail address removed) wrote:


Well, notice that I said that the brightest color in the *frame* (i.e.
the picture, the photo) barely reaches 255.
This means that if the picture's whitepoint (blackpoint) is anything
darker than the film mask, the film mask's color itself will clip.

Now, I suppose that scanning any overexposed negative (or underexposed
slide) will benefit from an exposure that optimizes the *picture*'s
whitepoint rather than the film base color, which can then freely clip
above 255 -- who cares.

I didn't quite grasp this on the first try, but I think I get your
meaning now.

But if the negative's mask is pushed up to white, or even "beyond"
white via image-based white point adjustment, and the scan area is
cropped tightly around the actual image, what *in the image* might
still clip? Would it be the natural variance of the individual grains
and overall color mask, or a sign that something in the image itself,
however slight, is being thrown away? Or perhaps just an indication
that there's more surrounding mask in the frame than the preview
suggests? I was under the impression that the mask itself was the
clearest and therefore "whitest" possible part of the negative; is this
wrong in some way?

false_dmitrii
 

ljlbox

(e-mail address removed) wrote:
I didn't quite grasp this on the first try, but I think I get your
meaning now.

But if the negative's mask is pushed up to white, or even "beyond"
white via image-based white point adjustment, and the scan area is
cropped tightly around the actual image, what *in the image* might
still clip?

If the negative's mask is pushed up *to* white, nothing.
If it's pushed *beyond* white, then it depends -- on both *how much*
beyond white it is pushed, and on the image's contents.

But my goal would of course be *not to* make anything clip *in* the
image! (though a few clipped pixels here and there are usually not a
big problem)
Would it be the natural variance of the individual grains
and overall color mask, or a sign that something in the image itself,
however slight, is being thrown away?

Clipping certainly indicates that something is being thrown away (and
something gained! that is, if the clipping is caused by longer
exposure, and not just by curve adjustment during post-processing).

That's why, for example, Don is taking two scans, one at "normal"
exposure (making nothing clip) and one at "long" exposure (possibly
making part of the image clip, but gaining detail in the shadows).

By superimposing the two images obtained (though it is not really so
simple, as you may read in the various threads), he should obtain a net
gain.
Or perhaps just an indication
that there's more surrounding mask in the frame than the preview
suggests? I was under the impression that the mask itself was the
clearest and therefore "whitest" possible part of the negative; is this
wrong in some way?

It's wrong. The holes are the whitest possible part ;-)
But, really, yes, you're correct.


Anyway, I think you've been confused by my mention of "clipping the
orange mask": I was just describing a procedure I would use for
scanning negatives, and not saying that the clipping was a problem. It
can certainly be a problem when it's the image that's clipping, but not
if it's just the orange mask *around* the image (who cares about it?)
-- which is what I was saying.


by LjL
(e-mail address removed)
 

false_dmitrii

(e-mail address removed) wrote:

It's wrong. The holes are the whitest possible part ;-)
But, really, yes, you're correct.
;P

Anyway, I think you've been confused by my mention of "clipping the
orange mask": I was just describing a procedure I would use for
scanning negatives, and not saying that the clipping was a problem. It
can certainly be a problem when it's the image that's clipping, but not
if it's just the orange mask *around* the image (who cares about it?)
-- which is what I was saying.

Thanks, I understand now. Your post reminded me of certain symptoms I
thought I was seeing in my approach.

I have no doubt it's hideously hard to properly sync up the colors from
different exposures. I'm still coming to grips with what I'm able and
unable to do on a *single* negative...it'll be time for another post on
the matter once I've made another serious effort to track down related
info.

Actually, as long as I'm at it, can anyone point me to available
sources of the useful numerical measurements you use for negative color
balance? Are there easily obtained "typical" gamma
curve-at-various-exposures data sheets, for instance? I finally dug up
some of the basic equations behind negative gamma adjustments and don't
see a way, with my lack of film experience and tools, to hit the true
curves through trial-and-error. This stuff is really confusing from
the outside...are there *any* good reference books out there that fully
explain the math and the execution of negative gamma adjustments?

false_dmitrii
 

Don

Namely, Debian. Which is what I'm using.
I actually like the concept of knowing that the software I get from
Debian repositories is 100% free; after all, I can still get the rest
from other sources.

Indeed! I used to run Red Hat but felt increasingly uncomfortable as
they seem to become "Linux Microsoft". So when I finally re-organize
my system (one of these days...) I plan to install Debian too.
Note that I was talking about the Epson backend, not the whole of the
SANE sources. There is no mention of exposure *there*.

OK, forehead slapping time! ;o)

You're doing this on a flatbed, right?

I'm not familiar with Epson scanners and I just assumed (a dangerous
thing I often chastise others for doing) that you were running a film
scanner.

Now, assuming you are running a flatbed (here I go again, assuming...
;o)) no wonder there's no exposure! (Flatbeds in general don't offer
exposure because they do that automatically and internally.)

But before I go off on a tangent let's see if this assumption is true.
Actually, there is some code in the driver to invert the image (well,
it looks like that), but it's commented out.
God knows why, but who cares, it's easy enough to invert the image
(less easy is to do it *well*, but I guess the driver wouldn't really
help with that anyway).

That would make perfect sense too! If the flatbed does not offer film
scanning (for example there is no light in the lid) and you have to
place the film on the glass and then illuminate it yourself, no wonder
there's no negative mode in the driver!
Sure. Sorry for being misleading by using the term "gamma", but I did
it simply because that's the word my driver uses.

No, not your fault. I'm just making the distinction because I learned
(the hard way!) that all that matters down the road when we start
drawing conclusions. So, it's important to be clear we're talking
about the same thing.

Like the "detail" whether it's a film or a flatbed scanner! ;o)
Yes, sorry.
But, as an aside, note that it is *not* the same as using Curves in
Photoshop (at least when scanning at 8 bpc), since the scanner driver's
tables work at 16 bpc internally, while Photoshop's cannot.

Photoshop curves do work at 16 bit (some versions at 15 bit, actually!)
but you need to have the data in 16-bit as well. However, the curves
themselves are extrapolated from 8-bit.
But, see below as to why it is *really really not* the same as using
Curves in Photoshop, with my scanner.
OK.


But do note that the scanner driver *could* in theory (and on my
scanner, I think, does) use the lookup tables' contents - somehow - to
influence the actual scan -- at scantime, when the CCD array passes.

That I still find difficult to understand because it seems a very
complicated (and inexact) way to set exposure. I think you may be
simply observing the effect of the look-up table on the image.

Now, why the scan takes longer may be because something in the chain
gets overtaxed. For example, if the curves are applied internally in
the scanner (unlikely, but just as an example...) the simple 4-bit
microcontroller in there would be overworked.
Uh, why? Scanning at 1200 dpi takes longer than scanning at 600 dpi,
but that's because the scanner's motor is moving at half the speed...

Yes, but it has twice the data as well, so 1200 may be "stop-and-go".
The short answer is "uninterrupted flow".

OK, so it seems we can eliminate the bus.
The long answer is that I do have bus problems (USB 1.1), and
stop-and-go's happen, but *only* if I scan at 16 bpc and/or if I scan
an area wider than a 35mm film.

Yes, 1.1 is very slow. Even though scanners are slow devices, 1.1 is
too slow even for them.
And why would that change depending on the specific lookup tables
provided, all the rest being equal?

It's the infamous "it depends"... For example, I can conceive the
following scenario. Applying a look-up table (LUT) can be very time
consuming if floating point math is used and bit depth is 16-bit, for
example, especially if some 4-bit microcontroller has to do it on a
100 MB chunk of data!

So, if you supply it a linear gamma LUT (i.e. multiply by 1) that will
take far less time than some complicated curve. Therefore, it's
conceivable that a different LUT may cause different timing. But that
would be a very small difference, relatively speaking.

Anyway, that's all academic in the big scheme of things.
In the following table, "cutoff" refers to the cutoff used for each
scan's lookup table.
"Red", "Green" and "Blue" refer to the mean color values in the
resulting scans.
"Time" refers to the time taken by each scan in seconds, measured as I
described above.


CUTOFF RED GREEN BLUE TIME
0 1.0 1.0 1.0 12.42
... cut ...
255 103.8 50.8 29.2 12.40


You can see that there is something strange with the low cutoff values
(probably due to errors in my generator program!), but the situation
stabilizes after 24.

At 88, exposure time seems to actually start changing, while before 88
it appears to remain constant (even though the mean colors do change in
the resulting image - so there definitely seems to be a mixture of
lookup table application *and* exposure changes).

When measuring the actual scan time (that is, minus calibration time
and stuff) with a stopwatch, scanning with cutoff=128 appears to take
just about exactly twice the time taken with cutoff=255.

At linear gamma, 128 corresponds exactly to +1 AG exposure over 255.
So, in that sense, it does follow that this LUT somehow gets
translated into exposure!?

I really don't know what to make of the data? The time differences are
significant but I'm still puzzled how the scanner "knows" to
extrapolate a curve into exposure. I mean, everything is possible, but
it seems like such an unnecessarily complicated (and potentially very
inexact!) way of setting exposure!?

One other thing. What happens when you actually set different
"gamma-like" curves? What I mean by this is there is *no* cutoff. Both
starting and ending points are always the same but the middle changes.
You know, the curves look like a quarter of a circle. Just make sure
the start and end don't "clip" so the second value (from either end)
is not the same as maximum/minimum.

How do such curves affect the time?
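Such a no-cutoff test table might be generated like this. A sketch: I'm approximating the quarter-circle shape with a plain gamma curve, and checking that neither end clips, as Don specifies:

```python
def smooth_curve(gamma, size=256, maximum=255):
    """A gamma-shaped test curve with fixed endpoints and no flat
    'cutoff' region, so timing changes can't come from clipped entries."""
    return [round(maximum * (i / (size - 1)) ** gamma) for i in range(size)]

curve = smooth_curve(0.5)  # roughly quarter-circle-ish
assert curve[0] == 0 and curve[-1] == 255
# second value from either end must not equal the minimum/maximum:
assert curve[1] != 0 and curve[-2] != 255
print(curve[:6])  # [0, 16, 23, 28, 32, 36]
```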
By the way, perhaps I should mention that, one or two days ago, I
discovered that among the information that Epson only gives under NDA
is how to control auto-exposure. Oops. ;-)

Oh well, someone must have also reverse-engineered that in the Linux
community! Did you find out how to control exposure?
But the bad parts come when scanning with "--film-type Negative" and
when using different lookup tables for each of the channels.

When using three lookup tables, strange things happen, and scan time
apparently changes depending on which channel uses which table.

Another test may be to try changing one table at a time i.e. have G &
B tables flat (0,1... 254,255) but only change the R table, for
example.

I have nothing specific I'm trying to test with this, but just to see
how the scanner responds. Maybe some additional conclusion can be
drawn from that?
When scanning with "--film-type Negative", scan time remains constant.
I suppose this is because, in negative mode, the green channel is
already automatically set to the longest possible exposure time (3x,
I'd judge).

For now, I would limit the tests to Positive mode, because then no
additional processing takes place. I know that "negative" mode doesn't
invert or remove the mask but still, theoretically, Positive should
pass on the data "as is" without doing anything. Once that's figured
out we can turn on the Negative mode and see how it differs.
But it's hard to know whether the exposures for the other two channels
change; they should, but given the strange behaviors mentioned above,
I'm not so sure they do.

The above test may throw some light on that?
I don't have fixed requirements, since I don't do this for a job or
anything.

No, I just meant, you still have certain goals you want to achieve.
My main requirement is to take the most from the scanner I have, with
as little per-scan manual intervention as possible.

Then I would definitely advise looking into getting the 16-bit data
out somehow. Once you have streamlined your workflow, even at 1.1 you
can write a script and leave the scanner unattended while you go off
and do something else.
I mean... if I have to spend two weeks writing a program to automate a
process, I'll probably do it.
But if I have to spend five minutes for *every* scan instead of
automating it, I probably won't.
(Which is why I wouldn't even consider your
multi-scan-and-manual-alignment-in-Photoshop, by the way!)

I was thinking more about what happens with the data. The "prime
directive" in image processing is to limit the number of edits because
each change "corrupts" the data. So when editing an image it's always
best to use as few steps as possible. But that's really splitting
hairs...
But since I'm getting 8 bpc from my scanner (16 bps hogs the USB port
and the images get too big), perhaps gamma=1.0 is not the best choice.

Actually, there is a whole linear gamma "sect" ;o) which insists on
only using linear gamma (i.e. 1.0) and doing all the editing in linear
gamma because that minimizes any artefacts. They even calibrate their
monitors to gamma 1.0 so the images can be edited correctly (otherwise
they are too dark).

However, most people (me included) just use 2.2 because it's easier. I
consider the "loss" too small to be significant given one works with
16-bit depth, especially taking into account all other problems.
But don't jump on me for working at 8 bpc straight off the scanner! I
have every intention to perform as much processing as possible using
the scanner's lookup tables, which still have 16 bits per channel to
work with.

I would be very careful about this. Are you absolutely certain that
the LUTs are applied at 16-bit? Even if they are you are supplying
8-bit versions so they need to be converted/interpolated internally.

I know USB 1.1 is slow, but if you really want to get maximum quality
it would be better to scan with 16-bit depth. USB 2 cards are not
really that expensive. Or is it the scanner that's limited to 1.1?

Don.
 

Don

Clipping certainly indicates that something is being thrown away (and
something gained! that is, if the clipping is caused by longer
exposure, and not just by curve adjustment during post-processing).

One can also clip using curves, of course, but risks banding at low
bit depths. So it's always better to "push" the exposure.
That's why, for example, Don is taking two scans, one at "normal"
exposure (making nothing clip) and one at "long" exposure (possibly
making part of the image clip, but gaining detail in the shadows).

Yes, the clipping in the "shadows" scan is *very* severe! But I don't
care, of course, because that clipped data in the shadows scan (which
I throw away) is perfectly exposed in the nominal (highlights) scan.

I've settled on +4 AG for the shadows scan effectively turning my
14-bit scanner into an 18-bit scanner, once the two images are
combined.
By superimposing the two images obtained (though it is not really so
simple, as you may read in the various threads), he should obtain a net
gain.

The process is very complicated and time consuming (!) but the results
are really fantastic. It's night and day.

Superimposing a nominal scan and a combined scan in Photoshop, then
boosting shadows in both and comparing them is a real eye opener! In
the nominal scan the shadows are full of noise, while in the combined
scan the image is clear and I see all sorts of detail.

I just do that sometimes as "revenge" after all the Nikon and
Kodachromes have put me through!

I just look at them and go "Ahhh..." ;o)

Don.
 
D

Don

I have no doubt it's hideously hard to properly sync up the colors from
different exposures.

Yes, that has driven me nuts for years! Literally! In my case it's
even worse because of the non-linear Kodachrome characteristic film
curves. Nothing behaved as it "should" as different channels react to
exposure differently.

As I wrote before, my solution is to "compare" the two exposures and
generate a look-up table (LUT) to convert one into the other. That was
*the* breakthrough, in my case anyway.
I'm still coming to grips with what I'm able and
unable to do on a *single* negative...it'll be time for another post on
the matter once I've made another serious effort to track down related
info.

The thing I found most frustrating is that all that theoretical info
is, well... theoretical. It seems there is always some "other thing",
some "catch" which throws the theory off.

I'm sure it's possible to get to the bottom of it but I finally just
threw my hands up in the air and came up with the above "empirical"
solution. That method doesn't care about any theory and just simply
generates a LUT for any two images I throw at it.

There are some minor inaccuracies but I take care of that by averaging
out all the LUTs for a whole film to generate a single curve. That's
why I settled on a single exposure difference (the +4 AG) even though
individual images could be done with less exposure difference. But by
going for the "maximum" I streamline the operation *and* can average
out for a more accurate curve.
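A toy version of that empirical approach might look like this. The helper names and the per-value averaging are my guesses at one reasonable implementation; Don's actual code is not shown in the thread.

```python
def build_lut(short_img, long_img):
    """For each 8-bit value in the short-exposure image, record the
    average value the same pixels have in the long-exposure image.
    Images are flat lists of 8-bit pixel values."""
    sums = [0] * 256
    counts = [0] * 256
    for s, l in zip(short_img, long_img):
        sums[s] += l
        counts[s] += 1
    # Fall back to identity where a value never occurs in the image.
    return [round(sums[v] / counts[v]) if counts[v] else v
            for v in range(256)]

def average_luts(luts):
    """Average the per-frame LUTs into one curve for a whole film."""
    return [round(sum(lut[v] for lut in luts) / len(luts))
            for v in range(256)]
```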
Actually, as long as I'm at it, can anyone point me to available
sources of the useful numerical measurements you use for negative color
balance? Are there easily obtained "typical" gamma
curve-at-various-exposures data sheets, for instance? I finally dug up
some of the basic equations behind negative gamma adjustments and don't
see a way, with my lack of film experience and tools, to hit the true
curves through trial-and-error. This stuff is really confusing from
the outside...are there *any* good reference books out there that fully
explain the math and the execution of negative gamma adjustments?

I'm still to tackle my negatives, but according to my (preliminary)
notes there should be some stuff at:

www.marginalsoftware.com and www.aim-dtp.net

Don.
 

ljlbox

Don ha scritto:
[snip]

I've settled on +4 AG for the shadows scan effectively turning my
14-bit scanner into an 18-bit scanner, once the two images are
combined.

Uhm wait a moment, where do you *store* those 18 bits? :o)
Are you using HDR Shop all the time?
The process is very complicated and time consuming (!) but the results
are really fantastic. It's night and day.

I can imagine. Pity that my scanner apparently can't go beyond 3x
exposure (*if* it can even change the exposure time at all), and
since negatives are already 3x exposed on the green channel and little
less on the blue channel, I'm afraid I've got little to work with --
I'm not sure boosting the red channel alone would give appreciable
benefits...

Besides, longer exposure, as well as single-pass multi-sampling, ought
to be very easy to integrate in a scanner, for the manufacturer! Almost
cost-free, I'd think.

I suppose it's just marketing that prevents lower-end scanners from
having these functions... :-\

by LjL
(e-mail address removed)
 

ljlbox

Don ha scritto:

Indeed! I used to run Red Hat but felt increasingly uncomfortable as
they seem to become "Linux Microsoft". So when I finally re-organize
my system (one of these days...) I plan to install Debian too.

While we are off topic, let me suggest you to use aptitude instead of
apt-get for installing programs, if you get Debian.
I learned the hard way that apt-get leaves a lot of useless
dependencies around that are very hard to keep track of when you
uninstall programs, and the only remedy is to use aptitude right from
the start.

OK, forehead slapping time! ;o)

You're doing this on a flatbed, right?

I'm not familiar with Epson scanners and I just assumed (a dangerous
thing I often chastise others for doing) that you were running a film
scanner.

Oops. No I'm not. Yes it's a flatbed.
Now, assuming you are running a flatbed (here I go again, assuming...
;o)) no wonder there's no exposure! (Flatbeds in general don't offer
exposure because they do that automatically and internally.)

I wouldn't be surprised if my scanner didn't have exposure control. I
am surprised that it does seem to have it, albeit as an awkward
side-effect of other settings! (i.e. our famous lookup tables)
[snip]
Actually, there is some code in the driver to invert the image (well,
it looks like that), but it's commented out.
God knows why, but who cares, it's easy enough to invert the image
(less easy is to do it *well*, but I guess the driver wouldn't really
help with that anyway).

That would make perfect sense too! If the flatbed does not offer film
scanning (for example there is no light in the lid) and you have to
place the film on the glass and then illuminate it yourself, no wonder
there's no negative mode in the driver!

But there *is* a negative mode... only, it doesn't invert the image :)
still it's called "negative", and exposes so as to remove the orange mask.
In the Windows software, the "negative" mode inverts the image. I think
it's just some bug in the SANE driver that made them (temporarily?)
comment out the code.

Anyway, my scanner was sold with a transparency adaptor and a film
holder, and it's not even an option for my model, you have to buy it
that way.
Illumination comes by means of a lamp in the cap, which gets turned on
when transparency mode is selected (and the main lamp is of course
turned off).
[snip]
But do note that the scanner driver *could* in theory (and on my
scanner, I think, does) use the lookup tables' contents - somehow - to
influence the actual scan -- at scantime, when the CCD array passes.

That I still find difficult to understand because it seems a very
complicated (and inexact) way to set exposure.

Indeed. But do you always understand what's in the mind of
corporations?
I think you may be
simply observing the effect of the look-up table on the image.

Nah, come on, I've made some hundred test scans by now, I'm not that
thick! ;-)
Now, why the scan takes longer may be because something in the chain
gets overtaxed. For example, if the curves are applied internally in
the scanner (unlikely, but just as an example...) the simple 4-bit
microcontroller in there would be overworked.

Now this could be an explanation. I guess this possibility can only be
ruled out by visual observation of two 16 bpc scans, one made with
"long exposure" and one with "short exposure" and a stretched histogram
(stretched so that it looks like the "long exposure" one).

Done this, and you can see two scans of the same part of a very
underexposed slide, scanned at 2400 dpi, 16 bpc, color correction
disabled, at

http://ljl.150m.com/scans/ts1.jpg
("cutoff" = 255, then whitepoint moved to 30 in Photoshop)

http://ljl.150m.com/scans/ts2.jpg
("cutoff" = 30, whitepoint subsequently left alone)

Sorry, I had to save them in JPEG because of their size, but I set it
to the lowest possible compression.

By the way, I can tell with reasonable certainty that the curves *are*
applied internally in the scanner: I have the driver's source code for
the set_gamma_table() function in front of me right now, and have also
read the Epson scanner protocol reference, which lists a command to
*send lookup tables to the scanner*.

It's the infamous "it depends"... For example, I can conceive the
following scenario. Applying a look-up table (LUT) can be very time
consuming if floating point math is used and bit depth is 16-bit, for
example, especially if some 4-bit microcontroller has to do it on a
100 MB chunk of data!

Yes, but still, in that case I'd find it quite remarkable that the
lookup table / scan time relation looks almost exactly like the one
you'd expect from exposure changes corresponding to the table's
whitepoint!
[snip]

At linear gamma, 128 corresponds exactly to +1 AG exposure over 255.
So, in that sense, it does follow that this LUT somehow gets
translated into exposure!?

That's what I think. I also observed the "correct" scan time change at
cutoff=64, or something like that -- I don't remember now, it was in
the old data I've lost.
I really don't know what to make of the data. The time differences are
significant but I'm still puzzled how the scanner "knows" to
extrapolate a curve into exposure. I mean, everything is possible, but
it seems like such an unnecessarily complicated (and potentially very
inexact!) way of setting exposure!?

Come on, it's not really so complicated. 255/cutoff gives the correct
amount of exposure for a given table (relative to "standard", "1x"
exposure).

What might be more complicated is *modifying* the user-supplied lookup
table -- which definitely has to be modified, if part of it is
"implemented" using exposure time.

Yet, this modification simply consists of "stretching" the lookup table
so that the cutoff becomes 255 again.
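At linear gamma this decomposition really is just arithmetic. Here is a rough sketch; the function name and the resampling details are my own assumptions, not anything taken from the Epson protocol reference.

```python
def split_lut(lut):
    """Split a user-supplied 256-entry 8-bit LUT into
    (exposure_factor, stretched_lut).

    'cutoff' is the input value at which the table first reaches 255.
    The 0..cutoff range can be realized as extra exposure (255/cutoff,
    so cutoff 128 ~ +1 AG ~ 2x), with the remaining curve shape applied
    as a table stretched so its cutoff becomes 255 again.
    """
    cutoff = next((i for i, v in enumerate(lut) if v >= 255), 255)
    exposure = 255 / cutoff if cutoff else 1.0
    # Resample the 0..cutoff portion over the full 0..255 input range.
    stretched = [lut[min(round(i * cutoff / 255), 255)]
                 for i in range(256)]
    return exposure, stretched
```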

Hard to do for a 4-bit microcontroller perhaps, but why do you assume
there must be such an underpowered little chip in the scanner? How much
does a 30MHz CPU or DSP cost today?
One other thing. What happens when you actually set different
"gamma-like" curves? What I mean by this is there is *no* cutoff. Both
starting and ending points are always the same but the middle changes.
You know, the curves look like a quarter of a circle. Just make sure
the start and end don't "clip" so the second value (from either end)
is not the same as maximum/minimum.

How do such curves affect the time?

Haven't had time to try that too today -- and it's 4:00 now.
Will do that tomorrow.
Oh well, someone must have also reverse-engineered that in the Linux
community! Did you find out how to control exposure?

I don't think it's been reverse engineered, unless *we* are
reverse-engineering it right now.

(Ed Hamrick possibly reverse engineered it, as moving the exposure
slider in VueScan does change my scan times)

Above, you wrote:

"I mean, everything is possible, but
it seems like such an unnecessarily complicated (and potentially very
inexact!) way of setting exposure!?"

Well, couldn't this be your answer? It's unnecessarily complicated
because some marketing head decided that only NDA-bound people ought to
know how to control exposure.

About it being inexact, well, it's still a consumer flatbed. And it's
not going to be inexact if well implemented, anyway.
Another test may be to try changing one table at a time i.e. have G &
B tables flat (0,1... 254,255) but only change the R table, for
example.

I have nothing specific I'm trying to test with this, but just to see
how the scanner responds. Maybe some additional conclusion can be
drawn from that?

Done that. To my surprise, I didn't get the mess I expected: in fact,
the time taken for each scan was always proportional to the "cutoff" of
the color that had highest "cutoff" (i.e., if red cutoff = 200, green
cutoff = 100, blue cutoff = 50, then the scan time was proportional to
200).
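One reading of that observation, as a formula (a guess at the rule the firmware follows, combining it with the 255/cutoff relation discussed earlier, not anything documented):

```python
def scan_time_factor(r_cutoff, g_cutoff, b_cutoff):
    """Exposure (and hence scan-time) multiplier implied by the
    per-channel cutoffs: only the highest cutoff seems to matter."""
    return 255 / max(r_cutoff, g_cutoff, b_cutoff)
```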

I've observed some slight variations where, for example, if red cutoff
= 200 (and it is higher than the blue and green) the scan time is not
the same as if green cutoff = 200 (and higher than red and blue).
But these variations were small, and I suppose they can be attributed
to the fact that, even before scanning, the scanner has an idea of "1x
exposure" that varies between colors, because of the automatic
calibration.

Moreover, things go strange when cutoff < 20 or so, similar to what
happened with the other test I posted.
I think this is the reason why I thought it would be a mess: until now,
I mostly tested with "extreme" (very low) cutoff values, and so it
seemed that the results were weird.

As an example: if cutoff is R1 G51 B1 then the mean pixel value is R255
G254.8 B255, and the scan takes 20 seconds; but if the cutoff is R51 G1
B1, the mean pixel is R1 G1 B1, and the scan still takes 20 seconds.

So, apparently, there is a bug in the firmware's lookup table handling,
which shows with "extreme" lookup tables. However, it remains to be
seen whether the same bug could also affect "normal" lookup tables in a
subtler way...
For now, I would limit the tests to Positive mode, because then no
additional processing takes place. I know that "negative" mode doesn't
invert or remove the mask

Wait, it does remove the mask (it tries to, at least; not all masks
are equal, but it generally comes up with something approaching neutral
gray). It doesn't invert, though.
[snip]

My main requirement is to take the most from the scanner I have, with
as little per-scan manual intervention as possible.

Then I would definitely advise looking into getting the 16-bit data
out somehow. Once you have streamlined your workflow, even at 1.1 you
can write a script and leave the scanner unattended while you go off
and do something else.

The script is there already, and used to work decently for slides (I
say "used to" simply because I haven't used it for a long time). Less
so for negatives.

However, I can't leave the scanner unattended for "too long", as the
film holder only takes 6 frames (or 4 slides).
I was thinking more about what happens with the data. The "prime
directive" in image processing is to limit the number of edits because
each change "corrupts" the data. So when editing an image it's always
best to use as few steps as possible. But that's really splitting
hairs...

No, it's not, I realize that and I intend to achieve it. Much of the
script I use for scanning is written with this in mind... even though
"in mind" doesn't always correspond with "in practice" :) yet.
Actually, there is a whole linear gamma "sect" ;o) which insists on
only using linear gamma (i.e. 1.0) and doing all the editing in linear
gamma because that minimizes any artefacts. They even calibrate their
monitors to gamma 1.0 so the images can be edited correctly (otherwise
they are too dark).

I am aware of that. Won't name names, but I'm aware of that. I must say
that I have not made up my mind on the issue yet, but for now I intend
to follow the majority and assume gamma 2.2 is best (at least for 8 bpc
images).
At least until I decide to investigate this further.
However, most people (me included) just use 2.2 because it's easier. I
consider the "loss" too small to be significant given one works with
16-bit depth, especially taking into account all other problems.

Oh, but I was thinking 8-bit. With 16 bits per channel, I personally do
not care at all about the gamma used.
But I'd probably work at gamma=1.0 in that case, because, uh... I find
*that* easier. Well, your exposure formula at least becomes much easier
with gamma=1.0!
I would be very careful about this. Are you absolutely certain that
the LUTs are applied at 16-bit?

Quite, from seeing various histograms.
Even if they are, you are supplying
8-bit versions so they need to be converted/interpolated internally.

Which is what I think happens. However, as I described above, the
interpolation behaves weirdly with "extreme" lookup tables, which might
mean that it is not so good in general, either.
I know USB 1.1 is slow, but if you really want to get maximum quality
it would be better to scan with 16-bit depth. USB 2 cards are not
really that expensive. Or is it the scanner that's limited to 1.1?

No, it's my old computer.

I have a couple of problems with buying an USB 2 card.
To begin with, I have only one free PCI slot (that computer has 4
network cards installed; yes I know what a hub is).
Then it's a K6 at 300 MHz, with a slow IDE interface (don't remember
ATA-what); so, even if the 16-bit data came through smoothly, perhaps
the HD couldn't cope with it. And even if it did, processing would get
irritatingly slow.
Lastly, I doubt my little flatbed actually resolves much more than 8
bits of color; thus it is possible that everything I can gain with
16-bit can also be achieved with some wise use of 8-bit.


by LjL
(e-mail address removed)
 

Don

Don ha scritto:
[snip]

I've settled on +4 AG for the shadows scan effectively turning my
14-bit scanner into an 18-bit scanner, once the two images are
combined.

Uhm wait a moment, where do you *store* those 18 bits? :o)

I just pack them very tightly so they fit into a 16-bit file. ;o)

But seriously...
Are you using HDR Shop all the time?

No, I don't because it can only handle 8-bit input.

During my "merge" I bring the shadows scan down so the two histograms
overlap. After that I can merge with a hard edge (no feathering) and
still see no visible border where the two images are "glued".

This "tone-mapping", or as I call it "color coordination", was a big
problem for a very long time. The trouble is that the 3 RGB channels
do not respond to an exposure boost equally. The blue just races ahead
with green behind and trailing by red. In other words, the higher the
exposure the more blue the scan gets!

Now, that's bad enough, but Nikons have a problem with Kodachromes in
the first place! Even a perfectly exposed slide (both when picture was
taken i.e. film exposure and scanning exposure) still comes out with a
terrible and ugly blue cast. This is due to a number of different
reasons.
I can imagine. Pity that my scanner apparently can't go beyond 3x
exposure (*if* it can even change the exposure time at all), and
since negatives are already 3x exposed on the green channel and little
less on the blue channel, I'm afraid I've got little to work with --
I'm not sure boosting the red channel alone would give appreciable
benefits...

You should really look into a film scanner. Even a cheap film scanner
is likely to give you much better results and also you'll get much
more freedom to try out things and experiment. For example, a Nikon
LS30 (2700 dpi) can be had quite cheaply second hand. The only problem
with Nikons is Kodachromes as explained above, but they do a very (!)
good job with everything else.
Besides, longer exposure, as well as single-pass multi-sampling, ought
to be very easy to integrate in a scanner, for the manufacturer! Almost
cost-free, I'd think.

I suppose it's just marketing that prevents lower-end scanners from
having these functions... :-\

Absolutely! It's all marketing which makes me absolutely furious. I
hate it when products are intentionally *crippled* for marketing
reasons!

As I say, if there was access to each scan line to change exposure and
focus, even a cheap consumer scanner could be made to produce results
similar to a high-end drum scanner, at least, with respect to dynamic
range. But, of course, nobody would then waste money on high priced
models.

Don.
 

Don

While we are off topic, let me suggest you to use aptitude instead of
apt-get for installing programs, if you get Debian.
I learned the hard way that apt-get leaves a lot of useless
dependencies around that are very hard to keep track of when you
uninstall programs, and the only remedy is to use aptitude right from
the start.

Those dependencies (at least in case of Red Hat's rpms) are worse than
Microsoft's "DLL hell"! It just drives me nuts. Upgrading a single
application was close to impossible as I kept getting "circular
references".

So, thanks very much for the aptitude tip! Filed for future use!
I wouldn't be surprised if my scanner didn't have exposure control. I
am surprised that it does seem to have it, albeit as an awkward
side-effect of other settings! (i.e. our famous lookup tables)

Yes, that is very strange.
But there *is* a negative mode... only, it doesn't invert the image :)
still it's called "negative", and exposes so as to remove the orange mask.
In the Windows software, the "negative" mode inverts the image. I think
it's just some bug in the SANE driver that made them (temporarily?)
comment out the code.

I wasn't referring to SANE but to the native low level driver you
downloaded from Epson. In theory, SANE should first interrogate this
low level driver for capabilities. If this driver reports that the
scanner does not support negative scanning (for example, there is no
light in the lid, as I mentioned) then SANE would not offer it.

Now, SANE can try to do negatives anyway, but that would be outside of
scanner's operating parameters.
Anyway, my scanner was sold with a transparency adaptor and a film
holder, and it's not even an option for my model, you have to buy it
that way.
Illumination comes by means of a lamp in the cap, which gets turned on
when transparency mode is selected (and the main lamp is of course
turned off).

In that case the scanner does offer film scanning so there should be
support. I suspect the low level Epson driver you downloaded only
provides the data and it's up to the application to do negative
inversion. In your case that would be SANE, so in theory, it should do
it. But as you say it's been commented out.
Indeed. But do you always understand what's in the mind of
corporations?

No, but the problem here is it would make their software unreliable
and by extension produce poor results. I just can't see why would they
do that.
Now this could be an explanation. I guess this possibility can only be
ruled out by visual observation of two 16 bpc scans, one made with
"long exposure" and one with "short exposure" and a stretched histogram
(stretched so that it looks like the "long exposure" one).

I think that would be tricky because we don't know the starting point
but that's a question for Kennedy, really... ;o)
By the way, I can tell with reasonable certainty that the curves *are*
applied internally in the scanner: I have the driver's source code for
the set_gamma_table() function in front of me right now, and have also
read the Epson scanner protocol reference, which lists a command to
*send lookup tables to the scanner*.

That would support the theory that the reason the scan takes longer
may be because of all the internal calculations within the scanner?
Yes, but still, in that case I'd find it quite remarkable that the
lookup table / scan time relation looks almost exactly like the one
you'd expect from exposure changes corresponding to the table's
whitepoint!

Yes, that's another strange coincidence.
Come on, it's not really so complicated. 255/cutoff gives the correct
amount of exposure for a given table (relative to "standard", "1x"
exposure).

That's because you're assuming a simple case (a straight line). But
what if you give it a complicated curve (not a straight line)? Such a
curve has no relation to exposure. That's why I suggest the test below
with "gamma-like" curves where start/end points are not changed.
What might be more complicated is *modifying* the user-supplied lookup
table -- which definitely has to be modified, if part of it is
"implemented" using exposure time.

That's what I mean!
Yet, this modification simply consists of "stretching" the lookup table
so that the cutoff becomes 255 again.

Hard to do for a 4-bit microcontroller perhaps, but why do you assume
there must be such an underpowered little chip in the scanner? How much
does a 30MHz CPU or DSP cost today?

I don't think it's a question of hardware cost but software
development. Firmware is the trickiest programming out there because
everything depends on it. (That's why all manufacturers these days
have modifiable firmware.) So, it may be that they know they have a
good firmware/hardware combination and don't want to break it.
Haven't had time to try that too today -- and it's 4:00 now.
Will do that tomorrow.
OK.

Above, you wrote:

"I mean, everything is possible, but
it seems like such an unnecessarily complicated (and potentially very
inexact!) way of setting exposure!?"

Well, couldn't this be your answer? It's unnecessarily complicated
because some marketing head decided that only NDA-bound people ought to
know how to control exposure.

No, that's too much of a conspiracy theory. There are much easier ways
to lock people in than sabotage their own code. I mean, that would
cause them more trouble than it's worth.
About it being inexact, well, it's still a consumer flatbed. And it's
not going to be inexact if well implemented, anyway.

Yes, but that's a whole new level of inexactness.
Done that. To my surprise, I didn't get the mess I expected: in fact,
the time taken for each scan was always proportional to the "cutoff" of
the color that had highest "cutoff" (i.e., if red cutoff = 200, green
cutoff = 100, blue cutoff = 50, then the scan time was proportional to
200).

Hmmm!? Oh well, it was worth a try. At least we know that the highest
cutoff is consistent and it's not related to any one single color.
I've observed some slight variations where, for example, if red cutoff
= 200 (and it is higher than the blue and green) the scan time is not
the same as if green cutoff = 200 (and higher than red and blue).
But these variations were small, and I suppose they can be attributed
to the fact that, even before scanning, the scanner has an idea of "1x
exposure" that varies between colors, because of the automatic
calibration.

Yes, you can eliminate small variations because no two scans are ever
the same. One of the first tests I made with my flatbed was to keep
decreasing the resolution until I get the same image (binary compare)
but this never happened! Even at the smallest resolution (50) the
images are different.
Moreover, things go strange when cutoff < 20 or so, similar to what
happened with the other test I posted.
I think this is the reason why I thought it would be a mess: until now,
I mostly tested with "extreme" (very low) cutoff values, and so it
seemed that the results were weird.

As an example: if cutoff is R1 G51 B1 then the mean pixel value is R255
G254.8 B255, and the scan takes 20 seconds; but if the cutoff is R51 G1
B1, the mean pixel is R1 G1 B1, and the scan still takes 20 seconds.

So, apparently, there is a bug in the firmware's lookup table handling,
which shows with "extreme" lookup tables. However, it remains to be
seen whether the same bug could also affect "normal" lookup tables in a
subtler way...

No, once you get into the deep shadows other things come into play.
For example, you may not have any data that dark in the image! Or at
least the CCD may not sense it (check the histogram of a raw scan).
Also, this is where most noise is present and that seriously corrupts
any data which makes measurements at the shadow edge under a certain
threshold very unreliable.
Wait, it does remove the mask (it tries to, at least; not all masks
are equal, but it generally comes up with something approaching neutral
gray). It doesn't invert, though.

In that case definitely do all tests in Positive mode because we can
eliminate at least one unknown variable. Positive should just pass the
data directly or at least not mess with it as much as other modes.

....
Oh, but I was thinking 8-bit. With 16 bits per channel, I personally do
not care at all about the gamma used.
But I'd probably work at gamma=1.0 in that case, because, uh... I find
*that* easier. Well, your exposure formula at least becomes much easier
with gamma=1.0!

Yes, all calculations become easier. That's how I started when I was
just experimenting. But now I've modified all my formulas to take
gamma into account so I don't have to think about it anymore.

Regarding 8/16-bit. If you plan to do some editing afterwards, then
16-bit would be essential. Otherwise there would be too much
corruption. Even just changing to gamma 2.2 to display or print the
image would seriously corrupt the histogram (you'd get the so-called
"comb histogram" with huge gaps on the left side).
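The comb effect is easy to reproduce in pure Python: re-encoding 8-bit gamma-1.0 data to gamma 2.2 stretches the shadows, so many dark output codes are never produced at all.

```python
def encode_gamma(values, gamma=2.2):
    """Re-encode 8-bit linear (gamma 1.0) values with a gamma curve."""
    return [round(255 * (v / 255) ** (1 / gamma)) for v in values]

linear = list(range(256))            # every possible 8-bit input value
encoded = set(encode_gamma(linear))

# Dark output codes that no input maps to: the gaps between the
# "teeth" of the comb histogram, on the left side as Don says.
missing = [v for v in range(256) if v not in encoded]
```

Running this shows that fewer than 256 distinct codes survive the conversion, with the missing ones clustered in the shadows; at 16-bit the same operation leaves no visible gaps after reduction to 8-bit.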
Lastly, I doubt my little flatbed actually resolves much more than 8
bits of color; thus it is possible that everything I can gain with
16-bit can also be achieved with some wise use of 8-bit.

The main advantage of 16-bit is not that you get more color (although
that's nice) but it gives you more "elbow room" to make edits without
resulting in artefacts. After all, images get converted into 8-bit in
the end because our monitors and printers are only 8-bit.

But just like my "resolution test" these colors also vary from scan to
scan. Nevertheless, even if I start with an 8-bit image, I would
definitely convert it to 16-bit before I do any editing.

Don.
 

Lorenzo J. Lucchini

Don said:

Those dependencies (at least in case of Red Hat's rpms) are worse than
Microsoft's "DLL hell"! It just drives me nuts. Upgrading a single
application was close to impossible as I kept getting "circular
references".

So, thanks very much for the aptitude tip! Filed for future use!

That's not precisely what I meant. I know it's a "dependency hell" with
RedHat, but it's not even nearly as bad with Debian in my experience, no
matter whether you use apt-get or aptitude.

The problem with apt-get is that it automatically installs dependencies
when you install a program, but it doesn't remove them when you remove
the program. Aptitude does that.

You should really give Debian a try if you were unsatisfied with
RedHat's package system, IMHO.
[snip]
But there *is* a negative mode... only, it doesn't invert the image :)
still it's called "negative", and exposes so as to remove the orange mask.
In the Windows software, the "negative" mode inverts the image. I think
it's just some bug in the SANE driver that made them (temporarily?)
comment out the code.


I wasn't referring to SANE but to the native low level driver you
downloaded from Epson. In theory, SANE should first interrogate this
low level driver for capabilities. If this driver reports that the
scanner does not support negative scanning (for example, there is no
light in the lid, as I mentioned) then SANE would not offer it.

Hold on, there's something I've forgotten to mention. The driver is from
"Epson Kowa" (now "Epson Avasys"), a Japanese company that, from what
I've been able to understand, has really little to do with actual Epson.

Well, it must have *something* to do with it (otherwise it couldn't even
call itself "Epson" I suppose!), but I have the feeling that the bonds
are far from tight.

The Epson Kowa ("epkowa") backend for SANE was forked from the original
SANE driver ("epson"), and most of the code in the two drivers is the same.

Actually, looking at recent SANE versions, it appears that some original
"epkowa" code has been brought back to the "epson" backend.

This is all to say that the driver is far from "official". But anyway,
it does check for a transparency unit before showing the corresponding options.

In that case the scanner does offer film scanning so there should be
support. I suspect the low level Epson driver you downloaded only
provides the data and it's up to the application to do negative
inversion. In your case that would be SANE, so in theory, it should do
it. But as you say it's been commented out.

No, it wouldn't be SANE.

As far as I can understand SANE, it's conceptually divided into three
parts: the main SANE libraries, the frontends and the backends.

Backends drive the scanner and are specific to vendors and/or models.
Frontends are the "applications", which call SANE to scan.
The SANE libraries are the glue.

The frontend I use is normally "scanimage", a command-line program
supplied with SANE.
Another important frontend, for Epson scanner, is iScan! from Epson
Kowa, which comes together with their own "epkowa" backend.

iScan! does invert images when in negative mode. The code that I found
commented out, however, is in the "epkowa" backend, not in iScan!.

Perhaps they changed their minds at some point about where to handle
negative inversion.

Now this could be an explanation. I guess this possibility can only be
ruled out by visual observation of two 16 bpc scans, one made with
"long exposure" and one with "short exposure" and a stretched histogram
(stretched so that it looks like the "long exposure" one).

I think that would be tricky because we don't know the starting point
but that's a question for Kennedy, really... ;o)

Well, but have you looked at the JPEGs? Perhaps it takes some knowledge
to make fine judgements, but I can see a very clear difference in noise
amount -- in favor of the one with "longer exposure".

Both were kept as 16bpc until right before JPEGging them.
[snip]
Come on, it's not really so complicated. 255/cutoff gives the correct
amount of exposure for a given table (relative to "standard", "1x"
exposure).

That's because you're assuming a simple case (a straight line). But
what if you give it a complicated curve (not a straight line)? Such a
curve has no relation to exposure. That's why I suggest the test below
with "gamma-like" curves where start/end points are not changed.

Yes, I've been busy with trying to get 4800dpi (which, by the way, I
might have finally achieved), but I've done that now.

The result is that it always takes the same time no matter what gamma is
applied. I've tried from gamma=0.1 to gamma=5.0 (the limits my table
generator allows).

Also, a scan taken with "cutoff" < 255 (namely, 128) *and* gamma<>1.0
(namely, 0.1) takes the same time as the same scan taken with gamma=1.0.

This is evidence that only the "cutoff" influences scan time, and curve
shape does not.
That's what I mean!

But is it that hard?

Another thing I didn't tell you yet: scanimage also offers a
"--gamma-correction" option, with only two settings: 1.0 and 1.8 (though
I suspect that the scanner firmware itself supports more possibilities,
as the reference mentions there should be settings for obtaining "CRT
gamma", "dot-matrix gamma", and things like that).

The interesting thing is that this "--gamma-correction" is applied *in
addition* to the "--***-gamma-table" I supply.

That is, if I give it a linear lookup table *but* set
"--gamma-correction" to 1.8, the resulting image comes out gamma corrected.
This shows that the firmware is able to, and does, modify the
user-supplied lookup tables.

And yes, it's the firmware that does this, not the driver. It's
specified in the Epson reference manual.

Besides, color correction is done in the scanner as well, and that's
also an operation that requires a little maths!
I don't think it's a question of hardware cost but software
development. Firmware is the trickiest programming out there because
everything depends on it. (That's why all manufacturers these days
have modifiable firmware.) So, it may be that they know they have a
good firmware/hardware combination and don't want to break it.

Firmware is also hard to code because it often needs to be coded in
assembly or something like that. Using more advanced processors opens
the road to C firmware programming.

An aside: has anybody ever found a way to send new firmware to a
scanner, and possibly to read out the currently installed firmware?
It'd be interesting to know if someone has managed to do this, even
though it would only work with a specific brand or even a specific model.
[snip]
About it being inexact, well, it's still a consumer flatbed. And it's
not going to be inexact if well implemented, anyway.

Yes, but that's a whole new level of inexactness.

But think of this: the lookup tables must, as you said, be
interpolated *anyway* since they only contain 256 values instead of 16384.
It's just a matter of, how can I put it, spacing the values differently
when interpolating, depending on the exposure time.

When the exposure time is longer, the table has to be "stretched"; there
should be no loss in doing this (since interpolation would be required
even with "normal" exposure), as long as the code that does it does not
contain too many bugs.

As for exactness in specifying exposure time, well, 0 to 255 gives
plenty. Ok, make that 90 to 255, since cutoff=90 seems to be the lower
limit to exposure changes.
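To make the idea concrete, here is a small Python sketch of one way such "stretching during interpolation" could work. The table layout (256 entries expanded over a 16-bit input range) matches the discussion above, but the exposure semantics are my assumption, not the documented Epson firmware behaviour.

```python
# Hypothetical sketch: expand a 256-entry lookup table over a 16-bit
# input range while "stretching" it by an exposure factor, so that no
# extra precision is lost compared to plain interpolation.

def stretch_lut(lut8, exposure_factor, in_max=65535):
    """Interpolate a 256-entry LUT over 0..in_max, compressing the input
    axis by exposure_factor (a longer exposure makes a given input land
    further up the original table, clipping at the top)."""
    out = []
    for i in range(in_max + 1):
        pos = (i / in_max) * 255 * exposure_factor  # position in the 0..255 table
        lo = min(int(pos), 255)
        hi = min(lo + 1, 255)
        frac = pos - lo if lo < 255 else 0.0
        out.append(lut8[lo] * (1 - frac) + lut8[hi] * frac)
    return out

identity = list(range(256))
stretched = stretch_lut(identity, exposure_factor=2.0)
# With a 2x factor, the identity table saturates halfway up the input range.
```

The point is that the interpolation step, which has to happen anyway, is simply re-spaced; nothing extra is thrown away.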
[snip trying different "cutoffs" for different colors]

Hmmm!? Oh well, it was worth a try. At least we know that the highest
cutoff is consistent and it's not related to any one single color.

Yes. Well, it teaches me that writing a script to do these tests, instead
of just doing them manually in a random fashion, gives results that are
harder to misinterpret.
[snip]
So, apparently, there is a bug in the firmware's lookup table handling,
which shows with "extreme" lookup tables. However, it remains to be
seen whether the same bug could also affect "normal" lookup tables in a
subtler way.

No, once you get into the deep shadows other things come into play.
For example, you may not have any data that dark in the image! Or at
least the CCD may not sense it (check the histogram of a raw scan).
Also, this is where most noise is present and that seriously corrupts
any data which makes measurements at the shadow edge under a certain
threshold very unreliable.

Really, there is some bug. How could cutoff = R1 G1 B1 give different
pixel values from cutoff = R1 G1 B51, *in the red and green channels*?
They switch from 1 to 255 depending on what's in the *other* channels!

Certainly, cutoff=1 is a *very* extreme case.
Less weird weirdnesses can probably be attributed to optical problems
with the shadows.
[snip]
[snip]
Well, your exposure formula at least becomes much easier
with gamma=1.0!

Yes, all calculations become easier. That's how I started when I was
just experimenting. But now I've modified all my formulas to take
gamma into account so I don't have to think about it anymore.

That's about my plan, too.
Regarding 8/16-bit. If you plan to do some editing afterwards, then
16-bit would be essential. Otherwise there would be too much
corruption. Even just changing to gamma 2.2 to display or print the
image would seriously corrupt the histogram (you'd get the so-called
"comb histogram" with huge gaps on the left side).

But keep in mind that, although using a 16-bit scan wouldn't exhibit
such a "comb histogram" but a more continuous one, much of that
continuity is really just the scanner's random noise!

Try this: possibly using a low-end scanner, scan a picture at 8-bit.
Then play with the levels, boost gamma and whatever.
The histogram will get "comby", like this one:
http://ljl.150m.com/scans/hist1.gif
from http://ljl.150m.com/scans/scan1.jpg

Now apply some artificial noise to the image, such that it is just
visible: you will get a continuous histogram again, like this:
http://ljl.150m.com/scans/hist2.gif
from http://ljl.150m.com/scans/scan2.jpg

Now scan the same picture at 16-bit and do the same levels adjustments.
Is the 16-bit image really that different (or, if you did use a low-end
scanner, *any* different) from the 8-bit image with noise added?
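For anyone who wants to try this without a scanner, here is a synthetic stand-in for the test: quantize a smooth ramp to 8 bits, stretch the levels, count the empty histogram bins, then add a little noise afterwards and count again. All parameters are illustrative, not measured from any real device.

```python
import random

def stretch(v, lo=0, hi=128):
    """Map [lo, hi] to [0, 255], clipping - a crude Levels adjustment."""
    return max(0, min(255, round((v - lo) * 255 / (hi - lo))))

# A dark 8-bit "scan": a smooth ramp that only reaches level 128.
ramp = [round(i * 128 / 9999) for i in range(10000)]

plain = [stretch(v) for v in ramp]                       # comb histogram
random.seed(0)
noisy = [max(0, min(255, v + random.randint(-2, 2))) for v in plain]

gaps_plain = 256 - len(set(plain))    # ~127 of the 256 bins stay empty
gaps_noisy = 256 - len(set(noisy))    # the noise fills nearly all gaps
```

The noisy histogram looks continuous, but the extra "levels" are pure noise, which is exactly the point being argued here.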


But back to exposure - I seem to have discovered one more thing.
Obviously, exposure time is also lengthened by placing a dark
transparent material (such as blank negative film) on the calibration
area of the scanner, which then thinks that's the white point.

Now, if I scan with a very low "cutoff" (lower than about 80, since that
seems to be the useful limit) *and* use the trick above, I get longer
scan times than by just using the low "cutoff" -- not by very much, but
I have 18 seconds vs 21 seconds for a 1200dpi scan of a one millimeter area.

So, it would seem that the firmware can control exposure to a larger
extent than it allows me to do with the weird lookup tables (the latter
apparently being about 3x).


by LjL
(e-mail address removed)
 
D

Don

The problem with apt-get is that it automatically installs dependencies
when you install a program, but it doesn't remove them when you remove
the program. Aptitude does that.

Oh, I see! I misunderstood! That's also very important so no "garbage"
is left behind after you uninstall something.

BTW, I always install "everything" in a distribution - including the
sources! So uninstalling was never really a problem for me because I
don't delete anything afterwards.

But upgrading a package while waiting for the next official
distribution was always a problem because of those "circular
references".
You should really give Debian a try if you were unsatisfied with
RedHat's package system, IMHO.

Oh, I will! Also, because of the whole Debian approach of "GNU only".
Hold on, there's something I've forgotten to mention.

Aha! Another missing detail! ;o)
The driver is from
"Epson Kowa" (now "Epson Avasys"), a Japanese company that, from what
I've been able to understand, has really little to do with actual Epson.
...
This is all to say that the driver is far from "official". But anyway,
it does check for a transparency unit before showing the relevant options.

Oh... I thought it was the "official" Epson driver. I was actually a
little surprised by that because even though some companies release
Linux drivers for their hardware most don't. Especially, for "niche"
markets like scanners.

OK, that could account for a lot of the "unusual" things!
As far as I can understand SANE, it's conceptually divided into three
parts: the main SANE libraries, the frontends and the backends.

Backends drive the scanner and are specific to vendors and/or models.
Frontends are the "applications", which call SANE to scan.
The SANE libraries are the glue.

OK, I didn't know much about how SANE was constructed.
iScan! does invert images when in negative mode. The code that I found
commented out, however, is in the "epkowa" backend, not in iScan!.

Yes, that's exactly what I said and what I would expect. It's the job
of the actual end-user application to do negative inversion and orange
mask removal.
Perhaps they changed their minds at some point about where to handle
negative inversion.

Could be...? But, usually that would be a job for the front-end i.e.
end-user application. Sometimes these things are done in the library
(the Windows equivalent would be TWAIN, I guess) but never in low
level routines (the actual drivers, or back end) called by these
libraries.
Well, but have you looked at the JPEGs? Perhaps it takes some knowledge
to make fine judgements, but I can see a very clear difference in noise
amount -- in favor of the one with "longer exposure".

Yes I have, but my problem is that we're dealing with two different
scans. So there is different data. It is possible to do some heavy
math which will not be affected by these small differences, but that's
above my head. Also, because of all the bad experiences in the past
when dealing with something like that I'm always afraid of some
"unknown" which may cause a wrong conclusion so that's why I'm so
careful.
Yes, I've been busy with trying to get 4800dpi (which, by the way, I
might have finally achieved), but I've done that now.

The result is that it always takes the same time no matter what gamma is
applied. I've tried from gamma=0.1 to gamma=5.0 (the limits my table
generator allows).

I didn't actually mean gamma but a curve which is not straight -
although gamma 0.1 comes close to a straight line while gamma 5 would
be very curved.

So, that would be a good test.
Also, a scan taken with "cutoff" < 255 (namely, 128) *and* gamma<>1.0
(namely, 0.1) takes the same time as the same scan taken with gamma=1.0.

This is evidence that only the "cutoff" influences scan time, and curve
shape does not.

Does the curve shape influence the image in any way? Or is the curve
only used to get the cut-off and the shape is ignored?

In other words, is the logic as follows:

1. Determine cut-off point and deduce exposure from that. (That's the
hardware bit which causes the scan to take a different amount of time.)

2. After getting the scan with this exposure, apply the curve. (That's
the software bit independent from the actual hardware scan.)

The trouble is, that would be tricky to do because after the exposure,
the curve is no longer applicable! In other words, if the curve is
applied "as is" it would effectively do the exposure *again* only this
time in software.

Because of that, in order to avoid double-exposure this (the cut-off)
would have to be removed by setting it to 255,255 *but* keep the shape
of the curve the same! This can be done mathematically, but it just
seems so unnecessarily complicated.
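A minimal sketch of that renormalization, assuming the curve is a plain 256-entry byte table: rescale the input axis so the cut-off lands back on 255 while the shape is preserved. This is only my reading of the idea, not anything the firmware is known to do.

```python
# Stretch table[0..cutoff] back over the full 0..255 input range so only
# the curve's shape (not the exposure component) remains.

def remove_cutoff(table, cutoff):
    out = []
    for i in range(256):
        pos = i * cutoff / 255            # where this input fell originally
        lo = int(pos)
        hi = min(lo + 1, 255)
        frac = pos - lo
        out.append(round(table[lo] * (1 - frac) + table[hi] * frac))
    return out

# A straight line that hits 255 at input 128 (i.e. cutoff = 128):
cutoff = 128
steep = [min(255, round(i * 255 / cutoff)) for i in range(256)]
flat = remove_cutoff(steep, cutoff)
# 'flat' comes out (approximately) as the identity table: the exposure
# has been factored out, leaving only the curve's shape.
```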

The alternative (which is what I thought was happening) is to always
use the same exposure and simply apply the curve afterwards. That will
give the illusion of changing exposure but it's all done in software.

But the trouble with that is, why does the scan timing change!?
But is it that hard?

Not really but it does require more calculations.
Another thing I didn't tell you yet: scanimage also offers a
"--gamma-correction" option, with only two settings: 1.0 and 1.8 (though
I suspect that the scanner firmware itself supports more possibilities,
as the reference mentions there should be settings for obtaining "CRT
gamma", "dot-matrix gamma", and things like that).

Gamma 1.0 is so-called "linear gamma" i.e. no change. You get the data
directly from the scanner. Gamma 1.8 is "Macintosh" gamma because
Apple monitors are calibrated to this value. Windows monitors are
usually calibrated to 2.2.
The interesting thing is that this "--gamma-correction" is applied *in
addition* to the "--***-gamma-table" I supply.

That is a bit strange.
That is, if I give it a linear lookup table *but* set
"--gamma-correction" to 1.8, the resulting image comes out gamma corrected.
This shows that the firmware is able to, and does, modify the
user-supplied lookup tables.

And yes, it's the firmware that does this, not the driver. It's
specified in the Epson reference manual.

Gamma is usually done by end-user application. I've never heard of
firmware doing gamma, but it's possible.
An aside: has anybody ever found a way to send new firmware to a
scanner, and possibly to read out the currently installed firmware?
It'd be interesting to know if someone has managed to do this, even
though it would only work with a specific brand or even a specific model.

That depends on each model and manufacturer. There are no open
standards and it's all very proprietary. What's more, firmware code is
usually encoded to avoid disassembly. And, of course, in most cases
you don't even know what microcontroller is used.
But think of this: the lookup tables must, as you said, be
interpolated *anyway* since they only contain 256 values instead of 16384.
It's just a matter of, how can I put it, spacing the values differently
when interpolating, depending on the exposure time.

Yes, but there are 256 intermediate values for each "step" in the
8-bit lookup table. In order to convert from 8-bit to 16-bit each of
these steps is converted to 256 values.

So, using 8-bit tables to set exposure would be very inaccurate. You
really need 16-bit accuracy.
When the exposure time is longer, the table has to be "stretched"; there
should be no loss in doing this (since interpolation would be required
even with "normal" exposure), as long as the code that does it does not
contain too many bugs.

As for exactness in specifying exposure time, well, 0 to 255 gives
plenty. Ok, make that 90 to 255, since cutoff=90 seems to be the lower
limit to exposure changes.

No, even 0-255 is not enough if you want to set the exposure exactly.
Of course, it all depends on how much accuracy you want, but 8-bit
scale is just way too "rough".

When I set exposure, I do it in hundredths, i.e. something like 1.26 AG. You
could never get that much accuracy by using 8-bit scale especially as
you get into higher exposure values.
Yes. Well, it teaches me that writing a script to do these tests, instead
of just doing them manually in a random fashion, gives results that are
harder to misinterpret.

It just gives you more data. The more data the more accurate the
results! I simply wrote a program to plot or analyze this data
automatically.
Really, there is some bug. How could cutoff = R1 G1 B1 give different
pixel values from cutoff = R1 G1 B51, *in the red and green channels*?
They switch from 1 to 255 depending on what's in the *other* channels!

That would simply be random noise and is to be expected. Actually,
that's exactly what I mean and why it's so hard (impossible!) to
measure anything in the shadow part. Especially using only 8-bit
accuracy!
Certainly, cutoff=1 is a *very* extreme case.
Less weird weirdnesses can probably be attributed to optical problems
with the shadows.
Exactly.


But keep in mind that, although using a 16-bit scan wouldn't exhibit
such a "comb histogram" but a more continuous one, much of that
continuity is really just the scanner's random noise!

No, we're talking about raw data. At gamma 1.0 there *is* continuity
at the left side. If you edit this in 16-bit it gives you a lot of
elbow room. An 8-bit scan also has continuity at gamma 1.0 but your
elbow room is virtually 0. Even the smallest edit will cause banding
and a comb histogram if you start with an 8-bit image.

If, on the other hand, you start with a 16-bit image and edit it
followed by a conversion to 8-bit (for display or print) you will get
continuity in the histogram.

As a test, convert a 16-bit image to 8-bit first and then apply
*exactly the same* edits and it would cause all sorts of artefacts.
Try this: possibly using a low-end scanner, scan a picture at 8-bit.
Then play with the levels, boost gamma and whatever.
The histogram will get "comby", like this one:
http://ljl.150m.com/scans/hist1.gif
from http://ljl.150m.com/scans/scan1.jpg

Now apply some artificial noise to the image, such that it is just
visible: you will get a continuous histogram again, like this:
http://ljl.150m.com/scans/hist2.gif
from http://ljl.150m.com/scans/scan2.jpg

That's what I call "corruption" of data. This, BTW, is exactly what
Vuescan does to *mask* its original data because it's so bad.
Now scan the same picture at 16-bit and do the same levels adjustments.
Is the 16-bit image really that different (or, if you did use a low-end
scanner, *any* different) from the 8-bit image with noise added?

Absolutely! This is *very* important. The difference is *huge*!

Don't be misled by what the histogram looks like *on the surface*.

You actually have to analyze the data. In your 8-bit example by
applying noise you are seriously corrupting this data - and that *on
top* of all the corruption caused by editing in 8-bit space in the
first place!!!

If you want quality it's absolutely essential to edit in 16-bit!
But back to exposure - I seem to have discovered one more thing.
Obviously, exposure time is also lengthened by placing a dark
transparent material (such as blank negative film) on the calibration
area of the scanner, which then thinks that's the white point.

Now, if I scan with a very low "cutoff" (lower than about 80, since that
seems to be the useful limit) *and* use the trick above, I get longer
scan times than by just using the low "cutoff" -- not by very much, but
I have 18 seconds vs 21 seconds for a 1200dpi scan of a one millimeter area.

That's probably to do with how exposure is handled. You're probably
dealing with *two* exposures: absolute and relative. Calibration
apparently sets the absolute (or baseline) exposure, and any other
exposure after that is then relative i.e. applied *on top* of the
baseline exposure.
So, it would seem that the firmware can control exposure to a larger
extent than it allows me to do with the weird lookup tables (the latter
apparently being about 3x).

No, you're just changing the baseline on top of which you apply the
given exposure.

Don.
 
L

Lorenzo J. Lucchini

Don said:
On Fri, 16 Sep 2005 18:44:23 +0200, "Lorenzo J. Lucchini"

[snip]
The driver is from
"Epson Kowa" (now "Epson Avasys"), a Japanese company that, from what
I've been able to understand, has really little to do with actual Epson.
...

This is all to say that the driver is far from "official". But anyway,
it does check for a transparency unit before showing the relevant options.


Oh... I thought it was the "official" Epson driver. I was actually a
little surprised by that because even though some companies release
Linux drivers for their hardware most don't. Especially, for "niche"
markets like scanners.

I really don't know how "official" to consider it.
If you go to the Epson site (I only tried with www.epson.it), choose
Support, select "Multifunzione" (multi-function device), "Stylus Photo
RX500", and then "Linux", you'll find references to the Epson Kowa
driver, at least in the FAQ.

It's not there as a "featured" driver like the one for Windows is, though.
OK, that could account for a lot of the "unusual" things!

Most probably. When using Linux, you get used to living with the
"unusual" :)
Yes I have, but my problem is that we're dealing with two different
scans. So there is different data. It is possible to do some heavy
math which will not be affected by these small differences, but that's
above my head. Also, because of all the bad experiences in the past
when dealing with something like that I'm always afraid of some
"unknown" which may cause a wrong conclusion so that's why I'm so
careful.

I realize that. But still, it's the scan made with the "weird" settings
(custom tables and all) that looks better -- I would expect the opposite
to happen if the scanner were doing something obscure, and not just
taking a longer exposure.

The only other thing justifying this that I can think of would be
excessive histogram stretching caused by setting the "short"
scans' whitepoint at 30 in Photoshop... but since I kept everything in 16
bpc all the time, that shouldn't be an issue.
[snip]

I didn't actually mean gamma but a curve which is not straight -
although gamma 0.1 comes close to a straight line while gamma 5 would
be a very curved.

So, that would be a good test.

Well, gamma has an advantage on other curves, in that the
"gamma4scanimage" utility I have can generate gamma curves, while other
curves would have to be generated by hand :)

Just for completeness, gamma4scanimage has the syntax
gamma4scanimage gamma [shadow [highlight [maxin [maxout]]]]

where maxin is the number of values in the table (255 in my case - well,
it ought to be 256 actually, but I suppose it's off by one), and maxout is
the maximum value to be output in the table (still 255 for me).

The other parameters are what you would expect, with "highlight" being
what I've called the "cutoff" point.
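For reference, this is roughly what such a tool would compute, re-created in Python from the syntax above; the real gamma4scanimage's exact formula and rounding may well differ.

```python
# Rough re-creation of gamma4scanimage-style table generation, based only
# on the parameter list described above - an assumption, not its source.

def gamma_table(gamma, shadow=0, highlight=255, maxin=255, maxout=255):
    """maxin+1 output values: 0 up to 'shadow', maxout from 'highlight'
    (the "cutoff") upward, and a gamma curve in between."""
    table = []
    for i in range(maxin + 1):
        if i <= shadow:
            table.append(0)
        elif i >= highlight:
            table.append(maxout)
        else:
            x = (i - shadow) / (highlight - shadow)
            table.append(round(maxout * x ** (1.0 / gamma)))
    return table

linear = gamma_table(1.0)                  # identity table
boosted = gamma_table(1.8, highlight=128)  # gamma 1.8, cutoff at 128
```

As far as I can tell, gamma4scanimage prints such a table as a comma-separated list suitable for scanimage's gamma-table options.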
Does the curve shape influence the image in any way? Or is the curve
only used to get the cut-off and the shape is ignored?

The curve's shape is definitely used.

For example, by setting "--gamma-correction" to 1.0 and then using a
curve representing gamma=1.8, I obtain the same kind of picture (and the
same histogram) I get by setting "--gamma-correction" to 1.8 and using a
curve representing gamma=1.0 (or giving no curve at all).
In other words, is the logic as follows:

1. Determine cut-off point and deduce exposure from that. (That's the
hardware bit which causes the scan to take a different amount of time.)

2. After getting the scan with this exposure, apply the curve. (That's
the software bit independent from the actual hardware scan.)

Yes, that's what I think it is, except that the curve must be modified
at step "1 1/2" before being applied, but you know this.
The trouble is, that would be tricky to do because after the exposure,
the curve is no longer applicable! In other words, if the curve is
applied "as is" it would effectively do the exposure *again* only this
time in software.

Yes, precisely.
[snip]

The alternative (which is what I thought was happening) is to always
use the same exposure and simply apply the curve afterwards. That will
give the illusion of changing exposure but it's all done in software.

But the trouble with that is, why does the scan timing change!?

... so consistently with exposure (also see the graph I'm pointing you
to further below)?

And, why is a scan made with a low cut-off much less noisy than one made
with cutoff=255 and then "cutoff'ed" in software?
[snip]
The interesting thing is that this "--gamma-correction" is applied *in
addition* to the "--***-gamma-table" I supply.

That is a bit strange.

I found that weird too, but as I said previously, I'm starting to get
used to this kind of things...

But again, no matter how strange, it does show that the firmware is
capable of applying transformations to the lookup tables before applying them.

Gamma is usually done by end-user application. I've never heard of
firmware doing gamma, but it's possible.

You can check, if you want to get the PDF from Epson with the ESC/I
protocol reference. You'll see there is a command for setting gamma, one
for setting "gamma tables" (i.e. the lookup tables we're working with),
and one for setting color correction.

You have to (freely) register with Epson to get that PDF, though, but
the licence doesn't seem particularly restrictive (i.e. you can still
write code for driving an Epson after reading their manuals :).

There are even commands to set "sharpness" and "brightness", though they
aren't implemented in my scanner model... now, you wouldn't think
sharpening is something the firmware ought to do!
But on the other hand, I gather that digicams usually also apply
internal sharpening.
[snip]

Yes, but there are 256 intermediate values for each "step" in the
8-bit lookup table. In order to convert from 8-bit to 16-bit each of
these steps is converted to 256 values.

So, using 8-bit tables to set exposure would be very inaccurate. You
really need 16-bit accuracy.

[snip]

No, even 0-255 is not enough if you want to set the exposure exactly.
Of course, it all depends on how much accuracy you want, but 8-bit
scale is just way too "rough".

When I set exposure, I do it in hundredths, i.e. something like 1.26 AG. You
could never get that much accuracy by using 8-bit scale especially as
you get into higher exposure values.

Look, I realize you've been nurtured by a real film scanner ;-)
But to me, 160 or 170 exposure time steps, which seems to be the maximum
my scanner can be told to do (and maybe the number of actual possible
exposure times is even less), looks quite plenty for a flatbed scanner
that "can also do film as an afterthought"!
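The arithmetic behind that "160 or 170 steps" figure is easy to check, assuming exposure really is 255/cutoff and taking the empirical lower limit of 90 quoted earlier:

```python
# Exposure steps available if exposure = 255/cutoff and the useful cutoff
# range is 90..255 (the empirical limits mentioned in this thread).

cutoffs = range(90, 256)
exposures = [255 / c for c in cutoffs]

steps = len(exposures)                 # 166 selectable exposure values
max_exposure = max(exposures)          # 255/90, i.e. about 2.83x
step_near_1x = 255 / 254 - 255 / 255   # about 0.004x per step at the bottom
step_near_top = 255 / 90 - 255 / 91    # about 0.031x per step at the top
```

So the scale is fine near 1x but, as Don points out, noticeably coarser toward the maximum exposure.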
It just gives you more data. The more data the more accurate the
results! I simply wrote a program to plot or analyze this data
automatically.

I've done that too. You can see a graph at
http://ljl.150m.com/scans/scantime.gif

It's the same concept as the table I showed you in a previous posting,
though the actual data are not the same (those I used for the graph were
taken at finer sampling intervals).
The graph's X axis is the "cutoff".
Really, there is some bug. How could cutoff = R1 G1 B1 give different
pixel values from cutoff = R1 G1 B51, *in the red and green channels*?
They switch from 1 to 255 depending on what's in the *other* channels!

That would simply be random noise and is to be expected. Actually,
that's exactly what I mean and why it's so hard (impossible!) to
measure anything in the shadow part. Especially using only 8-bit
accuracy!

But *random* noise is supposed to be random! I get a scan where *all*
the pixels are zero in (say) the red channel, and then in the following
scan they are all 255 -- and the red channel LUT was *not* changed
between the two scans, only the LUTs for the other two channels were!

Though I think I'm seeing weird histograms with images taken with "not
very" extreme LUTs.
I think I'll script some tests for this.
No, we're talking about raw data. At gamma 1.0 there *is* continuity
at the left side. If you edit this in 16-bit it gives you a lot of
elbow room. An 8-bit scan also has continuity at gamma 1.0 but your
elbow room is virtually 0. Even the smallest edit will cause banding
and a comb histogram if you start with an 8-bit image.

But that's what I meant.
My point was that, even though you see a "nicer" histogram after editing
the 16-bit image than the 8-bit one, much of that "niceness" is simply
due to noise "filling up the gaps" in the histogram.

In the 8-bit version, the noise can't "fill up the gaps", because it's
already irremediably "snapped" to a 0..255 value.

In other words, I think you'd see more posterization in the edited 8-bit
image, but also think much of that posterization is there in the 16-bit
version as well -- only masked by some noise that the 8-bit version
can't reproduce.
If, on the other hand, you start with a 16-bit image and edit it
followed by a conversion to 8-bit (for display or print) you will get
continuity in the histogram.

As a test, convert a 16-bit image to 8-bit first and then apply
*exactly the same* edits and it would cause all sorts of artefacts.

But that's also because the image editor would be *working at 8-bit
internally*, so each subsequent edit loses data.

But try this instead: start with an 8-bit image, then *convert it to
16-bit*, then make your edits, then convert it back to 8-bit.

This is as opposed to your: start with a 16-bit bit image, then make
your edits, then convert it to 8-bit.

I think you'd see that, with low-end scanners, the culprit is *the image
editor* working at 8-bit internally, rather than the 16-bit image being
(much) "better" from the start.
That's what I call "corruption" of data. This, BTW, is exactly what
Vuescan does to *mask* its original data because it's so bad.

Please don't talk Vuescan to me... mom has told me that, when I join a
newsgroup, it's good manners that I leave the flames to the regulars.

Anyway, it's not really corruption of the data, as long as you take care
that the noise *only* "fills in the gaps" in the histogram, without
actually starting to overlap real data.

Which I have not done in my example, since it's not quite easy to do (at
least in Photoshop).

But I think that, in principle, this would give you the same results as
an equivalently "stretched" 16-bit scan -- if that scan was made with a
scanner featuring a 16 bit A/D but without enough quality for data to
really go beyond 8 bits per channel.

I obviously can't know *for sure* that mine is such a scanner, but I've
read in various places that these low-end flatbeds usually can't make
much use of the lower 8 bits. The figure I have in mind right now is 8.5
bits per channel of real data, for an average flatbed.

(OK, 8.5 is still more than 8.0, granted)

by LjL
(e-mail address removed)
 
D

Don

I really don't know how "official" to consider it.
If you go to the Epson site (I only tried with www.epson.it), choose
Support, select "Multifunzione" (multi-function device), "Stylus Photo
RX500", and then "Linux", you'll find references to the Epson Kowa
driver, at least in the FAQ.

I see, then it is an official driver.
Most probably. When using Linux, you get used to living with the
"unusual" :)

Oh, I know... That's why we love Linux! ;o)
I realize that. But still, it's the scan made with the "weird" settings
(custom tables and all) that looks better -- I would expect the opposite
to happen if the scanner were doing something obscure, and not just
taking a longer exposure.

Yes, but the stuff I look for can't be seen with the naked eye. You
need to actually analyze the data.
... so consistently with exposure (also see the graph I'm pointing you
to further below)?

And, why is a scan made with a low cut-off much less noisy than one made
with cutoff=255 and then "cutoff'ed" in software?

That's easy!

When you boost exposure (which is what a low cutoff is) this longer
exposure penetrates the shadows.

When you edit a short exposure (cutoff=255) scan in software, all you
do is *show* the noise. You just make it more visible by brightening
it up in software.

That's why it's essential to use 16-bit depth for any post-processing.
Even though 16-bit can't recreate data which doesn't exist, it's
much more "forgiving" with the data that you do have.
I found that weird too, but as I said previously, I'm starting to get
used to this kind of things...

But again, no matter how strange, it does show that the firmware is
capable of applying transformations to the lookup tables before applying them.

I think we can now slowly confirm your initial results.

There is definitely some weird processing going on especially with
regard to exposure. However, you've pretty much figured out *what* it
does and we can make some pretty good educated guesses *how* it's done
although we are both confused *why* it's done this way!?

But that doesn't matter. If you can get *repeatable* results - and you
can - then you can simply use this knowledge and not worry about *why*
Epson does it this way.
You can check, if you want, by getting the PDF from Epson with the ESC/I
protocol reference. You'll see there is a command for setting gamma, one
for setting "gamma tables" (i.e. the lookup tables we're working with),
and one for setting color correction.

Oh, I believe you! It's just unusual.
Look, I realize you've been nurtured by a real film scanner ;-)

Tortured, is more like it! ;o)

Kodachromes/Nikon! Grrrr... ;o)
But *random* noise is supposed to be random! I get a scan where *all*
the pixels are zero in (say) the red channel, and then in the following
scan they are all 255 -- and the red channel LUT was *not* changed
between the two scans, only the LUTs for the other two channels were!

I didn't realize the jump was that high. I thought you just meant a
few random pixels here and there are different.

With the above settings a jump from 0 to 255 is definitely wrong. I
would expect some difference between the two scans not only because of
noise but also because even two scans with the same setting are never
the same. Also, there is some leakage between channels where one
channel can "corrupt" a neighboring channel but such a drastic jump
from 0 to 255 is just too much.
But that's what I meant.
My point was that, even though you see a "nicer" histogram after editing
the 16-bit image than the 8-bit one, much of that "niceness" is simply
due to noise "filling up the gaps" in the histogram.

No, it's not! You do get more legitimate image data in a 16-bit image.

Another thing to keep in mind is that you can't see all this data with
a naked eye! Your monitor is 8-bit as are your eyes (!). Actually some
believe eyes are really only 6-bit. Therefore much of that data is
"invisible". However, it's essential for editing because you can do
much more before negative effects become visible.
In other words, I think you'd see more posterization in the edited 8-bit
image, but I also think much of that posterization is there in the 16-bit
version as well -- only masked by some noise that the 8-bit version
can't reproduce.

The thing is that is *not* noise. It's data. Believe me, 16-bit is
essential to producing better results.
But that's also because the image editor would be *working at 8-bit
internally*, so each subsequent edit loses data.
Exactly!

But try this instead: start with an 8-bit image, then *convert it to
16-bit*, then make your edits, then convert it back to 8-bit.

This is as opposed to your: start with a 16-bit image, then make
your edits, then convert it to 8-bit.

And that will produce better results than staying in 8-bit! Indeed,
that's precisely what I do with images from my digital camera which
only generates 8-bit.
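That workflow is easy to verify. A small numpy sketch (the gamma values are arbitrary, chosen just to exercise rounding): apply a gamma curve and its inverse, once rounding every step to 8 bits and once promoting to 16 bits first. The 8-bit chain destroys shadow levels that the 16-bit chain preserves:

```python
import numpy as np

img8 = np.arange(256, dtype=np.uint8)          # a simple 8-bit gradient

def gamma(x, g, depth_max):
    """Gamma curve, rounded to the working bit depth."""
    return np.round(((x / depth_max) ** g) * depth_max)

# Chain done entirely in 8-bit: round to 8 bits after every step.
a = gamma(img8.astype(float), 2.2, 255)
a = gamma(a, 1 / 2.2, 255).astype(np.uint8)

# Same chain after promoting to 16-bit first.
b = img8.astype(float) * 257                   # 8-bit -> 16-bit
b = gamma(b, 2.2, 65535)
b = gamma(b, 1 / 2.2, 65535)
b = np.round(b / 257).astype(np.uint8)         # back to 8-bit

# The 16-bit round trip retains far more distinct levels.
print(len(np.unique(a)), len(np.unique(b)))
```

Each intermediate rounding at 8 bits collapses nearby shadow values onto the same level; rounding at 16-bit precision keeps them apart until the final conversion.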
I think you'd see that, with low-end scanners, the culprit is *the image
editor* working at 8-bit internally, rather than the 16-bit image being
(much) "better" from the start.

Believe me, even with low end scanners, 16-bit is always better if you
plan to edit images afterwards.
Please don't talk Vuescan to me... mom has told me that, when I join a
newsgroup, it's good manners that I leave the flames to the regulars.

No, it's just a fact. Using noise in such a way is very good at hiding
original data regardless of what software uses it.
Anyway, it's not really corruption of the data, as long as you take care
that the noise *only* "fills in the gaps" in the histogram, without
actually starting to overlap real data.

No, adding noise to an image is always corruption of data. If the data
is not there to start with, you are "inventing" it.

I mean, this can produce visually pleasing results by eliminating one
problem, but it will create others. Adding noise is a known editing
method, but it's a method of last resort i.e. when there's no other
option.

If you do have a choice, you should always go with better initial
data, instead of trying to recreate it artificially later.
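Both points show up in a toy numpy sketch (illustrative numbers only): starting from a "stretched" scan whose histogram populates only every fourth level, a couple of counts of random noise makes the histogram look continuous again -- but the in-between values are invented, not recovered detail:

```python
import numpy as np

rng = np.random.default_rng(1)

# A "stretched" 8-bit scan: only every 4th level is populated.
posterized = (rng.integers(0, 64, 100_000) * 4).astype(np.uint8)

# Add +/-2 counts of random noise: the histogram gaps fill in, but the
# added values carry no information about the original scene.
noise = rng.integers(-2, 3, posterized.shape)
dithered = np.clip(posterized.astype(int) + noise, 0, 255)

print(len(np.unique(posterized)))  # 64 populated levels
print(len(np.unique(dithered)))    # nearly the full 0..255 range
```

The dithered histogram looks smoother, which is exactly why the technique can be visually pleasing while still being, strictly speaking, fabricated data.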
But I think that, in principle, this would give you the same results as
an equivalently "stretched" 16-bit scan -- if that scan was made with a
scanner featuring a 16-bit A/D but without enough quality for data to
really go beyond 8 bits per channel.

If there's no data to start with, that's another story. But I do
believe that even your low end scanner can get more than 8-bits.

Don.
 
