Should I scan Slides, Negatives, Photos at 24-bit, 48-bit, or some other color depth? 2400 or 4800 ppi?

Alan Bremner

Compressed with LZW, yes, it [TIF] is lossless. It is also possible to
compress TIF with JPG algorithms, which is lossy. LZW will give you
between 35% and 70% compression with no loss of data.

'Scuse me butting in here, but related to the above....

Elements 3.0 gives me the option of using LZW or ZIP compression when
saving files in TIF format. Both are described as "lossless". Is there
any advantage in using one over the other?
 
Dances With Crows

Compressed with LZW, yes, it [TIF] is lossless. It is also possible
to compress TIF with JPG algorithms, which is lossy.
Elements 3.0 gives me the option of using LZW or ZIP compression when
saving files in TIF format. Both are described as "lossless". Is there
any advantage in using one over the other?

ZIP compression? Er... tiff.h from the SGI TIFF library defines the
following compression modes, plus a few others that are obsolete or
rarely used:

#define COMPRESSION_NONE      1      /* dump mode */
#define COMPRESSION_CCITTRLE  2      /* CCITT modified Huffman RLE */
#define COMPRESSION_CCITTFAX3 3      /* CCITT Group 3 fax encoding */
#define COMPRESSION_CCITTFAX4 4      /* CCITT Group 4 fax encoding */
#define COMPRESSION_LZW       5      /* Lempel-Ziv & Welch */
#define COMPRESSION_JPEG      7      /* %JPEG DCT compression */
#define COMPRESSION_PACKBITS  32773  /* Macintosh RLE */
#define COMPRESSION_DEFLATE   32946  /* Deflate compression */

....so ZIP compression is *probably* RLE or Deflate, though it'd be
impossible to tell for sure without saving an image in "ZIP compression"
and running tiffinfo on it. IME LZW is more efficient than Deflate or
RLE if you have color or grayscale images. Also, practically everything
will read LZW TIFF, while Deflate and RLE may not be supported since
they were always less popular.
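For anyone without tiffinfo handy, here is a minimal sketch that does the
same check with libtiff (the same library tiff.h comes from); compile
with -ltiff:

/* Print the COMPRESSION_* tag of a TIFF file - a poor man's tiffinfo. */
#include <stdio.h>
#include <tiffio.h>

int main(int argc, char *argv[])
{
    TIFF *tif;
    uint16 compression = COMPRESSION_NONE;

    if (argc < 2) {
        fprintf(stderr, "usage: %s file.tif\n", argv[0]);
        return 1;
    }
    tif = TIFFOpen(argv[1], "r");
    if (tif == NULL)
        return 1;

    /* the tag defaults to COMPRESSION_NONE if absent */
    TIFFGetFieldDefaulted(tif, TIFFTAG_COMPRESSION, &compression);
    printf("compression tag = %u\n", compression);

    TIFFClose(tif);
    return 0;
}

Save a test file with "ZIP compression" and the printed value settles the
question (32946, or the newer Adobe variant 8, would mean Deflate).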
 
Kennedy McEwen

Don said:
The reason for scanning 48-bit is for editing (I explained that further
down in the original message). And since you want to archive images for
possible later editing, using 48-bit would be essential.

Not strictly true. The reason for scanning at more than 8 bits/channel
is so that the image can retain 8-bit accuracy after gamma compensation,
enabling it subsequently to be stored, edited and displayed with only
8 bits and without loss of visible information. As you yourself have
discovered by trial and error, linear coding requires somewhere in
excess of 17 bits to achieve the same dynamic range as an 8-bit gamma
compensated image ready for display.
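(As a sanity check on that figure, assuming a pure 2.2 power law - real
transfer curves differ slightly in the toe:

/* The darkest step of an 8-bit gamma-2.2 image is (1/255)^2.2 of full
   scale; a linear coding must resolve that step directly. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double step = pow(1.0 / 255.0, 2.2);        /* smallest linear step */
    printf("linear bits needed: %.1f\n", -log2(step));   /* ~17.6 */
    return 0;
}

which agrees with the "in excess of 17 bits" above.)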

If we had scanners which digitised directly in gamma compensated space
then there would be no need at all for more than 8 bits per channel -
and some debate over whether all of those were necessary. Well, we used
to have scanners which did exactly that - traditional photo-multiplier
based drum scanners! Not surprisingly, they produced better results at
8 bits per channel than you will ever achieve from a 16-bit linear
encoded and compensated CCD based scanner. Even with top of the range
Nikon, Minolta and Imacon CCD scanners, you will still get better
results from a traditional drum scanner. Since, as you note, you can't
see more than 8 bits/channel and you certainly cannot display or print
any more, it is obvious that the improvement is within the capabilities
of 8-bit systems - 8 bits properly used.
 
Peter D

Don said:
Oh well, if it doesn't matter... ;o)

Now, I didn't say that. I asked "why should it matter to me?"

I work in the computer industry and I'm beginning to better understand why
non-techy people looking for answers to fairly simple questions get
frustrated at the plethora of information presented but the paucity of actual
comprehensible (to them) solutions. :)

There are many ways to skin a cat. All I really want to know is what kind of
knife and how sharp. I'm not looking for a surgical analysis or the chemical
composition of the anaesthetic. :)
Yes, there is definitely a point of diminishing returns. Where that
point is, depends on each person's requirements.

Mine are to scan photos, slides, and negatives and archive them. I doubt
I'll do much editing of any significance. I'm simply trying to stop any
further deterioration in my slide/neg collection by "suspending" them in
time and creating digital copies.
 
Hecate

Have a look at this, and from now on consider yourself religious:
http://www.xs4all.nl/~bvdwolf/main/downloads/SatDish.jpg
It is just the result of a gamma adjustment from linear to average PC
monitor gamma in 8-bit/channel versus 16-b/ch mode with Photoshop. In
case one doesn't (want to) see it, look at the left side of the
satellite dish and to the shadows. The original linear gamma file was
from a 12-b/ch P&S digicam.


They should look at the image above, with open eyes/mind.


It is okay to save the final result in 8-b/ch mode, even as high
quality JPEG, as long as the result is final, no further
post-processing required.

Postprocessing 8-b/ch files will accumulate 8-bit round-off errors
with each operation. One can better change the mode to 16-b/ch for
multiple processing steps, and back to 8-b/ch for saving the result.
That will only be slightly less accurate than starting with a 16-b/ch
file because of the initial quantization error of half a bit.
Hoi Bart, good link. And as we aren't talking about Vuescan I
completely agree with you :)
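The effect behind Bart's SatDish example is easy to reproduce. A small
sketch, assuming a pure 2.2 power law (real camera and scanner curves
differ in the toe):

/* Apply a 2.2 gamma adjustment to 8-bit linear data entirely in 8-bit
   and look at the shadow end: the gaps between output levels are the
   banding visible in the image. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    int map[256], used[256] = {0};
    int i, distinct = 0;

    for (i = 0; i < 256; i++) {
        map[i] = (int)(255.0 * pow(i / 255.0, 1.0 / 2.2) + 0.5);
        used[map[i]] = 1;
    }
    for (i = 0; i < 256; i++)
        distinct += used[i];

    for (i = 0; i < 4; i++)
        printf("linear %d -> gamma %3d\n", i, map[i]);
    printf("distinct output levels: %d of 256\n", distinct);
    return 0;
}

Doing the same adjustment on 16-bit data before reducing to 8-bit leaves
no such gaps, which is Bart's point.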
 
Hecate

Whatever; there are and will probably always be Free programs like
ImageMagick that can do TIFF<->PNG without much effort. If you want to
put losslessly-compressed junk on the Web, PNG is the way to go.

But only if you don't mind users of IE not being able to see the image
if you use more than a GIF's 1-bit transparency. And even then...
 
Kennedy McEwen

Hecate said:
Hoi Bart, good link. And as we aren't talking about Vuescan I
completely agree with you :)
That's the theory but nobody has demonstrated it yet, so I can't see any
reason for you, or anyone else, to agree, Vuescan or otherwise.

Quite simply, you do not need any more than 8 bits per channel, and
probably less than that, if all bits are used to produce evenly
distributed levels in the perceptual range. Nothing Bart (or anyone
else for that matter) has posted disproves this.

Given the freedom to choose any domain of operation, we can produce
images just as posterised as Bart's for 12, 16 or even 24 bits per
channel - in fact any channel bit depth you choose to mention. Bart
knows this, and also knows that the only domain that actually matters is
the perceptual domain, where 8 bits per channel is more than adequate.
In fact, 6 bits per channel is actually good enough, if no colour
management is required. By coincidence, the perceptual domain is close
to the gamma compensated domain for CRTs, so 8bpc is more than is
necessary for perfect image reproduction in that domain.
 
Elwood Dowd

YPrepressAgencyMV.

Indeed! Yours sounds much more progressive than those I have regularly
dealt with over the years. I have gotten more than one document back as
"unreadable" that even our secretary could open.

In any case, I hope what you say is true, long live PNG.

In a world where FORTRAN and sendmail.cf still exist, I can believe
anything related to format stupidity.

I always wondered.. do those who contribute regularly to FORTRAN
magazines submit regular columns instead of rows? (badum-bum)
 
Don

The reason for scanning at more than 8-bits/channel
is so that it can retain 8 bit accuracy after gamma compensation
enabling the image to subsequently be stored, edited and displayed with
only 8-bits and without loss of visible information. As you yourself
have discovered by trial and error, linear coding requires somewhere in
excess of 17-bits to achieve the same dynamic range as an 8-bit gamma
compensated image ready for display.

Well, compensating for linear gamma is still editing in the broader
sense and that was implied.
If we had scanners which digitised directly in gamma compensated space
then there would be no need at all for more than 8-bits per channel -
and some debate over whether all of those were necessary.

Isn't there still a problem of editing the image afterwards?
Editing in 8-bit could still cause banding - at the very least in some
extreme images - so, in that respect, the 16-bit elbow room would
still be indispensable.
Well, we used
to have scanners which did exactly that - traditional photo-multiplier
based drum scanners! Not surprisingly, they produced superior results
on 8-bits per channel than you will ever achieve from a 16-bit linear
encoded and compensated CCD based scanner.

I wonder why consumer scanners don't do that? As you mentioned once
before, it's pretty elementary to implement. Yes, hardcoding gamma
appears less flexible at first blush but since gamma of monitors and
perception coincide around 2.2 it would seem like a natural choice.
Not to mention it can always be made selectable and would certainly
make life easier for most people.

I suppose it must be those Mac users again with their urge to be
different with their 1.8 gamma. Form over substance, indeed! ;o)

<fx: runs and hides from angry hordes of Macites>

Don.
 
Don

Now, I didn't say that. I asked "why should it matter to me?"

I was just kidding! (smiley)
I work in the computer industry and I'm beginning to better understand why
non-techy people looking for answers to fairly simple questions get
frustrated at the plethora of information presented but the paucity of actual
comprehensible (to them) solutions. :)

Well, I work in the computer industry as well and had the same
frustration (still do!) with scanning. When I joined this group over
two years ago I thought, I'd have it figured out in a week...

The difficulty, in my view, is that scanning is like those Russian
dolls. Every time you open one there is another one inside... So each
time I thought I fixed one problem, all that did was lift the veil
so I could see the next problem!

At the most basic level, I think, this is due to the duality of light.
Because of that, physics seems to have a "gotcha" at every level.
That's why it's important (if one really wants to get to the bottom of
it) to grasp some of those concepts.
There are many ways to skin a cat. All I really want to know is what kind of
knife and how sharp. I'm not looking for a surgical analysis or the chemical
composition of the anaesthetic. :)

What we have here is a Cheshire cat! ;o) So, all bets are off.

Indeed, forget the anaesthetic, we need to torture this cat back for
all the frustration it's been subjecting us to! ;o)
Mine are to scan photos, slides, and negatives and archive them. I doubt
I'll do much editing of any significance. I'm simply trying to stop any
further deterioration in my slide/neg collection by "suspending" them in
time and creating digital copies.

Then, to be on the safe side, I would definitely scan "raw". That
means using maximum optical resolution and no editing of any kind at
the scanning stage (disable everything). The only thing I would use is
ICE, if available, because that's hardware based and can't be done
afterwards.

Don.
 
Bart van der Wolf

SNIP
Quite simply, you do not need any more than 8 bits per channel, and
probably less than that, if all bits are used to produce evenly
distributed levels in the perceptual range.

Yes, but that's the whole point. There aren't many scanners that
quantize in evenly distributed levels in perceptual range. Thus we are
stuck with what we do have, linear gamma quantization, followed by
significant post-processing, e.g. gamma adjustment.
Nothing Bart (or anyone else for that matter) has posted disproves
this.

All I intended to demonstrate was that given linear gamma quantization
we *do* need more than 8-b/ch, initially, if we want to reduce the
risk of posterization. Once the major gamma adjustment (preferably
combined with HDR tonemapping) is done, then an 8-b/ch workflow is
acceptable.

Bart
 
Alan Bremner

IME LZW is more efficient than Deflate or RLE if you have color or
grayscale images. Also, practically everything will read LZW TIFF,
while Deflate and RLE may not be supported since they were always less
popular.

Thanks for your advice, Matt. I'd never heard of ZIP compression in
TIFs either, and wanted to make sure I wasn't missing something.
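For completeness: with libtiff, choosing between the two lossless
schemes is just one tag at write time. A minimal sketch - the buffer and
function names here are hypothetical, and Photoshop's "ZIP" is generally
the Deflate scheme discussed above:

/* Write an 8-bit RGB image with the requested lossless compression,
   e.g. COMPRESSION_LZW or COMPRESSION_DEFLATE. */
#include <tiffio.h>

static void write_tiff(const char *path, unsigned char *rgb,
                       unsigned width, unsigned height, int compression)
{
    TIFF *tif = TIFFOpen(path, "w");
    unsigned row;

    if (tif == NULL)
        return;
    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tif, TIFFTAG_COMPRESSION, compression);

    for (row = 0; row < height; row++)
        TIFFWriteScanline(tif, rgb + row * width * 3, row, 0);
    TIFFClose(tif);
}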
 
Kennedy McEwen

Bart van der Wolf said:
SNIP

Yes, but that's the whole point. There aren't many scanners that
quantize in evenly distributed levels in perceptual range. Thus we are
stuck with what we do have, linear gamma quantization, followed by
significant post-processing, e.g. gamma adjustment.
There aren't many (any?) scanners that produce an 8-bit output (whether
linear or gamma compensated) without taking cognisance of the higher
bits that they have quantised by default. So the situation you present
only occurs in the rather artificial case where the signal is kept in
linear state, bit truncated and *then* gamma compensated - which can be
imposed as a workflow, but isn't what the scanner outputs directly.
All I intended to demonstrate was that given linear gamma quantization
we *do* need more than 8-b/ch, initially, if we want to reduce the risk
of posterization. Once the major gamma adjustment (preferably combined
with HDR tonemapping) is done, then an 8-b/ch workflow is acceptable.
Which comes back to the original point that 8-bits per channel workflow
is more than adequate in a gamma compensated workspace.
 
tom

I scan, process, and store all images in high bit mode. I did have to
learn how to dodge and burn in high bit mode, since the PS toolbox does
not work there, but I am glad that I learned. The dodge and burn tools
in PS are so crude that once you learn high bit processing (feathered
selections and history erase) you may not want to go back. My processed
scans are worth more to publishers, since they can apply global gamma
adjustments to suit their publication without deterioration. As far as
many publishers are concerned, a high bit CCD scan is about equal to a
drum scan.

I would agree that you cannot see the difference between low bit and
high bit, however the question should be: if you change the gamma or
levels several more times after it is low bit, then can you see the
banding from low-bit? In my experience, yes.

As you dodge and burn an image, you use up your bits. Let's say you
need to pull some detail out of the shadows: a very small range of
tones will now be used to fill a much broader range of levels. So when
you apply further corrections, you will likely wish you had more levels
to start with.
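To put rough numbers on "using up your bits" - a hypothetical levels
stretch, not Tom's actual workflow:

/* Stretch the 8-bit shadow range 0..31 across the full 0..255 range:
   only 32 distinct codes remain in use, each step ~8 codes wide. */
#include <stdio.h>

int main(void)
{
    int i;
    for (i = 0; i < 8; i++)
        printf("in %2d -> out %3d\n", i, i * 255 / 31);
    return 0;
}

Do the same stretch in 16-bit and the intermediate values stay distinct,
so the final reduction to 8-bit still fills the range smoothly.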

The idea that you can set the levels correctly in the scanner driver
presumes that you have a well exposed and properly lit image. If you
have to work on your image to bring out detail, you will be better off
with more visual info to start with.

Tom Robinson
 
Kennedy McEwen

Don said:
Well, compensating for linear gamma is still editing in the broader
sense and that was implied.
Why? Until relatively recently (10-15yrs) all scanners produced a
digital output directly in gamma compensated space. I would argue that
the "editing" you refer to above is merely a preprocessing requirement
of the technology used in modern scanners, similar to dark current
compensation, necessary to generate the information in the standard,
displayable, form.
Isn't there is still a problem of editing the image afterwards?
Editing in 8-bit could still cause banding - at the very least in some
extreme images - so, in that respect, the 16-bit elbow room would
still be indispensable.
That is the theory - Dan Margulis issued a challenge some years ago for
anyone to demonstrate it. To date, nobody has. Perhaps you would like
to try to be first? Simply create an image, working from a hi-bit depth,
gamma compensated film scan, by processing in 16-bit per channel mode,
which cannot be matched by processing in only 8-bit per channel mode
throughout.

Note that the 8-bit process steps do not have to be the same as those of
the 16-bit process, merely produce the same end point - so steps such as
applying ridiculous compression in 16-bits only to re-expand the data
into a displayable form, would simply be bypassed on the 8-bit process.

Over to you, Batman!
I wonder why consumer scanners don't do that? As you mentioned once
before, it's pretty elementary to implement. Yes, hardcoding gamma
appears less flexible at first blush but since gamma of monitors and
perception coincide around 2.2 it would seem like a natural choice.
Not to mention it can always be made selectable and would certainly
make life easier for most people.

I suppose it must be those Mac users again with their urge to be
different with their 1.8 gamma. Form over substance, indeed! ;o)
Nothing to do with Macs, Don - the difference between 1.8 and 2.2, in
terms of the effect on posterisation, is negligible.

The reason this isn't done in CCD based scanners is primarily because of
something I mentioned above - dark current compensation. The dark
current present on each CCD element is significant and unique to each
element, and would produce the effect of some columns of the scan having
visibly more effective levels than others if the signal were digitised
after an analogue gamma compensation step and the dark current removed
in gamma compensated space. As I explained to you during your own
process development, removing dark current in gamma compensated space is
not a simple subtraction, but some fairly complex arithmetic which, in
this sequence, would require very high precision to prevent visible
errors in the result just as a consequence of analogue noise on the CCD
output. It is much simpler and faster to remove the dark current from
the CCD output in linear space and then convert the resulting signal to
gamma compensated space afterwards.

If there were a simple, cheap and effective way to suppress the dark
current in the CCD itself and flat field the elements so that they
produced a uniform output, then it certainly would be worth applying the
majority of the gamma compensation prior to the digital conversion step.
Since there isn't, the cheapest and simplest approach is just to throw
more bits at the problem: digitise linearly, dark current and flat field
correct in linear space, and convert to gamma compensated space, in that
order. However, as you know yourself, you need more than 17 bits of ADC
range in linear space just to achieve the same range as an 8-bit gamma
compensated encoding.
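A sketch of what that "fairly complex arithmetic" amounts to - function
and variable names are hypothetical, and a pure 2.2 power law is
assumed:

/* Dark current is a linear-domain quantity, so removing it from a
   gamma-encoded sample means decode, subtract, re-encode... */
#include <math.h>
#include <stdio.h>

double remove_dark_gamma_space(double g, double dark)
{
    double lin = pow(g, 2.2);            /* decode to linear */
    double corrected = lin - dark;        /* the actual physics */
    if (corrected < 0.0)
        corrected = 0.0;
    return pow(corrected, 1.0 / 2.2);     /* re-encode */
}

/* ...whereas in linear space it is a single subtraction. */
double remove_dark_linear_space(double lin, double dark)
{
    return (lin > dark) ? lin - dark : 0.0;
}

int main(void)
{
    printf("gamma domain:  %f\n", remove_dark_gamma_space(0.1, 0.002));
    printf("linear domain: %f\n",
           remove_dark_linear_space(pow(0.1, 2.2), 0.002));
    return 0;
}

The two pow() calls per sample, steepest exactly in the noisy shadows,
are where the precision requirement comes from.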

You can imagine the conversation in the product pre-release conference:

Engineers: The new scanner has a 20-bit ADC, the signal is gamma encoded
and output at 8-bits, this uses the full 17.5-bit linear range with some
margin for noise and dark current suppression, so that the scanner
delivers true drum scanner performance in 8 bits per channel.

Marketing: Why don't we give the customers all 20-bits?

Engineers: They don't need them, they can't see them and most of the
range is used anyway to produce the high quality output in perceptual
space.

Marketing: But Nikon, Canon and Minolta sell their products on the
number of bits they output - we won't compete if we only give the
customers 8 of them, when they have twice as many.

Engineers: But we are using our bits smarter, so we just need some
smarter marketing to... err, right, we see your, um, problem. OK, we'll
output an extra 8-bits and fill them with random numbers so they won't
compress losslessly, nobody will know the difference till their disk
fills up. We are scheduled to launch our new disks a month after the
scanner...

Marketing: I love it when a plan comes together. ;-)
 
Hecate

That's the theory but nobody has demonstrated it yet, so I can't see any
reason for you, or anyone else, to agree, Vuescan or otherwise.

Quite simply, you do not need any more than 8 bits per channel, and
probably less than that, if all bits are used to produce evenly
distributed levels in the perceptual range. Nothing Bart (or anyone
else for that matter) has posted disproves this.

Given the freedom to choose any domain of operation, we can produce
images just as posterised as Bart's for 12, 16 or even 24 bits per
channel - in fact any channel bit depth you choose to mention. Bart
knows this, and also knows that the only domain that actually matters is
the perceptual domain, where 8 bits per channel is more than adequate.
In fact, 6 bits per channel is actually good enough, if no colour
management is required. By coincidence, the perceptual domain is close
to the gamma compensated domain for CRTs, so 8bpc is more than is
necessary for perfect image reproduction in that domain.

Guess I'll just have to bow to your superior knowledge...
 
Don

Why? Until relatively recently (10-15yrs) all scanners produced a
digital output directly in gamma compensated space. I would argue that
the "editing" you refer to above is merely a preprocessing requirement
of the technology used in modern scanners, similar to dark current
compensation, necessary to generate the information in the standard,
displayable, form.

Nevertheless, anything done to the image - for whatever reason - is at
its most basic level still image editing (regardless of whether this
editing causes major artifacts or not).

Now, you may conceptually think of software gamma as preprocessing,
but pragmatically any such change to the image can cause image
deterioration, and in the case of software gamma it does.

The only exception would be output in hardware gamma compensated space
using, what was it... photo-multipliers. Another exception may be any
software process which does not create such a drastically lopsided
image (as software gamma does) and would therefore not be as
objectionable.
That is the theory - Dan Margulis issued a challenge some years ago for
anyone to demonstrate it. To date, nobody has. Perhaps you would like
to try to be first? Simply create an image working from a hi-bit depth
original, gamma compensated film scan by processing in 16bit per channel
mode which cannot be matched by processing in only 8-bit per channel
mode throughout.

Note that the 8-bit process steps do not have to be the same as those of
the 16-bit process, merely produce the same end point - so steps such as
applying ridiculous compression in 16-bits only to re-expand the data
into a displayable form, would simply be bypassed on the 8-bit process.

Over to you, Batman!

OK, Robin. ;o) Even with the above catch, the particular slide which
caused me to stop scanning with the LS-50 and go back to the drawing
board is a case in point. I believe I already posted examples months
ago, but basically it's a severely underexposed indoor shot at night
relying only on room lighting. Bringing the shadows up, even in
16-bits, causes banding as well as other undesirable artifacts. In
8-bits it's all considerably more pronounced. The only thing that
helps is the previously described twin-scanning process using two
different exposures. Granted, I did not try the same editing after the
twin exposure (i.e., do shadows in the boosted scan), and you may have
a point that the difference - although existent - may not be as
pronounced as with a single (nominal exposure) scan.

However, one very important aspect which needs stressing is that image
degradation due to image editing is cumulative! (Your test above
focuses only on a single - or, at best, very few - step(s)!)

Given enough time (read: editing steps), it's simply unavoidable that
artifacts will start to creep in regardless of the bit depth used. Now,
this may be to a large extent self-inflicted by to-ing and fro-ing
instead of using only a few targeted edits, but it remains an undeniable
fact. And given that, I'm sure you must agree that 16-bits will provide
more elbow room (postpone the appearance of artifacts) than 8-bits.
That's what it's all about!
Nothing to do with Macs, Don - the difference between 1.8 and 2.2, in
terms of the effect on posterisation, is negligible.

Just kidding Kennedy! (Smiley) Can't pass an opportunity to poke fun
at Macites. ;o)
The reason this
isn't done in CCD based scanners is primarily because of something I
mentioned above - dark current compensation. ....
It is much simpler and faster to remove the dark current from
the CCD output in linear space and then convert the resulting signal to
gamma compensated space afterwards.

How about staying in the analog domain after taking a linear scan,
removing dark current (analog, in hardware) and then applying
(hardware, analog) gamma compensation. Only then throw the image to
the ADC for digitizing. In other words, stay in the continuous analog
domain until the very last step, when the discrete aspect of digital
can't do any damage. Does that make sense?
Engineers: But we are using our bits smarter, so we just need some
smarter marketing to... err, right, we see your, um, problem. OK, we'll
output an extra 8-bits and fill them with random numbers so they won't
compress losslessly, nobody will know the difference till their disk
fills up. We are scheduled to launch our new disks a month after the
scanner...

Marketing: I love it when a plan comes together. ;-)

Oh, don't get me started on marketroids... Probably the only life form
lower than the proverbial lawyers.

Thanks to marketroids, my LS-50 doesn't have single-pass multiscanning
even though the scanner is perfectly capable of it.

Actually, that's what I'm currently wrestling with. Having received
the SDK last year I'm now finally getting down to trying to find a way
to turn multiscanning on.

Don.
 
Kennedy McEwen

Good article, but dated 2001. Have changes been made to address this in
the current film scanners?

Only increasing the resolution, which is why it is less of an issue now
than it was - but it still would be a problem scanning at 2400ppi on a
4000ppi scanner.
Is that why the Minolta 5400 has a Grain
Dissolver?

One of the main reasons.
 
Kennedy McEwen

Don said:
Nevertheless, anything done to the image - for whatever reason - is at
it's most basic level still image editing (regardless of whether this
editing causes major artifacts or not).

Now, you may conceptually think of software gamma as preprocessing,
but pragmatically any such change to the image can cause image
deterioration and in case of software gamma it does.
Does it? In what way is it any different whatsoever from hardware gamma
compensation encoding? Other than starting from limited equivalent
linear bits, which can just as readily be the case with a hardware gamma
compensation system due to the noise floor, it makes no difference at
all.
The only exception would be output in hardware gamma compensated space
using, what was it... photo-multipliers. Another exception may be any
software process which does not create such a drastically lopsided
image (as software gamma does) and therefore not be as objectionable.
Where is this lopsided image that you are referring to? It is lopsided
in a linear coding scheme simply because there are far more levels
dedicated to the highlights than the viewer could ever discern, with
fewer, often insufficient, levels dedicated to the shadows. Re-coding
for perceptual or gamma-compensated space equalises the lopsided
distribution exactly the same as direct hardware encoding would - within
the limits of the available bits. In the case of 16-bit linear data, it
is only the first level or two that would be different from the ideal.
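The lopsidedness is easy to quantify. A small sketch counting how many
8-bit *linear* codes fall into each photographic stop (factor-of-two
band) below full scale:

/* Half of all linear codes describe the single brightest stop, while
   the deep shadows get one or two codes each. */
#include <stdio.h>

int main(void)
{
    int stop, hi = 255;

    for (stop = 0; stop < 8; stop++) {
        int lo = hi / 2;
        printf("stop %d below full scale: codes %3d..%3d (%3d levels)\n",
               stop, lo + 1, hi, hi - lo);
        hi = lo;
    }
    return 0;
}

Gamma or perceptual coding redistributes those codes roughly evenly per
stop, which is why 8 bits suffice in that domain.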
the particular slide which
caused me to stop scanning with the LS-50 and go back to the drawing
board is a case in point. I believe I already posted examples months
ago, but basically it's a severely underexposed indoor shot at night
relying only on room lighting. Bringing the shadows up, even in
16-bits, causes banding as well as other undesirable artifacts. In
8-bits it's all considerably more pronounced.

Again, however, you are applying the process in linear space. The 8-bit
rule only applies in perceptual space - and in the shadows, as you
found, those 8 perceptual bits are worth a lot more than 8 linear bits,
and even a little more than 16 linear bits.
However, one very important aspect which needs stressing is that image
degradation due to image editing is cumulative!

Cumulative: yes.
Additive: no, not by a long way. Even applying the same magnitude of
change at each step, the information loss rapidly reduces to zero after
a few steps.

That is why you get the major levels transitions out of the way, such as
gamma encoding, before reducing the data to the necessary 8-bits. After
that you are well into the laws of diminishing effects in terms of
degradation on the image by subsequent relatively minor edit
transitions. Even in 8-bits per channel there is more than adequate
headroom to accommodate this.
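That convergence is easy to see numerically. A sketch iterating the same
8-bit round trip (gamma up, then back down) and counting surviving
levels - the exact figures are illustrative only:

/* The loss is concentrated in the first pass and rapidly tails off:
   the count of distinct levels is non-increasing and stabilises after
   a few identical passes. */
#include <math.h>
#include <stdio.h>

static int roundtrip(int v)
{
    int up = (int)(255.0 * pow(v / 255.0, 0.8) + 0.5);
    return (int)(255.0 * pow(up / 255.0, 1.0 / 0.8) + 0.5);
}

int main(void)
{
    int levels[256];
    int i, pass;

    for (i = 0; i < 256; i++)
        levels[i] = i;

    for (pass = 1; pass <= 5; pass++) {
        int used[256] = {0};
        int distinct = 0;
        for (i = 0; i < 256; i++) {
            levels[i] = roundtrip(levels[i]);
            used[levels[i]] = 1;
        }
        for (i = 0; i < 256; i++)
            distinct += used[i];
        printf("pass %d: %d distinct levels\n", pass, distinct);
    }
    return 0;
}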
But given enough time (read editing steps), it's simply unavoidable
that artifacts would start to creep in regardless of bit depth.

Intuitively you might think that - in practice you will find enough time
rapidly converges to infinite time.
How about staying in the analog domain after taking a linear scan,
removing dark current (analog, in hardware)

Bzzzt!!
The dark current is different for each element in the CCD, so that
requires, say, 4000 separate analogue circuits for the Nikons and 16000
separate circuits for the Minoltas. Impractical, but such systems have
been built in the past - unfortunately only the military could afford
them and the maintenance costs continually adjusting all of those
circuits.

Alternatively you can store a dark current reference on a second CCD and
subtract the signal from the primary device by reading the two
synchronously - which increases the noise floor of the result by a
minimum of sqrt(2), since the two devices' independent noise sources add
in quadrature. I have built systems like this in the past.

Finally, you can digitise the dark current, convert it back to analogue
and subtract it from the CCD output in the analogue domain, however this
also involves quantisation noise since the dark current compensation
data has to be stored to a limited precision. I have built, and own
patents on, systems utilising this approach - but I wouldn't consider it
now because it could never achieve the performance of 16-bit linear
ADCs, let alone what you want here.
and then applying
(hardware, analog) gamma compensation. Only then, throw the image to
the ADC for digitizing. In other words, stay in the continuous analog
domain until the very last step, when the discrete aspect of digital
can't do any damage. Does that make sense?

No - for the reason above, it became nonsensical at the first premise.
Actually, that's what I'm currently wrestling with. Having received
the SDK last year I'm now finally getting down to trying to find a way
to turn multiscanning on.
What do you actually get in the SDK? API calls for discrete functions
such as stepping the scanner head, reading single lines, exposing
individual RGBI channels or just entry points to the TWAIN interface?

I ask because I am currently writing an upgrade to my cine film scanning
approach, having made some hardware modifications that reduce, but not
eliminate, the frame wander problem I had and would like to be able to
pick up messages directly from the scanner software. For example, if I
can respond to a message that the scan is complete, I can then move my
cine film forward to the next batch of frames, while a processing
complete message would allow my application to initiate the next scan
capture. At the moment I just wait a fixed period of time but, as you
might appreciate, a few seconds saved on each scan amounts to hours when
you are scanning 200ft of film!
 