Coolscan 5000 vs Minolta DiMAGE II


gotnoname

Just doing a comparison between the two and was wondering if you guys
can shed some light on something:

Nikon
======
Light source: R, G, B and Infrared (IR) LEDs
Image sensor: 3,964-pixel, two-line linear CCD image sensor
Color separation: Performed by RGB LEDs

Minolta
=======
Image sensor: 3-line color CCD, 5340 pixels per line, primary-color filter
Scan method: Moving film, fixed sensor, single-pass scan
Light source: White LED

The light source: what's the purpose of the infrared one in the Nikon?
Since the image sensor for the Nikon is 2-line linear, does that mean it
scans 2 lines at a time, effectively making the scan time faster?

The Minolta uses a white LED light source whereas the Nikon uses 3
(RGB) and the IR. How does this affect the image scanned?

TIA
 

UrbanVoyeur

gotnoname said:
Just doing a comparison between the two and was wondering if you guys
can shed some light on something:

Nikon
======
Light source: R, G, B and Infrared (IR) LEDs
Image sensor: 3,964-pixel, two-line linear CCD image sensor
Color separation: Performed by RGB LEDs

Minolta
=======
Image sensor: 3-line color CCD, 5340 pixels per line, primary-color filter
Scan method: Moving film, fixed sensor, single-pass scan
Light source: White LED

The light source: what's the purpose of the infrared one in the Nikon?

ICE and other dust/scratches/noise/grain reduction systems base part of
their work on the difference between the infrared and non-infrared
scans. Dust & scratches show on one and not the other.

With a dedicated IR LED, Nikon can presumably gain-tweak and filter the
main imaging LEDs without worrying about how it affects their IR
performance. Likewise, they can optimize the infrared LED for noise
reduction.

Just because Minolta does not list the IR LED does not mean it does not
have a dedicated IR LED inside (wow, a triple negative). The 5400 does
have ICE.

Since the image sensor for the Nikon is 2-line linear, does that mean it
scans 2 lines at a time, effectively making the scan time faster?

The Minolta uses a white LED light source whereas the Nikon uses 3
(RGB) and the IR. How does this affect the image scanned?

Three LEDs, each tuned for R, G and B, theoretically produce a more
linear, consistent output for scanning. Each can be individually
gain-tweaked and filtered, whereas in the Minolta they have to trust that
the R, G and B components of the white LED light are properly balanced.
There is little they can do beyond filter correction when making the
scanner.

Whether this makes a real world difference I don't know.
 

Kennedy McEwen

Just doing a comparison between the two and was wondering if you guys
can shed some light on something:

Nikon
======
Light source: R, G, B and Infrared (IR) LEDs
Image sensor: 3,964-pixel, two-line linear CCD image sensor
Color separation: Performed by RGB LEDs

Minolta
=======
Image sensor: 3-line color CCD, 5340 pixels per line, primary-color filter
Scan method: Moving film, fixed sensor, single-pass scan
Light source: White LED

The light source: what's the purpose of the infrared one in the Nikon?

The infrared source permits the scanner to detect the presence of dirt
and defects on the film since the emulsion of most colour films is
relatively transparent to infrared. The Minolta also has this
capability with an IR channel too.
Since the image sensor for the Nikon is 2-line linear, does that mean it
scans 2 lines at a time, effectively making the scan time faster?

Yes.

The Minolta uses a white LED light source whereas the Nikon uses 3
(RGB) and the IR. How does this affect the image scanned?

Two effects, both of which are quite subtle.

Firstly, in the Nikon, each row of pixels in all colours is captured
with the CCD in exactly the same position relative to the film - the CCD
is stationary and the colours of each pixel are captured by sequentially
operating the LEDs. Consequently, there is no relative misalignment of
the three colours or the IR channel, right down to sub-pixel level.

In the Minolta approach, the three CCD lines capture individual colours
for three *different* rows of the image. Thus the full colour
information of each row in the image can only be reconstituted by moving
the CCD relative to the film. This means that the colours may not
perfectly align with each other. Although any misalignment is likely to
be well below a pixel size, it could cause colour fringing on sharp
transitions along the long side of the frame (ie. vertical edges on a
landscape).
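
As a rough sketch of what that reconstitution involves (the line gap and
buffer layout below are invented for illustration, not Minolta's actual
firmware): the three colour lines pass over a given film row at different
scan steps, so two of the channels simply have to be delayed by the
appropriate number of lines before a full-colour row can be assembled, and
any error in the mechanical step between those captures is what shows up
as the sub-pixel fringing described above.

#include <stdint.h>

enum { LINE_GAP = 8, WIDTH = 5340 };  /* assumed spacing between CCD lines */

/* Assemble full-colour image row y from three single-colour scan buffers,
   each indexed by scan step.  The green and blue lines are assumed to pass
   over row y LINE_GAP and 2*LINE_GAP steps after the red line does. */
void assemble_rgb_row(int y,
                      const uint16_t r_scan[][WIDTH],
                      const uint16_t g_scan[][WIDTH],
                      const uint16_t b_scan[][WIDTH],
                      uint16_t rgb_out[WIDTH][3])
{
    for (int x = 0; x < WIDTH; x++) {
        rgb_out[x][0] = r_scan[y][x];
        rgb_out[x][1] = g_scan[y + LINE_GAP][x];
        rgb_out[x][2] = b_scan[y + 2 * LINE_GAP][x];
    }
}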

More significant, however, is the effect on ICE and how the scanner
copes with dirt and defects on the film. For example, dirt or dust may
well be significantly off the focal plane. Since the CCD must move
relative to the film to capture all three RGB components of the image,
the image of this dirt will appear in a different position for each
colour - unless the light source remains stationary (i.e. full-frame
illumination, as in a traditional enlarger, rather than a collimated or
diffuse strip which moves across the film with the scan head). Which of
these positions is actually closest to the infrared image, which will be
different again, hardly matters, since the mask used by the ICE
process to conceal the dirt must be large enough to span the image of
the dirt on all three channels.

So the Nikon approach enables a finer mask to be used to conceal any
defects or dirt on the film - fewer pixels around each defect are
"guessed" at. However, don't forget that the pixels are effectively
smaller on the Minolta, so this is not as significant as it would
otherwise appear - it is *very* subtle.
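
To make the idea of the mask concrete (a generic illustration of an
IR-based defect mask, not the actual, proprietary ICE algorithm): the IR
channel is thresholded to find pixels where something blocked the
infrared light, and the mask is then dilated so that it also covers the
defect wherever its image lands in the R, G and B channels. The more the
channels can be misaligned, the wider that dilation has to be and the
more pixels have to be "guessed".

#include <stdlib.h>
#include <string.h>

/* Generic illustration (not the proprietary ICE algorithm): flag pixels
   that are dark in the IR channel as dust or scratches, then grow the
   mask so it spans the defect's image in every colour channel. */
void build_defect_mask(const unsigned char *ir, unsigned char *mask,
                       int width, int height, int threshold, int dilate)
{
    /* 1. Threshold: clean film is nearly transparent to IR, defects are not. */
    for (int i = 0; i < width * height; i++)
        mask[i] = (ir[i] < threshold) ? 1 : 0;

    /* 2. Dilate by 'dilate' pixels to allow for per-channel misalignment
          of the defect's image. */
    unsigned char *seed = malloc((size_t)width * height);
    memcpy(seed, mask, (size_t)width * height);
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            if (!seed[y * width + x])
                continue;
            for (int dy = -dilate; dy <= dilate; dy++)
                for (int dx = -dilate; dx <= dilate; dx++) {
                    int ny = y + dy, nx = x + dx;
                    if (ny >= 0 && ny < height && nx >= 0 && nx < width)
                        mask[ny * width + nx] = 1;
                }
        }
    free(seed);
}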

The second issue is colour purity and saturation. To appreciate this
you must first recognise that scanning the image and subsequently
displaying it, whether on a monitor or print, is an additional process
compared to viewing it directly as a slide or projected image. In the
traditional view of a projected slide, your eyes respond to the spectral
characteristics of the dyes on the film emulsion; in the reproduced
scanned image, your eyes respond to the spectral characteristics of the
phosphors on your monitor or the inks on your printer. All of these
components - your eyes, the film dyes, the monitor phosphors and printer
inks - have relatively wide spectra which often bleed from one primary
colour to another. For example, the red dye on E6 emulsions usually has
a significant density in some areas of the green and blue parts of the
spectrum and so on. Of course the film has been designed to "balance
out" much of this colour impurity by selecting the appropriate densities
for the dyes, so that the final image looks similar to the original
scene. The same is true of the monitor phosphors and the printer inks.

However, when you then introduce the additional process of scanning the
image you need to determine how best to capture each of the individual
colours in the image. In the Nikon approach, they use coloured LEDs
which have a very narrow wavelength range, so that the density of each
of the film dyes is measured at those wavelengths - and only those
specific wavelengths. This permits a very high purity of colour to be
achieved at the scanning stage, since the bleed of one colour into other
areas of the spectrum is eliminated. The resulting data can then be fed
directly to the monitor or printer to stimulate the production of the
appropriate density of phosphor or ink with its specific spectral
spread. In short, this approach enables a near perfect representation
of each colour to be achieved within the limits of the colour management
of your monitor and printer etc. without any additional processing being
necessary. This is also the approach that most high end scanners use,
with the LEDs being replaced by even more spectrally pure lasers for
both scanning and exposing the print paper itself.
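
A rough numerical illustration (the spectra below are invented, not
measured film or sensor data): a narrow-band LED reads the dye's
transmission at essentially one wavelength, whereas a broad filter of the
kind described in the next paragraph averages the dye over a wide band,
diluting the reading with wavelengths that belong to the other channels.

#include <stdio.h>

#define N_BANDS 6  /* coarse 6-band spectrum, roughly 420-670 nm */

int main(void)
{
    /* transmission of a hypothetical dense cyan (red-blocking) dye patch */
    const double film_t[N_BANDS] = {0.85, 0.80, 0.70, 0.40, 0.10, 0.08};
    /* red-channel sensitivity: narrow LED in band 5 vs broad dyed filter */
    const double narrow[N_BANDS] = {0.00, 0.00, 0.00, 0.00, 1.00, 0.00};
    const double broad[N_BANDS]  = {0.02, 0.05, 0.10, 0.40, 1.00, 0.80};

    double r_narrow = 0, r_broad = 0, w_narrow = 0, w_broad = 0;
    for (int i = 0; i < N_BANDS; i++) {
        r_narrow += narrow[i] * film_t[i];  w_narrow += narrow[i];
        r_broad  += broad[i]  * film_t[i];  w_broad  += broad[i];
    }
    /* the broad filter reads the dye as far less dense (more transmissive)
       than it actually is at the red wavelengths */
    printf("narrow-band red reading: %.2f\n", r_narrow / w_narrow);  /* 0.10 */
    printf("broad-filter red reading: %.2f\n", r_broad / w_broad);   /* 0.19 */
    return 0;
}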

By comparison, the Minolta approach (along with most other scanners) is
to use a process which is more akin to a digital camera, with coloured
filters on the CCD itself. These filters, however, are simply dyed etch
resist and have a very wide spectral response - which also bleeds into
adjacent areas of the spectrum. The CCD data produced for each colour
is the average response across the spectrum that the filter transmits -
multiplied by the density of the dye at each wavelength. The result of
all of this is that the raw colours are somewhat muted, being further
spread into each other by the scanning process.

You might have noticed that if you photograph a colour print then the
colours of the photograph are never quite as pure or saturated as the
original - and this is the same process. If you repeated it many times
then the final image of the colour photograph would have no saturation
at all and would be a dull, off-neutral monochrome image. The additional
stage of the scan is exactly the same as one of those reproduction
stages - so the raw scan has somewhat muted colours compared to the
original film.

The scanner firmware compensates for this by increasing the saturation
of the colours - achieved by matrix manipulation of the colour data
according to the known or ideal spectral characteristics of the CCD
filters. The end result may be an image which has a colour saturation,
but not a colour purity, as good as that from the Nikon approach.
However, such mathematical manipulation of the data results in increased
noise. This is because the matrix manipulation involves the subtraction
of two or more colours at different weightings - which always reduces
the signal-to-noise ratio.
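
A sketch of the kind of matrix correction being described (the
coefficients below are invented for illustration, not Minolta's actual
calibration): each corrected colour is a weighted sum of all three raw
channels, with negative off-diagonal terms that subtract the bleed, and
because the raw channels' noise adds in quadrature regardless of the sign
of the weights, the corrected channel always ends up noisier than the raw
one.

#include <math.h>

/* Illustrative 3x3 colour-correction matrix (invented coefficients; each
   row sums to 1 so neutrals are preserved).  The negative off-diagonal
   terms subtract the spectral bleed of the broad CCD filters, restoring
   saturation at the cost of noise. */
static const double M[3][3] = {
    { 1.4, -0.3, -0.1},
    {-0.2,  1.5, -0.3},
    {-0.1, -0.4,  1.5},
};

/* Apply the matrix to one raw RGB sample. */
void correct_pixel(const double raw[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = M[i][0] * raw[0] + M[i][1] * raw[1] + M[i][2] * raw[2];
}

/* Noise in a corrected channel: uncorrelated channel noise adds in
   quadrature, weighted by the squared coefficients.  With a diagonal
   term above 1 the sum of squares exceeds 1, so the corrected channel
   is always noisier than a raw channel of comparable noise. */
double corrected_noise(const double sigma_raw[3], int channel)
{
    double v = 0.0;
    for (int j = 0; j < 3; j++)
        v += M[channel][j] * M[channel][j] * sigma_raw[j] * sigma_raw[j];
    return sqrt(v);
}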

These differences are very subtle and may even be insignificant
depending on the other imbalances in your system. In any case, they are
certainly insignificant when compared to the other major difference
between the scanners - the resolution. The Nikon is a 4000ppi scanner,
whilst the Minolta is a 5400ppi device.
 

rafe bustin

These differences are very subtle and may even be insignificant
depending on the other imbalances in your system. In any case, they are
certainly insignificant when compared to the other major difference
between the scanners - the resolution. The Nikon is a 4000ppi scanner,
whilst the Minolta is a 5400ppi device.


I believe there's one important benefit of the Nikon
approach which you've not mentioned, and that is the
ability to do a true per-channel exposure control.

As far as I know, this cannot be accomplished with
any tri-color CCD array, but it's easy to do simply
by varying the relative duty cycles of the R, G and
B LEDs in the Nikon scheme.

And this benefit ends up helping a good deal when
scanning C41 film, where the red, green and blue
channels generally need very different exposures.
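
A trivial sketch of what that means in practice (the line period and
on-times below are invented numbers, not Nikon's actual timing): with one
LED per colour, per-channel exposure is simply a different on-time for
each LED within the fixed line period - for example a much longer blue
exposure to get through the orange mask of a C41 negative.

#include <stdio.h>

int main(void)
{
    const double line_period_us = 3000.0;                  /* one scan line */
    const double on_time_us[3]  = {400.0, 700.0, 1800.0};  /* R, G, B       */
    const char  *name[3]        = {"red", "green", "blue"};

    for (int c = 0; c < 3; c++)
        printf("%-5s LED: %4.0f us on per %4.0f us line (duty %4.1f%%)\n",
               name[c], on_time_us[c], line_period_us,
               100.0 * on_time_us[c] / line_period_us);
    return 0;
}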


rafe b.
http://www.terrapinphoto.com
 

Wilfred

Kennedy said:
However, when you then introduce the additional process of scanning the
image you need to determine how best to capture each of the individual
colours in the image. In the Nikon approach, they use coloured LEDs
which have a very narrow wavelength range, so that the density of each
of the film dyes is measured at those wavelengths - and only those
specific wavelengths. This permits a very high purity of colour to be
achieved at the scanning stage, since the bleed of one colour into other
areas of the spectrum is eliminated.

By comparison, the Minolta approach (along with most other scanners) is
to use a process which is more akin to a digital camera, with coloured
filters on the CCD itself. These filters, however, are simply dyed etch
resist and have a very wide spectral response - which also bleeds into
adjacent areas of the spectrum.

Are you sure about the type of filters Minolta uses? AFAIK filters that
are selective for a very narrow wavelength range *do* exist. They are
expensive but for use in a scanner, only a small filter area would be
needed so I guess it could be done.
 

ramonablue

rafe said:
I believe there's one important benefit of the Nikon
approach which you've not mentioned, and that is the
ability to do a true per-channel exposure control.

As far as I know, this cannot be accomplished with
any tri-color CCD array, but it's easy to do simply
by varying the relative duty cycles of the R, G and
B LEDs in the Nikon scheme.

And this benefit ends up helping a good deal when
scanning C41 film, where the red, green and blue
channels generally need very different exposures.

The Minolta DSE 5400 has individual exposure control for each of the RGB
channels.
 

rafe bustin

The Minolta DSE 5400 has individual exposure control for each of the RGB
channels.


Are you quite sure this is a true exposure
control, and not a post-processing control?

To do this with a white light source requires
individual integration times per scan line
and per color channel, to wit:

for each scan line
{
    integrate Nr microseconds for red    // ignoring green and blue outputs
    integrate Ng microseconds for green  // ignoring red and blue outputs
    integrate Nb microseconds for blue   // ignoring red and green outputs
}

Alternatively this could be done with a
so-called "electronic shutter" control
on the CCD chip but I know of only one
high-end CCD chip that supports this.


rafe b.
http://www.terrapinphoto.com
 

Kennedy McEwen

rafe bustin said:
I believe there's one important benefit of the Nikon
approach which you've not mentioned, and that is the
ability to do a true per-channel exposure control.

As far as I know, this cannot be accomplished with
any tri-color CCD array


I haven't ever encountered a single tri-linear CCD which does not have
independent exposure control on each line.

This has come up several times on this forum in the past, but nobody has
yet identified a single device which has this limitation, whilst I have
provided several links to a range of tri-linear devices which certainly
do offer independent exposure control. So I admit I am at a bit of a
loss as to where you are getting your misinformation from.
 

Kennedy McEwen

Wilfred said:
Are you sure about the type of filters Minolta uses?
Yes.

AFAIK filters that are selective for a very narrow wavelength range
*do* exist.

Yes, multilayer dichroic filters can produce very narrow transmission
bands - with a commensurate loss of total intensity from a wide spectral
bandwidth source.
They are expensive but for use in a scanner, only a small filter area
would be needed so I guess it could be done.
The filters are part of the CCD structure itself, they are not separate
components. Given the thickness of a suitably narrow band filter, they
would be impractical to mount on the CCD. Each line on the CCD is
typically only a hundred microns or less apart from its neighbour. A
dichroic filter would be dominated by edge effects, reflections from the
imperfect edges, if cut to that size.
 

rafe bustin

I haven't ever encountered a single tri-linear CCD which does not have
independent exposure control on each line.

This has come up several times on this forum in the past, but nobody has
yet identified a single device which has this limitation, whilst I have
provided several links to a range of tri-linear devices which certainly
do offer independent exposure control. So I admit I am at a bit of a
loss as to where you are getting your misinformation from.


OK, try this one: NEC uPD 8871

This is a ubiquitous CCD with which I'm quite familiar;
it's used in dozens of consumer products and there
are probably at least 10 million copies in use or
in landfills.

Yes, there is one transfer gate per color, but I
cannot visualize how the three gates might be
operated independently. In the NEC data sheet
"Application Circuit Example," the three gates
are tied together, and every scanner circuit
I've ever seen using this chip has the three
gates tied together.

Admittedly -- the scanner circuits I've worked
with are high volume, low end consumer grade
products, not film scanners.

You can get the PDF data sheet by googling
on "uPD8871 ccd".


rafe b.
http://www.terrapinphoto.com
 

ramonablue

rafe said:
OK, try this one: NEC uPD 8871

This is a ubiquitous CCD with which I'm quite familiar;
it's used in dozens of consumer products and there
are probably at least 10 million copies in use or
in landfills.

Yes, there is one transfer gate per color, but I
cannot visualize how the three gates might be
operated independently. In the NEC data sheet
"Application Circuit Example," the three gates
are tied together, and every scanner circuit
I've ever seen using this chip has the three
gates tied together.

Admittedly -- the scanner circuits I've worked
with are high volume, low end consumer grade
products, not film scanners.

You can get the PDF data sheet by googling
on "uPD8871 ccd".

This may be true for this particular CCD, but it does not lead to your
conclusion about "any tri-color CCD array" as stated below:

"I believe there's one important benefit of the Nikon
approach which you've not mentioned, and that is the
ability to do a true per-channel exposure control.

As far as I know, this cannot be accomplished with
any tri-color CCD array [snip]"

Judging from your past posts here, you seem to have a habit of making
such erroneous and unsubstantiated statements.
 

Kennedy McEwen

rafe bustin said:
OK, try this one: NEC uPD 8871

This is a ubiquitous CCD with which I'm quite familiar;
it's used in dozens of consumer products and there
are probably at least 10 million copies in use or
in landfills.

Yes, there is one transfer gate per color, but I
cannot visualize how the three gates might be
operated independently.

Why do you think NEC bothered to go to the expense of bringing a
separate transfer gate for each colour out of the 10 million packages if
they can't be driven independently? They have tied both clock pins
together internally as well as commoning them for all three channels, so
why bother leaving the TG lines separate?
In the NEC data sheet
"Application Circuit Example," the three gates
are tied together, and every scanner circuit
I've ever seen using this chip has the three
gates tied together.
Well, perhaps that is the difference between designing with CCDs and
just copying the example circuit on the data sheet. ;-)

Think about how TG interacts with P1/P2 in a CCD. Put another way, what
happens if all the gates are open at the same time? That is how
exposure can be controlled independently. I'll grant you that it isn't
as ideal as a separate substrate dump gate, but it works.
 

Steven

Think about how TG interacts with P1/P2 in a CCD. Put another way, what
happens if all the gates are open at the same time? That is how
exposure can be controlled independently. I'll grant you that it isn't
as ideal as a separate substrate dump gate, but it works.

Hi Kennedy,

Just to clarify; is exposure dependent on the TG pulse width ? If so,
the allowed range is 5000 ns to 50000 ns. Does this mean the exposure
range is 10:1 ? This is smaller than I expected.

-- Steven
 

rafe bustin

Hi Kennedy,

Just to clarify; is exposure dependent on the TG pulse width ? If so,
the allowed range is 5000 ns to 50000 ns. Does this mean the exposure
range is 10:1 ? This is smaller than I expected.


No, the exposure time is the TG period, minus
the TG pulse width. When TG is released, the
data being clocked out is the data from the
*last* integration period.

Normally there's one short TG pulse per
scanline, and both the period and pulse
width of TG are constant.

To Kennedy: You win. I believe the trick
would be to double-pulse TG and thus
throw away charge in the channel(s) where
shorter exposure is wanted. Rather
inefficient -- and I've never seen it
done that way -- but yes, doable.

(TG pulse width limitations preclude using
the TG duty cycle to keep any one channel
in discharge state.)

In my earlier post I should have been
clearer. I am aware of the Kodak CCD
chips with "electronic shutter" controls,
but never worked with one.

In the products I've worked on, the TGs
have always been tied together. My
incredulity was based on prior experience
and a lack of imagination...


rafe b.
http://www.terrapinphoto.com
 

Kennedy McEwen

Steven said:
Hi Kennedy,

Just to clarify; is exposure dependent on the TG pulse width ? If so,
the allowed range is 5000 ns to 50000 ns. Does this mean the exposure
range is 10:1 ? This is smaller than I expected.
Sort of, but it isn't as direct as that. 5 to 50 µs would be a very short
exposure time. I haven't used this particular IC but its architecture
is identical to some Sony devices I have used (and, as anyone who has
ever used Sony chips will attest, their documentation is even poorer
than NEC's!).

The photocharge that has accumulated in the image CCD cells during the
exposure is transferred to the transport cells when the TG line goes
high. Setting TG high effectively connects the image cells to the
corresponding transport cells with a potential bias so that all of the
charge flows out of the image cells. By suitable manipulation of the
transport clocks and the TG line you can dump all of the photocharge
accumulated prior to the exposure directly to the output drain until you
decide to start the exposure by dropping the TG line (shutting the
transfer gate off) and thus restricting the photocharge to accumulate in
the imaging cells, ready to be transferred to the transport cells when
TG is next pulsed and then subsequently read out by the transport
clocks.

As I mentioned, control of individual exposure certainly is possible
even with this device, albeit less than ideal because it is not
optimised for the type of application such as film scanning.

Film scanner CCDs usually have an additional gate which controls
exposure independently of the transport similar to that described in:
http://www.kodak.com/global/plugins/acrobat/en/digital/ccd/papersArticles/highRes14400Trilinear.pdf
(watch for possible line wrap on that URL!)
You can see from this paper how TG operates in all linear devices. In
this case the LOG lines control the exposure directly and independently
of the TG lines which control transfer of the photocharge from imaging
to transport cells. This permits much more range, linearity and
individual control of channel exposure and is typical of how a film
scanner CCD works.
 

rafe bustin

Sort of, but it isn't as direct as that. 5 to 50 µs would be a very short
exposure time. I haven't used this particular IC but its architecture
is identical to some Sony devices I have used (and, as anyone who has
ever used Sony chips will attest, their documentation is even poorer
than NEC's!).


Kennedy, you may know your Sony chips but
I know this NEC CCD pretty well.

In a flatbed scanner a typical integration
period is around 3ms. I've seen it as low
as 1ms where high throughput (for monochrome
copies) was the driving factor.

On the NEC chips at least, the TG pulse width
is just some nominal value, and as far as I
know doesn't have much bearing on the output
signals. It needs to be "wide enough" to
effect the transfer, but there's zero benefit
to making it any wider than that.

Again, my experience has been with high-
volume, low-end consumer devices, more
specifically MFPs.

You are correct in that there's a great
deal left unsaid in the data sheets,
particularly with regard to the many
possible timings of the pixel-level
clocks.


rafe b.
http://www.terrapinphoto.com
 

Kennedy McEwen

rafe bustin said:
Kennedy, you may know your Sony chips but
I know this NEC CCD pretty well.
They are, to all intents and purposes, identical, both being licensed
developments from the same original Fairchild design from the late
1970s.
In a flatbed scanner a typical integration
period is around 3ms. I've seen it as low
as 1ms where high throughput (for monochrome
copies) was the driving factor.
Doesn't that just confirm what I said? If the TG were to control
exposure directly, as Steven asked, the exposure would be in the range
he quoted from the datasheet, which is around three orders of magnitude
lower than a typical exposure. However TG can be used to control the
exposure indirectly.
On the NEC chips at least, the TG pulse width
is just some nominal value, and as far as I
know doesn't have much bearing on the output
signals.

Again, doesn't that just confirm what I said? The TG *CAN* be used to
control exposure, but not simply by its own level; it needs to be
pulsed in combination with other signals to do so and "clear out" any
unwanted photocharge.
It needs to be "wide enough" to
effect the transfer, but there's zero benefit
to making it any wider than that.
Actually there is, but it isn't documented and you need to understand
the problems that occur with real CCDs (traps etc.) to appreciate the
advantage of making it as wide as practical, achieving the maximum
transfer efficiency and minimum lag.
You are correct in that there's a great
deal left unsaid in the data sheets,
particularly with regard to the many
possible timings of the pixel-level
clocks.
Typical Japanese component.
 

Steven

No, the exposure time is the TG period, minus
the TG pulse width. When TG is released, the
data being clocked out is the data from the
*last* integration period.

Normally there's one short TG pulse per
scanline, and both the period and pulse
width of TG are constant.

Hi Rafe,

Thanks for the explanation and also the datasheet reference. I can't
see any way to independently vary each colour's exposure (although
apparently it can be done).
To Kennedy: You win. I believe the trick
would be to double-pulse TG and thus
throw away charge in the channel(s) where
shorter exposure is wanted. Rather
inefficient -- and I've never seen it
done that way -- but yes, doable.

Wouldn't the second TG pulse transfer the charges to be discarded into
the shift register ? This would corrupt the readings from the previous
line which would be in the process of being shifted out to the ADC.

-- Steven
 

Steven

Film scanner CCDs usually have an additional gate which controls
exposure independently of the transport similar to that described in:
http://www.kodak.com/global/plugins/acrobat/en/digital/ccd/papersArticles/highRes14400Trilinear.pdf
(watch for possible line wrap on that URL!)
You can see from this paper how TG operates in all linear devices. In
this case the LOG lines control the exposure directly and independently
of the TG lines which control transfer of the photocharge from imaging
to transport cells. This permits much more range, linearity and
individual control of channel exposure and is typical of how a film
scanner CCD works.

The LOG lines certainly are the feature missing from the NEC sensor that
Rafe referenced. Please ignore this question if it has already been
answered but is there any way to discharge the imaging cells apart from
pulsing TG which would have the unfortunate side-effect of loading the
transport cells ?

-- Steven
 

Kennedy McEwen

Steven said:
Wouldn't the second TG pulse transfer the charges to be discarded into
the shift register ? This would corrupt the readings from the previous
line which would be in the process of being shifted out to the ADC.
It would only corrupt the previous line if it hasn't been read out yet.
After that the transport cells are empty and you can pulse the TG lines
independently to transfer any photocharge that has accumulated
pre-exposure into the transport cells and then dump it all from there
once the shortest exposure has been initiated. As I said, it is less
than ideal, but it works.
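
As a toy model of that sequence (the numbers and structure are invented
purely for illustration, not driver code for this or any real CCD): once
the previous line has been clocked out, an early TG pulse moves the
unwanted pre-exposure charge into the now-empty transport register, where
it is clocked straight to the drain and discarded; only the charge
accumulated after that dump survives to the normal end-of-line transfer.

#include <stdio.h>

int main(void)
{
    double image_cell     = 0.0;  /* photocharge in one imaging cell        */
    double transport_cell = 0.0;  /* charge in the matching transport cell  */
    const double flux = 1.0;      /* arbitrary charge units per microsecond */
    const double line_period_us     = 3000.0;
    const double wanted_exposure_us = 1000.0;  /* shorter than the line     */

    /* Unwanted charge accumulates while the previous line is read out. */
    image_cell += flux * (line_period_us - wanted_exposure_us);

    /* Early TG pulse: move that charge into the (now empty) transport cell. */
    transport_cell += image_cell;  image_cell = 0.0;

    /* Clock the transport register through to the drain: charge discarded. */
    transport_cell = 0.0;

    /* The real exposure runs for the remainder of the line period. */
    image_cell += flux * wanted_exposure_us;

    /* Normal end-of-line TG pulse: only the wanted exposure is read out. */
    transport_cell += image_cell;  image_cell = 0.0;

    printf("charge read out: %.0f units (of %.0f available over the line)\n",
           transport_cell, flux * line_period_us);
    return 0;
}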
 
