Nikon Coolscan V vs 5000

JuneSaprono

In Nikon's brochure, the Coolscan V and 5000 are both spec'ed at
4000 dpi. The V's image sensor is described as a "3,964-pixel linear
CCD", while the 5000's is described as a "3,964-pixel two-line linear
CCD". Is there a physical difference in sensor size, i.e. is the
5000's twice as big? Does the difference have anything to do with
resolution and scan quality? Is that one of the reasons why the 5000
costs twice as much as the V?
Thanks.
 
Wilfred

JuneSaprono said:
In Nikon's brochure, the Coolscan V and 5000 are both spec'ed at
4000 dpi. The V's image sensor is described as a "3,964-pixel linear
CCD", while the 5000's is described as a "3,964-pixel two-line linear
CCD". Is there a physical difference in sensor size, i.e. is the
5000's twice as big? Does the difference have anything to do with
resolution and scan quality? Is that one of the reasons why the 5000
costs twice as much as the V?

To my understanding, the purpose of this double sensor is to scan twice
as fast. The main point of the 5000's being 'professional' is that it is
better suited to bulk scanning. The only other additional features seem
to be the 16-bit A/D converter (vs. 14-bit) and multi-sampling.
 
Ian

JuneSaprono said:
In Nikon's brochure, the Coolscan V and 5000 are both spec'ed at
4000 dpi. The V's image sensor is described as a "3,964-pixel linear
CCD", while the 5000's is described as a "3,964-pixel two-line linear
CCD". Is there a physical difference in sensor size, i.e. is the
5000's twice as big? Does the difference have anything to do with
resolution and scan quality? Is that one of the reasons why the 5000
costs twice as much as the V?
Thanks.

The 5000 is twice as fast - that's why there is a "two-line" CCD. The A/D
converters are 16-bit instead of 14-bit, and the 5000 also allows
multi-sample scanning (repeated samples before moving the CCD, reducing
noise). That last one is simply a marketing trick to help "justify" the
extra cost; there's no reason the V could not do the same.
Resolution of the two scanners is identical.
Oh yes, the 5000 can also take a complete film roll with an optional
adapter.

Regards
Ian
 
Kennedy McEwen

JuneSaprono said:
In Nikon's brochure, the Coolscan V and 5000 are both spec'ed at
4000 dpi. The V's image sensor is described as a "3,964-pixel linear
CCD", while the 5000's is described as a "3,964-pixel two-line linear
CCD".

Yes - it scans the image two lines at a time, meaning it can scan the
image twice as fast.

Is there a physical difference in sensor size, i.e. is the 5000's twice
as big?

The sensors are the same length in both scanners and they have the same
dimensions.

Does the difference have anything to do with resolution and scan quality?

No difference in resolution or scan quality, just time.

Is that one of the reasons why the 5000 costs twice as much as the V?

Yes, as well as having a 16-bit ADC and all of the other top-line
features that Nikon don't put on their second-tier models, like the
capability to take bulk film adapters and single-pass multiscanning.
 
JuneSaprono

Kennedy said:
Yes - it scans the image two lines at a time, meaning it can scan the
image twice as fast.


The sensors are the same length in both scanners and they have the same
dimensions.


No difference in resolution or scan quality, just time.


Yes, as well as having a 16-bit ADC and all of the other top-line
features that Nikon don't put on their second-tier models, like the
capability to take bulk film adapters and single-pass multiscanning.

I can see why some would pay for doubling the scanning speed, but it is
unimportant to me. Single-pass multiscanning may come in handy from time
to time, but I am not sure it is that important for me either. Besides,
it is available on the Coolscan 4000. That leaves the 16-bit ADC vs the
14-bit ADC, and I wonder how much scan quality difference it would make?

Based on these feature differences, the 5000 seems to take a step back
from the 4000. Why would Nikon decide to come out with the 5000?
 
Ole-Hjalmar Kristensen

<snip>

J> I can see why some would pay for doubling the scanning speed, but it is
J> unimportant to me. Single-pass multiscanning may come in handy from time
J> to time, but I am not sure it is that important for me either. Besides,
J> it is available on the Coolscan 4000. That leaves the 16-bit ADC vs the
J> 14-bit ADC, and I wonder how much scan quality difference it would make?

None at all if that is the only difference. There is no way you are
going to get more than 14 bits of real data out of a CCD without
active cooling, and even then it's dubious. But they *may* have put
better analog circuitry before the ADC, which would translate to
slightly lower noise. I doubt it, but the proof of the pudding lies in
the eating...

J> Based on these feature differences, the 5000 seems to take a step back
J> from the 4000. Why would Nikon decide to come out with the 5000?
 
Kennedy McEwen

JuneSaprono said:
I can see why some would pay for doubling the scanning speed, but it is
unimportant to me. Single-pass multiscanning may come in handy from time
to time, but I am not sure it is that important for me either. Besides,
it is available on the Coolscan 4000. That leaves the 16-bit ADC vs the
14-bit ADC, and I wonder how much scan quality difference it would make?

Based on these feature differences, the 5000 seems to take a step back
from the 4000.

I don't follow your logic there. The LS-4000 is a 14-bit ADC scanner
with a single-line CCD; the LS-5000 is a 16-bit ADC scanner with a
double-line CCD. Hence the LS-5000 scans twice as fast as the LS-4000
and has a 4x lower noise floor. Why do you think this is a step
backwards?
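
(The 4x comes straight from the two extra ADC bits; a quick check in
Python, as a sketch:)

    # Two extra ADC bits shrink the quantisation step by 2^2 = 4x,
    # which is the "4x lower noise floor" referred to above.
    step_14 = 1 / 2**14
    step_16 = 1 / 2**16
    print(step_14 / step_16)   # 4.0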
Why would Nikon decide to come out with the 5000?

Progress.
 
Steven

Kennedy said:
I don't follow your logic there. The LS-4000 is a 14-bit ADC scanner
with a single-line CCD; the LS-5000 is a 16-bit ADC scanner with a
double-line CCD. Hence the LS-5000 scans twice as fast as the LS-4000
and has a 4x lower noise floor.

Kennedy,

Is the lower noise floor really significant? Is any CCD accurate to
anywhere near 14, let alone 16, bits? I have a Canon scanner and I
appreciate that Nikon scanners may be much better, but this sounds like
drum-scanner accuracy to me.

-- Steven
 
Kennedy McEwen

Steven said:
Kennedy,

Is the lower noise floor really significant?

It isn't a case of whether the scanner can achieve 14 or 16 real bits.
The 14-bit scanner might only have an rms noise floor that is, say, 66dB
below the saturation level (11 perfect bits), while the 16-bit
scanner's would be around 75dB (12.5 perfect bits).
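
(In Python, the dB-to-bits conversion used there, as a sketch at
roughly 6dB per bit:)

    # Perfect bits implied by an rms noise floor quoted in dB
    # below saturation, at ~6 dB per bit as in the post above.
    for snr_db in (66, 75):
        print(f"{snr_db} dB -> {snr_db / 6:.1f} perfect bits")   # 11.0, 12.5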
Is any CCD accurate to
anywhere near 14, let alone 16, bits?

Well, I have a few linear CCD specifications here that tend to indicate
that a readout noise of around 15 electrons at room temperature is
perfectly achievable, while a storage capacity of more than 700,000
electrons is possible. That would give a dynamic range equivalent to
15.5 bits - so it would be sensible to use a couple of bits more to
ensure that the full range can be maintained digitally. However, I do
not know the device that Nikon (or anyone else) uses, which may be, and
probably is, a bargain-basement CCD with very limited real performance.
In any case, it demonstrates that 14 bits is certainly not adequate for
many CCD outputs.
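
(As a rough sanity check of the 15.5-bit figure - the electron counts
below are the illustrative numbers above, not the specs of any
particular Nikon CCD:)

    # Dynamic range implied by read noise and full-well capacity.
    import math

    read_noise_e = 15.0        # rms readout noise, electrons
    full_well_e = 700_000.0    # storage capacity, electrons

    ratio = full_well_e / read_noise_e
    print(f"{ratio:.0f}:1 -> {math.log2(ratio):.1f} bits")   # ~46667:1 -> 15.5 bits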
I have a Canon scanner and I
appreciate that Nikon scanners may be much better, but this sounds like
drum-scanner accuracy to me.

Much of the quality of the older drum scanners came from the fact that
the film was wet mounted with an intermediate-refractive-index fluid,
and that they used photomultiplier tubes for sensing. This latter aspect
is very significant, since it permits the signal to be gamma compensated
*before* it is digitised, which is not possible with CCDs because of the
need for digital non-uniformity calibration before any processing can be
undertaken. Even 8-bit quantisation of a pre-compensated signal will
give far superior shadow fidelity to a 14- or 16-bit linearly encoded
signal.
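
(To see the effect, compare how many output codes each scheme spends on
the deepest shadows; a sketch assuming a simple power-law gamma of 2.2,
which is not necessarily the curve any given scanner applies:)

    # Output codes available below a deep-shadow scene level, for
    # 8-bit gamma-encoded vs 14-bit linear data (power-law gamma).
    shadow = 1e-4    # scene level as a fraction of full scale (0.01%)
    gamma = 2.2

    codes_linear_14 = shadow * (2**14 - 1)          # ~1.6 codes
    codes_gamma_8 = (shadow ** (1 / gamma)) * 255   # ~3.9 codes

    print(f"14-bit linear: {codes_linear_14:.1f} codes in the shadows")
    print(f"8-bit gamma:   {codes_gamma_8:.1f} codes in the shadows")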
 
Steven

On Sat, 20 Nov 2004 10:07:30 +0000, Kennedy McEwen wrote:

[snip]
However I do not know the
device that Nikon (or anyone else) uses, which may be and probably is a
bargain basement CCD with very limited real performance. In any case,
it demonstrates that 14-bits is certainly not adequate for many CCD
outputs.

I am interested in this because I cannot get better than 10-bit accuracy
from my FS4000. If I multi-sample (100x) a black scan line, the average
value for any point is about 60, but the average deviation of each of the
readings for a single point from the average for that point is about 40.
If I multi-sample a white line, the average deviation is about 250.
These readings are all 16-bit linear.
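
(For anyone wanting to repeat this kind of measurement, the arithmetic
in Python; the random array is only a stand-in for 100 repeated 16-bit
readings of one scan line:)

    # Per-pixel mean and average absolute deviation over repeated
    # samples of a single scan line.
    import numpy as np

    # Stand-in for real data: 100 readings of a 4000-pixel black line.
    samples = np.random.normal(60.0, 50.0, size=(100, 4000))

    mean_per_pixel = samples.mean(axis=0)
    avg_dev = np.abs(samples - mean_per_pixel).mean(axis=0)

    print(f"average value     ~ {mean_per_pixel.mean():.0f}")
    print(f"average deviation ~ {avg_dev.mean():.0f}")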

Do you achieve much better performance with your Nikon (I presume you
have a 5000)?

-- Steven
 
Kennedy McEwen

Steven said:
On Sat, 20 Nov 2004 10:07:30 +0000, Kennedy McEwen wrote:

[snip]
However, I do not know the
device that Nikon (or anyone else) uses, which may be, and probably is,
a bargain-basement CCD with very limited real performance. In any case,
it demonstrates that 14 bits is certainly not adequate for many CCD
outputs.

I am interested in this because I cannot get better than 10-bit accuracy
from my FS4000. If I multi-sample (100x) a black scan line, the average
value for any point is about 60, but the average deviation of each of the
readings for a single point from the average for that point is about 40.
If I multi-sample a white line, the average deviation is about 250.
These readings are all 16-bit linear.

Oh, it's even worse than a perfect 10-bit device, because you have all of
the quantisation noise in those measurements. Evenly distributed
quantisation noise accounts for about 10.8dB, so the formula for the
peak signal-to-noise ratio of an ADC is 6n + 10.8dB. You have a noise
of around 40 (assuming that your deviation is the noise) and a peak
signal of 65536 possible states, resulting in an SNR of 64.3dB, or
roughly equivalent to a little less than 9 bits.
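
(The same arithmetic in Python, as a sketch:)

    # Effective bits from measured noise, using the ADC formula
    # SNR_peak = 6n + 10.8dB quoted above.
    import math

    noise = 40.0      # measured rms noise, in 16-bit counts
    peak = 65536.0    # peak signal, in 16-bit counts

    snr_db = 20 * math.log10(peak / noise)   # ~64.3 dB
    bits = (snr_db - 10.8) / 6               # ~8.9 effective bits
    print(f"SNR {snr_db:.1f} dB -> {bits:.1f} effective bits")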
Do you achieve much better performance with your Nikon

I did some assessments of noise and dynamic range a couple of years ago
when I bought my latest Nikon scanner and, whilst I don't have the
source data any more and it would take ages to regenerate it, I do have
the summary spreadsheet. This shows the noise (standard deviation) and
mean signal levels for each of the RGB channels in a 70x70 image segment
(to show the effect of calibration limits) and a 100x1 image segment
with 1, 2, 4, 8 and 16x multisamples.

On the 100 samples from the same pixel, the results were as follows,
scaled to the 0-65535 (16-bit) range:

1x Multiscan     Red        Green      Blue
Black Average    43.68      59.36      72
Black Noise      34.4       38.4       34.4
White Average    3474.9     3768.32    3928.48
White Noise      353.6      261.76     196.32

The noise does seem to be a little better than you have from the Canon,
but not significantly so. This would indicate that the peak signal to
noise, including the CCD readout, analogue preamps, ADC and
quantisation, is about 2000:1, or about 66dB - just over 9 bits.

The same data, measured for higher multiscan rates, showed the expected
sqrt(n) improvement, although it started to fall off at 16x;
nevertheless, that only brings the effective number of bits up to 11 on
the Nikon scanner. Even then, this is all in linear space, so it is
still a lot worse than an old drum scanner digitising gamma-compensated
output to only 8-bit precision.
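
(Again as a sketch, the multiscan arithmetic; this is the ideal figure,
before the fall-off at 16x mentioned above:)

    # Averaging n samples cuts rms noise by sqrt(n), i.e. adds
    # 10*log10(n) dB of SNR on top of the 1x figure above.
    import math

    base_snr_db = 66.0    # 1x multiscan, from the table above
    n = 16

    snr_db = base_snr_db + 10 * math.log10(n)   # +12 dB
    bits = (snr_db - 10.8) / 6                  # ~11.2 effective bits
    print(f"{snr_db:.0f} dB -> {bits:.1f} effective bits")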
(I presume you have a 5000)?

These measurements were made on an LS-4000, which is a 14-bit device;
however, as you can see, it falls far short of 14 perfect bits. To be
fair, I would have been surprised if it had shown itself to be much
better than 11 or 12 real bits, but I was still surprised it was as low
as it was.

Nevertheless, I suspect that much more of that loss is actually in the
analogue front end and in the ADC itself rather than in the CCD. So
going up to 16 bits can offer a real advantage, even though the CCD
itself might not have an SNR which justifies it.
 
Steven

On Sun, 21 Nov 2004 17:40:11 +0000, Kennedy McEwen wrote:

[snip]
Nevertheless, I suspect that much more of that loss is actually in the
analogue front end and in the ADC itself rather than in the CCD. So
going up to 16 bits can offer a real advantage, even though the CCD
itself might not have an SNR which justifies it.

Yes, I can certainly see your point here.

Thanks for your detailed reply. The info and effort are appreciated.

-- Steven
 
WD

Kennedy McEwen said:
It isn't a case of whether the scanner can achieve 14 or 16 real bits.
The 14-bit scanner might only have an rms noise floor that is, say, 66dB
below the saturation level (11 perfect bits), while the 16-bit
scanner's would be around 75dB (12.5 perfect bits).


Well, I have a few linear CCD specifications here that tend to indicate
that a readout noise of around 15 electrons at room temperature is
perfectly achievable, while a storage capacity of more than 700,000
electrons is possible. That would give a dynamic range equivalent to
15.5 bits - so it would be sensible to use a couple of bits more to
ensure that the full range can be maintained digitally. However, I do
not know the device that Nikon (or anyone else) uses, which may be, and
probably is, a bargain-basement CCD with very limited real performance.
In any case, it demonstrates that 14 bits is certainly not adequate for
many CCD outputs.


Much of the quality of the older drum scanners came from the fact that
the film was wet mounted with an intermediate-refractive-index fluid,
and that they used photomultiplier tubes for sensing. This latter aspect
is very significant, since it permits the signal to be gamma compensated
*before* it is digitised, which is not possible with CCDs because of the
need for digital non-uniformity calibration before any processing can be
undertaken. Even 8-bit quantisation of a pre-compensated signal will
give far superior shadow fidelity to a 14- or 16-bit linearly encoded
signal.

Kennedy,

Couldn't an architecture be devised whereby, with a CCD, gamma
compensation could be done prior to digitisation?

For example:

                     =======    ======================    =======
CCD Analog output--->| PGA |--->| Analog Gamma Comp. |--->| A/D |
                     =======    ======================    =======

The CCD output is essentially analog (although of course discretely
sampled in space); the PGA would be a programmable gain amplifier, which
could be used for the 'digital non-uniformity calibration'. This is
followed by an analog circuit which does gamma compensation prior to the
A/D converter.

Would this accomplish what you are referring to with the PMT analog
gamma comp in drum scanners?

W
 
Kennedy McEwen

WD said:
Kennedy,

Couldn't an architecture be devised whereby, with a CCD, gamma
compensation could be done prior to digitisation?

For example:

                     =======    ======================    =======
CCD Analog output--->| PGA |--->| Analog Gamma Comp. |--->| A/D |
                     =======    ======================    =======

The CCD output is essentially analog (although of course discretely
sampled in space); the PGA would be a programmable gain amplifier, which
could be used for the 'digital non-uniformity calibration'. This is
followed by an analog circuit which does gamma compensation prior to the
A/D converter.

Wow, that takes me back to my youth! ;-)

I held some long since expired patents that addressed essentially this
very approach - albeit for different reasons - and it was put into
production in several systems.

Indeed, prior to the days of high-speed, high-bit-depth ADCs, this was
the *only* way of correcting the CCD output in low-contrast situations,
such as I encountered with early infra-red focal plane arrays.

However, it doesn't just require a programmable *gain* amplifier, but
programmable dark current removal too.

These functions are most logically undertaken by storing the offset and
gain data digitally (as at present) and then applying the offset through
a D/A converter feeding an analogue subtraction circuit to remove the
dark current variation. The result is then passed to the analogue input
of a multiplying DAC (MDAC), with the gain correction data applied to
the digital input, to produce a linear, calibrated analogue output which
can then be processed in analogue form and digitised as required. In the
'70s and '80s, DACs and MDACs were available at video speeds with
adequate resolution, whilst fast, economic, high-resolution ADCs have
only become available fairly recently.
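
(Numerically, that correction chain amounts to the following sketch;
the per-pixel offset and gain tables are hypothetical stand-ins for the
stored calibration data:)

    # Offset-and-gain calibration modelled numerically: offset[i]
    # goes out through a DAC to an analogue subtractor; gain[i]
    # drives the digital input of an MDAC whose analogue input is
    # the offset-corrected signal.
    import numpy as np

    raw = np.array([120.0, 95.0, 143.0])     # CCD output, three pixels
    offset = np.array([20.0, 15.0, 40.0])    # stored dark-current offsets
    gain = np.array([1.00, 1.10, 0.95])      # stored gain corrections

    corrected = (raw - offset) * gain        # linear calibrated signal,
    print(corrected)                         # ready for gamma comp + ADC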

When I first proposed this approach, nobody took it seriously because
the world was transitioning to digital technology and the concept of
precision analogue computation was considered to be old fashioned, bulky
and expensive. I had to manufacture some new cameras that were a
fraction of the size, power consumption, weight and cost of the
competition to demonstrate its worth. I remember one of the world's
experts in thermal imaging technology actually asking to look round the
back of our exhibition stand, because he didn't believe the images could
be produced in the cameras on display without a rack of power guzzling
ADCs digitising the data!
Would this accomplish what you are referring to with the PMT analog
gamma comp in drum scanners?

It would, but the performance achieved would not be any better than what
can currently be achieved by digitising the CCD output directly. The
bottleneck in the approach is the gain correction step, which can't be
achieved with any better precision than 16 bits at a sensible speed.
Consequently, the approach has long since become obsolete and we let the
patent lapse.

If your local fire department still uses an old CairnsIris
helmet-mounted thermal imaging system for search and rescue, you might
be able to see an imaging sensor that used that principle in operation -
if not, a Google search on CairnsIris pulls up thousands of links. At
the time it was first introduced it was revolutionary, because the
thermal contrast in a scene, particularly in smoke, is extremely low,
and extracting a useful image from the low detector signals required
16-bit precision calibration in real time. This meant the conventional
approach would be much too heavy and consume too much power to fit on a
helmet, which mattered because the firefighter had to keep both hands
free to operate equipment or rescue people. The analogue processing made
possible the whole concept of a hands-free imager that could let
rescuers see through smoke right to the fire victim.

These days, the detector output is just digitised directly - often on
the focal plane itself, sometimes with the dark current corrected in
analogue first, but never the gain any more - and the signal is
extracted from the digital data, just as it is with scanners. The
CairnsIris was the second-to-last product I designed using this
analogue calibration approach.
 
