Nikon Super CoolScan 4000 ED - some questions


Hans-Georg Michna

Does anybody here have some experience with this scanner? I have
a few questions.

I got this scanner to scan all my slides and am not quite sure
about the optimal settings.

1. I find that this scanner, like practically every other, isn't
as sharp as the manufacturer states. It cannot nearly reproduce
a good slide in all of its resolution. Sharp, fine text, for
example becomes mushy and, in extreme cases, unreadable.

I conclude therefore that it serves no purpose to set the
scanner to its full native resolution. The pictures would only
be bigger and unsharp. So I reduced the resolution from 4,000 to
2,000 dpi. Even at that resolution the results are still
somewhat unsharp.

Does this make sense?

If you want to test this, put a piece of paper into a slide
frame and scan its edge. I bet you'll see 4 to 5 grey pixels
between the white and the black area, rather than the 1 or, at
most, 2 that you should see.
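If you want to count them automatically rather than by eye, here is a minimal
sketch in Python (assuming numpy and Pillow are available; the file name and the
black/white thresholds are just placeholders):

# Count the "grey" pixels per row in a scan of a black/white edge.
# Assumes an 8-bit greyscale image with the edge running roughly vertically.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("edge_scan.tif").convert("L"))

black, white = 50, 205   # arbitrary thresholds; adjust to your scan
for y, row in enumerate(img):
    grey = np.count_nonzero((row > black) & (row < white))
    print(f"row {y}: {grey} grey pixels in the transition")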

I guess all scanner manufacturers exaggerate their resolution by
a factor of 2. They do put out the pixels, but at max res the
pictures are simply unsharp.

2. Has anybody experimented with the sharpening function, the
one called unsharp masking? Does this in fact compensate for a
scanner shortcoming or does it only exaggerate edges? What did
you find to be the best settings?

3. When scanning with 8 bits per color, rather than the full
color depth, does it still make sense to do multiple passes per
slide? Or is the sensor noise lower than what can be represented
in 8 bits per color?

4. While scanning, the scan software uses 100% of an Athlon 64
3000+ processor. Is this normal? Is the processor really the
bottleneck? Anything I could set up differently to make it scan
faster?

5. Any other hints?

Hans-Georg
 

degrub

The Nikons are notorious for having limited depth of field. Two things:
1) Play with the manual focus and see if you can improve matters. Sometimes the
autofocus does not work well when it cannot find high-contrast areas.
Also make sure the cropping is only on the film and not the carrier or
the slide holder. 2) If you have curled film, even a little bit, it will
be tough to get a good scan.

Search this group on nikon 4000 and you will find a lot of discussion.

Frank
 

Philip Homburg

1. I find that this scanner, like practically every other, isn't
as sharp as the manufacturer states.

Please provide a reference to a Nikon document that makes any claims about
the performance of the LS-4000.
 

hpowen

I find the film strip holder to be a help in dealing with curved film,
but it won't help your slides. One option for slides is anti-Newton
glass mounts such as those from Gepe.

I use this scanner semi-professionally and I am pleased. Have a look at
one of my outtakes pages:

http://www.pbase.com/ho72/grandprix_2005&page=all

You should be seeing better results since these are compressed.

If you bought your scanner used, perhaps it is in need of adjustment or
cleaning.
 

(PeteCresswell)

Per Hans-Georg Michna:
I conclude therefore that it serves no purpose to set the
scanner to its full native resolution. The pictures would only
be bigger and unsharp. So I reduced the resolution from 4,000 to
2,000 dpi. Even at that resolution the results are still
somewhat unsharp.

Does this make sense?

I tried doing that with my 4000, but found that when I zoomed the pix they broke
down a lot sooner and showed less detail than at 4000 dpi.

I compromised on a JPEG level of 80 and 4,000 dpi, which gives me scans that are
around 2 megs each - mostly a little less.
 

Don

1. I find that this scanner, like practically every other, isn't
as sharp as the manufacturer states. It cannot nearly reproduce
a good slide in all of its resolution. Sharp, fine text, for
example becomes mushy and, in extreme cases, unreadable.

Several things here. First of all, the scanner does focus fine, but
your film may be warped. This means that not all areas of the image
will be in perfect focus. Inspect the image at maximum resolution
until you see individual grain. It will be in focus at the point where
your focus marker is. Also, and this is very important (!), perform
all these tests using maximum resolution and having turned off (!) all
editing features. Things like ICE, GEM, etc. can all affect the
perceived sharpness. Finally, Nikons do have a relatively narrow depth
of field.
I conclude therefore that it serves no purpose to set the
scanner to its full native resolution.

Quite the contrary! It is essential to use maximum or more accurately
*native* resolution of the scanner if you want maximum quality. You
may decrease the size later in your image editing software but it's
absolutely essential to get the most from your scanner up front. Not
only can you then experiment in your image editor later, but you'll also
get much better results than any interpolation the scanner does on the
fly, no matter how good it may be.
The pictures would only
be bigger and unsharp. So I reduced the resolution from 4,000 to
2,000 dpi. Even at that resolution the results are still
somewhat unsharp.

That's to be expected. The scanner hardware can only scan at its
native resolution. If you reduce resolution then some image processing
will take place. This may be a simple "throw away every other pixel"
or a more complex interpolation, but either way it starts with the
scanner's native resolution and may reduce sharpness. That's why, to
get maximum quality, scan at the scanner's *native* resolution and
then reduce the size afterwards in an image editing program using
desired interpolation and any required post processing like unsharp
masking, etc.
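As an illustration of that workflow, something like this (a rough sketch with
Pillow; the file names, the resampling filter and the unsharp mask settings are
only examples, not recommendations):

# Keep the native 4000 ppi scan, then downsample and sharpen in software.
from PIL import Image, ImageFilter

scan = Image.open("slide_4000ppi.tif")        # full native-resolution scan
half = scan.resize((scan.width // 2, scan.height // 2),
                   resample=Image.LANCZOS)    # 4000 -> 2000 ppi
sharp = half.filter(ImageFilter.UnsharpMask(radius=1.0, percent=100, threshold=2))
sharp.save("slide_2000ppi.tif")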
Does this make sense?

No... ;o)
If you want to test this, put a piece of paper into a slide
frame and scan its edge. I bet you'll see 4 to 5 grey pixels
between the white and the black area, rather than the 1 or, at
most, 2 that you should see.

That edge is not as sharp as you may think! At 4000 dpi each pixel is
*tiny* so no wonder there is a transition. That's why people use the
so-called "slanted razor edge" test to determine the scanner's true
resolution.
I guess all scanner manufacturers exaggerate their resolution by
a factor of 2. They do put out the pixels, but at max res the
pictures are simply unsharp.

Manufacturers do exaggerate their claims as they always assume ideal
conditions. That's why tests like the above are important.

Don.
 

Noons

Don apparently said, on my timestamp of 15/09/2005 3:03 AM:
*tiny* so no wonder there is a transition. That's why people use the
so-called "slanted razor edge" test to determine the scanner's true
resolution.

Pray tell?
 

Hans-Georg Michna

Thanks to all who replied!

I should provide a few samples for the resolution tests. Let's
see if I can find the time.

Wonder how that slanted razor edge test works.

Meanwhile the first results are online at
http://www.michna.com/photos/ , albeit reduced to 900 x 600
pixels.

One thing I cannot find out is under what circumstances the
scanner moves the autofocus spot back to the center of the
picture. Does it do that after each scan, does it do that when I
close and reopen the program (Nikon Scan 4), or does it never do
that at all?

Hans-Georg
 

Don

Don apparently said, on my timestamp of 15/09/2005 3:03 AM:

Pray tell?

My notes say there should be more info here:

http://www.normankoren.com/Imatest/sharpness.html

I'm offline as I write this so I hope the page is still up.

The software to calculate the MTF can be had here (say the said
notes...):

http://www.imatest.com

It basically means putting a razor in a slide mount at an angle. The
slant of, say, 5 degrees will produce a scan where the edge (which is
sharp beyond the scanner's resolution!) will be reproduced as a "pixel
staircase" (my wording). From this, using some math, the actual
resolution of the scanner can be deduced.

Don.
 

Kennedy McEwen

Noons said:
Don apparently said, on my timestamp of 15/09/2005 3:03 AM:

Pray tell?
It is a very simple test which overcomes the sampling consequences, such
as aliasing etc. with digital imaging and assesses the total MTF of the
overall imaging system. This is fine in itself, but you do have to
beware that this is a sampled sensor, so MTF is not the only resolution
restriction. However a knowledge of the MTF and the sampling density
can determine the actual limiting resolution and the amount of aliasing
that is likely as a consequence. The actual process has been adopted in
the ISO-12233 and 16067 resolution test charts and methods for digital
cameras and scanner systems of both transmissive and reflective
material.

The principle is quite straightforward although some of the mathematics
involved is less so. The MTF is a measure of the contrast that is
produced by any imaging component as a function of spatial frequency -
the fineness of the detail that is presented to it. It is basically a
frequency response curve. As MTF degrades with increasing spatial
frequency the contrast eventually falls to a level at which it is no
longer useful and this is the limiting resolution of the component. The
advantage of MTF for system analysis is that for a system of linearly
combined components, the total MTF is simply the product of the
component MTF. So, knowing the MTF of the lens, the CCD and the
electronics we can determine the MTF of the system - or we might measure
the MTF of the system and, with a knowledge of the CCD and electronics
MTF determine if the lens is living up to its promised performance
without having to remove it from the system for independent measurement.
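In symbols (just restating that relation for the complete scanner):

\mathrm{MTF}_{\text{system}}(f) = \mathrm{MTF}_{\text{lens}}(f)\cdot\mathrm{MTF}_{\text{CCD}}(f)\cdot\mathrm{MTF}_{\text{electronics}}(f)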

MTF is defined in terms of the system response to a sine wave input at
each spatial frequency, and can be a long and laborious task to measure
directly - trust me, I've been there! However an infinitely thin line
produces all spatial frequencies with equal level simultaneously, and
using mathematics it is possible to separate out all of the responses to
individual frequencies based on a process known as Fourier
transformation or FT. This speeds up the measurement of MTF
considerably and most MTF assessment kit these days uses an FT approach.
Unfortunately, an infinitely thin line has an infinitely small signal,
so that is a bit of a problem, but a lot of MTF measurement kit uses
very thin line test targets and highly sensitive photomultipliers to get
the signal and then correct the result for the known dimensions of the
line. A second problem with an infinitely thin line is that when you
use that with a digital sensor you have no idea where on the sensor the
image of the line will fall, and that may determine the amount of signal
and the MTF that is measured due to aliasing of the high spatial
frequency components of that thin line. This is also a major limitation
of conventional analogue measurements with bar patterns, especially the
near useless USAF-1951 chart which has only three bars for each spatial
frequency!

The ISO method overcomes the first of these problems, lack of signal, by
replacing that infinitely thin line with an edge - a simple black to
white or vice versa transition. It overcomes the second problem by
placing the edge at a slight angle to the pixel array - not enough to
influence the result significantly, but enough to ensure that on
successive lines the edge is in a slightly different position. This
second step, which a colleague of mine (Kevin Murphy of the Royal
Signals and Radar Establishment, Malvern, England) developed over 20
years ago, is very cunning because not only does it ensure that the edge
occupies many positions, it also permits the MTF measurement to be
oversampled, thus eliminating all aliasing artefacts from the
measurement process.

For example, if the edge is at a gradient of 1 in 10 compared to the
pixel matrix, then the edge will appear 1/10th of a pixel further to the
right on each successive row of the image. The normal measurement
process would be to measure the signal at each pixel in a given row
across the edge, however that would yield only a few pixels in the
region of the edge with useful information. If instead of measuring the
signal change across the edge, we measure it down the edge then we get
10 times as many pixels in the region of the transition, each in order
as the edge moves across them. This effectively oversamples the data,
and permits all of the calculations to be undertaken without any of the
uncertainties of aliasing and random phase perturbations. It also
improves the fidelity of the measurement, because many more useful
samples are contributing to the end result.
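Put numerically: for an edge slope of 1 in N relative to the pixel grid, reading
the data down the edge samples the edge profile at intervals of

\Delta x = \frac{p}{N}

where p is the pixel pitch, i.e. an N-fold oversampling of the transition
(N = 10 in the example above).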

So that is why a slanted target is much more useful as a measurement
tool than an aligned target and why it is an intrinsic part of the ISO
test method. Why an edge? Well, mathematically, the MTF is the modulus
of the FT of the point spread function (PSF) - a measure of how much the
imaging component spreads an infinitely fine point in its image. The
PSF in any specific direction is the equal to the line spread function
(LSF) for a line in the orthogonal direction - so measuring the spread
of a vertical line will give the PSF in the horizontal axis, and vice
versa. If we have the LSF then we can calculate the MTF in the
orthogonal direction. However, the spatial differential of an edge
spread function (ESF) is a very close approximation to the LSF. So now,
we image an edge slanted slightly relative to the pixel matrix, measure
the signal transition down the column or row of pixels most parallel to
that edge, difference each sequential pixel in that column or row to
approximate the LSF, and the MTF is just the modulus of the Fourier
transform. To make things even better, the edge will cross several rows
or columns of pixels, and each can produce its own MTF or they can be
averaged to reduce the noise on the measurement.
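For anyone who wants to try this, the core of the calculation is short. A rough
sketch in Python with numpy (it assumes the edge slope is already known and skips
the windowing and edge-angle estimation that real packages such as Imatest
perform):

# Bare-bones slanted-edge MTF: oversampled ESF -> LSF -> |FFT|.
import numpy as np

def slanted_edge_mtf(img, slope, oversample=4):
    # img: 2-D array of a near-vertical black/white edge (rows x cols)
    # slope: horizontal shift of the edge, in pixels per row (assumed known)
    rows, cols = img.shape
    x = np.arange(cols, dtype=float)
    # Signed distance of every pixel from the slanted edge, in pixel units.
    dist = np.concatenate([x - slope * r for r in range(rows)])
    vals = img.astype(float).ravel()
    # Bin all samples into 1/oversample-pixel bins: the oversampled ESF.
    bins = np.floor((dist - dist.min()) * oversample).astype(int)
    counts = np.bincount(bins)
    esf = np.bincount(bins, weights=vals) / np.maximum(counts, 1)
    lsf = np.diff(esf)                    # edge spread -> line spread function
    mtf = np.abs(np.fft.rfft(lsf))        # modulus of the Fourier transform
    return mtf / mtf[0]                   # normalise so MTF(0) = 1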

It may all sound excessively complicated, but the approach is very sound
and, like a lot of mathematics, all the detail is easily implemented in
software - but that is the background to it. With the ISO standard
targets and the appropriate software to identify the position of the
slanted edge reference, measurement of MTF and the true resolution of
digital imaging sensors is now a trivial exercise. Indeed, someone ran
an open test a year or so back where people tested their own machines
and posted the results. The main problem with this type of thing though
is that it is easy to skew the results by artificially sharpening images
through USM and other techniques prior to running the measurement
software so, IMO, the results of the open test were always open to abuse
and in the end they were less than conclusive.
 

Hans-Georg Michna

Kennedy,

thanks for the excellent explanation of the slanted razor edge
test.

Certainly an ideal scanner would yield white pixels on one side,
black pixels on the other, and one grey pixel in each row.

Since scanners cannot be ideal, we have a very simple quality
signal here. We can check how many grey pixels we see when we
enlarge an area around the razor edge.

How many grey pixels per line would be acceptable before one
would say that the resolution is unnecessarily high in relation
to the poor optics?

Hans-Georg
 

Dieguito

Hans-Georg Michna said:
4. While scanning, the scan software uses 100% of an Athlon 64
3000+ processor. Is this normal? Is the processor really the
bottleneck? Anything I could set up differently to make it scan
faster?

I'm using a CoolScan V ED scanner and it does the same. As a software
developer, I don't believe the processor is the bottleneck, though; I merely
think the processor is just waiting in a loop for data to come in. So I
think a faster processor won't speed up the scanning process.

What I experienced was that the post-processing (like ICE) was much faster
after adding extra memory to my PC. I noticed that the software kept
multiple copies of the image in memory and often took up to 800 MB of RAM.
Since I added an extra 512 MB, the post-processing came down from 2
minutes to 20 seconds, and that's where I think a faster processor might
help shave off a few seconds. The scanning speed itself, though, is (in my
opinion) limited by the scanner hardware.

Hope this helps,

Dieguito
 

Bart van der Wolf

Dieguito said:
I'm using a CoolScan V ED scanner and it does the same. As a
software developer, I don't believe the processor is the bottleneck
though, I merely think the processor is just waiting in a loop for
data to come in. So I think a faster processor won't speed up the
scanning process.

If ICE is enabled, and it's applied to the scan data as it is
collected, the processing time may exceed the sensor integration time
and the interface lag. If that's the case, more processing power will help.

IMO, watching whether the CPU usage stays below 100% will let you
determine if the total of the running processes exceeds the processing
capacity.

Bart
 

Hans-Georg Michna

Dieguito, Bart,

thanks for the info!

I see slightly varying scanning speeds. It sounds like the scan
process is interrupted many times a second. Perhaps it is
different after all, and my processor is just a tad too slow to
provide maximum scan speed. Can't be sure though. This would be
the case if a faster processor showed less than 100% load. I
don't have a faster processor here at this time.

Hans-Georg
 

Kennedy McEwen

Hans-Georg said:
Kennedy,

thanks for the excellent explanation of the slanted razor edge
test.

Certainly an ideal scanner would yield white pixels on one side,
black pixels on the other, and one grey pixel in each row.
Sorry about the delay getting back to you, Hans-Georg, but I had other
priorities over the weekend.

An imaginary scanner with 100% MTF at the Nyquist of the sampling
density would have only one grey pixel in the transition from white to
black. However that is unlikely to be an ideal or optimal scanner
because such a device would also have a significant, if not 100%, MTF
above Nyquist. An ideal scanner is not the imaginary scanner - it will
always produce *more* than 1 grey pixel in the transition.

To see why this is the case, you just need to consider the effect on the
MTF of the main components, the CCD and the optic.

In the case of the optic, the MTF is limited both by diffraction and
design and manufacturing aberrations. If the lens is very good it will
approach the situation where the MTF is dominated by the diffraction
term only, and hence is called diffraction limited. Diffraction is a
function of the lens aperture and the wavelength of the light
transmitted by it. If the lens has a variable aperture then it may
become diffraction limited above a certain f/#, since restricting the
aperture will increase the amount of diffraction produced and also
reduces the aberrations from the outer parts of the lens. So the
diffraction limited lens is the best that you can get. Roughly
speaking, diffraction spreads each point in the image into a gaussian
profile with a diameter at 1/e from the peak of approximately 2.44 x f/#
x wavelength. So, if we have an f/4 optic then the diameter of the
diffraction disc at the 1/e amplitude is about 10 wavelengths - or 5um
in the green and 7um in the red. The significant point is that the MTF
of a real lens, even an ideal lens, will always smear the image
slightly.

In spatial frequency terms, diffraction produces an MTF which falls
almost linearly from 1 at 0cy/mm spatial frequency down to 0.1 at 70% of
the cut-off frequency then tapering down to zero at the cut-off
frequency itself of about 1/(wavelength x f/#). The point here is that
the MTF at any particular sampling density will always be less than
unity, and can actually be quite low from even an ideal lens, and still
produce useful information.
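For reference, the exact diffraction-limited MTF for incoherent light, which
that linear description approximates, is

\mathrm{MTF}(\nu) = \frac{2}{\pi}\left[\arccos\left(\frac{\nu}{\nu_c}\right) - \frac{\nu}{\nu_c}\sqrt{1-\left(\frac{\nu}{\nu_c}\right)^2}\right] \quad\text{for } \nu \le \nu_c = \frac{1}{\lambda \times f/\#}

and zero above the cut-off.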

For the CCD, the main limitation is the physical size of the CCD cell
itself. For most CCDs the optical collection area of the cell is close
to 80% of the pitch unless specific steps have been taken to reduce it
by masking etc. or increase it by microlenses. So the typical CCD
cannot discern anything smaller than 80% of a pixel pitch - whether it
is an infinitely fine spot or a blur makes no difference to the CCD,
which is much the same as blurring the image itself, since the signal
produced by the cell only represents the average signal across its area.
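In MTF terms, averaging over a rectangular collection aperture of width a (about
80% of the pixel pitch in the typical case above) gives the familiar sinc
roll-off:

\mathrm{MTF}_{\mathrm{CCD}}(\nu) = \left|\frac{\sin(\pi a \nu)}{\pi a \nu}\right|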

Now, you can see that if you ignore the lens completely - or have a lens
which has almost 100% MTF at the limiting resolution of the scanner,
then the finite CCD element size itself will produce one grey pixel
between the black and white transition, because the chances of you
aligning your edge exactly with the transition between one CCD cell and
the next are negligible. You will almost always get one cell which sees
part of the black and part of the white, resulting in a grey pixel.

Similarly, if you ignore the finite width of the CCD cells, and assume
that they are infinitely thin, then the limited resolution of the lens,
even an ideal diffraction limited lens, will result in the edge becoming
blurred and hence at least one pixel (and possibly more) being grey.

Since both of these effects are unavoidable in even a perfect scanner,
it is fairly obvious that an ideal scanner must produce more than one
grey pixel between black and white transitions. More on this later
though.
Since scanners cannot be ideal, we have a very simple quality
signal here. We can check how many grey pixels we see when we
enlarge an area around the razor edge.
You can check the number of grey pixels but it does not tell you very
much on its own. The number of grey pixels across an edge may well be a
measure of quality, but it is not a measure of resolution and the
"optimum" is neither the maximum nor the minimum. In order to interpret
such a measure, you need to understand what the optimum is. You have
made an assumption that it is "1", but without any reference or
explanation of what artefacts you are prepared to tolerate in order to
achieve that figure, which I suggest you would find more undesirable
than the slightly softer result of a higher number. The optimum
scanning edge transition is not the same as the optimum downsampling of
a transition scanned at a much higher resolution since the latter can
invoke optimum digital filters which are impossible to manufacture
optically in the real world.
How many grey pixels per line would be acceptable before one
would say that the resolution is unnecessarily high in relation
to the poor optics?
That isn't a very useful question, in that it is a question which does
not have a unique answer. For example, your original assumption is that
an ideal scanner would have one grey pixel between black and white.
However, as explained, this would require a 100% MTF right up to, and
beyond, Nyquist, with all of the artefacts that such heavy undersampling
would produce. If your scanner were a simple analogue system, such as a
film camera or just the lens, then you would assess its resolution using
conventional Rayleigh criteria, which is roughly the spatial frequency
at which the MTF has fallen to around 10% or so. If this was a
diffraction limited lens then the MTF would reduce to this level almost
linearly, as mentioned above. So, if you had a scanner where the MTF
reduced linearly to around 10% at the Nyquist point, then you would get
scanned images which looked pretty close to ideal optics with minimum
sampling artefacts. How many pixels would appear grey in the transition
between black and white for such a scanner? 3-4, depending on the
position of the edge relative to the sample points - substantially more
than your nominally ideal, yet this would be considered ideal in an
analogue system, such as the camera lens that made your original image.
However, it doesn't end there. The lens/CCD combination probably
produces an MTF which does not reduce linearly, but is a concave down
curve, falling more rapidly than linear around 1/3rd of the spatial
frequency and then flattening off at higher frequencies. Such a scanner
will still "resolve" the full Nyquist of the sampling density but will
produce a transition which may be many times that "optimum". So you
really can't say that just because you have a grey transition of 10
pixels that the scanner resolution is unnecessarily high in relation to
the optics - what matters is the shape of the transition, not the number
of pixels it extends across. Since the shape is actually quite
difficult to assess visually, the best solution is to fourier transform
the transition to produce the MTF directly. Then you can simply set a
threshold for the ultimate contrast you want in your final scan and
read off the spatial frequency that it corresponds to on the MTF
curve as your useful resolution. However, you need to be
careful not to be too aggressive in your requirements for MTF, since
data below this threshold can still be enhanced using USM, particularly
if scanning with high bit depth, to produce an end MTF which exceeds the
acceptable threshold.

By comparison, imagine you had a very high resolution scanner - let's
choose something impractically large for example, say 100,000ppi - and
produced your 4000ppi output by 25x downsampling the original data from
that device. In this case your "ideal" may well be to have only one
pixel grey in the transition between a black and white edge. Why does
this approach have a different optimum from scanning directly at
4000ppi? The answer lies in what is practical in terms of the system
MTFs. Whilst it is not practical to realise either an optic or a CCD
with an MTF of 100% up to Nyquist and 0% beyond it, it is certainly
possible to closely approximate such an MTF in a digital filter and
incorporate such a filter in the downsampling algorithm. So, just
because you can achieve such a transition on downsampled data, it does
not mean that it is achievable, nor indeed desirable, at the original
scan resolution.
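To make that distinction concrete, here is a small sketch (Python with numpy,
purely illustrative) of the difference between simply throwing away samples and
downsampling through a filter that approximates the ideal brick-wall response:

# Downsampling a 1-D signal: naive decimation versus a windowed-sinc
# (near brick-wall) low-pass filter applied before decimation.
import numpy as np

def decimate_naive(signal, factor):
    return signal[::factor]            # aliases everything above the new Nyquist

def decimate_filtered(signal, factor, taps=101):
    n = np.arange(taps) - (taps - 1) / 2
    cutoff = 0.5 / factor              # new Nyquist, in cycles per original sample
    lowpass = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(taps)
    filtered = np.convolve(signal, lowpass, mode="same")
    return filtered[::factor]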
 

Hans-Georg Michna

By comparison, imagine you had a very high resolution scanner - let's
choose something impractically large for example, say 100,000ppi - and
produced your 4000ppi output by 25x downsampling the original data from
that device. In this case your "ideal" may well be to have only one
pixel grey in the transition between a black and white edge. Why does
this approach have a different optimum from scanning directly at
4000ppi? The answer lies in what is practical in terms of the system
MTFs. Whilst it is not practical to realise either an optic or a CCD
with an MTF of 100% up to Nyquist and 0% beyond it, it is certainly
possible to closely approximate such an MTF in a digital filter and
incorporate such a filter in the downsampling algorithm. So, just
because you can achieve such a transition on downsampled data, it does
not mean that it is achievable, nor indeed desirable, at the original
scan resolution.

Kennedy,

thanks a lot for your long explanation! I've read it with
interest.

I do not understand all of it perfectly, but I hope I have
understood most of it. It is interesting and useful.

From a practical point of view of useful information contained
in the pixels, it seems to me that when I take the Nikon 4,000
ppi scanner and downsample the result to 2,000 ppi, which I
have, in fact, done, then I'm losing some information, but I
clearly lose less information per pixel than when I would
downsample it further to, say, 1,000 dpi.

It also seems obvious to me that the Nikon 4,000 dpi scanner is
still some way from extracting all of the information from a sharp
slide. Would you agree? It seems to me that, to copy most of the
information from a normal slide, one would need a scanner with a
sensor of perhaps 8,000 or 16,000 ppi and then, if one wanted to
save space, perhaps downsample the scanned result somewhat to
shed a lot of pixels while losing little information.

Do you know typical resolutions of professional film scanners?

Hans-Georg
 

Kennedy McEwen

Hans-Georg said:
thanks a lot for your long explanation! I've read it with
interest.

I do not understand all of it perfectly, but I hope I have
understood most of it. It is interesting and useful.

From a practical point of view of useful information contained
in the pixels, it seems to me that when I take the Nikon 4,000
ppi scanner and downsample the result to 2,000 ppi, which I
have, in fact, done, then I'm losing some information, but I
clearly lose less information per pixel than when I would
downsample it further to, say, 1,000 dpi.
Of course - not only would 1000ppi have around 1/16th as many pixels as
the original, compared to 1/4 as many at 2000ppi, but the MTF of the
scanner is much higher between 1000 and 2000ppi than it is between 2000
and 4000ppi, so the information content at 2000ppi is much more per
pixel than it is at 4000ppi. This is entirely normal and to be expected
- almost every imaging system reproduces lower spatial frequencies
better than high spatial frequencies (the one common exception being
catadioptric lenses, which usually have a "bump" in their MTF at some
point).
It also seems obvious to me that the Nikon 4,000 dpi scanner is
still some way from extracting all of the information from a sharp
slide. Would you agree?

If the scanner is performing correctly (and you have seen my comments on
your example scan) then it should be getting *almost* all of information
that a 35mm colour film can reproduce. There is no question that it
doesn't get everything that could be on the film (as the link to Bart's
comparison with the higher resolution Minolta shows), but unless you are
shooting with the best optics and always using a tripod it is unlikely
to leave much behind.
It seems to me that, to copy most of the
information from a normal slide, one would need a scanner with a
sensor of perhaps 8,000 or 16,000 ppi and then, if one wanted to
save space, perhaps downsample the scanned result somewhat to
shed a lot of pixels while losing little information.

Do you know typical resolutions of professional film scanners?
Actually, a lot of professional drum type scanners are limited to even
lower resolution than the 4000ppi Nikon. A few have native resolutions
of 80000ppi to 12000ppi but not many. For example, you can pick up an
Imacon 343 virtual drum scanner for about $5k that has 3200ppi optical
resolution, while its top of the range stablemate the 949 would set you
back $20k and give 8000ppi in portrait mode from a 35mm frame. Even
that drops to 3200ppi and 2050ppi on larger formats though. Check their
specifications at http://www.imacon.dk/sw3275.asp if interested.
 

Hans-Georg Michna

Actually, a lot of professional drum type scanners are limited to even
lower resolution than the 4000ppi Nikon. A few have native resolutions
of 80000ppi to 12000ppi but not many. For example, you can pick up an
Imacon 343 virtual drum scanner for about $5k that has 3200ppi optical
resolution, while its top of the range stablemate the 949 would set you
back $20k and give 8000ppi in portrait mode from a 35mm frame. Even
that drops to 3200ppi and 2050ppi on larger formats though. Check their
specifications at http://www.imacon.dk/sw3275.asp if interested.

Kennedy,

thanks for the info! I've learned a lot.

Hans-Georg
 

Kennedy McEwen

Hans-Georg said:
Kennedy,

thanks for the info! I've learned a lot.
That should, of course, have read "A few have native resolutions of
8000ppi...". It would be an incredible scanner at 80,000ppi, able to
resolve more than a single wavelength of blue light! ;-)
 

Gordon Moat

Hans-Georg Michna said:
Kennedy,

thanks for the info! I've learned a lot.

Hans-Georg

Just to add a bit, many like to call an Imacon a virtual drum, which is
something common in their marketing material. The reality is that they are
curved path CCD scanners. This is still more like your Nikon film scanner
than a true drum scanner. The benefits for the Imacon are that the film is held
at a precise distance, and the upright construction allows other potential
noise generating electronics to be farther from the imaging sensor. Imacons
are not really drum scanners.

While the resolutions might not seem that impressive, the Dmax is quite good
and can produce better results than many film scanners. The scanning limits
for larger film sizes are due to file size limitations, and not failure of
the optics. This is also true for some high end flat bed scanners.

A true drum scanner uses PMTs. They function quite differently than a CCD
sensor. The most common one you might find is the Heidelberg Tango, though
that came out over five years ago and is no longer available new. The latest
in drum scanning is made by ICG. You can read more about those at
<http://www.icg.ltd.uk/>. They have some very good explanations of their
technology on the site.

Another useful drum scanning resource is Kai Hammann. While it is a
little old, there is a nice article at:

<http://fb42.s6.domainkunden.de/kund...ht_Trommelscannen_Sinn/EN_Scans_vergleich.htm>

I have used many different drum scanners, flat scanners, and film scanners
over the last ten years. One reality is that there was rarely any value in
scanning beyond 8000 ppi. Not that there wasn't more information on
the film, but the final printing output may have defined a more practical
limit to useful detail. In other words, your chosen printing
output might only be able to use up to a certain amount of detail, and that
might be less than your scanner is capable of giving, and certainly could be
less than is actually on the film.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
