Nikon 5000ED vs Minolta 5400


Alex Tutubalin

mark said:
I am deciding between two scanners: the Nikon Coolscan 5000ED vs. the
Minolta Scan Elite 5400. Both have a 4.8 dynamic range and share

Mark,

I've used both SE5400 and 5000ED in parallel for several months.
After that, I've decided to keep Nikon and sell Minolta.

Nikon scanner is
* sharper! (esp. visible on Velvia scans)
* faster
* can scan entire roll (with SA-30 adapter)
* film loading is faster and more comfortable

On the other hand, the Minolta can resolve finer details, but
that is not needed for 35-mm scans (I've never printed photos larger
than 16x20 from 35-mm)


Alex Tutubalin,
Moscow, Russia
 

Kennedy McEwen

Alan Browne said:
Dynamic range is quantifiable regardless of a signal being digital or
analog. Ask an electrical engineer. Or here:
http://www.jeffrowland.com/tectalk6.htm
Of course, and the term existed before the concept of an analogue to
digital convertor was even invented.
Film is not an "analog", it is an image. ("Analog" means, "by
analogy", such as a voltage representing a temperature sensor and a
meter indicating that voltage as a temperature instead of as a voltage.).
However, Alan, by that definition (which I partially disagree with in
any case) film certainly is an analogy of the scene, and thus analogue.

My disagreement with your definition becomes apparent when you consider
that the temperature can also be represented by a series of digital
numbers on a meter just as well - the numbers are an analogy of the
temperature. Thus, your definition leads to the immediate paradox that
digital is analogue - and thus, using the established "proof by
contradiction" method, your definition must be false.

In general terms, analogue means "continuous" whilst digital means
"discrete". A digital representation of the signal can only indicate
discrete values, whilst an analogue representation can indicate all
values with infinitesimal discrimination. The digital representation
can only indicate the signal to the noise floor if there are sufficient
discrete steps, however the analogue representation *always* indicates
the signal into the noise floor.
The confusion (as much mine as anyone's) is that the term Dmax for film
means maximum density, and this has a figure of 4.0. A film burned
clear would have a Dmin approaching 0 and a signal on the A/D would be
at or near maximum. Conversely, the densest area would have a signal at
the A/D approaching 0, but necessarily would contain noise. The range
between the two can be construed as the Dynamic range of the film.
What seems to be missing in this entire thread is any consideration of
why the DRange of the scanner *MUST* be significantly higher than the
DRange of the film it is scanning. This relates to perception and gamma
as much as it does to the difference between analogue and digital.

Film reproduces the luminance changes in the shadows by increasing the
number and size of silver grains or dye clouds per unit area. This is
continuous, and thus effectively analogue - even at the quantum level it
remains analogue with the presence of atoms in a unit area being
probabilistic.

If the DRange of the scanner simply matched the DRange of the film then
the Dmax on the film would represent a count of 0 from the scanner,
whilst the Dmin on the film would be represented by a number close to
(2^n)-1. The next darkest level on the scanner from this Dmin
representation would be a count of one less, which would be a virtually
indistinguishable visual change of density on the film. However the
next lightest level from Dmax which the scanner could represent would be
1, due to the discrete nature of the digital data. This would represent
DMax-0.3 on the film, and would be visibly discrete and lighter than the
Dmax. For example, film Dmax is generally around 3 - 3.6, which should
require no more than 10-12 bits; however, a 1-bit change in the shadows
of such a linearly encoded digital image is clearly visible. In short,
an ADC which has a DRange equal to the film is inadequate to
discriminate the shadow information.
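(A quick numeric illustration of that last point - assuming a 12-bit linear ADC
whose range just matches a film density range of 3.6; the numbers are a sketch,
not measurements:)

import math

# Density represented by the lowest few counts of a linear 12-bit ADC
# spanning a density range of about 3.6 (Dmin at full scale).
full_scale = 2**12 - 1                  # 4095 counts at Dmin
for count in (1, 2, 4, 4094):
    density = math.log10(full_scale / count)
    print(count, round(density, 2))
# count 1 -> 3.61 (essentially Dmax)
# count 2 -> 3.31 (a 0.3 density jump in a single step)
# count 4 -> 3.01
# count 4094 -> ~0.0 (near Dmin a single count is only ~0.0001 density)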

The problem this throws up is that the equations that both you and
Dimitris have been debating relate to *linear* representations of
signals, whilst our perception of density is very non-linear. It is
well known that your vision is more sensitive to fine changes of
luminance in the shadows than it is in the highlights of the scene,
hence the use of gamma encoding to minimise the bit depth used to
describe the image digitally. This means that the dynamic range
required to describe the shadows on the image is much higher than
dynamic range necessary to describe the highlights - however, your
equations relate to a uniform dynamic range in a linear system which, at
best, indicates an *average* dynamic range. Since perception of
discrete steps is the driver for quantisation level, your equations for
Drange should be applied in perceptual space, the space in which the
discrimination of discrete steps is equal throughout the range, ie.
perceptually linear. The results can then be transformed via the
inverse gamma space to voltage, luminance and digital linear space to
determine the Drange necessary to describe the shadows without
posterisation, and thus the minimum Drange necessary on the scanner.
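
(To put a rough number on the perceptual point, a small Python sketch - assuming
a plain power-law gamma of 2.2 and 8-bit encoding, which is only an approximation
of perceptual space:)

gamma, levels = 2.2, 255

def linear(code):
    # gamma-decode an 8-bit code value to linear luminance (0..1)
    return (code / levels) ** gamma

shadow_step = linear(2) - linear(1)          # one code step near black
highlight_step = linear(255) - linear(254)   # one code step near white
print(shadow_step, highlight_step, highlight_step / shadow_step)
# the highlight step is several hundred times larger in linear terms, so the
# shadows need far more linear dynamic range per visually equal step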
 

Alan Browne

Kennedy McEwen wrote:

However, Alan, by that definition (which I partially disagree with in
any case) film certainly is an analogy of the scene, and thus analogue.

An image of a scene is an image. We can look at it, project it, scan, replicate
it, etc. We don't need to 'convert' its meaning in any way.
My disagreement with your definition becomes apparent when you consider
that the temperature can also be represented by a series of digital
numbers on a meter just as well - the numbers are an analogy of the
temperature. Thus, your definition leads to the immediate paradox that
digital is analogue - and thus, using the established "proof by
contradiction" method, your definition must be false.

The 'digital' temp meter has made a translation in information type. It samples
the voltage, converts to digital and then discrete logic or s/w converts it into
numbers.

IAC, as previously ranted, I simply do not like the term analog for film as I
believe it had come up as some sort of counter for the term digital as applied
to cameras. Did anyone call film "analog" before digital? We certainly called
analog circuitry analog circuitry long before the great rise of digital systems.
(Most computers in my industry were certainly dominated by their analog
interfaces and sensors over the relative simplicity of the digital computer
part).... it never entered anyone's mind to call film "analog"... it was a
recording of an image and that image was (post dev.) identifiable as such on the
same film on which it was shot.

In general terms, analogue means "continuous" whilst digital means
"discrete".

I agree with that, but only as that is the nature of most analog signals. The
signal however remains some abstraction (as a voltage, current, frequency) of
something else it represents. A film image is undeniably an image.
What seems to be missing in this entire thread is any consideration of
why the DRange of the scanner *MUST* be significantly higher than the
DRange of the film it is scanning. This relates to perception and gamma
as much as it does to the difference between analogue and digital.

Film reproduces the luminance changes in the shadows by increasing the
number and size of silver grains or dye clouds per unit area. This is
continuous, and thus effectively analogue - even at the quantum level it
remains analogue with the presence of atoms in a unit area being
probabilistic.

If the DRange of the scanner simply matched the DRange of the film then
the Dmax on the film would represent a count of 0 from the scanner,
whilst the Dmin on the film would be represented by a number close to
(2^n)-1. The next darkest level on the scanner from this Dmin
representation would be a count of one less, which would be a virtually
indistinguishable visual change of density on the film. However the
next lightest level from Dmax which the scanner could represent would be
1, due to the discrete nature of the digital data. This would represent
DMax-0.3 on the film, and would be visibly discrete and lighter than the
Dmax. For example, film Dmax is generally around 3 - 3.6, which should
require no more than 10-12 bits; however, a 1-bit change in the shadows
of such a linearly encoded digital image is clearly visible. In short,
an ADC which has a DRange equal to the film is inadequate to
discriminate the shadow information.

Agree. I don't see how having more bits gives much improvement in the shadows,
however, as a minute change in the light going through, with the high resolution
of the low order bits, will raise the detector value from very low to somewhat
high. eg: a slight change in the light level will result in several bits worth
of information. (eg: go from values close to 0 to values in excess of 32 or 64,
quite quickly.)
The problem this throws up is that the equations that both you and
Dimitris have been debating relate to *linear* representations of
signals, whilst our perception of density is very non-linear. It is
well known that your vision is more sensitive to fine changes of
luminance in the shadows than it is in the highlights of the scene,
hence the use of gamma encoding to minimise the bit depth used to
describe the image digitally. This means that the dynamic range
required to describe the shadows on the image is much higher than
dynamic range necessary to describe the highlights - however, your
equations relate to a uniform dynamic range in a linear system which, at
best, indicates an *average* dynamic range. Since perception of
discrete steps is the driver for quantisation level, your equations for
Drange should be applied in perceptual space, the space in which the
discrimination of discrete steps is equal throughout the range, ie.
perceptually linear. The results can then be transformed via the
inverse gamma space to voltage, luminance and digital linear space to
determine the Drange necessary to describe the shadows without
posterisation, and thus the minimum Drange necessary on the scanner.

I'm beginning to see the light on this one. Thanks. Is it possible, then, to
map the gamma curve over the film to A/D response and thereby determine the
minimum number of bits required to span the response of the film?

Cheers,
Alan
 

Kennedy McEwen

Alan Browne said:
Kennedy McEwen wrote:



An image of a scene is an image. We can look at it, project it, scan,
replicate it, etc. We don't need to 'convert' its meaning in any way.
But what is recorded on film is an analogy of that scene, comprising a
mass of grains and/or dye clouds to represent the luminance levels in
the original scene. The fact that this can be viewed as an image by
projecting light through it is no different from the voltage recording
of a sound wave, which can just as readily be used to reproduce a
further analogy of the original sound by feeding it to a speaker.
The 'digital' temp meter has made a translation in information type.

Only in the same way as the digital scan of the image, whether or not
film forms an intermediary step.
It samples the voltage, converts to digital and then discrete logic or
s/w converts it into numbers.
Just as happens with the scanner or digital camera.
IAC, as previously ranted, I simply do not like the term analog for
film as I believe it had come up as some sort of counter for the term
digital as applied to cameras. Did anyone call film "analog" before
digital? We certainly called analog circuitry analog circuitry long
before the great rise of digital systems.

Actually, we didn't. In fact, even systems that we would now consider
to be digital in nature, such as the original PCM system proposed by
Alec Reeves in 1941, were not described as digital for some considerable
time, and neither were any that we would now call analogue so described.
Even Claude Shannon's 1948 paper on information theory, whilst referring
to systems of decimal-digits and binary-digits (and inventing the term
bit) failed to discriminate between analogue and digital encoding
schemes, referring to continuous and discrete. Even the original
publications on CCDs by Boyle & Smith or Amelio in 1970, as a device
which quantised what we would now term analogue signals in time, but not
level, did not use language to discriminate digital and analogue
techniques. The use of the terms analogue and digital first appears
during the development of the computer, to distinguish between computing
systems which used continuous analogue computations and those which used
discrete numerical calculations. The transfer of that terminology to
describe electrical systems only appears in the late 1960s when such
computational processing could be applied to real signals.
(Most computers in my industry were certainly dominated by their
analog interfaces and sensors over the relative simplicity of the
digital computer part).... it never entered anyone's mind to call film
"analog"... it was a recording of an image and that image was (post
dev.) identifiable as such on the same film on which it was shot.
Nevertheless, just as we can now refer to the original telegraph system
as digital, film is an analogue representation of the image. The issue
being that it is effectively a continuous rather than a discrete record
of the luminance at each point in the scene. The fact that we didn't
understand the distinction at the time of the original invention doesn't
mean the distinction didn't exist. Darwin wasn't aware of DNA when he
recognised the principle of evolution, but we now know that it is random
mutations of DNA that is responsible for it.
I agree with that, but only as that is the nature of most analog
signals.

It is the nature of *all* analogue systems. That is what distinguishes
digital from analogue. We have become used to considering digital as only
being binary in nature, with only two discrete levels, however there are
multi-level digital systems as well, with 3, 4, 5 or more discrete
levels of digital data. Indeed, if you ever used a 56K modem in your
computer then its operation at the highest data rate relied on such
multi-level digital signals.
The signal however remains some abstraction (as a voltage, current,
frequency) of something else it represents. A film image is undeniably
an image.

It is still an abstraction in a different medium from the original scene
and does not contain all of the information of the original scene, only
a limited selection that the camera lens can reproduce and the film can record.
It is inherently analogue in nature.
I don't see how having more bits gives much improvement in the
shadows,

Because your eye is more sensitive to changes in the shadows.
however, as a minute change in the light going through, with the high
resolution of the low order bits will raise the detector value from
very low to somewhat high. eg: a slight change in the light level
will result in several bits worth of information. (eg: go from values
close to 0 to values in excess of 32 or 64, quite quickly.)
But, if the Drange of the digital data is the same as the Drange of the
analogue signal then you will not be able to represent those small
changes at low levels. You need more bits than the analogue Drange. In
addition to this, you need to digitise into the noise floor to
adequately represent the noise itself.
I'm beginning to see the light on this one. Thanks. Is it possible,
then, to map the gamma curve over the film to A/D response and thereby
determine the minimum number of bits required to span the response of
the film?
Yes - it is somewhere in the region of 17 to 18-bits of linear encoding
with typical slide film.
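
(As a rough back-of-the-envelope check, assuming the perceptual target is
ordinary 8-bit gamma-2.2 encoding with a pure power-law curve:)

import math

gamma, levels = 2.2, 255
darkest_code = (1 / levels) ** gamma    # linear value of the first step above black
bits = math.log2(1 / darkest_code)      # linear bits needed to resolve that step
print(darkest_code, round(bits, 1))
# roughly 5e-6 of full scale, i.e. about 17.6 bits of linear encoding,
# which lands in the 17-18 bit region quoted above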
 

Alan Browne

Kennedy said:
projecting light through it is no different from the voltage recording
of a sound wave, which can just as readily be used to reproduce a
further analogy of the original sound by feeding it to a speaker.

I really don't want to debate this, but a voltage recording of a soundwave needs
a transducer to allow conversion back to the original sound. All I need to look
at a film photo is the film photo.
Only in the same way as the digital scan of the image, whether or not
film forms an intermediary step.

Just as happens with the scanner or digital camera.

That is a translation (or conversion). The film, alone, conveys the same
information without translation/conversion. A digital camera makes its A/D
conversion in real time and immediately afterwards destroys the image itself.
The file created is meaningless without interpretation (as is a scan).
computational processing could be applied to real signals.

You're going back too far with that one.
Nevertheless, just as we can now refer to the original telegraph system
as digital, film is an analogue representation of the image. The issue
being that it is effectively a continuous rather than a discrete record
of the luminance at each point in the scene. The fact that we didn't
understand the distinction at the time of the original invention doesn't
mean the distinction didn't exist. Darwin wasn't aware of DNA when he
recognised the principle of evolution, but we now know that it is random
mutations of DNA that is responsible for it.

You're stretching your analogies further and further...
Because your eye is more sensitive to changes in the shadows.

That is now sufficiently clear, thanks.
But, if the Drange of the digital data is the same as the Drange of the
analogue signal then you will not be able to represent those small
changes at low levels. You need more bits than the analogue Drange. In
addition to this, you need to digitise into the noise floor to
adequately represent the noise itself.

Got it.
Yes - it is somewhere in the region of 17 to 18-bits of linear encoding
with typical slide film.

Thanks.
 

Alan Browne

Alex said:
I've used both SE5400 and 5000ED in parallel for several months.
After that, I've decided to keep Nikon and sell Minolta.

Nikon scanner is
* sharper! (esp. visible on Velvia scans)

Kinda goes against what you say below.
Fewer pixels too.
* can scan entire roll (with SA-30 adapter)
* film loading is faster and confortable

On the other hand, the Minolta can resolve finer details, but
that is not needed for 35-mm scans (I've never printed photos larger
than 16x20 from 35-mm)

Doesn't mean they don't exist.

Cheers,
Alan.
 

Kennedy McEwen

Alan Browne said:
I really don't want to debate this, but a voltage recording of a
soundwave needs a transducer to allow conversion back to the original
sound. All I need to look at a film photo is the film photo.
Even if it were true it wouldn't prevent it from being an analogue
recording of the scene. However you need several transducers to look at
the image on the film. The first of these, which actually comprises
several transducing steps, is development of the latent (analogous)
image into a density map and then secondly, after development, a
backlight by which to view that density mapped image. Not only that,
but the brightness of the image is dependent on the intensity of that
backlight, just as the volume of the reproduced soundwave depends on the
sensitivity of the speaker used.

There is no getting away from it, an image recorded on the film is
analogue, just as the latent image pre-development is analogue and just
as the image produced by the camera lens on the film is analogue itself.
 

Alex Tutubalin

Alan said:
Kinda goes against what you say below.

Sharpness and resolving power are _different_ characteristics.
In MTF terms, Nikon scanner has higher MTF in 30-50 cycles/mm
range.
Fewer pixels too.
'information density' is not too high on SE5400.
I've never seen 1-pixel details on a Minolta scan at full res.
Doesn't mean they don't exist.
Sure. So scanner selection depends on planned average print size.
If you shoot several films per year :) and every frame should be
printed larger than 16x20 in, go and buy the Minolta.
For several dozen/hundreds of rolls per year and most prints 16x20 or
less - I recommend buying the Nikon and using drum scans for larger prints.

I've used all the scanners mentioned (sold the Nikon 4000 to buy the Minolta,
then returned to a Nikon 5000 for mass scans, then sold the SE5400 because it was not used).

Another problem for the Minolta is edge-to-edge sharpness when scanning
the first (or last) frame in a 6-frame strip. DOF is very shallow, so
film curl is a serious problem.
The SE5400 is much better in edge-to-edge sharpness than the Nikon _4000_,
but not the Nikon 5000.

Alex
 

Dps

From Wikipedia, the free encyclopedia
"An analog (American English spelling) or analogue (British English
spelling) signal is any continuously variable signal. It differs from a
digital signal in that small fluctuations in the signal are meaningful.
Analog is usually thought of in an electrical context, however mechanical,
pneumatic, hydraulic, and other systems may also use analog signals.

The word "analog" implies an analogy between cause and effect, voltage in
and voltage out, current in and current out, sound in and frequency out.

An analog signal uses some property of the medium to convey the signal's
information. For example, an aneroid barometer uses rotary position as the
signal to convey pressure information. Electrically, the property most
commonly used is voltage followed closely by frequency, current, and
charge."

"Dynamic range is a term used frequently in numerous fields to describe the
ratio between the smallest and largest possible values of a changeable
quantity.

Audio engineers often use dynamic range to describe the ratio of the loudest
possible relatively-undistorted sound to silence or the noise level, say of
a microphone or loudspeaker.

Electronics engineers apply the term to:

a. the ratio of a specified maximum level of a parameter, such as power,
current, voltage or frequency, to the minimum detectable value of that
parameter. (See Audio system measurements.)
b. In a transmission system, the ratio of the overload level (the maximum
signal power that the system can tolerate without distortion of the signal)
to the noise level of the system.
c. In digital systems or devices, the ratio of maximum and minimum signal
levels required to maintain a specified bit error ratio.
In music, dynamic range is the difference between the quietest and loudest
volume of an instrument, part or piece of music.

Photographers use dynamic range as a synonym for the luminosity range of a
scene being photographed; the light sensitivity range of photographic film,
paper and digital camera sensors; the opacity range of developed film
images; the reflectance range of images on photographic papers."



1) Did I ever say that I mostly enjoy this ng because, besides the knowledge
I get on scanning my pictures, I get the chance to read such interesting
views from interesting people?

2) Note the comment on photographers.

3) A film image is analogue in the sense that there is no sampling,
quantisation and encoding. You are allowed to have as many digits to your
numbers as you want, at any point, and that's analogue. Maybe in contrast to
digital, but that's still what we call it. It might not be a time series,
but that's not a problem.

4) What I meant to say is that I think the "dynamic" or "light sensitivity"
range of a film does not have exactly the same meaning, nor usefulness, nor
maybe a robust and precise methodology of definition, as in DSP. Anyhow, it
was just an observation, maybe just that I feel people use the dynamic range
for films as a property that should be written on the spec sheet of the
film. The value of 4 is something like an estimated upper bound - not the
actual maximum value that is recorded on the particular film type, which,
BTW, can be stochastic.
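
(Just to put such a figure in perspective, the usual conversion from a density
value to a plain contrast ratio and an equivalent linear bit count - illustrative
numbers only:)

import math

for density in (3.0, 3.6, 4.0):
    ratio = 10 ** density               # density 4.0 corresponds to 10000:1
    print(density, int(ratio), round(math.log2(ratio), 1), "bits")
# 3.0 -> 1000:1, ~10 bits; 3.6 -> ~3981:1, ~12 bits; 4.0 -> 10000:1, ~13.3 bits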



Anyway, I really agree with the opinion of Kennedy on these two:



"computational processing could be applied to real signals"

and

" Nevertheless, just as we can now refer to the original telegraph system
as digital, film is an analogue representation of the image. The issue
being that it is effectively a continuous rather than a discrete record
of the luminance at each point in the scene. The fact that we didn't
understand the distinction at the time of the original invention doesn't
mean the distinction didn't exist. Darwin wasn't aware of DNA when he
recognised the principle of evolution, but we now know that it is random
mutations of DNA that is responsible for it."


Regards,



Dimitris
 

Bart van der Wolf

SNIP
I've never seen 1-pixel details on a Minolta scan at full res.

I have, see:
<http://www.xs4all.nl/~bvdwolf/main/downloads/Minolta_DSE5400_5400_scr
atch.jpg>
It's a small crop of a horizontal scratch on Provia film, rather than
a lens MTF limited projection of detail. The scratch is not exactly
horizontal, so you can see it cleanly going from one line of pixels to
the next, never occupying more than 2 pixels in between. You'll see a
slanted vertical edge at the right, which *is* limited by the
lens/focussing. So the scanner outresolves the film image detail.

If you like to know my scanner's MTF (no film, just the scanner-lens
and CCD):
<http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/SFR_DSE5400_GD.png>
You can see that even at the Nyquist limit of 106.3 cycles/mm there is
significant modulation, more than most films can resolve on high
contrast edges, and there is potential for aliasing if there is
film/grain detail with even higher spatial frequencies.

This all demonstrates that the lack of pixel detail is mostly due to
camera lens/focus and camera shake limitations, and the combined
lens+film+scanner MTFs in the total imaging chain. The scanner cannot
record what isn't there to begin with.

Bart

P.S. The LS-5000 would be Nyquist limited to a maximum of 78.7
cycles/mm, which would lose film detail if the lens and camera
technique are good.
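
(Both Nyquist figures follow directly from the sampling pitch; a trivial sketch
using the nominal optical resolutions:)

# Nyquist limit in cycles/mm from the nominal scan resolution in dpi
# (25.4 mm per inch, two samples per cycle).
for name, dpi in (("Minolta SE5400", 5400), ("Nikon LS-5000", 4000)):
    print(name, round(dpi / 25.4 / 2, 1), "cycles/mm")
# Minolta SE5400 106.3 cycles/mm
# Nikon LS-5000 78.7 cycles/mm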
 

Alan Browne

Alex Tutubalin wrote:

Sharpness and resolving power are _different_ characteristics.
In MTF terms, Nikon scanner has higher MTF in 30-50 cycles/mm
range.

try this:
http://www.jamesphotography.ca/bakeoff2004/scanner_test_results.html
Sure. So scanner selection depends on planned average print size.
If you shoot several films per year :) and every frame should be
printed larger than 16x20 in, go and buy the Minolta.
For several dozen/hundreds of rolls per year and most prints 16x20 or
less - I recommend buying the Nikon and using drum scans for larger prints.

You're neglecting cropped prints as well.

See link above.

Cheers,
Alan.
 

Alan Browne

Alex said:
There is only one scan from CS5000. Also, results from SE5400 are very
different.

Variance is normal. Some scanner operators (people) are better at setting up their
machine and ensuring sharp focus than others (similar variance occurs with other
scanners).

I've tested only one SE5400 (may be broken, but resolution was higher
than CS5k's :) and only one CS5000 (may be carefully selected by Nikon
to make me happy :).

May be the case. Jim may be repeating the bake off, and so you could do your
own scan for the next round.
 

beenthere

Check the "Flare on Minolta 5400" thread. Though my posted experience
was only based on a few samples, it is worth bearing in mind. You want
to avoid the flare problem at all costs.
 

beenthere

I'm surprised that the different light sources on these two scanners are
not mentioned by you or others.
 

beenthere

The shallow DOF appears to be a common problem on both of these
scanners. There seem to be more complaints from Nikon users about this
problem. Some suggested that it is due to Nikon's LED light source. Or
perhaps there are just more Nikon users.

Your implication that the LS5000 has less of a problem than the LS4000 is the
first time I have seen this comparison made. Nikon never claimed that it
has done anything to address this problem on the LS5000.
 

Alex Tutubalin

The shallow DOF appears to be a common problem on both of these
scanners. There seem to be more complaints from Nikon users about this
problem. Some suggested that it is due to Nikon's LED light source. Or
perhaps there are just more Nikon users.

Your implication that the LS5000 has less of a problem than the LS4000 is the
first time I have seen this comparison made. Nikon never claimed that it
has done anything to address this problem on the LS5000.

Nikon never claimed that the CS4000 has a DOF problem :)

I've used all three scanners mentioned. The SE5400 definitely has more
DOF than the CS4000 with the motorized adapter. For the SE5400, shallow DOF is a
problem only for the first/last frames in a strip (or for heavily curled film).

The CS5000 definitely has more DOF than the SE5400. I've directly compared
them when I owned both (unfortunately, the scan files were not saved).


Alex
 

Alan Browne

Check the "Flare on Minolta 5400" thread. Though my posted experience
was only based on a few samples, it is worth bearing in mind. You want
to avoid the flare problem at all costs.

I've had my 5400 for 15 months. I can find flare if I really dig for it.
Nothing as bad as the posted example.
 

Kennedy McEwen

Alex Tutubalin <[email protected]> said:
The CS5000 definitely has more DOF than the SE5400. I've directly compared
them when I owned both (unfortunately, the scan files were not saved).

Just to be clear, did you measure depth of field on both scanners and,
if so, how? Or does the 5000 just hold the film flatter and, if so,
how?

In the short time I have played with an LS-5000 it seemed to have
the same DOF limitations as the LS-4000.
 

Kennedy McEwen

I'm surprised that the different light sources on these two scanners are
not mentioned by you or others.
Because it doesn't directly feature in the parameters that Mark was
asking about:
 
