NikonScan settings <-> Analog Gain

Don

Hiya,

Are NikonScan (3.1) levels (and/or curves) settings applied before or
after the actual hardware scan?

What I mean is this:

Before: The settings are first translated into equivalent Analog Gain
settings and then the scan is performed.

After: A scan is performed first, and the settings are applied to the
image afterwards (just like adjusting the image in an external
program).

Whatever the answer: How does one prove it?

For example, something along the lines of: curves, being (in general)
non-linear, do not translate readily into a linear Analog Gain
setting; or an actual empirical test...

Thanks as always!

Don.
 
Kennedy McEwen

Don said:
Hiya,

Are NikonScan (3.1) levels (and/or curves) settings applied before or
after the actual hardware scan?

What I mean is this:

Before: The settings are first translated into equivalent Analog Gain
settings and then the scan is performed.

After: A scan is performed first, and the settings are applied to the
image afterwards (just like adjusting the image in an external
program).
The only control function (other than selection of positive or negative
material) that is implemented in hardware is the analogue gain. This
isn't exactly before the actual hardware scan, but synchronously with
it. Everything else is performed afterwards on the data produced by the
scan.
Whatever the answer: How does one prove it?
You can conduct a careful analysis of the statistics on uniform scans.
If the curves control were implemented at the scanning stage then the
noise characteristics would change. There are several sources of noise
in a CCD scanner, including noise on the LED drive currents; noise on
the CCD readout; shot noise on the emission, and hence the detection, of
photons transmitted by the media; noise on the analogue buffer circuits;
noise on the ADC references; and (the only noise that Nikon own up to in
their specifications, hence their ridiculous Dmax claims) quantisation
noise on the ADC. In normal operation some of these noise sources are
very significant, whilst others are less significant or even negligible.
Any linear changes implemented during the scan will change the balance
of some of these noise sources, consequently producing a difference in
the noise level of the uniform image. Linear changes implemented after
the data is captured will result in exactly the same noise. If you go
through this exercise, you will see the noise change with the analogue
gain, but not with any other linear change in the data controls.
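
As a rough sketch of the measurement I mean - assuming the uniform
scans are saved as 16-bit greyscale TIFFs, and picking Python with
numpy and PIL purely for illustration (the file names and crop box
are placeholders):

import numpy as np
from PIL import Image

def region_stats(path, box=(100, 100, 400, 400)):
    # Keep the crop small so that lamp non-uniformity across the
    # frame does not masquerade as noise.
    img = np.asarray(Image.open(path).crop(box), dtype=np.float64)
    return img.mean(), img.std()

# The same uniform target scanned at two settings of the control
# under test (placeholder names).
for name in ("uniform_setting_a.tif", "uniform_setting_b.tif"):
    mean, std = region_stats(name)
    print(f"{name}: mean = {mean:.1f}, std = {std:.2f}")

If the control is genuinely applied during capture, the mix of noise
sources - and hence the standard deviation at a given mean - shifts
with the setting; a purely digital linear change scales signal and
noise by exactly the same factor.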
For example, something along the lines of: curves, being (in general)
non-linear, do not translate readily into a linear Analog Gain
setting; or an actual empirical test...
Whilst a logical assumption, that isn't really proof - you could, in
theory, modify the ADC reference characteristics to achieve some curves
control as part of the scan. It would, of course, be very complex (and
consequentially expensive) to implement anything as flexible as the
curves control, but that doesn't preclude it from being done. However,
I agree with your general assumption.
 
Don

The only control function (other than selection of positive or negative
material) that is implemented in hardware is the analogue gain. This
isn't exactly before the actual hardware scan, but synchronously with
it. Everything else is performed afterwards on the data produced by the
scan.

That's what I suspected, but according to Nikon support the changes
are applied before A/D conversion, because doing it afterwards would
"result in a lower quality output". Well, of course it would!

Anticipating conflicting answers like this is the very reason I asked
for proof, but that's "confidential engineering"...
You can conduct a careful analysis of the statistics on uniform scans.
If the curves control were implemented at the scanning stage then the
noise characteristics would change. There are several sources of noise
in a CCD scanner, including noise on the LED drive currents; noise on
the CCD readout; shot noise on the emission, and hence the detection, of
photons transmitted by the media; noise on the analogue buffer circuits;
noise on the ADC references; and (the only noise that Nikon own up to in
their specifications, hence their ridiculous Dmax claims) quantisation
noise on the ADC. In normal operation some of these noise sources are
very significant, whilst others are less significant or even negligible.
Any linear changes implemented during the scan will change the balance
of some of these noise sources, consequently producing a difference in
the noise level of the uniform image. Linear changes implemented after
the data is captured will result in exactly the same noise. If you go
through this exercise, you will see the noise change with the analogue
gain, but not with any other linear change in the data controls.

On the face of it, that sounds too complicated/difficult because of
multiple sources of noise. Not to mention, it would require the
precision of a sophisticated lab and not the high tolerance
environment I have here.

So, how about this: let's take levels, because they appear much simpler
and, also, expanding dynamic range appears much more important than
fine-tuning with curves, which can always be done later.

If (simple) levels settings were applied before the scan the dynamic
range should expand smoothly with no gaps. If levels settings were
applied after the scan then there would be (significant) gaps.
Correct?

If yes, then that would be much easier to spot, especially if the test
scan has a limited dynamic range to start with - not too limited,
though, or it gets lost in the noise. Unless, of course, the software
applying the changes afterwards contains some fancy algorithms to
interpolate intermediate values...
Whilst a logical assumption, that isn't really proof - you could, in
theory, modify the ADC reference characteristics to achieve some curves
control as part of the scan. It would, of course, be very complex (and
consequentially expensive) to implement anything as flexible as the
curves control, but that doesn't preclude it from being done. However,
I agree with your general assumption.

Just out of curiosity, how would curves be implemented by modifying
ADC settings? In particular, I'm curious about reconciling the linear
and non-linear nature of the two. Multiple scans and then heavy
processing?

Thanks, as always!

Don.
 
Kennedy McEwen

Don said:
On the face of it, that sounds too complicated/difficult because of
multiple sources of noise. Not to mention, it would require the
precision of a sophisticated lab and not the high tolerance
environment I have here.
It's not very difficult at all really; you just need to take care and
accumulate the statistics over a sufficiently large number of pixels to
be robust, yet a small enough area of the image to be unaffected by
light intensity variation across the frame (which does exist). I
managed to do it fairly easily soon after buying my LS-4000ED and if you
search the archives of this group from a couple of years back you will
find several posts detailing my results. At the time I was looking at
the variation of total noise with signal and any departure of the
multiscanning advantage from theory.
So, how about this: let's take levels, because they appear much simpler
and, also, expanding dynamic range appears much more important than
fine-tuning with curves, which can always be done later.

If (simple) levels settings were applied before the scan the dynamic
range should expand smoothly with no gaps. If levels settings were
applied after the scan then there would be (significant) gaps.
Correct?
Yes, but how significant is another matter.
If yes, then that would be much easier to spot, especially if the test
scan has a limited dynamic range to start with - not too limited,
though, or it gets lost in the noise. Unless, of course, the software
applying the changes afterwards contains some fancy algorithms to
interpolate intermediate values...
You would, however, have to write your own software to view the
histogram, since the gaps would probably be imperceptible on the 8-bit
scale that Photoshop uses for its histogram view.
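
It need not be much software either. A minimal sketch - assuming a
16-bit greyscale TIFF of the scan, with Python, numpy and PIL as my
illustrative choices and a placeholder file name:

import numpy as np
from PIL import Image

# Full 65536-bin histogram, so gaps are not averaged away as they
# are in a 256-bucket display.
data = np.asarray(Image.open("levels_test.tif"))
hist = np.bincount(data.ravel().astype(np.int64), minlength=65536)

lo, hi = int(data.min()), int(data.max())
missing = np.flatnonzero(hist[lo:hi + 1] == 0) + lo
print(f"occupied range {lo}..{hi}, {missing.size} missing codes")
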
Just out of curiosity, how would curves be implemented by modifying
ADC settings? In particular, I'm curious about reconciling the linear
and non-linear nature of the two. Multiple scans and then heavy
processing?
Lots of ways of doing that. All ADCs have voltage references, usually
inputs but sometimes generated internally on the silicon, which
determine the zero and full-scale levels. Changing these voltages would
change the black and white points of the conversion. Many ADCs also
have a mid-range adjustment voltage, and changing this could
(depending on the internal structure) implement a gamma adjustment. A
very few ADCs have multiple voltage levels within their conversion
range, and these could implement higher-order curve adjustments.

Linear ramp ADCs operate by integrating a current on a capacitor to
produce a linearly ramping voltage proportional to the current, which
triggers a comparator when it crosses the input voltage. The chip
counts clock cycles from when the ramp starts until it completes,
resulting in a measure of the input signal. An exponential function
can be achieved by placing a resistor in parallel with the capacitor,
so that the integrated charge effectively leaks through the resistor
in proportion to the ramp voltage. A logarithmic conversion can be
produced by replacing the integrating capacitor with a diode, since it
is well known that the voltage on a diode is a logarithmic function of
the current. A mix of capacitor, resistor and diode functions in the
integration circuit would implement a fairly sophisticated gamma
control.
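
By way of illustration only, here is a toy numerical model of the
leaky-ramp case (the component values are invented, not anything
Nikon uses):

import numpy as np

I, C, R = 1e-6, 1e-9, 2e6          # drive current, capacitor, leak resistor
t = np.linspace(0.0, 5e-3, 50001)  # time base for the ramp

ramp_linear = (I / C) * t                          # V = I*t/C
ramp_leaky = I * R * (1.0 - np.exp(-t / (R * C)))  # V = IR(1 - e^(-t/RC))

def count(vin, ramp):
    # Clock cycles elapsed before the ramp crosses the input voltage.
    return int(np.searchsorted(ramp, vin))

for vin in (0.5, 1.0, 1.5):
    print(f"Vin = {vin} V: linear {count(vin, ramp_linear)}, leaky {count(vin, ramp_leaky)}")

The linear ramp gives counts proportional to the input, whereas the
leaky ramp slows as it rises, so high inputs accumulate
disproportionately more counts - a non-linear transfer characteristic
built into the conversion itself.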
 
Kennedy McEwen

Kennedy McEwen said:
Yes, but how significant is another matter.

You would, however, have to write your own software to view the
histogram, since the gaps would probably be imperceptible on the 8-bit
scale that Photoshop uses for its histogram view.
Just checked this using NikonScan 3.1.2 on an LS-4000ED and, in general,
even with a very low contrast image such as an underexposed negative
strip, the distribution is still wide enough that the black and white
level points either have to be set so far apart that no missing codes
are visible, or the clipped areas cause the histograms to autoscale,
making it impossible to see whether missing codes are present.

However, I did come up with a solution which fools the scanner. Before
trying to replicate this, save your current NikonScan settings so that
they can be restored after the test.

I took a piece of blank negative and placed that in a slide holder and
then sandwiched some aluminium foil over half of the frame. The film
just makes the scanner believe that film is present but the foil
prevents any light from getting through that area to the CCD. So the
foil appears to be very dark indeed, and only CCD readout, analogue and
ADC noise should be present on those parts of the image.

The settings on the scanner were autoexposure on for preview and scan,
ICE off, GEM off, ROC off, 1x multiscan, 14-bit data, positive film,
analogue gains at 0, and everything else set to linear, i.e. no curves
etc.

Preview the entire frame, which should show the black half of the frame
and the orange mask of the negative film. Then select autoexposure off
for preview and scan, close Nikonscan down and then restart it. This
just sets the exposure to the last autoexposure, to stop everything
wandering around - you could use your last film scan as the default, but
the results would be more variable.

Then select an area of the frame which is masked by the foil - ie. very
black. Use the magnifier to enlarge that up to full size and preview it
again. Now go to the curves palette and press the autocontrast icon -
the black and white circle. This adjusts the levels of the red, green
and blue channels to achieve maximum contrast. Examine the settings for
the channels individually - selecting the maximum histogram button to
stretch the scale out to the level adjustment. If the black point is
set to zero then it simply means that the exposure is such that the
level is clipping the ADC input, so increase the master analogue gain
and preview again until the distribution is just off the zero black
point for all three colours.

Now manually increase the black point and decrease the white point of
each colour until the clipped part of the distribution just triggers the
autoscale, making the amplitude of the distribution fall. When you have
achieved this, you will probably have a difference between the black and
white points of around 3 or 4 levels and some missing codes should be
clear in the distributions. You have proved that the curves controls on
Nikonscan 3.1.2 are definitely implemented AFTER the data capture, not
in the analogue circuit, the CCD or any other place.

In doing this I also noticed something else though. Having achieved the
settings that you needed to demonstrate this, set these as the User
Settings. Then switch autoexposure back on for preview and scan, close
NikonScan and restart it. Repeat the above procedure, adjusting the
analogue gain to get the distributions just off the zero. You will
probably notice immediately that the missing codes are much more
significant. Only one thing has changed since the first operation -
autoexposure - and since this is causing more missing codes, obviously
autoexposure is also being implemented AFTER the data is captured!

Increasing the selected area of the frame to encompass a small part of
the normal film area and repeating returns the results to those
previously noted.

From this, I conclude that NikonScan is implementing autoexposure in two
stages. Firstly at the analogue gain stage using the entire frame as
the reference to determine the actual exposure given to the CCD. This
ensures that no point in the image actually saturates the CCD and you
can hear the scan head passing across the frame to implement this. Then
a secondary autoexposure is apparently applied - a post capture
modification of the crop area of the frame selected for the scan! This
is something I have never noticed with NikonScan before; however, I
wonder if it might account for why some people are getting variable
results with Nikonscan whilst others, myself included, have no problems
with it whatsoever.

Think I'll experiment again with this when I get some time - just a
little too much film to scan at the moment. :)
 
Don

It's not very difficult at all really; you just need to take care and
accumulate the statistics over a sufficiently large number of pixels to
be robust, yet a small enough area of the image to be unaffected by
light intensity variation across the frame (which does exist). I
managed to do it fairly easily soon after buying my LS-4000ED and if you
search the archives of this group from a couple of years back you will
find several posts detailing my results.

OK, I'll try to track them down.
Yes, but how significant is another matter.

The idea was to use an image (with smooth gradients) that covers, for
example, only 50% of the available dynamic range and then boost
exposure until the histogram expands close to 100%. Any gaps should
then be quite apparent in spite of minor distortions caused by
assorted noise sources, etc.

Would that approach be feasible?
You would, however, have to write your own software to view the
histogram, since the gaps would probably be imperceptible on the 8-bit
scale that Photoshop uses for its histogram view.

That's exactly what I had in mind. I always write my own software for
things like this. That's also why I'm asking beforehand to see if it's
worthwhile making the effort.

For example, it was by writing a few short routines that I discovered
how inexact a science scanning actually is. I assumed that multiple
(flatbed) scans would produce identical results, at least at low(er)
resolutions. (Of course, I expected differences at 2400 dpi and 48-bit
color, for example.)

But it was quite an eye opener to discover the (considerable!)
differences between scans even at 50 dpi, which I plotted using false
colors within the original image (e.g., color 1: scan value <
baseline; color 2: scan value = baseline; color 3: scan value >
baseline, etc.).
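
For the curious, the comparison boils down to something like this -
file names are placeholders, and the choice of Python with numpy and
PIL is just what I happen to use:

import numpy as np
from PIL import Image

# Two scans of the same original; greyscale assumed for simplicity.
baseline = np.asarray(Image.open("scan1.tif"), dtype=np.int32)
repeat = np.asarray(Image.open("scan2.tif"), dtype=np.int32)

# False-colour map of where the repeat scan reads below, equal to,
# or above the baseline.
rgb = np.zeros(baseline.shape + (3,), dtype=np.uint8)
rgb[repeat < baseline] = (255, 0, 0)
rgb[repeat == baseline] = (0, 255, 0)
rgb[repeat > baseline] = (0, 0, 255)
Image.fromarray(rgb).save("difference_map.png")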

I realize that there are a lot of things in the chain causing this,
from stepper motor inaccuracies to the assorted noise sources you
alluded to before, but it sure puts things into perspective regarding
manufacturers' claims about gazillions of colors, "petapixel"
resolutions, and the like.

Sure, one should always strive for the best results, but it did sober
me up about going too far, as there is a point of diminishing returns.
The art in the science is finding that point.

Don.
 
Don

....
Just checked this using NikonScan 3.1.2 on an LS-4000ED and, in general,
even with a very low contrast image such as an underexposed negative
strip, the distribution is still wide enough that the black and white
level points either have to be set so far apart that no missing codes
are visible, or the clipped areas cause the histograms to autoscale,
making it impossible to see whether missing codes are present.

I didn't see this response before my previous message but that sort of
pre-empts my last message.
However, I did come up with a solution which fools the scanner. Before
trying to replicate this, save your current NikonScan settings so that
they can be restored after the test.

I took a piece of blank negative and placed that in a slide holder and
then sandwiched some aluminium foil over half of the frame. The film
just makes the scanner believe that film is present but the foil
prevents any light from getting through that area to the CCD. So the
foil appears to be very dark indeed, and only CCD readout, analogue and
ADC noise should be present on those parts of the image.

BTW, I used a different trick to fool the scanner into thinking there
is film. (I wanted to scan an empty frame once...). If memory serves,
what I did was turn autofocus off and focus manually. Once that was
done the scanner didn't object to an empty frame anymore!
From this, I conclude that NikonScan is implementing autoexposure in two
stages. Firstly at the analogue gain stage using the entire frame as
the reference to determine the actual exposure given to the CCD. This
ensures that no point in the image actually saturates the CCD and you
can hear the scan head passing across the frame to implement this. Then
a secondary autoexposure is apparently applied - a post capture
modification of the crop area of the frame selected for the scan! This
is something I have never noticed with NikonScan before; however, I
wonder if it might account for why some people are getting variable
results with Nikonscan whilst others, myself included, have no problems
with it whatsoever.

I'm glad you got something out of this too!

That is very interesting, though, and indeed the reason why I ask all
these questions.
Think I'll experiment again with this when I get some time - just a
little too much film to scan at the moment. :)

I know what you mean!!! I've got about 1250 slides, 750 negatives and
some 1100 photos just itching to be scanned. I'm currently in the
(long term) process of "digitizing my life" and films/photos are just
one part (right now I'm trying to digitize 4-track recordings through
a stereo track output but without mixing down to 2 tracks...).

When I had my first go at film I had lots of trouble so I switched to
audio while I regroup and think about everything. These questions are
really a preparation as I'm getting ready to turn my attention back to
film in the next few days - when I'll repeat your above test and
report what I get.

Thanks again!

Don.
 
Kennedy McEwen

Don said:
The idea was to use an image (with smooth gradients) that covers, for
example, only 50% of the available dynamic range and then boost
exposure until the histogram expands close to 100%. Any gaps should
then be quite apparent in spite of minor distortions caused by
assorted noise sources, etc.

Would that approach be feasible?

The problem you are up against is that scanners such as the Nikon
LS-4000 have 14-bit resolution whilst the latest scanners have 16-bit
resolution. That corresponds to 16,384 and 65,536 discrete levels
respectively. Contrast stretching an image far enough for missing
levels to show up on a 256-level histogram is extreme, to put it
mildly. Each 'bucket' in the Nikonscan or Photoshop histogram
corresponds to 64 unique levels with the LS-4000 and 256 levels with
the LS-5000. The gaps produced by stretching even a low contrast image
would be completely insignificant, and hence invisible, on a 256-level
histogram.

For the LS-4000, you need to reduce the upper and lower level limits to
less than 4 apart to have any chance of seeing missing codes in the
histogram. For the LS-5000, you need to make that gap unity, which is
impossible in a single stage. So a 50% stretch of the image would make
no discernible difference to the histogram continuity at all.

However, if you wrote your own histogram display algorithm or analysis
software then the missing codes would be VERY visible indeed.
That's exactly what I had in mind. I always write my own software for
things like this. That's also why I'm asking beforehand to see if it's
worthwhile making the effort.
It is, but it doesn't have to be very complex. I find that a routine
which converts 16- and 8-bit data into ASCII text readily enables most
images to be translated into files that can easily be imported into
Excel or Mathcad for analysis. The intermediate files are enormous
though, so make sure you have enough disk space before heading down
that route. ;-)
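
In Python, say, the whole conversion can be a couple of lines (16-bit
greyscale TIFF assumed; file names are placeholders):

import numpy as np
from PIL import Image

# One pixel value per cell, one image row per line - directly
# importable into Excel or Mathcad. Expect a huge output file:
# roughly six bytes of text per pixel.
data = np.asarray(Image.open("scan.tif"))
np.savetxt("scan.txt", data, fmt="%d", delimiter="\t")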
 
Bart van der Wolf

SNIP
The problem you are up against is that scanners such as the Nikon
LS-4000 have 14-bit resolution whilst the latest scanners have 16-bit
resolution. That corresponds to 16,384 and 65,536 discrete levels
respectively.

And assuming the scanner responds linearly to luminance, a slide film's
red channel often has a D-max of 3.2, i.e. 10^3.2, or about 1585
discrete levels. A D-max of 3.6 would produce a maximum of 3981
discrete luminance levels. The film base/aluminium foil sandwich is
needed to use all available ADC levels.
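
The arithmetic, for anyone who wants to check it:

# A density range D spans a 10**D : 1 ratio of transmitted light,
# so a linear ADC can resolve at most 10**D distinct levels over it.
for dmax in (3.2, 3.6):
    print(dmax, round(10 ** dmax))   # prints 1585 and 3981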

SNIP
However, if you wrote your own histogram display algorithm or analysis
software then the missing codes would be VERY visible indeed.

I don't want to spoil the fun of someone writing his own routine to build a
histogram from the Raw data, but I use a freely available utility called
ImageJ for that kind of analysis. It has 16-bit histograms which can be
output and saved with all 65536 bins as text values.
It can be found at http://rsb.info.nih.gov/ij/ , and for the 16-bit
histograms one should upgrade to http://rsb.info.nih.gov/ij/notes.html for
the latest (beta, but stable) update of the main program file. It comes with
the Java runtime needed to run the app. It reads 16-bit/channel RGB files
(e.g. TIFFs) into what it calls stacks, and each layer can be analyzed
separately.

Bart
 
Don

It is, but it doesn't have to be very complex. I find that a routine
which converts 16- and 8-bit data into ASCII text readily enables most
images to be translated into files that can easily be imported into
Excel or Mathcad for analysis. The intermediate files are enormous
though, so make sure you have enough disk space before heading down
that route. ;-)

That's a good idea, though!

I was just going to tabulate the values, which is a very elementary
thing to do, and then analyze the results.

But an even simpler solution might be to just count the number of
discrete values. After all, that count is what I'm after, really. That
should be enough because, in the case of post-scan processing, the
actual number of values would remain roughly the same, allowing for a
small deviation due to the various things discussed before.
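
Counting them is nearly a one-liner anyway (placeholder file name;
Python with numpy and PIL again, but anything similar would do):

import numpy as np
from PIL import Image

data = np.asarray(Image.open("levels_test.tif"))
# Number of distinct codes actually present in the scan.
print(np.unique(data).size, "distinct values")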

Don.
 
Don

And assuming the scanner responds linearly to luminance, a slide film's
red channel often has a D-max of 3.2, i.e. 10^3.2, or about 1585
discrete levels. A D-max of 3.6 would produce a maximum of 3981
discrete luminance levels. The film base/aluminium foil sandwich is
needed to use all available ADC levels.

Thanks for that! I'm not familiar with the intricacies, which is
exactly why I asked all the questions.
I don't want to spoil the fun of someone writing his own routine to build a
histogram from the Raw data, but I use a freely available utility called
ImageJ for that kind of analysis. It has 16-bit histograms which can be
output and saved with all 65536 bins as text values.
It can be found at http://rsb.info.nih.gov/ij/ , and for the 16-bit
histograms one should upgrade to http://rsb.info.nih.gov/ij/notes.html for
the latest (beta, but stable) update of the main program file. It comes with
the Java runtime needed to run the app. It reads 16-bit/channel RGB files
(e.g. TIFFs) into what it calls stacks, and each layer can be analyzed
separately.

Thanks for the tip!

But the routines for the above are very elementary and I've already
done more complicated stuff. Exporting images as RAW from Photoshop
even eliminates having to worry about various file formats, headers,
etc.

Don.
 
Kennedy McEwen

Actually Don, having gone all round the houses on this, there is an even
easier way of establishing the answer to your original question...
it's in the flipping manual!! ;-)

Quote from page 42 of the Nikonscan Software Manual (PDF file on the CD
for Nikonscan 3.1.2):
"If you are using NikonScan as a stand alone application, Stage 4 can be
performed in the Nikon Scan applet after the image has been saved to
disk."

On the same page, Stage 4 of the scanning process is specified as:
"Color enhancement and sharpening - Use the tools in the Curves, Color
balance, LCH Editor, and Unsharp Mask palettes to adjust tone, colors,
contrast, and sharpness."
contrast, and sharpness."

So there it is in black and white - those functions are all post-scan
processes. I don't know who at Nikon told you otherwise, but I suggest
you point them squarely at that particular page of *their* manual. Of
course it is not unknown for manuals to be wrong or misleading, so
generating the physical proof for yourself may make you feel a bit more
secure in discussions with them. ;-)
 
Bart van der Wolf

SNIP
Exporting images as RAW from Photoshop even eliminates
having to worry about various file formats, headers, etc.

True, but Photoshop is (only) 15-bits/channel!

Bart
 
Don

Actually Don, having gone all round the houses on this, there is an even
easier way of establishing the answer to your original question...
it's in the flipping manual!! ;-)

Very good! :) As I often joke: If all else fails, read the manual!
Quote from page 42 of the Nikonscan Software Manual (PDF file on the CD
for Nikonscan 3.1.2):
"If you are using NikonScan as a stand alone application, Stage 4 can be
performed in the Nikon Scan applet after the image has been saved to
disk."

On the same page, Stage 4 of the scanning process is specified as:
"Color enhancement and sharpening - Use the tools in the Curves, Color
balance, LCH Editor, and Unsharp Mask palettes to adjust tone, colors,
contrast, and sharpness."

So there it is in black and white - those functions are all post-scan
processes. I don't know who at Nikon told you otherwise, but I suggest
you point them squarely at that particular page of *their* manual. Of
course it is not unknown for manuals to be wrong or misleading, so
generating the physical proof for yourself may make you feel a bit more
secure in discussions with them. ;-)

I'm waiting for their response to my last message right now, where I
explained my idea of using levels to determine pre- vs post-scan
processing. They've gone quiet all of a sudden for a couple of days...

I'm not even sure who at Nikon (support.nikontech.com) is answering
these questions as they only have first names - like movie stars...
Probably just some underpaid students, which explains the "gems" they
come up with. Although this current guy at least knew the difference
between pre-scan and post-scan, which is quite an improvement!

However, my bad experiences in the past are exactly the reason why I
asked for proof and don't accept anything they say at face value
anymore.

Don.
 
