The Ultimate Scanner Software vs. Photoshop Question

Kennedy McEwen

Don said:
Theory is very nice - in theory... And as you know I always thirst to
learn more of it, but - as can be seen below - practice (i.e. context)
plays a significant part in real life situations. Which all reminds me
of...

In theory, a bumblebee was once declared insufficiently aerodynamic to
be able to fly. In practice, numerous bumblebees strongly disagreed...
Oh I just love it when numpties bring up that hoary old piece of urban
legend - it consistently means they neither know what they are talking
about nor have the interest to find out. It just sounds like a nice
story, so it must be true. Sorry Don, but theory is important to
understanding what is happening, what to expect and why. As for your
allegorical urban legend: if you do some research into that you will
find it is quite appropriate to your current situation and problem - not
understanding the full story.
And this bumblebee also strongly disagrees... ;o)
Do note that the stress should be on the "bee" part (as in "worker
bee"), not on the "bumble" part... ;o)


In theory...
No, in practice. You still haven't achieved the effect that multiscan
actually offers.
According to you, due to Photoshop's 15-bit and integer math
"limitations", multi-pass multi-scanning only makes sense with up to 2
scans. And yet...

I have scanned 18 times and then threw away the 2 most extreme scans.
Next I used Photoshop to combine the remaining 16 scans in increments
of 4 to end up with: 4x, 8x, 12x and 16x multiscans. Comparing those
there is a clear and incremental reduction of noise at each junction
with 16x nearly eliminating all, perhaps with only about 1% of shadows
- if that - still having some very minor noise.
As there should be, but the following indicates that you have hit a
precision limit.
However, there was no increase in shadow detail.

But shadow detail is limited by noise so, if you are achieving the
reduction you claim then you would be seeing increased shadow detail if
it is there - and it is, as your following test shows:
As I already
mentioned, a very similar effect can be achieved by simply selecting
the shadows (threshold = 32) and applying 0.3 Gaussian Blur. The
multiscanned images are still slightly superior in that they appear a
tad sharper.
What a coincidence - have you worked out the area under a 0.3 pixel
radius gaussian blur? It seems remarkably close to the square root of
the ratio between 14 and 15 bits!

So we are back to square one - you see a modification of the noise and
assume your multiscan is working, but you can't see the detail that the
reduced noise would provide. Coincidentally, lack of precision does
exactly the same thing.

The rest is irrelevant except:
Please repeat this (it's a genuine question because I really want to
know) and report if you see any difference between the two. I
understand if you can't, because sub-pixel alignment of multi-pass
scans is excruciating and very time consuming. It took me about 1
day per image!
Multiscan does bring out the additional shadow detail that I would
expect - as I have already mentioned to you in another post. In fact,
to prove this I sandwiched a slide with a piece of unexposed and
developed film - it was the only way of getting controllably dense
material. However, by the time you get to 16 frame multiscan the
improvement is flattening off. Up to 8 frames works pretty close to
simple theory.
BTW, do you have any references that NikonScan uses floating point
(when calculating multiscanned images) and not integer math like
Photoshop? I can't find anything in the manual.
I have no indication that Nikonscan uses floating point arithmetic, and
I would be more than a little surprised if it did.
 
Howard

Kennedy and Don,

Thank you. I appreciate both perspectives.

Kennedy, one thing I must note is that your final message to ME (not
Don), was a surprise.

Based on your detailed report on the old "Epson-Inkjet" list a few
years ago, I've always scanned at the integer divisor of the scanner's
maximum optical resolution (e.g., on a 2400 ppi scanner: 300, 400,
600, 800, 1200 or 2400) that would give me the largest size photo I
wanted without causing the printer resolution to fall below the 240 -
300 range --- BUT LETTING THE FINAL DPI "FALL WHERE IT MAY" WITH NO
RESAMPLING IN PHOTOSHOP.

But based on your post to me above, I see that now (perhaps due to
newer and better printers/drivers) you suggest RESAMPLING THE FINAL
PHOTOSHOP RESOLUTION TO THE NEAREST OF 240, 360, or 720 DPI. Although
you did not say so, my guess is that you suggest that such resampling
in Photoshop be a DOWNsample.

If I interpret all this correctly, then this will be a significant
change in my workflow. But change for the better is a good thing!

Thank you.

Howard
 
Kennedy McEwen

Howard said:
Kennedy and Don,

Thank you. I appreciate both perspectives.

Kennedy, one thing I must note is that your final message to ME (not
Don), was a surprise.

Based on your detailed report on the old "Epson-Inkjet" list a few
years ago, I've always scanned at the integer divisor of the scanner's
maximum optical resolution (e.g., on a 2400 ppi scanner: 300, 400,
600, 800, 1200 or 2400) that would give me the largest size photo I
wanted without causing the printer resolution to fall below the 240 -
300 range --- BUT LETTING THE FINAL DPI "FALL WHERE IT MAY" WITH NO
RESAMPLING IN PHOTOSHOP.

But based on your post to me above, I see that now (perhaps due to
newer and better printers/drivers) you suggest RESAMPLING THE FINAL
PHOTOSHOP RESOLUTION TO THE NEAREST OF 240, 360, or 720 DPI. Although
you did not say so, my guess is that you suggest that such resampling
in Photoshop be a DOWNsample.

If I interpret all this correctly, then this will be a significant
change in my workflow. But change for the better is a good thing!
The objective here is to make sure that the printer driver is not
implementing the resample process using an inferior nearest neighbour or
bilinear interpolation scheme. All Epson desktop printers resample to
720ppi before they apply the stochastic dither process. The large
format range resample to 360ppi (incidentally this is necessary to
overcome another Photoshop limitation - 32000 pixels maximum in any
axis, which results in the largest print being 44.4" on the desktop
range). This resampling resolution is irrespective of what the
printer's advertised resolution is, which refers to the ink dot
placement within the dither, not the image resolution. Other printer
manufacturers do the same, although their native resolution may differ.
Whether the printer uses nearest neighbour or bilinear resample depends
on the setting of one parameter in the driver, called variously DCC or
digital camera or something similar.

Since this is generally an upsample procedure (assuming that you are
not printing at greater than 720ppi) to a resolution which is well
beyond the resolution of the unaided eye on the page, it is less
important and any artefacts produced are imperceptible. However if you
intend to view the image under some form of magnification (for example,
I print sheets of "contact prints" at 720ppi which I regularly view with
magnification from close range) then it does take on a greater
importance. In that case sending the printer an integer division of the
native driver resolution ensures that each pixel in the image is
resampled to exactly the same number of printer pixels, resulting in an
image which can be viewed under magnification without artefacts and
distortions.
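
As a worked illustration of that integer-division rule (a minimal
sketch; only the 720ppi desktop figure comes from the discussion above,
the helper name is mine):

    # Sketch: an image resolution that divides the driver's native
    # resolution exactly maps each image pixel to a whole number of
    # printer pixels; anything else gives uneven pixels under magnification.
    NATIVE_PPI = 720  # Epson desktop drivers, per the discussion above

    def printer_pixels_per_image_pixel(image_ppi, native_ppi=NATIVE_PPI):
        factor = native_ppi / image_ppi
        return factor, factor.is_integer()

    print(printer_pixels_per_image_pixel(360))  # (2.0, True)  - exactly 2x2
    print(printer_pixels_per_image_pixel(300))  # (2.4, False) - uneven mapping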

Since it produces better images under magnification, I implement it as a
standard operation even for images that would normally be viewed at a
greater distance.
 
Don

....
Sorry Don, but theory is important to
understanding what is happening, what to expect and why.

Absolutely! Which is exactly why I wrote above that I always thirst to
learn more. My point was that there is a difference between pure
theory and theory applied in practice.

I mean, when you design new instruments, do you trust the theory
blindly and go straight to production, or do you translate this theory
into practice (i.e. context) by prototyping and testing first?

Considering theory without paying attention to practice is only
useful/meaningful in ivory towers i.e. in theoretical discussions.
Nothing wrong with that, of course, if that is the end goal. But when
applied the theory is inevitably augmented by relevant contextual
conditionalities.

Case in point: My brand new bouncing baby LS-50 is rated at 14-bits.
In theory, that should give me more than enough dynamic range. And yet
in practice (as you yourself very comprehensively explained recently)
photons tend to disagree and there is noise in dark areas - hence this
discussion about multiscanning...
As for your
allegorical urban legend: if you do some research into that you will
find it is quite appropriate to your current situation and problem - not
understanding the full story.

That was *exactly* my point - so no wonder it's appropriate! ;-)

Relying on theory alone is not enough when trying to apply it. One
needs the full story i.e., consider the context. That *doesn't mean*
the theory is wrong or unimportant (although it may be disproved in
the process) but it certainly means that - within a given context -
other elements (may) play a crucial part.
As there should be, but the following indicates that you have hit a
precision limit.

Which according to you should be at 2 scans in my case, right?

--- cut ---
Nevertheless, remember
that the first step in your methodology is to reduce the opacity of the
top layer by 50% - which means you have already exhausted the 15 bit
capacity of PS, so I am not surprised you cannot see much improvement.
The calculation I gave above did not take account of the methodology you
have previously suggested. The noise texture change you perceive
suggests that you are just looking at truncation limits.
--- cut ---

And yet I see a clear and continuing improvement at each step: 4x, 8x,
12x and 16x. What's more, this improvement is not flat (equal across
the whole shadow area) but each successive step distinctly clears up
more (i.e. goes deeper into) shadows.

Therefore, this seems to indicate that we can eliminate lack of
(Photoshop) precision causing a simple "blurring of noise" because
that would be equally distributed across the whole area and not
"selective" by incrementally clearing up deeper shadows - as the scan
count rises - without affecting areas already cleared up.

I mean, we don't really *know* that NikonScan works with 16-bits
internally? Or, do we? After all, I (and I'm guessing you too)
presumed that Photoshop's 16-bit was true 16 bit - until we learned
better... But even if there were 1 bit of difference I'm still not
convinced that it would have such a drastic effect as my tests above
seem to confirm.
But shadow detail is limited by noise so, if you are achieving the
reduction you claim then you would be seeing increased shadow detail if
it is there - and it is, as your following test shows:

In my current workflow, it was only after scanning at ~AG +2 and
layering the two images in order to compare them that I've located
where the detail was. Switching to multiscanned image and increasing
contrast almost to the point of distortion I could indeed observe (a
hint of) detail in the multi-pass multiscanned image.

So, the detail is there but it's effectively masked by such low
contrast that it's of no practical use (well, to me, anyway).

If you have the time, please try this: Do a nominal multiscan. Then
scan again but boost AG until noise is gone to the same extent as in
the multiscan. After that "synchronize the histograms" so that the
shadows (where the noise is) in both images are of equal brightness
and contrast. Don't worry about the rest of the image because,
obviously, clipped areas can't be "synchronized".

I'm curious if you can spot as much difference between the two shadow
areas as I can? It's a roundabout, circumstantial way of trying to
determine if single-pass multiscan is equal to (properly performed)
multi-pass multiscan.
Multiscan does bring out the additional shadow detail that I would
expect - as I have already mentioned to you in another post. In fact,
to prove this I sandwiched a slide with a piece of unexposed and
developed film - it was the only way of getting controllably dense
material. However, by the time you get to 16 frame multiscan the
improvement is flattening off. Up to 8 frames works pretty close to
simple theory.

On reflection, it may be a case of two subjective judgments, i.e. I
may be seeing insufficient contrast or detail but that very same image
may look satisfactory to you!?

Actually, that's probably it. I have, most likely, been "spoiled" by
+2 AG scans which reveal so much more detail in the shadows that I'm
"blind" to subtle detail in multiscan images.

I suppose, the only way to figure this out conclusively would be to
compare single-pass and multi-pass multiscan of the same image and on
the same scanner...

Don.
 
Kennedy McEwen

Don said:
Absolutely! Which is exactly why I wrote above that I always thirst to
learn more.

So why aren't you?
My point was that there is a difference between pure
theory and theory applied in practice.
Which made it an irrelevant point, since I am telling you the result of
*both* theory and practice - it's just that you are not listening.
Case in point: My brand new bouncing baby LS-50 is rated at 14-bits.
In theory, that should give me more than enough dynamic range. And yet
in practice (as you yourself very comprehensively explained recently)
photons tend to disagree and there is noise in dark areas - hence this
discussion about multiscanning...
Only your theory predicted that there would be no noise at all - and at
the lower ADC counts the noise is not photon driven at all, but pretty
close to being quantisation noise on the LS-4000 & LS-50. Photon noise
is only significant when sufficient photons arrive in the integration
period that their square root gives rise to a noise signal which exceeds
the quantisation and CCD readout noise. At low levels, such as deep in
the shadows of dense media, photon noise is insignificant.
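
In symbols (restating the condition above, with the usual quadrature
sum for the combined noise floor):

$$\sigma_{\text{photon}} = \sqrt{N_{\text{photons}}}, \qquad
\text{photon noise dominates only when }
\sqrt{N_{\text{photons}}} > \sqrt{\sigma_{\text{quant}}^{2} + \sigma_{\text{read}}^{2}}$$
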
That was *exactly* my point - so no wonder it's appropriate! ;-)
So why are you ignoring the obvious errors in your theory that 16 frame
summation of 14-bit data can be achieved in a 15-bit application? It
just isn't going to happen - you need at least 18 bits of accumulation
for that to work. Quite simply, with anything less than that you most
certainly will see further into the shadows with a 2EV exposure
increase.
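
The bit arithmetic is easy to check (a sketch; the helper name is
mine):

    # Summing N full-scale b-bit samples without truncation needs
    # b + ceil(log2(N)) bits of accumulator headroom.
    import math

    def accumulator_bits(sample_bits, frames):
        return sample_bits + math.ceil(math.log2(frames))

    print(accumulator_bits(14, 16))  # 18 bits - beyond Photoshop's 15
    print(accumulator_bits(14, 2))   # 15 bits - the most Photoshop can hold
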
Which according to you should be at 2 scans in my case, right?
No, just not much more than 2. It would only be exactly 2 if
quantisation noise were the only noise source in the lower levels.
--- cut ---

--- cut ---

You will note the word *much* in the fourth line.
And yet I see a clear and continuing improvement at each step: 4x, 8x,
12x and 16x.

But you have not reported any noise measurements. In particular, all
you report is a visual appearance at "300 - 400% scaling".

HOT NEWS FOR DON:
You have presented no evidence that you are seeing noise let alone its
reduction!!

To determine noise you need to repeat the scan (including the full
multiscan arithmetic) operation many times, at least 10 times, and
measure the standard deviation for each pixel in every frame. The noise
reduction produced by levels of multiscan can then be assessed by
averaging the standard deviation of all pixels and comparing that for
each multiscan image. There is no shortcut to this but, I accept, it is
a lot of work if you are doing the multiscan summation manually, as it
were.
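
For illustration, the measurement described above might look like this
(a numpy sketch; it assumes the repeated scans are already-aligned
greyscale arrays of identical shape, and the function name is mine):

    import numpy as np

    def mean_pixel_noise(scans):
        # scans: a list of >= 10 repeats of the same (multiscan) setting.
        # Returns the per-pixel standard deviation, averaged over the frame.
        stack = np.stack(scans).astype(np.float64)  # (n_scans, height, width)
        return stack.std(axis=0).mean()             # std over repeats, then average

    # Compare this figure for the 4x, 8x, 12x and 16x multiscan images to
    # quantify the reduction instead of judging it at 300-400% scaling.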

Simply scaling the image up and viewing the result, particularly with a
multi-pass multiscan approach, merely demonstrates sub-pixel
misalignment softening, not noise reduction! If the image you are
viewing is dominated by granularity from the film (as a deep shadow will
be) rather than scanner noise, then misalignment of the image will
create a visual impression of noise reduction but, since it was
granularity and not noise in the first place, that will not deliver the
associated increase in Dmax that you expect.
What's more, this improvement is not flat (equal across
the whole shadow area) but each successive step distinctly clears up
more (i.e. goes deeper into) shadows.
Why don't I find that surprising?
Therefore, this seems to indicate that we can eliminate lack of
(Photoshop) precision causing a simple "blurring of noise" because
that would be equally distributed across the whole area and not
"selective" by incrementally clearing up deeper shadows - as the scan
count rises - without affecting areas already cleared up.
Think again Don - that is *exactly* what would happen, right up to the
point where insufficient bit depth limits further penetration into the
shadows. Just as you have reported. BTW, it is truncation, not
blurring!
I mean, we don't really *know* that NikonScan works with 16-bits
internally? Or, do we?

We do know that it has at least 16-bit accuracy because of the test that
I suggested you repeat to convince yourself. Remember the synthesised
greyscale ramp with the linear and gamma adjustments? Remember the
manual calculations for comparison? Remember the Excel spreadsheet?
Remember the results that matched 16-bit truncation? Remember the
reduced accuracy when the same functions were implemented in Photoshop?

No, we don't know that NikonScan4 is working with 16-bits internally, it
might be working with 57 and three quarter bits for all we know, but it
has to round the result to the 16-bit output. We do know that it is *at
least* 16 bits internally though.
After all, I (and I'm guessing you too)
presumed that Photoshop's 16-bit was true 16 bit - until we learned
better...

Since I had never tried to do anything that relied on the full 16-bits
until recently, Adobe had released that particular gem of information
into the public domain long before I considered its significance. In
fairness it isn't actually significant to me at all, because I don't
generally find myself scraping around in the lower bits to pull images
out of the shadows. But, as I recall, the information was released in
response to others who had already encountered some unexpected
limitations.
But even if there were 1 bit of difference I'm still not
convinced that it would have such a drastic effect as my test above
seem to confirm.
Unfortunately Don, your tests confirm quite the opposite - that you
cannot perform multiscan accumulation of 14-bit data with a 15-bit
application.

Now, it might be coincidence, it might even be little green gremlins
that are shipped with every LS-50 scanner and encoding their spirits
into every scan that it produces to prevent Photoshop from achieving the
functions that have been disabled in the scanner itself. However, since
the theory actually predicts exactly the effects that you have reported
the principle of Occam's razor says we don't need to believe in those
little green gremlins.

Until such times as you can *prove* that 15-bits total depth is adequate
to accumulate 14-bit multiscans (and recall that you actually need
18-bits for the accumulation of 16 frames of 14-bit data) and identify
an alternative explanation (not just some unfounded suggestion that the
CCD just stops detecting light once the photon flux falls below a fairly
high level), that is the only theory that is needed. That theory fits
the facts as you have reported them - and as can be independently
verified.
In my current workflow, it was only after scanning at ~AG +2 and
layering the two images in order to compare them that I've located
where the detail was. Switching to multiscanned image and increasing
contrast almost to the point of distortion I could indeed observe (a
hint of) detail in the multi-pass multiscanned image.
Check the word "much" in the quotation above, you missed it the first
time, but in context it matches your phrase "a hint of" fairly closely.
Just another coincidence that one, couldn't possibly be anything to do
with only having 15-bits to implement a process you need 18-bits to
accomplish.
So, the detail is there but it's effectively masked by such low
contrast that it's of no practical use (well, to me, anyway).

If you have the time, please try this: Do a nominal multiscan. Then
scan again but boost AG until noise is gone to the same extent as in
the multiscan. After that "synchronize the histograms" so that the
shadows (where the noise is) in both images are of equal brightness
and contrast. Don't worry about the rest of the image because,
obviously, clipped areas can't be "synchronized".

I'm curious if you can spot as much difference between the two shadow
areas as I can? It's a roundabout, circumstantial way of trying to
determine if single-pass multiscan is equal to (properly performed)
multi-pass multiscan.
I have been through this exercise in the past and *quantified* the data
- not just visually tried to match histograms, which is virtually
impossible given the massive noise reduction following the multiscan
operation.
On reflection, it may be a case of two subjective judgments, i.e. I
may be seeing insufficient contrast or detail but that very same image
may look satisfactory to you!?
Numbers are not subjective, which is why I can be absolutely dogmatic
that this has nothing whatsoever to do with subjective levels of
acceptability.
Actually, that's probably it. I have, most likely, been "spoiled" by
+2 AG scans which reveal so much more detail in the shadows that I'm
"blind" to subtle detail in multiscan images.
Only the difference between the two approaches would be subtle if you
did the basic arithmetic correctly.

I did suggest writing your own software to implement the arithmetic to
the necessary precision, and I still believe that is the only way you
will make any headway in this at all.
 
Anoni Moose

Case in point: My brand new bouncing baby LS-50 is rated at 14-bits.
In theory, that should give me more than enough dynamic range. And yet
in practice (as you yourself very comprehensively explained recently)
photons tend to disagree and there is noise in dark areas - hence this
discussion about multiscanning...

Hi,
I'm curious. Why would 14-bit'ness have anything to do
with noise in the dark areas, even in theory? Is it an assumption
that quantizing noise is the primary source of dark-area noise?
Keep in mind that I really don't understand what number of
bits has to do with dynamic range either, I'd think that the range
would be mostly optical & analog -- from clear film down to
the analog noise level in the black areas, or is the usable
darkest black always assumed to be where the lsb flips to '1'
regardless of how much analog-system noise is present?
If I'm too off topic, "never mind". Your example just
tickled my brain. :)

Mike

P.S. - My current (old) Polaroid scanner is very noisy in dark areas.
 
Kennedy McEwen

Anoni said:
(e-mail address removed) (Don) wrote in message


Hi,
I'm curious. Why would 14-bit'ness have anything to do
with noise in the dark areas, even in theory? Is it an assumption
that quantizing noise is the primary source of dark-area noise?
Keep in mind that I really don't understand what number of
bits has to do with dynamic range either, I'd think that the range
would be mostly optical & analog -- from clear film down to
the analog noise level in the black areas, or is the usable
darkest black always assumed to be where the lsb flips to '1'
regardless of how much analog-system noise is present?
If I'm too off topic, "never mind". Your example just
tickled my brain. :)
Your concerns are well placed because the dynamic range really depends
on which is the greatest noise that is present in the blacks -
quantisation or analogue noise. If, for example, the analogue noise was
significantly greater than the quantisation noise then the dynamic range
would indeed be much less than the 14-bits permits and the multiscan
implementation, even in 15-bit limited Photoshop, would provide more
dynamic range and deep shadow noise reduction.

However, it can be shown by measurement that in deep shadows the
analogue noise of the 14-bit Nikon scanners is very low indeed, such
that the total is barely greater than the quantisation noise itself.
This is, obviously, not the case as the light level increases and photon
noise (due just to the random emission and arrival of individual photons
at the CCD) becomes the dominant noise source.
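
A hedged numeric sketch of that comparison (q/sqrt(12) is the standard
deviation of an ideal uniform quantiser; the analogue figure is a
placeholder, not a measured Nikon value):

    import math

    quant_noise = 1 / math.sqrt(12)  # ~0.29 LSB for an ideal ADC
    analogue_noise = 0.2             # placeholder value, in LSBs

    total = math.sqrt(quant_noise**2 + analogue_noise**2)  # quadrature sum
    dr_bits = math.log2(2**14 / total)  # full scale over the noise floor
    print(round(total, 3), round(dr_bits, 1))
    # Try analogue_noise = 2.0: the floor rises and dr_bits falls well
    # below 14, which is the case Kennedy describes above.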
 
Anoni Moose

Thanks for the info, I appreciate it!

Mike

 
Don

Only your theory predicted that there would be no noise at all - and at
the lower ADC counts the noise is not photon driven at all, but pretty
close to being quantisation noise on the LS-4000 & LS-50. Photon noise
is only significant when sufficient photons arrive in the integration
period that their square root gives rise to a noise signal which exceeds
the quantisation and CCD readout noise. At low levels, such as deep in
the shadows of dense media, photon noise is insignificant.

The point is, 14-bits covers the 3.4 dynamic range of Kodachromes -
and then some - 2.7 bits more if memory and math serve. Now then, the
rule-of-thumb is to allow 1.5 bits for noise so, in theory, 14-bits
should be more than enough to scan without noise. And yet it isn't.
Judging by my empirical tests, I'd need an 18 or even a 20-bit scanner
to get all of the image data without noise.
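
Checking that arithmetic (a film density D corresponds to a contrast of
10^D, i.e. D * log2(10) bits; the variable names are mine):

    import math

    density = 3.4
    bits_needed = density * math.log2(10)  # ~11.3 bits to span density 3.4
    print(bits_needed, 14 - bits_needed)   # leaves ~2.7 bits, as recalled above
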
No, just not much more than 2.

You once stated that you multiscan slides with 2x as standard (going
to more only when/if needed) so 2x multi-pass multiscanning and
blending with Photoshop afterwards is at least as good as your
standard workflow with a single-pass multiscan, correct?

If yes, we can eliminate all the discussions about > 2x multi-pass
multiscanning and focus only on 2x because you concur there is no
difference between 2x single-pass and 2x multi-pass scans, right?

Now then, when I compare a 2x multiscan of a not excessively dark
slide (i.e. one not requiring more than 2x) to a properly boosted
shadows scan of the same slide, the latter scan still reveals more
detail.
We do know that it has at least 16-bit accuracy because of the test that
I suggested you repeat to convince yourself. ....
No, we don't know that NikonScan4 is working with 16-bits internally, it
might be working with 57 and three quarter bits for all we know, but it
has to round the result to the 16-bit output. We do know that it is *at
least* 16 bits internally though.

(Just because they output 16-bits says nothing about what they use
internally e.g. Photoshop also "outputs" 16-bits. But I'm just
nitpicking here...)

If we go back to what you said above then that's still 2 bits short:
"you need at least 18bits of accumulation for that to work".

So, are you in effect saying that purported Nikon claims of 16x
multiscanning are actually misleading because NikonScan's (or
firmware's?) accumulation accuracy *may* be insufficient?

This is *not* a confrontational question, but - as always - a genuine
one.

(Also - and off the top of my head, caveats apply - there is more than
one way to average out the scans. For example, averaging out in twos -
or whatever is below the accuracy threshold. True, it would propagate
rouding errors and not be as accurate as proper averaging but it may
improve on truncation. Again, I'm just nitpicking...)
Only the difference between the two approaches would be subtle if you
did the basic arithmetic correctly.

I did suggest writing your own software to implement the arithmetic to
the necessary precision, and I still believe that is the only way you
will make any headway in this at all.

The problem is I don't think it's really worth the effort since
performing a second scan at +2-3 AG - or whatever is necessary - gives
me superior results much faster. Once I have the time I'd love to get
back and write my own software, but that's not likely any time soon.

One last question, and the key question, really: Does scanning a
second time to pull out the detail in shadows (i.e. at +2-3 AG)
reveal more detail than multiscanning?

(Of course, I'm *not* talking about conventional contrast masking or
contrast blending both of which have problems with gradients, but
comparing a multi-pass multiscan to an appropriately adjusted shadows
scan.)

Even judging by the 2x case above, I'm convinced it does.

Don.
 
Kennedy McEwen

Don said:
On Tue, 3 Aug 2004 20:44:04 +0100, Kennedy McEwen

The point is, 14-bits covers the 3.4 dynamic range of Kodachromes -
and then some - 2.7 bits more if memory and math serves.

Now that's what happens when you take rules of thumb to extremes. Where
did you get the idea that Kodakchrome had a total dynamic range of only
3.4? Here is the characteristic curve for one type of Kodachrome, the
others don't change much and the same principles hold true:
http://www.kodak.com/global/en/professional/support/techPubs/e55/f002_0486ac.gif

Examination will show that the oft misquoted dynamic range of 3.4 is
actually only approximately the linear range. This chart, for example,
shows densities reaching well up to 3.8. More importantly though, this
is not the entire density range of the film - the chart only extends to
an exposure of 1/100 lux-seconds. Really deep shadows could well be far
lower exposure than that and, whilst well beyond the film's reciprocity
failure knee, will still record an image, albeit exceedingly dense and
with reduced contrast compared to the original scene.

In short, this curve merely indicates that Kodak are prepared to
guarantee the film will produce at least a density of 3.8. In practice
it could be even higher. It could, and in many cases is, even lower
though because, as Henry Wilhelm demonstrated, Kodachrome has a rather
nasty character of fading under even limited projection lighting and, in
this regard is even more fugitive than Ektachrome, which is quite
resilient to light fade, but fades with age instead.
Now then, the
rule-of-thumb is to allow 1.5 bits for noise so, in theory, 14-bits
should be more than enough to scan without noise. And yet it isn't.

Because a more appropriate rule of thumb applies:
"Rules of thumb never apply to extremes."
Judging by my empirical tests, I'd need an 18 or even a 20-bit scanner
to get all of the image data without noise.
I doubt it needs quite that much, but it certainly requires more than
true 14-bits to get everything off the film. Why do you think Nikon
introduced the LS-5000?
You once stated that you multiscan slides with 2x as standard (going
to more only when/if needed) so 2x multi-pass multiscanning and
blending with Photoshop afterwards is at least as good as your
standard workflow with a single-pass multiscan, correct?
That is certainly the case, but I don't have many slides with the
densities that you appear to be trying to cope with - and when I do, I
can bump up the multiscan as stated.
If yes, we can eliminate all the discussions about > 2x multi-pass
multiscanning and focus only on 2x because you concur there is no
difference between 2x single-pass and 2x multi-pass scans, right?
Nope - you ignored the "if/when needed". This isn't my workflow and
material that we are having problems with Don, its yours. ;-)
Now then, when I compare a 2x multiscan of a not excessively dark
slide (i.e. one not requiring more than 2x) to a properly boosted
shadows scan of the same slide, the latter scan still reveals more
detail.
Of course it does and it is basic physics why that is so - I will
leave it as a simple exercise for you to work out why. However that is
hardly the issue here, since we are not discussing whether one approach
resolves your particular problem better than another, but whether you
have actually managed to implement multiscanning correctly in Photoshop
as you claim. The evidence to date is that you haven't, which is hardly
surprising, because it is, in fact, impossible.
(Just because they output 16-bits says nothing about what they use
internally e.g. Photoshop also "outputs" 16-bits. But I'm just
nitpicking here...)
Indeed, but testing of the output indicates that they are not using any
less than that internally - a test that Photoshop fails because it is a
bit too short (pun intentional).
If we go back to what you said above then that's still 2 bits short:
"you need at least 18bits of accumulation for that to work".
No, because Nikonscan only needs to handle the output of the hardware
without restricting it, which 16-bits does admirably on the LS-4000.
So, are you in effect saying that purported Nikon claims of 16x
multiscanning are actually misleading because NikonScan's (or
firmware's?) accumulation accuracy *may* be insufficient?
Not at all, in fact I have prepared a scan sequence that I can send to
you, if you wish, which demonstrates that 16x multiscan in the LS-4000
provides exactly the increase in Dmax that would be expected of it -
right down to the 16-bit level. This of course means that I have had to
pre-stretch the contrast of the multiscanned image just so that
Photoshop can actually allow you to further adjust the levels into a
visible region.
This is *not* a confrontational question, but - as always - a genuine
one.

(Also - and off the top of my head, caveats apply - there is more than
one way to average out the scans. For example, averaging out in twos -
or whatever is below the accuracy threshold. True, it would propagate
rouding errors and not be as accurate as proper averaging but it may
improve on truncation. Again, I'm just nitpicking...)
Nope, it doesn't wash - once you have run out of bits, you have run out
of bits and there are no shortcuts other than getting more bits in there to do
the job.
The problem is I don't think it's really worth the effort since
performing a second scan at +2-3 AG - or whatever is necessary - gives
me superior results much faster. Once I have the time I'd love to get
back and write my own software, but that's not likely any time soon.
Superior only inasmuch as it partially resolves one specific problem you
have at the moment. It does nothing, for example, to reduce the noise
on the higher levels. Nor does it stretch the Dmax of the scanner into
the region that you claim you actually need.
One last question, and the key question, really: Does scanning a
second to time to pull out the detail in shadows (i.e. at +2-3 AG)
reveal more detail than multiscanning?
Not at that level, though it probably does at 3-4 AG. However, even
then, multiscanning with full 16-bit resolution together with your max AG
adjustment and merging into the also multiscanned primary scan will
produce even better Dmax.
(Of course, I'm *not* talking about conventional contrast masking or
contrast blending both of which have problems with gradients, but
comparing a multi-pass multiscan to an appropriately adjusted shadows
scan.)

Even judging by the 2x case above, I'm convinced it does.
Even *if* it did, you would never know for sure because Photoshop just
won't allow you to implement the multiscan arithmetic with sufficient
precision. Consequently, you have based your conviction on a flawed
premise.
 
Don

Sorry for the delay in replying but it's just too hot... I can't even
scan because the scanner needs to recalibrate even between two scans
(highlights and shadows).

So, I went for a bike ride yesterday... :)
Now that's what happens when you take rules of thumb to extremes. Where
did you get the idea that Kodakchrome had a total dynamic range of only
3.4?

Right here, back when I was trying to figure out how much I need to
boost the shadow's scan on my LS-30 to cover the full dynamic range.

I guess you missed that thread, which is too bad because it would have
saved me some time. Although, it would've probably also made me think
twice about getting the LS-50 which is not good because I'm glad I did
get it!
Here is the characteristic curve for one type of Kodachrome, the
others don't change much and the same principles hold true:
http://www.kodak.com/global/en/professional/support/techPubs/e55/f002_0486ac.gif

Examination will show that the oft misquoted dynamic range of 3.4 is
actually only approximately the linear range. This chart, for example,
shows densities reaching well up to 3.8. More importantly though, this
is not the entire density range of the film - the chart only extends to
an exposure of 1/100 lux-seconds. Really deep shadows could well be far
lower exposure than that and, whilst well beyond the film's reciprocity
failure knee, will still record an image, albeit exceedingly dense and
with reduced contrast compared to the original scene.

That explains a lot! For one, my having to scan almost every slide
twice - at least for now. We'll see what happens later as I get into
different batches (I scan chronologically).

It also raises a few other questions, for example, it means that
pretty much no scanner on the market can truly cover the full dynamic
range to the extent a twin-scan can (once for highlights and once for
shadows)!?
That is certainly the case, but I don't have many slides with the
densities that you appear to be trying to cope with - and when I do, I
can bump up the multiscan as stated.

Nope - you ignored the "if/when needed". This isn't my workflow and
material that we are having problems with Don, its yours. ;-)

No, no, I understand that. I meant that since anything above 2x gets
us into Photoshop shortcomings, by sticking to 2x we are at least
working from the same (or at least comparable) baseline.

Actually, probably not exactly the same since sub-pixel shifting
introduces some blurring. We haven't really addressed this blurring
but it's been a nagging thought in the back of my mind since this
blurring is bound to have an effect. On the one hand, it may be
negative because it "diffuses" the pixel which needs to be corrected
by multisampling, perhaps requiring more scans to achieve the same
effect as a single-pass multiscan; but on the other hand, blurring has
a slight positive effect by masking noise - even though that's not
really a solution, but only masking of the problem.
Of course it does and it is basic physics why that is so - I will
leave it as a simple exercise for you to work out why. However that is
hardly the issue here, since we are not discussing whether one approach
resolves your particular problem better than another, but whether you
have actually managed to implement multiscanning correctly in Photoshop
as you claim. The evidence to date is that you haven't, which is hardly
surprising, because it is, in fact, impossible.

Actually, that's probably the cause of the misunderstanding. My
primary concern was, indeed, which method produces better results.
Even though my multi-pass multi-scanning tests may have been
imperfect due to Photoshop's limitations, they were still good enough
to indicate that multiscanning will not produce the same quality in
shadows as a dedicated shadow's scan (even allowing that my multi-pass
is not the best multiscanning can achieve).

However, I'm glad we tackled this because I learned something new
again regarding the true nature of Kodachromes!
Superior only inasmuch as it partially resolves one specific problem you
have at the moment. It does nothing, for example, to reduce the noise
on the higher levels. Nor does it stretch the Dmax of the scanner into
the region that you claim you actually need.

I'm not concerned with noise on the higher levels because it is not
visible or, let's put it this way, it's not as objectionable as the
noise in dark areas. I'm sure I'll change my tune when the time comes
to wrestle with the negatives... ;-)

But I don't understand why you say it doesn't extend the Dmax of the
scanner? I mean, it doesn't extend it literally of course, but
virtually a twin scan should produce an image comparable to a scanner
with a higher Dmax. Two caveats, the images have to be properly
"synchronized" (very important!) so different areas of dynamic range
are not merged together (something which all "contrast blending"
methodologies I've seen do with the shortcoming of a noticeable
transition in gradient areas) and the second caveat is the slight
blurring due to sub-pixel shifting which is probably neither here nor
there, because such very slight blurring also hides grain to some
extent, which is a plus.

Don.
 
Kennedy McEwen

Don said:
On Fri, 6 Aug 2004 16:30:32 +0100, Kennedy McEwen

Sorry for the delay in replying but it's just too hot... I can't even
scan because the scanner needs to recalibrate even between two scans
(highlights and shadows).

So, I went for a bike ride yesterday... :)
No probs - it's pretty hot here too. Far too hot to get on with the
garden landscaping project which has consumed most of my dry weekends
this summer. So I just sat in the garden and admired my previous
handiwork through a nice cool beer. ;-) But now it's too hot for even
that, so I have retreated indoors! As they say, if you don't like the
weather in England, just give it a couple of hours.
Right here, back when I was trying to figure out how much I need to
boost the shadow's scan on my LS-30 to cover the full dynamic range.

I guess you missed that thread, which is too bad because it would have
saved me some time.

Pity, because I tend to refer to data where I can, and whilst real
emulsion data from Kodak is somewhat limited this chart has been widely
reproduced. The other thing to notice is the range of light levels that
the plotted range covers - a contrast of only a few hundred. In
practice you will get much greater contrast between highlights and deep
shadows in many short range scenes, so a Dmax of more than 3.8 on fresh
unprojected Kodachrome emulsion is quite realistic.
That explains a lot! For one, my having to scan almost every slide
twice - at least for now. We'll see what happens later as I get into
different batches (I scan chronologically).

It also raises a few other questions, for example, it means that
pretty much no scanner on the market can truly cover the full dynamic
range to the extent a twin-scan can (once for highlights and once for
shadows)!?
Not necessarily, as I said, I don't think the situation is quite as bad
as you paint it with your 18-20-bit requirement. There is a
fundamental limit for the Dmax that unexposed Kodachrome will produce.
Unfortunately Kodak do not publish it, and whilst it is more than the
LS-50 will achieve, I doubt it is that much beyond its capabilities -
perhaps 4.5-5. That should be achievable with multiscanning on a 16-bit
device like the LS-5000 or the Minolta 5400. Of course you then need to
apply some shadow lift to bring that into a region that can be
represented in a 16-bit file format, let alone do what you want: present it
on a display with 8-bit graphics.
I meant that since anything above 2x gets
us into Photoshop shortcomings, by sticking to 2x we are at least
working from the same (or at least comparable) baseline.

Actually, probably not exactly the same since sub-pixel shifting
introduces some blurring. We haven't really addressed this blurring
but it's been a nagging thought in the back of my mind since this
blurring is bound to have an effect. On the one hand, it may be
negative because it "diffuses" the pixel which needs to be corrected
by multisampling, perhaps requiring more scans to achieve the same
effect as a single-pass multiscan; but on the other hand, blurring has
a slight positive effect by masking noise - even though that's not
really a solution, but only masking of the problem.
Well blurring doesn't actually mask the noise though. Multiscanning
works on the principle that you are accumulating signal and noise
together. The signal is coherent and adds linearly, the noise is
incoherent (effectively random in each pixel) and thus adds in
quadrature (square root of sum of squares). This is a gross
oversimplification because it assumes that the noise has a white
temporal spectrum, which is not always true. Neglecting that minor
issue for the moment though, ideal multiscanning gives a signal to noise
improvement of approximately the square root of the number of frames.
Noise isn't actually reduced by multiscanning, it is only reduced in
proportion to the signal; the overall scaling of the end result to
normalise the signal is where the noise reduction comes from.

However if the frames do not align perfectly then the signal (ie. the
actual image content) is not being added coherently, so the sum is
actually less than would be expected, particularly in the finest
details. Meanwhile the noise is unaffected and just adds in quadrature
as normal. Consequently, if blurring occurs it means that the
improvement in signal to noise must actually be less than what would be
anticipated in the ideal case.

Again, you might be able to see from this why you get a very similar
result - with the same 15-bit limitation - to your multiscanning efforts
as you do with a 0.3pixel gaussian blur.
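
A small simulation of that quadrature argument (a synthetic constant
signal with unit white noise; purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.full(100_000, 10.0)  # flat "image" at level 10

    for n in (1, 2, 4, 8, 16):
        frames = signal + rng.normal(0.0, 1.0, (n, signal.size))
        averaged = frames.mean(axis=0)  # ideal, perfectly aligned multiscan
        print(n, averaged.std())        # noise falls as ~1/sqrt(n)
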
I'm not concerned with noise on the higher levels because it is not
visible or, let's put it this way, it's not as objectionable as the
noise in dark areas.

Scanning is like peeling onions, in many respects. Not only can it
create a lot of tears, but as soon as you have removed one layer of
problems there is another, almost identical, one just underneath. Once
you have resolved your shadow noise, the general noise level will be
just as evident as the shadow noise now appears to be.
I'm sure I'll change my tune when the time comes
to wrestle with the negatives... ;-)
That is another story completely - but at least black in a negative is
fairly well defined. Nothing can be blacker than the base mask colour.
But I don't understand why you say it doesn't extend the Dmax of the
scanner?

I said it doesn't extend it as far as you claim you actually need. It
certainly does extend the Dmax, but 2-3EV only gives the equivalent of
taking the shadow depth down by 2-3 bits, which is 16-17 bits, or 1-4
bits short of the 18-20 that you claim you need. As I said, I don't
believe you
do actually need this, but if you do then you ain't going to get it that
way either.
 
Don

No probs - it's pretty hot here too. Far too hot to get on with the
garden landscaping project which has consumed most of my dry weekends
this summer. So I just sat in the garden and admired my previous
handiwork through a nice cool beer. ;-)

Oh, lovely! :)

I'm limited to a couple of window boxes and a bunch of potted plants.
One of my (part-time) hobbies is collecting wild flower seeds (on the
aforementioned bike trips) and then planting those. I also have a
selection of cacti (also grown from seed, but this time purchased)
because cacti can survive on their own when I'm on the road for a few
days.
Not necessarily, as I said, I don't think the situation is quite as bad
as you paint it with your 18-20-bit requirement. There is a
fundamental limit for the Dmax that unexposed Kodachrome will produce.
Unfortunately Kodak do not publish it, and whilst it is more than the
LS-50 will achieve, I doubt it is that much beyond its capabilities -
perhaps 4.5-5. That should be achievable with multiscanning on a 16-bit
device like the LS-5000 or the Minolta 5400. Of course you then need to
apply some shadow lift to bring that into a region that can be
represented in a 16-bit file format,
....

That "histogram synchronization", as I call it, which I use when
combining the two scans (highlights and shadows) was a big revelation.
I wish this was more publicized because I had to "invent" it myself!
On reflection, I should've figured it out much sooner as it's so
blatantly obvious. Once exposure is boosted the two scans are no
longer "in sync" and - somewhat counterintuitively - the shadows scan
has to be darkened in order to bring it into the same region as the
highlights scan. It's more complicated than that as I first have to
identify the point on the histogram where I want the two scans to
"meet" and work from that. Anyway, I'm surprised more is not written
about this (or at least I couldn't find it). Instead, all I heard/read
was "contrast masking doesn't work for gradients".

....
let alone do what you want: present it
on a display with 8-bit graphics.

That's something I constantly have to remind myself of when I go off
and try to squeeze out every last bit (sic).
Well blurring doesn't actually mask the noise though.

I meant "mask" in the sense that it spreads it around and makes it a
bit more difficult to spot. I suppose, conceptually, this is similar
to anti-aliasing, which I contemptuously call "pro-blurring" ;-) as it
"masks" the jaggies by making *everything* fuzzy... :-/

Blurring noise is not as objectionable as anti-aliasing (well, to me,
anyway) because there are other artifacts competing for distortion
e.g. grain or film curvature resulting in softer focus, etc.
However if the frames do not align perfectly then the signal (ie. the
actual image content) is not being added coherently, so the sum is
actually less than would be expected, particularly in the finest
details. Meanwhile the noise is unaffected and just adds in quadrature
as normal. Consequently, if blurring occurs it means that the
improvement in signal to noise must actually be less than what would be
anticipated in the ideal case.

That's what I figured, which is why I instinctively thought that in
case of blurring (as in sub-pixel shifts) the number of samples will
have to be increased to achieve the same effect as in perfectly
aligned multi-scans without sub-pixel blurring.

That's another reason why I went the twin-scan route. Although I do
sub-pixel shifts there as well I always adjust the shadows scan
because a slight blurring there is less noticeable. To be fair, this
blurring is very slight and I can only see it at large magnification
and then only when flipping between the before and after images.
Again, you might be able to see from this why you get a very similar
result - with the same 15-bit limitation - to your multiscanning efforts
as you do with a 0.3pixel gaussian blur.

Applying blur was really just a lateral thought I tried on the fly. I
didn't actually do the math - because by then I already decided to go
the twin-scan route - but it's good to know that my empirical results
(0.3 Gaussian blur) do coincide with theory, because it means I'm
doing things "properly".
Scanning is like peeling onions, in many respects.

How true!!!! That is something they should print on T-shirts! :)
Not only can it
create a lot of tears, but as soon as you have removed one layer of
problems there is another, almost identical, one just underneath.

Bingo! One example, while the LS-50 solves one problem - e.g.
resolution - the flip side is that at the same time it reveals another
- pepper spots - which I never had to worry about on the LS-30!
Once
you have resolved your shadow noise, the general noise level will be
just as evident as the shadow noise now appears to be.

I found this quite a shock (together with grain) after my very first
scan. By now, I seem to have been somewhat de-sensitized. However, my
objection to noise in shadows is that when boosting shadows it seems
to "jump out" much more than noise in bright areas which seems to
"hide" in among the grain. But that's all very subjective...
I said it doesn't extend it as far as you claim you actually need. It
certainly does extend the Dmax, but 2-3EV only gives the equivalent of
taking the shadow depth down by 2-3 bits, which is 16-17 bits, or 1-4
bits short of the 18-20 that you claim you need. As I said, I don't
believe you
do actually need this, but if you do then you ain't going to get it that
way either.

Ah, OK, I see what you mean. I should have qualified the +2-3 EV with
"at least so far". Just before the heatwave hit, I would've actually
said +2 EV but the last couple of scans were really, really dark so I
had to re-scan with +2.5 and +3 EV. In that case I chose +3 EV but I
might have even gone for +3.5, so it's really open ended depending on
the slide. Fortunately, those are an exception, although I seem to
have quite a few - at least in the early batches.

And now, back to sipping beer and admiring plantlife! :)

Don.
 
Kennedy McEwen

Don said:
On Sat, 7 Aug 2004 18:47:02 +0100, Kennedy McEwen

That "histogram synchronization", as I call it, which I use when
combining the two scans (highlights and shadows) was a big revelation.
I wish this was more publicized because I had to "invent" it myself!
On reflection, I should've figured it out much sooner as it's so
blatantly obvious. Once exposure is boosted the two scans are no
longer "in sync" and - somewhat counterintuitively - the shadows scan
has to be darkened in order to bring it into the same region as the
highlights scan. It's more complicated than that as I first have to
identify the point on the histogram where I want the two scans to
"meet" and work from that. Anyway, I'm surprised more is not written
about this (or at least I couldn't find it). Instead, all I heard/read
was "contrast masking doesn't work for gradients".
I would have thought that, after rescaling the shadow scan into the
correct range for the primary scan, you would be better to use a density
profiled mix to merge the two channels rather than a simple mask.
Obviously the mix would be 100% shadow scan for everything below 14-bits
and 100% primary scan for everything above the saturation limit of the
shadow scan, with some transition percentage in between. At least this
would simulate a non-linearity in the extended dynamic range, rather
than the discontinuity that a simple mask would produce.
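
A minimal sketch of that profiled mix (numpy; the thresholds and the
linear ramp are illustrative assumptions - only the shape of the
weighting comes from the description above):

    import numpy as np

    def profiled_merge(primary, shadow_rescaled, lo=16384.0, hi=49152.0):
        # 100% shadow scan below lo, 100% primary above hi, and a linear
        # crossfade in between (values in 16-bit counts; lo/hi illustrative).
        w = np.clip((primary - lo) / (hi - lo), 0.0, 1.0)  # 0=shadow, 1=primary
        return w * primary + (1.0 - w) * shadow_rescaled
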
I meant "mask" in the sense that it spreads it around and makes it a
bit more difficult to spot.

That's what I thought you meant - fairly obviously, the only thing that
can get spread out is the signal. Noise is something that is unique to
each sensor.
I suppose, conceptually, this is similar
to anti-aliasing, which I contemptuously call "pro-blurring" ;-) as it
"masks" the jaggies by making *everything* fuzzy... :-/
Only by enough to ensure that what is detected does not have any more
resolution than the sampling density can support. Any more than that
and you never know what you have in the scanned image.

The ideal anti-alias filter would be flat in frequency response up to
the limit of the sampling density and then zero above that. Difficult
to achieve this optically, so most anti-alias filters sacrifice a little
resolution in the pass band of the sampling density or permit some limited
level of aliasing, or both. The Minolta grain dissolver (originally the
Scanhancer - no idea if Minolta stole it or licensed it, but they don't
appear to credit it!) falls into the latter category.
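
In symbols, the ideal response described above, with f_s the sampling
frequency:

$$H(f) = \begin{cases} 1, & |f| \le f_s/2 \\ 0, & |f| > f_s/2 \end{cases}$$
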
Blurring noise is not as objectionable as anti-aliasing (well, to me,
anyway) because there are other artifacts competing for distortion
e.g. grain or film curvature resulting in softer focus, etc.
But that is no different from anti-aliasing.
 
Howard

Kennedy, a final issue regarding this subject occurred to me --

As noted earlier, I used to scan at the highest resolution which was
an integer divisor into the scanner's maximum optical resolution that
would give me the largest-size image that I required, allowing
Photoshop image resolution to "fall where it may", so long as it did
not fall below 240 ppi for an Epson printer.

Based on the updated information you provided, your greater concern
today is to avoid the printer driver's resampling to its native 720
dpi resolution using its "nearest neighbor" or "bilinear" interpolation
scheme, whereas if undertaken in Photoshop, resampling to 720 dpi
occurs via the superior "bicubic" method. And that's the reason to
resample in Photoshop, rather than allowing the final Photoshop
resolution to "fall where it may".

Now the new question:

Although I've never before resampled any image in Photoshop, it seems
that all agree that, other things being equal, downsampling
(discarding pixels) is less degrading to the image than upsampling
(inventing pixels).

If one's final image resolution in Photoshop -- before resampling --
were greater than 720 dpi, the choice would be easy: downsample to
720dpi.

But what if one's final image resolution in Photoshop -- before
resampling -- were, say, between 360 dpi and 720 dpi? Would it be
better to upsample in Photoshop to 720 dpi (thus no printer resampling
at all), or to downsample in Photoshop to 360 dpi (where printer
upsampling to 720 dpi results in relatively few artifacts and
anomalies, since 360 divides evenly into 720)?

Of course the same question should be asked where the final image
resolution (before resampling) falls between 240 dpi and 360 dpi --
whether, in Photoshop, to downsample to 240, or to upsample to 360?

As always, thanks for your insight.

Howard
 
K

Kennedy McEwen

Howard said:
Although I've never before resampled any image in Photoshop, it seems
that all agree that, other things being equal, downsampling
(discarding pixels) is less degrading to the image than upsampling
(inventing pixels).
Everything in life is relative, Howard, but you need to be careful what
things are relative to. ;-)

For instance, that comparison is generally made relative to the final
result rather than the original image.

It's the old biblical parable of the widow's mite in Luke 21, verses 1-4.
It is easier to give someone $100 if you have $5000 than it is if you
only have $5. For the person receiving the gift, they obviously get
more from the rich person than from the poor person, so relative to the
receiver, the rich person appears to be more benevolent than the poor
person. But for the donors, the rich person is only giving 2% of their
wealth, whilst the poor person is giving 100% of theirs. So relative to
the original donors, the poor person is being much more benevolent than
the rich one.

Similarly, an image that has been downsampled by discarding information
to use only a percentage of the original, generally looks better than an
upsampled image, where pixels have been interpolated to create enough
pixels in the final result. That is because the upsampled image had
less information to begin with, not because interpolation is a deficient
process.

However, relative to the original images, the upsampled image fares a
lot better than the downsampled image, since no information has been
lost from it, assuming a good interpolation algorithm has been used. An
image which has been upsampled, even by nearest neighbour interpolation,
*always* contains more of the original content than an image which has
been downsampled by the same proportion.
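
That asymmetry can be demonstrated directly. A minimal sketch on a
1-D row of pixels, using nearest neighbour in both directions:

    import numpy as np

    orig = np.arange(8)                   # stand-in for a row of pixels
    up   = np.repeat(orig, 2)             # 2x nearest-neighbour upsample
    down = orig[::2]                      # 2x downsample: discard pixels

    print(np.array_equal(up[::2], orig))  # True - the original is fully
                                          # recoverable from the upsample
    print(down)                           # [0 2 4 6] - half the original
                                          # samples are gone for good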

Perhaps scanning is a social process after all! ;-)

In your case, and in most situations, I would suggest that you want to
retain as much of the original image as possible, consequently
downsampling of that original information is generally worse than
upsampling.
If one's final image resolution in Photoshop -- before resampling --
were greater than 720 dpi, the choice would be easy: downsample to
720dpi.

But what if one's final image resolution in Photoshop -- before
resampling -- were, say, between 360 dpi and 720 dpi? Would it be
better to upsample in Photoshop to 720 dpi (thus no printer resampling
at all), or to downsample in Photoshop to 360 dpi (where printer
upsampling to 720 dpi results in relatively few artifacts and
anomalies, since 360 divides evenly into 720)?
Upsample if your system can support the additional data produced. That
way you retain as much of the original image information as possible.
The stochastic dot placement of the Epson printer driver will take
advantage of the similarity between adjacent interpolated pixels to
achieve optimum tonal fidelity, whilst having the capability to
reproduce the full resolution of the original. In practical terms, this
particular choice doesn't make much difference if you are viewing the
resulting image unaided, because both 360ppi and 720ppi are so far above
the resolution of the eye at normal viewing distances - but it will if
you use a magnifying aid to view the print.
Of course the same question should be asked where the final image
resolution (before resampling) falls between 240 dpi and 360 dpi --
whether, in Photoshop, to downsample to 240, or to upsample to 360?
Here you are more likely to see a difference between upsampling and
downsampling an original - especially if you view the detail closely
with good eyesight. Similarly you will see the difference even more
clearly if your original scaled resolution lies between 144 and 180ppi.
As above, upsampling is better.

Follow the principle that nothing justifies throwing useful information
away and you can't go wrong.

That is why the only situation where downsampling is a clear decision is
the case where the scan results in more than 720ppi. Since the printer
cannot handle any more than that, the extra resolution is useless
information and must be discarded. All you are doing by downsampling in
Photoshop using bicubic interpolation is discarding the excess
information in a more controlled manner than the printer algorithm will
use.
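
Putting the whole recommendation in one place, a sketch (720 ppi being
the Epson driver's native resolution discussed above):

    def resample_plan(scaled_ppi, native=720):
        # Discard information only when the printer cannot use it
        # (i.e. above the driver's native resolution); otherwise
        # upsample so nothing useful is thrown away.
        if scaled_ppi > native:
            return ("downsample", native)
        if scaled_ppi < native:
            return ("upsample", native)
        return ("none", native)

    print(resample_plan(950))   # ('downsample', 720)
    print(resample_plan(520))   # ('upsample', 720)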
As always, thanks for your insight.
You are welcome.
 
D

Don

That "histogram synchronization", as I call it, which I use when
I would have thought that, after rescaling the shadow scan into the
correct range for the primary scan, you would be better to use a density
profiled mix to merge the two channels rather than a simple mask.

Yes, that was my first thought too but without proper tools it was
just too difficult to do. For example, how do I locate the cutoff
point in absolute terms? I mean, how do I translate the 14-bit cutoff
into a place on the histogram?

In order to determine the 14-bit cutoff reliably I would have to use
manual exposure throughout which is very time consuming. Unlike LS-30,
where I used manual exposure throughout, with LS-50 I use AutoExposure
for the nominal scan (because of the wider dynamic range I don't have to
"fight" the histogram at both ends) and then modify the AnalogGain
empirically.
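
Assuming the CCD response is linear and the AnalogGain boost is
expressed in EV (both assumptions on my part), the translation itself
is only a power-of-two scale:

    def to_primary_domain(shadow_level, gain_ev):
        # map a level recorded in the boosted shadow scan back into the
        # nominal scan's domain (assumes a linear sensor: +1 EV doubles
        # the recorded level)
        return shadow_level / 2 ** gain_ev

    # e.g. with a +2 EV boost the shadow scan clips (hits 65535) at what
    # is only about 16384 - 14-bit full scale - in the nominal domain:
    print(to_primary_domain(65535, 2))

The hard part, as noted, is knowing what gain was actually applied
when AutoExposure chose it for you.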

Switching between AutoExposure and manual exposure is even more time
consuming because of NikonScan's crankiness when it comes to turning
AutoExposure off. As I mentioned in an earlier message (then with NS3
but exactly the same problem occurs with NS4) I need to restart both
NikonScan *and* turn off the scanner before the change "takes".
Otherwise, I get totally random and spurious exposures (again, I
documented all that, in detail, in an earlier message).

What would really be *very* useful is if NikonScan displayed which
AnalogGain values were used for AutoExposure! Now, that would really
be good!
Obviously the mix would be 100% shadow scan for everything below 14-bits
and 100% primary scan for everything above the saturation limit of the
shadow scan, with some transition percentage in between. At least this
would simulate a non-linearity in the extended dynamic range, rather
than the discontinuity that a simple mask would produce.

I don't get much discontinuity after adjusting the shadows scan. As
the last step I may apply 3 pixel Gaussian Blur to the threshold mask
(the transition percentage equivalent) but - so far - they seem to
blend perfectly.

Maybe I ought to explain what I do in more detail?

First of all, since Photoshop's Threshold is based on luminance, it
uses weighted values for the three channels based on perceptual
sensitivity (roughly 30% red, 60% green, 10% blue). So, I have to
create my own "true RGB threshold" by applying it to the three
channels individually and then mixing the results manually to get the
mask.

I have empirically (a blunt instrument, I know...) determined that (so
far) the rule of thumb seems to be that anything below about 32 has
noise. Therefore, that's my cutoff.
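
A minimal numpy sketch of that "true RGB threshold", assuming 8-bit
channel values and the empirical cutoff of 32 (how the three
per-channel results are mixed into one mask is my assumption - here a
pixel counts as shadow only if all three channels are below the
cutoff):

    import numpy as np

    def shadow_mask(img, cutoff=32):
        # img: H x W x 3 array of channel values
        # True where every channel falls below the noise cutoff
        return np.all(img < cutoff, axis=-1)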

I then use this mask to get the mean values for that area of the image
in the nominal scan as well as the shadows scan. After that I just
apply a curve (using the two sets of means) to the shadows scan. This
"converts" the shadows scan to the nominal scan "domain". I know, it's
very blunt, but since the range is relatively narrow, so far, the
results are quite amazing.
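
As I read it, the single-point "curve" amounts to scaling the shadow
scan so that its mean under the mask matches the nominal scan's mean.
A sketch, ignoring gamma and working on float arrays (the simple
linear scale through zero is my reading - the actual Photoshop curve
may behave differently):

    import numpy as np

    def match_shadow_to_nominal(nominal, shadow, mask):
        # mask: boolean array from the threshold step above
        m_nom = nominal[mask].mean()
        m_sha = shadow[mask].mean()
        return shadow * (m_nom / m_sha)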

If I really wanted to (and I may do so in the future) I can make this
curve into a much finer tool by getting the mean values for a number
of "bands" (for example, 32 to get a curve point for each notch on the
threshold - although in Photoshop reality each of those is really a
container for 256 individual 16-bit notches). I ran some tests and saw
no noticeable improvements to the curve so, for expedience, I only use
a single point.
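
The multi-point refinement would simply repeat that measurement per
band; a sketch, with the band edges illustrative:

    import numpy as np

    def band_means(nominal, shadow, n_bands=32, top=32):
        # one (shadow mean, nominal mean) pair per luminance band below
        # the cutoff - each pair becomes a candidate curve point
        edges = np.linspace(0, top, n_bands + 1)
        pts = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = (nominal >= lo) & (nominal < hi)
            if band.any():
                pts.append((shadow[band].mean(), nominal[band].mean()))
        return pts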

The last thing I was about to test before this tropical weather kicked
in, was to try and maybe have another point at either side of the
cutoff (maybe at 28 and 36, or 30 and 34) for a really seamless
transition, thereby eliminating the Gaussian Blur step completely.

Actually, the full procedure is a bit more complicated but that has to
do with the fact that I'm trying to minimize the number of edits.
However, the above gives you the gist of what I'm doing.

I realize that in this method (still a work in progress, of course)
there are some inaccuracies and even guesses (although all based on
empirical data). However, going for the absolute accuracy would get me
into diminishing returns. In other words, the extra work to achieve
perfection is just not worth it as the improvements would be
imperceptible.

But, who knows, in the end I may go back and use manual exposure
throughout because it has many other benefits... The time I save with
AE on the nominal scan, I seem to waste many times over later. I
decided on AE before I realized I would have to twin-scan, which,
sort of, managed to sneak up on me again. Growl! Darn onion
layers... ;-)
But that is no different from anti-aliasing.

Not conceptually. The difference is perceptual. An anti-aliased
image has nothing else to distract from the anti-aliasing itself, so
the blur stands out more (to me, anyway). (Slight) blurring of noise
has to fight for attention with grain, lack of focus, etc., so the end
result seems less objectionable.

Don.
 
