The Ultimate Scanner Software vs. Photoshop Question


Howard

As always, thanks Kennedy.

By the way, surely you are aware that these questions regarding scan
resolution, Photoshop resolution, printer resolution, upsampling,
downsampling, and interpolation schemes are among the most
perplexing for new, and even not-so-new, digital imagers. Advice from
Internet user groups is "all over the place"; opinions differ greatly.

I've read many books, including Photoshop manuals (HA!), "Real World
Photoshop", "Real World Scanning", "Photoshop Artistry", etc., and
NOWHERE is such information provided -- except in general form. Why
doesn't someone (that's YOU, Kennedy) write on this very subject
matter -- even if it is to be one dedicated chapter in someone
else's book.

I suspect that one reason for the variety of opinions and workflows is
that on continuous tone images (i.e., photos) of "normal" size viewed
by the naked eye, differences are not apparent. (I suspect that line
drawings having considerable diagonals would show differences in
workflow techniques more readily.)

In any event, it's now time to print some photos.

Thanks again,

Howard
 

Kennedy McEwen

Don said:
Yes, that was my first thought too but without proper tools it was
just too difficult to do. For example, how do I locate the cutoff
point in absolute terms? I mean, how do I translate the 14-bit cutoff
into a place on the histogram?
You know where that is precisely - at exactly 4 in the 16-bit range. OK,
allow a little bit for noise and you could say 5 or 6.
In order to determine the 14-bit cutoff reliably I would have to use
manual exposure throughout which is very time consuming.

No - just use auto-exposure on preview, but not on the scans themselves.
Then the exposure is only determined once per frame by Nikonscan and
everything afterwards, including analogue gain shifts, is based on that
exposure level. There is no need to switch between manual and auto
exposure.
Switching between AutoExposure and manual exposure is even more time
consuming because of NikonScan's crankiness when it comes to turning
AutoExposure off. As I mentioned in an earlier message (then with NS3
but exactly the same problem occurs with NS4) I need to restart both
NikonScan *and* turn off the scanner before the change "takes".
Otherwise, I get totally random and spurious exposures (again, I
documented all that, in detail, in an earlier message).
I know that this was a problem you were having with NS3 and the LS-30
but I didn't think it had occurred since you upgraded to the LS-50. It
certainly isn't something I have noticed in the LS-4000. I guess it
could be a comms issue, since that is a difference between the two, but
even so, it is unlikely that just one command would be problematic.
What would really be *very* useful is if NikonScan displayed which
AnalogGain values were used for AutoExposure! Now, that would really
be good!

I guess you mean actual exposure times, since analogue gain is a more
abstract concept which is relative to the last exposure. I don't know
if setting a specific exposure time is even possible - if it were then I
feel that Ed Hamrick would already have incorporated it into Vuescan, but
he also uses an exposure adjustment based on the autoexposure level,
just with a linear scale rather than an EV (ie. log) scale.
I don't get much discontinuity after adjusting the shadows scan. As
the last step I may apply 3 pixel Gaussian Blur to the threshold mask
(the transition percentage equivalent) but - so far - they seem to
blend perfectly.

Maybe I ought to explain what I do in more detail?

First of all, since Photoshop's Threshold is based on Luminance it
uses weighted values for the three channels based on perceptual
sensitivity (roughly 30% red, 60% green, 10% blue). So, I have to
create my own "true RGB threshold" by applying it to the three
channels individually and then mixing the results manually to get the
mask.
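
In numpy terms, that per-channel threshold might look something like the sketch below (how the three channel masks are "mixed" isn't specified, so a union is assumed):

import numpy as np

def rgb_threshold_mask(image, cutoff=32):
    # image: uint8 array of shape (H, W, 3).
    # Photoshop's Threshold works on luminance (~30% R, 60% G, 10% B);
    # here each channel is thresholded independently instead.
    per_channel = image < cutoff          # (H, W, 3) boolean
    # "Mix the results manually": a union is assumed here, i.e. a pixel
    # belongs to the shadow mask if *any* channel is below the cutoff.
    return per_channel.any(axis=2)        # (H, W) boolean mask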

I have empirically (a blunt instrument, I know...) determined that (so
far) the rule of thumb seems to be that anything below about 32 has
noise. Therefore, that's my cutoff.
Presumably this 32 is *after* gamma has been applied? Hence it is
likely to be either 4 or 5 prior to the application of 2.2 gamma, ie.
at the density limit of the scanner, as expected.
I then use this mask to get the mean values for that area of the image
in the nominal scan as well as the shadows scan. After that I just
apply a curve (using the two sets of means) to the shadows scan. This
"converts" the shadows scan to the nominal scan "domain". I know, it's
very blunt, but since the range is relatively narrow, so far, the
results are quite amazing.
Again, you can calculate what this curve is much more precisely - it is
just the analogue gain difference between the primary and shadow scans -
taking account of gamma of course. So it should be quite possible to
automate this with exact figures rather than empirical estimation - and
then apply a gradual transition between the two.
The last thing I was about to test before this tropical weather kicked
in

Still tropical here - just changed to tropical monsoon season now! ;-)
Not conceptually. The difference is perceptual. An anti-aliased
image has nothing to distract from the anti-aliasing, so it's more
distracting (to me, anyway). (Slight) blurring has to fight for
attention with grain, lack of focus, etc., so the end result is that it
seems less objectionable.
But both are the same thing - the only difference is the degree applied
and, since both are usually adjustable parameters, they should be
interchangeable.
 

Kennedy McEwen

Howard said:
Why
doesn't someone (that's YOU, Kennedy) write on this very subject
matter -- even if it is to be one dedicated chapter in someone
else's book.
I don't think anyone would pay to read my ramblings!

Seriously, writing a book is a lot more difficult than writing on
Usenet. Here people raise topics, ask questions, make assumptions etc.
that others can respond to - but if you are writing a book you have to
do that part yourself. So inevitably there will be many aspects about
any subject covered in a book which remain untouched, even if they turn
out to be important issues to someone else. Also, you might have
noticed that there are a lot of topics that are important to scanning
and printing that I never get involved in the detailed discussions
about, because I don't know enough about them myself and simply take
them at face value or use them as tools.
 

tomblue

Kennedy said:
I don't think anyone would pay to read my ramblings!

Seriously, writing a book is a lot more difficult than writing on
Usenet. Here people raise topics, ask questions, make assumptions etc.
that others can respond to - but if you are writing a book you have to
do that part yourself. So inevitably there will be many aspects about
any subject covered in a book which remain untouched, even if they turn
out to be important issues to someone else. Also, you might have
noticed that there are a lot of topics that are important to scanning
and printing that I never get involved in the detailed discussions
about, because I don't know enough about them myself and simply take
them at face value or use them as tools.

Authors want to make money, and try to write for as wide an audience as
possible, often at the expense of diluting the knowledge they may
actually have. For obvious reasons, they would either steer clear of
writing about specific makes and models, or would only write about the
good and never the bad.

Usenet and the Internet offer free information flow. The problem is trying
to filter the 0.000000001% of the signal from the noise.
 

Don

You know where that is precisely - at exactly 4 in the 16-bit range. OK,
allow a little bit for noise and you could say 5 or 6.

If I follow you correctly, that (4) would be about 64 on the
histogram, right?
No - just use auto-exposure on preview, but not on the scans themselves.
Then the exposure is only determined once per frame by Nikonscan and
everything afterwards, including analogue gain shifts, is based on that
exposure level. There is no need to switch between manual and auto
exposure.

Yes, I realize that subsequent AnalogGain (AG) adjustments use
AutoExposure (AE) as baseline. The problem is that I don't know where
this exposure is on the absolute scale. By "absolute scale" I'm
referring to the manual scan with AE=off, AG=0. Best explained with an
example...

Let's use the old Kodachrome (KC) adjustment. As you correctly stated
once there is no one *absolute* setting to correct for KC. However,
there is a relative "rule" which is (approximately): red needs to be
boosted by the same amount blue is cut - green stays as is (in reality
it needs a very slight boost).

Now, scanning manually with AE=off, AG=0 gets me the absolute
baseline. If the slide is bright enough and no additional Master AG
boost is needed then there is virtually no need for any KC adjustment.

However, if, for example, I need to boost Master AG=+2.0 (AE=off) in
order to get a properly exposed image, the KC correction goes up
correspondingly. I don't remember exactly (it's too hot to look it up)
but it's something like R=+1.2, B=-1.2 or thereabouts. If, on the
other hand, the image needs only Master AG=+1.0 then the KC adjustment
drops accordingly to about R=+0.6, B=-0.6. Therefore, the more (or
less) I need to boost the Master AG in order to get a proper exposure,
the more (or less) I need to boost KC adjustment (of course!).

The problem is, if I use AE then I don't know where that exposure is
on the absolute scale and so I can't calculate the amount of required
KC adjustment. (I wasted months on this, as you know, although it all
seems so elementary and pedestrian right now.)

My initial tests seem to indicate that the same thing happens when
trying to determine the exposure boost needed for the shadows scan. In
other words, the required boost is a function of the nominal scan
exposure (AE=off, AG=0): the more AE deviates from this "flat"
exposure, the more I need to boost to pull detail out of the shadows.
And if I use AE then I don't know what this exposure is in relation to
the flat exposure (by "flat" I mean, of course: AE=off, AG=0).
I know that this was a problem you were having with NS3 and the LS-30
but I didn't think it had occurred since you upgraded to the LS-50. It
certainly isn't something I have noticed in the LS-4000. I guess it
could be a comms issue, since that is a difference between the two, but
even so, it is unlikely that just one command would be problematic.

Oh, believe me, it is, and I just don't get it! I have a sneaky
suspicion it may have something to do with the same reason NikonScan
needs to be restarted after changing Nikon Color Management status. In
other words, whatever is initialized for NCM is probably initialized
for AE. But it could be a comms issue, or anything else... I just
have no time to really look into it...
I guess you mean actual exposure times, since analogue gain is a more
abstract concept which is relative to the last exposure.

I could use exposure times too, but if we take the manual exposure
with Master AG=0, AE=off as the baseline, then any AE after that can
be expressed as a relative AG (+ or -) deviation from that (nominal or
baseline) exposure. I just find it easier to work with that because AG
= EV but, of course, even exposure times would be better than nothing.
I don't know
if setting a specific exposure time is even possible - if it were then I
feel that Ed Hamrick would already have incorporated it into Vuescan, but
he also uses an exposure adjustment based on the autoexposure level,
just with a linear scale rather than an EV (ie. log) scale.

Actually, VueScan (VS) does display the absolute "value". If you do
AE in VS and then go to the exposure (need to click something, don't
remember anymore... maybe manual exposure?) then VS will reveal what
exposure times (in its weird VS "units") it used to achieve that AE.
This value is the deviation from the "flat" or nominal exposure which
in the "unique" VS world is 1 (not 0) as I seem to recall - I know,
it's actually a multiplier, but I'm just having fun with it... ;o)

Of course, not only is this exposure in weird "VS units" but who in
their right mind would use the "rolling beta" VS, anyway? ;o)
Presumably this 32 is *after* gamma has been applied? Hence it is
likely to be either 4 or 5 prior to the application of 2.2 gamma, ie.
at the density limit of the scanner, as expected.

Oh, sorry... Yes, of course, everything is with Gamma 2.2.
Again, you can calculate what this curve is much more precisely - it is
just the analogue gain difference between the primary and shadow scans -
taking account of gamma of course. So it should be quite possible to
automate this with exact figures rather than empirical estimation - and
then apply a gradual transition between the two.

That's what I thought but, instead of re-inventing optics empirically
like I did with KC correction ;-) can you give me some pointers? It's
just too time consuming for me to wade through theory until I find the
relevant areas. That's why I use the empirical shortcut...

Specifically, given the above explanations, i.e., a nominal scan
(AE=on), how would I calculate:

1. The required AG boost for the second scan to reveal detail in
shadows (allowing 5 or 6 for noise, as you suggest)

2. The threshold (on the Photoshop's 256 interval scale) which I can
then use to create the mask.

3. Points on the curve to apply to the shadows scan in order to bring
it into the same color "domain" as the AE scan.
Still tropical here - just changed to tropical monsoon season now! ;-)

Lucky you! ;-) Over here it briefly *pretended* to rain yesterday
afternoon but today it's back to oppressive Gobi desert weather...

The thing is I just can't stand the heat! Turns my brain to mush. I
have no problem with cold because you just keep putting on clothing,
but nothing comparable can be done once the thermometer hits 30 C.
But both are the same thing - the only difference is the degree applied
and, since both are usually adjustable parameters, they should be
interchangeable.

I was thinking of, say, anti-aliased letters or lines compared to a
slightly blurred image. The letters or lines just lay there exposed on
a blank page and even the slightest blurring is quite noticeable. A
complex image, on the other hand, has many distractions so a slight
blurring is easily overlooked.

I don't know... It may be just a subjective thing related to
perception but I find the former much more objectionable than the
latter.

Don.
 

Don

I don't think anyone would pay to read my ramblings!

Seriously, writing a book is a lot more difficult than writing on
Usenet. Here people raise topics, ask questions, make assumptions etc.
that others can respond to - but if you are writing a book you have to
do that part yourself. So inevitably there will be many aspects about
any subject covered in a book which remain untouched, even if they turn
out to be important issues to someone else. Also, you might have
noticed that there are a lot of topics that are important to scanning
and printing that I never get involved in the detailed discussions
about, because I don't know enough about them myself and simply take
them at face value or use them as tools.

One option could be to collect all postings and publish them "as is".
That also eliminates planning the book as it's simply a product of
"natural selection" (it automatically handles the most frequently
asked questions). An exhaustive index and a comprehensive cross
reference would then bring the necessary order. I'm not a lawyer, but
the one catch is that the quotes may need to be deleted, because
otherwise you may need to ask for permission.

I even have a title for you:

"Scanning or how to peel onions"

with the jacket blurb:

"Like onions, scanning is a multi-layered structure that will make a
grown man cry!" ;o)

Don.
 

Kennedy McEwen

Don said:
If I follow you correctly, that (4) would be about 64 on the
histogram, right?
No, it would be around 18 or so on the histogram. The 4 comes from the
fact that you are coding a 14-bit ADC value in a 16-bit format,
normalised to the peak level. So 1 on the 14-bit ADC corresponds to 4
on the 16-bit (raw) format. Then you apply gamma, which is a linear
gain adjustment in these deep shadows. Assuming a 2.2 gamma correction,
the 4 is multiplied by 10/2.2, which results in about 18.
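
The arithmetic, spelled out (a sketch using the figures above):

adc_bits, file_bits, gamma = 14, 16, 2.2

raw_16bit = 2 ** (file_bits - adc_bits)  # 1 ADC count -> 4 in the 16-bit file
shadow_slope = 10 / gamma                # ~4.55x linear gain in deep shadows
print(raw_16bit * shadow_slope)          # ~18.2, i.e. around 18 on the histogram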
Yes, I realize that subsequent AnalogGain (AG) adjustments use
AutoExposure (AE) as baseline. The problem is that I don't know where
this exposure is on the absolute scale. By "absolute scale" I'm
referring to the manual scan with AE=off, AG=0. Best explained with an
example...

Let's use the old Kodachrome (KC) adjustment. As you correctly stated
once there is no one *absolute* setting to correct for KC. However,
there is a relative "rule" which is (approximately): red needs to be
boosted by the same amount blue is cut - green stays as is (in reality
it needs a very slight boost).

Now, scanning manually with AE=off, AG=0 gets me the absolute
baseline. If the slide is bright enough and no additional Master AG
boost is needed then there is virtually no need for any KC adjustment.

However, if, for example, I need to boost Master AG=+2.0 (AE=off) in
order to get a properly exposed image, the KC correction goes up
correspondingly. I don't remember exactly (it's too hot to look it up)
but it's something like R=+1.2, B=-1.2 or thereabouts. If, on the
other hand, the image needs only Master AG=+1.0 then the KC adjustment
drops accordingly to about R=+0.6, B=-0.6. Therefore, the more (or
less) I need to boost the Master AG in order to get a proper exposure,
the more (or less) I need to boost KC adjustment (of course!).
That doesn't look right - the EV variance on the individual colours
should not be a function of the master EV.

Remember that the Analogue Gain settings are logarithmic, whilst the CCD
response is linear. That means that AG of +1.2R increases the red
exposure by 2.3x the nominal, whilst -1.2B exposes the blue by 0.44x the
nominal. So you are giving the red channel about 5.3x the exposure of
the blue, and this gets you into roughly the right region for colour
balance.

However, the EV numbers are additive, meaning the exposure time
multiplies, so adjusting the master EV by whatever you choose should
*NOT* change the KC adjustment required to get into the correct colour
balance region.

Specifically, if you set a master AG of +1 and then reduce the colour
channels to +0.6R and -0.6B, you are reducing the exposure difference
between them. This produces a total EV adjustment of +1.6R and -0.4B,
and hence red is exposed only 4x as much as blue.

If the first got you into an acceptable colour balance region, the
second one certainly could not.

The only way to maintain the same exposure ratio is to keep the same EV
variation in the colour channels. Hence when you add in the +1 Master
AG you have a total EV adjustment of +2.2R and -0.2B, maintaining an
exposure ratio of red to blue of 5.3:1.
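
A quick numeric check of this (AG in EV, exposure linear):

def exposure_factor(ag_ev):
    # Analogue gain is logarithmic (EV), CCD exposure is linear,
    # so each EV step doubles the exposure.
    return 2.0 ** ag_ev

print(exposure_factor(+1.2))              # ~2.30x nominal (red)
print(exposure_factor(-1.2))              # ~0.44x nominal (blue)
print(exposure_factor(1.2 - (-1.2)))      # ~5.3x red relative to blue

# A master AG shifts all channels equally, so the red:blue ratio -
# and hence the colour balance - is unchanged:
master = 1.0
print(exposure_factor((master + 1.2) - (master - 1.2)))  # still ~5.3x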
The problem is, if I use AE then I don't know where that exposure is
on the absolute scale and so I can't calculate the amount of required
KC adjustment.

I don't see why not. As we mentioned last time, this is easier to see
when you scan in RAW, with NCM off and gamma=1.0. Then the numbers
match up as the simple arithmetic predicts. The gamma is just a further
computation of the numbers, but does not affect the exposure ratios.
My initial tests seem to indicate that the same thing happens when
trying to determine the exposure boost needed for the shadows scan. In
other words, the required boost is a function of the nominal scan
exposure (AE=off, AG=0): the more AE deviates from this "flat"
exposure, the more I need to boost to pull detail out of the shadows.

This should not be the case at all. However, remember that Kodachrome
dye characteristic chart - there is more density in the red channel than
in the others. I suspect you are adjusting the balance of the shadows
to make it "look" right, rather than reflect what is actually there on
the film.
And if I use AE then I don't know what this exposure is in relation to
the flat exposure (by "flat" I mean, of course: AE=off, AG=0).
From above, absolute exposure is irrelevant, as is the ratio to the
"flat" exposure.
Actually, VueScan (VS) does display the absolute "value". If you do
AE in VS and then go to the exposure (need to click something, don't
remember anymore... maybe manual exposure?) then VS will reveal what
exposure times (in its weird VS "units") it used to achieve that AE.

Yes, I forgot about that. Just done some testing at settings of 10, 20,
30 etc., measuring the scan times for each, and a linear fit shows the
"unit" to be multiples of around 300uS per scan line. Probably
different for every type of scanner though, so you should check your
model.
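
A fit along those lines might look like this (the timings below are invented purely for illustration; only the ~300us-per-unit slope comes from the measurement):

import numpy as np

settings = np.array([10, 20, 30, 40])   # VueScan exposure settings
sec_per_line = np.array([0.0031, 0.0060, 0.0091, 0.0120])  # hypothetical timings

slope, intercept = np.polyfit(settings, sec_per_line, 1)
print(f"one 'unit' ~ {slope * 1e6:.0f} us per scan line")   # ~300 us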
My mistake, sorry - I thought you meant a data count of 32 in the 16-bit
range, not a Photoshop level of 32 in the 0-255 range. I didn't realise
what you meant until you referred to this range in your reply. The
difference is significant, because this now seems extremely high to be
perceiving excess noise at. Even after the 2.2 gamma is accounted for,
this is still very high in the ADC range, so base scanner noise and
ADC quantisation should not be a problem.
That's what I thought but, instead of re-inventing optics empirically
like I did with KC correction ;-) can you give me some pointers?

Well, I would estimate where a sensible upper threshold for the
transition to the shadow scan should begin using raw data to start with.
That would be somewhere well into the ADC range, perhaps ADC count of
16. On a 16 bit scale this is level 64. After gamma correction (still
linear in this region) that would correspond to a count of about 290 in
the 16-bit range (but still between zero and one in the Photoshop level
range).
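
Worked through with the same shadow-slope approximation:

adc_count, gamma = 16, 2.2

raw_16bit = adc_count * 4          # 14-bit ADC count in a 16-bit file -> 64
after_gamma = raw_16bit * 10 / gamma
print(round(after_gamma))          # ~291 in the 16-bit range
print(after_gamma / 256)           # ~1.1, i.e. around level 1 on the 0-255 scale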
It's
just too time consuming for me to wade through theory until I find the
relevant areas. That's why I use the empirical shortcut...

Specifically, given the above explanations i.e., a nominal scan
(AE=on) how would I calculate:

1. The required AG boost for the second scan to reveal detail in
shadows (allowing 5 or 6 for noise, as you suggest)
Each EV is just doubling the CCD exposure and, since the shadows are in
the linear region of the gamma curve, this should be reproduced as a
doubling of the data count in the 16-bit image. So scaling these back
by the same factor should put the shadows back in the same range as the
main scan. Given that you have 16-bit file depth, a 14-bit ADC and
working in a 2.2 gamma space (ie. a gain of around 4.5 in the shadows)
you would need to increase the shadow exposure by about 18x, or around
4.17EV before running into quantisation limits. Since you are using
Photoshop though and only have 15-bit processing space, you will reach
that quantisation limit with only +3.17EV on your shadow scan.
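
The headroom calculation, spelled out:

import math

file_bits, adc_bits, gamma = 16, 14, 2.2

shadow_gain = 10 / gamma                              # ~4.5x gamma slope in shadows
headroom = 2 ** (file_bits - adc_bits) * shadow_gain  # ~18x
print(round(math.log2(headroom), 2))                  # ~4.18, the ~4.17EV figure

# Photoshop's 15-bit working space halves the available headroom:
print(round(math.log2(headroom / 2), 2))              # ~3.18, i.e. ~3.17EV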
2. The threshold (on the Photoshop's 256 interval scale) which I can
then use to create the mask.
This is what triggered the fact I had misunderstood you above. I really
can't advise on this because I am using PS7.1 which only allows mixing
of 8-bit images. As I mentioned in another post, I assume you are using
PS-cs to implement these mixes.
3. Points on the curve to apply to the shadows scan in order to bring
it into the same color "domain" as the AE scan.
I think I touched on this above - you appear to be trying to correct for
the colour balance of Kodachrome's dyes being different in the dense
shadows at the point of implementing the mix of shadow and primary scan.
Isn't it easier/better to scan the film as it is, mix the shadows with
the balance they actually have and then correct the shadow colour
balance using the PS tool? After all, the exact balance of the dense
regions of each slide will depend critically on how much light it has
been exposed to since it was developed, and it is unlikely that you have
enough control to get a good match with just the AG levels to play with.

Presumably you are implementing a curve manipulation or some form of
contrast mask type operation to bring this 16-bit shadow detail into an
8-bit display range at some stage.
 

Don

OK, I'm back! It's amazing how the brain kicks in when the temperature
finally drops below 30 C! ;o)

BTW, I reversed the order of quotes to deal with important stuff
first, and leave the rest for you to comment on, time permitting.

---

Each EV is just doubling the CCD exposure and, since the shadows are in
the linear region of the gamma curve, this should be reproduced as a
doubling of the data count in the 16-bit image. So scaling these back
by the same factor should put the shadows back in the same range as the
main scan. Given that you have 16-bit file depth, a 14-bit ADC and
working in a 2.2 gamma space (ie. a gain of around 4.5 in the shadows)
you would need to increase the shadow exposure by about 18x, or around
4.17EV before running into quantisation limits. Since you are using
Photoshop though and only have 15-bit processing space, you will reach
that quantisation limit with only +3.17EV on your shadow scan.

Excellent! Thanks!

That seems to correspond very nicely to the last batch of my tests
where I actually used +3 AG which is equivalent to +3 EV (although I
haven't tested the darkest slide yet!). I also noticed that I could
have used a tad more - which accounts for the 0.17 EV in your
calculations - but decided that the remainder was not that visible.

So, I suppose a rough rule-of-thumb would be to apply +3.2 AG/EV for
the shadow scan as standard and be assured that - given other
limitations - I will be pulling out the most data (in my current
setup).

However, since my scanner is only 14-bit shouldn't the PS 15-bit
limitation be unimportant here? (I'm not averaging out, but just
combining two separate scans.)
This is what triggered the fact I had misunderstood you above. I really
can't advise on this because I am using PS7.1 which only allows mixing
of 8-bit images. As I mentioned in another post, I assume you are using
PS-cs to implement these mixes.

No, I'm actually using PS6!

I came up with a (convoluted) method to do 16-bit layers in PS6. Very
messy but it works. I'll explain in more detail later if you're
interested (this message is already too long...).

However, I use an 8-bit image (duplicate) to create the mask which I
then use on 16-bit images. I understand this is not ideal but the mask
does not have to be perfect. Besides, since there is blurring in
transition areas 8-bit should be more than enough for the mask.

My (unreliable?) empirical data seem to suggest a point of around 32
on the 256-level scale. Perhaps a bit higher, but I settled on the 32
- for now.

So, given an 8-bit image, where would you set the threshold given the
environment in point 1 above?
I think I touched on this above - you appear to be trying to correct for
the colour balance of Kodachrome's dyes being different in the dense
shadows at the point of implementing the mix of shadow and primary scan.

That's right!
Isn't it easier/better to scan the film as it is, mix the shadows with
the balance they actually have and then correct the shadow colour
balance using the PS tool?

I do scan as is (i.e., since LS-50 I no longer apply KC adjustment
with AG) but if I mix those two scans I then get an abrupt cutoff in
the transition areas. This can be hidden somewhat by blurring the edge
in the mask, but that only deals with the consequences of the problem,
not the problem itself, i.e., the mismatched color balance on either
side of this blurred transition.

Because of that, before I merge the two I need to "adjust" the shadows
scan to be in the same "histogram domain" as the nominal scan by
applying a specific curve. Following that, there is no abrupt cutoff
and the two images flow seamlessly into each other. I may apply a very
small amount of blur to the mask edge anyway, but there is no color
mismatch anymore between the two scans.

Only after that do I apply the slight KC adjustment to the composite
image. Actually, since LS-50, KC adjustment is no longer a separate
step in my workflow but just a part of the general color correction.
After all, the exact balance of the dense
regions of each slide will depend critically on how much light it has
been exposed to since it was developed, and it is unlikely that you have
enough control to get a good match with just the AG levels to play with.

Sorry, that's the misunderstanding. Since the LS-50 I don't use AG for
KC adjustment anymore.
Presumably you are implementing a curve manipulation or some form of
contrast mask type operation to bring this 16-bit shadow detail into an
8-bit display range at some stage.

I'm not quite sure I understand what you mean? Are you referring to
the contrast masking procedure of the two scans?

I don't do traditional contrast masking when combining the two images
because none of those methods deal with the above-mentioned color
imbalance. So, I invented my own...

First a bit of background to explain my train of thought. For the
purposes of this explanation, let's assume that the 256 histogram
range is an "ideal" range capable of covering the full dynamic
spectrum. I'll call this the "absolute" histogram.

Due to the scanner's shortcomings it's only capable of covering a
portion of this absolute range. So, let's say that the nominal scan
(with AE) only covers 64-255 on the absolute scale. However, the
scanner presents this as 0-255. I'll call this the "relative"
histogram.

Conversely, the shadows scan will only scan, say, 0-192 on the
absolute scale, but again, present this as 0-255 on the relative
scale.

So, we end up with two scans which both have a relative range of 0-255
although they actually represent different areas of the absolute
histogram!

My question 3 is: how do I determine the curve to apply to the
shadows scan to bring its histogram into the same (absolute) range as
the nominal scan?

The method I have been using is as follows:

Having created a mask intended for merging of the two scans, I do a
histogram of the shadow area in both scans (in my case that's anything
below 32). I then note the mean values of both and create a curve with
1 point per color, where Input=Shadows scan Mean, Output=Nominal scan
Mean and then apply this curve to the shadows scan - before merging.
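
A sketch of that method, approximating the one-point Photoshop curve as a straight per-channel scale (the real curve tool interpolates, so this is only the gist):

import numpy as np

def match_shadow_scan(shadow, nominal, mask):
    # shadow, nominal: float arrays (H, W, 3); mask: boolean (H, W)
    # covering the dark region (everything below the threshold).
    adjusted = shadow.astype(np.float64).copy()
    for c in range(3):
        in_mean = shadow[..., c][mask].mean()    # Input = shadows-scan mean
        out_mean = nominal[..., c][mask].mean()  # Output = nominal-scan mean
        adjusted[..., c] *= out_mean / in_mean   # one "curve point" per channel
    return adjusted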

My question is whether there is a way to calculate the (theoretical)
RGB points on this curve instead of relying on empirical data (0-32
Mean values)?

Actually, using only a single point as I do is very blunt but since
the range is very narrow I can get away with it. In real life I would
actually need to separate this range (0-32 on the 8-bit histogram)
into multiple bands and create multiple curve points for each color.

That would be very time consuming to do empirically for each scan, but
if I knew how to calculate these values theoretically, I could create
better curves with multiple points, faster! That's what I was asking.


.... and now the rest, time permitting ...
That doesn't look right - the EV variance on the individual colours
should not be a function of the master EV.

I spent a lot of time on this with the LS-30 and I clearly observed
that the darker the slide (i.e. the more I needed to boost Master AG)
the more variance I needed to introduce between individual colors to
eliminate (well, let's say, ameliorate) the casts. Although I need
much less adjustment with the LS-50 the same pattern is still there.

I also noticed (and have Excel sheets with graphs, someplace) that
different parts of the histogram actually need different amounts of AG
(I know, old hat to you, but a revelation to me). However, since I was
only after increased dynamic range (not color correction) I ignored
that part.

This is all with Gamma 2.2, BTW, which may (and probably does) account
for non-linear effects I observed.
Remember that the Analogue Gain settings are logarithmic, whilst the CCD
response is linear. That means that AG of +1.2R increases the red
exposure by 2.3x the nominal, whilst -1.2B exposes the blue by 0.44x the
nominal. So you are giving the red channel about 5.3x the exposure of
the blue, and this gets you into roughly the right region for colour
balance.

However, the EV numbers are additive, meaning the exposure time
multiplies, so adjusting the master EV by whatever you choose should
*NOT* change the KC adjustment required to get into the correct colour
balance region.

Well, that's what I started with, but I thought no single KC
adjustment was possible. And, my later tests pretended to confirm
that...

But something just occurred to me! Since color balance is not linear
across the histogram, but varies depending on where on the histogram the
measurement is made (your Kodak graph, as well as my own tests when
plotted confirm this) maybe what I was observing is this difference!

In other words, even though the KC adjustment is not a function of
exposure, it's different for different parts of the histogram. And
since a darker image has more data lower in the histogram the
difference I'm observing is not a function of the Master AG, but a
function of the histogram range which happens to be prevalent in the
image!

Which, sort of, gets me back to square one... A single KC adjustment
is possible but it's non-linear, right (the Kodak graph)? And since AG
is linear, the best that can be done with AG is "fine tune" it to the
image. Images with prevalently dark areas need a different AG KC
adjustment to images with prevalently light areas. This also means,
that in either case, a reverse cast will be introduced on the opposite
side of the histogram.
Specifically, if you set a master AG of +1 and then reduce the colour
channels to +0.6R and -0.6B, you are reducing the exposure difference
between them. This produces a total EV adjustment of +1.6R and -0.4B,
and hence red is exposed only 4x as much as blue.

If the first got you into an acceptable colour balance region, the
second one certainly could not.

I think this may be explained with what I just wrote above that -
inadvertently - I was really adjusting the prevalent areas of the
image, rather than the whole image (as I thought back then!).
I don't see why not. As we mentioned last time, this is easier to see
when you scan in RAW, with NCM off and gamma=1.0. Then the numbers
match up as the simple arithmetic predicts. The gamma is just a further
computation of the numbers, but does not affect the exposure ratios.

I understand that, and I did run some tests with linear gamma but -
for pragmatic reasons - switched back to 2.2. For one, NikonScan gamma
2.2 seems to give me better results than PS gamma 2.2. I even
downloaded ACV curves from a site which claimed that PS gamma is
actually in error, but NikonScan still seems "better" somehow,
possibly because of the full 16-bit range!?

Of course, this complicated things, which may have - inadvertently -
led me to some incorrect conclusions... :-/
This should not be the case at all. However, remember that Kodachrome
dye characteristic chart - there is more density in the red channel than
in the others. I suspect you are adjusting the balance of the shadows
to make it "look" right, rather than reflect what is actually there on
the film.

Yes, I think that's exactly what happened! That's the risk of relying
solely on empirical data, especially data based on a very narrow
sample - the few rolls from a single batch I used for all these
tests...

It wasn't supposed to be like that! That was only a preliminary set of
data which I was supposed to generalize shortly afterwards. But I kept
getting mired ever deeper in a recursive set of problems which
unfolded fractal-like ("the onion effect") so I never left that
initial sample... :-/

Don.
 

Kennedy McEwen

Don said:
OK, I'm back! It's amazing how the brain kicks in when the temperature
finally drops below 30 C! ;o)

BTW, I reversed the order of quotes to deal with important stuff
first, and leave the rest for you to comment on, time permitting.
This is becoming less of a scanning issue and more of a Photoshop tools
discussion, I fear...
However, since my scanner is only 14-bit shouldn't the PS 15-bit
limitation be unimportant here? (I'm not averaging out, but just
combining two separate scans.)
But, if I understand you correctly (see later) you are combining these
in the correct ratio relative to each other. So the first step you
make, from your earlier post, is to darken the shadow scan to compensate
for the increased exposure you captured it with. This is the stage
where the 15-bit limitation of PS comes in. You might start with a
14-bit image, scanned at +3EV for the shadow scan, but now you have to
darken that by a factor of approximately 8x (ignoring the gamma slope
change) to get it into the same range as the primary scan. That means
that the lsb from the scanner on the shadow scan, which was originally
the 14th msb from the scanner, corresponds to the 17th msb relative to
the primary scan - but you only have 15-bits to play with in Photoshop.
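
In bit positions, the argument runs:

adc_bits = 14   # scanner ADC depth
boost_ev = 3    # the shadow scan's extra exposure (8x)

# Darkening the shadow scan by 8x shifts its data down three bit
# positions, so the scanner's lsb ends up at the 17th msb:
print(adc_bits + boost_ev, "vs Photoshop's 15 usable bits")   # 17 vs 15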
No, I'm actually using PS6!

I came up with a (convoluted) method to do 16-bit layers in PS6. Very
messy but it works. I'll explain in more detail later if you're
interested (this message is already too long...).

Hmm... I upgraded from PS5 directly to 7 and never went through the 6
phase. Nevertheless, I thought 7 could do everything 6 can and more.
However, I can't see any means of producing 16-bit layers in PS7, though
I rarely use most of the facilities it provides, hence the lack of
urgency to upgrade to CS.

Whilst I can think of ways that would give a 15-bit density range in the
result, by merging 8-bit masked layers in the appropriate ratio, the
resulting data would only have 8-bit precision. The problem with this
type of arrangement is that you are never really sure if the noise you
perceive at any level, especially after level adjustments, is due to
that lack of precision or if it is real.

True 15-bit layers appear to be outside of the capabilities of PS7 and,
I presume, PS6. So I guess you better explain what you are doing
because it seems to influence the levels that you are merging at.
However, I use an 8-bit image (duplicate) to create the mask which I
then use on 16-bit images. I understand this is not ideal but the mask
does not have to be perfect. Besides, since there is blurring in
transition areas 8-bit should be more than enough for the mask.

My (unreliable?) empirical data seem to suggest a point of around 32
on the 256-level scale. Perhaps a bit higher, but I settled on the 32
- for now.
The more I look at this, the more unbelievable it seems. On a linear
scale of 256 levels, 32 represents 1/8th of the full scale - the
threshold of the 3rd msb in the data. You are saying, in effect, that
the noise floor of your 14-bit scanner is actually around the 3rd msb
and, quite frankly, I just don't believe that! If it was the case that
everything below level 32 was just noise then adding 12.5% random
gaussian noise to your 14-bit scanned images would make a barely
perceptible change. OK, after conversion to a gamma 2.2 working space
you have a little extra to deal with, but the slope of the gamma
transfer curve at 1/8th of full scale is 1.4, which means the noise
floor you perceive is still more than the 4th msb referenced to the raw
scanner output.

Clearly, you either have a faulty scanner or I haven't understood what
you are trying to explain.
So, given an 8-bit image, where would you set the threshold given the
environment in point 1 above?
Well, assuming that your shadow scan is at +3EV, that corresponds to 8x
the exposure. So above a maximum level of (256/8 -1)=31, the output
should be entirely from the primary scan, since that is the only scan
which actually contains unsaturated data for that region. Obviously
there is nothing wrong with making the transition level lower than that,
but it would leave a gap in the resulting histogram if it were higher.
Below that, there could be an increasing proportion from the shadow scan
so that for a significant part of the range you are getting a 2x
multiscan benefit of both scans. However, well before the primary scan
runs out of data depth, the transition should have switched to entirely
the shadow scan, since this is the only scan that contains data below
the 14-bit limit of the scanner. In practical terms, that would be
around 1-2 on the 256 level scale giving around 4-5 bits of noise free
signal from the 14-bit primary scan, but 7-8bits on the shadow scan.

However, you are clearly seeing a much higher noise floor on your scans
if you empirically deduce that "everything below 32" in the primary scan
is just noise. Until that is bottomed out, these levels are
meaningless.
I do scan as is (i.e., since LS-50 I no longer apply KC adjustment
with AG) but if I mix those two scans I then get an abrupt cutoff in
the transition areas. This can be hidden somewhat by blurring the edge
in the mask, but that only deals with consequences of the problem, not
the problem itself i.e, the mismatched color balance at either side of
this blurred transition.

Because of that, before I merge the two I need to "adjust" the shadows
scan to be in the same "histogram domain" as the nominal scan by
applying a specific curve.

The curve should just be a scaling of 8 (3EV), modified by the
difference between the gamma transfer before and after scaling. It's a
fairly straightforward bit of mathematics to work out what the curve is
- scale the gamma curve down by 8 and divide by the original gamma
curve.
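
For a pure power-law gamma that recipe collapses to a constant scale in the encoded domain (a sketch; the linear toe segment comes up later in the thread):

def darken_3ev(encoded, gamma=2.2, factor=8.0):
    # Undo the 8x (+3EV) exposure in linear light, then re-encode.
    linear = encoded ** gamma
    return (linear / factor) ** (1 / gamma)

# With a pure power curve this is just a division by 8**(1/2.2) ~ 2.57:
print(darken_3ev(0.5))             # ~0.194
print(0.5 / 8 ** (1 / 2.2))        # same, ~0.194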
I'm not quite sure I understand what you mean? Are you referring to
the contrast masking procedure of the two scans?
Well, after lots of effort you have now created a 15-bit image from your
14-bit scanner, but you only have 8-bits of display range. You can't
count the gamma of the display, because you have already corrected for
that in the scanner, so somehow you need to bring these lower bits into
a perceptible region - and that is a lot less than the 8-bit display
range. This is what I am having real problems with - you should not
even be able to see these shadow details on a display without some level
shifting.

I think that is as far as I want to go at the moment, since there is
clearly a mismatch of understanding of the issues here - either you are
not describing what is happening at your end or I am not understanding
your explanation - either way, there is no point in continuing until
that is clarified.

Other than this:
I understand that, and I did run some tests with linear gamma but -
for pragmatic reasons - switched back to 2.2. For one, NikonScan gamma
2.2 seems to give me better results than PS gamma 2.2.

You could always work in linear space and then output the image to
NikonScan to apply the more pleasing gamma curve as a near final step.
Alternatively, you could work out the Nikonscan curves and apply them
directly in Photoshop - within its 15-bit limits.
 

Mac McDougald

Hmm... I upgraded from PS5 directly to 7 and never went through the 6
phase. Nevertheless, I thought 7 could do everything 6 can and more.
However, I can't see any means of producing 16-bit layers in PS7, though
I rarely use most of the facilities it provides, hence the lack of
urgency to upgrade to CS.

PS 7 (and of course PS 6 before it) can NOT do 16 bit layers; no idea why
OP would claim that.
Can't do 15 bit layers either :)

Mac
 

Don

This is becoming less of a scanning issue and more of a Photoshop tools
discussion, I fear...

Well, PS is what we both use, but the principles are not
tool-specific.
But, if I understand you correctly (see later) you are combining these
in the correct ratio relative to each other. So the first step you
make, from your earlier post, is to darken the shadow scan to compensate
for the increased exposure you captured it with. This is the stage
where the 15-bit limitation of PS comes in. You might start with a
14-bit image, scanned at +3EV for the shadow scan, but now you have to
darken that by a factor of approximately 8x (ignoring the gamma slope
change) to get it into the same range as the primary scan. That means
that the lsb from the scanner on the shadow scan, which was originally
the 14th msb from the scanner, corresponds to the 17th msb relative to
the primary scan - but you only have 15-bits to play with in Photoshop.

I think I see now what you mean. The thing is, I'm not using *all* of
the image. Most of the shadows (boosted) scan is thrown away
(masked) and only a narrow band is actually used. Indeed, most of the
shadows scan is clipped anyway after applying the 2-3 EV/AG boost.

If it helps any, I'm not doing conventional contrast *merging* (where
both images are used in full) but only "cutting out" two complementary
parts of both images and combining those.
Hmm... I upgraded from PS5 directly to 7 and never went through the 6
phase. Nevertheless, I thought 7 could do everything 6 can and more.
However, I can't see any means of producing 16-bit layers in PS7, though
I rarely use most of the facilities it provides, hence the lack of
urgency to upgrade to CS.

PS6 does not support 16-bit layers out of the box. However, I found a
way to combine two 16-bit images using a mask. In theory, quite a few
other 8-bit operations could be done, but it's very messy...

Anyway, it works and I can merge the two scans so I'm not complaining.
Whilst I can think of ways that would give a 15-bit density range in the
result, buy merging 8-bit masked layers in the appropriate ratio, the
resulting data would only have 8-bit precision. The problem with this
type of arrangement is that you are never really sure if the noise you
perceive at any level, especially after level adjustments, is due to
that lack of precision or if it is real.

True 15-bit layers appear to be outside of the capabilities of PS7 and,
I presume, PS6. So I guess you better explain what you are doing
because it seems to influence the levels that you are merging at.

OK, it's quite simple, really. In a nutshell, I export 16-bit images
as RAW (PS file format), then re-import these RAW files as 8-bit, but
*double the width*! Of course, the image appears distorted because
each 16-bit pixel is now represented by 2 neighboring 8-bit pixels.
However, I don't care because I have the mask already (on a 2-pixel
boundary to maintain 16-bit pixel integrity!!!). After merging the
images, I flatten the layers and export as 8-bit raw again. Import
back as 16-bit (with correct dimensions) and, "viola" a combined
image! ;-)
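
In numpy terms, the round trip looks like this (a sketch of the byte-splitting idea only, not of the actual Photoshop steps):

import numpy as np

img16 = np.array([[1076, 65535, 258]], dtype='>u2')   # big-endian 16-bit pixels

# Re-read as 8-bit at double width: each pixel becomes a byte pair.
img8 = img16.view(np.uint8)
print(img8)               # [[  4  52 255 255   1   2]]

# As long as every operation keeps each byte pair intact, viewing the
# result as 16-bit again recovers the original values:
print(img8.view('>u2'))   # [[ 1076 65535   258]]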
The more I look at this, the more unbelievable it seems. On a linear
scale of 256 levels, 32 represents 1/8th of the full scale - the
threshold of the 3rd msb in the data. You are saying, in effect, that
the noise floor of your 14-bit scanner is actually around the 3rd msb
and, quite frankly, I just don't believe that! If it was the case that
everything below level 32 was just noise then adding 12.5% random
gaussian noise to your 14-bit scanned images would make a barely
perceptible change. OK, after conversion to a gamma 2.2 working space
you have a little extra to deal with, but the slope of the gamma
transfer curve at 1/8th of full scale is 1.4, which means the noise
floor you perceive is still more than the 4th msb referenced to the raw
scanner output.

Clearly, you either have a faulty scanner or I haven't understood what
you are trying to explain.

No, no, it's not all noise. Well, at least not noise in the technical
sense. After numerous tests I have (visually and *subjectively*)
determined that I "don't like" the image data below 32 on the 256
scale. What I particularly don't like are the randomly colored
bright pixels which appear in dark areas when these areas are
temporarily (and radically!) boosted for inspection purposes only. A
properly boosted shadows scan lifts these areas, eliminates those ugly
"spurious pixels" and generally reveals more detail and contrast.
The curve should just be a scaling of 8 (3EV), modified by the
difference between the gamma transfer before and after scaling. It's a
fairly straightforward bit of mathematics to work out what the curve is
- scale the gamma curve down by 8 and divide by the original gamma
curve.

I understand that in theory, but could you perhaps give me an example
for a single arbitrary point, and I'll then calculate the rest?
Well, after lots of effort you have now created a 15-bit image from your
14-bit scanner, but you only have 8-bits of display range. You can't
count the gamma of the display, because you have already corrected for
that in the scanner, so somehow you need to bring these lower bits into
a perceptible region - and that is a lot less than the 8-bit display
range. This is what I am having real problems with - you should not
even be able to see these shadow details on a display without some level
shifting.

Yes, of course, those artifacts only become visible during a radical
(!) and *temporary* curves adjustment to inspect the dark areas.

It may be overkill to worry about cleaning up those areas, but I like
to start the edit with an image which is "clean" even in the darkest
areas. In most cases, such a radical boost (as used during inspection)
is not needed during actual editing but the fact that I'm starting
with a "clean" image and have the necessary headroom means I don't
have to worry about that and just edit away...

Don.
 

Don

PS 7 (and of course PS 6 before it) can NOT do 16 bit layers; no idea why
OP would claim that.

Because - as it says on top - I figured out a (convoluted) way to do
it, so it's not a claim but a fact.

Don.
 

Kennedy McEwen

Don said:
I think I see now what you mean. The thing is, I'm not using *all* of
the image. Most of the shadows (boosted) scan is thrown away
(masked) and only a narrow band is actually used. Indeed, most of the
shadows scan is clipped anyway after applying the 2-3 EV/AG boost.
That doesn't make any difference though, the point is that in the parts
that you are using you simply haven't got sufficient bit depth for the
operation you need. But it gets worse...
PS6 does not support 16-bit layers out of the box. However, I found a
way to combine two 16-bit images using a mask. In theory, quite a few
other 8-bit operations could be done, but it's very messy...

Anyway, it works and I can merge the two scans so I'm not complaining.
I don't think so...
OK, it's quite simple, really. In a nutshell, I export 16-bit images
as RAW (PS file format), then re-import these RAW files as 8-bit, but
*double the width*! Of course, the image appears distorted because
each 16-bit pixel is now represented by 2 neighboring 8-bit pixels.
However, I don't care because I have the mask already (on a 2-pixel
boundary to maintain 16-bit pixel integrity!!!). After merging the
images, I flatten the layers and export as 8-bit raw again. Import
back as 16-bit (with correct dimensions) and, "viola" a combined
image! ;-)
Err... that should be "voila, a 15-bit image that is only accurate to
8-bits!" Nice trick if it worked, but it doesn't. :-(

In the first step you do indeed get an image which is double width
because each pixel is represented as upper and lower byte pixels.
However, the mask that you apply (which, from your previous posts has
been blurred) now takes a fraction of that image ranging from 100% to 0%
depending on the level of the mask. That is fine if it is 100%. It's
fine if it is 0%. Anything else produces a truncation error at the
8-bit level.

For example, say you have a 16-bit pixel which has a value of 1076,
which is 434h. That gives two 8-bit pixels of values 4 and 52. Now, if
that pixel is masked at a 30% level, you get two 8-bit pixels with
values of 1 and 16. When you read these back in 16-bit format, you have
a level of 272. Unfortunately, 30% of 1076 is 323 (rounding), not 272.
Your shadow area is in error by about 16% - and this only considers the
original part of the image, not the masked part. In fact, everything
that falls below 8-bits in these transitional areas is no more than
systematic noise.
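
The truncation is easy to reproduce (the exact low byte depends on how the blend rounds):

value = 1076                      # 0x434 on the 16-bit scale
hi, lo = value >> 8, value & 0xFF # byte pair: 4 and 52

# Masking each byte separately at 30%:
masked = (round(0.30 * hi) << 8) | round(0.30 * lo)
print(masked)                     # 272 (1 and 16 recombined)

# Masking the true 16-bit value at 30%:
print(round(0.30 * value))        # 323
print(1 - masked / round(0.30 * value))   # ~16% error in the shadow area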

Back to the bank manager to ask for that loan for PS-CS! ;-)
No, no, it's not all noise. Well, at least not noise in the technical
sense.

OK, I overlooked the third possibility - that you described it badly
when you said in the previous thread that "anything below 32 was just
noise" or more recently in this thread when you said "anything below 16
contained no image data".
After numerous tests I have (visually and *subjectively*)
determined that I "don't like" the image data below 32 on the 256
scale. What I particularly don't like are the randomly colored
bright pixels which appear in dark areas when these areas are
temporarily (and radically!) boosted for inspection purposes only. A
properly boosted shadows scan lifts these areas, eliminates those ugly
"spurious pixels" and generally reveals more detail and contrast.
Of course it reveals more detail - you have extended the Dmax of the
scanner by around 0.9, and if you apply a sufficiently radical level
shift you will see that, but you won't see it without that level shift -
and certainly not on an 8-bit display. Even so, contrast stretching
level 32 to 255 on my LS-4000 scans does not show these randomly
coloured bright pixels in the shadows that you are referring to, so I
still harbour suspicions about your results.
I understand that in theory, but could you perhaps give me an example
for a single arbitrary point, and I'll then calculate the rest?
Well, say you have a pixel on the original +3ev shadow scan that has
data of 500 on the 16-bit data range. But Nikonscan has produced that
from the raw scanner output by applying a gamma correction of 2.2, and
it is the raw data that you actually want to scale by a factor of 8 and
then re-apply the 2.2 gamma correction to the result. If this gamma was
applied across the entire range, which is a reasonable approximation,
this would just be the same as applying the gamma correction to the
scale factor across the entire range. So that the 8x reduction becomes
a factor of 8^(1/2.2) = 2.57x.

However, Photoshop uses the RIE gamma correction, so for data below 1.8%
of full scale, the gamma curve is limited to a straight line of 10/gamma
and a scale factor on a linear transfer function retains the same scale.
So, where the raw data falls in this region, the correction increases to
the original 8x scale, and the transition to this level from the 2.57x in
the higher levels just follows the scale of the gamma curve applied to
the original and scaled data.
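
Putting the two regimes together (a sketch of the approximation described above: a straight line of slope 10/gamma below 1.8% of full scale, a pure power law above):

def encode(linear, gamma=2.2, toe=0.018):
    # Gamma encoding with the linear toe described in the post.
    slope = 10 / gamma
    return linear * slope if linear < toe else linear ** (1 / gamma)

def encoded_scale(raw, factor=8.0):
    # Ratio between the encoded original and the encoded darkened value.
    return encode(raw) / encode(raw / factor)

print(encoded_scale(0.5))    # ~2.57x: both values on the power-law part
print(encoded_scale(0.01))   # 8.0x exactly: both values on the linear toe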
It may be overkill to worry about cleaning up those areas, but I like
to start the edit with an image which is "clean" even in the darkest
areas. In most cases, such a radical boost (as used during inspection)
is not needed during actual editing but the fact that I'm starting
with a "clean" image and have the necessary headroom means I don't
have to worry about that and just edit away...
But since your 16-bit layer processing actually introduces a
discontinuity at the 8-bit level, scaled by the mix factor, I fear you
have simply succeeded in adding low level systematic noise to your
shadow detail, although the systematic noise does have a form determined
by the image content. In short - you are kidding yourself.
 

Don

In the first step you do indeed get an image which is double width
because each pixel is represented as upper and lower byte pixels.
However, the mask that you apply (which, from your previous posts has
been blurred) now takes a fraction of that image ranging from 100% to 0%
depending on the level of the mask. That is fine if it is 100%. It's
fine if it is 0%. Anything else produces a truncation error at the
8-bit level.

Yes, in the very narrow transition area, which is exactly why I'm
looking for a perfect curve to adjust the histograms so I don't have
to blur the mask.

However, it's also important to note that this blur may only be needed
when the transition falls in the middle of a gradient. In all other
cases it's not needed. So, at worst, it's only a very marginal
problem.
For example, say you have a 16-bit pixel which has a value of 1076,
which is 434h. That gives two 8-bit pixels of values 4 and 52. Now, if
that pixel is masked at a 30% level, you get two 8-bit pixels with
values of 1 and 16. When you read these back in 16-bit format, you have
a level of 272. Unfortunately, 30% of 1076 is 323 (rounding), not 272.
Your shadow area is in error by about 16% - and this only considers the
original part of the image, not the masked part. In fact, everything
that falls below 8-bits in these transitional areas is no more than
systematic noise.

Even if I'm forced to accept these errors in transition areas, that's
certainly something I can live with because (in addition to only being
relevant for some images) this area is only a couple of pixels wide
and the rest of the image is unaffected.

Not to mention that this transition area is already "corrupt" in the
sense that it's a blend of two images. So what if the blend is uneven?
Indeed, that random error may even enhance the "natural look" of the
blend!?

However, this is all still "work-in-progress" and just like I found a
way to use 16-bit layers in PS6, I'm sure there must be a way to
tackle this - by comparison - very minor problem.

For example, by simply applying the blur *after* combining the two
images the problem is solved!
OK, I overlooked the third possibility - that you described it badly
when you said in the previous thread that "anything below 32 was just
noise" or more recently in this thread when you said "anything below 16
contained no image data".

Well, it doesn't because the nominal scan "pushes" the histogram as
far to the right as possible, which then leaves the first 15 or so
levels "empty".

All that is "so-far..." and once I get into different batches of film
this may change. But for now, in effect, the band I "don't like" is
only about 16-levels wide i.e. from ~16-32 on the 256 scale.
Of course it reveals more detail - you have extended the Dmax of the
scanner by around 0.9, and if you apply a sufficiently radical level
shift you will see that, but you won't see it without that level shift -
and certainly not on an 8-bit display.

All that is a given. As I explain later, the point is to create a
"clean" image which I can subsequently edit with peace of mind and
without worrying that some radical editing action may inadvertently
reveal some ugly and undesirable artifacts.
Even so, contrast stretching
level 32 to 255 on my LS-4000 scans does not show these randomly
coloured bright pixels in the shadows that you are referring to, so I
still harbour suspicions about your results.

Have you looked at the image at 300-400% magnification? Also, try
Kodachromes (~1980 vintage), because that's all I have been scanning
so far. If I use a curve with a single point Input=32, Output=64 some
pixels in the (very) dark areas "light up" and clearly stand out from
the amorphous "goo" of the rest of the dark area.

Also, bear in mind that this is subjective and I may just be very
picky! After all, that's why I'm not happy with using image editing in
NikonScan as we discussed earlier. So, even if you see these "inverse
pepper spots" they may not be as objectionable (and therefore as
visible) to you as they are to me.
Well, say you have a pixel on the original +3ev shadow scan that has
data of 500 on the 16-bit data range. But Nikonscan has produced that
from the raw scanner output by applying a gamma correction of 2.2, and
it is the raw data that you actually want to scale by a factor of 8 and
then re-apply the 2.2 gamma correction to the result. If this gamma was
applied across the entire range, which is a reasonable approximation,
this would just be the same as applying the gamma correction to the
scale factor across the entire range. So the 8x reduction becomes
a factor of 8^(1/2.2) = 2.57x.

However, Photoshop uses the ITU gamma correction, so for data below 1.8%
of full scale the gamma curve is limited to a straight line with slope
10/gamma, and a scale factor on a linear transfer function retains the
same scale. So, where the raw data falls in this region, the correction
increases to the original 8x scale, and the transition from the 2.57x at
higher levels to the full 8x in this region simply follows the gamma
curve applied to the original and scaled data.

I don't intend to use Photoshop gamma at all. Even when doing gamma in
Photoshop I use pre-calculated AMP files, because PS gamma is quite
appalling - at least by visual comparison. A side-by-side comparison
shows banding in the PS gamma image while the AMP gamma image is quite
smooth and shows more detail.

No, my idea is to create a custom curve (AMP if need be) to, in
effect, emulate the result of an EV/AG boost or cut. This curve should
contain all the necessary corrections (both EV/AG and gamma).

As I explained earlier, until now, I have been doing this empirically
by noting Mean RGB values of both images in the area of interest and
then creating a curve using those sampled Mean values as Input and
Output points. This works very well but I'd just like to know how to
calculate this.

So, let's take this example:

If a curve point has the (Input) value of 127, what calculations do I
need to perform to get what this point should be if I were to emulate
a 2 EV/AG cut i.e. what should be the curve Output value?

What I'm looking for is something along the lines of:
- first remove 2.2 gamma like so: x=127/something...
- then apply 2 EV/AG cut, like so: y=x/2EV
- and then re-apply 2.2 gamma like so: z=y*whatever...

OK, stop laughing... ;o)

It's just an example and the answer would be something like:
Therefore, for Input=127, the Output=42... or whatever...
Ha! 42! Life, Universe and Everything! ;o)

But seriously, I can then calculate the rest of the points, and create
the curves for the few EV/AG values I use the most.
But since your 16-bit layer processing actually introduces a
discontinuity at the 8-bit level, scaled by the mix factor, I fear you
have simply succeeded in adding low level systematic noise to your
shadow detail, although the systematic noise does have a form determined
by the image content. In short - you are kidding yourself.

Which only applies to the razor thin transition area! Not to the image
at large.

Besides, you yourself stated above how much information is hidden by
an 8-bit display alone! By comparison to all that, the couple-of-pixels
wide transition area - which will subsequently be edited with the rest
of the image and hence further obscured, and which at ~5400x3600 pixels
can currently only be viewed at a fraction of the original resolution -
is virtually, and literally, invisible!

Besides, as I write above, if I simply apply the blur afterwards, the
"problem" is solved.

Don.
 
K

Kennedy McEwen

Don said:
Yes, in the very narrow transition area, which is exactly why I'm
looking for a perfect curve to adjust the histograms so I don't have
to blur the mask.

However, it's also important to note that this blur may only be needed
when the transition falls in the middle of a gradient. In all other
cases it's not needed. So, at worst, it's only a very marginal
problem.
Marginal, but at a level which is significantly higher than the levels
you are desperately trying to improve. That is not a successful
solution in my book. Furthermore, it does mean that the fudge is
nothing like as useful as the description of "16-bit layers" suggests.
Not to mention that this transition area is already "corrupt" in the
sense that it's a blend of two images. So what if the blend is uneven?

You seem to have lost the plot here. Are you, or are you not, trying to
get a more faithful reproduction of what is on film all the way down to
the shadow detail? It seems you are now just interested in capturing
shadow structure whether faithfully represented or not.
Well, it doesn't because the nominal scan "pushes" the histogram as
far to the right as possible, which then leaves the first 15 or so
levels "empty".
Do you mean empty "in the technical sense"? The contents of the
histogram bins are indeed zero? If so, then the rest of your attempts
are pointless, because there is nothing there in the shadows to pull
out. Photoshop (and NikonScan) round the histogram bin contents up for
display. So even though the highest bin may have a population of
10Mpixels, a bin containing just a single pixel is still visible and
distinguishable on the histogram from a truly "empty" bin.

Whilst levels "below 16 contained no image data" is perfectly consistent
with "the first 15 or so levels (are) empty", neither of these
statements is consistent with your quest to pull data out of the image
which corresponds to the least significant byte of a 16-bit range. If
the first 15 levels are indeed empty, not only do no pixels have data
described only by the lower byte, but no data in the scan exists which
is lower than 16 on the 256 level scale, which is 4096 in the true
16-bit scale. That suggests a slide with a very limited contrast. It
may be a dense image, but the shadow to highlight contrast would be very
limited to produce that range of levels.
All that is "so-far..." and once I get into different batches of film
this may change. But for now, in effect, the band I "don't like" is
only about 16-levels wide i.e. from ~16-32 on the 256 scale.
So we have a range of levels corresponding to the lower 10-bits of the
scanner ADC which have no output, whilst the range corresponding to the
11th lsb contains the data you don't like. It comes back to my earlier suggestion
that there is a fault. If you get no output below level 15 on the 256
range in a normal scan and additional information (which is not
discontinuous from the previously visible data) is made visible by
scanning at +3EV then there is certainly a fault.
All that is a given. As I explain later, the point is to create a
"clean" image which I can subsequently edit with peace of mind and
without worrying that some radical editing action may inadvertently
reveal some ugly and undesirable artifacts.
I can think of much easier ways of getting peace of mind than this Don.
I had thought you were trying to get the last vestige of shadow
information from the film in the correct proportion to the rest of the
image - rather than just make sure you would never see it.
Have you looked at the image at 300-400% magnification?

Yes - and all the way up to 1600%! However, remember that once the
pixels are large enough to be individually discernible on the display,
further magnification merely reveals block structure, not real
information.
Also, try
Kodachromes (~1980 vintage), because that's all I have been scanning
so far.

The only KC I have is 60's and 70's vintage, but the medium is
irrelevant in this context - no film I have results in empty histograms
below level 15 and bright pixel noise between levels 16 and 32. You are
also talking about single pixels, and it would be very unlikely that an
emulsion artefact would always be limited to single pixels, however a
scanner fault would. Seriously, if you are describing this accurately
then you are describing a fault condition, and not normal expectations.
Can you post some examples of this on a web page to show us what you are
describing?
If I use a curve with a single point Input=32, Output=64 some
pixels in the (very) dark areas "light up" and clearly stand out from
the amorphous "goo" of the rest of the dark area.

32 >> 64 is only a gain of 2x. I referenced a gain of almost 8 above,
32 >> 255, and still can't see anything that I would describe like this.
Also, bear in mind that this is subjective and I may just be very
picky! After all, that's why I'm not happy with using image editing in
NikonScan as we discussed earlier. So, even if you see these "inverse
pepper spots" they may not be as objectionable (and therefore as
visible) to you as they are to me.

Don, as already established, after the scan has been made, we can view
any part of the image at any scale you choose in Nikonscan just as
readily as in Photoshop - the difference is that in NikonScan you can do
this *before* the scan is made too, albeit less readily. Photoshop
offers no advantage in this respect - on the contrary, the spatial
filtering applied to the cache from which the scaled image is drawn
means you are likely to see less real image data and more editor
artefacts in PS than you are in NikonScan.
I don't intend to use Photoshop gamma at all. Even when doing gamma in
Photoshop I use pre-calculated AMP files, because PS gamma is quite
appalling - at least by visual comparison. A side-by-side comparison
shows banding in the PS gamma image while the AMP gamma image is quite
smooth and shows more detail.
I assume that you are again referring to perceived banding after you
have applied some level shifting or gain to the image to pull out shadow
detail. If not - check your monitor calibration, monitor profiles are
notorious banding sources. I assume you are referring to contrast
enhanced shadow detail because I examined Photoshop gamma curves in
detail some time ago - indeed I have saved some of these in an Excel
spreadsheet for comparison purposes. (You might recall an earlier
discussion on the precision of PS & NS!). I can assure you that the PS
curve is perfectly smooth to the 15-bit input precision. The NS gamma
curve, whilst a different function, is accurate to 16-bit precision.

Not only that, but the PS curve incorporates the ITU recommendation that
limits the maximum slope to 10/gamma in the shadow region specifically
to avoid quantisation issues. So perception of banding with the
standard PS gamma curve suggests something else wrong in the system.

Less apparent posterisation can be achieved by two adjustments to the
gamma curve, but each has its own drawbacks:

1. limit the maximum slope of the transfer function to even less than
10/2.2. To obtain a smooth transition, this means that you need to
increase the level at which the transition from perceptual space to
linear space occurs - so gamma is not actually applied to even lighter
shadows than the recommendations, which distorts the relative perception
of image shadows and midtones.
or
2. place less restriction on the maximum slope, thus maintaining
perceptual space right down into the deepest shadows, but restrict the
depth into which the image can be rendered to prevent quantisation
exceeding the visual threshold. For example, applying continuous gamma
2.2 right down to shadow limits of a 16-bit range produces the following
transformations:
0 >> 0
1 >> 424
2 >> 581 (37% brighter)
3 >> 698 (20% brighter)
4 >> 796 (14% brighter) etc.
ie. this produces exceedingly large quantisation steps in the extreme
shadows. In fact, the raw data has to be greater than 970 or so before
the quantisation step equals that of the Photoshop gamma curve - in
other words, before the perceptual banding in the shadows is less than
in Photoshop. However, if you ensure the shadow data is greater than
this level then the quantisation will appear less than the PS gamma, but
at the expense of reduced shadow density.
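Those figures can be checked directly (a Python sketch under the same
assumption of a continuous, unrestricted 2.2 gamma over the 16-bit
range):

    FULL = 65535.0
    GAMMA = 2.2
    def encode(raw):
        return round(FULL * (raw / FULL) ** (1.0 / GAMMA))
    for raw in range(5):
        print(raw, encode(raw))   # 0>>0, 1>>424, 2>>581, 3>>698, 4>>796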

So, if you are seeing "less banding" by the application of an alternate
gamma, it is almost certainly at the expense of something else in your
images - most likely either perceptual uniformity throughout the density
range or loss of shadow density. You might want to consider what that
could be, and how it might relate to some of the symptoms that you have
reported so far in this and previous threads. "You don't get owt for
nowt, lad", as the saying goes, and while you may have chosen your gamma
curve preference for good reason you may have overlooked its limitation.
That limitation may or may not be important to you.
No, my idea is to create a custom curve (AMP if need be) to, in
effect, emulate the result of an EV/AG boost or cut. This curve should
contain all the necessary corrections (both EV/AG and gamma).

As I explained earlier, until now, I have been doing this empirically
by noting Mean RGB values of both images in the area of interest and
then creating a curve using those sampled Mean values as Input and
Output points. This works very well but I'd just like to know how to
calculate this.
I think I explained that in the previous post, by example. However,
although I left the last part for you to calculate, you seem to have
ignored it. So, for the raw data of 500 on the 16-bit scale, the
correction for +3EV on the linear scale would result in 500/8 = 62.5.
However, taking account of the simple unrestricted slope gamma
translation, this would be 500/2.57 = 195.

Since I do not know what gamma you are using, I can only advise on the
method - you need to work out the implementation in your space,
particularly if using a non-standard curve.
So, let's take this example:

If a curve point has the (Input) value of 127, what calculations do I
need to perform to get what this point should be if I were to emulate
a 2 EV/AG cut i.e. what should be the curve Output value?
With an unrestricted slope, 2.2 gamma curve, this would be:
127/(4^(1/2.2)) = 127/1.88 = 68.
What I'm looking for is something along the lines of:
- first remove 2.2 gamma like so: x=127/something...
- then apply 2 EV/AG cut, like so: y=x/2EV
- and then re-apply 2.2 gamma like so: z=y*whatever...

OK, stop laughing... ;o)

I am not laughing because that is exactly the process. However it can
be reduced to a single step, as shown above, if the gamma curve is not
slope restricted because the steps reduce to the following:

1. x = data^gamma
2. y = x / 2^EV
3. z = y ^ (1/gamma)

Hence z = data / 2^(EV/gamma). For the case of gamma=2.2 and 2EV
scaling, this reduces further to z = data / 1.8778618213234127 which I
approximated even further above as data/1.88. ;-)

However, you do need to know the details of the gamma curve you are
using - or at least have a good approximation of them, if you have slope
limitations, such as in the Photoshop implementation of gamma.

This is relatively easy to find out if you have, or are prepared to
synthesise, a test image, such as a 16-bit ramp in each colour.
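By the same token, a whole 256-point curve for a given EV cut could be
generated along these lines (a sketch assuming the unrestricted 2.2
gamma; a slope-limited curve such as Photoshop's needs its shadow
segment treated separately, as described above):

    GAMMA = 2.2
    EV = 2.0                              # a 2 EV/AG cut
    divisor = 2.0 ** (EV / GAMMA)         # 1.8778618... for 2EV, gamma 2.2
    curve = [level / divisor for level in range(256)]
    print(round(curve[127]))              # 68, the worked example above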
It's just an example and the answer would be something like:
Therefore, for Input=127, the Output=42... or whatever...
Ha! 42! Life, Universe and Everything! ;o)

But seriously, I can then calculate the rest of the points, and create
the curves for the few EV/AG values I use the most.
Hope the above explains it.
Which only applies to the razor thin transition area! Not to the image
at large.

Besides, you yourself stated above how much information is hidden by
an 8-bit display alone! By comparison to all that, the couple-of-pixels
wide transition area - which will subsequently be edited with the rest
of the image and hence further obscured, and which at ~5400x3600 pixels
can currently only be viewed at a fraction of the original resolution -
is virtually, and literally, invisible!

Besides, as I write above, if I simply apply the blur afterwards, the
"problem" is solved.
Yes, you can blur the whole image, which is fine if you think your
monitor resolution will never increase in the future, or you can apply
the blur manually to the transition edges after restoring the 16-bit
format - in which case standard monitor resolution will have increased
to these levels before you complete processing scans from your first
roll of film. ;-) However you can't apply the blur just to the
transition areas because you cannot apply layers to 16-bit images in
PS6/7. And if you resort to 8-bit images before applying the blur then
you have lost everything you have just struggled to include - assuming
that you fix that issue with blank bins below level 16 and noisy bins
between that and level 32. There is something seriously out of kilter
there.
 
D

Don

You seem to have lost the plot here. Are you, or are you not, trying to
get a more faithful reproduction of what is on film all the way down to
the shadow detail? It seems you are now just interested in capturing
shadow structure whether faithfully represented or not.

The shadow part remains unaffected because it's 100% copy from the
shadow scan layer.

We're only talking about a couple of pixels in the *transition area*
between the two layers which is blurred.

Of course, as I indicate later, if I simply blur that transition area
(in 16-bit mode) after I combine the layers then even this negligible
amount of image data in transition area will be unaffected.
Do you mean empty "in the technical sense"? The contents of the
histogram bins are indeed zero?

Depends on which channel but, in general, yes. Red reaches deeper
(fewer 0-counts) while blue is narrower (more 0-count bins) - as is to
be expected of Kodachromes. In the few difficult images I tried so far
I ended up clipping 10 to 15 levels (256 scale).

NOTE: There are quite a few important caveats here (due to my method)
and I think we may have gotten out of sync (in part due to hot weather
delays, as well as general meanderings) so let me briefly summarize
and clarify any misunderstandings - because I have a feeling we may be
talking about different things:

AE scans do not suffer from this to the extent described, i.e. there
are pixels in the 0-16 range.

Shadows scans *adjusted* to be brought down to the nominal scan range
(using my primitive empirical "method") do have empty bins.

*However*, (and it's a big however!) this is almost certainly due to
my method. In order to minimize the number of edits the (empirically
arrived at) curve used to bring the shadows scan down to nominal scan
levels is also *gray point adjusted* as is the nominal scan (!) before
the two images are combined!

Given all that, the *combined* image will (or, to be totally accurate,
may) then have lots of empty bins in the 0-16 range. This may be (and
almost definitely is) in part due to my inexact, empirical method, as
well as to the following:

Finally, I'm concentrating on the most difficult images first (dense
images with little contrast) because once they are taken care of the
rest is easy. Therefore, these results should not be taken as
representative because I'm really dealing with extreme images.
I can think of much easier ways of getting peace of mind than this Don.
I had thought you were trying to get the last vestige of shadow
information from the film in the correct proportion to the rest of the
image - rather than just make sure you would never see it.

I am trying to get the last vestige of shadow data. However, after I
have done that - depending on the image - I may or may not need to
boost (in post processing) to such extent that this data becomes
(glaringly) visible.

But the key is, I have obtained this shadow data, it's still there,
although - as you yourself explained - due to the 8-bit nature of displays
(as well as editing needs of a particular image) it may not be that
easy to see in the final product. However, the data has been obtained
and archived.
The only KC I have is 60's and 70's vintage, but the medium is
irrelevant in this context - no film I have results in empty histograms
below level 15 and bright pixel noise between levels 16 and 32. You are
also talking about single pixels, and it would be very unlikely that an
emulsion artefact would always be limited to single pixels, however a
scanner fault would. Seriously, if you are describing this accurately
then you are describing a fault condition, and not normal expectations.
Can you post some examples of this on a web page to show us what you are
describing?

Yes, I was just about to do that last time, but I thought I'd ask
about magnification first.

OK, I've uploaded a couple of image segments. They are very small (50
x 50 pixels) but at full resolution and 16-bit depth. Note that the
shadows scan has *not* been sub-pixel shifted so the two areas are
only approximately the same but not identical. Also, no editing has
been done on these segments. They are as received from the scanner,
only cropped:

http://members.aol.com/tempdon100164833/nikon/06_0.0.tif
http://members.aol.com/tempdon100164833/nikon/06_3.0.tif

06 is just the slide number on that roll, and 0.0 and 3.0 are the
respective AG/EV.
I assume that you are again referring to perceived banding after you
have applied some level shifting or gain to the image to pull out shadow
detail.

Yes, it's the banding. I created two images one using PS gamma and the
other using AMP gamma curves from:

http://www.aim-dtp.net/aim/download/gamma_maps.zip

To read more about this, from the main index page go to "Evaluations"
and then "Gamma induced errors" page(s).

You may already know this site, but it's been created by a
controversial Finnish guy who firmly believes all image editing should
be done in linear gamma. Anyway, he knocks Photoshop at every turn so
you'll feel quite at home there... ;o)

Anyway, I didn't actually tabulate the data or run these curves on a
gradient, but simply visually inspected a few test images. The PS
gamma images suddenly looked relatively "choppy" when compared to
images adjusted with above curves. They (curves adjusted images) also
appeared slightly "lighter" (!?) but that might have been my
subjective impression.

It's been a while since I've been to the above web site, but the gamma
errors he's referring to may actually be the slope limitations you
mention later on…!?

On a related tangent, once I got into AMP curves (if you know this
already then just ignore it) I found out elsewhere that AMP curves are
very easy to create. Unlike other PS adjustment files AMP files have
no complicated headers or footers. They are simply arrays of bytes.
The first 256 bytes correspond to the Master RGB curve, and the
subsequent 256-byte chunks correspond to channels (the first three
after Master would normally be R, G and B). When I played with this I
wrote a little VB routine to create them and also used my hex editor
for fine tuning. Only curves you supply are used, so you can create
AMP files with only the Master curve, for example.
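To illustrate just how simple the format is, a few lines of Python along
the same lines as that VB routine will write a Master-only AMP file for
the 2EV cut discussed elsewhere in this thread (the file name is
arbitrary, and the layout - raw 256-byte lookup tables, Master first -
is as described above):

    GAMMA, EV = 2.2, 2.0
    divisor = 2.0 ** (EV / GAMMA)
    master = bytes(min(255, round(i / divisor)) for i in range(256))
    with open("ev2_cut.amp", "wb") as f:
        f.write(master)            # 256 bytes: the Master RGB curve only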
If not - check your monitor calibration, monitor profiles are
notorious banding sources.

That's what I thought at first but then I serendipitously came across
these AMP curves, and since they do not show this banding, I concluded
it was the PS gamma which caused the banding and that the display was
fine.
I assume you are referring to contrast
enhanced shadow detail because I examined Photoshop gamma curves in
detail some time ago - indeed I have saved some of these in an Excel
spreadsheet for comparison purposes. (You might recall an earlier
discussion on the precision of PS & NS!). I can assure you that the PS
curve is perfectly smooth to the 15-bit input precision. The NS gamma
curve, whilst a different function, is accurate to 16-bit precision.

The only other thing I can think of is Adobe Gamma in the Control
Panel.
I am not laughing because that is exactly the process.

Hey, I got it! How does that joke go:

Even a broken clock is correct twice a day! ;-)
However it can
be reduced to a single step, as shown above, if the gamma curve is not
slope restricted because the steps reduce to the following:

1. x = data^gamma
2. y = x / 2^EV
3. z = y ^ (1/gamma)

Excellent! Now we're cooking!
Hence z = data / 2^(EV/gamma).

Yes, that's exactly what I was after!
However, you do need to know the details of the gamma curve you are
using - or at least have a good approximation of them, if you have slope
limitations, such as in the Photoshop implementation of gamma.

You read my mind! ;-)

The original gamma in the file is created by NikonScan, of course,
because I scan "raw" but with gamma 2.2! So...

Does NikonScan gamma also implement "slope limitations"?

And...

If yes, how do I compensate for that (if I need to), again using the
simple (istic...) example above, because the logic above is quite
clear to me?
Hope the above explains it.

Yup! It's perfectly clear now. Thanks very much, as always, Kennedy!!!
Yes, you can blur the whole image, which is fine if you think your
monitor resolution will never increase in the future
....
No, not the whole image, just the relevant area.
....
or you can apply
the blur manually to the transition edges after restoring the 16-bit
format - in which case standard monitor resolution will have increased
to these levels before you complete processing scans from your first
roll of film. ;-)

Bingo! ;-)

A contrarian troublemaker once said (facetiously) that sending
spaceships on an intergalactic trip is pointless because the
technology will always advance faster than the ship's speed, and each
subsequent vehicle is bound to overtake the previous one...

That's how I feel when "chasing shadows" (sic) with my LS-50!

At times, I just get sick of it all and consider simply scanning
using AE and leaving it at that. But before the day is out I start
fiddling... A little here, a little there, and - before I know it -
I'm back in the thick of it... Aaaarrrggghhh... Just when I think I'm
out, it pulls me back in... ;o)

Don.
 
K

Kennedy McEwen

Don said:
On Fri, 20 Aug 2004 01:12:24 +0100, Kennedy McEwen

We're only talking about a couple of pixels in the *transition area*
between the two layers which is blurred.

Of course, as I indicate later, if I simply blur that transition area
(in 16-bit mode) after I combine the layers then even this negligible
amount of image data in transition area will be unaffected.
I realise that it is the transition area that has been corrupted but,
unless you are prepared to work at it till hell freezes over, you won't
be able to blur all of those transitions manually. Since there is no
layer function in PS6/7 you have no alternative.
Depends on which channel but, in general, yes. Red reaches deeper
(fewer 0-counts) while blue is narrower (more 0-count bins) - as is to
be expected of Kodachromes. In the few difficult images I tried so far
I ended up clipping 10 to 15 levels (256 scale).
Ok, I *think* I see where the disparity is coming from here. Since you
are determined to "scan raw" and make all of your adjustments in PS, it
appears that you haven't done any black point compensation when
producing the scan before you are making this assessment. Errors in
black point compensation are exaggerated by the gamma correction curve,
due to the very high gains that occur in the shadow region - another
reason why Photoshop uses a slope limit.

Think about what the scanner is actually doing when it produces the
image. The first step is to calibrate the sensor - normalise the
responses of the CCD cells. This is achieved by viewing black and white
references inside the scanner. Ignore the white reference for the
moment. The data from the black reference is subtracted from all
subsequent scan data to produce the raw output, which is then gamma
compensated. However, if there is any difference between the black
reference and the darkest part of the slide, the black calibration will
produce a small black level offset in the raw data which is then amplified by the
gamma in the shadow region. Such a black offset is unavoidable, simply
due to stray light leakage paths within the hardware when actually
performing the scan. Since the black calibration is performed with the
LEDs off there is no light leakage to reference, so the black offset is
always positive in the raw data. The high gain of the gamma curve in
the shadows brings that up in level.

Unless you compensate for this black offset, either by sampling the mask
or, better, the unexposed film border, and subtracting this from your
image (ideally performed before gamma is applied, but the difference is
actually small unless the black offset is significant) then all of your
subsequent processing will be in error. That is why the black point
control is there!
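Schematically, the compensation amounts to no more than this (a sketch,
not Nikon's actual pipeline; the 14-bit full scale and the plain
subtraction are assumptions):

    GAMMA, FULL = 2.2, 16383.0            # 14-bit ADC assumed
    def encode(linear):
        return 65535.0 * (linear / FULL) ** (1.0 / GAMMA)
    def corrected(raw, black):
        # 'black' sampled from the mount or unexposed film border;
        # subtract before gamma so the offset is not amplified.
        return encode(max(raw - black, 0))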

Now, looking at your information, if all the channels have empty bins
below 16 and you are working with 2.2 gamma, then it would appear that
your black offset is actually a count of around 140 on the 14-bit ADC.
Whilst that is rather large, it is not impossibly so. From what you
have said above though, your red channel, which is more dense on KC
emulsions, has fewer than 16 empty bins, indicating a black offset which
is actually much less. Given the operation of the Nikon scanners, with
a single broad band response CCD, I would expect black offset to be very
similar in all three channels, so the difference you see between the
channels is almost certainly real density variations on the film.

However, before you do *any* subsequent processing (*especially* the
scan mixing that you have been attempting) you need to correct for the
black point of the scanner. You might also want to consider
multiscanning to get an accurate assessment of what the black point
actually is for the exposure that you are working with.
AE scans do not suffer from this to the extent described i.e. there
are pixels in 0-16 range.
I would expect AE scans to have a lower, but not significantly so, black
offset. Your "primary" scan has, presumably, been adjusted by
increasing the AG in each channel so that the highlights just fail to
saturate, however I would not expect that to be by very much, so the
difference in black offset between an AE and AG optimised scan should be
relatively small. What level of AG adjustment are you typically
applying to each channel to produce your primary scan?
Shadows scans *adjusted* to be brought down to the nominal scan range
(using my primitive empirical "method") do have empty bins.
These will have more empty bins because you have increased the CCD
exposure to the stray light. Consequently the black offset is
increased. You need to apply a different black point adjustment for the
shadow scan, but you can estimate it in the same way as for the primary
scan.
*However*, (and it's a big however!) this is almost certainly due to
my method.

That would appear so! Scanner software wins again! Of course you can
apply black point correction in PS later, but with half the precision
that you can do it in NikonScan. ;-)
Finally, I'm concentrating on the most difficult images first (dense
images with little contrast) because once they are taken care of the
rest is easy. Therefore, these results should not be taken as
representative because I'm really dealing with extreme images.
Clearly with dense images, stray light is more significant
proportionally, so black point compensation is even more important.
I am trying to get the last vestige of shadow data. However, after I
have done that - depending on the image - I may or may not need to
boost (in post processing) to such extent that this data becomes
(glaringly) visible.

But the key is, I have obtained this shadow data, it's still there,
although - as you yourself explained - due to the 8-bit nature of displays
(as well as editing needs of a particular image) it may not be that
easy to see in the final product. However, the data has been obtained
and archived.
Once you have corrected for blacks, I will be very surprised if you see
any difference at all on the display after combining these scans. You
get an extra bit of raw precision for every EV, so 3EV gives you an
effective 17-bit scan if you combine it correctly with the primary. Even
with an unlimited slope gamma, 14-bits produces less than 8-bit
quantisation for every count except the lowest. But you will find that
out in the long run.
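That quantisation claim is easy to test numerically, by comparing the
gamma-encoded step between adjacent 14-bit codes with one level of an
8-bit display scale (a quick sketch; by this simple arithmetic only the
first couple of counts exceed one 8-bit step):

    GAMMA, FULL14 = 2.2, 16383.0
    def enc8(raw):                  # 14-bit raw to the 8-bit display scale
        return 255.0 * (raw / FULL14) ** (1.0 / GAMMA)
    for raw in range(1, 5):
        print(raw, round(enc8(raw) - enc8(raw - 1), 2))
    # steps: ~3.09, ~1.15, ~0.86, ~0.71 - below one 8-bit level
    # from the third count onwards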
OK, I've uploaded a couple of image segments. They are very small (50
x 50 pixels) but at full resolution and 16-bit depth. Note that the
shadows scan has *not* been sub-pixel shifted so the two areas are
only approximately the same but not identical. Also, no editing has
been done on these segments. They are as received from the scanner,
only cropped

OK, got these. However I don't see any random bright pixels in the
normal scan even under extreme adjustments. On the contrary, examining
the data itself I can see random *dark* spikes in the red channel. In
particular, there are 5 cells which have *zero* data in the red channel,
making them appear cyan against a white background under very extreme
adjustments. More significantly though, your shadow scan looks *much*
softer - if you haven't applied any blur to this then I would be
concerned that the focus was different.
Yes, it's the banding. I created two images one using PS gamma and the
other using AMP gamma curves from:

http://www.aim-dtp.net/aim/download/gamma_maps.zip
I hope then, from the previous explanation (and the issues discussed
above) that you can see why Timo's curve appears to produce less banding
than the PS one - it will if you don't give it real shadow detail.
You may already know this site, but it's been created by a
controversial Finnish guy who firmly believes all image editing should
be done in linear gamma.

Timo's ramblings are legendary and mainly wrong, particularly his forte
on processing in linear space rather than perceptual space - nice
mathematically, but completely wrong in terms of how you see things, and
that is what matters. There are a few things he is quite correct on, but
he is so focussed on kicking the established methodology that they are
hard to distil from his output. This is made the more difficult because
some functions should be undertaken in a linear working space, such as
the scanner calibration itself etc.
Anyway, he knocks Photoshop at every turn so
you'll feel quite at home there... ;o)
Err, I don't knock Photoshop in general - just specific points, like its
claims for 16-bit precision and singular failure to deliver. ;-)
Anyway, I didn't actually tabulate the data or ran these curves on a
gradient, but simply visually inspected a few test images. The PS
gamma images suddenly looked relatively "choppy" when compared to
images adjusted with above curves. They (curves adjusted images) also
appeared slightly "lighter" (!?) but that might have been my
subjective impression.
I have just checked these and there is no visual banding on a
synthesised grey ramp on this machine using either gamma curve. I
suspect you are seeing an interaction with the curves and your monitor
profile in the colour management.

The amp file produces an unlimited slope gamma curve with 8-bit
precision, with linear slope segments to 16-bits. However, since it is
based on 8-bit data, both for the input and output levels, the precision
of the end points on the linear segments is limited, and this gives rise
to some clustering in the histogram of the converted data. This can,
and will as shown below, produce problems.
That's what I thought at first but then I serendipitously came across
these AMP curves, and since they do not show this banding, I concluded
it was the PS gamma which caused the banding and that the display was
fine.
They will if you bring the black level up after applying it. As
mentioned, unlimited gamma slope means you need to sacrifice more of the
black levels to avoid quantisation.
Does NikonScan gamma also implement "slope limitations"?
Yes, the slope is limited to a maximum of 21 by default rather than
design. The NS gamma curve is similar to the effect that you get using
an 8-bit amp curve on 16-bit data, in that the curve is approximated by a
series of 256 linear segments. However, being calculated with 16-bit
precision, including the segment end points, the results are *much*
smoother. This is pretty obvious if you compare histograms of ramps
processed by the three versions. The Nikonscan curve produces the
smoothness of the Photoshop curve without the discontinuity due to the
transition to a linear shadow region, yet doesn't produce any of the
quantisation limits of the amp implementation.

Note that the absence of a deliberate slope limit in NikonScan is less
noticeable because of the black offset inherent in the scanner
calibration process.
And...

If yes, how do I compensate for that (if I need to), again using the
simple (istic...) example above, because the logic above is quite
clear to me?

As I said above, I don't think you will even need to bother after you
have corrected the black point, but if you still want to continue
torturing yourself, you need to consider the effect of the linear slope
on gamma up to PS levels of around 50 or so. However, using the
methodology you are proposing, the best accuracy you can achieve is
still pretty crude. The data for your amp file is just the calculation
I provided in the previous post for the continuous function - but at
each of the 256 vertices of the 255 linear segments. Photoshop will
apply a linear interpolation to the data that falls between those
points. That is all accurate, but the data defining the vertices is
only 8-bit accurate, and this will introduce posterisation as you will
see.

Rounding the data/4^(1/2.2) function to 8 bit precision, the first few
terms of your amp file should be:
0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,9,9,10,10..
Note that only one 8 occurs, since 4^1/2.2 is 1.88, and this is where it
rolls over in the sequence.

For example, if you had data anywhere between level 16 and 17, that is
16-bit data between 4096 and 4351, this amp correction for the 2EV shift
would produce a 16-bit result of 256*9 = 2304 for all of those 256 input
values. This is because you have no slope between levels 9 and 10,
since there is insufficient resolution in an 8-bit value to quantify it.
The situation gets worse as you try to correct for higher EV adjustments
using this technique.

In fact, the required slope for 2EV can be seen directly from the
following real calculations at the ends of the relevant linear segment:
Level=16: Output = 8.520328 (16-bit value 2181)
Level=17: Output = 9.05285 (16-bit value 2318)
Hence slope = 0.532521
Thus, the data in the 16-bit range between 4096 and 4351 would be
expected to map to the range 2181 to 2318, rather than all be reproduced
at exactly 2304, as an 8-bit amp curve adjustment does. In short, the
amp methodology results in 8-bit quantisation effects. I suspect that
this quantisation will be worse than the improvement you are expecting
to produce - especially if you make the transition to modified shadow
scan in a region where the primary scan is producing valid data. Whether
you like the look of its histogram or not, it is still accurate to
14-bits and the amp transform will reduce it to 8-bit precision - and
not just in the transition areas, but everywhere in the image that the
shadow scan is relevant.
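The flat segment is easy to demonstrate by interpolating an 8-bit amp
table the way described above (a sketch; linear interpolation between
the 8-bit vertices is the assumption):

    divisor = 4 ** (1 / 2.2)                  # 2EV cut, gamma 2.2
    table = [round(i / divisor) for i in range(256)]
    print(table[16], table[17])               # 9, 9 - no slope between them
    for x in (4096, 4224, 4351):              # 16-bit data in that segment
        level = x / 256.0
        i, frac = int(level), level % 1.0
        out16 = 256 * (table[i] * (1 - frac) + table[i + 1] * frac)
        print(x, round(out16))                # all 2304, not 2181..2318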

As I suggested right at the beginning of this sub-thread, you can't do
what you want with Photoshop <=7 - and I don't even know if what you
want to do is possible with the required accuracy in CS.

The only way I can see of doing this is to scan in linear working space
with gamma=1, apply the 2EV correction there and *then* apply gamma to
get to flat perceptual space. Trouble is, if you use Timo's amp curves
you will encounter 8-bit quantisation again. The only transfer function
that does not introduce adjacent codes which are identical somewhere in
the range is the unity transfer. Unfortunately, going the AMP route,
the problem occurs just where you don't want it, in the shadows.

So, the proposed solution is...

you'll like this, not a lot, but... ;-)




Scan linear.
Scale and merge in linear space using Photoshop.
Save image as tif file.
Import into NikonScan.
Apply required gamma in NikonScan.
Save in glorious 16-bit colour (shame you had to go through that poxy
15-bit stage en-route) from NikonScan.
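Expressed as code, the scale-and-merge step is trivial once everything
is linear - a rough numpy sketch of the arithmetic (the array names, the
+3EV assumption and the hard threshold standing in for a blurred mask
are all illustrative):

    import numpy as np

    def merge_linear(primary, shadow, ev=3, threshold=2048):
        # primary, shadow: linear 16-bit arrays; shadow scanned at +ev EV.
        # Use the rescaled shadow scan wherever the primary is dark.
        scaled = shadow.astype(np.float64) / (2.0 ** ev)
        return np.where(primary < threshold, scaled,
                        primary.astype(np.float64))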

Fortunately, if you set the black point correctly in Nikonscan in the
first place, you won't need to worry about any of this ever again, since
everything that can be recorded in the 14-bit dynamic range of the
scanner you have will be perfectly reproduced in a perceptually flat
display space without having to go near that pesky 15-bit editing suite.
;-)
 
D

Don

I realise that it is the transition area that has been corrupted but,
unless you are prepared to work at it till hell freezes over, you won't
be able to blur all of those transitions manually. Since there is no
layer function in PS6/7 you have no alternative.

No, I just select the transition area in 16-bit mode and blur it. I
don't need layers to do that.

Maybe I'm not clear. This is what I do: I make a duplicate of the
16-bit image and reduce this duplicate to 8 bits. Next, I make the
selection of the desired histogram range using all of the tools
available in 8-bit mode. I end up with a selection of, say, everything
between 31 and 33. I save this selection. Next, I switch to the 16-bit
image and "Load Selection" saved in the 8-bit duplicate. Finally, I
apply Gaussian Blur to that selection only.

The "trick" (actually a function of PS) is that you can load a 16-bit
selection from the 8-bit duplicate because the two images have the
same dimensions. And since the histogram appears to be 8-bit accuracy
regardless of bit depth mode, there is no loss by making the selection
in 8-bit mode. Furthermore, since this area is being blurred anyway,
even if there were small selection inaccuracies they would be lost in
the blurring.
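For comparison, the same trick outside Photoshop is only a few lines -
schematically, with numpy and scipy standing in for the 8-bit duplicate,
the loaded selection and the Gaussian Blur (a single-channel 16-bit
image is assumed):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_range(img16, lo=31, hi=33, sigma=3.0):
        img8 = (img16 >> 8).astype(np.uint8)      # the 8-bit "duplicate"
        selection = (img8 >= lo) & (img8 <= hi)   # histogram-range selection
        blurred = gaussian_filter(img16.astype(np.float64), sigma)
        out = img16.astype(np.float64)
        out[selection] = blurred[selection]       # blur the selection only
        return np.clip(out, 0.0, 65535.0).astype(np.uint16)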
Ok, I *think* I see where the disparity is coming from here. Since you
are determined to "scan raw" and make all of your adjustments in PS, it
appears that you haven't done any black point compensation when
producing the scan before you are making this assessment.

That's correct. The reason I don't set the black point is twofold; one
practical, the other conceptual:

1. My early Kodachromes are mounted and I don't always have access to
unexposed film. Later I "got smart" and had the films developed
unmounted (which also eliminates vicious cropping done by Kodak's
mounts!!). However, I still have about 5-years worth of mounted
slides... :-(

2. I was always under the impression that setting the black and white
point (we're talking curves in NikonScan, right?) is performed after
the scan and I didn't want any post-processing to be done at this
stage (with the aforementioned exception of gamma). Now, granted,
NikonScan has higher precision but I didn't like the fact that I have
to make such a crucial decision at the scanning stage and then be
stuck with it.

NOTE: This only refers to cases where unexposed film is not available
(i.e. the 5 years worth of slides). The best I could do in that case
is choose the darkest part of the image to *act* as black point
reference. This is *very* unreliable because I don't even have
Threshold in NikonScan to find this darkest point (I can watch the
curves display as I move the mouse over the image, but that's only
guessing), not to mention it opens up a can of worms because such a
black point is susceptible to variations in image content (although I
realize this has a positive component too, in that it may/will remove
color casts). I don't mind doing this in PS later because I can
play/click around and always go back to the original scan. In
NikonScan, once scanned - that's that! To have another go (instead of
a casual click) I'd have to do the whole scan all over again...

There is also the possibility of some glaring omission on my part and
by "setting the black point" you're referring to something other than
NikonScan curves. If that is the case I would have no choice but to,
calmly, tear out all of my hair and then step out on the ledge... ;o)
Unless you compensate for this black offset, either by sampling the mask
or, better, the unexposed film border, and subtracting this from your
image (ideally performed before gamma is applied, but the difference is
actually small unless the black offset is significant) then all of your
subsequent processing will be in error. That is why the black point
control is there!

That I understand, but I'm under the impression this has nothing to do
with setting the black point in NikonScan curves. I was under the
impression that the black mask is subtracted from the image *before*
NikonScan curves ever get the chance to have a go at the image i.e.,
before the image leaves the firmware.

In that sense (other than the increased NikonScan accuracy) there was
no advantage to setting this Curves black point in NikonScan and "be
stuck with it". I prefer to be able to "play around" in PS without
having to re-scan each time in order to try a new black point setting.

I mean, when I was playing around with my digital camera I performed a
similar test once by taking a picture with the lens cap on (to be able
to extract the black mask) and then taking the actual picture,
followed by subtracting the black mask, etc.
However, before you do *any* subsequent processing (*especially* the
scan mixing that you have been attempting) you need to correct for the
black point of the scanner. You might also want to consider
multiscanning to get an accurate assessment of what the black point
actually is for the exposure that you are working with.

OK, there is an implied (very basic) question above already, but just
to be sure:

Do I understand you correctly i.e., are you saying I should set the
black point in NikonScan curves and that will, indeed, subtract the
black mask rather than merely "stretch" the histogram - which is what
the equivalent action in PS would do (accuracy differences aside)?

I was under the impression that the black mask was always subtracted
within the firmware before the image ever leaves the scanner, so I
must have misunderstood you.
I would expect AE scans to have a lower, but not significantly so, black
offset. Your "primary" scan has, presumably, been adjusted by
increasing the AG in each channel so that the highlights just fail to
saturate, however I would not expect that to be by very much, so the
difference in black offset between an AE and AG optimised scan should be
relatively small. What level of AG adjustment are you typically
applying to each channel to produce your primary scan?

Since I got the LS-50 I just let AE do everything. Due to increased
dynamic range (as well as Kodachrome mode) I found there was no need
for me to manually modify individual RGB AG as I was forced to do with
the LS-30. The only thing I do is set the clipping to 0% but (as can
be seen above) since I don't use NikonScan Curves, I thought that
would have no effect anyway.

So, the primary (nominal) scan is plain-vanilla AutoExposure (with
clipping at 0%, if that's relevant).
OK, got these. However I don't see any random bright pixels in the
normal scan even under extreme adjustments.

So I was right in the sense that we are perceiving things differently.
I'm "disturbed" by those glaringly-cyan and light-orange pixels which
stand out from the dirty-brown background.
On the contrary, examining
the data itself I can see random *dark* spikes in the red channel. In
particular, there are 5 cells which have *zero* data in the red channel,
making them appear cyan against a white background under very extreme
adjustments.

Those are precisely what I described as "randomly colored pixels".
More significantly though, your shadow scan looks *much*
softer - if you haven't applied any blur to this then I would be
concerned that the focus was different.

I noticed that too! My "explanation" was that the normal AE scan was
showing random noise which, by its nature, is not contiguous but
random. Because of that those individual bright pixels only *appear*
sharp because they stand out against the dark background.

There is another possibility. I have made sure NikonScan does not
re-focus between scans, and watched the display during the scans to
confirm there is no re-focusing, but you never know...

Witness the need to turn everything off when switching between AE and
manual exposure. I've documented this on the LS-30 and confirmed it
again with the LS-50 recently. That's not only two different scanners
(LS-30 & LS-50), two different software versions (NS3 & NS4), two
different connections (SCSI & USB) but also two different OSes (W98 &
W2K) because NS4 doesn't run under W98.

However, you can't replicate this so I have no idea what's going on
over here. The only thing I know is that - just to be on the safe side
- I turn everything off when making any global changes like that.
Unfortunately, I can't do that between normal and shadow scans.
Timo's ramblings are legendary and mainly wrong ....
Err, I don't knock Photoshop in general - just specific points, like its
claims for 16-bit precision and singular failure to deliver. ;-)

I know, I know... I was just kidding... ;-)
So, the proposed solution is...

you'll like this, not a lot, but... ;-)

You're right! ;-)

Seriously though, without going into details, it's the linear scan I
have the most difficulty with (constantly switching display gamma, my
instincts regarding exposure being based on 2.2, etc., etc.)
Scan linear.
Scale and merge in linear space using Photoshop.
Save image as tif file.
Import into NikonScan.
Apply required gamma in NikonScan.
Save in glorious 16-bit colour (shame you had to go through that poxy
15-bit stage en-route) from NikonScan.

Out of left field (and really just out of curiosity)...

I know that gamma does not correspond to the scaling I need to do
before I can merge the two scans, but is there a gamma curve that
would come close to the required scaling curve?

It's a loaded question, because if there is a similar gamma curve, I
could use it in NikonScan to "darken" the shadows scan and only
perform the actual merge in PS. I would still get the 15-bit
"trimming" but if this gamma produces less errors than darkening in PS
I may salvage yet another fraction of accuracy.

But that's really, really, being picky and - even if possible - it's
probably not worth the effort. At this point, I better take my own
advice and pull back to look at the big picture i.e. the context
instead of getting bogged down in minutiae...
Fortunately, if you set the black point correctly in Nikonscan in the
first place, you won't need to worry about any of this ever again, since
everything that can be recorded in the 14-bit dynamic range of the
scanner you have will be perfectly reproduced in a perceptually flat
display space without having to go near that pesky 15-bit editing suite.
;-)

It's like Windows, we all hate it but have no choice... ;-)

Don.
 
K

Kennedy McEwen

Don said:
On Sat, 21 Aug 2004 11:20:07 +0100, Kennedy McEwen

Maybe I'm not clear. This is what I do: I make a duplicate of the
16-bit image and reduce this duplicate to 8 bits. Next, I make the
selection of the desired histogram range using all of the tools
available in 8-bit mode. I end up with a selection of, say, everything
between 31 and 33. I save this selection. Next, I switch to the 16-bit
image and "Load Selection" saved in the 8-bit duplicate. Finally, I
apply Gaussian Blur to that selection only.

OK - I had forgotten that feature was available in 16-bit images.
Presumably you feather the selection before applying your blur though.
That's correct. The reason I don't set the black point is twofold; one
practical, the other conceptual:

1. My early Kodachromes are mounted and I don't always have access to
unexposed film. Later I "got smart" and had the films developed
unmounted (which also eliminates vicious cropping done by Kodak's
mounts!!). However, I still have about 5-years worth of mounted
slides... :-(
The issue is stray light within the scanner itself. You are likely to
get as much stray light from the slide mount as from anywhere else. What
you lose from using the mount as a reference though is the ability to
compensate for the true black transmission through the slide itself and
any difference in the density of the three dye layers - black colour
balance, effectively. I am pretty certain that setting the black point
on the slide mask will reduce your black offset considerably - and all
that should be left afterwards is real film transmission.
2. I was always under the impression that setting the black and white
point (we're talking curves in NikonScan, right?) is performed after
the scan and I didn't want any post-processing to be done at this
stage (with the aforementioned exception of gamma). Now, granted,
NikonScan has higher precision but I didn't like the fact that I have
to make such a crucial decision at the scanning stage and then be
stuck with it.
I know this is the concept that you are attempting to follow, but it is
impractical. All of the processing, analogue gain excluded, is
performed after the physical scan - so it is all on the digital data.
That includes the gamma, the application of calibration coefficients,
the lot. Better to have that all implemented in one single calculation
which has sufficient bit overhead than save an interim result that has
black/white calibration and gamma applied, reduce the saved result to
15-bit precision and then apply the black point correction. Although
there is nothing to prevent you from taking that approach the results
are measurably inferior.
NOTE: This only refers to cases where unexposed film is not available
(i.e. the 5 years worth of slides). The best I could do in that case
is choose the darkest part of the image to *act* as black point
reference. This is *very* unreliable

The point is *not* to use an area of the image. Either use the mount or
unexposed film area if possible.
I don't mind doing this in PS later because I can
play/click around and always go back to the original scan. In
NikonScan, once scanned - that's that! To have another go (instead of
a casual click) I'd have to do the whole scan all over again...

There is also the possibility of some glaring omission on my part, and
that by "setting the black point" you're referring to something other
than NikonScan curves. If that is the case I would have no choice but
to, calmly, tear out all of my hair and then step out on the ledge...
;o)
No, I do mean black point in the Curves section - avoiding this appears
to be the source of your problems. The black point cursor is your
friend.
That I understand, but I thought it had nothing to do with setting the
black point in NikonScan curves. I was under the impression that the
black mask is subtracted from the image *before* NikonScan curves ever
get a chance at it, i.e. before the image leaves the firmware.

You say tomaytoe, I say tomato...

By "mask" in the above paragraph I mean any opaque area surrounding the
frame which masks off the film from the full scanned area. In the case
of mounted slides, that would be the mount, in the case of unmounted
strips that would be the side of the FH-3 or the SA-21.

I am not so sure that the calibration is actually performed in the
firmware these days. It certainly was with the LS-30 and earlier
scanners, which also implemented the gamma correction and all other
processing in a built-in firmware LUT. But having changed their
approach to built-in processing in the scanner itself, with a
full-depth data transfer to the PC, it would be more cost-effective to
do it in the driver and get rid of the firmware processing entirely.
In that sense (other than the increased NikonScan accuracy) there was
no advantage to setting this Curves black point in NikonScan and being
"stuck with it". I prefer to be able to "play around" in PS without
having to re-scan each time in order to try a new black point setting.

Getting rid of excess black level as soon as possible is the right way
to go about it and you don't need to worry about "being stuck with it" -
there will still be enough light getting through the deepest blacks on
the film to register data if you set the black point on the mount or the
film holder. If you get a chance to use the unexposed film area then
you know that the black is true film black as recorded by the film.
OK, there is an implied (very basic) question above already, but just
to be sure:

Do I understand you correctly, i.e. are you saying I should set the
black point in NikonScan curves, and that this will indeed subtract
the black mask rather than merely "stretch" the histogram - which is
what the equivalent action in PS would do (accuracy differences
aside)?
Yes, it will stretch the histogram, as you would expect from your
familiarity with PS. However, I don't see what worries you about this
step. You already apply the black and white calibrations (for dark
current and CCD response normalisation) after the CCD output is
digitised, and they stretch the histogram - differently for each
element in the device. You apply gamma as well, which stretches and
compresses the histogram according to the level. Slightly changing how
that histogram is stretched by defining a suitable black point is
neither here nor there in terms of image quality - and better to do it
all at once with the other computations in the scanner driver, where
there is adequate bit overhead, than in some external application that
has fewer bits than it thinks anyway.
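
To make the per-element stretch concrete, a schematic numpy sketch
with invented calibration numbers (real scanners calibrate thousands
of elements, not four):

    import numpy as np

    # Invented dark-current and white-reference levels for a tiny
    # 4-element "line sensor".
    dark  = np.array([40.0, 43.0, 41.0, 39.0])
    white = np.array([60000.0, 59500.0, 60200.0, 59800.0])

    line = np.array([12000.0, 11980.0, 12040.0, 11890.0])  # one raw line

    # Normalise each element to a common 0..65535 scale - a histogram
    # stretch that differs for every element in the device.
    corrected = (line - dark) / (white - dark) * 65535.0
    print(corrected.round())
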
So I was right in the sense that we are perceiving things differently.
I'm "disturbed" by those glaringly-cyan and light-orange pixels which
stand out from the dirty-brown background.


Those are precisely what I described as "randomly colored pixels".
So we weren't perceiving anything differently, just describing it
differently. However, these bright pixels, as you refer to them, are
dark spots in reality.
I noticed that too! My "explanation" was that the normal AE scan was
showing random noise which, by its nature, is not contiguous but
scattered. Because of that, those individual bright pixels only
*appear* sharp because they stand out against the dark background.
But your shadow scan has longer exposure and therefore should have more
noise too, although it is difficult to separate noise from signal
contrast on a single scan. Interestingly though, I had a look at how
the raw data compared in both the 2.2 gamma space you posted them in and
also after accurately converting back to unity gamma. (Data all
exported to Excel for numerical analysis). In unity gamma, the mean and
median of the +3EV scans were roughly 4x the 0EV scan, whilst in 2.2
gamma space they were roughly in the ratio of 1.85:1. These figures
are so close to what I would expect for +2EV that I wonder whether
that is what you actually used for the shadow scan, rather than the
+3EV you intended.
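
The arithmetic behind that suspicion, as a quick sketch: a +2EV scan
is a factor of 4 in linear space and 4^(1/2.2) is about 1.88 in 2.2
gamma space, both a near-perfect match for the measured 4x and 1.85:1.

    # Expected brightness ratios of a +N EV scan over the 0EV scan.
    for ev in (2, 3):
        linear = 2 ** ev
        print(f"+{ev}EV: x{linear} linear, "
              f"x{linear ** (1 / 2.2):.2f} in gamma 2.2")
    # +2EV: x4 linear, x1.88 in gamma 2.2  <- matches the measurements
    # +3EV: x8 linear, x2.57 in gamma 2.2
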
You're right! ;-)

Seriously though, without going into details, it's the linear scan I
have the most difficulty with (constantly switching display gamma, my
instincts regarding exposure being based on 2.2, etc., etc.).
Well, I guess that without the gamma slope stretching the contrast in
the shadows you wouldn't have enough overhead to get much advantage out
of mixing the two scans in Photoshop anyway. 15 bits is the limit, and
you are trying to stretch the performance in the shadows to the
equivalent of 17 bits through the +3EV shadow scan.

It is tempting to have a go at writing a specific application to do this
myself, working in linear space, using 32-bit arithmetic, applying the
gamma with full precision and then truncating the result to 16-bits
before outputting as a tiff file. Unfortunately I just haven't got the
time at the moment.
Out of left field (and really just out of curiosity)...

I know that gamma does not correspond to the scaling I need to do
before I can merge the two scans, but is there a gamma curve that
would come close to the required scaling curve?
The closest I think you can get with any precision is the function I
gave you previously for the continuous curve gamma:
Merge data = Original Data / 2^(EV / gamma).

This is easy enough to apply just using levels to adjust the output
scaling from the 255 default. So, for example, for 2.2 gamma and +3EV
shadow scan, just reduce the output level from 255 to 99. For gamma=2.2
and a +2EV shadow scan, reduce it from 255 to 136. There will be
residual errors due to the linear segmented gamma used, but these might
be acceptable. You could even apply this output scaling directly in
Nikonscan as you make the shadow scan itself, when you implement the
black point setting, all in one go - before saving the file, or
importing it into PS via twain, if that's how you are working.

There might be a better way, but I can't see how to overcome the linear
segments in the gamma curves with adequate accuracy.
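
That rule is easy to sanity-check with a small helper (nothing here
beyond the formula above):

    # Levels output value to enter in place of the 255 default, from
    # Merge data = Original Data / 2^(EV / gamma).
    def output_level(ev, gamma=2.2, full_scale=255):
        return round(full_scale / 2 ** (ev / gamma))

    print(output_level(3))   # 99  for a +3EV shadow scan at gamma 2.2
    print(output_level(2))   # 136 for a +2EV shadow scan at gamma 2.2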
 
D

Don

OK - I had forgotten that feature was available in 16-bit images.
Presumably you feather the selection before applying your blur though.

Yes, once I have the selection then I can modify it freely because the
8-bit selection matches the 16-bit image perfectly (unlike my merging
via 8-bit where the image is stretched horizontally).
The issue is stray light within the scanner itself. You are likely to
get as much stray light from the slide mount as from anywhere else. What
you lose from using the mount as a reference though is the ability to
compensate for the true black transmission through the slide itself and
any difference in the density of the three dye layers - black colour
balance, effectively. I am pretty certain that setting the black point
on the slide mask will reduce your black offset considerably - and all
that should be left afterwards is real film transmission.

That's exactly what I mean! Since the slide is mounted there is no
unexposed film edge to be used for calibration. With some rolls,
however, I may have a slide which was unexposed (which the automatic
Kodak mounting machine mounted anyway) and then I have a reference.
But I was usually pretty good at squeezing up to 38 shots from a roll
so unexposed slides are not that common.

I also realize the problems with using the cardboard edge as a
reference. It may even skew things more than it helps because it's
slightly transparent.

Once I'm out of the woods, and get to unmounted slides, then things
should get easier.
All of the processing, analogue gain excluded, is
performed after the physical scan - so it is all on the digital data.
That includes the gamma, the application of calibration coefficients,
the lot. Better to have that all implemented in one single calculation
which has sufficient bit overhead than save an interim result that has
black/white calibration and gamma applied, reduce the saved result to
15-bit precision and then apply the black point correction. Although
there is nothing to prevent you from taking that approach, the results
are measurably inferior.

Your idea of loading the image back into NikonScan to do the gamma
prompted another thought! Since NikonScan can read saved PS curves, an
option may be to do all the editing in PS with its multitude of handy
utilities, such as Threshold (although I use my non-weighted version),
as well as inspect the image at 10 gazillion % magnification as I seem
to like to do...

Once happy with the result, I would then save the relevant curves
(and, indeed, other settings) as files and import them into NikonScan
to be applied to the original image using NikonScan's full 16-bit
glory! ;o)

BTW, don't NikonScan curves also suffer from 8-bit precision problems?
I mean, the histogram appears to be only 8-bit, with 256 bins.
The point is *not* to use an area of the image. Either use the mount or
unexposed film area if possible.

I know. But since I seem to have a dark subject in almost every roll,
some of those deep, deep shadows are very close to unexposed...
But your shadow scan has longer exposure and therefore should have more
noise too, although it is difficult to separate noise from signal
contrast on a single scan. Interestingly though, I had a look at how
the raw data compared in both the 2.2 gamma space you posted them in and
also after accurately converting back to unity gamma. (Data all
exported to Excel for numerical analysis). In unity gamma, the mean and
median of the +3EV scans were roughly 4x the 0EV scan, whilst in 2.2
gamma space they were roughly in the ratio of 1.85:1. These figures
are so close to what I would expect for +2EV that I wonder whether
that is what you actually used for the shadow scan, rather than the
+3EV you intended.

It's possible... Normally, I'm very careful not to mislabel my scans,
but that very slide was scanned just before I surrendered to the heat
wave. Another possibility is that, since I did not remove the slide
between scans, the scanner did not have a chance to re-calibrate. I
had just turned the scanner on, so the temperature must initially have
risen quite fast.

Anyway, that may explain my next question. I used the AG reduction
formula and wrote a little routine to generate all the AMP curves (the
0.1 to 5.0 range, which should be more than enough). When I applied
the 3 EV curve, the modified (shadow) image was slightly darker than
the nominal scan.

But I just now repeated it with the 2 EV curve and that does look much
better! So I probably did mislabel the scan as my brain was frying in
the heat!? I think I'd better throw away anything I've done during
that time...

I also noticed a green cast in the shadow image, while the nominal
scan is distinctly dirty-brown. It actually permeates the whole image,
not just the shadows. Is this color mismatch to be expected?

BTW, looking at the EV reduction AMP curves in Photoshop I was
surprised: I expected curves but I see straight lines. I checked the
calculations but I don't see any errors. Is that right - should the
generated curves really be straight lines? My numbers for 2 EV seem to
correspond to your example from last time:
Rounding the data/4^(1/2.2) function to 8 bit precision, the first few
terms of your amp file should be:
0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,9,9,10,10..

so, I don't think I made any errors.
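
Straight lines are, in fact, exactly what the formula predicts:
dividing by the constant 2^(EV/gamma) is a linear map, so each "curve"
is a line through the origin with slope 1/2^(EV/gamma). A sketch of
the table generation (assuming - verify before relying on it - that a
Photoshop .amp file is simply 256 raw output bytes):

    # Generate the 256-entry map: out = round(in / 2^(EV/gamma)).
    def amp_table(ev, gamma=2.2):
        scale = 2 ** (ev / gamma)
        return [round(i / scale) for i in range(256)]

    table = amp_table(2)   # the +2EV reduction curve
    print(table[:20])      # 0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,9,9,10,10

    with open("minus2ev.amp", "wb") as f:   # hypothetical file name
        f.write(bytes(table))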

Finally, you noticed last time that the shadows scan was slightly
blurred. I forgot to mention that the part of the image I uploaded is
from one of the corners. So - due to film curvature - that part of the
image is indeed a little out of focus (although that should equally
apply to both scans).
It is tempting to have a go at writing a specific application to do this
myself, working in linear space, using 32-bit arithmetic, applying the
gamma with full precision and then truncating the result to 16-bits
before outputting as a tiff file. Unfortunately I just haven't got the
time at the moment.

Tell me about it!!

I find all this very fascinating and hate that I have to do it
head over heels. I really wish I had the time to read the literature
methodically and sink my teeth into it properly instead of my current
"buffet approach" of parachuting into areas and just getting the
minimum I need to do the task at hand...

As I myself always say (with disdain!) about such an approach: a
little knowledge is a dangerous thing! :-(
The closest I think you can get with any precision is the function I
gave you previously for the continuous curve gamma:
Merge data = Original Data / 2^(EV / gamma).

This is easy enough to apply just using levels to adjust the output
scaling from the 255 default. So, for example, for 2.2 gamma and +3EV
shadow scan, just reduce the output level from 255 to 99. For gamma=2.2
and a +2EV shadow scan, reduce it from 255 to 136. There will be
residual errors due to the linear segmented gamma used, but these might
be acceptable. You could even apply this output scaling directly in
Nikonscan as you make the shadow scan itself, when you implement the
black point setting, all in one go - before saving the file, or
importing it into PS via twain, if that's how you are working.

I run NikonScan stand-alone.

Actually, my initial attempts (before my "histogram synchronization"
method using curves on a range of values where the two scans should
"meet") were indeed to modify Output in Levels!!!

I won't bore you with my method (again, empirically based), but even
though I was getting reasonable results matching the brightness of the
two scans, I still had a major problem with mismatched colors.

Don.
 
