Minolta 5400 or Coolscan V


Kennedy McEwen

Don said:
Indeed, it was your reference to a partially painted car which during
the day looked perfect in the shop, but parked outside under street
lighting clearly showed every spot touched up in the shop.
Ooh, I sold it - on a nice sunny day, to a used car salesman. ;-)

It was subsequently bought by a staffer at the Consumer Association who
contacted me to congratulate me on its upkeep and maintenance, so I had
to 'fess up that it wasn't perfect. ;-)
 

Hecate

My 5400 scan time increases exponentially as each of these features is
turned on incrementally: ICE+GD, multi-sampling, and higher Exposure
Control tab settings. Time for a coffee break. Fine for those who scan
leisurely, but definitely not for those who need to bulk scan or meet a
deadline.
No. For that you need one of the expensive Nikons :)

--

Hecate - The Real One
Fashion: Buying things you don't need, with money
you don't have, to impress people you don't like...
 

Don

Ooh, I sold it - on a nice sunny day, to a used car salesman. ;-)

ROTFL! Very good! ;o) Sweet revenge, eh?
It was subsequently bought by a staffer at the Consumer Association who
contacted me to congratulate me on its upkeep and maintenance, so I had
to 'fess up that it wasn't perfect. ;-)

Two for two! The irony of a consumer advocate not noticing something
like that! ;o)

Don.
 

Don

As shown in the characteristic curves, there is a difference in the
slopes of the three color channels. A correction to remove the cast
would be to make the curves more parallel and coincident.

But how would you do that reliably and objectively?

Conceptually, one would create an "inverse" curve to the
characteristic curve to produce a "parallel and coincident"
relationship between channels, i.e. turn it into a linear
relationship.

In theory, I believe that Nikon Kodachrome mode tries to do something
like that. However, since it's applied "dogmatically", it fails to
sufficiently compensate any scan where the *absolute* exposure
deviates from nominal (i.e. <> 0)! Which practically means all slides.

The catch (in my experience) is that whenever one tries to apply pure
theory (the "dogmatic" approach) to scanning, there is always a
"gotcha": scanning practice throws in a monkey wrench because scanning
is *multi* dimensional. That's why I favor a self-corrective, adaptive
method which doesn't (only) deal with the "should" (of a single
dimension) but simultaneously takes into account the "is" (of *multiple*
dimension *interconnections*) as well!

Specifically, based on numerous tests it's been my feeling for a long
time that some sort of *self-correcting* & *adaptive* method based on
exposure, in addition to the characteristic curve, is what's needed.

For example - and these are very rough figures for illustration purposes
only (!) - taking the *absolute* nominal exposure (with Kodachrome
mode on) and then applying a 50% boost in red and a 25% boost in green
analog gain virtually eliminates the casts and produces a scan
seemingly identical to the slide. Or, more accurately, it stretches the
dynamic range enough to enable post-processing with maximum quality.

A couple of examples to better explain what I mean. If the optimal
Kodachrome mode exposure for a slide is +2.0 master analog gain I
would add +1.0 analog gain boost to the red channel and +0.5 to the
green channel. Conversely, a nominal exposure of +1.0 master would
require +0.5 red and +0.25 green to achieve optimal results.

Again, these numbers are not to be taken literally but only illustrate
the type of self-correcting, adaptive solution I'm referring to and
which I've been after. So "work in progress" and all other caveats
apply.
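
To make the idea concrete, here is a rough sketch (Python, purely
illustrative, using the same made-up 50%/25% ratios as above - the real
ratios would have to come from testing) of what an exposure-dependent
channel boost could look like:

# Illustrative sketch only: scale the per-channel analog gain boosts in
# proportion to the master analog gain. The 0.5/0.25 red/green ratios
# are the rough example figures from above, not measured values.
def kodachrome_channel_boost(master_gain, red_ratio=0.5, green_ratio=0.25):
    """Return (red, green, blue) analog gain boosts for a given master gain."""
    return (master_gain * red_ratio, master_gain * green_ratio, 0.0)

print(kodachrome_channel_boost(2.0))   # -> (1.0, 0.5, 0.0)
print(kodachrome_channel_boost(1.0))   # -> (0.5, 0.25, 0.0)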
A white balance correction of the scanner RGB exposures would only move
the curves up and down relative to one another (in the log-log domain)
without achieving coincidence at more than one point.

That single dimensional nature is exactly the problem of trying to use
the linear analog gain to correct the non-linear characteristic curve.

Don.
 

Crames

...Conceptually, one would create an "inverse" curve to the
characteristic curve to produce a "parallel and coincident"
relationship between channels, i.e. turn it into a linear
relationship.

You probably don't want to completely invert the curves, as that will
remove all of the tone reproduction properties of Kodachrome. What I
would do is tilt the curves to make them parallel. Varying the
exponent of a power curve will do this, which is a gamma adjustment.
At the same time you can adjust the overall gamma to reduce the extra
contrast in the film that is intended to compensate for a dark viewing
surround. Then scale the channels (levels adjustment) to make them
overlap.
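
Something along these lines (a Python sketch with placeholder gamma and
scale values - the real numbers would have to be read off the
characteristic curves):

import numpy as np

# Sketch only: per-channel gamma ("tilt") plus a levels-style scale so
# the channels overlap. The gammas and scales below are placeholders.
def tilt_and_scale(img, gammas=(1.03, 1.0, 1.0), scales=(1.0, 1.0, 1.0)):
    """img: float RGB array in [0, 1], shape (H, W, 3), linear scanner data."""
    out = np.empty_like(img)
    for c in range(3):
        out[..., c] = np.clip(img[..., c] ** gammas[c] * scales[c], 0.0, 1.0)
    return out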
...In theory, I believe that Nikon Kodachrome mode tries to do something
like that. However, since it's applied "dogmatically", it fails to
sufficiently compensate any scan where the *absolute* exposure
deviates from nominal (i.e. <> 0)! Which practically means all slides.

Maybe they're using a lookup table. Depending on the shape of the LUT,
if you change the exposure you end up indexing into the wrong part of
the table.
...The catch (in my experience) is that whenever one tries to apply pure
theory (the "dogmatic" approach) to scanning, there is always a
"gotcha": scanning practice throws in a monkey wrench because scanning
is *multi* dimensional. That's why I favor a self-corrective, adaptive
method which doesn't (only) deal with the "should" (of a single
dimension) but simultaneously takes into account the "is" (of *multiple*
dimension *interconnections*) as well!

I don't really understand what you mean by "self-corrective, adaptive
method."
...A couple of examples to better explain what I mean. If the optimal
Kodachrome mode exposure for a slide is +2.0 master analog gain I
would add +1.0 analog gain boost to the red channel and +0.5 to the
green channel. Conversely, a nominal exposure of +1.0 master would
require +0.5 red and +0.25 green to achieve optimal results.

Again, you need to tilt the curves. All you're doing with the analog
gain is moving the curves up and down, changing the one point on the
grayscale at which the RGB densities are equal, while causing
divergence at other densities.

Rgds,
 

Hecate

Yes, the likes of wedding photographers can pass that onto their
customers. <g>


hahahaha!


--

Hecate - The Real One
Fashion: Buying things you don't need, with money
you don't have, to impress people you don't like...
 

Don

You probably don't want to completely invert the curves, as that will
remove all of the tone reproduction properties of Kodachrome. What I
would do is tilt the curves to make them parallel. Varying the
exponent of a power curve will do this, which is a gamma adjustment.
At the same time you can adjust the overall gamma to reduce the extra
contrast in the film that is intended to compensate for a dark viewing
surround. Then scale the channels (levels adjustment) to make them
overlap.

Yes, of course, to invert the curves literally would be equivalent to
turning the Kodachrome mode off. That's why I put "invert" in quotes
to indicate an attempt to make them linear.

However, the problem with this approach in general is that it relies
on post-processing, while my goal has been to scan raw and have this
raw scan either reflect what's on the film faithfully without
introducing artifacts (preferred option, but virtually impossible), or
failing that, at the very least extract maximum dynamic range (without
any further corruption of data) so when I do get down to proper
post-processing I have enough elbow room.

To put things in context, I've been at this, literally, for years...
Maybe they're using a lookup table. Depending on the shape of the LUT,
if you change the exposure you end up indexing into the wrong part of
the table.

As I wrote recently in another thread, I have, indeed, extracted the
LUTs:

http://members.aol.com/tempdon100164833/nikon/P2K.amp

http://members.aol.com/tempdon100164833/nikon/P2K-R.jpg
http://members.aol.com/tempdon100164833/nikon/P2K-G.jpg
http://members.aol.com/tempdon100164833/nikon/P2K-B.jpg

However, as can be seen there, it's a complex curve, apparently
incorporating both the characteristic Kodachrome curve and some
processing, which sort of gets me back to square one. Unraveling this
curve (in order to amplify it as a function of exposure) is just as
excruciating as applying a correction from scratch.

Not to mention, it gets me ever further from the raw scan (digital
negative) goal...
I don't really understand what you mean by "self-corrective, adaptive
method."

An empirical method which (semi) automatically takes into account
*all* aspects of scanning and does so flexibly rather than, for
example, just dogmatically focusing on one and ignoring the rest.

Nikon's Kodachrome mode is a perfect example of this because it only
"corrects" for Kodachrome *if* and *when* the (absolute!) exposure is
0, blissfully ignoring the fact that for any non-0 exposure the
"correction" is totally inadequate. The further away the exposure is
from 0 the more useless IMHO the Nikon's Kodachrome "correction" is.

A "self-corrective, adaptive method" would not only correct for
Kodachrome but take into account different exposures (and everything
else...) by making the Kodachrome correction a *function* of exposure,
thereby maintaining the color balance and remaining immune to exposure.

Right now, each "Kodachrome mode" exposure has a different color
balance so, for example, combining multiple scans is impossible
without adjusting the color balance first. A "proper" Kodachrome
correction would *not* be dependent/predicated on any one single
exposure but automatically adapt to whatever the exposure may be.

Sure, all that can be corrected in post-processing, but that's not the
point if one tries to scan raw for archival purposes.

Another example would be setting the black point. A BP equivalent of
Nikon's Kodachrome mode would arbitrarily declare a place on the
histogram as the "black point", say 10, and then just apply this "BP"
blindly regardless of what the histogram really looks like. So, any
scan where the true black point is below that value would be clipped,
while any scan where the true black point is above it would be
insufficiently corrected.

A self-corrective, adaptive method looks for the first non-zero value
and uses that as the black point instead of a fixed "dogmatic" point.
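
In code, that adaptive black point is nothing more than this (a rough
sketch, assuming 8-bit data for simplicity):

import numpy as np

# Sketch: instead of a fixed "dogmatic" black point (say 10), find the
# first populated histogram bin and use that. Assumes one 8-bit channel.
def adaptive_black_point(channel):
    hist = np.bincount(channel.ravel(), minlength=256)
    return int(np.nonzero(hist)[0][0])   # first non-zero bin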
Again, you need to tilt the curves. All you're doing with the analog
gain is moving the curves up and down, changing the one point on the
grayscale at which the RGB densities are equal, while causing
divergence at other densities.

Which is precisely the problem! Using a linear setting (analog gain)
to correct a non-linear problem (Kodachrome bias) and at the same time
extract the maximum (!) amount of raw data without "corrupting" it with
post-processing at such an early stage (the raw scan is intended to be
archived as a digital negative).

It's an impossible task which is why I've been agonizing trying to, at
the same time, minimize corruption and extract the most raw data. And
do that using objective criteria rather than subjective guesswork.

In general terms, the solution is to apply analog gain to emulate
Kodachrome compensation i.e. get the closest match to the
characteristic curve, and do so as a function of exposure to maintain
color balance as much as possible. The catch is finding an *objective*
method to achieve this *without* requiring the operator to make
subjective decisions.

The end result will still require "fine tuning" in post-processing but
such adjustments are far less radical than the adjustments needed when
the blunt Nikon Kodachrome setting is used, not to mention "correcting
the Nikon's Kodachrome correction" is cumulative!

Don.
 

Kennedy McEwen

Don said:
However, the problem with this approach in general is that it relies
on post-processing

As do all applications of gamma correction to CCD scans.
, while my goal has been to scan raw

So, you're stuffed - looking to achieve the impossible. You just can't do
what you are looking for *and* apply a raw scan.
and have this
raw scan either reflect what's on the film faithfully

Err... you don't like a faithful reproduction of what's on the film, it's
too blue (and gets bluer in the shadows, as the characteristic curve
shows).
without
introducing artifacts (preferred option, but virtually impossible), or
failing that, at the very least extract maximum dynamic range (without
any further corruption of data) so when I do get down to proper
post-processing I have enough elbow room.
Don, if you haven't got enough elbow room using your technique to extend
the linear range of scan to about 20 bits or so then there isn't any
hope!

Seriously, your raw scans ought to have more than enough "elbow room" to
implement this as a post scan process without the slightest hint of an
artefact. We are only looking at a slope for the red layer that is 3%
higher than that of the green or blue in the linear region - and that is
the gamma difference that is needed. I suspect that the gamma variation
between the channels of your monitor or printer will swamp this -
although in the case of the monitor it probably isn't a gamma variation
but a mis-set black point.

In the toe region, deep in the shadows, it is a bit more significant,
and in the opposite direction, with the red being flatter than the green
and blue channels by about 40%, but I suspect that you don't really need
to worry too much about that if you get the exposure balance correct for
the shadows.
 

Bart van der Wolf

SNIP
The manual is rather confusing regarding the "auto expose for
slides" preference setting.

Ah manuals, more odd stuff when looking for explanations.
On p.32, it says, "When using autoexposure, adjustments
[of the Exposure Control tab] are made in reference to the
exposure determined by the AE system." I take this to mean
that with the "auto expose for slides" preference setting ON,
the scanner will use its autoexposure system, which is also
its default.

I interpret it as, relative to the auto calibration at start-up (no
film in the light path yet).
From there on, a user can tweak the Exposure Control tab
for more adjustments if necessary.
Since my understanding is that a scanner's autoexposure
will attempt not to clip either highlights or shadows, I leave
the "auto expose for slides" preference setting ON.

So do I, assuming correct non-clipping behavior. It should accommodate
the brightest film areas in the crop area. That would also include
the film base color for color negatives, scanned as "positives".
Scans of both Kodachrome and Fujichrome without blown
out highlights or shadows come out very well with this
setting.

As could be expected.

Bart
 

Kennedy McEwen

Since my understanding is that a scanner's autoexposure will attempt not
to clip either highlights or shadows, I leave the "auto expose for
slides" preference setting ON.

No, it cannot possibly do both. The autoexposure will attempt not to
clip the highlights. Autoexposure does not take the shadows into
account at all. How it copes with the shadows is merely the
consequence of the relative dynamic range of the scanner and the slide.
Scans of both Kodachrome and Fujichrome
without blown out highlights or shadows come out very well with this
setting.

They come out acceptably, but the shadows are inevitably buried in
scanner noise rather than clipped, particularly in Kodachrome which has
extremely dense shadows - beyond what is possible to achieve with a
16-bit dynamic range scanner.

The data for Fuji Velvia shows the density pretty much limiting out at
around 3.8, whilst for Provia it is flat at about 3.6. Kodachrome data
sheets, on the other hand, don't show the curves at their density limit,
but it is still rising at 3.8 on K25, for example. Assuming this limits
somewhere around 4 or just above, you need at least a 14-bit ADC just to
quantise it - so 16 bits only provide about 4 discrete levels in the
shadows, if the scanner noise is low enough to get there - and it isn't.
Scanner noise is usually a few bits higher than the ADC quantisation
noise.
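
For anyone who wants to check the arithmetic, it is simply this (a quick
Python sketch of the figures above):

from math import log2

# Density D attenuates the light by a factor of 10**D, so quantising one
# level at density D needs about log2(10**D) bits, and a 16-bit ADC has
# roughly 2**16 / 10**D levels left down there (before scanner noise).
for D in (3.8, 4.0, 4.2):
    print(D, round(log2(10**D), 1), round(2**16 / 10**D, 1))
# 3.8 -> ~12.6 bits needed, ~10 levels;  4.0 -> ~13.3 bits, ~6.6 levels;
# 4.2 -> ~14.0 bits, ~4.1 levels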

The densest shadows on KC are inevitably limited by the noise of current
desktop scanner technology, if the highlights are exposed correctly. You
can prove this by allowing the highlights to saturate and increasing the
exposure by a few stops. You will see more detail in the shadows than
by post processing a scan with the correct exposure to produce the same
brightness.

Cue Don and his double exposure scan process. ;-)
 

Markus Malmqvist

Good to hear from another Minolta user.



My 5400 scan time increases exponentially as each of these features is
turned on incrementally: ICE+GD, multi-sampling, and higher Exposure
Control tab settings. Time for a coffee break. Fine for those who scan
leisurely, but definitely not for those who need to bulk scan or meet a
deadline.

Yeah... But at least I am seeing significantly increased noise levels in the
right part of the scan when the scan time is long. I guess somebody explained
that this happens at least partly because the temperature of the scan unit
climbs during the scan. For me, there could also be some calibration issue.

Increased noise levels in this context do not mean that the noise is
automatically visible, since it is easily below the level where details
begin to show on a calibrated CRT.
Don't understand what you said, or how you determine that "CCD color
sensors begin to accumulate erroneous base charge from left to right".
Care to clarify?

I mean that, for example, a poorly calibrated, or perhaps physically faulty,
red CCD pixel starts to permanently accumulate charge. As a result, when the
scan progresses to the right, a red line slowly emerges. I have verified in
PS using layers that this error seems to increase in a linear fashion from
left to right.

With long scan times those faulty CCD pixels can become VERY visible. I
just mask them out, since there is a relatively small number of them.
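
For what it's worth, the "masking out" is nothing fancier than replacing
the known bad lines with their neighbours, something like this Python
sketch (the line indices are hypothetical - each unit will have its own
list, if any):

import numpy as np

# Sketch: replace known-bad sensor lines with the average of the two
# neighbouring lines. BAD_LINES is hypothetical; whether they are rows
# or columns in the image depends on the scan orientation.
BAD_LINES = [1234, 2345, 3456]

def mask_bad_lines(img, bad=BAD_LINES):
    """img: array of shape (H, W) or (H, W, 3); bad lines get interpolated."""
    out = img.copy()
    for r in bad:
        out[r, :] = (img[r - 1, :].astype(np.float64) + img[r + 1, :]) / 2
    return out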
I also noticed, that even with "default" exposure (automatic slide
autoexposure OFF and general exposure HW control at zero) the light
somewhat penetrates the unexposed black part of a developed slide. At
least with Fuji Sensia 100. I was left wondering how much exposure in
fact is useful when trying to reach a result with enough quality
information? Does the "partly transparent" black actually have anything
to do with this?

The manual is rather confusing regarding the "auto expose for slides"
preference setting. On p.32, it says, "When using autoexposure,
adjustments [of the Exposure Control tab] are made in reference to the
exposure determined by the AE system." I take this to mean that with the
"auto expose for slides" preference setting ON, the scanner will use its
autoexposure system, which is also its default. From there on, a user
can tweak the Exposure Control tab for more adjustments if necessary.
Since my understanding is that a scanner's autoexposure will attempt not
to clip either highlights or shadows, I leave the "auto expose for
slides" preference setting ON. Scans of both Kodachrome and Fujichrome
without blown out highlights or shadows come out very well with this
setting.

The autoexposure tries to expose to the right without blowing highlights.
But I have noticed that in PS, after applying a Minolta profile and
converting to the working profile, some channel can still be clipped.
Therefore with autoexposure I usually use about -0.3 compensation. I guess
the exposure is based on the pre-scan.

--markus
 

Don

As do all applications of gamma correction to CCD scans.

That's different. Gamma needs to be applied so whether it happens in
scanner software or afterwards doesn't matter (assuming both work to
the same precision). So, that gamma post-processing is a given.
However, due to the cumulative nature of editing, I do want to
minimize other, avoidable post-processing, as much as I can.
So, you're stuffed - looking to achieve the impossible. You just can't do
what you are looking for *and* apply a raw scan.

Yes, that's exactly what I said in closing ("preferred option, but
virtually impossible").

So, since what I want *is* impossible the next best thing is to
minimize the negative and maximize the positive. The only question now
is to what degree? And there I do have a tendency to go to extremes
(probably due to inertia and as a function of years of frustration)
and I acknowledge that.
Err... you don't like a faithful reproduction of what's on the film, it's
too blue (and gets bluer in the shadows, as the characteristic curve
shows).

The issue is that the scan is not a faithful reproduction of what's on
the film (not that the film is not a faithful reproduction of
reality). Even though the film itself nominally has a blue cast (as
illustrated by its characteristic curve) the real problem is this cast
gets disproportionately amplified in the process of scanning.

Looking at the film and the scan, side by side, clearly shows the
difference. In other words, the blue cast in the film is virtually
imperceptible to the naked eye, while the blue cast in the scan is
very objectionable (even with the Kodachrome mode on). And it gets
worse the higher the analog gain when the film is scanned (an AG boost
for which the Kodachrome mode doesn't even try to compensate).

Without getting into all the reasons why that is - as we have over the
years - because that's not going to change the laws of physics... ;o)

So, I'm left with the pragmatic approach to do the best I can given
the available conditions and the desired requirements.

Another thing which is very important here is the scale of improvement
I have achieved over the years because that puts things into
perspective.

The results I'm getting these days are way over (!!) my initial
requirements when I started all this. Indeed, they are unbelievable by
comparison! It's just human nature that each time a major problem is
solved, the next problem in line (which until then appeared minor)
gains in relative (perceived) importance. Especially if previous
improvements were the result of hard, tortuous and long struggle.

So my current "problems" are to be taken with all that in mind. I'm
really splitting hairs now, as you correctly identified.
Don, if you haven't got enough elbow room using your technique to extend
the linear range of scan to about 20 bits or so then there isn't any
hope!

Actually, it depends on the definition of "enough". In relative terms,
I do agree, there's plenty of dynamic range with my current process.

What I was referring to is in absolute terms (the "splitting hairs"
bit) i.e., since I went through all that trouble, I might just as well
go the extra mile (the extra millimeter, actually) and squeeze out the
very last drop.

The reason is that I don't want to (yet again and for the umpteenth
time!) get halfway done only to realize there's something simple and
trivial to implement that I could have done to improve things, although,
I do acknowledge, the relative merits of such an improvement may be
questionable. But... once bitten, twice shy.
Seriously, your raw scans ought to have more than enough "elbow room" to
implement this as a post scan process without the slightest hint of an
artefact. We are only looking at a slope for the red layer that is 3%
higher than that of the green or blue in the linear region - and that is
the gamma difference that is needed. I suspect that the gamma variation
between the channels of your monitor or printer will swamp this -
although in the case of the monitor it probably isn't a gamma variation
but a mis-set black point.

In the toe region, deep in the shadows, it is a bit more significant,
and in the opposite direction, with the red being flatter than the green
and blue channels by about 40%, but I suspect that you don't really need
to worry too much about that if you get the exposure balance correct for
the shadows.

Your analysis is correct, of course, confirming that I am splitting
hairs now.

As regards the toe region, the example histograms I posted are of a
"nominal" scan. In other words, that toe region data will be replaced
with the corresponding toe data from the "shadows" (boosted) scan
which, therefore, should be comparable in quality to the mid-tones and
highlights of the nominal scan.

Don.
 

Don

The densest shadows on KC are inevitably limited by the noise of current
desktop scanner technology, if the highlights are exposed correctly. You
can prove this by allowing the highlights to saturate and increasing the
exposure by a few stops. You will see more detail in the shadows than
by post processing a scan with the correct exposure to produce the same
brightness.

Cue Don and his double exposure scan process. ;-)

Bingo! ;o)

Now that I've been tagged: until we get scanners with ~18+ bits of
dynamic range, the only way to penetrate those dense KC shadows is,
indeed, to scan twice, once for highlights and once for shadows, and then
combine the relevant parts of the two scans to create a composite image.
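
In very rough terms the combine step itself is no more exotic than this
kind of thing (a Python sketch, not my actual code - the gain factor and
the threshold are placeholders, and matching the two exposures is where
all the real work is):

import numpy as np

# Sketch: merge a nominal scan (good highlights) with a boosted scan
# (good shadows). Both assumed linear 16-bit, same geometry. GAIN and
# THRESHOLD are placeholders for illustration.
def combine_scans(nominal, boosted, gain=4.0, threshold=4096):
    nominal = nominal.astype(np.float64)
    boosted = boosted.astype(np.float64) / gain   # back to the nominal scale
    shadows = nominal < threshold                 # where the nominal scan is noisy
    out = np.where(shadows, boosted, nominal)
    return np.clip(out, 0, 65535).astype(np.uint16)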

I do hope OP has lots of hair, though, because once he gets into
trying to reconcile the color balance of the two scans et al, there
will be a lot of hair pulling... ;o)

Don.
 

Cliff Rames

Yes, of course, to invert the curves literally would be equivalent to
turning the Kodachrome mode off. That's why I put "invert" in quotes
to indicate an attempt to make them linear.

I meant "invert" as in create a LUT or function that takes you back to
linear
scene intensities. You would lose the toe, shoulder, and tone shaping that
contribute to the unique look of Kodachrome.
... Nikon's Kodachrome mode is a perfect example of this because it only
"corrects" for Kodachrome *if* and *when* the (absolute!) exposure is
0, blissfully ignoring the fact that for any non-0 exposure the
"correction" is totally inadequate. The further away the exposure is
from 0 the more useless IMHO the Nikon's Kodachrome "correction" is.

That's why I suggested adjusting the individual RGB gammas. You should
be able to use the same gamma adjustment no matter the exposure.
A "self-corrective, adaptive method" would not only correct for
Kodachrome but take into account different exposures (and everything
else...) by making the Kodachrome correction a *function* of exposure,
thereby maintaining the color balance and remaining immune to exposure.

Right now, each "Kodachrome mode" exposure has a different color
balance so, for example, combining multiple scans is impossible
without adjusting the color balance first. A "proper" Kodachrome
correction would *not* be dependent/predicated on any one single
exposure but automatically adapt to whatever the exposure may be.

What happens if you combine multiple exposures with "Kodachrome
mode" OFF? Then try a gamma correction to the combined scan?

Rgds,
Cliff

p.s. I didn't see your post because for some reason it hasn't shown up in
Google Groups. After seeing Kennedy's response I had to fire up the
newsreader to find it on the local news host.
 

JJackson382

Now that you put your comments in context, everything makes sense.

Markus said:
I mean that, for example, a poorly calibrated, or perhaps physically faulty,
red CCD pixel starts to permanently accumulate charge. As a result, when the
scan progresses to the right, a red line slowly emerges. I have verified in
PS using layers that this error seems to increase in a linear fashion from
left to right.

With long scan times those faulty CCD pixels can become VERY visible. I
just mask them out, since there is a relatively small number of them.

You were referring to a defective 5400 and not 5400s in general. I have
never noticed this problem on my 5400.
I also noticed, that even with "default" exposure (automatic slide
autoexposure OFF and general exposure HW control at zero) the light
somewhat penetrates the unexposed black part of a developed slide. At
least with Fuji Sensia 100. I was left wondering how much exposure in
fact is useful when trying to reach a result with enough quality
information? Does the "partly transparent" black actually have anything
to do with this?

The manual is rather confusing regarding the "auto expose for slides"
preference setting. On p.32, it says, "When using autoexposure,
adjustments [of the Exposure Control tab] are made in reference to the
exposure determined by the AE system." I take this to mean that with the
"auto expose for slides" preference setting ON, the scanner will use its
autoexposure system, which is also its default. From there on, a user
can tweak the Exposure Control tab for more adjustments if necessary.
Since my understanding is that a scanner's autoexposure will attempt not
to clip either highlights or shadows, I leave the "auto expose for
slides" preference setting ON. Scans of both Kodachrome and Fujichrome
without blown out highlights or shadows come out very well with this
setting.

The autoexposure tries to expose to the right without blowing highlights.

That is my understanding as well.
But I have noticed that in PS, after applying a Minolta profile and
converting to the working profile, some channel can still be clipped.
Therefore with autoexposure I usually use about -0.3 compensation. I guess
the exposure is based on the pre-scan.

Again you need to put this in context. The clipping is showing up in
scans of what kind of images?

I use the same workflow as you do. As already stated, scans of images
without blown out highlights and shadows will not show any clipping.
But images that have blown out highlights and shadows will have clipped
channels in the scans. If the clipping is not too severe, adjusting
the Exposure Control tab will fix it.
 

Don

I meant "invert" as in create a LUT or function that takes you back to
linear
scene intensities. You would lose the toe, shoulder, and tone shaping that
contribute to the unique look of Kodachrome.

If you mean work in linear gamma (1.0) I could do that by skipping the
gamma application step in my scanning program. After all, the raw data
coming off the scanner is linear.

However, even though working in linear gamma simplifies things from a
practical standpoint (simpler calculations) the real problem here is a
conceptual one because I'm really trying to do the impossible.

The Kodachrome curve is complex and trying to emulate it with Analog
Gain alone just can't be done. So, what I was trying to do is find an
Analog Gain approximation of the K-curve and then ascertain whether
correcting this approximation causes less "damage" to raw data than
simply using the Kodachrome mode in the first place. It appears it
does not, so this particular question could very well be academic.
Actually, after Analog Gain boost the blue channel is on the money
while green deviates only very slightly but red is way out to lunch...

But I hasten to add again, I'm really splitting hairs here. The
Kodachrome mode is not as big a problem as it may appear from my
writings because - due to my twin-scanning method - I have plenty of
dynamic range for the correction. It's "big" to me, but only because
I've been at this for such a long time and I want to get to the bottom
of it.

Also, at this early stage in the process I don't like "subjective
approximations" but want to have an *objective method* which can be
applied to all slides regardless of content and without thinking.

Only after such "pure" data has been safely archived will I let my
"artistic talents" run wild and make subjective edits...
That's why I suggested adjusting the individual RGB gammas. You should
be able to use the same gamma adjustment no matter the exposure.

That's basically my current (fallback) method. Actually, the curve is
a bit more complex than gamma, but that's the gist of it.

The catch with that method is twofold. For one, it involves
"subjective guesswork" in determining the adjustment and I like to
take the operator completely out of the loop at this stage. Two, such
secondary adjustments compound the amount of post-processing.

Therefore, finding a single and "better Kodachrome curve" to replace
the Nikon K-mode is one way of tackling both of the above.

As I said - at this stage - I'm driven by two goals: get a true
representation of the film (so it can be archived) but do so with
minimum (ideally, no) post-processing.
What happens if you combine multiple exposures with "Kodachrome
mode" OFF? Then try a gamma correction to the combined scan?

That works just as well - or as badly... Namely, whether the K-mode is
on or off, the boosted scan has the same color balance mismatch to the
nominal scan. That color shift is just inherent.

But since the way I combine scans uses a "self-corrective, adaptive
method" it doesn't matter whether the Kodachrome mode is on or off.

Indeed, because the method I use is adaptive and self-correcting, I
could even combine a non-Kodachrome scan with a Kodachrome scan!!! And
do so seamlessly, without feathering and without any operator
intervention!

The thing is the Nikon's Kodachrome mode (whether applied before or
after) just doesn't go far enough so I wanted to boost it, but do so
while observing my two "prime directives": raw data and maintaining
data integrity (as much as possible).

In the end, and because the K-mode appears to be all software anyway,
I may just end up scanning as Positive and leave it at that.

While such scans will have the utterly disgusting blue cast - which
makes me cringe and my skin crawl - that's the "best" the scanner can
do... You can't break the laws of physics! ;o)
p.s. I didn't see your post because for some reason it hasn't shown up in
Google Groups. After seeing Kennedy's response I had to fire up the
newsreader to find it on the local news host.

No problem, sometimes my feed is flaky and the messages may show up a
day later. Also, Google groups are officially beta anyway... ;o)

But seriously, I'm really splitting hairs here, so please don't feel
obliged to respond.

Don.
 

Markus Malmqvist

Now that you put your comments in context, everything makes sense.



You were referring to a defective 5400 and not 5400s in general. I have
never noticed this problem on my 5400.

Yeah, I still have a couple of months to send mine for service. Since I have
no high hopes about the quality of service, I have delayed this. And the
defect is fairly minor. If it is visible, I map out something like ten
lines out of 5300, a minor issue really. But still irritating.

There is actually also another reason for service. Sometimes when moving the
holder just prior to autofocusing or prescan, the mechanism can somehow
"miss", and there is a louder noise than usual, and the holder will not
move. It never happens while prescanning or scanning. I guess the speed of
the holder is greatest in these situations.

Also the mechanism for pulling the slider into the scanner is somewhat
hit-and-miss. Rarely does it grab it immediately; sometimes I need to push
a bit.
I also noticed, that even with "default" exposure (automatic slide
autoexposure OFF and general exposure HW control at zero) the light
somewhat penetrates the unexposed black part of a developed slide. At
least with Fuji Sensia 100. I was left wondering how much exposure in
fact is useful when trying to reach a result with enough quality
information? Does the "partly transparent" black actually have anything
to do with this?

The manual is rather confusing regarding the "auto expose for slides"
preference setting. On p.32, it says, "When using autoexposure,
adjustments [of the Exposure Control tab] are made in reference to the
exposure determined by the AE system." I take this to mean that with the
"auto expose for slides" preference setting ON, the scanner will use its
autoexposure system, which is also its default. From there on, a user
can tweak the Exposure Control tab for more adjustments if necessary.
Since my understanding is that a scanner's autoexposure will attempt not
to clip either highlights or shadows, I leave the "auto expose for
slides" preference setting ON. Scans of both Kodachrome and Fujichrome
without blown out highlights or shadows come out very well with this
setting.

The autoexposure tries to expose to the right without blowing highlights.

That is my understanding as well.
But I have noticed that in PS, after applying a Minolta profile and
converting to the working profile, some channel can still be clipped.
Therefore with autoexposure I usually use about -0.3 compensation. I
guess the exposure is based on the pre-scan.

Again you need to put this in context. The clipping is showing up in
scans of what kind of images?

I probably should examine this further. I have a clear recollection of
setting the main exposure slider to 0 in the Minolta Scanning utility for a
16bit linear scan, and after applying the Minolta Posi-linear profile in PS
and converting to Adobe RGB, the histogram showed clipping of one color
channel. I think this was not visible in the Minolta software after prescan.
I usually do not have blown highlights in my scans. But you are right, this
should be validated.

BTW, I am positive that in some cases using 16bit linear provides notably
better color accuracy in PS compared to a 16bit scan.

--markus
 

Don

Err... you don't like a faithful reproduction of what's on the film, it's
too blue (and gets bluer in the shadows, as the characteristic curve
shows).

The issue is that the scan is not a faithful reproduction of what's on
the film (not that the film is not a faithful reproduction of
reality).

Further to this... I ran a few preliminary tests (visual estimation
only).

I just scanned the following negative film *as positive* (!!): 5035
i.e. Kodacolor II (100 speed).

Comparing the characteristic curve of this film to the characteristic
curve of Kodachrome 64 clearly shows that the 5035 curves are far
more "off" than KC 64 curve. The difference between the 3 RGB curves
is far more pronounced for the 5035, while KC curves (although they
wave a little) are almost identical, comparatively speaking.

And yet, looking at the scans and the film (light table) side by side
the negative scan is pretty much identical to the film while KC scan
is way off (the notorious blue cast)!?

This is the opposite of what the characteristic curve would suggest
and it appears to indicate that the characteristic curve doesn't seem
to play such an important part (if it did, it is the negative that
should have a much bigger cast, not the KC).

Two questions: Why is that, and what causes this apparent contradiction?

The only variable appears to be the scanner itself!?

Don.
 

Kennedy McEwen

Don said:
Further to this... I ran a few preliminary tests (visual estimation
only).

I just scanned the following negative film *as positive* (!!): 5035
i.e. Kodacolor II (100 speed).

Comparing the characteristic curve of this film to the characteristic
curve of Kodachrome 64 clearly shows that the 5035 curves are far
more "off" than KC 64 curve. The difference between the 3 RGB curves
is far more pronounced for the 5035, while KC curves (although they
wave a little) are almost identical, comparatively speaking.
Not if you understand the curves. The K-II curves are significantly
separated in density, but they are all pretty much parallel. The KC
curves have similar average densities, but different slopes. The
difference between the K-II curves is just an exposure difference as far
as the scanner is concerned, to shift them up and down on the chart. The
difference between the KC curves is a gamma variation, which can only be
matched at one particular density using exposure adjustment only. So,
as far as the scanner is concerned, the K-II curves are much *more*
matched than the KC curves.
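
To put that in symbols, idealising the straight-line part of each curve
as

    D_c(\log_{10} E) = \gamma_c \log_{10} E + D_{0,c} ,

an exposure (analog gain) change multiplies E by some constant k and so
only shifts the curve vertically, by \gamma_c \log_{10} k. If the
\gamma_c are equal, as for the K-II layers, one gain per channel can
superimpose the curves everywhere; if the \gamma_c differ, as for KC, a
gain change can only make the curves cross at one density, and a
per-channel power-law (gamma) correction is needed to match the slopes.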
And yet, looking at the scans and the film (light table) side by side
the negative scan is pretty much identical to the film while KC scan
is way off (the notorious blue cast)!?

This is the opposite of what the characteristic curve would suggest
and it appears to indicate that the characteristic curve doesn't seem
to play such an important part (if it did, it is the negative that
should have a much bigger cast, not the KC).

Two questions: Why is that, and what causes this apparent contradiction?

The only variable appears to be the scanner itself!?
I don't think you can make a meaningful comparison like that by eye,
Don, because the K-II film has a very deep orange base (and hence
overall cast) whilst the KC has a neutral base with almost no
significant cast. You simply cannot judge small colour casts, of the
level that you are seeing in the KC scan, by eye in the presence of a
major colour cast such as the orange on K-II.

I don't know if you ever worked with colour photographic printing
equipment, but if you did then you would know that judging the colour
cast and the corrective filtration necessary by eye is impossible at the
gross scale. It requires a step by step process, eliminating the major
casts first, before then being able to determine the level of filtration
necessary to correct any residual minor cast. That is one reason why
aided methods, such as integrating or spot colour analysers, became a
standard requirement for colour darkroom printing. Even then, getting a
perfect print first time was really only something that happened in the
advertising literature - but the analyser would get you in the right
region and usually an acceptable print first time.
 
