VueSCAM!

Don

It is a very sensible approach. No wonder, since it comes from Nikon.
:)

Oh, Nikon has its problems (Kodachrome!) but calibration is something
they did right.
I have the habit of ("manually") recalibrating my Minolta (within MSU
or Vuescan) each hour or so.

I do it even more frequently, especially when the temperature fluctuates
a lot. It's very convenient because, unlike with the Minolta software, I
don't have to turn the scanner off; I just remove the media and click a button.

Don.
 
Kennedy McEwen

Fernando said:
Certainly looks promising.
I was too lazy to completely get rid of random noise from the dark
frame, so I just switched on 4x multisampling during the acquisition.

The filter would have been quicker - multiscanning is a slow process,
even on Nikon scanners. Once set up, the custom filter is quick and easy
to use, and the Motion Blur filter already exists in Photoshop, so it is
easy to apply.
I guess this is one of the reasons why I get a higher black point on
the corrected scan, and slightly less saturated colors.
I'll investigate this later (any suggestions?)

Close examination of your results shows that this isn't actually the
case. As expected, the black level is slightly lower on the corrected
scan - after all you have subtracted something from it, so unless your
dark scan was actually negative it can't raise the black level. ;-)

However, you are correct that the black level *appears* to be lighter.
This is simply because you have more noise present and this is also what
is causing the saturation to reduce. By using the same multisampling
for both the main exposure and the dark scan you have equal noise in
each scan, although these are uncorrelated. When you then subtract the
dark scan you simply add the noise of the two scans in quadrature, thus
increasing its amplitude by sqrt(2), or x1.414. This effectively reduces
the noise performance of the scan to approximately what you would expect
from a 2x multiscan result if the Vuescan calibration was working.
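
If you want to convince yourself of the quadrature argument numerically,
here is a quick sketch (toy numbers, not Fernando's actual data):

import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0                                 # per-scan temporal noise, arbitrary units

scan = rng.normal(1000.0, sigma, 1_000_000)  # main exposure
dark = rng.normal(100.0, sigma, 1_000_000)   # dark scan, uncorrelated noise

print((scan - dark).std() / sigma)           # ~1.414, i.e. sqrt(2)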

By implementing the filter on the dark scan by one of the methods that I
suggested originally, you should be able to overcome the effect.
However, there is a catch that I just recalled - Photoshop doesn't have
16-bit precision, so you are only going to get the noise in your dark
scan down to a 15-bit base. That should still be better than what you
currently have with the x4 multiscan route, but it won't be as perfect
as if, say, Vuescan did the job properly.

The more I see of the results from you guys the more I am convinced that
Vuescan simply isn't taking enough samples in its calibration process.
 
Kennedy McEwen

Don said:
Ah! OK, now it makes sense!

But it does make VueScan look even worse if actually (and, I'm hoping,
inadvertently) it *amplifies* this *residual* noise which under normal
circumstances shouldn't even be visible!
Not really. Vuescan implements a scanner *independent* calibration,
which should be valid with all scanners, storing the results offline in
a separate calibration file. However, if you go into the mathematics of
calibration processes (and I have to admit to over 25 years of
experience of doing exactly this in my professional field!) every
calibration process is a compromise between degradation and improvement.
If you have a perfect calibration to start with then any attempt to
improve on that will degrade the result. The best that you can hope to
achieve is that the degradation is negligible. This effectively comes
down to noise and its spectral (ie. time domain, not colour)
characteristics.

For example, suppose you have a perfect CCD with no dark current
variation between the cells at all - there may still be dark current,
but all the cells have the same amount. The best results would be
produced without *any* calibration at all. As soon as you attempt to
implement a calibration you freeze the random temporal noise which is
present on the device and impose that as a pseudo-residual dark current
noise on the calibrated output. The best you can hope to do is to
reduce the effect of that temporal noise - through averaging many
samples - to a sufficiently low level that its imposition on the output
becomes insignificant. In a real CCD that noise may not need to be
averaged very much to make a significant improvement on the dark current
variability, but on this ideal device it can only degrade it - and the
issue then is how much averaging is needed to ensure that the
degradation is imperceptible.
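
As a toy illustration of that averaging argument (a sketch with made-up
noise levels, not real scanner figures):

import numpy as np

rng = np.random.default_rng(0)
sigma, n_frames = 10.0, 64

# a calibration frozen from a single dark frame keeps the full temporal noise
single = rng.normal(0.0, sigma, 100_000)
# averaging many dark frames suppresses it by 1/sqrt(N) before it is frozen in
averaged = rng.normal(0.0, sigma, (n_frames, 100_000)).mean(axis=0)

print(single.std())    # ~10.0
print(averaged.std())  # ~10/sqrt(64) = ~1.25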

In another example, you may have a CCD with a very low median noise for
all of the elements which is, say, a factor of 10 better than typical
CCDs. However, due to the limitations of the process, the outliers, which
are inevitably present in all devices, are just as poor as those of
typical CCDs. In this case, you need to be 10x more precise in your
handling of noise on the high performance CCD than you are with the
typical CCD, although intuitively you might believe that it would be
easier because it produces more accurate data to begin with.

Looking at Fernando's results, it is clear that he has eliminated the
lines by the process implemented. However he has also increased the
noise - something he noticed by an apparent increase in black level and
loss of colour saturation. In fact, he got rid of the lines because the
subtracted dark current value was different (by the temporal noise on
the CCD) at every pixel in the image - even though the image pixel
resulted from the same CCD cell, a different dark compensation value was
subtracted from it. If he had used the same data, with the same noise
amplitude (eg. by taking one row of that dark scan) for removing the
residual dark variation on every pixel in the image, he would have ended
up with a worse result than he started with. However that is exactly
what a conventional calibration, whether internal on the Nikon/Minolta
hardware or in the Vuescan software, would do - use the same data to
subtract the dark level on each pixel produced by a particular CCD cell.
That is why it is important to filter the dark scan, to reduce this
temporal noise adequately.
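
In code terms, the filtered approach might look something like this
hypothetical numpy sketch (assuming the scan direction runs down the
image rows, so each column comes from a single CCD cell):

import numpy as np

def dark_correct(image: np.ndarray, dark: np.ndarray) -> np.ndarray:
    # One dark value per CCD cell (column), averaged over every scan line,
    # so the temporal noise frozen into the reference is heavily suppressed.
    dark_row = dark.astype(np.float64).mean(axis=0)
    out = image.astype(np.int32) - np.rint(dark_row).astype(np.int32)
    return np.clip(out, 0, 65535).astype(np.uint16)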

So, summarising, the algorithm that Ed has developed for typical
scanners, including the Nikons, may simply have met its match with the
performance of the Minolta CCD. It was, for example, the first true
16-bit scanner - which means that more than 16-bit precision will be
required to accumulate the dark reference scan in order to reduce the
spatial manifestations of frozen temporal noise below the quantisation
noise threshold of the raw device.
 
Fernando

The filter would have been quicker - multiscanning is a slow process,

Yup, but it's a "fire and forget": I started the scan and went to make
a coffee. :)
Ok, ok, I'm too lazy. :)
Close examination of your results shows that this isn't actually the
case.

I manually lowered the black point on the final result to better match
the "before" scan (for display purposes), but in the original the
shadows are quite a bit lighter, and the colors somewhat less saturated.
I'll post the crop from the original soon.
Your explanation about the noise that is adding up makes sense, of
course.
By implementing the filter on the dark scan by one of the methods that I
suggested originally, you should be able to overcome the effect.
However, there is a catch that I just recalled - Photoshop doesn't have
16-bit precision, so you are only going to get the noise in your dark
scan down to a 15-bit base. That should still be better than what you
currently have with the x4 multiscan route, but it won't be as perfect
as if, say, Vuescan did the job properly.

I'll implement the blurring filter and try again. I'm also
contemplating the idea of writing a small utility that performs the
operations from the command line using a graphic library, so I can
script the whole process on a batch of images.
That is, if Hamrick chooses to ignore my email about all this... a
better calibration routine in Vuescan would be a far better thing. :)

Thanks again, Kennedy!

Fernando
 
Bart van der Wolf

SNIP
With Minolta Scan Utility, yes, I power cycle the scanner
and restart MSU, because there is no way to "force" a recalibration.

CTRL+Shift+I will reinitialize the scanner from within the Minolta
Scan Utility.

Bart
 
Fernando

New version, having averaged the dark frame along the scan direction
to lower the random noise (I applied the custom averaging filter 8
times, as suggested by Kennedy):

http://gundam.srd.it/PhotoPages/images/5400_vuescan_darkframesub_02.jpg

This time, I did not try to compensate for different black levels, and
the "cleaned" image actually has deeper blacks (while in the previous
test, I had to push down the shadows on the cleaned image, for it
showed a higher black level)!
Moreover: for some odd (to me!) reason, the cleaned image does not
suffer anymore from the red/magenta cast that affected the original
scan...

Fernando
 
Kennedy McEwen

Fernando said:
New version, having averaged the dark frame along the scan direction
to lower the random noise (I applied the custom averaging filter 8
times, as suggested by Kennedy):

http://gundam.srd.it/PhotoPages/images/5400_vuescan_darkframesub_02.jpg

This time, I did not try to compensate for different black levels, and
the "cleaned" image actually has deeper blacks (while in the previous
test, I had to push down the shadows on the cleaned image, for it
showed a higher black level)!
Moreover: for some odd (to me!) reason, the cleaned image does not
suffer anymore from the red/magenta cast that affected the original
scan...
Well, the red/magenta cast is probably just a poor black level in those
channels, so the dark frame removal will correct that as well. One of
the things I try to do with my scans in the Nikon is to set the black
level on the unexposed edges of the film, for this same reason.

However, what concerns me a little is, if you look at the cleaned frame,
there is still a small residual of the single lines present. This is
fairly obvious if you stretch the levels 0-31 up to the full 0-255 scale
- a fairly extreme gain of about x8.

This is most likely due to rounding or truncation errors on the
Photoshop filter, but it indicates that you will need more than the
15-bit precision of Photoshop to do any better. Working back from the
results to linear space to extract the black reference (full of errors
due to JPEG artefacts), it seems that you are dealing with data which is
very low indeed - as expected - so truncation errors at 15-bits on the
filter become very significant. Looks like the graphic library might be
the best route - but be sure that it retains full precision throughout,
which may mean converting the integer data to longints to get the
overhead necessary.
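
A toy comparison makes the truncation point concrete (the counts are
invented; only the accumulated error matters, not the exact values):

import numpy as np

rng = np.random.default_rng(0)
dark = rng.integers(0, 8, 100_000).astype(np.uint16)   # dark-scan counts are tiny

trunc = dark.copy()                 # truncated back to integers after each pass
full = dark.astype(np.float64)      # full precision kept throughout
for _ in range(8):
    trunc = (np.convolve(trunc, np.ones(5), 'same') // 5).astype(np.uint16)
    full = np.convolve(full, np.ones(5) / 5.0, 'same')

print(np.abs(trunc - full).max())   # several counts of error on 0-7 count data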

Still, what you have got is a big improvement on the original and
significantly better in noise and saturation than the first attempt.

Looks promising - your next task is to hack into the Vuescan calibration
file and compute replacement data. Then you have a much simpler-to-use
solution. ;-)
 
Roger

So why didn't you check it out in demo mode first if you were concerned? It's
easily done.


If the tone of your emails were the same as this post and I were Ed, I
certainly wouldn't want to help you. When I'm wearing my tech support hat at
work, and someone is shouting down the phone or being abusive, I simply
say "call back when you've calmed down" and put the phone down.

About 90% of getting satisfaction comes from communication.

I worked as a project manager for a relatively large multinational
corporation. I insisted that those attending meetings be respectful
to each other. When many departments are involved discussions can
sometimes get heated. I had no problem with that, but as soon as the
conversations turned disrespectful, I would ask that they reconsider.
If it happened again, the meeting was over - and some of these people may
have flown a few hundred or a few thousand miles to get there.

Only once did they have to reschedule their flights home. With that
kind of price tag the meetings were very peaceful after that.
Never once did anyone threaten to have me fired for that conduct. I
did get some strange looks, and a few asked how I could get away with
walking out of my own meeting with some high-priced engineers and
execs, though.

To quote one of the members of our board of directors speaking to a
group of new-hire employees: "We are known as a high pressure work
place. To those of you seeking IPR (In Plant Retirement) I suggest you
reconsider your goals or seek employment elsewhere."

When discussions remain respectful, things can be accomplished faster
and with less effort.

Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com
 
rgbcmyk

Thanks for all the answers.
Gamma correction is a software adjustment; its value depends upon lots
of things; for instance, it depends upon the output colorspace you
select for your scans. If you select AdobeRGB or sRGB, for example,
the right Gamma setting is 2.2 (and it is automatically applied by the
scanning software). For AppleRGB, the right Gamma setting is 1.8.
You only need to adjust Gamma yourself if you scan in Linear mode:
such scans have a "flat" tonal response, with a Gamma of 1.0, and
you'll need a proper Gamma adjustment within your editing software to
bring the levels to the right values (otherwise you get a skewed
tonal response, with a very dark output).
Unless you need Linear output for some reason, I'd stick with
(automatically) Gamma-corrected scans at the beginning. AdobeRGB is a
good compromise as the output colorspace, by the way, for it
encompasses a reasonably wide gamut.
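
As a rough illustration of what the scanning software applies
automatically - a simplified sketch (the function name is mine, and it
treats Gamma as a bare power curve, ignoring the piecewise sRGB formula
and colour management):

import numpy as np

def gamma_encode(linear, gamma=2.2):
    # map linear scan data (0..1) onto a gamma-corrected tonal response
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

print(gamma_encode(np.array([0.18])))   # ~0.46: a linear mid-grey brightens a lot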

Please note that I'm keeping things simple; the reasons for the need
of Gamma correction, and the justifications for different Gamma
settings, would take a long time, and were already explained in depth
by both Kennedy McEwen and Bart Van der Wolf (and others, for sure) in
past threads.

Following the raw scan discussions in this group, I have used the 5400
sw to control only the hw features, and make all sw adjustments in PS.
This means choosing between AF and MF, enabling ICE/GD, and if
necessary, eliminating clipping with the hw Exposure Control tab. I
never touch the sw Image Correction tab. The raw linear scan generated
this way looks dark in PS as you described, but would brighten up after
assigning it to Minolta's linear posi profile and converting to Adobe
RGB1998 workspace. From there on, it is a matter of adjusting to taste.

In the above workflow, it does not seem to matter whether Color Matching
in the 5400 is enabled and set to Adobe RGB or not. In either case, the
scan looks dark in PS and requires assigning to the Minolta linear posi
to brighten it. After assigning the profile, the two scans would look
almost identical. It is not apparent how your suggested "gamma
correction" in the scanner can produce a bright scan. Perhaps I'm still
missing something here. But at least I now know that "gamma correction"
is sw.
 
Fernando

In the above workflow, it does not seem to matter whether Color Matching
in the 5400 is enabled and set to Adobe RGB or not. In either case, the
scan looks dark in PS and requires assigning to the Minolta linear posi
to brighten it. After assigning the profile, the two scans would look
almost identical. It is not apparent how your suggested "gamma
correction" in the scanner can produce a bright scan. Perhaps I'm still
missing something here. But at least I now know that "gamma correction"
is sw.

When you save your scans as Raw, the resulting TIFF is saved in Linear
mode, that is, with Gamma = 1.0 (it is an exponent for a correction
curve: the gamma curve. If the exponent is 1.0, the curve is a straight
line).
When you apply the Minolta Posi Linear profile, it automatically
corrects the image by (among other things) applying a gamma curve with
exponent 2.2, thus brightening the image.
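
For instance (a toy calculation, treating the profile conversion as
nothing more than that power curve):

# a dark raw linear value of 0.2 is lifted considerably by the 2.2 gamma curve
print(0.2 ** (1.0 / 2.2))   # ~0.48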

Bye!

Fernando
 
Don

Not really. Vuescan implements a scanner *independent* calibration,
which should be valid with all scanners, storing the results offline in
a separate calibration file. However, if you go into the mathematics of
calibration processes (and I have to admit to over 25 years of
experience of doing exactly this in my professional field!) every
calibration process is a compromise between degradation and improvement.

Yes, I realize calibration is a tug-of-war, which is why I mention
elsewhere that I suspect his algorithm is so "fragile" that if he
eliminated all lines something else (possibly worse) would pop up. So
he went for the "lesser evil". Faced with lines (or worse) for
everybody, or torturing Minolta users only, he went for the latter.
So, summarising, the algorithm that Ed has developed for typical
scanners, including the Nikons, may simply have met its match with the
performance of the Minolta CCD. It was, for example, the first true
16-bit scanner - which means that more than 16-bit precision will be
required to accumulate the dark reference scan in order to reduce the
spatial manifestations of frozen temporal noise below the quantisation
noise threshold of the raw device.

That's exactly what I mean. We're not talking about a Neanderthal with
a scanner (like Don trying to boost shadows in gamma corrected space
for his own amusement) but we're talking about a (cranky) - alleged -
"professional" who's selling a product, and should know better!

What speaks volumes is the fact that this "professional"
unsuccessfully wrestled with this for *two years* (still does!) and
yet you not only correctly identified the problem in 5 minutes but
offered a solution, including employing higher precision in case he
goes "but, but, but..."!

To paraphrase a US vice-presidential candidate's line:
"You, Ed, are no Kennedy"! ;o)

Don.
 
Bart van der Wolf

Fernando said:
New version, having averaged the dark frame along the scan
direction to lower the random noise (I applied 8 times the
custom averaging filter as suggested by Kennedy):

If you apply the (5 point line average) filter several times on a
uniform piece of film, you'll probably see the Standard Deviation go
down until you have applied the filter some 20 times. If you apply the
7-point Custom filter plug-in I suggested, you'll be able to apply it
some 40 times with decreasing standard deviation, and it ultimately
results in a lower StdDev (some 30% lower in my empirical results).
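
If you want to reproduce that measurement outside Photoshop, something
like this sketch would do (in full floating point the StdDev keeps
creeping down, so the plateau seen in Photoshop is presumably its
limited precision):

import numpy as np

rng = np.random.default_rng(0)
column = rng.normal(100.0, 10.0, 50_000)   # one CCD cell tracked down the scan

kernel = np.ones(5) / 5.0                  # the 5-point line average
for i in range(1, 21):
    column = np.convolve(column, kernel, mode='same')
    print(i, column.std())                 # watch the StdDev fall off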

SNIP
Moreover: for some odd (to me!) reason, the cleaned image
does not suffer anymore from the red/magenta cast that
affected the original scan...

If you look at the characteristic film curve as published by the major
film manufacturers, you'll notice that the Red channel usually has a
lower Dmax than the others. The dark frame should keep that intact if
you use a truly opaque material for the dark frame, but if you used a
piece of unexposed but processed film it would also remove that
reddish dark. It could also be caused by the Blackpoint setting in
VueScan: if the highest densities are cleaner, it may pick up the
reddish Dmax and correct for it.

Bart
 
Bart van der Wolf

Good to know. Another tip from Ed?

No, from the scanner's PDF manual actually ... ;-)
It's well hidden at the bottom of page 20 (in my original version
anyway), in the troubleshooting section.

Bart
 
Kennedy McEwen

Don said:
What speaks volumes is the fact that this "professional"
unsuccessfully wrestled with this for *two years* (still does!) and
yet you not only correctly identified the problem in 5 minutes but
offered a solution, including employing higher precision in case he
goes "but, but, but..."!

To paraphrase a US vice-presidential candidate's line:
"You, Ed, are no Kennedy"! ;o)
Don, I haven't solved the problem. I haven't even identified the cause
of the problem, merely suggested a possible cause. What I have proposed
is a rather cumbersome work-around of the problem based on the residual
symptoms. It is not a solution to the problem.

With a bit of luck it *might* point to a cause and solution that Ed
hasn't considered but can implement, but that is the most you can say of
it.
 
Fernando

Bart said:
If you apply the (5 point line average) filter several times on a
uniform piece of film, you'll probably see the Standard Deviation go
down until you have applied the filter some 20 times. If you apply the 7-point
Custom filter plug-in I suggested, you'll be able to apply it some 40
times with decreasing standard deviation, and it ultimately results in
lower StdDev (some 30% lower in my empirical results).

I think I'll go the custom utility way, so I can apply whatever
algorithm I like to the image while working at the needed precision
(32-bit unsigned integers).
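
A skeleton for such a utility might look like this (purely illustrative:
the flags, the file handling and the use of Pillow are my assumptions,
not an existing tool):

#!/usr/bin/env python3
# Hypothetical batch dark-frame utility; names and flags are illustrative.
import argparse
from pathlib import Path

import numpy as np
from PIL import Image   # any library with solid 16-bit TIFF support would do

def clean(scan_path, dark_row, out_dir):
    img = np.asarray(Image.open(scan_path)).astype(np.int32)
    out = np.clip(img - dark_row, 0, 65535).astype(np.uint16)
    Image.fromarray(out).save(out_dir / scan_path.name)

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("dark")              # raw dark-frame scan (TIFF)
    p.add_argument("scans", nargs="+")  # raw scans to correct
    p.add_argument("-o", "--out", default="cleaned")
    args = p.parse_args()

    dark = np.asarray(Image.open(args.dark)).astype(np.float64)
    dark_row = np.rint(dark.mean(axis=0)).astype(np.int32)  # average along scan direction

    out_dir = Path(args.out)
    out_dir.mkdir(exist_ok=True)
    for s in args.scans:
        clean(Path(s), dark_row, out_dir)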
If you look at the characteristic film curve as published by the major
film manufacturers, you'll notice that the Red channel usually has a
lower Dmax than the others. The dark frame should keep that intact if
you use a truly opaque material for the dark frame, but if you used a
piece of unexposed but processed film it would also remove that reddish
dark.

I used totally opaque black cardboard in a slide frame, so this
can't be it; and:
It could also be caused by the Blackpoint setting in VueScan, if

This should play no role, since both are Raw scans; anyway, I always keep
Color Balance set to None, so Vuescan should not set any black/white
point, apart from the Calibration pass.

Or am I missing something?

Thanks!

Fernando
 
Don

Don, I haven't solved the problem. I haven't even identified the cause
of the problem, merely suggested a possible cause.

You still seem to have zeroed in on it much faster. Anyway, on to
something much more interesting...

I've noticed that 8-bit scans on the LS-50 exhibit "reversed" gamma
artifacts, i.e. the "gaps" introduced by applying gamma are on the
*right* side of the histogram!?

Here's an example:

http://members.aol.com/tempdon100164833/nikon/LS50histogram.jpg

Now, perhaps mistakenly, but I was under the impression that these
gamma artifacts only occur on the left side of the histogram. Am I
mistaken in assuming that, or is there another reason for the gaps to
"magically move" to the right?

The reason I noticed this is because in my program I do a low
resolution scan (500 ppi) in order to determine the exposure. For both
programming convenience and scanning speed I used 8-bit depth but gaps
on the right (also present in NikonScan with the same settings) are a
problem when trying to push exposure but avoid clipping.

Back when I was fighting with Kodachromes on the LS-30 the gaps were
on the *left* messing up my attempts to boost shadows. Now when I want
to use the right side to determine exposure the gaps magically move to
the right! Grrr... They always seem to place themselves exactly where
I *don't* want them! ;o)
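
For what it's worth, a quick numpy sketch (speculative, not a diagnosis
of the LS-50) shows which side the gaps land on for each direction of
the curve - a power-curve encode skips output codes in the shadows,
while linearising already-encoded 8-bit data skips them in the highlights:

import numpy as np

levels = np.arange(256)
encoded = np.round(255 * (levels / 255) ** (1 / 2.2)).astype(int)  # gamma encode
decoded = np.round(255 * (levels / 255) ** 2.2).astype(int)        # gamma decode

# output codes that no input can reach = histogram gaps
print(sorted(set(range(256)) - set(encoded))[:5])   # gaps at the low end
print(sorted(set(range(256)) - set(decoded))[-5:])  # gaps at the high end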

Anyway, thanks in advance as always!

Don.
 
Kennedy McEwen

Don said:
You still seem to have zeroed in on it much faster. Anyway, on to
something much more interesting...

I've noticed that 8-bit scans on the LS-50 exhibit "reversed" gamma
artifacts, i.e. the "gaps" introduced by applying gamma are on the
*right* side of the histogram!?

Here's an example:

http://members.aol.com/tempdon100164833/nikon/LS50histogram.jpg
I have seen something similar to that once, but I had something set up
badly wrong - unfortunately I can't remember what caused it now. If I
recall what it was I'll let you know.
 
