Twin scan

Don

As requested, here's my method of combining twin scans. People with
short attention spans can stop reading now... ;o)

Preface/Disclaimers/etc:

- This works for me. Meaning: Others may have different requirements,
or implement it differently, etc. Take it as such.
- I have a Nikon scanner (LS-50)
- I use Photoshop 6
- Initially I used NikonScan but now use custom scanning software I
wrote myself.
None of the above is a prerequisite for the method; it's just what I
happen to use, outlined here in the interest of full disclosure.
- I scan (1980's) Kodachromes (i.e. slides)
- Instead of calculations (e.g. to adjust exposure) I have always
opted for an empirical alternative. This is very much intentional due
to various "gotchas" (film response, light source, etc.). I find that
empirical methods I devised are the fastest and also self-correcting.

Abbreviations:

PS - Photoshop
AG - Analog Gain = EV (exposure value)

Objective:

Remove noise by scanning twice (once for highlights, once for shadows)
and then combining the two scans, seamlessly and without mixing.

Background:

Scanning twice and combining is an old idea implemented in the analog
days of film where it was known as "contrast masking". There are a
number of digital alternatives:
- Overlaying two images and "painting" with one over the other using
appropriate feather value. (manual method)
- Using various blending options e.g. splitting the channel blending
sliders in PS by holding down Alt. (semi-automatic method)
- Applying Gaussian Blur to one image before blending. (automatic
method)
- etc. etc.

The problem with all of these methods is that they only "work" with
images where the border between highlights and shadows is clearly
defined. They do not work when the border falls in the middle of a
gradient i.e., with most images.

The reason for this is fairly simple once we examine the histograms.
By overexposing, the whole histogram is stretched out, as well as
shifted to the right. So, the two images don't correspond anymore.
Therefore, before combining the two images seamlessly they must be
"color synced" first.

Since none of the conventional methods do this, they fudge the result
by mixing two scans in various degrees i.e., they "pollute" the shadow
scan with data from the highlights, and vice versa, trying to hide
this color mismatch. All that does is further degrade the outcome.

My method can be outlined in the following steps:

1. Scan twice: once for highlights, once for shadows.
2. Sub-pixel align the two scans
3. Color-sync the two scans (both, the black levels and the "seam")
4. Combine seamlessly with a hard mask (no feather or blending)

1. Scan twice.

Do a nominal scan with highlights just touching the right histogram
edge (clipping as desired). Next, do a boosted scan to eliminate noise
by "moving" the left edge of the histogram out of the "noise range".

In my case (at gamma 2.2) this noise range ends at bin 32 on an 8-bit
histogram scale, and the boost has settled at about 5 AG. Tests on
unexposed slides established that using an absolute exposure of 5 AG
eliminated all noise from the scan.

Note: Most scanner software forces AutoExposure with optional
*relative* adjustment *on top* of this initial exposure. The above
exposure values, however, refer to *absolute* values. In other words,
with the AutoExposure turned OFF! Therefore, the value of 5 AG refers
to the difference from 0 AG. This means that not all slides will need
such a wide difference in exposure. For example, if the nominal scan
exposure is 2 AG, then the difference up to 5 AG will only be 3 AG in
relative terms. Again, all this is on an LS-50 with 1980s Kodachromes!

2. Sub-pixel alignment.

There is software out there to do this automatically. I have, however,
devised my own method. Even though I've come up with this on my own,
it's quite possible (indeed, probable) that it has "already been
invented", because it's so elementary.

First, we need to establish the amount of misalignment. To do this
superimpose the two images at maximum (!) magnification. The best area
is one of high contrast, for example a small light reflection. The
so-called "pepper spots" (small black specks) are great for this
because they're very sharp and well defined. Quickly alternating
between the two images - once they are perfectly superimposed - will
clearly show the misalignment, as one image "moves".

The rest is best explained with an example, so let's say we need to
shift an image by half a pixel to the right and down. The image in
question is 100 x 100 pixels.

First, we enlarge the image using Image Size in PS. The amount of
enlargement depends on the amount of shift. For half a pixel we double
the size, for a third of a pixel we triple the size, etc. Therefore, in
the example given above, we enlarge the image to 200 x 200 pixels.

Note: Use the absolutely best interpolation method possible! In case
of my PS 6 that's Bicubic.

Next, move this enlarged image right 1 pixel, and down 1 pixel.

Finally, reduce the image back to 100 x 100, again using the best
interpolation.

The image has now shifted to the right and down by 1/2 a pixel.

Note: Even the best-quality Bicubic method will blur the image
slightly. Therefore, it's advisable to shift the dark (shadows) scan:
not only is the blurring less noticeable there, it actually helps
further eliminate noise. Also, if the image only needs to be shifted
in one direction (say, only 0.5 pixels to the left), the resulting
image will be sharper than one shifted in both directions.
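
For what it's worth, here's a rough sketch of the enlarge/shift/reduce
trick in Python (numpy + Pillow). I do all of this by hand in PS, not
in code, so take the function and the library calls purely as
illustration - it also assumes an 8-bit image and the same fractional
shift in both directions:

import numpy as np
from PIL import Image

def subpixel_shift(img, fraction=0.5):
    """Shift an 8-bit image right and down by `fraction` of a pixel."""
    # Enlargement factor: 2x for a half-pixel shift, 3x for a third, etc.
    factor = round(1 / fraction)
    w, h = img.size

    # 1. Enlarge with the best interpolation available (Bicubic in PS 6).
    big = img.resize((w * factor, h * factor), Image.BICUBIC)

    # 2. Shift the enlarged image by 1 whole pixel right and 1 down
    #    (np.roll wraps at the border; negligible here for a 1 px shift).
    arr = np.asarray(big)
    arr = np.roll(arr, shift=1, axis=1)   # 1 px right
    arr = np.roll(arr, shift=1, axis=0)   # 1 px down

    # 3. Reduce back to the original size, again with bicubic interpolation.
    return Image.fromarray(arr).resize((w, h), Image.BICUBIC)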

3. Color-sync

This has caused me the most grief and took the most time to figure
out. It's also a "work in progress", and outlined below is a snapshot
of the current state of affairs.

The first step is to shift the boosted histogram back to the levels of
the nominal scan. In other words, line up the black edges. To do this I
first establish the location of the black level for each channel in
both images.

I do this using a 16-bit histogram program I was also forced to
write, because PS's 8-bit histogram is too inexact, as is the 10-bit
(or was it 12?) Wide Histogram program.

The process is similar to determining the amount of clipping during
scanning, only it's done at the other end. Because of the noise at the
left histogram edge, instead of the ~0.3% commonly used for clipping
the highlights I use 1% (or more). In other words, I establish the
point on the histogram below which each channel has 1% of its pixels.
Let's say (using an 8-bit histogram for illustration) that the nominal
scan histograms start at:
R: 10, G:12, B:15
while the boosted scan histograms start at:
R: 29, G:38, B:49
(These are not real values! Just examples for illustration purposes.)

The goal now is to "shift" the boosted scan down to the same levels as
the nominal scan. This can be done in Levels by specifying values for
each channel (trim R by 19, etc.).

Note: It is essential to set not only the left slider but also the
right slider, and by the same amount! This way the whole histogram is
shifted left without distortion. Otherwise, the histograms will be
stretched and get even more "out of sync".

Note 2: This black level step can be skipped but that will result in a
slight cast, albeit with a higher shadow boost.
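
For illustration only, here's what the black-level sync boils down to
in numpy (8-bit values as in the examples above; the array names are
just placeholders - in practice I use my own 16-bit histogram tool and
then PS Levels):

import numpy as np

def channel_black_points(img, fraction=0.01):
    """Per-channel level below which `fraction` (1%) of the pixels fall."""
    return [np.percentile(img[..., c], fraction * 100) for c in range(3)]

def sync_black_levels(boosted, nominal):
    nom_bp = channel_black_points(nominal)
    boo_bp = channel_black_points(boosted)
    out = boosted.astype(np.int32)
    for c in range(3):
        offset = int(round(boo_bp[c] - nom_bp[c]))   # e.g. R: 29 - 10 = 19
        # Plain subtraction shifts the whole histogram left without
        # stretching - the same effect as moving both Levels sliders by
        # the same amount.
        out[..., c] -= offset
    return np.clip(out, 0, 255).astype(np.uint8)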

The second step is to determine the difference between the two images
at the "seam" and color sync that area. By doing numerous tests I have
established that, in my case, the "seam" where the two images should
be joined is at ~32 (on the 8-bit histogram scale). That is, anything
below that level in the nominal scan shows some amount of noise, while
above that level no noise is visible.

To do this, I first create a band-pass mask to only take into account
pixels at histogram bin 32 (in 8-bit scale).

Note: The instinct is to use Threshold for this purpose, but a caveat
is in order. Threshold in PS uses Luminance as the base of its
calculations! And Luminance is *not* a straight average of the
channels! Instead it applies different weights to them (~30% Red, ~59%
Green, ~11% Blue). Threshold could be used, but be aware of this.

Next, apply this bin 32 band-pass mask to both images to get the Mean
(average) or Median values for each channel. I've tried both and Mean
(average) seems to work better but your mileage may vary. I call this
a "meta pixel".

Finally, I create a curve using the above values supplying shadow scan
values as input, and nominal scan values as output. Applying this
curve to the shadow scan then "adjusts" the histograms in such a way
that both images "meet" at this point - in my case at 32.
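
Purely as a sketch (numpy again, 8-bit values), this is roughly what
the band-pass mask, the "meta pixel" and the curve amount to. I build
the mask on the nominal scan here and use a simple piecewise-linear
curve through the two meta pixel values - the real thing is done by
hand in PS Curves, so treat the details as assumptions:

import numpy as np

SEAM = 32   # join point on the 8-bit histogram scale (my LS-50 value)
TOL = 1     # half-width of the band-pass around the seam

def seam_sync(shadow, nominal):
    """Apply a per-channel curve to the shadow scan so both scans
    "meet" at the seam level."""
    out = shadow.astype(np.float64)
    for c in range(3):
        # Band-pass mask: pixels sitting at (about) the seam level.
        band = np.abs(nominal[..., c].astype(int) - SEAM) <= TOL
        shadow_meta = shadow[..., c][band].mean()     # "meta pixel", shadow scan
        nominal_meta = nominal[..., c][band].mean()   # "meta pixel", nominal scan
        # Curve through (0,0), (shadow_meta -> nominal_meta), (255,255);
        # assumes the band is non-empty and 0 < shadow_meta < 255.
        out[..., c] = np.interp(out[..., c],
                                [0.0, shadow_meta, 255.0],
                                [0.0, nominal_meta, 255.0])
    return np.clip(out, 0, 255).astype(np.uint8)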

As already mentioned this whole chapter is work-in-progress and could
possibly be improved, or at least streamlined. Ideally, it would be a
simple calculation and the process could be automated but, for a
variety of reasons (such as film and light source response) I get best
results with this empirical, adaptive process.

4. Combine seamlessly

All that's left now is to create a hard mask to combine the two images
- in my case, as already mentioned, at 32 on the 8-bit histogram
scale. Having color-synced and adjusted the two scans the resulting
merge is seamless. It also has clean shadow areas which can be boosted
at will without exposing those ugly speckly noise aberration pixels.
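
The merge itself is then trivial - a hard mask, no feather, no
blending. A minimal sketch (whether to judge the seam per channel or
per pixel is my own guess here; in PS I simply use one hard mask):

import numpy as np

SEAM = 32   # 8-bit histogram scale

def combine(nominal, shadow_synced):
    # True where the nominal scan is in the noisy range
    # (any channel at or below the seam).
    mask = (nominal <= SEAM).any(axis=-1)
    out = nominal.copy()
    out[mask] = shadow_synced[mask]
    return out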

Of course, by first darkening the boosted scan (in order to combine)
and then again boosting the combined image, some (small amount of)
data in the shadow area will be lost. This can be overcome by first
determining the amount of boost the final (combined) image needs and
then "syncing" to this level, rather that the level the nominal scan
is at initially. However, one has to be careful with too much editing
before combining as that may influence the result negatively.

After all, the goal of this procedure is not to edit the images, but
only to combine them seamlessly and eliminate noise.

Epilogue

The above method effectively turns any scanner into a variable dynamic
range scanner and then reduces this range to the available/required
data width. It's also feasible to do this not only with 2 scans (as
outlined above) but with 3 - or more... My scanner has 14 bits of
dynamic range so 2 scans are enough.

Of course, if scanner manufacturers had any integrity they would do
all this in firmware, thereby eliminating the need for multiscanning
as well as the race over who has more bits of dynamic range.

Finally, do note that even though the above method (with slight
adjustments) could be used to produce images with expanded dynamic
range, this is problematic on two counts. First, it would need an
image format with more than 16 bits per channel. TIF can do that
easily (although a custom merge program would need to be written) but
image editors with a 24-bit-per-channel color mode are few and far
between. Second, the purpose of this whole exercise was not to produce
such a file but to temporarily shift the dynamic range into an area
with no noise for the purpose of sampling.

After all, slides nominally only have a dynamic range of about 12.5
(if memory serves). So, in theory, a 14-bit scanner should have enough
headroom. My Kodachromes, however, beg to differ... So, out of the
available 14-bits I can really only use about 10. That's why I
(excruciatingly) devised the above method to essentially turn my
14-bit scanner into a 19-bit one.

Phew... I need a break! ;o)

Don.

P.S. Oh, yeah. Kennedy will now step in to show me how all this can be
done with a single mouse click! ;o)
 
Don

Here's an interesting program I just discovered, for people who are
into this sort of thing:

http://www.ict.usc.edu/graphics/HDRShop/

HDR stands for "high dynamic range". Conceptually, it's similar to what
vector graphics do to bitmaps by storing images as formulas.

In a nutshell, the above program stores pixel values as floating point
numbers saved together with how they progress when exposure is changed
i.e., pixels are stored as "formulas" rather than absolute RGB values.

In practical terms, it means being able to brighten or darken the
image without pixelization or banding. Sort of, unlimited dynamic
range.

To create an HDR image, use twin exposures (or more). The program also
allows images to be edited via an external editor e.g. Photoshop.
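
Just to illustrate the general idea (this is NOT HDRShop's actual
algorithm, only a conceptual sketch of merging exposures into floating
point): each exposure is divided by its relative exposure factor and
the estimates are averaged, weighted toward well-exposed values.

import numpy as np

def merge_to_hdr(img_a, exposure_a, img_b, exposure_b):
    """img_* are float arrays scaled to 0..1; exposure_* are relative
    exposure factors (e.g. 1.0 and 8.0 for a +3 EV pair)."""
    def weight(v):
        # Simple hat function: trust mid-tones, distrust clipped ends.
        return 1.0 - np.abs(2.0 * v - 1.0)

    wa, wb = weight(img_a), weight(img_b)
    radiance = wa * img_a / exposure_a + wb * img_b / exposure_b
    # Pixels clipped in both exposures get (near) zero total weight.
    return radiance / np.maximum(wa + wb, 1e-6)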

I'm still testing it but (thinking laterally) it could also be used to
archive the dynamic range of a scan! Anyway...

Windows only. Version 1 is free. Enjoy!

Don.
 
Greg Campbell

Don said:
Here's an interesting program I just discovered, for people who are
into this sort of thing:

http://www.ict.usc.edu/graphics/HDRShop/

[...]


TY for the heads up.

OK, I've been playing with it, but can't figure out how to export the
resultant blended image to a 48-bit TIF or other PS-digestible integer
format. What am I missing? Thx!!

-Greg
 
Don

OK, I've been playing with it, but can't figure out how to export the
resultant blended image to a 48-bit TIF or other PS-digestible integer
format. What am I missing? Thx!!

That's exactly what I want, too!

Right now the only way I can think of (haven't tried it yet) is to
export as HDR raw, and then write a converter. In theory, it should
really be easy. A movable 16-bit "window" of dynamic range should be
wide enough to encompass the necessary range (now that the artifacts
have been removed thanks to multiple exposures). Otherwise, a dynamic
range compressor...
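
The converter itself should be trivial - something along these lines
(a sketch only; I haven't looked at HDRShop's raw export format yet,
so reading the file is left out and the float data is assumed to be
already loaded):

import numpy as np

def window_to_16bit(hdr, stops=0.0):
    """hdr: floating-point radiance array, nominally 0..1 at 0 stops.
    stops: where to place the 16-bit "window" (in EV, +/-)."""
    scaled = hdr * (2.0 ** stops) * 65535.0
    return np.clip(scaled, 0, 65535).astype(np.uint16)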

But before doing that, I'll google around for some HDR editing
software first. Yesterday - during the downloading frenzy - I spotted
one program but didn't keep the link. :-/ I'll try again later
today...

Anyway, if you spot something in the meantime, do drop a line here.

Don.

BTW (you probably saw this, but...) you can edit by exporting an LDR
image (8-bit) to an external editor. When you're done, HDRShop will
incorporate the changes back into the blend.

BTW #2, another site to try is www.debevec.org. There are some links
and more tools (I haven't unzipped them all yet...)
 
simplicity

Don said:
That's exactly what I want, too!

[...]

Anyway, if you spot something in the meantime, do drop a line here.

Please keep us posted on this. Thanks.
 
Don

Please keep us posted on this. Thanks.

Well, the plot thickens...

Apparently, converting an HDR image to lower bit range (among other
things) is known as "tone mapping". And even though the original HDR
algorithm (combining different exposures) goes back to 1998, tone
mapping is still quite new and there are many different ways of doing
it, all vying for dominance. A nice comparison can be found here:
http://www.cgg.cvut.cz/~cadikm/tmo/

HDRShop has a free tone mapping plugin called "Reinhard HDR
Tonemapping Plugin" the link to which is at its site:
http://www.ict.usc.edu/graphics/HDRShop/.
Alas, the plugin outputs to another HDR file format (pfm) and not to
TIF.
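
For the curious, the "global" form of the Reinhard operator is tiny;
this is just a sketch of the published formula, not the plugin itself
(floating-point RGB in, 0..1 out):

import numpy as np

def reinhard_global(hdr, key=0.18):
    # Luminance with Rec. 709 weights (same caveat as the PS Threshold
    # note earlier: the channels are weighted differently, not equally).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    lum = np.maximum(lum, 1e-6)
    log_avg = np.exp(np.mean(np.log(lum)))   # log-average (geometric mean) luminance
    scaled = (key / log_avg) * lum           # map the average to middle grey ("key")
    mapped = scaled / (1.0 + scaled)         # compress highlights into 0..1
    # Scale the RGB channels by the luminance ratio.
    return np.clip(hdr * (mapped / lum)[..., None], 0.0, 1.0)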

I came across a few stand-alone tone mapping alternatives such as:
http://scanline.ca/exrtools/ and
http://www.mpi-sb.mpg.de/resources/pfstools/
but they all run on Linux, probably because initial work on all this
was done on Unix workstations.

There's another free HDR-generating program at www.acm.uiuc.edu,
called "HDRIE", but it's quite buggy and export doesn't work. And
before it can even run, a huge 10 MB library must be downloaded.

I found only one other (this time commercial) program that can create
HDR images, "Photomatix" at www.hdrsoft.com, but it appears
considerably worse than HDRShop.

The next version of Photoshop CS (to be released this month) will
support HDR images, but there's a catch (as always). Even though it
will take 16-bit images as input, it converts them to 8-bit before
blending!?

Which got me thinking... Does HDRShop do the same? More tests to
do... I need more time...

A 24-hour day (like 24-bit images) just isn't enough. I need a
48-bit... erm... 48-hour day! One might say, a "high temporal range"
i.e. an HTR day... ;o)

Don.
 
simplicity

Don said:
A 24-hour day (like 24-bit images) just isn't enough. I need a
48-bit... erm... 48-hour day! One might say, a "high temporal range"
i.e. an HTR day... ;o)

A 48-hour day is easy. Just redefine an hour to be 30 min. OTOH, if you
figure out how to get a 48-hour day with 60 min per hour, do let us
know.
 
David Blanchard

Let me run a few tests... ;o)

Don.


Ummm....use the clone tool, perhaps? ;-)

Let's see: 60 min X 24 h X (1 original + 1 clone) = 48 work hours/day.

If only it was that simple...

-db-
 
