A "slanted edge" analysis program

  • Thread starter Lorenzo J. Lucchini

Don

By the way, I see that the levels (or perhaps the gamma) are different
if I scan at 16-bit and if I scan at 8-bit, with otherwise the same
settings. Actually, the 16-bit scan clips. Wonderful, another bug in my
fine scanner driver!

It could be that 8-bit clips as well but you just can't see it.
Another common problem when looking at 16-bit images with an 8-bit
histogram is that the programs don't calculate correctly. Even
Photoshop "massages" the histogram data before it shows it, resulting
in some really weird artefacts. Because of all those reasons I wrote
my own 16-bit histogram program.
Let's agree on terms. I took the "metrics" as meaning the slanted edge
test results, and the "image" is just the image, that is the picture to
be sharpened (or whatever).

Yes, metrics simply means results of a measurement. In the above
context, image is anything you used to make these measurements on.
No wait -- my understanding is that, by definition, "optimal sharpening"
is the highest amount you can apply *without* causing haloes.
Perhaps unsharp mask in particular always causes them, I don't know, but
there isn't only unsharp mask around.

Haloes show quite clearly on the ESF graph, and I assure you that I
*can* apply some amount of sharpening that doesn't cause "hills" in the
ESF graph.

As I mentioned last time it's all about how the sharpening is done. It
simply means localized increase of (edge) contrast resulting in an
optical illusion i.e. we perceive such an image as sharp.

Now, whether you get ESF peaks is not really what I was addressing but
the fact that the whole concept of sharpening is based on this
selective contrast. So whether this causes ESF peaks or not, the image
has been (in my view) "corrupted". It may look good, and all that, but
I just don't like the concept.
[snip: the ruler test]
Of course, the key question is, is it worth it? In my case, in the
end, I decided it wasn't. But it still bugs me! ;o)

I know. By the way, changing slightly the topic, what about two-pass
scanning and rotating the slide/film 90 degrees between the two passes?
I mean, we know the stepper motor axis has worse resolution than the CCD
axis. So, perhaps multi-pass scanning would work best if we let the CCD
axis get a horizontal *and* a vertical view of the image.

Of course, you'd still need to sub-pixel align and all that hassle, but
perhaps the results could be better than the "usual" multi-pass scanning.
Clearly, there is a disadvantage in that you'd have to physically rotate
your slides or film between passes...

And it's also nearly impossible to rotate exactly 90 degrees, at least
not to satisfy the accuracy of the scanner. So there will be problems
with that too. Also, due to stretching the pixels are no longer
perfectly rectangular so that will have to be fixed. Etc.

It's a very clever idea, though!

Another option (for lower resolutions) is to simply take a picture
with a high resolution digital camera. This causes many other
problems, of course, but at least as far as horizontal vs vertical
distortion goes it could be much more regular than a scanner.
Hm? I don't follow you. When you have got the ESF, you just *have* your
values. You can then move them around at your heart's will, and you
won't lose anything. Which implies that you can easily move the three
ESFs so that they're all aligned (i.e. the "edge center" is found in the
same place), before taking any kind of average.

I'm not talking about ESF per se but in general. If you align the
channels (using some sort of sub-pixel interpolation) you will be
changing the actual sampled values. This may work visually but it will
throw off any measurements or calculations based on such data.
Yes, and I'm not doing anything to the data *coming from the scanner*;
just to the ESF, which is a high-precision, floating point function that
I've calculated *from* the scanner data.
It's not made of pixels: it's made of x's and y's, in double precision
floating point. I assure you that I'm already doing so much more
(necessary) evil to these functions, that shifting them around a bit
isn't going to lose anything.

I don't know exactly what you're doing and it may very well not be
important but it's an easy trap to fall into. That's all I was saying.
Yes, in theory. In practice, my red channel has a visibly worse MTF than
the green channel, for one.

That's *very* interesting!!! I wonder why that is?
Because they're asking money for it :)

Oh, really! That's disgusting!!

Like I said, I'm not really into all that, but aren't there free
versions available? Surely, others must have done this many times by
now? Especially if Imatest is so greedy!
I've had my trial runs, finished
them up, and I'm now left with SFRWin and no intention to buy Imatest
(not that it's a bad program, it's just that I don't buy much of
anything in general).

I'm not sure I would call myself a "free software advocate", but I
definitely do like free software. And certainly the fact that my program
might be useful to other people gives me more motivation to write it,
than if it were only useful to myself.

As you know, in GNU sense "free" doesn't refer to cost but to the fact
that the software is not "imprisoned".
Not necessarily altruism, mind you, just seeing a lot of downloads of a
program I've written would probably make me feel a star :) hey, we're
human.

That's one of the main motivations for some of the best free software
out there. Or just simply because people are curious and don't believe
the marketroids so they do things themselves.
Anyway, have you tried out ALE yet?

No, unfortunately not! It's still sitting on top of my "X-files" (I
have a temporary "x" directory where I keep all my current stuff).

I got sidelined because I ran out of disk space. You see, I've done
all my programming and I'm now heavily into scanning. It's complicated
to explain but I want to scan everything to disk first before I start
offloading the images to DVDs. The reason is because the chronology is
unclear, so until I finish scanning *all* of them I will not be able
to re-order them correctly. (Looking at slides with a naked eye is not
good enough.) And I don't want to start burning DVDs only to find out
later, the images are actually out of chronological order. I'm just
being silly, but that's the workflow I chose.

So I was forced to get a new drive. And then "just for fun" I decided
to format it as NTFS (the first time I did that). Long story short,
I'm still running tests and "playing" with it...
I don't think it can re-align
*single* rows or columns in an image, but it does perform a lot of
geometry transformation while trying to align images. And it works with
16 bit images and all, which you were looking for, weren't you? It's
just so terribly slow.

I have my own alignment program and it does what I need, but I was
just interested to see what they did in ALE. In the end, I not only
sub-pixel align in my program but actually transform the image. I do
this with 4 anchor points instead of going with a full mesh exactly
because it is so slow.
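Just to illustrate the general idea (a rough sketch only, not my actual
code): the displacements measured at the four anchor points are
interpolated bilinearly across the image, and each destination pixel is
sampled from the warped source position.

/* Rough sketch, not the actual program: warp an image using the
 * displacements measured at 4 corner anchor points (TL, TR, BL, BR),
 * interpolated bilinearly.  Nearest-neighbour sampling for brevity;
 * real code would interpolate sub-pixel values. */
typedef struct { double dx, dy; } Shift;

void warp4(const unsigned char *src, unsigned char *dst,
           int w, int h, const Shift a[4])
{
    for (int y = 0; y < h; y++) {
        double v = (double)y / (h - 1);
        for (int x = 0; x < w; x++) {
            double u = (double)x / (w - 1);
            /* bilinear blend of the four corner displacements */
            double dx = (1-u)*(1-v)*a[0].dx + u*(1-v)*a[1].dx
                      + (1-u)*v*a[2].dx     + u*v*a[3].dx;
            double dy = (1-u)*(1-v)*a[0].dy + u*(1-v)*a[1].dy
                      + (1-u)*v*a[2].dy     + u*v*a[3].dy;
            int sx = (int)(x + dx + 0.5), sy = (int)(y + dy + 0.5);
            dst[y*w + x] = (sx >= 0 && sx < w && sy >= 0 && sy < h)
                         ? src[sy*w + sx] : 0;
        }
    }
}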

Don.
 

Lorenzo J. Lucchini

Bart said:

Yes, "the guy" has some resolution graphs on xs4all :)

You can see the figures for my Epson RX500 here:
http://ljl.150m.com/scans/fig-blade2.gif

But please notice that I might have made some mistakes scanning:
somehow, the edge image looks color-corrected and possibly
gamma-corrected, even though I thought I told the driver to disable that.

Still, the actual MTF doesn't look too different from the ones I've got
with certainly good scans.

by LjL
(e-mail address removed)
 

Lorenzo J. Lucchini

Don said:
On Sun, 02 Oct 2005 03:23:42 +0200, "Lorenzo J. Lucchini" wrote:

[my 16-bit scans having different colors from my 8-bit scans]

I don't really know, I'll have to do some tests on this. Yeah, it could
be that Photoshop is messing up something, but the images do *look* very
different, too, with the 16-bit image having the whitepoint at 255,
while the 8-bit scan is around 230 or so.
[snip]
Haloes show quite clearly on the ESF graph, and I assure you that I
*can* apply some amount of sharpening that doesn't cause "hills" in the
ESF graph.

As I mentioned last time it's all about how the sharpening is done. It
simply means localized increase of (edge) contrast resulting in an
optical illusion i.e. we perceive such an image as sharp.

Now, whether you get ESF peaks is not really what I was addressing but
the fact that the whole concept of sharpening is based on this
selective contrast. So whether this causes ESF peaks or not, the image
has been (in my view) "corrupted". It may look good, and all that, but
I just don't like the concept.

I see, but I'm not sure sharpening can be dismissed as an optical illusion.
From all I've understood, scanners (especially staggered array ones)
soften the original image, and sharpening, when done correctly, is
simply the inverse operation.

Actually, "softening" and "sharpening" are just two specific case, the
general concept being: if your optical system *corrupts* the original
target it's imaging, you can *undo* this corruption, as long as you know
exactly the (convolution) function that represents the corruption.

Look at these images, for example:
http://refocus-it.sourceforge.net/

All I know is that I can't read what's written in the original image,
while I can quite clearly read the "restored" version(s).
Yes, there is much more noise, that's unavoidable (especially in such an
extreme example)... but I have a problem calling the technique "corruption".

Or look at the first example image here:
http://meesoft.logicnet.dk/Analyzer/help/help2.htm#RestorationByDeconvolution

Sure, the "restored" image is not as good as one that was taken without
motion blur to begin with, but still the result is quite impressive.

And note that both programs, Refocus-it and Image Analyzer, *guess* (or
let the user guess) the kind of blurring function *from* the image --
which does result in artifacts, as guessing is hard (the same thing that
happens with unsharp masking).

But if you know instead of guess, I'm convinced the sharpened result
will not only be more pleasing to the eye, but more mathematically close
to the original target.
[snip: the ruler test]

Of course, the key question is, is it worth it? In my case, in the
end, I decided it wasn't. But it still bugs me! ;o)

I know. By the way, changing slightly the topic, what about two-pass
scanning and rotating the slide/film 90 degrees between the two passes?
I mean, we know the stepper motor axis has worse resolution than the CCD
axis. So, perhaps multi-pass scanning would work best if we let the CCD
axis get a horizontal *and* a vertical view of the image.

Of course, you'd still need to sub-pixel align and all that hassle, but
perhaps the results could be better than the "usual" multi-pass scanning.
Clearly, there is a disadvantage in that you'd have to physically rotate
your slides or film between passes...

And it's also nearly impossible to rotate exactly 90 degrees, at least
not to satisfy the accuracy of the scanner. So there will be problems
with that too. Also, due to stretching the pixels are no longer
perfectly rectangular so that will have to be fixed. Etc.

Yes, but these things have to be done (though perhaps to a lesser
extent) with "simple" multi-pass scans, as well, because of the problems
we know -- stepper motor and all.
I'm not sure just *how much* increased complexity my idea would add to
the game.
[snip]
Hm? I don't follow you. When you have got the ESF, you just *have* your
values. You can then move them around at your heart's will, and you
won't lose anything. Which implies that you can easily move the three
ESFs so that they're all aligned (i.e. the "edge center" is found in the
same place), before taking any kind of average.

I'm not talking about ESF per se but in general. If you align the
channels (using some sort of sub-pixel interpolation) you will be
changing the actual sampled values. This may work visually but it will
throw off any measurements or calculations based on such data.

Ok, I see.
But don't worry then, I don't have to do any sub-pixel interpolation in
the ESF case (or, you could say, I *do* have to do some, but I have to
do it no matter whether I have to re-align or not).
I don't know exactly what you're doing and it may very well not be
important but it's an easy trap to fall into. That's all I was saying.

Ok. Just to explain briefly: imagine scanning a sharp edge. You now want
to obtain the function that describes how pixel values change on the
edge (the edge spread function = ESF).

So you take any single row of the scanned edge's image, and look at how
pixels change.

This function looks like the one I had uploaded here:
http://ljl.150m.com/scans/fig-blade2.gif (the first graph)


But, how can the function be so precisely defined, when there are only a
few pixels representing the edge transition in any one row of the edge
image?

Interpolation? No. You simply take more than just *one* row: you take
all of them.
And you scan an edge that's tilted by some degrees with respect to the
scanner axis you want to measure.

This way, you get oversampled data, just as if you were doing many
misaligned scan passes -- only, you know quite precisely what the
misalignment of each is (as you can measure where the edge *is* with
some decent precision).

Once you've done this, and you have the ESF, you don't need to do
anything sub-pixel anymore; you already have an oversampled, "sub-pixel"
function that you've obtained not by interpolation, but by clever use of
real data.
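In code, the core of it is just a projection-and-binning loop. A minimal
sketch (simplified with respect to my actual program; it assumes the
per-row edge position has already been estimated from a fit to the edge,
and uses 4x oversampling as in the usual slanted-edge procedure):

#include <stdlib.h>

#define OVERSAMPLE 4   /* bins per pixel */

/* img: one grey channel, width*height samples.
 * edge_pos[y]: estimated edge centre in row y, in pixels (from a line fit).
 * esf: output, nbins averaged values; bin i covers the distance
 * x0 + i/OVERSAMPLE from the edge.  Every pixel of every row falls into
 * exactly one bin, so the "sub-pixel" resolution comes from real samples,
 * not from interpolation. */
void build_esf(const double *img, int width, int height,
               const double *edge_pos,
               double *esf, int nbins, double x0)
{
    double *sum = calloc(nbins, sizeof *sum);
    int *count = calloc(nbins, sizeof *count);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            double d = x - edge_pos[y];            /* distance from the edge */
            int bin = (int)((d - x0) * OVERSAMPLE);
            if (bin >= 0 && bin < nbins) {
                sum[bin] += img[y * width + x];
                count[bin]++;
            }
        }
    }
    for (int i = 0; i < nbins; i++)
        esf[i] = count[i] ? sum[i] / count[i] : 0.0;

    free(sum);
    free(count);
}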
That's *very* interesting!!! I wonder why that is?

Well, for one, who's to say that my scanner's white light source is white?
If red is less well represented in the light source than the other
primaries, there will be more noise in the red channel.

Though noise should directly affect the MTF, AFAIK.

But there are other possibilities: being a flatbed, my scanner has a
glass. Who says the glass "spreads" all wavelengths the same way?
Oh, really! That's disgusting!!

:) It's their right. It's a fine program after all. Get this: a guy
on an Italian newsgroup just posted an *executable-only* Visual Basic
program for calculating resistor values from colors.
He was asking for advice about how to improve the program.
But when people told him they couldn't be of much help without the
source code, he replied he wouldn't post it on the net.

My reaction was to write a similar (but hopefully better) program, send
it to him, and tell him I transferred the copyright to him ;-)
Like I said, I'm not really into all that, but aren't there free
versions available? Surely, others must have done this many times by
now? Especially if Imatest is so greedy!

There is SFRWin, which is free, though not open source; and it only
outputs the MTF, while Imatest also gives you the ESF and the LSF (which
have to be calculated to get to the MTF, anyway), as well as some other
useful information.

Also, both Imatest and SFRWin only work under Windows (a version of
SFRWin, called SFR2, runs under Matlab, which *might* mean it could work
under Octave, for all I know, but I doubt it).

by LjL
(e-mail address removed)
 

Don

I see, but I'm not sure sharpening can be dismissed as an optical illusion.

It can, because the image is not sharpened; only the contrast at both
sides of the border between dark and light areas is enhanced locally.

To really sharpen the image one would need to "shorten" the transition
from dark to light i.e. eliminate or reduce the "fuzzy" part and
generally that's not what's being done.

One simple proof of that is halos. If the image were truly sharpened
(the fuzzy transition is shortened) you could never get haloes! In the
most extreme case of sharpening (complete elimination of gray
transition) you would simply get a clean break between black and
white. That's the sharpest case possible.

The fact that you get halos shows that so-called sharpening algorithms
do not really sharpen but only "fudge" or as I would say "corrupt".
From all I've understood, scanners (especially staggered array ones)
soften the original image, and sharpening, when done correctly, is
simply the inverse operation.

But that's my point, exactly, it does not really reverse the process
but only tries to and that just adds to the overall "corruption".
Actually, "softening" and "sharpening" are just two specific case, the
general concept being: if your optical system *corrupts* the original
target it's imaging, you can *undo* this corruption, as long as you know
exactly the (convolution) function that represents the corruption.

In theory but not in practice. And certainly not always. You can't
reverse a lossy process. You can "invent" pixels to compensate
("pretend to reverse") but you can never get the lossy part back.

Now, some of those algorithms are very clever and produce good results
while others just corrupt the image even more (e.g. anti-aliasing).

Whether the result is acceptable or not depends on each individual
because it's a subjective call, really.
Or look at the first example image here:
http://meesoft.logicnet.dk/Analyzer/help/help2.htm#RestorationByDeconvolution

Sure, the "restored" image is not as good as one that was taken without
motion blur to begin with, but still the result is quite impressive.

Which only confirms what I said: Some processes are very clever but
you can never really get the lossy part back.
Well, for one, who's to say that my scanner's white light source is white?
If red is less well represented in the light source than the other
primaries, there will be more noise in the red channel.

Though noise should directly affect the MTF, AFAIK.

But there are other possibilities: being a flatbed, my scanner has a
glass. Who says the glass "spreads" all wavelengths the same way?

But none of that should affect the results because you're dealing with
*relative* change in brightness along the edge. Now, in *absolute*
terms there may be difference between channels but if, for example,
red receives less light than other channels the *relative* transition
should still be the same, only the red pixels will be a bit darker.

I don't think noise enters into this because red would need to receive
considerably less light for noise to affect the measurements. If that
were the case you would notice this in the scans as they would get a
cyan cast.

Don.
 

Lorenzo J. Lucchini

Don said:
It can, because the image is not sharpened; only the contrast at both
sides of the border between dark and light areas is enhanced locally.

To really sharpen the image one would need to "shorten" the transition
from dark to light i.e. eliminate or reduce the "fuzzy" part and
generally that's not what's being done.

One simple proof of that is halos. If the image were truly sharpened
(the fuzzy transition is shortened) you could never get haloes! In the
most extreme case of sharpening (complete elimination of gray
transition) you would simply get a clean break between black and
white. That's the sharpest case possible.

The fact that you get halos shows that so-called sharpening algorithms
do not really sharpen but only "fudge" or as I would say "corrupt".

But my point is that sharpening algorithms should not necessarily
produce haloes. I don't have proof -- actually, proof is what I'm hoping
to obtain if I can make my program work! --, but just note that my
hypothesis is just that: halos need not necessarily occur.

By the way - not that it's particularly important, but I don't think the
"sharpest case possible" is a clean break between black and white, as at
least *one* gray pixel will be unavoidable, unless you manage to place
all of your "borders" *exactly* at the point of transition between two
pixels.
But that's my point, exactly, it does not really reverse the process
but only tries to and that just adds to the overall "corruption".

Make a distinction between unsharp masking and similar techniques, and
processes based on knowledge of the system's point spread function,
which is what I'm trying to work on.

Unsharp masking just assumes that every pixel is "spread out" in a
certain way (well, you can set some parameters), and bases its
reconstruction on that.

*That*, I think, is its shortcoming. But if you knew *exactly* the way
every pixel is "spread out" (i.e., if you knew the point spread
function), my understanding is that you *could* then really reverse the
process, by inverting the convolution.

Read below before you feel an urge to say that it's impossible because
the process is irreversible...
In theory but not in practice. And certainly not always. You can't
reverse a lossy process. You can "invent" pixels to compensate
("pretend to reverse") but you can never get the lossy part back.

Now, this time, yes, what we're talking about is a lossy process, and as
such it cannot be completely reversed.

But before giving up, we should ask, *what is it* that makes it lossy?
Well, I'm still trying to understand how this all really works, but
right now, my answer is: noise makes the process lossy. If you had an
ideal scanner with no noise, then you could *exactly* reverse what the
sensor+optics do.

In real life, we have noise, and that's why you can't just do a
deconvolution and get a "perfect" result. The problem you'll have is
that you also amplify noise, but you won't be otherwise corrupting the
image.
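To put the same thing in formulas (my own summary, not taken from either
of those programs): if the scanner's blur is a convolution with a known
kernel h, plus noise n, then

  g = h * f + n   \quad\Longleftrightarrow\quad   G(u) = H(u)\,F(u) + N(u)

and a naive inverse filter gives

  \hat{F}(u) = \frac{G(u)}{H(u)} = F(u) + \frac{N(u)}{H(u)}

so the original F comes back exactly, but wherever H(u) is small (the high
frequencies the scanner attenuates most) the noise term N/H blows up.
Regularized variants, like a Wiener-type filter
\hat{F} = G\,H^{*}/(|H|^{2} + 1/\mathrm{SNR}), just trade a little of the
restoration for keeping that term bounded.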

Sure, amplifying noise is still something you might not want to do...
but pretend you own my Epson staggered CCD scanner: you have a scanner
that has half the noise of equivalent linear CCD scanners, but a
worse MTF in exchange. What do you do? You improve the MTF, at the
expense of raising the noise to the level a linear CCD scanner would show.
And, in comparison with a linear CCD scanner, you've still gained
anti-aliasing.

Kennedy would agree :) So let's quote him, although I can't guarantee
the quote isn't a bit out of context, as I've only just looked it up
quickly.

--- CUT ---

[Ed Hamrick]
The one thing you've missed is that few (if any) flatbed scanners
have optics that focus well enough to make aliasing a problem when
scanning film. In this case, staggered linear array CCD's don't
help anything, and just reduce the resolution.

[Kennedy McEwen]
If this was the situation then any 'loss' in the staggered CCD spatial
response would be more than adequately recovered by simple boost
filtering due to the increased signal to noise of the larger pixels

--- CUT ---
[snip]
Well, for one, who's to say that my scanner's white light source is white?
If red is less well represented in the light source than the other
primaries, there will be more noise in the red channel.

Though noise should directly affect the MTF, AFAIK.

Ehm, note that I meant to say "shouldn't" here.
But none of that should affect the results because you're dealing with
*relative* change in brightness along the edge. Now, in *absolute*
terms there may be difference between channels but if, for example,
red receives less light than other channels the *relative* transition
should still be the same, only the red pixels will be a bit darker.

Not darker: the scanner calibration will set the light as the
whitepoint, so channels will still have the same brightness.
On second thought, I agree that the red source would have to be *really*
dimmer than the others for this to produce noticeably more noise.

But I think the glass hypothesis still stands: if the glass blurs red
more than it blurs the other colors, well, here you have a longer edge
transition, and a worse MTF.

by LjL
(e-mail address removed)
 

Don

But my point is that sharpening algorithms should not necessarily
produce haloes. I don't have proof -- actually, proof is what I'm hoping
to obtain if I can make my program work! --, but just note that my
hypothesis is just that: halos need not necessarily occur.

"Sharpening" by increasing contrast of transition areas always
produces halos. It's the basis of the algorithm. You may not perceive
them but they're there. The optical illusion which makes you perceive
this contrast in transition areas as sharpness has a threshold. Step
over it and halos become perceptible. Stay below it and they don't.
By the way - not that it's particularly important, but I don't think the
"sharpest case possible" is a clean break between black and white, as at
least *one* gray pixel will be unavoidable, unless you manage to place
all of your "borders" *exactly* at the point of transition between two
pixels.

It's theoretically the sharpest which, as you indicate, can also be
achieved in practice sometimes by lining up the image and the sensors.


Anyway, have to keep it short because I'm real busy today. With the
new drive I decided to re-organize everything and that takes a lot of
time. Haven't scanned anything in 3 days and I'm falling behind my
original schedule... Why are days always too short? ;o)

Don.
 

Bart van der Wolf

SNIP
Try with this command line:

slantededge.exe --verbose --csv-esf esf.txt --csv-lsf lsf.txt --csv-mtf mtf.txt testedge.ppm

Yep, got it running.

Bart
 

Bart van der Wolf

SNIP
But my point is that sharpening algorithms should not necessarily
produce haloes. I don't have proof -- actually, proof is what I'm
hoping to obtain if I can make my program work! --, but just note
that my hypothesis is just that: halos need not necessarily occur.

That's correct, haloes can be avoided while still boosting the MTF at
high spatial frequencies. The boost, however, may not be too
spectacular; it's just restoring some of the capture process losses.
Some losses are inherent to the sampling process (e.g. area sampling
versus point sampling will produce different MTFs from the same
subject). Maybe it is more accurate to classify those as system
characteristics rather than losses.

Sharpening, on the other hand, with further anticipated losses in mind
(e.g. printing), should introduce small halos in order to trick human
vision, but the halo should not be visible (IOW smaller than visual
resolution). What *is* visible is the contrast boost (without halo) of
the spatial frequencies we *can* resolve.

Capture loss restoration can be tackled by using a high-pass filter.
Convolving the image with a smallish HP-filter kernel is rather fast
and simple to implement. The best result can be achieved if the HP
filter is modeled after the PSF. This is what it can look like on your
(odd looking) "testedge.tif":
<http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/testedge.zip>
The Luminance channel was HP-filtered in "Image Analyzer" with a "user
defined filter" of 7x7 support.

The mean S/N ratio has decreased from 241.9:1 to 136.3:1, while the
10-90% edge rise went from 4.11 to 2.64 pixels. Unfortunately the scan
suffers from some CA like aberration (especially the Red channel is of
lower resolution), which may become more visible as well.
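For reference, the operation itself is nothing more than a plain
convolution with a small kernel; here is a generic C sketch (the actual
7x7 coefficients, which were derived from the PSF model, are not
reproduced here):

/* Generic small-kernel convolution (C sketch).  kernel is a
 * (2r+1)x(2r+1) array of coefficients; for a sharpening/high-pass
 * filter they sum to 1 while boosting the high frequencies. */
void convolve2d(const float *src, float *dst, int w, int h,
                const float *kernel, int r)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            double acc = 0.0;
            for (int ky = -r; ky <= r; ky++)
                for (int kx = -r; kx <= r; kx++) {
                    int sx = x + kx, sy = y + ky;      /* clamp at borders */
                    if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
                    if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;
                    acc += kernel[(ky + r) * (2*r + 1) + (kx + r)]
                         * src[sy * w + sx];
                }
            dst[y * w + x] = (float)acc;
        }
}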

Bart
 

Lorenzo J. Lucchini

Bart said:
SNIP

[snip]

This is what it can look like on your (odd
looking) "testedge.tif":
<http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/testedge.zip>
The Luminance channel was HP-filtered in "Image Analyzer" with a "user
defined filter" of 7x7 support.

Why is my edge odd-looking? Does it still look like it's not linear gamma?
The behavior of my driver is puzzling me more and more, especially wrt
the differences between 8- and 16-bit scans.
The mean S/N ratio has decreased from 241.9:1 to 136.3:1, while the
10-90% edge rise went from 4.11 to 2.64 pixels. Unfortunately the scan
suffers from some CA like aberration (especially the Red channel is of
lower resolution), which may become more visible as well.

Note that I had disabled all color correction, and the three channels
look much more consistent when my driver's standard color correction
coefficients are used.


By the way - I'm experimenting a bit with the Fourier method for
reconstructing the PSF that's explained in the book's chapter you
pointed me to (I mean the part about tomography).

I don't think I have much hope with that, though, as there is
interpolation needed, and it appears that interpolation in the frequency
domain is a tough thing.
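(For the record, the relation that chapter builds on -- as far as I
understand it -- is the projection-slice theorem: an LSF taken at angle
\theta is a projection of the PSF, and its 1-D Fourier transform equals
the central slice of the 2-D Fourier transform of the PSF along that
same angle,

  \mathcal{F}_{1D}\{\mathrm{LSF}_\theta\}(\nu)
    = \mathcal{F}_{2D}\{\mathrm{PSF}\}(\nu\cos\theta,\ \nu\sin\theta)

so with only two orthogonal LSFs you know the 2-D spectrum along just two
lines, and everything in between has to be interpolated -- which is
exactly where I'm getting stuck.)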

OTOH, I think I've understood the way you're trying to reconstruct the
PSF; I'm not sure I like it, since as far as I can understand you're
basically assuming the PSF *will be gaussian* and thus try to fit a
("3D") gaussian on it. Now, perhaps the inexactness due to assuming a
gaussian isn't really important (at least with the scanners we're
using), but it still worries me a little.

Also, the book says that gaussian PSFs have gaussian LSFs with the same
parameters -- i.e. that a completely symmetrical gaussian PSF is the
same as any corresponding LSF.

Our PSFs are generally not symmetrical, but they *are* near-gaussian, so
what would you think about just considering the two LSFs we have as
sections of the PSF?

I think in the end I'll also try implementing your method, even though
there is no "automatic solver" in C, so it'll be a little tougher. But
perhaps some numerical library can be of help.


by LjL
(e-mail address removed)
 

Bart van der Wolf

SNIP
Why is my edge odd-looking?

The pixels "seem" to be sampled with different resolution
horizontally/vertically. As you said earlier, you were experimenting
with oversampling and downsizing, so that may be the reason. BTW, I am
using the version that came with the compiled alpha 3.

SNIP
OTOH, I think I've understood the way you're trying to reconstruct
the PSF; I'm not sure I like it, since as far as I can understand
you're basically assuming the PSF *will be gaussian* and thus try to
fit a ("3D") gaussian on it.

In fact I fit the weighted average of multiple (3) Gaussians. That
allows me to get a pretty close fit to the shape of the PSF. Although the
PSFs of lens, film, and scanner optics+sensor all have different
shapes, the combination usually resembles a Gaussian just like in many
natural processes. Only defocus produces a distinctively different
shape, something that could be added for even better PSF approximation
in my model.

This is how the ESF of your "testedge" compares to the ESF of my
model:
http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/ESF.png
Now, perhaps the inexactness due to assuming a gaussian isn't really
important (at least with the scanners we're using), but it still
worries me a little.

That's good ;-) I also don't take things for granted without some
soul searching (and web research).
Also, the book says that gaussian PSFs have gaussian LSFs with the
same parameters -- i.e. that a completely symmetrical gaussian PSF
is the same as any corresponding LSF.

The LSF is the one dimensional integral of the PSF. If the PSF is a
Gaussian, then the LSF is also a Gaussian, but with a different shape
than a simple cross-section through the maximum of the PSF.
The added benefit of a Gaussian is that it produces a separable
function: the X and Y dimensions can be processed separately, one after
the other, with a smaller (1D) kernel. That will save a lot of
processing time, with the only drawback being a very slightly less
accurate result (due to compounding rounding errors, so negligible if
accurate math is used).
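For a single Gaussian all of this can be written down explicitly
(standard results, quoted here just for reference):

  \mathrm{PSF}(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2+y^2)/(2\sigma^2)}

  \mathrm{LSF}(x) = \int_{-\infty}^{\infty} \mathrm{PSF}(x,y)\, dy
                  = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/(2\sigma^2)}

  \mathrm{ESF}(x) = \int_{-\infty}^{x} \mathrm{LSF}(t)\, dt
                  = \tfrac{1}{2}\Bigl(1 + \mathrm{erf}\bigl(\tfrac{x}{\sigma\sqrt{2}}\bigr)\Bigr)

So the LSF of an isotropic Gaussian PSF is a Gaussian with the same
sigma, and the corresponding ESF is the erf() form that my ESF model is
built from.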
Our PSFs are generally not symmetrical, but they *are*
near-gaussian, so what would you think about just considering the
two LSFs we have as sections of the PSF?

Yes, we can approximate the true PSF shape by using the information
from two orthogonal resolution measurements (with more variability in
the "slow scan" direction). I'm going to enhance my (Excel) model,
which now works fine for symmetrical PSFs based on a single ESF input
(which may in the end turn out to be good enough given the variability
in the slow scan dimension). I'll remove some redundant functions
(used for double checking), and then try to circumvent some of Excel's
shortcomings.
I think in the end I'll also try implementing your method, even
though there is no "automatic solver" in C, so it'll be a little
tougher. But perhaps some numerical library can be of help.

My current (second) version (Excel spreadsheet) method tries to find
the right weighted mix of 3 ERF() functions
(http://www.library.cornell.edu/nr/bookcpdf/c6-2.pdf page 220, if not
present in a function library) in order to minimize the sum of squared
errors with the sub-sampled ESF. The task is to minimize that error by
changing three standard deviation values.
I haven't analyzed what type of error function this gives, but I guess
(I can be wrong in my assumption) that even an iterative approach
(although not very efficient) is effective enough because the
calculations are rather simple, and should execute quite fast. One
could consider (parts of) one of the methods from chapter 10 of the
above mentioned Numerical Recipes book to find the minimum error for
one Standard Deviation at a time, then loop through them again for a
number of iterations until a certain convergence criterion is met.

This is basically the model which is compared at the original ESF
sub-sample coordinates:
ESFmodel = (
W1*(1+IF(X<0;-Erf(-X/(SD1*Sqrt(2)));Erf(X/(SD1*SQRT(2)))))/2
+ W2*(1+IF(X<0;-Erf(-X/(SD2*Sqrt(2)));Erf(X/(SD2*SQRT(2)))))/2
+ W3*(1+IF(X<0;-Erf(-X/(SD3*Sqrt(2)));Erf(X/(SD3*SQRT(2)))))/2 )
/ (W1+W2+W3)
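In C (C99 puts erf() in math.h) the same model reads roughly as follows;
this is just a sketch of the formula above together with the error term
that gets minimized, not my spreadsheet. The IF(X<0;...) branches in the
Excel version only work around Excel's ERF not accepting negative
arguments; C's erf() takes them directly.

#include <math.h>

/* Weighted mix of three ERF-based edge profiles, as in the formula above.
 * W[i] are the weights, SD[i] the standard deviations to be fitted. */
static double esf_model(double x, const double W[3], const double SD[3])
{
    double num = 0.0, den = 0.0;
    for (int i = 0; i < 3; i++) {
        num += W[i] * 0.5 * (1.0 + erf(x / (SD[i] * sqrt(2.0))));
        den += W[i];
    }
    return num / den;
}

/* Sum of squared errors against the sub-sampled ESF; this is what an
 * iterative optimizer (e.g. one standard deviation at a time, looping
 * until convergence) would minimize. */
static double esf_sse(const double *x, const double *esf, int n,
                      const double W[3], const double SD[3])
{
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        double e = esf_model(x[i], W, SD) - esf[i];
        s += e * e;
    }
    return s;
}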

That will ultimately give, after optimization (minimizing the error
between samples and model), three Standard Deviations (SD1,SD2,SD3)
which I then use to populate a two dimensional kernel with the
(W1,W2,W3) weighted average of three symmetrical Gaussians. The
population is done with symbolically pre-calculated (with Mathematica)
functions that equal the 2D pixel integrals of 1 quadrant of the
kernel. The kernel, being symmetrical, is then completed by
copying/mirroring the results to the other quadrants.

That kernel population part is currently too inflexible for my taste,
but I needed it to avoid some of the Excel errors with the ERF
function. It should be possible to make a more flexible kernel if the
ERF function is better implemented.

Bart
 

Lorenzo J. Lucchini

Bart said:
SNIP



The pixels "seem" to be sampled with different resolution
horizontally/vertically. As you said earlier, you were experimenting
with oversampling and downsizing, so that may be the reason. BTW, I am
using the version that came with the compiled alpha 3.

Maybe I see what you mean: it seems that every pixel in the edge has a
vertical neighbour that has the same value.
But if I look at the noise, this doesn't seem to hold true anymore (and
you mentioned this before, I think).

Look... now that I think of it, I once scanned an edge where every pixel
on the edge was *darker* than the one below it (think of an edge with
the same orientation as testedge.tif).

Now, I don't really remember what settings I used on that one, so this
doesn't mean much, but I'm really sure that I've used all the "right"
settings for the testedge.ppm and testedge.tif I've included in alpha 3.

I can't quite explain this.

Before I try to digest what you wrote... :) Have you heard of the Abel
transform -- whatever that is, of course, don't assume I know just
because I mention it! -- being used to reconstruct the PSF from the LSF?
(yes, unfortunately I've only read about "LSF", singular, up to now)

I ask just to be sure I'm not following a dead end, in case you already
looked into this.


by LjL
(e-mail address removed)
 

Bart van der Wolf

Lorenzo J. Lucchini said:
SNIP
Maybe I see what you mean: it seems that every pixel in the edge has
a vertical neighbour that has the same value.
But if I look at the noise, this doesn't seem to hold true anymore
(and you mentioned this before, I think).

Correct, that looks a bit strange.

Of course, for testing purposes only, one could make a well behaved
CGI slanted edge and apply several Gaussian blurs to it, and even add
a little Poisson noise. That won't solve things for your scanner, but
it does provide a known response for testing the program.

SNIP
Before I try to digest what you wrote... :) Have you heard of the
Abel transform -- whatever that is, of course, don't assume I know
just because I mention it! -- being used to reconstruct the PSF from
the LSF? (yes, unfortunately I've only read about "LSF", singular,
up to now)

No, I hadn't heard of it (or I wasn't paying attention when I did
;-)).
http://mathworld.wolfram.com/AbelTransform.html describes it, but I'm
not enough of a mathematician to immediately grasp its usefulness.
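Spelling out what that page seems to say (with the caveat that I may be
misreading it): for a circularly symmetric PSF p(r), the LSF would be
exactly its Abel transform,

  \mathrm{LSF}(x) = 2 \int_{|x|}^{\infty} \frac{p(r)\, r}{\sqrt{r^2 - x^2}}\, dr

and the inverse Abel transform would then recover p(r) from a single
measured LSF,

  p(r) = -\frac{1}{\pi} \int_{r}^{\infty} \frac{\mathrm{LSF}'(x)}{\sqrt{x^2 - r^2}}\, dx

but only under the assumption of circular symmetry, which, as you noted,
our PSFs don't quite satisfy.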
I ask just to be sure I'm not following a dead end, in case you
already looked into this.

I'll have to see myself if it can be of use, too early to tell.
However, do keep in mind that I am also considering the final
deconvolution or convolution step, which will take a very long time on
large images (e.g. 39 Mega-pixels on a full 5400 ppi scan from my film
scanner).

There is a lot of efficiency to be gained from a separable function
(like a Gaussian, or a polynomial(!)) versus one that requires a
square/rectangular kernel. It's roughly the difference between e.g. 18
instead of 81 multiplications per pixel when convolving with a 9x9
kernel, times the number of pixels.
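In sketch form (again, just to illustrate the counting, not production
code):

/* Separable convolution: one horizontal and one vertical pass with a
 * k-tap 1-D kernel costs 2*k multiplications per pixel (18 for k = 9),
 * against k*k (81) for the equivalent k x k square kernel. */
void convolve_separable(const float *src, float *dst, float *tmp,
                        int w, int h, const float *k1d, int k /* odd */)
{
    int r = k / 2;
    for (int y = 0; y < h; y++)            /* horizontal pass */
        for (int x = 0; x < w; x++) {
            double acc = 0.0;
            for (int i = -r; i <= r; i++) {
                int sx = x + i;
                if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
                acc += k1d[i + r] * src[y * w + sx];
            }
            tmp[y * w + x] = (float)acc;
        }
    for (int y = 0; y < h; y++)            /* vertical pass */
        for (int x = 0; x < w; x++) {
            double acc = 0.0;
            for (int i = -r; i <= r; i++) {
                int sy = y + i;
                if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;
                acc += k1d[i + r] * tmp[sy * w + x];
            }
            dst[y * w + x] = (float)acc;
        }
}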

What I'm actually suggesting, is that I'm willing to compromise a
little accuracy :)-() for a huge speed gain in execution. If execution
speed is unacceptable in actual use, then it won't be used. But I'm
willing to be surprised by any creative solution ...

One final remark for now, I think that for large images the
deconvolution path may prove to be too processing intensive (although
the CGLS method used by "Image Analyzer" seems rather efficient). It
is probably faster to convolve in the Spatial domain with a small
kernel than to deconvolve in the Frequency domain, which is why I
often mention the High-Pass filter solution. There are also free image
processing applications, like ImageMagick <http://www.imagemagick.org>
(also includes APIs for C or C++), that can use arbitrarily sized
(square) convolution kernels, so the final processing can be done (in
16-bit/channel, or I believe even in 32-b/ch if need be).

Bart
 

Lorenzo J. Lucchini

Bart said:

[snip]

There is a lot of efficiency to be gained from a separable function
(like a Gaussian, or a polynomial(!)) versus one that requires a
square/rectangular kernel. It's roughly the difference between e.g. 18
instead of 81 multiplications per pixel when convolving with a 9x9
kernel, times the number of pixels.

What I'm actually suggesting, is that I'm willing to compromise a little
accuracy :)-() for a huge speed gain in execution. If execution speed is
unacceptable in actual use, then it won't be used. But I'm willing to be
surprised by any creative solution ...

You're right about speed of course.

One thing: by "separable function" you mean one that can be split into
two, which in turn are applied to the horizontal and vertical axes,
don't you?

If so... are we really sure that there isn't a way to directly apply the
two LSFs for "deconvolution" (well, it'll have to be something slightly
different of course), instead of somehow reconstructing a PSF and then
splitting it again?
One final remark for now, I think that for large images the
deconvolution path may prove to be too processing intensive (although
the CGLS method used by "Image Analyzer" seems rather efficient). It is
probably faster to convolve in the Spatial domain with a small kernel
than to deconvolve in the Frequency domain, which is why I often mention
the High-Pass filter solution. There are also free image processing
applications, like ImageMagick <http://www.imagemagick.org> (also
includes APIs for C or C++), that can use arbitrarily sized (square)
convolution kernels, so the final processing can be done (in
16-bit/channel, or I believe even in 32-b/ch if need be).

Indeed, I've been considering linking to ImageMagick, as well as to the
NetPBM library.

ImageMagick is probably more refined than NetPBM, and judging from your
article about resizing algorithms, it definitely has some merits.

Also, if I want to support more formats than just PNM (which I'm only
half-supporting right now, anyway), I'll have to use some kind of
library -- I'm definitely not going to manually write TIFF loading code,
not my piece of cake :)


by LjL
(e-mail address removed)
 

Bart van der Wolf

SNIP
One thing: by "separable function" you mean one that can be split
into two, which in turn are applied to the horizontal and vertical
axes, don't you?
Yep.

If so... are we really sure that there isn't a way to directly apply
the two LSFs for "deconvolution" (well, it'll have to be something
slightly different of course), instead of somehow reconstructing a
PSF and then splitting it again?

I'm working on that, but my method currently starts with finding a
match to the ESF; there's no need to make a PSF for that.

For now the PSF is still useful for existing applications that require
a rectangular/square PSF as input for deconvolution, and the FIR
(Finite Impulse Response) filter support of many existing applications
is often limited to 7x7 or 9x9 kernels. Also, calculating a PSF from an
ESF is not too much effort, although the final execution of a PSF is
slower than applying two 'separated' functions.

SNIP
ImageMagick is probably more refined than NetPBM, and judging from
your article about resizing algorithms, it definitely has some
merits.

I have no experience with NetPBM, but ImageMagick is quite (too?)
potent and it also covers many file formats.

Bart
