Multi-sampling and "2400x4800 dpi" scanners

ljlbox

Many flatbed scanners claim to offer a vertical resolution that is
twice the horizontal resolution, such as 2400x4800 dpi. I understand
this to mean that, while there are only 2400 cells in the CCD, the
stepping motor can move by steps of 1/4800th of an inch.

Additionally, these scanners' CCDs usually do not have a single row of
2400 cells, but two rows of 1200 each, which are positioned at a
half-pixel offset.

Now, if this is true (please confirm), don't we effectively have 4x
multi-sampling when scanning at 1200 dpi?

There are several issues that I don't find clear.

First: when scanning at 1200 dpi, do scanners actually use both CCD
arrays and "mix" the results (I'm not simply saying "average" the
results, since it might be too simplistic given the half-pixel
offset), or do they only "turn on" one array?

Second: when scanning at 2400 dpi, do scanners give out pixels in the
order "1st pixel of 1st array | 1st pixel of 2nd array | 2nd pixel of
1st array | 2nd pixel of 2nd array", or do they somehow consider the
fact that nearby pixels overlap one another by half their width?
Of course, this also applies vertically, since while the motor moves by
1/2400th of an inch steps, pixels are 1/1200th of an inch "wide".

Third: when scanning at "4800" dpi, what do scanners do about the
horizontal resolution? Interpolation, I suppose. What kind of
interpolation? Does it vary from scanner to scanner?
And, do scanners that claim 2400x4800 resolution *really move the motor
by 1/4800th steps when instructed to scan at 4800 dpi*, or do they just
interpolate (since I know there are also other reasons for having
1/4800th stepping motors)? Does this vary from scanner to scanner?


Now, let's see how all this relates to multi-sampling.

Let's suppose I want to scan at 4800 dpi, with 2x multi-sampling -- for
the moment, let's ignore the fact that it might really be 4x
multi-sampling because of the double CCD array.

The scanner gives me an image. I can turn it into *two* images, one
made of the even lines of the original image, and the other made of the
odd lines (clearly, I must first downsample the original image
horizontally, since it was interpolated to 2x by the scanner).
I can then average the two images. Have I just obtained 2x
multi-sampling?

Apparently not, since I forgot that even and odd lines were sampled at
1/4800th of an inch apart from each other.

But I do know they're separated by a consistent 1/4800th of an inch. So
I could first sub-pixel-align the two images (a no-brainer, since I
know they're misaligned by exactly one pixel), and only then do the
merge.

Have I now obtained 2x multi-sampling? Apparently, I have. But now I
wonder: what would have happened if I had just scaled down the original
image to half its size vertically?
Wouldn't that be equivalent to the procedure I described of splitting
it in two, aligning and merging?

Programs usually offer more than one algorithm for scaling down images:
bilinear, bicubic, etc.
Which of these is equivalent to splitting/aligning/merging, if any?
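
For concreteness, here's a little numpy sketch (made-up numbers, and
ignoring the 1/4800th offset for the moment) showing that splitting into
even/odd line images and then averaging them, with no alignment step, is
arithmetically identical to a plain 2:1 vertical box average:

import numpy as np

# A made-up stand-in for a scan whose vertical axis is oversampled 2x.
scan = np.arange(24, dtype=np.float64).reshape(6, 4)

even = scan[0::2, :]           # image made of the even lines
odd = scan[1::2, :]            # image made of the odd lines
merged = (even + odd) / 2.0    # average, with no alignment step

# Plain 2:1 vertical box average of the original image.
box = scan.reshape(3, 2, 4).mean(axis=1)

assert np.allclose(merged, box)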


Now you probably also see why I asked all those questions about scanner
behavior above, since to answer my doubts about multi-sampling one must
be aware of how the scanner really behaves, and whatever it does to the
data *before* giving them out to the user.

Perhaps this whole article can be "scaled down" to the question: is
scanning at 4800 dpi and then scaling down to 1200 dpi (with what?
bilinear, bicubic...) equivalent to 4x multi-sampling at 1200 dpi?
(Make substitutions between 4800, 2400 and 1200 above, and you'll get
the other possible scenarios)


by LjL
(e-mail address removed)
 
Don

Additionally, these scanners' CCDs usually do not have a single row of
2400 cells, but two rows of 1200 each, which are positioned at a
half-pixel offset.
[snip]

Check the archives (for example on Google). Kennedy has written
about this in quite some detail, for example:

Subject: Re: filmscanner vs hi-res flatbed
Subject: Re: REPOST: Re: Plustek OptikFilm 7200
etc.

Don.
 
ljlbox

Don wrote:
...

Check the archives (for example on Google). Kennedy has written
about this in quite some detail, for example:

Subject: Re: filmscanner vs hi-res flatbed
Subject: Re: REPOST: Re: Plustek OptikFilm 7200
etc.

Maybe even a bit *too* technical ;-)
I've read these and similar threads before, and I am aware that the
topic of "staggered CCD arrays" (and stepping motors that step by less
than one pixel width) has been investigated to death.

However, it was mainly about "does a 1200+1200 dpi scanner resolve as
much as a 2400 dpi scanner?", and "does a 1200+1200 dpi scanner resolve
anything more than a 1200 dpi scanner at all?", and "staggered arrays
reduce aliasing but make the image softer".

Instead, my post wanted to investigate the question: is scanning with a
1200+1200 dpi scanner comparable to multi-sampling with a 1200 dpi
scanner?
And if it is, should we process the image taking account of the pixel
offset/overlap, and if so, how?

I've read clues that unsharp masking can be a perfectly valid technique
to compensate for sensor overlap, for example... but it's all a bit too
vague in the threads I've read, covering wider topics than I am
currently focusing on -- such as resolution, aliasing, etc.


by LjL
(e-mail address removed)
 
Don

Maybe even a bit *too* technical ;-)

Yes, Kennedy does that! ;o)

But I like it and always file such messages for future use even if
most of it is over my head at the time.
Instead, my post wanted to investigate the question: is scanning with a
1200+1200 dpi scanner comparable to multi-sampling with a 1200 dpi
scanner?
And if it is, should we process the image taking account of the pixel
offset/overlap, and if so, how?
I've read clues that unsharp masking can be a perfectly valid technique
to compensate for sensor overlap, for example... but it's all a bit too
vague in the threads I've read, covering wider topics than I am
currently focusing on -- such as resolution, aliasing, etc.

I haven't really looked into all that because I'm too busy with my
film scanner so someone else will have to jump in...

Don.
 
Kennedy McEwen

Don wrote:


Maybe even a bit *too* technical ;-)

Sorry, but sometimes it needs that technical detail to explain the true
implications of the concept.

Instead, my post wanted to investigate the question: is scanning with a
1200+1200 dpi scanner comparable to multi-sampling with a 1200 dpi
scanner?

That depends on whether the subject contains any information at higher
than 1200ppi and if the lens is capable of resolving it. If it isn't
then it is exactly the same as multisampling - which is why I always
jump on posters who claim that there is no advantage to this scanning
approach: even when there is no resolution advantage there is always the
multisampling advantage.

In simple terms, the double CCD captures twice as much information as a
single line device. If that information does not go into increased
resolution then it appears as increased signal to noise similar to
multisampling.

This is no different from a single line sensor with double the pixel
density when scanning an object which does not have as much resolution
in the original - there is always an advantage to getting more samples
of nominally the same data, but it can be debatable whether that
advantage is worth the time and effort to do so.
And if it is, should we process the image taking account of the pixel
offset/overlap, and if so, how?
The simplest method of doing this is a pixel average and downsample by a
factor of two. Suffice to say that there isn't an exact method of
separating the resolution from the SNR gain. Half pixel realignment
isn't really a solution in these cases because it involves resampling
losses in itself which are likely to exceed any benefit that they are
intended to gain. Some blurring, up to a quarter of a pixel may be
advantageous.
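
A minimal sketch of that recipe, assuming a greyscale scan held as a
float array with the oversampled (vertical) axis first; the sigma used
for the "up to a quarter of a pixel" blur is only an illustrative guess:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def average_downsample(scan, blur_sigma=0.25):
    # Optional slight blur along the oversampled axis (roughly a
    # quarter of a pixel) before averaging.
    if blur_sigma > 0:
        scan = gaussian_filter1d(scan, sigma=blur_sigma, axis=0)
    # Pixel average and downsample by a factor of two.
    h = (scan.shape[0] // 2) * 2
    return (scan[0:h:2, :] + scan[1:h:2, :]) / 2.0
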
I've read clues that unsharp masking can be a perfectly valid technique
to compensate for sensor overlap, for example... but it's all a bit too
vague in the threads I've read, covering wider topics than I am
currently focusing on -- such as resolution, aliasing, etc.
Yes, this is essentially the opposite of what you are trying to do - put
more of the additional information of the double scan into increased
resolution rather than improved signal to noise ratio at the lower
resolution. Hence my comment that limited blurring may offer a benefit.
 
ljlbox

Kennedy McEwen wrote:
Don wrote:
[snip]

Check the archives (for example on Google). Kennedy has written
about this in quite some detail, for example:

Subject: Re: filmscanner vs hi-res flatbed
Subject: Re: REPOST: Re: Plustek OptikFilm 7200
etc.

Maybe even a bit *too* technical ;-)

Sorry, but sometimes it needs that technical detail to explain the true
implications of the concept.

Oh but it wasn't a criticism of you, I meant too technical *for me*.
When you talk about MTF and so on, I think I can grasp the basic ideas
behind those concepts, but can't really *understand* them to any
extent.

But it's certainly a very good thing that you can discuss the more
technical details on a newsgroup with people who understand them,
that's just what the Internet is good for in research!
That depends on whether the subject contains any information at higher
than 1200ppi and if the lens is capable of resolving it. If it isn't
then it is exactly the same as multisampling - which is why I always
jump on posters who claim that there is no advantage to this scanning
approach: even when there is no resolution advantage there is always the
multisampling advantage.

Yes. But I can see two scenarios:
1) when there is no resolution advantage, is it really *exactly* the same
as multisampling, or does it lose some ground because of the misalignment?
or can the lost ground be re-gained with appropriate post-processing?
2) when there *is* resolution advantage, can the multisampling
advantage be exploited *together* with the resolution advantage, or must a
choice be made?

What I suspected is that a choice must be made, and that the choice
typically favors resolution over multi-sampling (i.e. noise
reduction).

Anyway, you see, I was thinking more about the *vertical* axis of
scanning (i.e. the "4800 dpi" of my scanner), where the resolution gain
appears to be practically null, with pixels overlapping by three
fourths of their size.

There is also a post by you where you say that half-stepping on the
vertical axis is next to useless, at least concerning resolution.

But I can clearly see that it *is* useful in terms of noise reduction,
just by taking a scan at 2400x4800 (and then downsampling the 4800) and
one at 2400x2400.

When half-stepping, scanners usually interpolate on the horizontal axis
to get a 1:1 ratio. This I don't like (and in fact I'm trying to modify
my SANE driver accordingly): I'd like to take a purely 2400x4800 scan,
and then downsample *appropriately* on the vertical axis.

My main concern, which you address below, was on the meaning of
"appropriate downsampling" when downsampling an image that is made by
3/4ths overlapping pixels.
[snip]

This is no different from a single line sensor with double the pixel
density when scanning an object which does not have as much resolution
in the original - there is always an advantage to getting more samples
of nominally the same data, but it can be debatable whether that
advantage is worth the time and effort to do so.

More than "debatable", I'd call it a personal choice.
My scans at 1200x1200 are awfully noisy; those at 2400x2400 are better,
but I certainly do appreciate the benefit of 2400x4800, at least for
some pictures.

What worries me is the "nominally the same data" part. It's not
nominally the same data in the real world, unless the original is of a
much lower resolution than the sampling rate.
It's *almost* the same data, but shifted -- half a pixel horizontally
(double CCD), and 1/4 of a pixel vertically (half-stepping).

So, I'm under the impression that scanning at 2400x4800 (let's talk
about the half-stepping and ignoring the double CCD) and then
downsampling the vertical axis gives me a less noisy, but blurrier
image than scanning at 2400x2400.

This wouldn't happen with "real" multi-sampling, i.e. samples taken at
exactly the same position. Question is, is there a software fix for
this? I'm taking your answer, below, as a "mostly no"...?
The simplest method of doing this is a pixel average and downsample by a
factor of two.

I.e. an image made by (each pixel from line n + the corresponding pixel
from line n+1) / 2 (that is considering only one direction)?
But this is really the same as treating it as a "standard"
multi-sampling, i.e. with no offset, isn't it?

Then what about the various bilinears, biquadratics and bicubics?
Suffice to say that there isn't an exact method of
separating the resolution from the SNR gain.

Which is to say that the offset between each pair of scan lines can't
be really accounted for in software?
Half pixel realignment
isn't really a solution in these cases because it involves resampling
losses in itself which are likely to exceed any benefit that they are
intended to gain. Some blurring, up to a quarter of a pixel may be
advantageous.

Hm. Blurring, at what stage? Scans taken at 4800 and then resampled to
2400 (Photoshop, bicubic) look already blurrier than scans taken at
2400, as I said.
So, I take it you'd be blurring by 1/4 of a pixel and then
downsampling? But you'd still be downsampling with the method you
described above (average), rather than the standard functions in say
Photoshop, correct?

In any case I don't fully understand why you say that half-pixel
realignment isn't worth doing. I know the explanation would get
technical, but just tell me, shouldn't it be just as worthless when
done on multi-scans (the Don way, I mean, taking multiple scans and
then sub-pixel aligning them)?
The only difference is that, in "our" case, the amount of misalignment
is known. Which should even be an advantage, or shouldn't it?
Yes, this is essentially the opposite of what you are trying to do - put
more of the additional information of the double scan into increased
resolution rather than improved signal to noise ratio at the lower
resolution. Hence my comment that limited blurring may offer a benefit.

I see. But do you agree with me, in any case, that on the vertical
axis, the 4800 dpi of "resolution" are worthless as *resolution* and
much more useful as a substitute for multi-sampling (i.e. for improving
SNR)?

But anyway, what do you have to say about the unsharp masking -- which
I certainly consider doing on 2400x2400 scans?
My impression is that the standard, consumer-oriented Internet sources
say "just apply as much unsharp masking as you see fit".

But shouldn't there be an exact amount and radius of unsharp masking
that can be computed from the scanner's characteristics, seeing from
the things you said in the various threads (which I only very partially
understood, though)?

by LjL
(e-mail address removed)
 
ljlbox

Kennedy McEwen wrote:

I forgot one more thing I wanted to ask.

Assume I settle on a solution I like for downsampling my vertical 4800
dpi to 2400.
As I wrote in the other post, I'm trying to patch my scanner driver to
have it output 2400 dpi on the *horizontal* axis instead of
interpolated 4800, but I'm afraid I might not make it.

(SANE doesn't even natively support 4800x4800dpi with interpolated x
axis, I have to patch it for that; and then, doing 2400x4800dpi
*without* interpolated x axis looks very hard, because the driver isn't
really written with different x/y resolutions in mind.)

So, assuming I only get 4800x4800dpi with interpolated x axis, how do I
downsample that axis?
Bicubic resize in Photoshop gives blurrier data, *on the x axis* (also
on the y axis, but I've treated that in the other post), than a simple,
uninterpolated 2400x2400 scan does.

Must I know the exact interpolation algorithm used by my scanner, in
order to recover the original data? Or doesn't even that suffice, and
some data gets lost irremediably with the interpolation?

I suppose an interpolation that works as
pixel1 -- (pixel1 + pixel2)/2 -- pixel2 -- (pixel2 + pixel3)/2 --
pixel3 -- etc

should be easily "reversed". But I currently have no clue about the
interpolation used by my scanner, and don't know whether some
interpolation methods are "irreversible".

Is there even perhaps a safe bet about my scanner's algorithm, that is
do most or all scanners use a specific algorithm?
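
For what it's worth, if a scanner really did use the simple scheme
sketched above, reversing it would be trivial, because the original
samples pass through unchanged at every other position. A sketch under
that assumption (which may well not hold for any given scanner; an
interpolator that filters *every* output sample would not be reversible
this way):

import numpy as np

def interpolate_2x(row):
    # The scheme hypothesised above: p1, (p1+p2)/2, p2, (p2+p3)/2, ...
    out = np.empty(2 * len(row) - 1)
    out[0::2] = row
    out[1::2] = (row[:-1] + row[1:]) / 2.0
    return out

def recover(interpolated):
    # The original samples sit untouched at the even indices.
    return interpolated[0::2]

row = np.array([10.0, 20.0, 15.0, 30.0])
assert np.allclose(recover(interpolate_2x(row)), row)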


by LjL
(e-mail address removed)
 
Gordon Moat

Many flatbed scanners claim to offer a vertical resolution that is
twice the horizontal resolution, such as 2400x4800 dpi. I understand
this to mean that, while there are only 2400 cells in the CCD, the
stepping motor can move by steps of 1/4800th of an inch.

Additionally, these scanners' CCDs usually do not have a single row of
2400 cells, but two rows of 1200 each, which are positioned at a
half-pixel offset.

Now, if this is true (please confirm), don't we effectively have 4x
multi-sampling when scanning at 1200 dpi?

Actually, many linear CCDs are 8400 or 10200 cells (pixel sites), though
divided by three to give each colour Red, Green, and Blue. Kodak have some
nice White Papers on these.

So in theory an 8400 element linear CCD should be able to resolve 2800
dpi, and a 10200 element CCD should be able to do 3400 dpi. The reality is
that each pixel site is not that efficient, and only resolves a fraction
of the total possible. Often that can be 0.3 to 0.8 of the cell site for
commercial imagers. That would give us an actual best of 2720 dpi for the
10200 element CCD, and 2240 dpi for the 8400 element CCD.

You should be aware that there are linear CCDs in scanners that are less
than 8400 elements, and expect those to perform worse. The stepper motors
and scanner optics will affect resolution. The size of the cell site for a
linear CCD will affect resolution and colour. The scanner optics could
have the most effect on resolution, and often are the limiting factor in
low end and mid range gear.
There are several issues that I don't find clear.

First: when scanning at 1200 dpi, do scanners actually use both CCD
arrays and "mix" the results (I'm not simply saying "average" the
results, since it might be too simplistic given the half-pixel
offset), or do they only "turn on" one array?

Second: when scanning at 2400 dpi, do scanners give out pixels in the
order "1st pixel of 1st array | 1st pixel of 2nd array | 2nd pixel of
1st array | 2nd pixel of 2nd array", or do they somehow consider the
fact that nearby pixels overlap one another by half their width?
Of course, this also applies vertically, since while the motor moves by
1/2400th of an inch steps, pixels are 1/1200th of an inch "wide".

Third: when scanning at "4800" dpi, what do scanners do about the
horizontal resolution? Interpolation, I suppose. What kind of
interpolation? Does it vary from scanner to scanner?

Interpolation can happen at an up or down value. It is controlled by fixed
sets of algorithms determined by the scanner manufacturers. Obviously,
this would vary between companies. In short, there is not one answer to
your questions, since different scanners will arrive at final files by
using different methods.
And, do scanners that claim 2400x4800 resolution *really move the motor
by 1/4800th steps when instructed to scan at 4800 dpi*, or do they just
interpolate (since I know there are also other reasons for having
1/4800th stepping motors)? Does this vary from scanner to scanner?

Usually interpolated. Don't think this is all bad. While more resolution
and details might not be visible, overscanning can give smoother colour
transitions, since there are more final pixels in the resulting file. Of
course this only works if your printing output can use that extra
information.
Now, let's see how all this relates to multi-sampling.
[snip]

Multi-sampling is usually just done to decrease noise or sometimes to help
colour accuracy. The effectiveness of this will vary for each type of
scanner, each scanner manufacturer, and the software in use.
[snip]

Now you probably also see why I asked all those questions about scanner
behavior above, since to answer my doubts about multi-sampling one must
be aware of how the scanner really behaves, and whatever it does to the
data *before* giving them out to the user.

Perhaps this whole article can be "scaled down" to the question: is
scanning at 4800 dpi and then scaling down to 1200 dpi (with what?
bilinear, bicubic...) equivalent to 4x multi-sampling at 1200 dpi?
(Make substitutions between 4800, 2400 and 1200 above, and you'll get
the other possible scenarios)

Scanning at some multiple of the claimed resolution might improve your
scans, if that is what you are after with all this investigation. If you
really want to get technical, check out the Dalsa and Kodak web sites,
then find the White Papers for their linear CCDs. You will get far more
technical information that way, though maybe more than is practical.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com/technology.html>
 
ljlbox

Gordon Moat wrote:

[snip]

Scanning at some multiple of the claimed resolution might improve your
scans, if that is what you are after with all this investigation. If you
really want to get technical, check out the Dalsa and Kodak web sites,
then find the White Papers for their linear CCDs. You will get far more
technical information that way, though maybe more than is practical.

I don't want to get *too* technical.

In short, my scanner's got 2400 dpi horizontal. Sure, there are
complications: it's a "staggered" CCD, for one, and then all you've
written that I snipped (although I believe my scanner has three --
actually six -- linear CCDs, one for each color, not one -- actually
two -- very big linear CCD).

But let's just pretend for a moment that it's 2400 dpi optical, period.

What I want to do is scan at 4800 dpi in the *vertical* direction, i.e.
run the motor at "half-stepping". My scanner can do that.

The problem is twofold:

1) (the less important one) My scanner's software insists on
interpolating horizontally in order to fake 4800 dpi on both the x and
y axis, and I don't know how to "revert" this interpolation to get the
original data back (just downsampling with Photoshop appears to lose
something). But as you said, the interpolation algorithm varies between
scanners, so I'll have to find out what mine does, I suppose -- or,
hopefully, just manage to hack the open-source driver I'm using to
support 2400x4800 with no interpolation.

2) (the more important one) I, of course, don't want a 2:1 ratio image.
I just want 2400x2400, and use the "extra" 2400 I've got on the y axis
as one would use multi-sampling on a scanner supporting it. Yes, to get
better image quality and less noise, as you said.
But the question is, how to do it *well*?
I feel that I shouldn't just pretend I'm really multi-sampling (i.e.
taking two readouts for each scanline), because I am not. I ought to
somehow take into account the fact that each scanline is shifted by
"half a pixel" from the previous one.
Should I ignore this, and go on processing as if I were "really"
multi-sampling? Or should I downsample the image using bilinear,
bicubic, or something else more appropriate -- something that can take
the half-pixel offset into account?


I realize that simply downsampling the picture to 2400x2400 in
Photoshop or something gives decent results. But I'd just like to know
if there's something I'm missing.

In my mind, the "right" thing to do would be to consider the scan as
two separate scans (one made from the even scanlines, one made from the
odd scanlines); then merge the two images at a half-pixel offset. But
Kennedy said this is not such a great idea.
And in any case, even if Kennedy were wrong, I suppose there must be
some simpler transformation that gives the same result as the alignment
thing above... after all, it seems stupid to actually perform the
alignment and then the merging, when we know the misalignment is
exactly x=0, y=0.5.
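
One numerical way to see why the explicit alignment buys nothing: if the
y=0.5 shift is done with linear interpolation (my assumption here), the
shifted odd-line image is itself a two-tap average, so the merge ends up
weighting three consecutive scanlines as [1, 2, 1]/4 instead of two as
[1, 1]/2 -- a wider, softer kernel. A rough sketch on one column of
made-up data:

import numpy as np

col = np.arange(8, dtype=np.float64)   # one column of an oversampled scan
even, odd = col[0::2], col[1::2]

# Plain merge: average each even line with the adjacent odd line.
plain = (even + odd) / 2.0                  # [1, 1]/2 kernel

# Merge after shifting the odd lines by half a pixel (linear interp.).
odd_shifted = (odd[:-1] + odd[1:]) / 2.0
aligned = (even[1:] + odd_shifted) / 2.0    # [1, 2, 1]/4 kernel

print(plain)
print(aligned)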


All the other questions I posed in the original message were mostly
about how all this relates (if at all) to the fact that the CCD is
"staggered" (which in turn means that each sensor already overlaps
its neighbouring sensors by half their size -- or *about* half their size,
since, as you pointed out, things actually get a bit more complicated).


by LjL
(e-mail address removed)
 
ljlbox

Kennedy McEwen wrote:

Hey, I've come across an article by you (in "EPSON Scan wouldn't make
large files (>1000 MB)", 2004), where you say

--- CUT ---

You are right about resolution, even the theoretical resolution gain is
marginal and almost certainly well below the sample to sample production
variation. But I don't think there has ever been any question about the
noise reduction aspect - if you resample the image back to 3200ppi using
nearest neighbour resizing it is mathematically exactly the same as 2x
multiscanning. That yields exactly 1.414x noise reduction - and all in
a single pass with a scanner which formally doesn't provide
multisampling at all. With no significant resolution gain, the noise
reduction is just there in the image without resampling.

--- CUT ---

Is nearest neighbour resizing (though I've got no idea what it is! but
thankfully there's the Internet for that) what I am looking for?

I mean, "mathematically exactly the same as 2x multiscanning" is really
close to what I had in mind. Confirm?

But I think I can see some bad news, too, as in

--- CUT ---

Not always. Some, indeed most flatbeds these days, exploit what is
known as "half stepping" of the stepper motor drive. These half steps
are less precise than the full step and less robust because only half
the holding force is produced by the motor coils [...]

--- CUT ---

So does that mean that I might possibly be losing more than I gain by
half-stepping? Although I suppose that, at most, I would end up with a
scan whose geometry doesn't perfectly match that of the original... or?


by LjL
(e-mail address removed)
 
Kennedy McEwen

Kennedy McEwen wrote:


Oh but it wasn't a criticism of you

That's OK - I didn't take it that way and appreciate that you recognise
the value of the detail, something that seems to get overlooked more and
more these days.
When you talk about MTF and so on, I think I can grasp the basic ideas
behind those concepts, but can't really *understand* them to any
extent.

MTF is just a measure of the contrast that a particular component can
reproduce as a function of spatial frequency - it is the spatial
frequency response of the component - just like the graphs that used to
be printed on the back of audio tapes and hi-fi components showing their
response to audio frequencies. The main advantage of MTF in the
analysis of imaging systems is that the total response of a system is
exactly the product of all of the linearly combined individual
components. So with a knowledge of the components, you can derive an
exact measure of the system frequency response - and with a knowledge of
the frequency response you can predict the behaviour. You probably do
understand this, but I added it after writing a lot more about MTF than
I initially intended to later in this post.
But I can see two scenarios:
1) when there is no resolution advantage, is it really *exactly* the same
as multisampling, or does it lose some ground because of the misalignment?
or can the lost ground be re-gained with appropriate post-processing?

Well, it isn't *exactly* the same as multisampling, but the difference
is minimal and averages out. In the special case where there is no
spatial frequency higher than 0cy/mm in the image (a completely bland
and uniform scene), it clearly doesn't matter where in that scene
the samples are taken: they should all produce the same data, varying
only by the noise of the system. However, if the scene contains a
single low spatial frequency of, say, 1cy/mm then there will be a
systematic difference between samples taken at different phases of that
pattern - even though the spatial frequency is much lower than the
resolution of the basic single CCD line let alone the combination of the
two offset lines. However, since there is no correlation between the
sensor and the scene, that difference will be positive just as often as
it is negative and on average it will cancel out.

With clearly no resolution to be gained, this effect is negligible.
However you can see that with a spatial frequency at the limit of the
single line of sensors, ie. still no actual resolution to be gained,
then a second sample with a half pitch offset can differ by up to 50% of the
reproduced contrast level from the original sample. (for example, say
the original CCD line sampled the peak and troughs of the sine wave,
then the corresponding offset sample would be at the mid point, with 50%
level difference from either adjacent original - again averaging out to
zero. Obviously the contrast itself is only 64% of the peak due to the
finite width of the CCD cell being half the cycle of the sine wave, so
you are looking at a total possible error of 32% - and the lens reduces
this significantly further, perhaps to 2-3% at this spatial frequency.)
The noise, however, is always reduced by the square root of the number
of samples used.
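
That square-root law is easy to check numerically; a throwaway sketch
with synthetic Gaussian noise (all values made up):

import numpy as np

rng = np.random.default_rng(0)
# Four noisy readings of the same (zero) signal, unit noise each.
readings = rng.standard_normal((4, 100000))

print(readings.std())               # ~1.0: noise of a single sample
print(readings.mean(axis=0).std())  # ~0.5: reduced by sqrt(4)
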
2) when there *is* resolution advantage, can the multisampling
advantage be exploited *together* with the resolution advantage, or must a
choice be made?
I don't see what you mean by "a choice being made" - it is merely
increased data sampling and you can increase the frequency response of
the system through post processing to maximise the resolution gain at
the expense of signal to noise ratio or decrease the frequency response
to maximise the SNR at the expense of resolution.

The fact that the sensors overlap in object space is not really an issue
- in fact, the individual sensors of *ALL* scanners and digital cameras
can be considered to overlap in the same way due to the blurring effect
of the lens (you can view this as blurring the image of the scene on the
sensor and as blurring the image of the sensor on the scene). It just
comes down to the component MTFs and the sampling density employed as to
how significant that "overlap" appears relative to the samples used.

One analogy that may help you visualise this is to consider the linear
CCD, not as an array of individual identical sensors, but as a single
cell which scans the image along the axis of the CCD. This single cell
will produce a signal continuous waveform as it scans along the axis of
the CCD. If that waveform is now sampled only at the precise positions
where the cells in the original CCD exist then the resulting sampled
waveform will be indistinguishable from the output of the original CCD.

Now, that probably doesn't seem to make much difference initially, but
since the result is the same then the same equation describes the
waveform. The waveform of the scanned single element is simply the
convolution of the image projected onto it by the lens and the point
spread function of the single element - effectively its width. This
corresponds exactly to the product of the fourier transform of the image
(ie. its representation as a series of spatial frequencies as reproduced
by the lens) and the MTF of the individual cell. So now we have a
spatial frequency representation of the continuous waveform of the
single scanned cell - the fourier transform of the waveform. The
sampling process is simply multiplying the waveform by a series of delta
functions at the sampling positions, which corresponds to convolving the
fourier transform with a series of delta functions at the sampling
frequency and its harmonics. (This is the source of the aliasing etc.
where the negative components in frequency space appear as positive
frequencies when their origin is shifted to the first sampling frequency
- but that is another issue.)

So we can derive an equation to describe the output of the linear CCD by
considering it as a sampled version of a single scanned element. The
real advantage is that this equation is not restricted by the physical
manufacturing limitations of the CCD - there is no relationship between
the pixel size and pitch inherent in that equation. The cell dimension
can be very small or very large compared to the sampling frequency - the
equation remains unchanged.

For a square cell of width a, the MTF is readily computed to be
sin(pi.a.f)/(pi.a.f) [ensuring you calculate the sin in radians]. You
might like to plot out a few of these curves for different sizes of
cell. A cell from a single line 1200ppi CCD will have an effective cell
width of around 20um, a cell from a single line 2400ppi CCD will have a
width of around 10um. What you should see from this exercise is that
changing the cell width only changes the spatial frequency response of
the system. This is completely independent of the sampling density -
the size of the CCD cell is just a spatial filter with a particular
frequency response, just the same as the lens itself is another such
filter with known frequency response. Unlike the CCD cellular MTF, the
lens has a finite MTF (meaning that it falls to zero at a particular
spatial frequency and stays there at higher frequencies than this
cut-off). One of the rules of fourier transforms is that finite
functions on one side of the transform result in infinite functions on
the other side - so, while the CCD cell has a finite dimension and
spread it has an infinite frequency response (albeit at low amplitude),
the lens has a finite frequency response and consequently an infinite
spreading effect on the image (albeit at low amplitude). Hence my
earlier comment that no optical scanner actually has sensors which
do not overlap to some degree. All that is different is how much
response remains in the system at the sampling density - ideally,
invoking Nyquist, there should be no response to frequencies greater
than half the sampling density.

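If anyone wants to try that exercise, here is a short sketch using the
sin(pi.a.f)/(pi.a.f) formula above (numpy's sinc is exactly that
normalised form; the 20um and 10um widths are the figures quoted, and
the frequency range is an arbitrary choice):

import numpy as np
import matplotlib.pyplot as plt

f = np.linspace(0.0, 100.0, 401)           # spatial frequency, cy/mm
for a_um, label in [(20.0, "20um cell (single line 1200ppi)"),
                    (10.0, "10um cell (single line 2400ppi)")]:
    mtf = np.sinc(a_um * 1e-3 * f)         # sin(pi*a*f)/(pi*a*f)
    plt.plot(f, mtf, label=label)
plt.xlabel("spatial frequency (cy/mm)")
plt.ylabel("MTF")
plt.legend()
plt.show()
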
With a scanner which has a staggered CCD or half steps the linear CCD in
the scan axis, all that is happening is that you move the sampling
frequency further up the MTF curve - where the contrast reproduced by
the lens and the CCD itself is less. So there really isn't a choice to
be made that is any different from how you would treat a full stepped
4800ppi scanned image to how you would treat a half stepped 4800ppi
image - they both behave exactly the same.
There is also a post by you where you say that half-stepping on the
vertical axis is next to useless, at least concerning resolution.
Yes, for the reasons provided above. Once you include the optic MTF
and the CCD cell MTF and then plot where the sampling frequency is, it
is clear that all of the spatial frequencies that can be resolved by the
system are done so well before the advantage of half stepping
(effectively increasing the sampling density by x4) is realised.
But I can clearly see that it *is* useful in terms of noise reduction,
just by taking a scan at 2400x4800 (and then downsampling the 4800) and
one at 2400x2400.
Yes, because the resolution benefit is negligible, so all of the
additional information is simply noise reduction.
When half-stepping, scanners usually interpolate on the horizontal axis
to get a 1:1 ratio. This I don't like (and in fact I'm trying to modify
my SANE driver accordingly): I'd like to take a purely 2400x4800 scan,
and then downsample *appropriately* on the vertical axis.
That would be an average of each of the two 4800ppi samples.
My scans at 1200x1200 are awfully noisy; those at 2400x2400 are better,
but I certainly do appreciate the benefit of 2400x4800, at least for
some pictures.
Yes, 2400x2400ppi downsampled to 1200x1200ppi will have a x2 improvement
in SNR, assuming that the noise is not limited by bit depth you use in
the process. 2400x4800ppi down to 1200x1200ppi should provide about
x2.8 in SNR.
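(The arithmetic behind those figures: 2400x2400ppi down to 1200x1200ppi
averages 2x2 = 4 samples per output pixel, and sqrt(4) = 2; 2400x4800ppi
down to 1200x1200ppi averages 2x4 = 8 samples, and sqrt(8) is about 2.8.)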
What worries me is the "nominally the same data" part. It's not
nominally the same data in the real world, unless the original is of a
much lower resolution than the sampling rate.
It's *almost* the same data, but shifted -- half a pixel horizontally
(double CCD), and 1/4 of a pixel vertically (half-stepping).
It is shifted, but that is just a higher frequency sample. The shift in
sample position will only produce a difference in signal if there is a
resolution gain to be obtained - what you are trying to do is forego any
resolution benefit for the SNR benefit.
So, I'm under the impression that scanning at 2400x4800 (let's talk
about the half-stepping and ignoring the double CCD)
- they are both the same thing in principle.
and then
downsampling the vertical axis gives me a less noisy, but blurrier
image than scanning at 2400x2400.
The slight loss in doing this is due to the change in the MTF of the
system. If you average two cells from a 1200ppi line that are offset by
1/4800ppi then there is a slight increase in the overall size of the
cell - but this is marginal in the scheme of things. A rough guide of
how significant it is can be seen by examining the MTF of a cell 1/1200" wide
at a spatial frequency of 4800ppi, the shift that is present. A more
detailed assessment of the MTFs shows that the difference at the
limiting resolution of the 1200ppi image is only 3%, and less than 1% at
4800ppi, confirming that the shift itself is negligible as is the
resolution gain.
This wouldn't happen with "real" multi-sampling, i.e. samples taken at
exactly the same position. Question is, is there a software fix for
this? I'm taking your answer, below, as a "mostly no"...?
No, not at all, just that re-alignment isn't it - there are too many
losses in the process for it to yield a worthwhile benefit.
I.e. an image made by (all pixels from line n + every pixel from line
n+1) / 2 (that is considering only one direction)?
But this is really the same as treating it as a "standard"
multi-sampling, i.e. with no offset, isn't it?

Yes, because when the sampling density is this much higher than the
resolution of the system that shift is no longer significant.
Then what about the various bilinears, biquadratics and bicubics?
Just different downsampling algorithms - the difference between them
swamping the effect that this minor shift has. In effect these are
interpolations with different frequency responses - the higher the order
the flatter the frequency response and the sharper its cut-off, so the better
the result.
Which is to say that the offset between each pair of scan lines can't
be really accounted for in software?
Exactly - but it can be very closely approximated.
In any case I don't fully understand why you say that half-pixel
realignment isn't worth doing. I know the explanation would get
technical, but just tell me, shouldn't it be just as worthless when
done on multi-scans (the Don way, I mean, taking multiple scans and
then sub-pixel aligning them)?

What you are doing is not the same as what Don is trying to achieve. You
are multisampling, which means the noise throughout the density range of
the image reduces by the square root of number of samples. Don is
extending the dynamic range of the image directly with an improvement in
the noise only in the extended region which is directly proportional to
the scale of the two exposures. These are very different effects for
very different applications. Don's technique, for example, is very
useful with high contrast and density originals, but offers no advantage
with low contrast materials such as negatives. Conversely,
multiscanning offers the same, albeit reduced, advantage to both. For
example, Don's technique can extend the effective scan density by, say,
10:1 (increasing the Dmax by 1) reducing the noise in the shadows by the
same amount, in only two exposures. Multiscanning will only reduce the
noise by 29% (ie. to 71% of its original level) with two exposures, or 68% (ie.
to 32% of its original level) with 10 exposures.

Since the benefits of multiscanning are much less direct, being only a
square root function, the susceptibility of that benefit to unnecessary
processing losses is consequentially higher.
The only difference is that, in "our" case, the amount of misalignment is known.

I see. But do you agree with me, in any case, that on the vertical
axis, the 4800 dpi of "resolution" are worthless as *resolution* and
much more useful as a substitute for multi-sampling (i.e. for improving
SNR)?
Absolutely! (and have stated as much on several occasions).
But anyway, what do you have to say about the unsharp masking -- which
I certainly consider doing on 2400x2400 scans?
My impression is that the standard, consumer-oriented Internet sources
say "just apply as much unsharp masking as you see fit".

But shouldn't there be an exact amount and radius of unsharp masking
that can be computed from the scanner's characteristics, seeing from
the things you said in the various threads (which I only very partially
understood, though)?
Yes, there should be and there is. There is also an exact amount of USM
that is required for any particular output, whether screen or print and
that changes with scale etc. The general advice of "apply as required"
is usually given because estimating the exact amount of sharpening (not
just USM) to compensate for the scanner, printer (and any loss in the
original, such as the camera lens etc.) is extremely complex.
 
Kennedy McEwen

Kennedy McEwen wrote:

Hey, I've come across an article by you (in "EPSON Scan wouldn't make
large files (>1000 MB)", 2004), where you say

--- CUT ---

You are right about resolution, even the theoretical resolution gain is
marginal and almost certainly well below the sample to sample production
variation. But I don't think there has ever been any question about the
noise reduction aspect - if you resample the image back to 3200ppi using
nearest neighbour resizing it is mathematically exactly the same as 2x
multiscanning. That yields exactly 1.414x noise reduction - and all in
a single pass with a scanner which formally doesn't provide
multisampling at all. With no significant resolution gain, the noise
reduction is just there in the image without resampling.

--- CUT ---

Is nearest neighbour resizing (though I've got no idea what it is! but
thankfully there's the Internet for that) what I am looking for?
That depends on the version you use. Direct nearest neighbour
downsampling would not help since that just throws away the unused data.
However many applications prefilter the data prior to downsampling, if
not by exactly the correct blur average, by something very close to it.
The exact function would be the 2:1 pixel average in each axis that you
are downsampling by.
--- CUT ---

Not always. Some, indeed most flatbeds these days, exploit what is
known as "half stepping" of the stepper motor drive. These half steps
are less precise than the full step and less robust because only half
the holding force is produced by the motor coils [...]

--- CUT ---

So does that mean that I might possibly be losing more than I gain by
half-stepping? Although I suppose that, at most, I would end up with a
scan whose geometry doesn't perfectly match that of the original... or?
No, you won't end up with bad geometry from this, just that the
precision of that half pixel shift is not constant - it may be 2/3rds of
a pixel on one step and 1/3rd on the next - or 3 at 7/16th followed by 1
at 1/4. This is just another reason why attempting subpixel alignment
for this benefit is likely to cause more pain than gain.
 
Kennedy McEwen

Gordon Moat said:
Actually, many linear CCDs are 8400 or 10200 cells (pixel sites), though
divided by three to give each colour Red, Green, and Blue. Kodak have some
nice White Papers on these.
Not generally - colour linear CCDs used in scanners are generally
tri-linear. Each colour is a separate parallel line of CCDs, they are
not divided.

Check the explanation of linear and tri-linear CCDs at
http://www.kodak.com/global/en/service/professional/tib/tib4131.jhtml?id=0.1.14.34.5.10&lc=en

The entire Kodak inventory of linear CCDs is listed at
http://www.kodak.com/global/en/digital/ccd/products/linear/linearMain.jhtml
and not one has the interleaved colour structure you describe.

In addition, neither the Sony inventory of linear CCDs listed at
http://products.sel.sony.com/semi/ccd.html#CCD Linear Sensors, nor
the NEC product inventory at
http://www.necel.com/partic/display/english/ccdlinear/ccdlinear_list.html
nor the Fairchild site at
http://www.fairchildimaging.com/main/prod_fpa_ccd_linear.htm has
anything similar.

Now, I am not saying these devices don't exist, but I would like some
pointer as to where you are getting this information from since it is
not from the Kodak or Dalsa sites you reference, and more likely to be a
misunderstanding on your part. Whilst there may well be colour
interleaved linear CCDs these are certainly not used on any commercial
scanners that I am aware of.
So in theory an 8400 element linear CCD should be able to resolve 2800
dpi, and a 10200 element CCD should be able to do 3400 dpi.

A colour CCD with a total of 8400 elements would only be capable of
resolving 2800 colour samples across the A4 page - somewhat better than
300ppi - whilst your 10200 element colour CCD would only be capable of
400ppi! The real requirements for flatbed scanners are *much* higher
than these!

An A4 scanner with a 1200ppi capability has a tri-linear CCD with around
10,500 cells in *each* line, ie. a total of more than 31,000 cells. A
4800ppi full page scanner requires a CCD with more than 42000 cells in
each line, a total of over 125,000 cells.

cf. http://www.necel.com/nesdis/image/S17546EJ1V0DS00.pdf for data on
such a device, where each line is in itself produced by having four real
lines offset by a quarter of a pixel pitch. Guess which scanner that's
in! ;-)
check out the Dalsa and Kodak web sites,

Dalsa don't make linear CCDs (in fact they don't design CCDs - all of
their products are identical in form, function and nomenclature to
Philips devices - even the data sheets are Philips with a DALSA sticker
over the top!).

Interleaved colours (by Bayer masking) is common on two dimensional CCDs
(indeed, Bayer was a Kodak employee!) but this is unnecessary in linear
devices. I suspect that you are confusing the two.
 
Gordon Moat

Gordon Moat wrote:

[snip]

Scanning at some multiple of the claimed resolution might improve your
scans, if that is what you are after with all this investigation. If you
really want to get technical, check out the Dalsa and Kodak web sites,
then find the White Papers for their linear CCDs. You will get far more
technical information that way, though maybe more than is practical.

I don't want to get *too* technical.

Though you want to hack the driver. ;-)
In short, my scanner's got 2400 dpi horizontal. Sure, there are
complications: it's a "staggered" CCD, for one, and then all you've
written that I snipped (although I believe my scanner has three --
actually six -- linear CCDs, one for each color, not one -- actually
two -- very big linear CCD).

If it is moving the optics, and not the CCD, then it has a three or four row
CCD with RGB filtering over it. If it is moving the CCDs, then it could be
several. Of course, you could crack it open and find out. ;-)
But let's just pretend for a moment that it's 2400 dpi optical, period.

You would be lucky for it to be much better than half that, but for the
sake of discussion . . .
What I want to do is scan at 4800 dpi in the *vertical* direction, i.e.
run the motor at "half-stepping". My scanner can do that.

The problem is twofold:

1) (the less important one) My scanner's software insists on
interpolating horizontally in order to fake 4800 dpi on both the x and
y axis, and I don't know how to "revert" this interpolation to get the
original data back (just downsampling with Photoshop appears to lose
something). But as you said, the interpolation algorithm varies between
scanners, so I'll have to find out what mine does, I suppose -- or,
hopefully, just manage to hack the open-source driver I'm using to
support 2400x4800 with no interpolation.

Make that a threefold problem . . . how and what do you plan to use to view
that image? In Photoshop, you would view 2400 by 4800 as a rectangle; if all
the information were meant for 2400 by 2400 viewing, you would have a square;
and if you keep a square image with a 2:1 pixel ratio, it will be
viewed as a stretched rectangle. This is similar to a problem that comes up
in video editing for still images; video uses non-square pixels, so the
square pixel still images need to be altered to fit a non-square video
display.
2) (the more important one) I, of course, don't want a 2:1 ratio image.
I just want 2400x2400, and use the "extra" 2400 I've got on the y axis
as one would use multi-sampling on a scanner supporting it. Yes, to get
better image quality and less noise, as you said.
But the question is, how to do it *well*?

Or how to actually still view it as a square image.
I feel that I shouldn't just pretend I'm really multi-sampling (i.e.
taking two readouts for each scanline), because I am not. I ought to
somehow take into account the fact that each scanline is shifted by
"half a pixel" from the previous one.
Should I ignore this, and go on processing as if I were "really"
multi-sampling? Or should I downsample the image using bilinear,
bicubic, or something else more appropriate -- something that can take
the half-pixel offset into account?

Perhaps using some high end video editing software would get you closer,
since you could work directly with non-square pixels.
I realize that simply downsampling the picture to 2400x2400 in
Photoshop or something gives decent results. But I'd just like to know
if there's something I'm missing.

In my mind, the "right" thing to do would be to consider the scan as
two separate scans (one made from the even scanlines, one made from the
odd scanlines); then merge the two image at an half-pixel offset. But
Kennedy said this is not such a great idea.
And in any case, even if Kennedy were wrong, I suppose there must be
some simpler transformation that gives the same result as the alignment
thing above... after all, it seems stupid to actually perform the
alignment and then the merging, when we know the misalignment is
exactly x=0, y=0.5.

Okay, just a side note on technology. Canon came up with a half pixel shift
idea in 3 CCD video several years ago. Panasonic and Sony tried something
similar, but basically gave up on it on professional 2/3" 3 CCD cameras. The
Canon idea was to slightly alter the spacing to enhance edge resolution, and
chose green since it corresponds to how human eyes like to view things. Then
the in-camcorder processing put all that back together as a real image. I
don't know of a way to separate out the original capture information, unless
you got that prior to in-camera processing.
All the other questions I posed in the original message were mostly
about how all this relates (if at all) to the fact that the CCD is
"staggered" (which in turn means that each sensor already overlaps
its neighbouring sensors by half their size -- or *about* half their size,
since, as you pointed out, things actually get a bit more complicated).

I have not heard of anyone outside of Canon still using a staggered idea. I
think Microtek may have tried it, or possibly UMAX. In order to really do
something different with that, much like the video example above, it seems
you would need to get the electronic signal directly off the CCD prior to any
in-scanner processing of the capture signal. Basically that means hacking
into the scanner. I don't see how that would be practical; even if you came
up with something, you still have a low cost scanner with limited optical
(true) resolution and colour abilities.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
Gordon Moat

Kennedy said:
Not generally - colour linear CCDs used in scanners are generally
tri-linear. Each colour is a separate parallel line of CCDs, they are
not divided.
[snip]
Now, I am not saying these devices don't exist, but
I would like some
pointer as to where you are getting this information from since it is
not from the Kodak or Dalsa sites you reference, and more likely to be a
misunderstanding on your part. Whilst there may well be colour
interleaved linear CCDs these are certainly not used on any commercial
scanners that I am aware of.

Okay, maybe I should have stated that better. So I will give you one to find
and read about. That is the Kodak KLI-10203 Imaging Sensor. It is correctly
termed a 3 x 10200 imager, so I apologize for not being more thorough in my
description of it. The white paper and long spec sheet for this one is 27
pages, so I will skip typing the details in this message.
A colour CCD with a total of 8400 elements would only be capable of
resolving 2800 colour samples across the A4 page - somewhat better than
300ppi - whilst your 10200 element colour CCD would only be capable of
400ppi! The real requirements for flatbed scanners are *much* higher
than these!

If you could figure out what scanner uses the KLI-10203, then you might be
surprised at your statements. Just to give you a hint, it is only available in
a few high end products. The lowest spec (and lowest cost) of those does 3200
dpi true resolution. That is across the entire bed, and not just down the
middle.
An A4 scanner with a 1200ppi capability has a tri-linear CCD with around
10,500 cells in *each* line, ie. a total of more than 31,000 cells. A
4800ppi full page scanner requires a CCD with more than 42000 cells in
each line, a total of over 125,000 cells.

Okay, just to throw out some numbers, and then you can do calculations, or
whatever. Using the KLI-10203 again, the cell sites are 7 µm square pixels.
There are 3 rows of 10200 cells each, so 30600 total cells. Row spacing is 154
µm centre to centre. There is no sideways offset of cells in each row, and the
spacing allows a processing timing gap of 22 lines.
cf. http://www.necel.com/nesdis/image/S17546EJ1V0DS00.pdf for data on
such a device, where each line is in itself produced by having four real
lines offset by a quarter of a pixel pitch. Guess which scanner that's
in! ;-)


Dalsa don't make linear CCDs (in fact they don't design CCDs - all of
their products are identical in form, function and nomenclature to
Philips devices - even the data sheets are Philips with a DALSA sticker
over the top!).

Dalsa bought out the Philips imaging chip business, though they kept some
engineers and other workers. Is it still possible to buy imaging chips directly
from Philips? Anyway, they do have some nice information on chips on their
website. Fill Factory in Belgium are another company with some nice technical
information. With Sony, I have not been very impressed with the level of
information from them, though they do make lots of imaging chips for lots of
companies.
Interleaved colours (by Bayer masking) is common on two dimensional CCDs
(indeed, Bayer was a Kodak employee!) but this is unnecessary in linear
devices. I suspect that you are confusing the two.

Okay, to be more specific, each row on a linear CCD has a colour filter over
it. In other words, on our KLI-10203 example, one row has a red filter, one row
has a blue filter, and one row has a green filter. There is no need for a Bayer
pattern, since one row is scanned at a time, and the final result is three
colour channels of information.

There are 3 CCD digital still cameras, and they do not use Bayer pattern
filters either. They do use one overall colour filter over the surface of each
of the three chips. The result again is three colour channels of information.

Bayer patterning is an arrangement of colour filters over each pixel on one imaging
chip. Basically, the patterns will vary across manufacturers, though usually
RGBG with twice as many green filtered pixels as red or blue. The information
to create three colour channels of information is interpolated (often by in
camera processing). This is unlike scanning, or 3 CCD stills cameras.

Okay, so I don't recall mentioning interleaving, but interpolation was
mentioned, though only for upsizing or downsizing to change resolution. The OP
wants to use what he thinks might be extra resolution in one dimension of the
specifications for his scanner.

An exception to colour filtering is in many Nikon film scanners, since they use
coloured LEDs as a light source. I would suspect those are Sony imaging chips
in those Nikon scanners. While many do like the LED approach, it is interesting
to note that it is not done in any high end scanning systems. I doubt it is
some patent issue; more likely a single light source provides a more
predictable scanning operation in regards to colour accuracy over the life of
the scanner.

Anyway, I apologize for not being more clear: a 10200 linear CCD should be
correctly termed a 3 x 10200 element linear CCD. Regardless, the resolution is
still limited by the physical size of the cell site, the scanner optics, and
the accuracy of movement of the imaging components within the scanner. A linear
image sensor with a single array of 1000 photosites at a 10 µm pitch would have
a native resolution of 2540 dpi (1000 photosites / (1000 x 0.01 mm x 1"/25.4
mm) = 2540). If that sensor were used in an optical system to image an 8" wide
document, then the resolution in the document plane would be 125 dpi (1000
pixels / 8"). If we consider the 7 µm cell size of the KLI-10203, for example,
then we can make the same estimate for that imager.
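(The same arithmetic as a small C sketch, using only the figures quoted
above:)

#include <stdio.h>

/* Sanity check of the two figures above: native dpi from photosite
   pitch, and document-plane dpi once the line is imaged across 8". */
int main(void) {
    double cells = 1000.0;
    double pitch_mm = 0.01;                          /* 10 um pitch */
    double line_in = cells * pitch_mm / 25.4;        /* sensor line length, inches */
    printf("native: %.0f dpi\n", cells / line_in);   /* -> 2540 dpi */
    printf("8\" document: %.0f dpi\n", cells / 8.0); /* -> 125 dpi */
    return 0;
}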

Scanner optics are still the most limiting factor, and are often the main
contributor to the limit on true optical resolution. Trying to find
information about scanner optics is tough, though there is a little
available from Rodenstock and a few other companies. Interestingly, it is much
easier to find information on high end systems, and nearly impossible to
find useful information on low end systems. Maybe that is just the way it
should be.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
L

ljlbox

Kennedy McEwen wrote:
[snip]
[snip]

Is nearest neighbour resizing (though I've got no idea what it is! but
thankfully there's the Internet for that) what I am looking for?
That depends on the version you use. Direct nearest neighbour
downsampling would not help since that just throws away the unused data.
However many applications prefilter the data prior to downsampling, if
not by exactly the correct blur average, by something very close to it.
The exact function would be the 2:1 pixel average in each axis that you
are downsampling by.

Oh. Which is what you suggested to me in other messages, isn't it?
Well, I suppose I'll settle for that then.

In the end I don't really care what applications do, since I'm going to
write my own little program to do this -- I need to do that for other
reasons, anyway.

So, just out of curiosity, nearest neighbour after applying the
"correct blur average" corresponds to a 2:1 pixel average?

Just one more time, to be sure, what you're telling me to do is

for (int y = 0; y < OriginalHeight/2; y++) {
    for (int x = 0; x < OriginalWidth; x++) {
        // average each vertical pair of pixels into one output pixel
        NewImage[x, y] = (OldImage[x, y*2] + OldImage[x, y*2+1]) / 2;
    }
}

[snip: half-stepping the motor is not very precise, and although it
doesn't result in bad geometry, it contributes to making
sub-pixel alignment worthless]

I see. Good to know.


by LjL
(e-mail address removed)
 
K

Kennedy McEwen

Gordon Moat said:
Okay, maybe I should have stated that better. So I will give you one to find
and read about. That is the Kodak KLI-10203 Imaging Sensor. It is correctly
termed a 3 x 10200 imager, so I apologize for not being more thorough in my
description of it.

The KLI-10203 is a tri-linear CCD (check the FIRST line of the data
sheet!) - *each* of the lines is 10200 cells long and *each* of the
lines is a separate colour - no interleaving. So, contrary to your
claim that this could only resolve 3400ppi because it has 3 colours in
each line, it can resolve 5100 cycles along its length. Without optical
scaling it resolves 3600ppi; with optical scaling (as would be used in a
scanner application) this can be set to match whatever the scanner width
is - on the 8.5in flatbed scanner configuration that the OP is
referencing, it would produce around 1200ppi.
If you could figure out what scanner uses the KLI-10203, then you might be
surprised at your statements.

I don't think so, mainly since the statement is based on *YOUR* figures
that the 10200-pixel CCD is only capable of 3400ppi! Perhaps you see now
why it was ridiculous? And before you wriggle further - KODAK DON'T MAKE
A 3400-PIXEL LONG TRILINEAR (10200 TOTAL CELLS) CCD AND NEVER HAVE!!

An A4 flatbed scanner, as the type under discussion in this thread,
means a scan width of approximately 8.5"; 10200 pixels across that
distance yields exactly 1200ppi - no division by three, because the
colours are on three separate lines of 10200 cells *each*, not
interleaved on a single line as you suggested.
Just to give you a hint, it is only available in
a few high end products. The lowest spec (and lowest cost) of those does 3200
dpi true resolution. That is across the entire bed, and not just down the
middle.

Not across the full A4 width on a single pass it isn't. To achieve
3200ppi resolution requires a scan width of no greater than 3.2" -
around a third of the width of the flatbed under discussion!
Okay, just to throw out some numbers, and then you can do calculations, or
whatever. Using the KLI-10203 again, the cell sites are 7 µm square pixels.
There are 3 rows of 10200 cells each, so 30600 total cells. Row spacing is 154
µm centre to centre. There is no sideways offset of cells in each row, and the
spacing allows a processing timing gap of 22 lines.
And how much of that determines the ppi of the final application? Hint
- nothing, but now we know you can read a data sheet!

So where does *your* figure of 3400ppi limitation for this particular
device come from - apart from your initial misreading of the data?
Dalsa bought out the Philips imaging chip business, though they kept some
engineers and other workers. Is it still possible to buy imaging chips directly
from Philips?

Certainly was the last time I tried, which I believe was earlier this
year although time flies.
Anyway, they do have some nice information on chips on their
website.

They do, but *none* of them are linear arrays and making inferences from
the limitations of 2-D arrays, particularly colour arrays, on linear
devices is misleading at best and completely deceptive at worst. For
example, DALSA's biggest array is only 5344 pixels along the largest
axis - but you wouldn't interpret that as state of the art for a linear
array!
Okay, to be more specific, each row on a linear CCD has a colour filter over
it.

Precisely - but that isn't what you wrote last time! You stated that
the 3 colours resulted in a resolution of only one third of the number
of pixels in the line.

Okay, so I don't recall mentioning interleaving, but interpolation was
mentioned, though only for upsizing or downsizing to change resolution. The OP
wants to use what he thinks might be extra resolution in one dimension of the
specifications for his scanner.
No he doesn't - or at least that isn't what he has asked about. He is
interested in using the available samples in two axes, which do not
provide as much resolution as he would like, as a means of achieving
improved signal to noise at a lower resolution.

The CCD in his case is similar to the NEC uPD8880 device, a trilinear
array with 21360 cells in each colour, capable of producing 2400ppi
across an A4 platform. Each of the colour lines comprises two rows of
10,680 cells capable of reproducing 1200ppi on the flatbed, but offset
by half a pixel pitch to create a 2400ppi sample density. In addition,
the scanner motor is capable of moving the scan head in 4800ppi steps,
further oversampling the original pixels. He is interested in using
these oversamples optimally for signal to noise improvement at 2400ppi
and possibly as low as 1200ppi rather than have some of their
information being used to achieve resolution which is already
compromised by the optical system of the scanner.
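(If the hardware simply interleaves the two staggered rows, the 2400ppi
line is just alternating samples from the two 1200ppi rows. A minimal C
sketch, assuming that straightforward even/odd ordering - the function
and array names are mine, not from any data sheet:)

/* Interleave two 1200ppi rows, staggered by half a pixel pitch, into
   one 2400ppi output line - assuming simple even/odd sample ordering. */
void interleave_rows(const unsigned short *rowA, const unsigned short *rowB,
                     unsigned short *out, int cells_per_row)
{
    for (int i = 0; i < cells_per_row; i++) {
        out[2 * i]     = rowA[i]; /* row A supplies the even samples */
        out[2 * i + 1] = rowB[i]; /* half-offset row B supplies the odd ones */
    }
}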
An exception to colour filtering is in many Nikon film scanners, since they use
coloured LEDs as a light source. I would suspect those are Sony imaging chips
in those Nikon scanners.

You would be wrong.
While many do like the LED approach, it is interesting
to note that it is not done in any high end scanning systems.

Wrong again! It is exactly the process used in high end film scanner
systems - the difference being that the LEDs are replaced with colour
lasers to achieve a higher intensity and thus a faster throughput.
I doubt it is some
patent issue, and more likely that a single light source provides a more
predictable scanning operation in regards to colour accuracy over the life of
the scanner.

Anyway, I apologize for not being more clear: a 10200 linear CCD should be
correctly termed a 3 x 10200 element linear CCD. Regardless, the resolution is
still limited by the physical size of the cell site, the scanner optics, and
the accuracy of movement of the imaging components within the scanner. A linear
image sensor with a single array of 1000 photosites at a 10 µm pitch would have
a native resolution of 2540 dpi (1000 photosites / (1000 x 0.01 mm x 1"/25.4
mm) = 2540). If that sensor were used in an optical system to image an 8" wide
document, then the resolution in the document plane would be 125 dpi (1000
pixels / 8"). If we consider the 7 µm cell size of the KLI-10203, for example,
then we can make the same estimate for that imager.
You don't need to go round the houses - the calculation is trivial. An
8.5in scan width with 10200 cells per line (no matter what the optical
system or the cell size or pitch is) results in 10200/8.5 = 1200ppi.
 
K

Kennedy McEwen

Kennedy McEwen wrote:
[snip]
[snip]

Is nearest neighbour resizing (though I've got no idea what it is! but
thankfully there's the Internet for that) what I am looking for?
That depends on the version you use. Direct nearest neighbour
downsampling would not help since that just throws away the unused data.
However many applications prefilter the data prior to downsampling, if
not by exactly the correct blur average, by something very close to it.
The exact function would be the 2:1 pixel average in each axis that you
are downsampling by.

Oh. Which is what you suggested to me in other messages, isn't it?
Well, I suppose I'll settle for that then.

In the end I don't really care what applications do, since I'm going to
write my own little program to do this -- I need to do that for other
reasons, anyway.

So, just out of curiosity, nearest neighbour after applying the
"correct blur average" corresponds to a 2:1 pixel average?
Yes - assuming you are downsampling 2:1 in that axis, as discussed.
Just one more time, to be sure, what you're telling me to do is

for (int y = 0; y < OriginalHeight/2; y++) {
    for (int x = 0; x < OriginalWidth; x++) {
        // average each vertical pair of pixels into one output pixel
        NewImage[x, y] = (OldImage[x, y*2] + OldImage[x, y*2+1]) / 2;
    }
}
Seems OK.
That'll give you half as many y pixels in the new image as the old, but
with an SNR about x1.4 (sqrt 2, from averaging two samples) that of the
old - assuming that the bit depth in NewImage[x,y] is adequate to avoid
overflow prior to that divide-by-2 step.

You could go further with a reduction to 1200x1200ppi by summing 4y and
2x pixels for each NewImage[x,y] and achieve an SNR improvement of x2.8,
but you need to maintain adequate temporary precision to compute the sum
of 8 pixels, before dividing and truncating, to get the final result.
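(A minimal C sketch of that 4y x 2x reduction, assuming 8-bit input; the
function and array names are mine, and the int accumulator supplies the
temporary precision mentioned above:)

/* 1200x1200ppi reduction: sum a 2-wide x 4-tall block of input pixels
   in a wide temporary, then divide once at the end. Noise averages
   down by sqrt(8), i.e. about x2.8 SNR. Output is (w/2) x (h/4). */
void reduce_2x4(const unsigned char *old, int w, int h, unsigned char *out)
{
    for (int y = 0; y < h / 4; y++) {
        for (int x = 0; x < w / 2; x++) {
            unsigned int sum = 0; /* holds up to 8 * 255 without overflow */
            for (int dy = 0; dy < 4; dy++)
                for (int dx = 0; dx < 2; dx++)
                    sum += old[(4 * y + dy) * w + (2 * x + dx)];
            out[y * (w / 2) + x] = (unsigned char)(sum / 8);
        }
    }
}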
 
G

Gordon Moat

Kennedy said:
The KLI-10203 is a tri-linear CCD (check the FIRST line of the data
sheet!) - *each* of the lines is 10200 cells long and *each* of the
lines is a separate colour - no interleaving. So, contrary to your
claim that this could only resolve 3400ppi because it has 3 colours in
each line, it can resolve 5100 cycles along its length. Without optical
scaling it resolves 3600ppi; with optical scaling (as would be used in a
scanner application) this can be set to match whatever the scanner width
is - on the 8.5in flatbed scanner configuration that the OP is
referencing, it would produce around 1200ppi.

It figures that an amateur mathematician hobbyist would have never used a high
end scanner. Theories disappear when you actually are able to use devices that
have these installed in them. There are no imaging chips with 100% efficient
cell sites, nor any without dead zones of at least 1 µm between cell sites. You
can calculate all you want, but actual tests of this gear are far better than
theory.
I don't think so, mainly since the statement is based on *YOUR* figures
that the 10200pixel CCD is only capable of 3400ppi!

Not the CCD, but the system in which it is installed. You cannot have a flatbed
scanner without optical components. Those optical components will limit the
total system resolution. In fact, that resolution is based on actual tests of
scanners with that exact 10200 pixel (3 rows, to be specific) imaging CCD. I
don't pull these numbers out of my ass; I get them from the industry that uses
these things and actually does test them.
Perhaps you see now
why it was ridiculous? And before you wriggle further - KODAK DON'T
MAKE A 3400PIXEL LONG TRILINEAR (10200 TOTAL CELLS) CCD AND NEVER HAVE!!

That statement shows your level of ineptitude, and lack of reading
comprehension. The 3400 dpi figure is the OPTICAL resolution, not the size of
the file. The number of cells does not determine the optical resolution, since
all system components affect the "optical" (or true, or actual) resolution. In
fact, the current best flatbed actual optical resolution is 5600 dpi across the
entire 12" by 17" scanner bed, and those two particular scanners used an 8000
element tri-linear CCD. That very simple fact should tell you that the optical
resolution is not simply a function of the imaging chip construction.
An A4 flatbed scanner, as the type under discussion in this thread,
means a scan width of at approximately 8.5"; 10200 pixels across that
distance yields exactly 1200ppi - no division by three because the
colours are on three separate lines of 10200 cells *each*, not
interleaved on a single line as you suggested.

Three rows of 10200 cell sites each, 30600 in total . . . did you not read my
second reply, or are you just trying to be dense on purpose? Further
information: the particular example I chose, the KLI-10203, has a physical
dimension of 76.87 mm by 1.6 mm . . . seems to me that is much smaller than
8.5" across, unless you are using a different metric-to-English conversion.

Just to update you a little bit, the smallest bed width in which the KLI-10203 is
actually installed is 305 mm, or about 12". The length of that particular smallest
scanner is 457 mm, or about 18". Much larger than A4. In fact, I don't know of any
true high optical resolution scanners that are A4 sized, nor do I know of any A4
sized flatbeds that use the KLI-10203. Maybe I should have picked a lesser imager
for this discussion.
Not across the full A4 width on a single pass it isn't. To achieve
3200ppi resolution requires a scan width of no greater than 3.2" -
around a third of the width of the flatbed under discussion!

What you are missing is that not all scanning systems in flatbeds use a
single-direction "pass" method of scanning. There are XY scan and XY stitch,
and variations of those to scan the entire flatbed area. Send off an 8" by 10"
transparency to Creo/Kodak and ask them to scan it for you . . . of course, I
should alert you that they only offer that for potential customers who are
serious about buying their products.
And how much of that determines the ppi of the final application? Hint
- nothing, but now we know you can read a data sheet!

So where does *your* figure of 3400ppi limitation for this particular
device come from - apart from your initial misreading of the data?

Actual tests of high end scanning gear. True optical resolution. In fact, the
very best can do much better than 3400 dpi, though all of those use a different
imaging chip. Many of those use an 8000 element tri-linear CCD, and add better
optics, active chip cooling, and even more precise positioning. Try the Creo
EverSmart line, Dainippon Screen Cezanne, and Fuji Lanovia Quattro. Actually,
the Fuji Lanovia Quattro has a 10500 element quad-linear CCD for colour scans
based on their Super CCD technology, and adds a single line 16800 element CCD
for copydot usage (do I need to explain copydot scanning?), so that particular
Fuji (and their FineScan 5000) actually does better than 5000 dpi across the
scanning bed. I could also mention Purup-Eskofot, but they are not easy to
find.

Just to give you a very simple explanation, that 1200 dpi figure you calculated
would be very close to the actual figure in a system in which very simple
optics were used in the scanner. In fact, around 1999, when these chips were
new, that was nearly the limit in almost any flatbed scanner. Since that time,
scanner optics have improved, and positioning of optical elements has improved.
Those improvements are expensive to implement, which is why you only see them
at the high end. However, those improved optics and better ways to move the
optical elements help that family of circa 72 mm CCDs achieve better than 1200
dpi true optical resolution, and even higher interpolated resolution.
Certainly was the last time I tried, which I believe was earlier this
year although time flies.

Okay, glad to see you got something right, and nice to hear the Philips chip
division is still plugging away. ;-)
They do, but *none* of them are linear arrays and making inferences from
the limitations of 2-D arrays, particularly colour arrays, on linear
devices is misleading at best and completely deceptive at worst. For
example, DALSA's biggest array is only 5344 pixels along the largest
axis - but you wouldn't interpret that as state of the art for a linear
array!


Precisely - but that isn't what you wrote last time! You stated that
the 3 colours resulted in a resolution of only one third of the number
of pixels in the line.

I misstated it, though hopefully it is more clear in the following posts. Also,
I did apologize for not being as correct and thorough as I usually write.
Interesting that your earlier tone is different . . . almost makes me feel that
you respond in line prior to reading everything, which would be careless if
that is the situation.
No he doesn't - or at least that isn't what he has asked about. He is
interested in using the available samples in two axes, which do not
provide as much resolution as he would like, as a means of achieving
improved signal to noise at a lower resolution.

The CCD in his case is similar to the NEC uPD8880 device, a trilinear
array with 21360 cells in each colour, capable of producing 2400ppi
across an A4 platform. Each of the colour lines comprises two rows of
10,680 cells capable of reproducing 1200ppi on the flatbed, but offset
by half a pixel pitch to create a 2400ppi sample density. In addition,
the scanner motor is capable of moving the scan head in 4800ppi steps,
further oversampling the original pixels. He is interested in using
these oversamples optimally for signal to noise improvement at 2400ppi
and possibly as low as 1200ppi rather than have some of their
information being used to achieve resolution which is already
compromised by the optical system of the scanner.

Okay, so sounds like a UMAX, Epson, or maybe a Microtek.
You would be wrong.

Big Fluffy Dog . . . I have run through enough broken Nikon scanners to avoid them.
They are poor production choices. Great shame they are not as well built and rugged
as their top level cameras.
Wrong again! It is exactly the process used in high end film scanner
systems - the difference being that the LEDs are replaced with colour
lasers to achieve a higher intensity and thus a faster throughput.

I don't recall Imacon using LEDs . . . okay, just checked, and none of the
current models use LEDs. Or perhaps you actually think a Nikon film scanner is
a high end product? Put any Nikon scanner into a high volume environment, and
they break just a bit too soon to take them seriously for producing income. It
is better to spend a bit more and get high resolution with high volume and
little to no downtime. Now, to be just a little critical of Imacon, they did
have some units in the recent past that were a little more troublesome than
should have been expected, though their service is very fast and efficient (a
statement few would make of the current situation at Nikon USA).
You don't need to go round the houses - the calculation is trivial.

The calculation is from the spec sheet, and used as an example. It was also
posted as a lure to see how you would respond. I did not come up with the
original calculation in that paragraph; I merely transposed it. Anyway . . . . .
An
8.5in scan width with 10200 cells per line (no matter what the optical
system or the cell size or pitch is) results in 10200/8.5 = 1200ppi.

Why don't you tell me how 3400 dpi measured optical resolution is possible
using a circa 72 mm 10200 element tri-linear CCD? This should be quite amusing.
Oh, and just for fun, use that 12" by 17" bed as your explanation basis. The
device is the Creo iQSmart1, in case you have not figured that one out yet.

What I think you are missing is that "line" is a term for the line of the CCD,
which is about 72 mm, not 8.5". I am sure you have read about many scanners
with a "sweet spot" near the centre of the flat bed. This is due to limitations
in movement of the optics, mirror, CCD platen, or any other components that
move to allow scanning to occur. Low end and mid range systems, which I am
certain are your primary experience, have very simple and very limited imaging
components. Better control of optics, movements, and signal processing will
improve results.

Come on Kennedy, I thought you were smarter than this. See this as a challenge,
and then figure out why high end scanning gear works so well, and costs so
much. I judge scanners based on actual tests performed to determine true
optical capability, and not just resolution. Good design control will also help
colour accuracy and Dmin to Dmax performance. Read too many Epson, Canon, UMAX,
MicroTek, Minolta, or other low and mid range gear spec sheets, and you can
easily be fooled into thinking these cheaper devices perform much better than
they really do. At least the lower cost film scanners do better than the low
cost flatbed scanners.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
K

Kennedy McEwen

Gordon Moat said:
It figures that an amateur mathematician hobbyist would have never used
a high end
scanner.

I have no idea, and care less, what your particular bent or limitation
is, although your comments betray a lack of any scientific or
instrumentation design knowledge. I assume you have some photographic
knowledge, and as a consequence some experience of using commercial
scanner systems. Suffice it to say that I have spent over 25 years in the
electro-optic imaging industry and in that time have designed, built and
tested many high end imagers and scanning systems for applications you
would probably never be able to contemplate. Please don't use your
personal limitations as an excuse for blatant stupidity.
Theories disappear when you actually are able to use devices that have
these installed in them. There are no imaging chips with 100% efficient
cell sites, nor any without dead zones of at least 1 µm between cell
sites. You can calculate all you want, but actual tests of this gear are
far better than theory.
Dead zones between pixels determine the fill factor and *improve* the
resolved MTF - they make absolutely no difference to any of these
calculations! As far as tests are concerned - you should revise yours:
the Kodak specification for the device you referenced actually explains
this effect in surprising detail for a data sheet. Perhaps you will
read it, but it has no effect on the fact that this device will produce
a resolution of 1200ppi on an 8.5" scan width.
Not the CCD, but the system in which it is installed. You cannot have a
flat bed
scanner without optical components.

No, you are wriggling again! Your initial comment made no statement
about optics - this was, according to you, the maximum that a 10200 cell
linear array could resolve, and it is as wrong now as it was then -
despite a feeble attempt to invoke optics at the last minute!
Those optical components will limit the total
system resolution. In fact, that resolution is based on actual tests of
scanners
with that exact 10200 pixel (3 rows to be specific) imaging CCD. I
don't pull these
numbers out of my ass,
Sounds like you are pulling excuses out of your ass though.
I get them from the industry that uses these things and
actually does test them.

There's the rub, bozo - I am part of that industry and have been for two
and a half decades, and these figures are trivial to derive from basic
design criteria and tolerancing.

The MTF of your example Kodak array is around 60% at Nyquist, depending
on the clock rate. The MTF of a suitable optic can easily exceed 70% at
the same resolution. If you are measuring much less than 35% contrast
at 1200ppi on an 8.5" scan from this device then you really need to be
re-examining your optical layout, because it certainly isn't high
performance. As for the optical MTF at your claimed 3400ppi limit for
the device: it should readily exceed 90% and thus has little effect at
all on the performance of the device.
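(To a first approximation the system MTF at a given frequency is the
product of the component MTFs, so 0.60 detector x 0.70 optic = 0.42 at
Nyquist - which is why a measured contrast well below the 35% figure
points at the optical layout rather than the CCD.)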
That statement shows your level of ineptitude, and lack of reading
comprehension.
The 3400 dpi figure is the OPTICAL resolution, not the size of the
file.

On the contrary, it shows you have no idea what you are talking about.
Name ONE (even an obsolete example) Kodak trilinear CCD with 10200 total
cells in each line which had an optical resolution of only 3400ppi when
optically scaled to an 8.5" scan width. You really are talking
absurdities! Even directly at the focal plane itself, the KLI-10203
device is capable of 3600 samples per inch with an MTF of approximately
60% at that resolution (and I remind you that your allegation was not
specific to this device with its particular pixel size, but to all 10200
element linear arrays!).
The number
of cells does not determine the optical resolution,

It certainly does in terms of the "dpi" and "ppi" parameters that you
have been quoting. These terms define the SAMPLING RESOLUTION!
since all system components
affect the "optical" (or true, or actual) resolution.

I suggest you learn something about imaging system design before making
yourself look even more stupid than you already do. First lesson should
be what defines optical resolution and what units it is measured in.
Clue: you haven't mentioned them once yet!
In fact, the current best flat
bed actual optical resolution is 5600 dpi across the entire 12" by 17"
scanner bed,
and those two particular scanners used an 8000 element tri-linear CCD.
That very
simple fact should tell you that the optical resolution is not simply a
factor of
the imaging chip construction.
You really don't have a clue, do you? How many swathes does this Rolls
Royce of scanners make to achieve 5600ppi on a 12" scan width with only
8000 pixels in each line? Perhaps you dropped a zero, or misunderstood
the numbers or just lied.
Further information is that
the particular example I chose, the KLI-10203, has a physical dimension
of 76.87 mm
by 1.6 mm . . . seems to me that is much smaller than 8.5" across,
unless you are
using a different metric to english conversion.

You would build a scanner from such a detector without an imaging optic
to project the flatbed onto the focal plane? And you *STILL* claim you
know what you are talking about? You really are stretching credulity to
extremes now.
Just to update you a little bit, the smallest bed width in which the
KLI-10203 is
actually installed is 305 mm, or about 12".

In which case it would be unable to yield much more than 800ppi in a
single swathe at that width!
What you are missing is that not all scanning systems in flat beds use
a "pass" in
one direction method of scanning.

I fail to see how I could have missed this point when I specifically
made reference to the condition of a single pass in the sentence you
have quoted above!

Since the scanner under discussion on this thread is a single pass
scanner, and the OP is specifically interested in what he can achieve in
that single pass, I see no need to extend the explanation to swathe
equipment.
Actual tests of high end scanning gear. True optical resolution.

Incredible. Not only because even cheap scanners now achieve better
than this, but because neither "ppi" nor "dpi" is an appropriate
measurement unit for "optical resolution" in the first place!
Just to give you a very simple explanation, that 1200 dpi figure you
calculated would be very close to the actual figure in a system in which
very simple optics were used in the scanner.

I didn't suggest otherwise - a simple optic with a single pass scan.
That is what we are discussing in this thread. You are the one bringing
in additional complications to justify your original mistaken advice to
the OP.
In fact, around 1999, when these chips were new, that was nearly the
limit in almost any flatbed scanner. Since that time, scanner optics
have improved, and positioning of optical elements has improved. Those
improvements are expensive to implement, which is why you only see them
at the high end. However, those improved optics and better ways to move
the optical elements help that family of circa 72 mm CCDs achieve better
than 1200 dpi true optical resolution, and even higher interpolated
resolution.
I suggest you look up the original patents for this "microscan"
technology - you will find a familiar name in the inventors - and it was
well before 1999 - although that could be around the time that the
original patents expired. Even so, as the inventor of aspects of that
particular technology, I can assure you that diffraction is still the
limit of all optics.
I misstated it, though hopefully it is more clear in the following
posts.

No, your "following posts" were full of excuses and feeble
justifications (such as optics) to justify your original assertion
rather than a simple statement that you were wrong.
Also, I
did apologize for not being as correct and thorough as I usually write.

No you didn't, you said "OK Maybe I should have stated that better".
That does not, under any circumstances, amount to either an apology or
an admission of being incorrect, let alone both.
Interesting that your earlier tone is different . . . almost makes me
feel that you respond in line prior to reading everything, which would
be careless if that is the situation.

No, I browse a post first to capture the gist of the message and then
respond to the specific lines I quote.
Okay, so sounds like a UMAX, Epson, or maybe a Microtek.
Or just about any consumer grade flatbed scanner in that class of the
market these days.
Big Fluffy Dog . . . I have run through enough broken Nikon scanners to
avoid them.
They are poor production choices. Great shame they are not as well
built and rugged
as their top level cameras.
And what does that have to do with your allegation that they contain
Sony CCDs? You are like a child pissing up a wall.
I don't recall Imacon using LEDs . . . okay, just checked, and none of
the current models use LEDs.

Did you actually read what was written, Bozo? Why are you still asking
about LEDs?
I did not come up with the original
calculation in that paragraph,

Why is that no surprise??
Why don't you tell me how 3400 dpi measured optical resolution is
possible using a circa 72 mm 10200 element tri-linear CCD?

TIP: optical resolution is measured at the flatbed surface, not at the
focal plane - the reason being that only the flatbed surface is
accessible for testing other than during design and manufacture, and it
is the only position that matters to the end user. The physical size of
the CCD has no direct influence on the resolution obtained, other than
its implications for the optical system requirements. 7 µm pixels are
relatively trivial to resolve optically - low cost digital still cameras
work well with sub-3 µm pixels, albeit with limited minimum apertures,
but the pixel resolution is not particularly demanding.
This should be quite amusing.

It should indeed, since it is quite simple really. In terms of
measurement: assess the MTF of the scanner using ISO-12233 or ISO-16067
references, depending on subject matter, and determine the optical
resolution at an agreed minimum MTF. Industry standard is nominally 10%,
but some people play specmanship games, though that is unnecessary here.
You should note that this optical resolution will not be in dpi or ppi,
but I leave it to you to figure out what it will be, since you
demonstrate ignorance and need to learn some facts.

In terms of design, just for fun, use your example of the KLI-10203,
which has a Nyquist MTF of better than 60% at a 2MHz clock rate. Fit an
IR filter, cut-off around 750nm, to eliminate out-of-band response.
Select a 1:3 f/4 relay objective from one of many optical suppliers. Few
will fail to meet an MTF of over 70% on axis at the sensor's Nyquist
frequency, and those from the better suppliers, including Pilkington,
Perkin Elmer etc., should achieve this across the entire field. Add a
damping mechanism and timing to eliminate lateral post-step motion or,
ideally, continuous backscan compensation of the focal plane by a
multi-facet polygon. Result: scan width = 8.5"; sampling resolution =
1200ppi; MTF at Nyquist for native resolution >= 35% (i.e. well
resolved: optical resolution exceeds sampling resolution!).

MTF at Nyquist for 3400dpi should exceed 80%, based on a CTE-limited MTF
of 95% for the detector and a 90% optical MTF with 1 wavefront error at
this lower resolution.
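(Same cascade arithmetic as above: 0.95 x 0.90 = 0.855, consistent with
the quoted figure of better than 80%.)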

These are just figures for optics and your example detector that I
happen to have in front of me at the moment - with a little searching it
might be possible to obtain better. Nevertheless, 1200ppi resolution is
clearly practical on an 8.5" scan width with the device you seem to
believe can only achieve 3400ppi. Hardly surprising though is it -
similar CCDs from other manufacturers are actually specified as
1200ppi/A4 devices!
Oh, and just
for fun, use that 12" by 17" bed as your explanation basis. The device
is the Creo
iQSmart1, in case you have not figured that one out yet.
Cor, shifting goalposts really is your forte, isn't it? We determine a
projected resolution on an 8.5" width platform and you want to see it
achieved on a 12" platform. Do you understand the ratio of 8.5 and 12?
You are an idiot and I rest my case!
 
