difficulty drum scanning negatives

Jytzel

I sent some negatives and slides to be drum scanned, only to have the
operator claim that negatives show more grain in the final scan than
slides. I used 6x6 Fuji NPS 160, a film with a low granularity rating. The
other film I used was E100G slide film. I find it hard to believe the
operator's claim. It seems that he is doing something wrong. What could it
be, and how can I get the best scan out of my negatives?
By the way, they use Crosfield drum scanners.

thanks
J
 
Don

This sounds like grain aliasing. The mathematics of this are rather complex
because it involves the MTF of the scanner spot and lens, the line spacing
of the scanner, and the grain size distribution of the film being scanned.
It occurs when the grain is small enough that its spatial frequency exceeds
the Nyquist limit of the sampling process. It results in the high frequency
portion of the grain being duplicated as lower frequency noise, which adds to
the normal low frequency component of the granularity. The result is an apparent
increase in granularity.
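
As a rough illustration of that fold-down effect, here is a small Python/numpy
sketch; the "grain" is stood in for by a single high spatial frequency, and the
sampling density is an arbitrary choice, not that of any real scanner:

    import numpy as np

    # A "fine detail" at 80 cycles/mm, sampled at only 100 samples/mm.
    # The Nyquist limit is 50 cycles/mm, so the 80 cycles/mm detail must alias.
    f_detail = 80.0                        # cycles/mm (stands in for sharp grain structure)
    fs = 100.0                             # samples/mm (scanner sampling density)
    x = np.arange(0, 1, 1.0 / fs)          # 1 mm of "film", 100 samples
    samples = np.sin(2 * np.pi * f_detail * x)

    # See where the energy lands after sampling.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    print("energy appears at %.1f cycles/mm" % freqs[np.argmax(spectrum)])
    # Prints 20.0: the 80 cycles/mm detail folds down to |100 - 80| = 20 cycles/mm,
    # i.e. high frequency grain reappears as coarser, lower frequency noise.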

There are only two solutions to this that I know of. The first is to
introduce an anti-aliasing filter in the optical path of the scanner. This
almost has to be done by the manufacturer of the scanner, as it must be
carefully matched to the MTF of the spot and optics. The second solution is
to scan with a higher lpi and a smaller spot size (and better lens MTF). If
that can be done with the scanner that you are currently using, you're in
business. Otherwise, you will need to find a scanner that can handle film
with the small grain size that you have.

Don
 
Kennedy McEwen

Don <[email protected]> said:
This sounds like grain aliasing. The mathematics of this are rather complex
because it involves the MTF of the scanner spot and lens, the line spacing
of the scanner, and the grain size distribution of the film being scanned.
It occurs when the grain is small enough that its spatial frequency exceeds
the Nyquist limit of the sampling process. It results in the high frequency
portion of the grain being duplicated as lower frequency noise, which adds to
the normal low frequency component of the granularity. The result is an apparent
increase in granularity.

There are only two solutions to this that I know of. The first is to
introduce an anti-aliasing filter in the optical path of the scanner. This
almost has to be done by the manufacturer of the scanner, as it must be
carefully matched to the MTF of the spot and optics. The second solution is
to scan with a higher lpi and a smaller spot size (and better lens MTF). If
that can be done with the scanner that you are currently using, you're in
business. Otherwise, you will need to find a scanner that can handle film
with the small grain size that you have.
With a drum scanner the spot size (and its shape) *is* the anti-alias
filter, and the only one that is needed. One of the most useful
features of most drum scanners is that the spot size can be adjusted
independently of the sampling density to obtain the optimum trade-off
between resolution and aliasing to suit the media being used, but there
is usually an automatic option which will achieve a compromise at least
as good as any CCD device.
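
A crude numerical sketch of that point (pure numpy, with invented numbers
rather than any real scanner's figures): widening the averaging aperture
before sampling removes the fine structure that would otherwise alias.

    import numpy as np

    rng = np.random.default_rng(0)
    fine = rng.standard_normal(4096)        # "grain": fine random structure on the film
    pitch = 8                               # keep every 8th position (the sampling density)

    def scan(signal, spot_width):
        """Average over a uniform 'spot' of the given width, then sample."""
        spot = np.ones(spot_width) / spot_width
        return np.convolve(signal, spot, mode="same")[::pitch]

    for spot_width in (1, 4, 8, 16):
        print("spot width %2d -> sampled grain RMS %.3f"
              % (spot_width, scan(fine, spot_width).std()))
    # A spot much smaller than the sample pitch passes nearly all of the grain,
    # which then aliases into the samples; a spot comparable to (or larger than)
    # the pitch averages the grain away before it is sampled.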

I doubt that this is just aliasing though, especially if both were
scanned at 4000ppi or more. Remember that negative images are
compressed on film (the corollary being that negative film has more
exposure latitude and the ability to capture a wider tonal range).
Consequently, when producing a positive from the film image, whether by
scanning or by conventional chemical printing techniques, the image must
be contrast stretched. So, even if the grain on the film has the same
amplitude as in the slide film (a reasonable assumption for similar speed
films of the same generation from the same manufacturer), the resulting
image from the negative will always appear more grainy than the image from
the slide film.

There is a lot of truth in what the drum operator told Jytzel. Whether
it's the truth, the whole truth and nothing but the truth is another
story. ;-) However, when viewed at 100% scaling, the size of the
original has little bearing on the results so I would expect to see more
grain on the 6x6cm negative image than from the 35mm slide under those
conditions.
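
A back-of-the-envelope sketch of that contrast-stretch argument (illustrative
numbers only, not measured film data):

    import numpy as np

    rng = np.random.default_rng(1)
    grain_rms = 0.01                  # identical grain fluctuation (in density) on both films

    # Negative film compresses the scene onto a narrow density range, so making a
    # positive requires a contrast stretch (gain > 1); a slide needs roughly none.
    neg_gain = 3.0                    # illustrative stretch applied to the negative scan
    slide_gain = 1.0

    grain = rng.normal(0.0, grain_rms, 100_000)
    print("apparent grain from the slide   : %.4f" % (slide_gain * grain).std())
    print("apparent grain from the negative: %.4f" % (neg_gain * grain).std())
    # The stretched negative shows about neg_gain times the grain amplitude even
    # though the grain on the film itself is the same.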
 
David J. Littleboy

Jytzel said:
I sent some negatives and slides to be drum scanned, only to have the
operator claim that negatives show more grain in the final scan than
slides. I used 6x6 Fuji NPS 160, a film with a low granularity rating. The
other film I used was E100G slide film. I find it hard to believe the
operator's claim. It seems that he is doing something wrong. What could it
be, and how can I get the best scan out of my negatives?

I also find that negative materials scan grainier than slide films (although
I haven't tried either of those films). Try shooting some Reala or
Konica-Minolta Impressa 50.

Here's a page with a lot of scan samples to get an idea of what to expect.

http://www.terrapinphoto.com/jmdavis/

David J. Littleboy
Tokyo, Japan
 
Gregory W Blank

I sent some negatives and slides to be drum scanned, only to have the
operator claim that negatives show more grain in the final scan than
slides. I used 6x6 Fuji NPS 160, a film with a low granularity rating. The
other film I used was E100G slide film. I find it hard to believe the
operator's claim. It seems that he is doing something wrong. What could it
be, and how can I get the best scan out of my negatives?
By the way, they use Crosfield drum scanners.

thanks
J

I find NPS to be a really horrible film to scan, for whatever reason;
the E100 films are T-grain color emulsions. NPH (400 ASA) scans better
than NPS.
 
Paul Schmidt

Gregory said:
I find NPS to be a really horrible film to scan, for whatever reason;
the E100 films are T-grain color emulsions. NPH (400 ASA) scans better
than NPS.

What are the best films for scanning? Say, one or two brands/types
in each of these categories:

B&W (what's best: old tech, new tech, chromogenic?)
Colour Consumer films (what's best: Slide or Negative?)
Colour Pro films (what's best: Slide or Negative?)

Paul
 
David J. Littleboy

Paul Schmidt said:
Gregory W Blank wrote:

What are the best films for scanning? Say, one or two brands/types
in each of these categories:

B&W (what's best: old tech, new tech, chromogenic?)

None of the above. Silver films don't allow the use of ICE* and the
chromogenics are grainy (this latter one is a minority viewpoint: my
definition of "acceptable grain" is Provia 100F, and the chromogenics are
seriously gross compared to Provia 100F.)

*: I've only scanned one roll of silver film: Tech Pan. It was quite gritty
(the lab that processed it may have messed up: I was expecting it to be the
eighth wonder, and was disappointed) and dust stood out incredibly
obnoxiously against the grain. (There's no dust in the crops below, though.)

http://www.terrapinphoto.com/jmdavis/ugly-c1.jpg
http://www.terrapinphoto.com/jmdavis/ugly-c2.jpg
Colour Consumer films (what's best: Slide or Negative?)

The scans I've seen of these indicate they should be avoided at all cost.
One exception: Sensia.
Colour Pro films (what's best: Slide or Negative?)

Here, the usual suspects are all fine. My favorites are:

Negative: Konica Impressa 50, Reala
Slide: Provia 100F, Velvia 100F, Astia 100F

David J. Littleboy
Tokyo, Japan
 
Bill Tuthill

In rec.photo.film+labs Gregory W Blank said:
I find NPS to be a really horrible film to scan, for whatever reason;
the E100 films are T-grain color emulsions. NPH (400 ASA) scans better
than NPS.

Agreed. NPS *is* grainier than 100G using the 2.5 RMS conversion formula:

NPS RMS 4 * 2.5 = 10
100G RMS = 8

Currently 100 speed slide films are better (lower grain, higher resolution)
than 100 speed negative films, with the possible exception of Reala.

However 400 speed print films are better (lower grain, higher resolution)
than 400 speed slide films, although Provia 400F is better than some.
 
Gregory W Blank

Paul Schmidt said:
What are the best films for scanning? Say, one or two brands/types
in each of these categories:
B&W (what's best: old tech, new tech, chromogenic?)
Colour Consumer films (what's best: Slide or Negative?)
Colour Pro films (what's best: Slide or Negative?)
Paul

I don't use consumer films, only Pro films, and I don't shoot chromogenic
B&W film. At this point I primarily shoot MF & LF, so my experience may
differ somewhat, but I agree with David that Provia 100 is one of the
best. The Kodak E films are also very good in terms of grain; however, I
tend to like Fuji film for color. As for B&W, I get really good scans from
most of my B&W negatives, mainly because I shoot 4x5, so grain is a much
smaller issue.
 
Jytzel

Kennedy McEwen said:
With a drum scanner the spot size (and its shape) *is* the anti-alias
filter, and the only one that is needed. One of the most useful
features of most drum scanners is that the spot size can be adjusted
independently of the sampling density to obtain the optimum trade-off
between resolution and aliasing to suit the media being used, but there
is usually an automatic option which will achieve a compromise at least
as good as any CCD device.

I doubt that this is just aliasing though, especially if both were
scanned at 4000ppi or more. Remember that negative images are
compressed on film (the corollary being that negative film has more
exposure latitude and the ability to capture a wider tonal range).
Consequently, when producing a positive from the film image, whether by
scanning or by conventional chemical printing techniques, the image must
be contrast stretched. So, even if the grain on the film has the same
amplitude as in the slide film (a reasonable assumption for similar speed
films of the same generation from the same manufacturer), the resulting
image from the negative will always appear more grainy than the image from
the slide film.

There is a lot of truth in what the drum operator told Jytzel. Whether
it's the truth, the whole truth and nothing but the truth is another
story. ;-) However, when viewed at 100% scaling, the size of the
original has little bearing on the results so I would expect to see more
grain on the 6x6cm negative image than from the 35mm slide under those
conditions.

Thanks, Kennedy.

Now I need some definitions of some terms: "spot size", "sampling
density", and "grain aliasing". And how can I tell if it's real
amplified grain or "grain-alaising"? Is there any solution to this
problem or should I give up using negatives altogether?

J.
 
Gordon Moat

Jytzel said:
I sent some negatives and slides to be drum scanned, only to have the
operator claim that negatives show more grain in the final scan than slides.

Actually not that unusual an observation. Somewhat depends upon which
films are being compared, since a few transparency films do scan with a
very grainy appearance.
I used 6x6 Fuji NPS 160, a film with a low granularity rating. The other
film I used was E100G slide film.

Kodak E100G should scan substantially better than the Fuji NPS. Be aware
that print granularity and transparency film grain index are not
directly comparable numbers. Kodak has a technical document PDF about
this if you want to explore more on that issue.
I find it hard to believe the
operator's claim. It seems that he is doing something wrong. What could it
be, and how can I get the best scan out of my negatives?
By the way, they use Crosfield drum scanners.

thanks
J

I have not tried the Crosfield for drum scans, though I have noticed
some films need a few tricks to get the best results. Other than the
skill and experience of the operator being in question, you might have a
scan of the negative done as a positive, and reverse it in your editing
software. While I am not sure exactly why that works better, you might
want to give it a try. Be aware that not all film and scanner
combinations react the same, so having it drum scanned on another type
of machine might be a better option.

Ciao!

Gordon Moat
Alliance Graphique Studio
<http://www.allgstudio.com>
 
Kennedy McEwen

Jytzel said:
Now I need some definitions of some terms: "spot size", "sampling
density", and "grain aliasing".

Spot size: the size of the scanning spot which each sample in the scan
is averaged over. Usually this is of the order of a few microns in
diameter at the film plane, and anything from 3 to 10 µm is commonplace.
If the spot is a uniform circle, then the photomultiplier in the scanner
produces a signal which is proportional to the average illumination over
the area of the spot. More commonly, the spot has a gaussian profile or
something similar, so the average is weighted accordingly.

Sampling density: the density at which the samples are taken, which is what
you would usually refer to as the pixels per inch on the image. Many
novices assume that the spot size and the sample pitch (ie. the inverse
of the sample density) should be the same for optimum image resolution,
but fairly simple diagrams demonstrate that this is not the case.

A spot can resolve image detail which is finer than the diameter of the
spot, or the side of the spot if it is square or rectangular (as in the
elements of a CCD scanner). However a sampled system cannot
unambiguously resolve detail which has a spatial frequency greater than
half the sampling density. Anything which the spot can resolve but the
sampling system cannot is aliased, and appears at a greater scale (if
the frequency extends over many samples, this can be a much greater
scale) than in the original.

Quite often a scan is obtained in which all of the image detail is fully
resolved by both the spot and the sampling system, but the latter is
inadequate to resolve the grain from which the image is composed, even
though the spot can resolve it. As a result the grain is
aliased. This is especially true of well defined grain with sharp
structures - the edges of the grains produce the spurious high spatial
frequencies which are aliased. However, since the grain is random and
smaller than the spot size, each aliased grain only extends over a
single pixel in the image - but this can be many times larger than the
actual grain on the original. Consequently the scanned image can appear
much more grainy than a chemically produced equivalent.

For some examples of this, see http://www.photoscientia.co.uk/Grain.htm
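
Here is also a toy one-dimensional version of that mechanism in Python/numpy;
the grain, spot and pitch sizes are invented purely to show the effect:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 20_000                                          # positions on the "film"
    grain = rng.standard_normal(n)                      # grain far finer than the sample pitch
    image = np.sin(2 * np.pi * np.arange(n) / 2000.0)   # smooth image detail
    film = image + 0.2 * grain

    pitch = 10                                          # sample every 10th position

    def scan(signal, spot_width):
        spot = np.ones(spot_width) / spot_width
        return np.convolve(signal, spot, mode="same")[::pitch]

    for spot_width in (2, 10):
        noise = scan(film, spot_width) - scan(image, spot_width)
        print("spot width %2d -> grain noise per output pixel %.3f"
              % (spot_width, noise.std()))
    # With the small spot the fine grain survives the input filter and aliases, so
    # each output pixel carries a grain fluctuation the size of a whole pixel, much
    # coarser than the original grain; the pitch-matched spot suppresses it first.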

Part of the skill of the drum scan operator is adjusting the spot or
aperture size to optimally discriminate between the grain and the image
detail for particular film types; with some film types, however, it is
difficult, if not impossible, to achieve satisfactory discrimination.
And how can I tell if it's real
amplified grain or "grain-alaising"?

Well, that's not so easy because once something, including grain, has
aliased there is no way to tell from the resultant image whether it is
an aliased artefact or not. In some cases, additional knowledge of the
scene may help - you know, for example, that the bricks on a wall do not
have that large pattern across them, or the colour fringing on that roof
isn't there in real life, but in general without anything to compare it
to, you just cannot say. Unfortunately grain in scanned images is just
like that - the only way to tell for sure if it is aliased is to compare
it to the original, unsampled, slide or negative - which usually means
comparing it to a conventional chemically and optically produced print
of the same size.
Is there any solution to this
problem or should I give up using negatives altogether?
There are several post scan filters around which purport to remove grain
from the image after it has been scanned. Examples are Neat Image and
Kodak's GEM. However, all ("all" being a relative term here!) that
these packages can do is analyse the image for fine random structure and
then remove as much of that as the user is prepared to tolerate. That
is fine if the grain is finer than the finest details in the image - the
two can be separated without loss of image detail. However, grain and
aliased grain cannot be finer than a single pixel; thus, if your image
contains detail on the same scale (which it probably does, because that
is why you paid to have it scanned at such a fine resolution in the
first place), you inevitably sacrifice image sharpness in the
process of removing or reducing the grain. How much you are prepared to
sacrifice is a compromise.
 
Don

Now I need some definitions of some terms: "spot size", "sampling
density", and "grain aliasing". And how can I tell if it's real
amplified grain or "grain-alaising"? Is there any solution to this
problem or should I give up using negatives altogether?

Spot size is the spot diameter. This is somewhat of a misnomer, since the
spots in commercial scanners are inevitably poorly formed. They are often
approximated as gaussian figures of revolution, but only for convenience -
they usually deviate from that in some important aspects. Ideally they
should be airy discs, but that is unachievable in the price range for
commercial labs or service bureaus. In most commercial scanners, 63% of the
spot energy is within a 3-8 micron diameter circle at the smallest
achievable spot size. When adjusted for larger spots, the spot shape
usually becomes less well defined.

Higher quality scanners with better spot shape control exist, but are
generally unavailable to the public. Scanning microdensitometers are an
example, though not necessarily optimum.

Sampling density is the spots (or scan lines) per millimeter. The scanning
density should be at least twice the resolution that you're trying to
maintain from the film. Current high resolution commercially available
color negative film can reach advertised resolutions of over 100 line
pairs/mm (high contrast), but with an affordable camera/lens combination
would rarely achieve over 70-80 or so on axis, less at the field edges.
Black & white films can be twice that. Consumer grade color films from the
1950s achieved maybe half that at best.

Aliasing (of any type, including of grain) was described by Nyquist in his
papers on information sampling. It arises when information is sampled less
frequently than the detail in the data requires, i.e. at less than twice the
highest frequency in the data. For example, if you sampled a 60 hertz sine wave at
exactly 60 samples per second, each sample would occur at the same point on
the curve, and you would conclude that you had a DC signal, not a 60 hertz
AC signal. Without going into great depth here, sampling at anything below
twice the highest frequency contained in the data will cause the data to
later be reconstructed with the highest frequencies reproduced as erroneous
lower frequencies, with the resulting distortions. It can be avoided by
filtering out the high frequency data before sampling, an almost universal
practice in all sampling systems except photography, where it is usually
done crudely at best due to the difficulty and cost.
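
The 60 hertz example is easy to check numerically (a minimal sketch, nothing
scanner-specific about it):

    import numpy as np

    f_signal = 60.0                     # hertz
    t = np.arange(0, 1, 1 / 60.0)       # sample exactly 60 times per second, for one second
    samples = np.sin(2 * np.pi * f_signal * t + 0.3)

    spread = float(samples.max() - samples.min())
    print("spread of the 60 samples:", round(spread, 12))
    # 0.0 - every sample lands on the same point of the waveform, so the sampled
    # data is indistinguishable from a DC level.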

Radio engineers call this effect heterodyning. Physicists and other
engineers call it intermodulation. Photographers call it aliasing. It is
the source of the "jaggies" you see on straight edges in improperly
digitized imagery as well as other problems. Grain aliasing is also a form
of this, and is caused by using scan dot spacings too far apart for the
grain sizes, without using a proper low pass filter in the image stream,
e.g. a properly shaped scanning spot. A good commercial drum scanner
operator (or the scanner often does it automatically) tries to match the
spot size to the line spacing. Unfortunately, the more-or-less gaussian
spot shape is not a very good low-pass filter. When sized to adequately
reduce information that exceeds the Nyquist limit it also considerably
reduces the in-band information that produces the fine detail that you would
like to keep.
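
To put rough numbers on that trade-off, here is a small sketch of the MTF of
an idealised Gaussian spot (the spot sizes and sampling density are arbitrary,
not those of any particular scanner):

    import numpy as np

    def gaussian_mtf(f, sigma_mm):
        """MTF of a Gaussian spot of standard deviation sigma_mm at spatial frequency f (cycles/mm)."""
        return np.exp(-2 * (np.pi * sigma_mm * f) ** 2)

    fs = 200.0                          # samples/mm, so Nyquist is at 100 cycles/mm
    nyquist = fs / 2
    for sigma_um in (2.0, 4.0, 8.0):
        sigma_mm = sigma_um / 1000.0
        print("sigma %3.0f um: MTF at 50 c/mm = %.2f, at Nyquist (100 c/mm) = %.2f"
              % (sigma_um, gaussian_mtf(50.0, sigma_mm), gaussian_mtf(nyquist, sigma_mm)))
    # A spot large enough to push the response at Nyquist towards zero also takes
    # a sizeable bite out of genuine detail at half that frequency.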

The only practical solution to this is to oversample the image, i.e. use a
sample spacing and spot size that are much smaller than necessary, and then
down-sample the result using an algorithm which approximates an optimum
filter. While this sounds good, in practice it is hard to do with
commercial grade equipment and fine grain films. Films with an average
grain size of 4 microns will have a significant fraction of the grain at
less than half that size. A scanning density of 1000 lines/mm or so (25,000
lines per inch) with a spot size on the order of 1 micron would be required,
and the resulting file size would be huge, nearly 5 gigabytes for a 35 mm
negative scanned with 16 bit depth. This would have to be stored
uncompressed (or with lossless compression) until after the downsampling
was done. Also, the computer doing the downsampling would have to cope with
a file size that large - pretty much beyond the ability of current 32 bit
chips in common workstation use. And the whole operation would be slooooow.
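
The file-size figure is straightforward arithmetic (assuming a 36 x 24 mm
frame and three colour channels):

    # Rough size of a 35 mm frame scanned at 1000 samples/mm, 16 bits per channel, RGB.
    width_mm, height_mm = 36, 24
    samples_per_mm = 1000
    channels = 3
    bytes_per_sample = 2                # 16-bit depth

    pixels = (width_mm * samples_per_mm) * (height_mm * samples_per_mm)
    size_bytes = pixels * channels * bytes_per_sample
    print("pixels  : %d x %d" % (width_mm * samples_per_mm, height_mm * samples_per_mm))
    print("raw size: %.1f GB" % (size_bytes / 1e9))
    # About 5.2 GB uncompressed, in line with the "nearly 5 gigabytes" above.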

The upshot is, practically speaking, accept the grain as a fact of life,
whatever the source. You might want to try several service bureaus, as the
quality of the equipment and competence of the operators does vary.

Don
 
Kennedy McEwen

Don <[email protected]> said:
Aliasing (of any type, including of grain) was described by Nyquist in his
papers on information sampling.

Aliasing was never mentioned by Nyquist in any of his papers - not ever!
Nor did he ever publish any papers on information sampling - the
technology for sampling simply did not exist in his time, or at least in
the time when he was undertaking his most groundbreaking work.

However, in Nyquist's 1924 internal Bell Labs circulation "Certain
Factors Affecting Telegraph Speed" later published in his 1928 paper
entitled "Certain Topics in Telegraph Transmission Theory" he laid out
the underlying equations which would govern information sampling and
defined the mathematical limit of "N/2 sinusoidal components necessary
to determine a wave" unambiguously.

It was his interest in sending analogue signals across telegraph lines
essentially designed for Morse, which was inherently digital in nature,
that led Nyquist to this conclusion. The bandwidth of the line
determined the maximum Morse rate and Nyquist showed mathematically what
the maximum frequency of the signal that could be transmitted on that
same line. However, the signal and the line bandwidth were both
analogue and continuous concepts, no sampling was involved whatsoever.
What happened "beyond the Nyquist limit" was never questioned because
the equations defined the bandwidth of the line from the maximum
frequency of the Morse signal they supported. Essentially Nyquist
defined the maximum digital signal which could be carried by an analogue
line, not vice versa.

It was a decade later, in 1938, that Alec Reeves invented the concept of
sampling, analogue to digital conversion and patented pulse code
modulation which, for obvious reasons, remained a classified technology
shared only by the British and Americans, for several years.

The "Nyquist sampling limit" was essentially introduced by Claude
Shannon in his 1948 paper "A Mathematical Theory of Communication" which
laid the foundation of IT as we know it today and who recognised the
significance of Nyquist's (and Ralph Hartley's) early work to the new
technology. Indeed, many publications refer to this as the "Shannon
limit", whilst the Shannon-Hartley limit defines the maximum amount of
information which can be carried over a sampled multilevel communication
channel - effectively the amount of information that your n-bit ADC
scanner can pull off of the film when sampling at x ppi. ;-)

Incidentally, Nyquist was actually born with a family name of Johnsson
and his father changed the name to Nyquist because other local families
had the same name and this led to confusion with the post office. Years
later, a colleague discovered a limiting noise power source in telegraph
lines which appeared to be proportional to the resistance of the line
and to temperature. Harry Nyquist set about the problem and, in 1928,
he and his colleague published papers under almost the same titles,
"Thermal Agitation of Electric Charge in Conductors" and "Thermal
Agitation of Electricity in Conductors" respectively. His colleague's
paper addressed the experimental evidence for this noise, whilst
Nyquist's paper addressed the theoretical derivation of the noise from
physics first principles. The predictions of Nyquist's theory were
consistent with his colleague's measurements. Today we often refer to
that as "Johnson Noise" - after Nyquist's colleague, J B Johnson, who
just happened to have the same name he was born with!
Radio engineers call this effect heterodyning.

Not the same thing at all - heterodyning does not require any sampling
and, indeed, was a commonplace rf technique before sampling was ever
conceived.
Physicists and other
engineers call it intermodulation.

Again, not the same - that *is* more like a heterodyne.
Photographers call it aliasing.

As do engineers and physicists when they are actually referring to this
effect. I know... I R 1 !

Aliasing is much more akin to the traditional Moire effect, which is a
true sampling process - sampling the pattern of one layer of muslin
through the apertures in a top layer.
It is
the source of the "jaggies" you see on straight edges in improperly
digitized imagery as well as other problems.

No it isn't!

Jaggies occur because of inadequately filtered reconstruction systems.
Not because of inadequate sampling! A jagged edge occurs because the
reconstruction of each sample introduces higher spatial frequencies than
the sampled image contains, for example by the use of sharp square
pixels to represent each sample in the image. This can occur whether
the data has been sampled correctly or not. It is a *related* effect,
but quite distinct. Aliasing *only* occurs on the input to the sampling
system - jaggies occur at the output.
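
That distinction (same samples, different reconstruction) can be shown in a
few lines of numpy. This is a toy one-dimensional case, not an image pipeline,
and the frequencies are arbitrary:

    import numpy as np

    fs = 10.0                             # sampling rate
    f0 = 1.0                              # signal frequency, well below Nyquist (5.0)
    n = np.arange(64)
    samples = np.sin(2 * np.pi * f0 * n / fs)

    # Fine grid on which to compare reconstructions with the true signal.
    t = np.linspace(0, (len(n) - 1) / fs, 2000)
    true = np.sin(2 * np.pi * f0 * t)

    # (a) "Square pixel" reconstruction: hold each sample (introduces steps, i.e. jaggies).
    hold = samples[np.minimum((t * fs).astype(int), len(n) - 1)]

    # (b) Whittaker-Shannon reconstruction: a sum of sinc functions.
    sinc = np.sum(samples[None, :] * np.sinc(fs * t[:, None] - n[None, :]), axis=1)

    print("RMS error, zero-order hold    : %.4f" % np.sqrt(np.mean((hold - true) ** 2)))
    print("RMS error, sinc reconstruction: %.4f" % np.sqrt(np.mean((sinc - true) ** 2)))
    # The samples are identical in both cases; the staircase error comes from the
    # reconstruction, not from the sampling. (The small residual in the sinc case
    # is truncation of the finite record, not a sampling error.)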
 
Ron Andrews

Thanks for the tutorial.
One question: Is there a difference between the "Nyquist sampling
limit", the "Shannon limit", and what I learned in school (a long time ago)
as the "Whittaker-Shannon sampling theorem"? It sounds like they are
different names for the same bandwidth/2 concept.

--
Ron Andrews
http://members.hostedscripts.com/antispam.html
 
Neil Gould

Kennedy McEwen said:
Not the same thing at all - heterodyning does not require any sampling
and, indeed, was a commonplace rf technique before sampling was ever
conceived.
Could it be that Don was referring to interference patterns resulting from
the overlaying of multiple frequencies? The analogy to scanning would be
the overlaying of the scanner's frequency on the target's frequency, and
in such a case, interference patterns certainly result.
As do engineers and physicists when they are actually referring to
this effect. I know... I R 1 !

Aliasing is much more akin to the traditional Moire effect, which is a
true sampling process - sampling the pattern of one layer of muslin
through the apertures in a top layer.
I have some problems with this analogy, because it requires too many
qualifiers to be accurate. If the two layers are from the same piece of
muslin, *and* the muslin is of high quality such that the aperture grid
formed by the weave is a consistent size, then this is more akin to the
phase shifted heterodyning of two signals of the same frequency. I don't
see it as a good example of scanning issues, because the likelihood of
both the scanner and subject frequency being the same is fairly low,
especially for silver-based negatives.

Further, if the two muslin pieces are from a different weave or of low
quality such that the apertures vary in size, then it's not really a good
example to use to represent either sampling or Moiré, though it can be an
analogy to the heterodyning of two frequency modulated (FM) signals.

However, looking through the (high quality) muslin at another subject may
be a good example of both the visible Moiré problems *and* aliasing caused
by sampling. All one has to do is imagine that each square of the muslin
grid can only contain a single color. If the subject has a regular
repeating pattern, Moiré will result if the frequencies of that pattern
are not perfectly aligned with the frequency and orientation of the muslin
grid, and aliasing will result from re-coloring portions of the aperture
to conform to the single color limitation of sampling.
No it isn't!

Jaggies occur because of inadequately filtered reconstruction systems.
Not because of inadequate sampling! A jagged edge occurs because the
reconstruction of each sample introduces higher spatial frequencies
than the sampled image contains, for example by the use of sharp
square pixels to represent each sample in the image.
While I understand your complaint, I think it is too literal to be useful
in this context. Once a subject has been sampled, the "reconstruction" has
already taken place, and a distortion will be the inevitable result of any
further representation of those samples. This is true for either digital
or analog sampling, btw.
Aliasing *only* occurs on the
input to the sampling system - jaggies occur at the output.
Whether one has "jaggies" or "lumpies" on output will depend on how pixels
are represented, e.g. as squares or some other shape. However, that really
misses the relevance, doesn't it? That there is a distortion as a result
of sampling, and said distortion *will* have aliasing which exemplifies
the difficulty of drum scanning negatives, and that appears to be the
point of Don's original assertion. Our elaborations haven't disputed this
basic fact.

Regards,
 
Kennedy McEwen

Ron Andrews said:
Thanks for the tutorial.
One question: Is there a difference between the "Nyquist sampling
limit", the "Shannon limit", and what I learned in school (a long time ago)
as the "Whittaker-Shannon sampling theorem"? It sounds like they are
different names for the same bandwidth/2 concept.
Not really, they are all descriptions of the same basic concept that
Shannon identified in Nyquist's original work.
 
Kennedy McEwen

Neil said:
Could it be that Don was referring to interference patterns resulting from
the overlaying of multiple frequencies? The analogy to scanning would be
the overlaying of the scanner's frequency on the target's frequency, and
in such a case, interference patterns certainly result.
I am sure that this *is* what Don was referring to, however there is a
significant difference between the continuous waves, as used in
heterodyne systems, and sampling systems which use a series of delta
functions. Aliasing is analogous to heterodyning, but not the same as
it.
I have some problems with this analogy, because it requires too many
qualifiers to be accurate. If the two layers are from the same piece of
muslin, *and* the muslin is of high quality such that the aperture grid
formed by the weave is a consistent size, then this is more akin to the
phase shifted heterodyning of two signals of the same frequency.

Since they are at different distances from the viewer - and in the case
where the layers are actually in contact that difference will be very
small - they cannot produce the same spatial frequency on the retina,
and hence aliasing does result which is not simply phase shifting.
I don't
see it as a good example of scanning issues, because the likelihood of
both the scanner and subject frequency being the same is fairly low,
especially for silver-based negatives.

The aliased frequency is simply the difference between the sampling
frequency and the input frequency - they do not have to be the same or
even nearly the same. However differentiation of the aliased output
from the input is obviously easier to perceive when the input becomes
close to the sampling frequency. All that is necessary is that the
input exceeds the Nyquist limit - and that is a much greater probability
irrespective of the medium of the original.
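
That folding rule can be written down in a couple of lines (a small helper of
my own, not from any library):

    def alias_frequency(f_in, f_sample):
        """Frequency at which an input f_in appears after sampling at f_sample (both positive)."""
        f = f_in % f_sample               # wrap into one sampling interval
        return min(f, f_sample - f)       # then fold about the Nyquist frequency

    # Example: sampling at 100 samples/mm, so the Nyquist limit is 50 cycles/mm.
    for f in (20, 60, 80, 130):
        print("input %3d cycles/mm -> appears at %5.1f cycles/mm" % (f, alias_frequency(f, 100)))
    # 20 passes unchanged; 60, 80 and 130 fold back to 40, 20 and 30 respectively.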
Further, if the two muslin pieces are from a different weave or of low
quality such that the apertures vary in size, then it's not really a good
example to use to represent either sampling or Moiré, though it can be an
analogy to the heterodyning of two frequency modulated (FM) signals.
I disagree. The critical difference between heterodyne and sampling is
that a heterodyne multiplies two continuous level signals - the input
and the reference - at all levels, and all levels contribute to the
output. In sampling, the input is either sampled (multiplied by unity)
or ignored (multiplied by zero) - there is no continuous level between
the two and nothing between the two contributes to the output. In the
analogy with muslin, the pattern on the lower layer is either passed by
the apertures in the upper layer (multiplied by unity) or blocked by the
weave (multiplied by zero), which is much more akin to sampling than the
continuous level heterodyne process.
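
The difference is easy to see numerically (pure numpy, arbitrary frequencies):
the heterodyne multiplies by a continuous reference and gives only sum and
difference terms, while sampling multiplies by a comb of ones and zeros and
gives the original plus its aliases.

    import numpy as np

    fs_fine = 1000.0                        # fine grid standing in for "continuous" time
    t = np.arange(0, 1, 1 / fs_fine)
    f_in, f_ref = 70.0, 100.0
    signal = np.sin(2 * np.pi * f_in * t)

    def level(x, f):
        """Magnitude of the component of x at frequency f (1 s record, so bin index = Hz)."""
        return np.abs(np.fft.rfft(x))[int(round(f))] / len(x)

    # Heterodyne: multiply by a continuous reference wave.
    het = signal * np.sin(2 * np.pi * f_ref * t)
    print("heterodyne: 30 Hz %.3f, 70 Hz %.3f, 170 Hz %.3f"
          % (level(het, 30), level(het, 70), level(het, 170)))

    # Sampling: multiply by a 0/1 comb (keep every 10th point, i.e. 100 samples/s).
    comb = np.zeros_like(t)
    comb[::10] = 1.0
    sampled = signal * comb
    print("sampled   : 30 Hz %.3f, 70 Hz %.3f, 170 Hz %.3f"
          % (level(sampled, 30), level(sampled, 70), level(sampled, 170)))
    # The heterodyne output has only the 30 and 170 Hz products and no 70 Hz term;
    # the sampled signal keeps the original 70 Hz and adds replicas about every
    # multiple of 100 samples/s, of which the 30 Hz term is the alias.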
However, looking through the (high quality) muslin at another subject may
be a good example of both the visible Moiré problems *and* aliasing caused
by sampling. All one has to do is imagine that each square of the muslin
grid can only contain a single color. If the subject has a regular
repeating pattern, Moiré will result if the frequencies of that pattern
are not perfectly aligned with the frequency and orientation of the muslin
grid, and aliasing will result from re-coloring portions of the aperture
to conform to the single color limitation of sampling.

You seem to be limiting your definition of aliasing to situations where
the input frequency extends over many samples, which is certainly useful
for visualising the effect but not necessary for its occurrence - as
grain aliasing clearly demonstrates. Single grains usually extend over
less than one complete sample pitch.
While I understand your complaint, I think it is too literal to be useful
in this context. Once a subject has been sampled, the "reconstruction" has
already taken place, and a distortion will be the inevitable result of any
further representation of those samples. This is true for either digital
or analog sampling, btw.

That is simply untrue although it is a very popular misconception - *NO*
reconstruction has taken place at the point that sampling occurs.
Reconstruction takes place much later in the process and can, indeed
usually does, use completely different filters and processes from those
associated with the sampling process, resulting in completely different
system performance.

An excellent example of this occurs in the development of the audio CD.
The original specification defined two channels sampled at 44.1kHz with
16-bit precision and this is indeed how standard CDs are recorded. Early
players, neglecting the first generation which used 14-bit DACs or a
single 16-bit DAC multiplexed between both channels, reproduced this
data stream directly into the analogue domain using a 16bit DAC per
channel followed by a "brick wall" analogue filter. However, the SNR
and distortion present in the final audio did not meet the theoretical
predictions of the process. Initial attempts to resolve the inadequacy
involved the use of higher resolution DACs to ensure that the
reproduction system did not limit the result. Still, the noise and
distortion present in the output fell well short of what should have
been possible. Then the concept of a "digital noise shaping
reproduction filter" was introduced, such that the data on the CD was
digitally filtered and interpolated to much higher frequencies which
were then converted to analogue and filtered much more crudely, the new
sampling frequency being several orders of magnitude beyond the audio
range. Suddenly, improvements in measurable audio quality were
achieved, with the results much closer to theoretical predictions. This
was subsequently followed by Matsushita (Panasonic/Technics) introducing
MASH (Multi-stAge noise SHaping), a high bit depth (21-26bits depending
on the generation) digital filter with only a 3.5-bit Pulse Width
Modulation DAC per channel and ultimately by the Philips "Bitstream"
system where only a 1-bit Pulse Density Modulation DAC was required. In
these latter systems, which are now virtually generic in all CD players,
the full theoretical limits of the original specification were actually
met.

Oversampling, noise shaping, PWM and PDM output were never part of the
original Red Book specification and the improvements apply (and can be
measured) on CDs that were available in 1981 just as readily as they
apply to the latest CDs issued today. The difference is in the
reconstruction, not in the sampling process and the reconstruction is
completely independent of the sampling process.
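
The noise-shaping idea can be illustrated with a very stripped-down model.
This is a first-order error-feedback quantiser in plain numpy, not Philips'
or Matsushita's actual circuits, and the step size and rates are arbitrary:

    import numpy as np

    fs = 64 * 44100.0                       # heavily oversampled rate
    n = np.arange(1 << 16)
    x = 0.5 * np.sin(2 * np.pi * 1000.0 * n / fs)    # 1 kHz test tone

    step = 0.25                             # deliberately coarse quantiser

    def quantize(v):
        return step * np.round(v / step)

    # (a) Plain coarse quantisation of the oversampled signal.
    plain = quantize(x)

    # (b) First-order noise shaping: feed the previous sample's quantisation error back in.
    shaped = np.empty_like(x)
    err = 0.0
    for i in range(len(x)):
        v = x[i] + err
        shaped[i] = quantize(v)
        err = v - shaped[i]

    def inband_noise(y):
        """Approximate RMS error within the audio band (0-20 kHz) from the FFT of the residual."""
        spec = np.fft.rfft(y - x)
        freqs = np.fft.rfftfreq(len(x), d=1 / fs)
        return np.sqrt(2 * np.sum(np.abs(spec[freqs <= 20000.0]) ** 2)) / len(x)

    print("in-band noise, plain quantiser: %.5f" % inband_noise(plain))
    print("in-band noise, noise shaping  : %.5f" % inband_noise(shaped))
    # The total quantisation error is of a similar order in both cases, but shaping
    # pushes most of it far above the audio band, where the reconstruction filter
    # removes it.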

I was hoping to point you to a longstanding article on the ChipCenter
website but I just checked and it has been removed - perhaps as a
consequence of my complaint about the serious errors it contained. I
have an archived copy if you are interested in reading it though!
Basically, this article, like your statement above, assumed that the
"reconstruction" had already taken place when the signal was sampled
and, after several pages of examples of perfectly valid oscilloscope
traces demonstrating the distortion introduced by sampling whilst
completely ignoring the reconstruction filter, concluded with a table of
the ratio of sampling to maximum signal frequency required to achieve a
certain signal to noise ratio in the data. Note "the data" - the
requirement for an appropriate reconstruction filter applies however the
sampled data is analysed just as much as to the final analogue signal,
and this is not only the crux of noise shaping CD players, but of the
error in the ChipCenter article. This significant but subtle error
resulted in the conclusion, fed to untold millions of electronic
engineers as fact, that 16-bit accuracy required sampling at least 569x
greater than the highest frequency in the signal - some 235x greater
than Nyquist and Shannon require - and 36x for 8-bit accuracy, with the
conclusion that audio CDs cannot reproduce much more than 1kHz at a
marginal SNR! I know many audiophiles who consider CDs to be poor, but
none who consider them to be that bad, but it demonstrates the
consequence of ignoring the reconstruction filter which is fundamental
to the sampling process.
Whether one has "jaggies" or "lumpies" on output will depend on how pixels
are represented, e.g. as squares or some other shape. However, that really
misses the relevance, doesn't it?

Not at all - it is critical. "Jaggies" in an image produced from
sampled data indicate inadequate reconstruction filters. You seem to
have a misplaced assumption that the sampled data itself is distorted -
that can only occur if the input filter is inadequate, which is the crux
of the issue of selecting an appropriate spot size and shape in the drum
scanner.
That there is a distortion as a result
of sampling, and said distortion *will* have aliasing

No, that is completely wrong. A sampled system requires two filters -
an input filter, which is present in the signal stream prior to the
sampling process, and an output filter which is present in the signal
stream after the sampling has been undertaken. Aliasing is a
consequence of an inadequate input filter. This will result in
distortion of the sampled data, however if the filter is adequate then
there is no reason for such distortion to be present.

Ideal sampling itself does *not* introduce distortion - I suggest
reading Claude Shannon's very readable original paper on the topic if
you have difficulties grasping this. Clearly practical sampling will
introduce some distortion since all samples are not in the exact place
that they should be, however in almost all cases this is negligible.
This is achieved through the use of crystal oscillators for the
generation of the sampling frequency, or high accuracy lithography of
the semiconductor industry for the generation of the spatial sampling
frequency, as is the case with scanners.

Jaggies, or output distortion, are a consequence of an inadequate output
filter - just as the inadequate output filters caused poor SNR and
excess THD (the direct audio equivalent of jaggies) in those early CD
players.
which exemplifies
the difficulty of drum scanning negatives, and that appears to be the
point of Don's original assertion.

A properly filtered sampled system exhibits *NO* distortion. Nyquist's
mathematics is not an approximation, it is *EXACT*, which is why
Shannon, who was particularly pedantic in most respects, based his
entire thesis on it, which has well stood the test of time.
Our elaborations haven't disputed this
basic fact.
On the contrary, I hope you now see the difference!
 
Neil Gould

Hi,

Kennedy McEwen said:
That is simply untrue although it is a very popular misconception -
*NO* reconstruction has taken place at the point that sampling occurs.
Oh? Then, are you under the impression that the sample data and the
subject are in identity? If so, I strongly disagree with this notion. If
not, then the sampled data is a reconstruction of the subject, regardless
of the format and whether the format is perceivable as a representation of
the subject. IOW, it could be a serial list of numbers, but that is how
the sample is representing _the subject_, and not just some unrelated
event. It is therefore a construct representative of the subject, aka a
reconstruction. However, more to the point, distortion is inextricably
inherent in the sampled data, and thus relevant to the "difficulty drum
scanning negatives".
Reconstruction takes place much later in the process and can, indeed
usually does, use completely different filters and processes from
those associated with the sampling process, resulting in completely
different system performance.
You appear to be referring to an interpretation of the sampled data. That
process will introduce another set of distortions, and disregards the fact
that the data is already a distortion.
An excellent example of this occurs in the development of the audio
CD. The original specification defined two channels sampled at
44.1kHz with 16-bit precision and this is indeed how standard CDs are
recorded.
No, that's how CDs are duplicated or replicated. Typically, professional
CDs are currently recorded at sample rates of up to 192 kHz, with 24 bit
or greater precision, manipulated (e.g. "mixed" and "master mixed") at 56
bits or greater precision, and down-sampled to the final audio
specification of 44.1 kHz / 16 bit using various dithering algorithms to
mask the sample conversion errors. I think people would be horrified by
the results if such an approach was used to scan negatives. We don't have
to go there.
I was hoping to point you to a longstanding article on the ChipCenter
website but I just checked and it has been removed - perhaps as a
consequence of my complaint about the serious errors it contained. I
have an archived copy if you are interested in reading it though!
Basically, this article, like your statement above, assumed that the
"reconstruction" had already taken place when the signal was sampled
I think that we are in disagreement only about use of the term
"reconstruction". I (and I suspect Don) was using it in the context of the
sampled data being a construct not in identity with the subject. In that
context, it is representation of the subject with a unique structure, and
by its association with the subject a "reconstruction" of it. I was *not*
referring to, for example, images that are generated from the stored data
(which, as we agree, introduces new and independent errors). However, I
understand the distinction that you are making, and agree with it, for the
most part. ;-)

My main problem with this line of discourse is that it ignores the fact
that the inescapable distortions inherent in the sampled data are at the
heart of the topic at hand. I think we should be talking about those
distortions and how to improve the representation within the scope of drum
scanning, rather than pursuing some tangential issue of semantics.
Not at all - it is critical. "Jaggies" in an image produced from
sampled data indicate inadequate reconstruction filters. You seem to
have a misplaced assumption that the sampled data itself is
distorted - that can only occur if the input filter is inadequate,
which is the crux of the issue of selecting an appropriate spot size
and shape in the drum scanner.
As has already been pointed out, the smallest spot size available to
"commonplace" drum scanners is still larger than the smallest grains in
"commonplace" films. Other consequences of "real world" dot shapes were
discussed, as well. How can those *not* result in distortions of the
original subject? (the quotes are to suggest that one may not consider a
US$100k device to be "commonplace", yet it will have these limitations).
No, that is completely wrong. A sampled system requires two filters -
an input filter, which is present in the signal stream prior to the
sampling process, and an output filter which is present in the signal
stream after the sampling has been undertaken. Aliasing is a
consequence of an inadequate input filter. This will result in
distortion of the sampled data, however if the filter is adequate then
there is no reason for such distortion to be present.
Do you really see your comment as a rebuttal to my statement? "Aliasing
is a consequence of an inadequate input filter" is simply another way to
describe that one form of distortion (there are others) inherent in samples
from drum scanners, given that the "input filter" of "commonplace" drum
scanners *will* be inadequate to flawlessly sample "commonplace films".
;-)
A properly filtered sampled system exhibits *NO* distortion.
Perhaps you can point the OP to such a system, so that he can get his film
scanned flawlessly, putting this matter to rest? ;-)

Best regards,
 
Kennedy McEwen

Neil said:
Hi,


Oh? Then, are you under the impression that the sample data and the
subject are in identity?

No, however the sampled data is in identity with the subject *after* it
has been correctly filtered at the input stage. This principle is the
entire foundation of the sampling process. No information can get past
the correct input filter which cannot be accurately and unambiguously
captured by the sampling system.

"Accurately and unambiguously" = "No distortion".
If
not, then the sampled data is a reconstruction of the subject, regardless
of the format and whether the format is perceivable as a representation of
the subject.

If properly filtered prior to sampling then the sampled data is a
*perfect* representation of the filtered subject. The filtering may
remove information from the subject, but it cannot add information. As
such, when properly reconstructed, the sampled data will *exactly* match
the subject after input filtering. In short, there may be *less*
information in the properly sampled and reconstructed subject than in
the original, but there can never be more. However imperfect
reconstruction will result in artefacts and distortion which are not
present in the original subject - false additional information, and
jaggies fall into this category, they are not aliasing artefacts.
IOW, it could be a serial list of numbers, but that is how
the sample is representing _the subject_, and not just some unrelated
event. It is therefore a construct representative of the subject, aka a
reconstruction.

The list of numbers is the minimum data stream that the subject can be
represented in. As such, it represents the subject in a coded form
however, inherent in that coding is the notion that it requires decoding
to accurately reconstruct the subject. Each sample represents a measure
of the subject at an infinitesimally small point in space (or an
infinitesimally small point in time). The samples themselves are not a
reconstruction of the subject, they are merely a representation of it,
just as ASCII is not a reconstruction of your text, merely a
representation of it - it requires a decoding process (ASCII to text) to
turn that representation into a reconstruction.
However, more to the point, distortion is inextricably
inherent in the sampled data, and thus relevant to the "difficulty drum
scanning negatives".

Sorry Neil, but that is completely wrong. The entire principle of
sampling is based on the fact that it does *not* introduce distortion
because *all* of the information present in appropriately filtered input
signal is captured by the sampling system. This really is not something
you should be arguing about because there are numerous mathematical
proofs around which explain the concept far more efficiently than can
ever be achieved in a text only medium.
You appear to be referring to an interpretation of the sampled data. That
process will introduce another set of distortions, and disregards the fact
that the data is already a distortion.
That, most certainly, is *NOT* a fact! Whilst I am referring to an
interpretation of the sampled data, the correct interpretation does
*not* introduce distortion. You appear to be hung up on the false
notion that every step introduces distortion - it does not.
No, that's how CDs are duplicated or replicated.

No, that is the Red Book specification - I suggest you look it up - how
you get to that sampled data is irrelevant to the discussion on the
reconstruction filter.
Typically, professional
CDs are currently recorded at sample rates of up to 192 kHz, with 24 bit
or greater precision, manipulated (e.g. "mixed" and "master mixed") at 56
bits or greater precision, and down-sampled to the final audio
specification of 44.1 kHz / 16 bit using various dithering algorithms to
mask the sample conversion errors.

The *original* recording systems used for the creation of CDs simply
applied a band limiting analogue filter and sampled the resulting audio
directly at 16-bit accuracy for writing to the CD. I am well aware that
there have been improvements in the recording technology over the years
however that merely improves the input filter prior to creating the
16-bit data samples which are written on the CD. A CD published in 1980
has the conventional PCM audio signal encoded in exactly the same way as
a CD published in 2004.

However the point, which you chose to snip from the text, is that
improvements to the *reconstruction* filter through the introduction of
oversampling, noise shaping, MASH and Bitstream systems is equally
relevant to the publications of 1980 as it is to those of 2004 - yet the
data still has the same limitations - 2-channel 16-bit 44.1kHz
representations of the audio stream.
I think people would be horrified by
the results if such an approach was used to scan negatives. We don't have
to go there.
Many people do go there, without even thinking about it. Scan at 16bpc
at 4000ppi, process to optimise the image, downsample to 3000ppi or
2000ppi, reduce to 8bpc and archive - much the same process and overhead
between capture and archive as 192kHz 24bpc audio is to the 44.1kHz
16bpc CD publication.
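
In numpy terms that workflow is roughly the following (a schematic sketch with
a synthetic stand-in for the scan, not anyone's production pipeline):

    import numpy as np

    rng = np.random.default_rng(3)
    # Stand-in for (a crop of) a 4000 ppi, 16 bit-per-channel greyscale scan.
    scan16 = (rng.random((2048, 2048)) * 65535).astype(np.uint16)

    # Downsample 2:1 (4000 ppi -> 2000 ppi) with a 2x2 box average as a simple
    # low-pass prefilter; a real workflow would use a better filter.
    h, w = scan16.shape
    boxed = scan16.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    # Reduce to 8 bits per channel for the archive copy.
    archive8 = np.clip(np.round(boxed / 257.0), 0, 255).astype(np.uint8)

    print(scan16.shape, scan16.dtype, "->", archive8.shape, archive8.dtype)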

Of course, this approach assumes that the entire image can be adequately
represented in 3000 or 2000ppi, which may not be the case, just as many
audiophiles clamour for HD-CD media to meet their higher representation
requirements.
I think that we are in disagreement only about use of the term
"reconstruction".

From your subsequent statements that would not appear to be the case!
My main problem with this line of discourse is that it ignores the fact
that the inescapable distortions inherent in the sampled data are at the
heart of the topic at hand.

When there is nothing there it is reasonable to ignore it. There is no
distortion in sampling a correctly filtered signal - that, it would
appear, is the crux of the disagreement between us. Since the
mathematics behind this is well beyond a text only medium, I suggest you
browse a few textbooks on the subject. There are many that I could
suggest but, whilst its main topic is a mathematical technique which is
highly significant to this subject, Ron Bracewell's classic text "The
Fourier Transform and its Applications" covers the matter in very
precise detail. Specifically I refer you to Chapter 5, dealing with
the Impulse function, as a forerunner to Chapter 10, which details the
mathematics of the entire sampling process. Once again though, I
suggest you read some of Claude Shannon's very original and readable
texts on the topic, specifically Shannon's Sampling Theorem which states
that:
"When the input signal is band limited to meet the Nyquist Sampling
Criterion that signal can be reconstructed with full accuracy from the
sampled data."
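
That statement is easy to check numerically. The sketch below builds a
band-limited periodic test signal, samples it, and reconstructs values
between the samples by ideal band-limited (FFT zero-padding) interpolation;
nothing in it is specific to scanners:

    import numpy as np

    n = 256                                  # samples per period
    k = np.arange(n)

    def signal(t):                           # t in sample units, may be fractional
        # Band limited: components at 3, 17 and 40 cycles per period, all below Nyquist (128).
        return (np.sin(2 * np.pi * 3 * t / n)
                + 0.5 * np.cos(2 * np.pi * 17 * t / n)
                + 0.2 * np.sin(2 * np.pi * 40 * t / n))

    samples = signal(k)

    # Reconstruct values *between* the samples by zero-padding the spectrum
    # (ideal band-limited interpolation for a periodic signal).
    upsample = 8
    spec = np.fft.rfft(samples)
    padded = np.zeros(n * upsample // 2 + 1, dtype=complex)
    padded[:spec.size] = spec
    fine = np.fft.irfft(padded, n * upsample) * upsample

    t_fine = np.arange(n * upsample) / upsample
    print("max reconstruction error: %.2e" % np.max(np.abs(fine - signal(t_fine))))
    # Around 1e-13 in double precision: the samples carry all the information in
    # the band-limited signal, exactly as the sampling theorem says.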

Your assertion that the sampled data is inherently distorted and that
this inevitably passes into the reproduction is in complete disagreement
with Claude Shannon's 1949 proof. I suggest that you will need much
more backup than a simple statement of disagreement before many people
will take much notice of such an unfounded allegation.
I think we should be talking about those
distortions and how to improve the representation within the scope of drum
scanning, rather than pursuing some tangential issue of semantics.
We are - however your claim that sampling introduces its own distortions
irrespective of the rest of the system prevents that discussion from
moving forward to any degree.
As has already been pointed out, the smallest spot size available to
"commonplace" drum scanners is still larger than the smallest grains in
"commonplace" films. Other consequences of "real world" dot shapes were
discussed, as well. How can those *not* result in distortions of the
original subject? (the quotes are to suggest that one may not consider a
US$100k device to be "commonplace", yet it will have these limitations).

Good God, I think he's finally got it, Watson! The spot is part of the
input filter of the sampling system, just as the MTF of the imaging
optics is!

Indeed these components (optics, spot etc.) can be used without sampling
in the signal path at all, as in conventional analogue TV, and will
result in exactly the same distortions that you are referring to. If
this is not proof that sampling itself does not introduce an inherent
distortion then I do not know what is!

Just in case you haven't noticed, you have in the above statement made a
complete "about-face" from your previous statements - you are now
ascribing the distortions, correctly, to the input filter not the
sampling process itself, which introduces *no* distortion, or the
reconstruction filter which can introduce distortion (e.g. jaggies) if
improperly designed.
Do you really see your comment as a rebuttal to my statement?

Absolutely - there is a vast distinction between the filters in the
total system and the sampling process itself. In a correctly filtered
system the sampling process can be added and removed without introducing
distortion of any kind or level whatsoever.
Perhaps you can point the OP to such a system, so that he can get his film
scanned flawlessly, putting this matter to rest? ;-)
The point is that he has already done this - most drum scanner
manufacturers produce equipment capable of the task, unfortunately many
operators are not up to driving them close to perfection - often because
they erroneously believe that such perfection is unobtainable in sampled
data, so why bother at all.
 
