Descreening previously scanned images


to do list

Is there a way to "descreen" magazine scans which have previously been
scanned without the scanner being set to descreen?

A friend of mine sent me scans of various magazines, in total around 600
pages, but the scanner was not set to descreen the pages. Is there a way to
simulate descreening in Photoshop or other editing software?

Steve.
 

CSM1

to do list said:
Is there a way to "descreen" magazine scans which have previously been
scanned without the scanner being set to descreen?

A friend of mine sent me scans of various magazines, in total around 600
pages, but the scanner was not set to descreen the pages. Is there a way to
simulate descreening in Photoshop or other editing software?

Steve.

There is probably not any good way to get rid of moiré after the scanner.

Moiré is created by the interaction of the dots in the half-tone screen and
the dot placement in the scanner's sensor. One of the ways to descreen is to
scan at a high resolution to resolve the individual dots.

Once the pattern is in the file, there is little that one can do, except
blur the image.

You need to ask your friend to do it over and do it right, or live with the
moiré.
 

James Kendall

Is there a way to "descreen" magazine scans which have previously been
scanned without the scanner being set to descreen?

A friend of mine sent me scans of various magazines, in total around 600
pages, but the scanner was not set to descreen the pages. Is there a way to
simulate descreening in Photoshop or other editing software?

Steve.

Actually, there is an excellent way to remove halftone screens and
other spatially-periodic artifacts by use of the Fourier transform.
See: http://www.reindeergraphics.com/tutorial/chap4/fourier04.html
The program applied there is rather expensive for the reason that it
is a great compendium of processing routines. The freeware called
Scion Image also provides the FT-based artifact removal. But, note
that the process of correcting one image may take two or three minutes
of operator time because decision-making is involved. -- j.
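
For anyone who wants to experiment outside those packages, here is a rough
Python/numpy sketch of the same FT-based idea (it also leans on scipy for
the dilation step; the function name, the median-based peak threshold and
the notch radius are my own illustrative choices, not part of any product
mentioned above):

    # Illustrative FT descreen: find the bright, isolated peaks that a periodic
    # screen produces in the spectrum, notch them out, and transform back.
    import numpy as np
    from scipy.ndimage import binary_dilation

    def remove_periodic(gray, peak_factor=50.0, keep_dc=20, notch_radius=3):
        """gray: 2-D float array (one channel). Returns a filtered copy."""
        F = np.fft.fftshift(np.fft.fft2(gray))         # spectrum, DC in the centre
        mag = np.abs(F)
        cy, cx = np.array(F.shape) // 2
        yy, xx = np.ogrid[:F.shape[0], :F.shape[1]]
        dist = np.hypot(yy - cy, xx - cx)
        # bright outliers well away from the DC region are the screen frequencies
        peaks = (mag > peak_factor * np.median(mag)) & (dist > keep_dc)
        notch = binary_dilation(peaks, iterations=notch_radius)
        F[notch] = 0                                    # zero the offending frequencies
        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

Whether those defaults find the right peaks depends entirely on the scan;
as the rest of the thread makes clear, this only works cleanly when the
screen was sampled well enough in the first place.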
 

Brendan R. Wehrung

James said:
Actually, there is an excellent way to remove halftone screens and
other spatially-periodic artifacts by use of the Fourier transform.
See: http://www.reindeergraphics.com/tutorial/chap4/fourier04.html
The program applied there is rather expensive for the reason that it
is a great compendium of processing routines. The freeware called
Scion Image also provides the FT-based artifact removal. But, note
that the process of correcting one image may take two or three minutes
of operator time because decision-making is involved. -- j.


Some image editors (like Paint Shop Pro) also include moiré removal
filters, although I've never found them to work that well.

Brendan
--
 

Kennedy McEwen

to do list said:
Is there a way to "descreen" magazine scans which have previously been
scanned without the scanner being set to descreen?

A friend of mine sent me scans of various magazines, in total around 600
pages, but the scanner was not set to descreen the pages. Is there a way to
simulate descreening in Photoshop or other editing software?
Not without degrading the images. There are manual methods of
descreening after the scan, but they rely on scanning at a higher
resolution to begin with. Take a look at:
http://www.scantips.com/basics06.html and subsequent pages to see how to
do it. If the originals are not scanned at sufficient resolution then,
depending on the images, it might be worth sacrificing a little more
resolution for cleaner images - but that is what it comes down to in the
end. There are no easy fixes. The alternative is to get the originals
rescanned at a higher resolution in the first place.
 

Kennedy McEwen

James Kendall said:
Actually, there is an excellent way to remove halftone screens and
other spatially-periodic artifacts by use of the Fourier transform.
See: http://www.reindeergraphics.com/tutorial/chap4/fourier04.html
The program applied there is rather expensive for the reason that it
is a great compendium of processing routines. The freeware called
Scion Image also provides the FT-based artifact removal. But, note
that the process of correcting one image may take two or three minutes
of operator time because decision-making is involved. -- j.
As with any other method though, once the screen has aliased there is no
way to remove it without loss of image detail. The FT method will
remove specific spatial frequencies but, since the spatial frequencies
of the print dots have aliased during the scanning process, image
information is also present at those same frequencies. The FT method
can also be implemented by spatial convolution, which is the method used
by conventional softening and sharpening filters.

I have a few test images from 30-odd years ago demonstrating the FT
method using purely analogue techniques. An image of a small newspaper
print was photographed and the negative illuminated by a laser, then
focused by a lens. This produces an image at infinity which is the
fourier transform of the image at the focus, and this was then brought
to a focus on another piece of film. By masking the original image so
that only a uniform area of grey, showing just the dot pattern, was
left, the fourier transform of the dot pattern was created on the film.
This was then developed and replaced in situ. The entire original was
then revealed, creating the FT of the original image complete with dot
spatial frequencies (many more exist than just the limited set shown on
that synthetic test page!) which the developed negative masked out to
leave only the FT of the image. This was then projected back to
infinity by another lens, creating a dot free image at its focus, which
was then recorded on film. It's a lot easier with computers doing the
number crunching!
 

James Kendall

Not without degrading the images. There are manual methods of
descreening after the scan, but they rely on scanning at a higher
resolution to begin with. Take a look at:
http://www.scantips.com/basics06.html and subsequent pages to see how to
do it. If the originals are not scanned at sufficient resolution then,
depending on the images, it might be worth sacrificing a little more
resolution for cleaner images - but that is what it comes down to in the
end. There are no easy fixes. The alternative is to get the originals
rescanned at a higher resolution in the first place.

To anyone interested, I will e-mail a JPG example of a scanned
newspaper photograph, the Fourier transform of that, the edited
transform, and the recovered image. There is no loss of visible detail
in the final image because no blurring is applied. The amount of
Fourier "energy" eliminated in editing is insignificant in comparison
with the overall image information.
 

Kennedy McEwen

James Kendall said:
To anyone interested, I will e-mail a JPG example of a scanned
newspaper photograph, the Fourier transform of that, the edited
transform, and the recovered image. There is no loss of visible detail
in the final image because no blurring is applied.
Filtering specific frequencies by fourier transform is *exactly* the
same as blurring with a specifically matched filter! These are
mathematical *identical* operations. Consequently, if the image has
been scanned with insufficient resolution to isolate the individual dots
in the image, they will alias into the image content and *no* amount of
filtering, whether in the FT domain or the spatial domain can separate
them without further loss of image content.

There is no question that if the image is scanned with sufficient
resolution then you can eliminate the dots without image loss by spatial
filtering in the fourier domain, and I have no doubt that your example
demonstrates the capability well. However under those same
circumstances you can achieve the same results by filtering in the
spatial domain without fourier transforming the image.
The amount of
Fourier "energy" eliminated in editing is insignificant in comparison
with the overall image information.

There is no such thing as "Fourier energy". The FT is simply the same
information as the image contains displayed as a spatial frequency map,
just the same as an audio waveform being displayed as a frequency
spectrum. Multiplication of two functions in the fourier domain is the
mathematical identity of convolution of two matching functions in the
spatial domain - this is actually called the "Convolution Theorem". If
you FT an image and then mask specific spatial frequencies, you merely
multiply the FT of the image with the mask. The convolution theorem
states that this is exactly the same as convolving the image with the
inverse FT of the mask. Consequently, the kernel of the user defined
blur filter is merely the inverse FT of the mask.

It is relatively trivial to achieve an approximation of the ideal user
defined filter kernel (with the same accuracy as approximating the ideal
FT mask with which to remove the FT of the dots with minimum effect on
the FT of the image) simply by examination of the original image and the dot
matrix array contained within it.
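
A quick numerical check of that identity (purely illustrative - the mask
here is arbitrary) shows that editing the spectrum and circularly
convolving with the inverse FT of the mask give the same answer:

    # Convolution theorem check: masking in the Fourier domain equals circular
    # convolution with the kernel that is the inverse FT of the mask.
    import numpy as np

    N = 32
    img = np.random.default_rng(1).random((N, N))      # stand-in "image"
    mask = np.ones((N, N))
    mask[:, 5] = 0                                     # suppress one frequency column...
    mask[:, N - 5] = 0                                 # ...and its mirror, so the kernel is real

    out1 = np.real(np.fft.ifft2(np.fft.fft2(img) * mask))   # route 1: edit the spectrum

    kernel = np.real(np.fft.ifft2(mask))               # route 2: the matching blur kernel
    out2 = np.zeros_like(img)
    for dy in range(N):                                # explicit circular convolution
        for dx in range(N):
            out2 += kernel[dy, dx] * np.roll(img, (dy, dx), axis=(0, 1))

    print(np.allclose(out1, out2))                     # True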
 

James Kendall

.... if the image has
been scanned with insufficient resolution to isolate the individual dots
in the image, they will alias into the image content and *no* amount of
filtering, whether in the FT domain or the spatial domain can separate
them without further loss of image content.

Well, if the dots are not resolved, then there's no problem to begin
with. To the extent that *any* periodic artifacts are visible, the FT
will remove them, and do so without discarding any image content
significant to *visual appearance*. One caveat: it removes stripes in
men's shirts and demolishes polka-dot patterns.
... you can achieve the same results by filtering in the
spatial domain without fourier transforming the image.

Would one do that? Once a convolution matrix is worked out (probably
via Fourier methods, given typical image sizes) one would almost
surely carry out the convolution via FFT. Therefore, why not use the
direct method? It's easy, requires only drawing crude masks at the
offending frequencies in the Fourier plane, and is all accomplished in
a matter of seconds (per color channel), start-to-finish.

To answer the original poster's query, yes, there is DEFINITELY a
practical means for the removal of halftone screens and other periodic
patterns from an image. Significantly, Photoshop and similar programs
do not include frequency-selective removal methods.

Again, I offer to anyone interested a practical example of halftone
removal and a description of how it's done using freeware. Just ask.
It'll convince.
-- j.

PS to Kennedy: I understand and very much appreciate your experiment
of yore, "... 30-odd years ago demonstrating the FT method using
purely analogue techniques." I did nearly the same in nearly the same
era.
 

James Kendall

To answer the original poster's query, yes, there is DEFINITELY a
practical means for the removal of halftone screens and other periodic
patterns from an image. Significantly, Photoshop and similar programs
do not include frequency-selective removal methods.
Subsequent to writing three messages on the subject of the
descreening of previously-scanned images, I came to the realization
that the initial posting concerned removal of moiré effects, rather
than the removal of the screen grid, per se. What I asserted in my
postings is irrelevant to the moiré problem, and therefore incorrect.
I apologize for the confusion thus created. -- j.
 

Kennedy McEwen

James Kendall said:
Well, if the dots are not resolved, then there's no problem to begin
with.

No, the issue is where the dots are not capable of being reproduced by
the sampling density. They may still be resolved by the scanner optics
and CCD elements, and consequently have aliased. Aliasing means that
the dots are now inseparably mixed with the image content and
consequently cannot be removed by any form of filtering without removing
image content as well.
To the extent that *any* periodic artifacts are visible, the FT
will remove them, and do so without discarding any image content
significant to *visual appearance*. One caveat: it removes stripes in
men's shirts and demolishes polka-dot patterns.
Only because you are eliminating the wrong spatial frequencies. It is
fairly easy to determine the spatial frequency of the dot pattern and
eliminate only that from the FT of the image.
Would one do that?

Yes, when it is a much more efficient method computationally. An
imaging system model that I developed some time ago with a late
colleague uses several such filters for each of the system components.
Being the software and mathematical engineer, he did an extensive
analysis of whether it was better to FT and multiply or just to
convolve. Not surprisingly, simple convolution works out to be most
efficient for higher spatial frequencies, where the resulting filter
kernel is small. On the other hand, the FT route is more efficient for
low spatial frequencies. The breakpoint depends on the size of the
image. Now, it seems to me that the dot pattern of a printed image is
almost invariably a high spatial frequency, resulting in a small
convolution mask, which makes convolution the most expedient solution.
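
For anyone who wants to see that trade-off for themselves, scipy's
convolve routine can be forced down either route; the image and kernel
sizes below are arbitrary and the crossover point will vary from machine
to machine:

    # Direct versus FFT-based convolution for a small and a larger kernel.
    import time
    import numpy as np
    from scipy.signal import convolve

    image = np.random.default_rng(0).random((512, 512))
    for ksize in (3, 31):                              # small kernel, then a larger one
        kernel = np.ones((ksize, ksize)) / ksize**2
        for method in ("direct", "fft"):
            t0 = time.perf_counter()
            convolve(image, kernel, mode="same", method=method)
            print(ksize, method, round(time.perf_counter() - t0, 3), "s")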
Once a convolution matrix is worked out (probably
via Fourier methods, given typical image sizes) one would almost
surely carry out the convolution via FFT.

As I mentioned, with a little experience it is easy to determine the
convolution matrix without an FT, and there are only a few terms that
need to be calculated. Furthermore, when the dots are resolved and
not aliased, a simple blur filter is adequate to remove them completely,
without any FT at all - which is all that the descreening process does
in the scanner software.
Therefore, why not use the
direct method?

Depends what you consider the "direct method" - if there is no need to
compute the FT then convolution is good enough. If the FT procedure
were the more direct method then that would be used in the descreening
algorithm of the scanner software, but it isn't.
It's easy, requires only drawing crude masks at the
offending frequencies in the Fourier plane, and is all accomplished in
a matter of seconds (per color channel), start-to-finish.

To answer the original poster's query, yes, there is DEFINITELY a
practical means for the removal of halftone screens and other periodic
patterns from an image.

That wasn't the original poster's question! He wanted to remove the
dots from images which had not been scanned with sufficient resolution
(ie. the dots had aliased) without loss of image quality. The answer to
that question is that there is *no* such technique.
 

false_dmitrii

to do list said:
Is there a way to "descreen" magazine scans which have previously been
scanned without the scanner being set to descreen?

A friend of mine sent me scans of various magazines, in total around 600
pages, but the scanner was not set to descreen the pages. Is there a way to
simulate descreening in Photoshop or other editing software?

One hopes that your friend holds the rights to these 600 pages, or
that the scans are for personal use only....:)

false_dmitrii
 

false_dmitrii

(e-mail address removed) (James Kendall) wrote in message
To answer the original poster's query, yes, there is DEFINITELY a
practical means for the removal of halftone screens and other periodic
patterns from an image. Significantly, Photoshop and similar programs
do not include frequency-selective removal methods.

Again, I offer to anyone interested a practical example of halftone
removal and a description of how it's done using freeware. Just ask.
It'll convince.
-- j.

PS to Kennedy: I understand and very much appreciate your experiment
of yore, "... 30-odd years ago demonstrating the FT method using
purely analogue techniques." I did nearly the same in nearly the same
era.

Well, "Fourier Transform" means nothing to me yet, but there were a
couple of times when I'd have found pattern removal handy (instead, I
went with median/average filtering). Plus, I'm curious out of general
principle. Any references on the subject?

false_dmitrii
 

Kennedy McEwen

James Kendall said:
Subsequent to writing three messages on the subject of the
descreening of previously-scanned images, I came to the realization
that the initial posting concerned removal of moiré effects, rather
than the removal of the screen grid, per se. What I asserted in my
postings is irrelevant to the moiré problem, and therefore incorrect.
I apologize for the confusion thus created. -- j.
That is what I was concerned about, hence my explanations. ;-)
 

Bruce Gaylinn

FFT mathematically converts an image to a representation of its phase
and frequency patterns. Periodic image elements become spots which you
can visualize and manipulate. This can be useful for descreening,
enhancing or quantifying patterns, or removing motion blur. As Kennedy
has pointed out, other methods as used in Photoshop are often
mathematically equivalent and more convenient. FFT is available in some
commercial image processing packages and in scientific packages like
ImageJ (free from NIH, but aimed at technical uses).

For an example of descreening with FFT:
http://www.reindeergraphics.com/tutorial/chap4/fourier13.html
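
A small numpy illustration of "periodic image elements become spots" (the
synthetic image and the frequencies below are made up for the
demonstration):

    # A smooth "photo" plus a regular dot screen: the screen shows up as four
    # isolated bright spots in the magnitude spectrum, well away from the centre.
    import numpy as np

    N = 256
    y, x = np.mgrid[:N, :N]
    photo = np.exp(-((x - 128)**2 + (y - 128)**2) / 2000.0)
    screen = 0.2 * np.cos(2 * np.pi * 30 * x / N) * np.cos(2 * np.pi * 30 * y / N)
    mag = np.abs(np.fft.fftshift(np.fft.fft2(photo + screen)))

    c = N // 2
    mag_hi = np.where(np.hypot(y - c, x - c) > 10, mag, 0)  # ignore the smooth content near DC
    spots = np.argwhere(mag_hi > 0.5 * mag_hi.max()) - c
    print(spots)        # four spots at roughly (+/-30, +/-30) cycles per image width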
 

James Kendall

That wasn't the original poster's question! He wanted to remove the
dots from images which had not been scanned with sufficient resolution
(ie. the dots had aliased) without loss of image quality. The answer to
that question is that there is *no* such technique.
--

Kennedy is correct in all he has explained. I mistook the original
question, and responded inappropriately. My sincere apology. -- j.
 

Kennedy McEwen

false_dmitrii said:
Well, "Fourier Transform" means nothing to me yet, but there were a
couple of times when I'd have found pattern removal handy (instead, I
went with median/average filtering). Plus, I'm curious out of general
principle. Any references on the subject?
Quite literally, thousands if not millions. Just type "fourier" or
"fourier transform" into Google. Unfortunately, most of these references
get into quite complex mathematics very quickly, which puts many people
off learning about the approach, its strengths and its uses, and off
gaining an insight into what must rank as one of the most beautifully
simple mathematical techniques yet devised. This was a problem I had when
I first encountered them, more years ago than I care to remember, when
they were just complex mathematics with no obvious purpose. So I learned
the mathematics "by rote", well enough to get through examinations. I
always remember an optics lecturer telling us that we would know we
understood FTs when we could switch between thinking of a problem in the
normal domain and the fourier domain at will - at the time I thought he
must be mad! It was several years before I recognised their value, one
example being the situation above, and in doing so I realised that all
the complex mathematics just obscured the core operation. Soon after, I
found myself doing exactly what my optics lecturer had predicted - and I
hadn't realised I was doing it until someone asked me how I had resolved
a certain problem without using the detailed maths at all! It really is
very simple and elegant once all the detailed mathematics have been
stripped away, so here is a perspective with a low mathematics content.

The approach is named after Jean Baptiste Fourier, an incredibly adept
Frenchman. Fourier originally trained for the priesthood, but never
took his vows. He served in Napoleon's army as a scientific advisor
during its invasion of Egypt and became stranded there, along with the
rest of Napoleon's army, after the British under the command of Nelson
annihilated the French fleet in the Battle of the Nile. Returning to
France a year after Napoleon skulked back, Fourier settled to teach
mathematics and study the physics of heat flow. Amongst his many
achievements, Fourier also designed and supervised the construction of
the main highway between Grenoble and Turin - so his mathematics were
focussed on practical applications, which is often ignored when his
technique is taught these days.

The story all started 50 years before Fourier's debacle in Egypt when
the Swiss mathematician Daniel Bernoulli showed that the waveforms in a
vibrating string could be described as the sum of a series of sine and
cosine waves. Two years later another Swiss, Leonhard Euler - probably the
greatest mathematician of all time - building on Bernoulli's results,
proposed the theory that *any* waveform could be represented as a sum of
sine waves. Nobody questioned Euler's theory, but nobody used it either,
until Fourier came along in 1807 with a mathematical proof that Euler's
theory was correct. However, the mathematics he used to do this was
revolutionary - so much so that the greatest mathematician of the time,
Lagrange, denounced the work of the young upstart as "impossible!". He
and Laplace, the second greatest mathematician of the day, conspired to
prevent Fourier's paper from even being published!
This wasn't just professional jealousy - there is ample evidence that
both of these mathematical giants of their time simply did not
understand the completely original approach that Fourier had used, it
was so different and unique. These days though, it is a lot easier to
grasp because much of the language and concepts developed by Fourier
have found their way into common usage.

So there you have it, Fourier proved that any waveform could be
represented by the sum of a series of sine waves in the correct
proportion and phase to each other. *Every* waveform, whether it was a
simple mathematical function, the sound wave produced by a musical
instrument, waves on the surface of the ocean, radio waves - even
photographic images, which are simply two dimensional waveforms of
light. Fourier proved that all of these can be reduced to the sum of a
series of sine waves in their respective media. The process of
identifying these sine waves, their amplitudes and phases is called
Fourier Analysis, whilst the data of the amplitudes and phase for each
frequency in a given waveform is called a Fourier Transform. The plot
of such data as a function of frequency is just a frequency spectrum.

A slight confusion here is that the operation of computing the discrete
frequency components in sampled repetitive data, as opposed to analysing
the exact components in continuous free waveforms, is called a discrete
fourier transform. So the term Fourier Transform is often applied to
the operation as well as the result. There are special tricks that
permit the discrete fourier transform to run faster on computers, by
storing the results of certain functions that are repeated many times in
lookup tables and ordering the process in certain ways and these are
called Fast Fourier Transforms, or FFTs. The most famous of these is
probably the Cooley-Tukey algorithm, developed by IBM and Princeton
researchers and first published in 1965, although it turned out to be a
rediscovery of a long-lost procedure used by the German mathematician and
physicist Gauss almost 150 years earlier. Computers are nothing new! ;-)

To begin with though, it is easier to think of the operation of fourier
analysis in one dimension, such as an audio signal. So, for example,
take a pure audio sine wave. If you plot out the audio waveform in
terms of amplitude versus time, you have a sine wave that extends as far
as you can see to the left and right. If you plot the frequency
spectrum then you get a single spike at the frequency of the sine wave
and nothing else - no other frequencies are present. That single spike
spectrum is just the Fourier Transform of the sine wave. Easy or what?
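
In numpy terms that single-spike picture looks like this (the 50-cycle
tone is an arbitrary choice); the second entry it reports is the
"negative frequency" partner discussed just below:

    # The discrete Fourier transform of a pure tone: one spike plus its mirror.
    import numpy as np

    n = 1024
    tone = np.sin(2 * np.pi * 50 * np.arange(n) / n)   # exactly 50 cycles in the record
    spectrum = np.fft.fft(tone)
    print(np.nonzero(np.abs(spectrum) > 1e-6)[0])      # [ 50 974]  (974 is -50, wrapped)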

So when you look at the power spectrum of your audio amplifier or the
speakers in your hi-fi system, you are just looking at the fourier
transform of the waveforms as the hifi can reproduce them. You can see
if the hifi boosts the low frequencies in the bass more than the high
frequencies of the treble region or vice versa - very difficult to do if
you looked at how the hifi affected an audio waveform, but *much* more
meaningful to how you hear the music!

There is a slight gloss over some complication here, because the fourier
transform of the sine wave actually contains two spikes, one with a
positive frequency and one with a *negative* frequency, which might be a
little confusing at first. However a neat property of the FT is that
any real waveform (ie. a waveform which can be described only by real
numbers, not complex or imaginary numbers such as the sqrt(-1) ) has an
FT which is perfectly symmetric about 0 on the frequency axis. So all
those strange negative frequencies are just reflections of the real
frequency and can often be ignored - but remember they exist because
they are important in some cases, such as AM radio demodulation and the
analysis of aliasing.

Another fairly simple example would be a highly distorted audio signal,
a square wave. Fourier analysis shows that the square wave can be
broken down into the sum of a fundamental frequency and all of the odd
harmonics - so that is frequencies that are 3, 5, 7, 9... times the
fundamental, added in inverse proportion to their relation to the
fundamental with alternating positive and negative terms. It is more
difficult to describe in words, so in mathematical terms (sorry, but
this is the only equation!) the square wave is:
sin(f) - sin(3f)/3 + sin(5f)/5 - sin(7f)/7 +.... and so on.

If you lot this spectrum then you get a spike at the fundamental
frequency, f, with a negative spike at 3f which is 1/3rd of the size of
the fundamental, then a positive spike at 5f which is 1/5th of the size
of the fundamental and so on, with zero at all the other frequencies in
the plot. This data is just the fourier transform of the square wave.
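
The same sum is easy to try numerically (the number of terms is arbitrary,
and the 4/pi out front is just the overall scale factor that the equation
above leaves out); the reconstruction gets squarer as more odd harmonics
are added:

    # Building up a square wave from sin(f) + sin(3f)/3 + sin(5f)/5 + ...
    import numpy as np

    x = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
    square = np.sign(np.sin(x))                        # the target waveform
    for terms in (1, 5, 50):
        ks = range(1, 2 * terms, 2)                    # 1, 3, 5, ... odd harmonics
        approx = (4 / np.pi) * sum(np.sin(k * x) / k for k in ks)
        print(terms, np.mean(np.abs(approx - square))) # the error shrinks as terms are added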

An audio signal is just a waveform in one axis, time. However the same
technique can be extended to two and more dimensions. An image is just
a two dimensional waveform of light, and has a similar two dimensional
fourier transform. I noticed yesterday's news of the death of Francis
Crick, joint Nobel prizewinner with James Watson for discovering the
structure of DNA which was achieved by x-ray crystallography - a 3
dimensional fourier analysis technique. So it is a very powerful
process.

It is important to realise that the Fourier Transform is not different
from the original waveform, it is just the same information sorted and
viewed in a different way. I like to compare the Fourier Transform to
those famous silhouettes of two faces looking towards each other. In
one view, you see white faces on a black background but blink and take a
different view and you see a black vase on a white background. It is
just the same information but you are prioritising the information and
sorting it differently to give you an alternate view of what is present
in the image. The waveform and its FT are just like that - the same
information viewed in a different way. You can change a waveform into
its frequency components by the Fourier Transform and you can change the
frequency components into the waveform by an almost identical operation
called an inverse fourier transform.

Consequently, it is no surprise that there are lots of relationships
that link the waveform and its FT together - change one and the other
must also change, perform a certain operation on one and a matching
operation is performed on the other. Almost all of these relationships
are symmetric - which is one of the beautiful things about fourier
transforms. So if an operation applied to the waveform results in a
matching operation applied to the FT then the application of that same
operation to the FT results in the same matching operation being applied
to the waveform.

I have already mentioned one of these relationships - the symmetry rule.
If a waveform has no imaginary components (ie. it is a real waveform)
then the spectrum is symmetric around zero frequency. The corollary of
this is that if the spectrum is real then the waveform is symmetric
about the time (or space) axis. Symmetry within symmetry!

It follows that an asymmetric waveform (and how many of our images are
perfect mirror reflections along the vertical and horizontal axes?) has
a spectrum which must contain complex numbers. These complex numbers
are simply a mathematical description of the amplitude and phase of the
sine wave at any frequency. The amplitude is just the absolute value of
the complex number (not the real part) and the phase is the argument,
which is the arc-tangent of the ratio of the imaginary to real part. So
moving a waveform, or image, in any direction simply changes the
proportion of real and imaginary parts in the spectrum, the phase of the
frequencies, whilst maintaining their amplitude.
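
A small check of that last point (the signal and the shift are arbitrary):
shifting a waveform leaves the amplitude spectrum alone and only rotates
the phases:

    # Shift theorem: amplitudes unchanged, phases rotated.
    import numpy as np

    n = 256
    t = np.arange(n)
    sig = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
    F1 = np.fft.fft(sig)
    F2 = np.fft.fft(np.roll(sig, 17))                  # the same signal, shifted 17 samples

    print(np.allclose(np.abs(F1), np.abs(F2)))         # True: amplitudes identical
    print(np.angle(F1[5]), np.angle(F2[5]))            # same bin, different phase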

Another relationship, which is quite simple to grasp, is the principle
of truncation. Remember the audio sine wave that was plotted out
against time and extended as far as you could see to the left and right?
That means it extended to infinity - the sound existed for all time.
However the spectrum was quite finite - just a single frequency was
present. This is a general principle - a finite waveform will have an
infinite fourier transform that extends to infinite frequencies.
Similarly a finite spectrum corresponds to a waveform that exists over
infinite time (or space). This can cause problems, and it certainly
determines the size of FT necessary to approximate the correct result
efficiently but, generally speaking, waveforms that are infinite are
repetitive - or we don't much care whether they are - so they can be
replaced by finite waveforms that are assumed to repeat, which is the
underlying principle of the discrete fourier transform.

The general rule is that very small details in the waveform correspond
to very large frequencies in the fourier transform. That is why some
operations are more easily implemented in one domain than the other -
because some functions are large in the original domain, but have small
fourier transforms, whilst others that have large fourier transforms are
small in the original waveform. It is a balance between paying the
computation time of the FT in order to process the function where it is
small, and processing it directly in the domain where it is large.

This is important when it comes to sampling waveforms, including images.
The image is finite, so its spatial frequency spectrum is, by
consequence, infinite. However, another relationship is the Convolution
Theorem that I mentioned a couple of posts back. This basically says
that if you multiply two waveforms together in a piece-wise manner, the
result has the same fourier transform as you get by convolving the
fourier transforms of the two waveforms. The converse is also true.
Convolving is just a mathematical term for blurring one waveform by
another: at every possible position of one waveform relative to the
other, you multiply each point in one waveform by the corresponding point
in the other and add up all of those products to get the result for that
position. Sounds complex, but it is just a repetitive average product as
one waveform slides past the other.

When you sample the image with a CCD in a scanner or a digital camera
you implement a couple of processes. There are lots of different ways
of viewing this process, but they all reduce to the same thing, and the
following is the easiest to visualise in my opinion. The first process
is that you average (blur) the image at each point by the area of the
CCD element. This produces what I call a pixel response map. Then you
actually sample the pixel response map by producing an output signal for
just certain specific points where the CCD elements are actually
centred.

That first process, the averaging or blurring by the pixel area is
convolution - so the effect on the spatial frequency spectrum of the
image is... to multiply it with the fourier transform of the pixel's
response. Since this is nominally uniform across the area of the pixel,
and the pixel is finite, the fourier transform extends to infinity,
however the high frequency components are very much less than the low
frequency ones. The outcome is that the spatial frequency spectrum of
the image detected by each pixel contains very much less high frequency
content than the original image might have. There are standard FT
results that help to visualise what is happening here. For example, if
the pixel is perfectly square with a uniform response across its area
then it turns out that the FT is a sinc function, which is
sin(pi.a.f)/(pi.a.f), where a is the width of the pixel and f is the
spatial frequency. A bigger pixel means less high frequency information
is reproduced.
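
That sinc result is easy to confirm numerically: average exp(-2.pi.i.f.x)
over a uniform pixel of width a and compare it with numpy's sinc (which
is defined as sin(pi.x)/(pi.x)); the width and frequencies below are
arbitrary:

    # The FT of a uniform pixel response is sin(pi.a.f)/(pi.a.f).
    import numpy as np

    a = 1.0                                            # pixel width (arbitrary units)
    x = np.linspace(-a / 2, a / 2, 20001)              # finely sampled aperture
    for f in (0.3, 1.4, 2.75):                         # a few spatial frequencies
        numeric = np.mean(np.exp(-2j * np.pi * f * x)) # (1/a) * integral over the pixel
        print(f, round(numeric.real, 3), round(np.sinc(a * f), 3))   # the two agree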

The second step in the sampling process is just to output the signal for
each pixel position. This is the same as multiplying the blurred image
of the pixel response map with an array of infinitely narrow spikes,
known mathematically as delta functions. The effect of this
multiplication on the spectrum is of course... convolution. A special
property of the array of delta functions is that they have an FT which
is another array of delta functions. Whilst the spikes are separated in
the original CCD by the pixel pitch, they are separated in the FT by the
sampling frequency. As with everything else relating waveforms and
transforms, a smaller pitch means a bigger sampling frequency.

So now we have a convolution of the fourier transform of the pixel
response map with a series of spikes which are separated by the sampling
frequency. That effectively means the FT of the pixel map is replicated
at every delta function with the zero frequency point mapped to the
spatial frequency of the spike. This is quite a complex thing to
visualise if you have never tried it before, so it might help to
consider just two of these spikes, the one at the origin of the spatial
frequency scale and one at the sampling frequency. So we have two pixel
response map spectra being overlaid on top of each other, offset by the
sampling frequency.

Now, remember that first relationship of FTs, the symmetry rule? Well,
the original image was real, and the CCD elements were real, so the
pixel response map is also real - which means that the spatial frequency
spectrum of the pixel response map is... symmetric! Remember those
negative frequencies that I mentioned we had to consider - this is one
case where their existence is important.

So, now what you have is two symmetric spectra, overlaid and offset
relative to each other by the sampling frequency. That means that if
the spectrum of the pixel response map contains any spatial frequencies
which are greater than half of the sampling frequency then they will mix
with the symmetric negative spatial frequencies of the spectrum that has
been offset by the sampling frequency. Similarly, those large negative
spatial frequencies will mix with the positive spatial frequencies of
the spectrum at the origin. And this mixing occurs identically at every
delta function in the fourier transform. Furthermore, the higher the
spatial frequency, the lower the frequency at which it mixes into the
adjacent spectra. This
is the classical proof of Nyquist's Theorem that the sampled system can
unambiguously resolve frequencies up to half of the sampling frequency.
Fairly obviously, half the sampling frequency is the point at which each
spectrum starts to overlap the adjacent one. It also shows that even
the ideal CCD array with a flat response across each pixel area and no
dead area between adjacent pixels, will always alias an otherwise
unfiltered image - because the spatial frequency response of the pixel
is still significant beyond half the sampling frequency.
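
Aliasing is easy to demonstrate with a couple of sampled cosines (the
numbers are arbitrary): a tone above half the sampling frequency produces
exactly the same samples as a lower one, and its spectral peak turns up
at the wrong frequency:

    # Aliasing: 54 cycles sampled 64 times is indistinguishable from 10 cycles.
    import numpy as np

    n = 64
    t = np.arange(n)
    low = np.cos(2 * np.pi * 10 * t / n)               # below the Nyquist limit of 32
    high = np.cos(2 * np.pi * 54 * t / n)              # above it

    print(np.allclose(low, high))                      # True - identical samples
    print(np.argmax(np.abs(np.fft.rfft(high))))        # 10, not 54 - it has aliased down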

Finally, coming back onto the topic, it also shows that if the dots in a
halftone screen are not sampled with sufficient resolution to have at
least two samples for each dot pitch then they will give rise to
frequencies in the spatial spectrum of the pixel image map which are
greater than half the sampling frequency and will consequently mix
inseparably with frequencies of the image content - something we all
know and recognise as aliasing and moiré. It is obvious, once you
follow the fourier processes, that there is absolutely no way to remove
this defect without also removing significant image content. It is also
clear, however, that if the dots are sampled with sufficient density
(note they need not be resolved in the strict sense) then they can
simply be removed by masking out the high spatial frequencies in the
fourier transform of the image - those corresponding to greater than
half the spatial frequency of the dots themselves, which are simply
another sampling step in the half tone process. This uniform mask in
the spatial frequency spectrum is a multiplication, so the same process
can be achieved by convolving (ie. blurring) the original image with an
appropriate filter kernel, and in most cases an adequately sized blur
will do the job just as well.
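
To close the loop on that last paragraph, here is a small synthetic sketch
of the well-sampled case (all the numbers are invented): a dot screen at
64 cycles per image width is removed by zeroing everything above half that
spatial frequency, and the smooth content underneath survives essentially
untouched. The same mask could equally be applied as a blur, since its
inverse FT is just a convolution kernel.

    # Well-sampled halftone: mask the high spatial frequencies and the dots vanish.
    import numpy as np

    N, screen = 256, 64                                # screen at 64 cycles per width
    y, x = np.mgrid[:N, :N]
    photo = np.exp(-((x - 128)**2 + (y - 128)**2) / 2000.0)        # smooth content
    dots = 0.3 * np.cos(2 * np.pi * screen * x / N) * np.cos(2 * np.pi * screen * y / N)

    F = np.fft.fftshift(np.fft.fft2(photo + dots))
    c = N // 2
    F[np.hypot(y - c, x - c) > screen / 2] = 0         # keep below half the screen frequency
    clean = np.real(np.fft.ifft2(np.fft.ifftshift(F)))

    print(np.max(np.abs(clean - photo)))               # tiny: dots gone, photo intact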
 

Anoni Moose

to do list said:
Is there a way to "descreen" magazine scans which have previously been
scanned without the scanner being set to descreen?

A friend of mine sent me scans of various magazines, in total around 600
pages, but the scanner was not set to descreen the pages. Is there a way to
simulate descreening in Photoshop or other editing software?

As many have mentioned, it's best to have minimized the problem to
start with. There's a book by Margolis (I actually ended up with two
copies of it, I liked it so much... albeit a version written for PS 5,
not that the version of PS matters for the subject matter) who talks
about how using "moire" filters while scanning is horrible because one
can do a lot better job w/o blurring the resulting image like the filters
do. Maybe I should put my extra book on ebay for your friend. :) :) :)

Anyway, if you do things like scan with the image at the proper angle, to
work with the way halftones are printed in the magazine, and take
advantage of the fact that different colors have their pattern put down
at different angles (so they don't moire with themselves, I think) and
that some colors make a bigger difference (I recall that yellow was the
least impactful for the resulting patterns, so it's made the fall guy to
cut the patterns made by the others), then much better results can be
had with scanner-vs-print moire patterns.

Read his book(s) for the details. From what I've fussed with, his
lessons seem to work.

As to your current situation with already scanned images, perhaps
something like neatimage would work well (I know it does). Still
get blurring though -- but I think it's doing it in the frequency
domain (fourier business others were talking about), but don't know
for sure. That many pictures would take a good while though, it's
rather compute intensive (part of my thinking it's doing things in the
frequency domain).

Mike
 

Kennedy McEwen

James Kendall said:
Kennedy is correct in all he has explained. I mistook the original
question, and responded inappropriately. My sincere apology. -- j.
Sorry about that James, I wasn't trying to browbeat you! I posted the
response above before I had downloaded your earlier acknowledgement of
the mistake.
 
