Should I scan Slides, Negatives, Photos at 24-bit, 48-bit, or some other color depth? 2400 or 4800 ppi?


Peter D

I read that there is no point in scanning slides (originals are Ekta 25,
Fuji 100) at more than 2400ppi because there is no practical gain in quality
(all other things being equal), and specifically that 4800 ppi is a total
waste of space and may actually produce a lesser quality output because of
all the unnecessary information it collects. Ditto for negatives.

What's the best archival resolution for photos? They consist of old B&W,
100-400 printed on Kodak/Fuji/Other paper in Matte and Gloss, 4x6 through
8x10, various ages, various conditions.

As for color-depth, someone wrote that 8-bit is more than enough. Is it? I
was planning to scan using 24-bit or even 48-bit, but if I don't have to,
I'll speed up scanning and reduce storage needs.

Advice and opinions, please.
 

bmoag

For my personal use it is best to scan 35mm materials at a minimum of
2000-2400dpi: files can be downsized later as needed but not the opposite.
If you have the time and storage space there is no reason not to scan at
higher resolutions. If an image requires a lot of hand correction, for
example to eliminate scratches, it is actually easier if there are more
pixels to begin with. Most CPUs with 512mbs RAM can handle 2400dpi files
(~24mbs) but if you go larger or have more than one image open in Photoshop
you may need to increase memory and CPU speed. If your computer bogs down
with large size image files shut off all background programs, especially
Norton antivirus (with appropriate precautions).

For flatbed scanning of pictures there are calculations you can look up for
relating scan dpi to final image size. In general scanning at 300dpi for
prints on the flatbed will yield good 1:1 reproduction--even the best photo
inkjet printers do not really print at much greater than 300 dpi. However if
you are going to have to do a lot of manual retouching of scratches, etc. it
is easier if you have more pixels to work with.
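
As a rough illustration of those calculations, here is a small Python
sketch (my own, with made-up example numbers, not from any scanner
manual): the scan dpi you need is just the target print dpi times the
enlargement factor.

# Hypothetical helper: scan dpi needed so an enlargement still prints
# at a given printer resolution.
def required_scan_dpi(original_inches, print_inches, print_dpi=300):
    return print_dpi * (print_inches / original_inches)

# Enlarging a 6" wide print to 12" wide at 300 dpi needs a 600 dpi scan:
print(required_scan_dpi(original_inches=6, print_inches=12))  # 600.0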

The only real controversial issue is whether to scan at color depths of
greater than 8 bits. For scans of color prints the answer is a clear no
because the print itself does not likely reproduce the whole 8 bit gamut.

File sizes scale up dramatically when going from 8 to 16 bits, and only
Photoshop CS can work with 16-bit files using most of the program's
tools.

People who advocate using 12 or 16 bit color are correct that the gamut is
wider, but the gamut is wider than what most people can see, can reproduce
on their monitor and far exceeds what a printer with its 8 bit gamut can
reproduce. Therefore at some point in your workflow the gamut is going to be
arbitrarily chopped down to 8 bits by a rigid algorithm in some driver or by
the limits of your own eyes. Epson's educational materials point out that if
you happen to be working in 16 bit at the edge of a gamut that is outside
what the 8 bit printer can reproduce you will not be able to print the image
you see regardless of color management.

So unless you have a specific need to use 12 or 16 bit gamuts and
understand how to do so (because if you understand the relevant issues
there are good reasons to go up to 16 bit color) there is not a good
rationale for most users to use more than 8 bit color.
 

Witold

Peter D said:
I read that there is no point in scanning slides (originals are Ekta
25, Fuji 100) at more than 2400ppi because there is no practical gain
in quality (all other things being equal), and specifically that 4800
ppi is a total waste of space and may actually produce a lesser
quality output because of all the unnecessary information it collects.
Ditto for negatives.

The above probably depends on the quality of the scanner that is used.
Certainly a dedicated film scanner at 3200 dpi will pull out a lot of
data from the original negative. It will likely easily surpass even a
high quality flatbed scanner "rated" at 4800 dpi.

If you have a lot of slides/negatives to scan, then higher resolutions
will lead to big files that need to be stored somewhere. Note that a 3200
dpi scan of a 35mm film frame will produce a resolution that is
spatially equivalent to that of a 13.7 MP digital camera. A 4800 dpi scan
is spatially equivalent to 30 MP, while 2400 dpi would be equivalent to
7.7 MP. Note that a scanner samples each RGB channel individually, while
a digital camera samples only one of the three RGB channels at each
photosite, producing the missing colors by interpolation.
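
To show where those megapixel figures come from, here is the
arithmetic as a quick Python sketch (my own, assuming a full
36 x 24 mm 35mm frame):

def film_scan_megapixels(dpi):
    # 36 x 24 mm frame converted to inches, then sampled at `dpi`
    frame_w_in, frame_h_in = 36 / 25.4, 24 / 25.4
    return (dpi * frame_w_in) * (dpi * frame_h_in) / 1e6

for dpi in (2400, 3200, 4800):
    print(dpi, round(film_scan_megapixels(dpi), 1), "MP")
# 2400 -> 7.7 MP, 3200 -> 13.7 MP, 4800 -> 30.9 MP
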
What's the best archival resolution for photos? They consist of old
B&W, 100-400 printed on Kodak/Fuji/Other paper in Matte and Gloss, 4x6
through 8x10, various ages, various conditions.

For archival purposes, I would scan the photos using at least 300 dpi. If
you ever want to do prints of the same photo, then modern digital print
labs use printers that print around that resolution. Some do go as high
as 400 dpi.

If you settled on 400 dpi, that should prove adequate, while keeping the
file sizes reasonable. A 24-bit file of an 8"x10" print scanned at 400
dpi produces an uncompressed file that is 38 MB in size. If scanned as a
48-bit file, it would be double this (76 MB).
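
Those file sizes are easy to verify; a one-minute Python check (my
own sketch):

def scan_size_mb(width_in, height_in, dpi, bits_per_pixel):
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * (bits_per_pixel / 8) / 1e6

print(scan_size_mb(8, 10, 400, 24))  # ~38.4 MB uncompressed
print(scan_size_mb(8, 10, 400, 48))  # ~76.8 MB, i.e. double
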
As for color-depth, someone wrote that 8-bit is more than enough. Is
it? I was planning to scan using 24-bit or even 48-bit, but if I don't
have to, I'll speed up scanning and reduce storage needs.

Note that the term 8-bit usually has the same meaning as 24-bit. The 8-
bit refers to 8-bit data for each R, G, and B channel. When taken
together, each pixel then has 24 bits of data associated with it. In the
same way, a 48-bit scan will store each RGB channel as a 16-bit piece of
data.

If you scan using a 48-bit color depth, then you will find that the image
is better able to withstand editing of things like color balance and
having its levels adjusted. There are simply more tones to work with, and
banding is less likely to occur.
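
You can put numbers on that headroom. A toy Python/NumPy
demonstration (my own example, using an artificial shadows-only
gradient rather than a real scan): stretch the same dark ramp to full
scale in 8-bit and in 16-bit and count how many distinct tones
survive.

import numpy as np

ramp = np.linspace(0.0, 0.25, 1024)            # dark gradient
eight = (ramp * 255).astype(np.uint8)
sixteen = (ramp * 65535).astype(np.uint16)

# "Levels" stretch: map the 0..0.25 range up to full scale (x4).
out8 = np.clip(eight.astype(float) * 4, 0, 255).astype(np.uint8)
out16 = np.clip(sixteen.astype(float) * 4, 0, 65535).astype(np.uint16)

print(len(np.unique(out8)))   # 64 tones left -> visible banding
print(len(np.unique(out16)))  # 1024 tones left -> smooth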

Note that not all scanner devices will scan in 16-bit mode. Many have
12-bit or 14-bit resolution. However, the resulting scans are usually
stored with 16-bit (48-bit RGB) data, unless you save them as JPGs, where
they would be downconverted to 8-bits (in a 24-bit file). This is
because the JPG file format does not support 48-bit data, but I have
heard that the newer JPEG 2000 format does.
 

Peter D

Thank you all for the response. I should have added that "size doesn't
matter" (<g>). With DVD burners and quality media so cheap, it really isn't
an issue.

I've decided to scan all slides at 2400ppi (max for my scanner) and 24-bit
color. Testing with 48-bit color shows no appreciable gain to me and bogs
down the machine in an unacceptable way.

Is there any benefit to also scaling up the scan, say to 400%? What if I
want to print a poster one day?

For picture scanning, the responses focused on the printing results at 1:1,
but what if I want to print a poster one day? What ppi/dpi should I scan at
to get the max benefit before the increased resolution yields no better
results? I'm not looking for the least I can get away with for decent
results, but rather the most I can scan at before the increased quality
yields no appreciable benefit.

For example, for archiving I'd scan at 2400 ppi/dpi. But if I found out that
(for example) the grain of 400 asa negatives becomes visible and pronounced
at 600ppi, I wouldn't. I'd scan 400 ASA negatives below that level. If printed
photos produce unwanted noise and artifacts at above 400ppi, I'd scan them
below that res. And so on.

Another example. I read that above 2000 dpi/ppi a slide scanner produces
negligible finer detail and at 4800 the scan will contain unwanted
noise/grain. From that I concluded that 4800 was a bad choice for slides, and
that going much beyond 2000 was pointless. But I feel that a little bit of
negligible benefit can't hurt so I settled on 2400.

Finally, what format is best for storing first-scan archival results?
Size doesn't matter. Having the best possible source file for later editing
and conversion does.
 

Glen S

Witold said:
The above probably depends on the quality of the scanner that is used.
Certainly a dedicated film scanner at 3200 dpi will pull out a lot of
data from the original negative. It will likely easily surpass even a
high quality flatbed scanner "rated" at 4800 dpi.

If you have a lot of slides/negatives to scan, then higher resolutions
will lead to big files that need to be stored somewhere. Note that a 3200
dpi scan of a 35mm film frame will produce a resolution that is
spatially equivalent to that of a 13.7 MP digital camera. A 4800 dpi scan
is spatially equivalent to 30 MP, while 2400 dpi would be equivalent to
7.7 MP. Note that a scanner samples each RGB channel individually, while
a digital camera samples only one of the three RGB channels at each
photosite, producing the missing colors by interpolation.
snip

If scanning my negatives at 3200 dpi is equivalent to 13.7 MP digital, is
this why there seems to be so much noise in my 35mm negative scans? I
have an LS-2000 Coolscan and I notice that unless the picture (negative)
being scanned is a quite bright outdoor shot, the noise is very
noticeable compared to a print scan of the same picture, and (usually)
nowhere near the clarity of even a 2.5 MP pic from my Kodak point &
shoot. Although the details are much clearer from a neg scan than a
flatbed scan, the picture itself still often has that very grainy look.
 

Don

I've decided to scan all slides at 2400ppi (max for my scanner) and 24-bit
color. Testing with 48-bit color shows no appreciable gain to me and bogs
down the machine in an unacceptable way.

That's because your monitor only shows you 24-bits and your eyes only
see ~24-bits. You just can't look at a 48-bit image using a 24-bit
monitor or 24-bit eyes.
Is there any benefit to also scaling up the scan, say to 400%? What if I
want to print a poster one day?

Nope. You can always scale later. Scaling is "pretend". It does not
create new image detail, it just "invents" in-between pixels to make
the file... I mean... image size ;o) bigger.
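
If you want to convince yourself, here is a throwaway Pillow sketch
("scan.tif" is just a placeholder filename): blow a scan up 400% and
bring it back down, and you get essentially the image you started
with; only the file in between was bigger.

from PIL import Image

img = Image.open("scan.tif")
big = img.resize((img.width * 4, img.height * 4), Image.BICUBIC)
back = big.resize(img.size, Image.BICUBIC)
back.save("roundtrip.tif")  # compare to the original: no new detail
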
Another example. I read that above 2000 dpi/ppi a slide scanner produces
negligible finer detail and at 4800 the scan will contain unwanted
noise/grain. From that I concluded that 4800 was a bad choice for slides, and
that going much beyond 2000 was pointless. But I feel that a little bit of
negligible benefit can't hurt so I settled on 2400.

That's backwards. It's like saying when I spray Windex on my TV and
clean it I can see all the lines. Nah, it's better to leave the glass
dirty and fuzzy, because then I don't see any lines. ;o)

If what you want to do is archive you should always scan at maximum
optical resolution and bit depth.
Finally, what format is best for storing first-scan archival results?
Size doesn't matter. Having the best possible source file for later editing
and conversion does.

A lossless format. The consensus seems to be TIFF because it's layout
is known and there are bound to be viewers for quite some time to
come.

Don.
 

Don

The only real controversial issue is whether to scan at color depths of
greater than 8 bits. For scans of color prints the answer is a clear no
because the print itself does not likely reproduce the whole 8 bit gamut. ....
So unless you have a specific need to use 12 or 16 bit gamuts and understand
how to do so (because if you understand the relevant issues there are good
reasons to go up to 16 bit color) there is not a good rationale for most
users to use more than 8 bit color.

That's correct but just to be a bit more specific, if one goes
straight from scanner to output (screen or print) 16-bits may not
bring much, if anything.

The main reason for scanning at 16-bit depth is for editing. So if
after scanning the image is edited then 16-bit is essential,
especially for marginal images. Editing a difficult image in 8-bit
will quickly show banding. Editing the same image in 16-bit will give
much more elbow room.

Don.
 

Don

A lossless format. The consensus seems to be TIFF because it's layout
is known and there are bound to be viewers for quite some time to
come.

Oh, no! I finally did it!

fx: hangs head in shame...

That should read:

"... because *its* layout ..."

Maybe it was the spell checker? Yeah, that's it! ;o)

Don.
 

Peter D

Don said:
That's because your monitor only shows you 24-bits and your eyes only
see ~24-bits. You just can't look at a 48-bit image using a 24-bit
monitor or 24-bit eyes.

That's all very nice, but why should it matter to me? :)

My monitor can display 32-bit. My eyes? I'm not sure they can even see
24-bit (getting old). How about other eyes? Can they tell the difference
between 24-bit and 48-bit? If they can't, why use 48-bit?

While I want to get the "best" scan from my slides, there has to be a
certain point at which the benefit decreases to the point where it is...
well... pointless. I'm talking about practical application rather than a
theoretical one -- re "best" that is.
That's backwards. It's like saying when I spray Windex on my TV and
clean it I can see all the lines. Nah, it's better leave the glass
dirty and fuzzy, because then I don't see any lines. ;o)

So, is it true that a 4800ppi scan produces unwanted noise/grain? And if it
produces grain/noise that may be wanted, what purpose would it be wanted
for? IOW, is there any _real_ value to scanning at 4800ppi? If there is,
tell me and I'll go out and buy a new scanner. :)
A lossless format. The consensus seems to be TIFF because it's layout
is known and there are bound to be viewers for quite some time to
come.

Thanks. Some people here talk of TIFF Compressed as being "lossless". Is it?
IOW, if I used that variation of TIFF, would I gain the advantage of smaller
file size without any loss in quality? How about PNG? Is that a lossless
compression? How does it compare to TIFF?
 

Elwood Dowd

My monitor can display 32-bit. My eyes? I'm not sure they can even see
24-bit (getting old). How about other eyes? Can they tell the difference
between 24-bit and 48-bit? If they can't, why use 48-bit?

To store the maximum amount of gradation between colors in the
photograph. That is the only reason. Your eyes may not be able to see
even 24 bits worth of color, but they are designed to recognize
contrast---i.e. gradation between light and dark. This translates to
colors as well.

Realistically, 48 bits are interesting from the point of view of
editing. When you edit a photograph you throw out tons of information
with each move of a slider, in any direction. Having 48 bits for each
pixel means that some pixels that would wash out---degrade to other
colors---during editing might be preserved as slightly different
gradations, which also preserves some of the original contrast of the image.

Neither your printer nor your monitor will actually give you 48 bits,
either. However, your editor can use that information to give you a
better end product.

That being said, I scan and store well above 99% of my images in 24-bit
color, because the source is simply not worth the extra work. Family
photos, my own composition and exposure practice, even genealogical
photos, are all poor enough in quality that taking the compute power to
scan, store, and edit in 48 bits isn't worth the trouble to me.
Besides, most of that stuff I won't even edit (or print, for that matter).

Thanks. Some people here talk of TIFF Compressed as being "lossless". Is it?

Compressed with LZW, yes, it is lossless. It is also possible to
compress TIF with JPG algorithms, which is lossy. LZW will give you
between 35% and 70% compression with no loss of data.
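
If you'd rather have proof than my word for it, here is a round-trip
check with Pillow ("scan.tif" is a placeholder for one of your own
files):

import numpy as np
from PIL import Image

original = Image.open("scan.tif")
original.save("scan_lzw.tif", compression="tiff_lzw")  # LZW TIFF

reread = Image.open("scan_lzw.tif")
print(np.array_equal(np.array(original), np.array(reread)))  # True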

PNG is a good idea that never took off. 10 years from now, Photoshop
will still read TIF, but I bet it won't read PNG.

My two cents...
 

Preston Earle

In replying to a question about bit depth, "bmoag" wrote: "File sizes
scale up dramatically when going from 8 to 16 bits, and only Photoshop
CS can work with 16-bit files using most of the program's tools.

People who advocate using 12 or 16 bit color are correct that the gamut
is wider, but the gamut is wider than what most people can see, can
reproduce on their monitor and far exceeds what a printer with its 8
bit gamut can reproduce."
--------------------

Uncompressed file sizes for 16-bit files are double the size of 8-bit
files. Compressed 16-bit files can be many times larger than compressed
8-bit files.

I don't believe 16-bit files have a larger color gamut than 8-bit files.
The gamut is controlled by the color space definitions (white value,
black value, primary color values). 0 in an 8-bit file will equal 0 in a
16-bit file, and 255 in an 8-bit file will equal 65,535 in a 16-bit file.
There will be more color gradations in a 16-bit file, but there won't be
any larger gamut.

Whether there is any benefit to scanning at 16-bit depth is sort of a
matter of religious belief: if a person believes it is better, no amount
of reason, logic, or experiment will convince them otherwise. The
weakness of 8-bit files is that they theoretically can show banding
after severe color moves that will be smooth in a 16-bit file. In
practice, only the "religious" have ever seen (or claim to have seen)
the difference. The "non-believers" admit that there may be an image
somewhere that will show some difference, but no one has ever produced
one. If a person doesn't mind handling and processing files twice a
large, and archiving files many times as large, he might use 16-bit. He
may sleep better, but he will never see any difference in his final
images.

Preston Earle
 

Peter D


Compressed with LZW, yes, it is lossless. It is also possible to
compress TIF with JPG algorithms, which is lossy. LZW will give you
between 35% and 70% compression with no loss of data.

PNG is a good idea that never took off. 10 years from now, Photoshop
will still read TIF, but I bet it won't read PNG.

As long as I have a proggy that can read PNG and write TIFF, it wouldn't
really matter. If I promise to always keep such a program around, is there
any advantage to PNG over TIFF, and is PNG "lossy"? :)
 

Dances With Crows

Firefox, Mozilla, Safari, Opera, and even Internet Exploder can render
PNGs properly so long as they don't contain transparency. (Exploder
*still* has problems with transparent pixels in PNG.) Where is this
"never took off" business from?

Gimp and ImageMagick, OTOH, have supported PNG for the last ~4 years.
Are Adobe on crack? The reference library for PNG is available under a
BSD-like license for $0.00. The libpng FAQ at
http://www.libpng.org/pub/png/pngfaq.html says, "Photoshop has
traditionally been the poster child for poor PNG implementations, but in
fairness, recent releases have also included ImageReady, an optimizer
that does a better job." The FAQ looks a bit out-of-date though.

Or do they think they can ignore PNG and hope it'll go away? Won't
happen; file formats that are open standards will persist as long as
there are programmers and users who find those file formats useful.
As long as I have a proggy that can read PNG and write TIFF, it
wouldn't really matter. If I promise to always keep such a program
around, is there any advantage to PNG over TIFF, and is PNG "lossy"?

PNG is lossless. PNG will display in all sane Web browsers, while TIFF
won't, which may be an advantage in some cases. Some TIFF compression
methods on some images result in much smaller files than PNG--a
black-and-white image in TIFF G4 is about 1/5 the size of the same image
in PNG. TIFF has some features (image info in standard tags, multipage
support) which PNG lacks.

Whatever; there are and will probably always be Free programs like
ImageMagick that can do TIFF<->PNG without much effort. If you want to
put losslessly-compressed junk on the Web, PNG is the way to go. If you
can live with lossy compression, JPEG is the way to go. HTH,
 

Elwood Dowd

Firefox, Mozilla, Safari, Opera, and even Internet Exploder can render
PNGs properly so long as they don't contain transparency. (Exploder
*still* has problems with transparent pixels in PNG.) Where is this
"never took off" business from?

Er... Netscape 4.5 was able to read PNG before Mozilla even existed.
However, that is immaterial. Try submitting one to a prepress agency.
http://www.libpng.org/pub/png/pngfaq.html says, "Photoshop has
traditionally been the poster child for poor PNG implementations, but in

Hahah, that is definitely true. They are also the poster child for
terrible implementations of other well-used formats, like Premiere's
implementation of MPEG2, and FrameMaker's sorry excuse for an HTML
converter.... they are the Microsoft of MarCom.
(Feel free to quote me on that.)

Believe me, I am not an Adobephile, but my work requires me to use their
products on a daily basis, and so do about 95,000 other people,
including all of the duplicators and prepress agencies I work with, who
have been using TIF since well before the days of Ventura Publisher and
who will continue using it until we are all sucking our nutrition from
straws. I don't have to like the situation, nor do I, but there it is.
You wouldn't believe how long it took the publication industry in
general to accept PDF files over raw PostScript.
Whatever; there are and will probably always be Free programs like
ImageMagick that can do TIFF<->PNG without much effort.

It's the "will probably always be" part that worries me. Technology
comes and goes.

PNG has been around for a long time---I think the original spec was
done in the mid-90s. It is a very well-designed
format, lossless, and I think everyone should be using it. However, not
everyone listens to me. Personally I will continue to keep my image
archives in TIF, because I am old enough to have run across document
archives I could not find software to read, and frankly I'm not that old.
 

Kennedy McEwen

Peter D said:
So, is it true that a 4800ppi scan produces unwanted noise/grain? And
if it produces grain/noise that may be wanted, what purpose would it be
wanted for? IOW, is there any _real_ value to scanning at 4800ppi? If
there is, tell me and I'll go out and buy a new scanner. :)
Of course it is true that scanning at higher resolution picks up more
rubbish, whether noise, grain or dirt and scratches is irrelevant, than
scanning at lower resolutions. However, that is not all that it picks
up - it does get more real information off of the slide or negative.

Back when the highest resolution from Nikon was a mere 2700ppi I was
using their then top of the range LS-2000 scanner but I also had my own
colour printing facility running as well. I was frequently disappointed
that when I created large prints from scans the grain was much more
visible than when I made equivalent sized prints from the same negative
or slide in the darkroom. And it wasn't just me - most people, when
shown side by side prints and asked which had more grain picked the
print from the scan.

The scanning process was clearly exaggerating the film grain, and it
didn't take long to work out what was probably happening: 2700ppi
*might* have been enough to resolve all of the image information but it
just wasn't enough to resolve the film grain or give a clear margin
between that and the image content. Consequently the grain was aliased
by the scanner and exaggerated in size to the same scale or larger than
the finest detail of the image.

If you are having problems understanding this, and even if you are not,
then there is a fairly good introduction to grain aliasing at:
http://www.photoscientia.co.uk/Grain.htm
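
For the numerically inclined, here is a toy one-dimensional sketch of
the effect in Python/NumPy (my own illustration, not from that page):
"grain" at 80 cycles/mm, sampled at 2700 ppi (about 106 samples/mm,
so a Nyquist limit near 53 cycles/mm), folds down to a much coarser
pattern around 26 cycles/mm - the grain comes out *bigger*.

import numpy as np

grain_freq = 80.0                 # cycles/mm, beyond Nyquist
samples_per_mm = 2700 / 25.4      # the scanner's sampling rate

x = np.arange(0, 1, 1 / samples_per_mm)        # 1 mm of "film"
sampled = np.sin(2 * np.pi * grain_freq * x)   # what the CCD records

spectrum = np.abs(np.fft.rfft(sampled))
freqs = np.fft.rfftfreq(len(sampled), d=1 / samples_per_mm)
print(freqs[spectrum.argmax()])   # ~26 cycles/mm, not 80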

Once grain aliasing happens, no amount of post processing can remove it
or clean the image up without losing some of that fine image detail.
There are plenty of packages around, such as GEM and NeatImage, that
claim to achieve this particular physical impossibility - and perhaps
millions of users of those packages who claim that they are successful -
but when tested rigorously under situations where the grain is about the
same size as or larger than the finest image details they *all* fail to
achieve their claims. Every one of them results in plasticised skin
tones and other textures, or significant residual grain which is still
visibly worse than that in a conventional photographic print.

The only solution to this particular problem is to prevent the aliasing
from occurring in the first place. Possible ways to do this are
defocusing, blurring or diffusing the image BEFORE it is scanned - but
all of these lose information and if your scanner has just enough
resolution to capture all of the image detail then all of these
techniques will also lose image content, producing prints which are
visibly softer than their chemical comparisons - even if the grain is
reduced.

That leaves scanning at a higher resolution as the only viable solution
and when Nikon introduced their 4000ppi LS-4000 scanner I waited a
respectable time for major design problems to be reported by others and
then, when content that they did not exist, upgraded to the higher
resolution device.

Several things were immediately clear from comparing printed output from
the LS-4000 with earlier chemical and LS-2000 scanned prints.

First, there was MUCH less grain in the LS-4000 scan than in the LS-2000
scan, when prints were compared side by side. Not surprising at all
since 2x the pixel density should significantly reduce grain aliasing -
and it certainly did.

Secondly, there was significantly more image detail in LS-4000 scan than
there was in the LS-2000 one - which completely debunked any arguments
that had been made, and which you appear to be repeating, that more than
2400ppi adds no more image information. Film certainly contains more
information than a 2400ppi scan can pull from it. Furthermore, others
have shown that the 5400ppi resolution of the Minolta SE can pull more
image detail off the film than a 4000ppi scan if it exists in the first
place. Perhaps a few of my images taken on a tripod with high shutter
speed and high resolution film justify this, but comparison scans of
my typical images haven't shown it on the occasions I have tried the
Minolta, so it isn't something I am rushing to upgrade.

Finally though, it was clear that the prints made from the LS-4000 scan
actually had more image detail than the chemical prints. In short, they
were simply better, and I put this down to the limitations of my
darkroom equipment. The result was that I closed down the darkroom,
moved to the semi-digital film only process, got better results with
less mess and expense and freed up an extra room in my house! So
scanning at 4000ppi over 2700ppi was win, win and win again.
 

Bart van der Wolf

SNIP
Whether there is any benefit to scanning at 16-bit depth
is sort of a matter of religious belief: if a person believes
it is better, no amount of reason, logic, or experiment will
convince them otherwise. The weakness of 8-bit files is
that they theoretically can show banding after severe color
moves that will be smooth in a 16-bit file. In practice, only
the "religious" have ever seen (or claim to have seen) the
difference.

Have a look at this, and from now on consider yourself religious:
http://www.xs4all.nl/~bvdwolf/main/downloads/SatDish.jpg
It is just the result of a gamma adjustment from linear to average PC
monitor gamma in 8-bit/channel versus 16-b/ch mode with Photoshop. In
case one doesn't (want to) see it, look at the left side of the
satellite dish and to the shadows. The original linear gamma file was
from a 12-b/ch P&S digicam.
The "non-believers" admit that there may be an image
somewhere that will show some difference, but no one
has ever produced one.

They should look at the image above, with open eyes/mind.
If a person doesn't mind handling and processing files
twice as large, and archiving files many times as large,
he might use 16-bit. He may sleep better, but he will
never see any difference in his final images.

It is okay to save the final result in 8-b/ch mode, even as high
quality JPEG, as long as the result is final, no further
post-processing required.

Postprocessing 8-b/ch files will accumulate 8-bit round-off errors
with each operation. One can better change the mode to 16-b/ch for
multiple processing steps, and back to 8-b/ch for saving the result.
That will only be slightly less accurate than starting with a 16-b/ch
file because of the initial quantization error of half a bit.
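
A small Python/NumPy sketch of that accumulation (my own toy numbers,
not Bart's file): run a do-nothing editing chain (darken 20%, gamma
encode, gamma decode, brighten back), re-quantizing to 8 bits after
every step, versus doing the work at 16 bits and converting once at
the end.

import numpy as np

def edit_chain(x, depth_max):
    q = lambda v: np.round(np.clip(v, 0, depth_max))  # re-quantize
    x = q(x * 0.8)                                    # darken
    x = q((x / depth_max) ** (1 / 2.2) * depth_max)   # gamma encode
    x = q((x / depth_max) ** 2.2 * depth_max)         # gamma decode
    x = q(x / 0.8)                                    # brighten back
    return x

levels = np.arange(256.0)
all_8bit = edit_chain(levels, 255)
via_16bit = np.round(edit_chain(levels * 257, 65535) / 257)

# Fewer distinct tones survive the all-8-bit path:
print(len(np.unique(all_8bit)), len(np.unique(via_16bit)))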

Bart
 

Dances With Crows

Dances said:
Firefox, Mozilla, Safari, Opera, and even Internet Exploder can
render PNGs properly so long as they don't contain transparency.
Where is this "never took off" business from?
Er... Netscape 4.5 was able to read PNG before Mozilla even existed.
However, that is immaterial. Try submitting [a PNG] to a prepress
agency.

The company I work for *is* a prepress agency. It's Saturday now, so I
can't just walk 100 feet down the hall and ask the prepress folks what
they'd do with a PNG. But, if a PNG showed up at work in materials from
a prepress client, either the prepress people would deal with it, or
they'd grab me, ask me how to deal with it, and I'd use something to
convert it into whatever format they found easier to deal with.
YPrepressAgencyMV. Also, "PNGs are not common in prepress" != "PNGs are
not used anywhere". "PNG never took off", as someone said upthread,
strongly implies "no one is using PNG for anything".
Believe me, I am not an Adobephile, but my work requires me to use
their products on a daily basis, and so do about 95,000 other people,
You wouldn't believe how long it took the publication industry in
general to accept PDF files over raw PostScript.

In a world where FORTRAN and sendmail.cf still exist, I can believe
anything related to format stupidity.
It's the "will probably always be" part that worries me.

0. The TIFF specification is Free and so is its reference
implementation in C.
1. The PNG specification is Free and so is its reference
implementation in C.
2. There is at least one Free C compiler that runs on all Unices, Win32,
and Mac OS X.
3. Given points 0-2, all it takes to write PNG<->TIFF conversion
software is a semi-skilled C programmer and some time. There is
still a plentiful supply of C programmers and time.
Technology comes and goes.

Yes, but check points 0 and 1 above. Anybody who wants to can read the
specifications for TIFF and PNG. That means anybody who wants to can
create a TIFF or PNG implementation. They could even create an
implementation if all they had was an assembler or an opcode list,
although that'd take a lot longer than using a regular language. That
freedom allows people to use TIFF and/or PNG as long as they find it
useful.
PNG has been around for a long time---I think
the original spec was done in the mid-90s.

That wouldn't surprise me.
Personally I will continue to keep my image archives in TIF, because I
am old enough to have run across document archives I could not find
software to read, and frankly I'm not that old.

PNG isn't going away anytime soon. TIFF's been around longer, sure, but
PNG has found a niche (lossless images with > 256 colors displayed on
the WWW) and will be difficult to eradicate. When someone invents an
image format that's better than PNG for @purposes, a "png2betterformat"
utility will show up, just like "gif2png" showed up when Unisys made
noises about charging people money to use GIF.
 

Kennedy McEwen

Bart van der Wolf said:
SNIP

Have a look at this, and from now on consider yourself religious:
http://www.xs4all.nl/~bvdwolf/main/downloads/SatDish.jpg
It is just the result of a gamma adjustment from linear to average PC
monitor gamma in 8-bit/channel versus 16-b/ch mode with Photoshop. In
case one doesn't (want to) see it, look at the left side of the
satellite dish and to the shadows. The original linear gamma file was
from a 12-b/ch P&S digicam.

Danger Will Robinson, danger!

This is not what is being compared or discussed in this thread, although
it does raise a significant point that should not be ignored.

When storing images with 8-bit resolution it is imperative that this is
done in a gamma space comparable to the inverse of the perceptual
response of the eye. The entire argument that 8-bits is more than the
eye can discern *requires* that the levels represented by those 8 bits
be fairly equally distributed throughout the perceptual response range
of the eye - which is far from linear.

In your example Bart, the original image is scanned in linear space and
the decimation to 8-bits is also performed in linear space. Consequently
the levels in both images are more finely spaced in the highlights than
the eye can distinguish whilst there are inadequate levels dedicated to
the shadows and mid-tones. When converted to the correct gamma range
for viewing, this distinction becomes more obvious in the 8-bit image
than it is in the 12-bit original - although it is still present and
visible even in that, hence the need for more than 12-bit linear coding
of the CCD output in scanners.

This example certainly does not demonstrate that more than 8-bits are
required to save all of the useful data in an image that can be
perceived by the eye, but it does demonstrate that those 8-bits must be
correctly used. If you scan in linear mode (as some in this forum
recommend) then it is imperative that all of the data is retained and
stored. If you scan in a gamma close to the inverse of the perceptual
response which, as Charles Poynton says, is coincidentally very close to
the gamma of CRTs, then 8-bit encoding and storage is significantly more
than the eye can discern or the video display and printer can reproduce.
In fact, as certain contributors to this forum and this thread have
discovered by convoluted scan methodologies, 8-bit gamma encoding
requires more than 17-bit linear source material to capture all of the
information that can be perceived. ;-)

In practice, all CCD based scanners operate in linear space at the start
of the process, and all of the data depth is used when producing an
8-bit gamma coded output. Your example, whilst a valid demonstration
that linear coding is inferior to gamma coding, does not follow the same
process.
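
The arithmetic behind that is easy to demonstrate with a short Python
sketch (my own figures): count how many of the 256 available codes
describe the deep shadows under linear coding versus gamma-2.2
coding.

import numpy as np

codes = np.arange(256) / 255.0      # the 256 available code values

linear_luminance = codes            # linear coding
gamma_luminance = codes ** 2.2      # gamma-2.2 coding

# Codes devoted to luminances below 1% of full scale:
print((linear_luminance < 0.01).sum())  # 3 codes
print((gamma_luminance < 0.01).sum())   # 32 codes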

They should look at the image above, with open eyes/mind.

Since it starts from a premise which fails to meet the non-believer's
initial requirements (8-bit linear space rather than 8-bit gamma space)
it will certainly fall far short of convincing more than the most
gullible amongst them and I suspect that, by definition, there are very
few of them available to convince. ;-)

When you produce a gamma encoded image (inverse gamma 2 - 2.5
acceptable) that shows perceptible difference between 8-bit and ANY
higher bit depth then I suspect that you might convince them. AIUI,
Dan's long standing challenge on that particular issue is still open
after running for many years.
 

Don

That should read:

"... because *its* layout ..."

Maybe it was the spell checker? Yeah, that's it! ;o)

I forgot to mention yesterday:

"Spelling chucker fer sail. Wurx grate!"

Don ;o)
 

Don

That's all very nice, but why should it matter to me? :)

Oh well, if it doesn't matter... ;o)
My monitor can display 32-bit. My eyes? I'm not sure they can even see
24-bit (getting old). How about other eyes? Can they tell the difference
between 24-bit and 48-bit? If they can't, why use 48-bit?

It's generally accepted that eyes can't see more than 24-bits.
However, some people claim that given a single color then 256 shades
may not be enough (24-bit color = 256 shades of red * 256 shades of
green * 256 shades of blue).

But I bet that 5 out of 4 ;o) people can't tell the difference when
viewing a plain vanilla color image.

Seriously though, try this test: Take your 24-bit image and reduce the
number of colors to 256 (indexed color). Then view this 256-color
image and a 24-bit image side by side. Odds are that for most images
you'll find you can't really see much (if any?) difference under
normal viewing conditions.
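
If you want to script that test, here is a quick Pillow sketch
("photo.jpg" is a placeholder for one of your own images):

from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
indexed = img.convert("P", palette=Image.ADAPTIVE, colors=256)

pair = Image.new("RGB", (img.width * 2, img.height))
pair.paste(img, (0, 0))
pair.paste(indexed.convert("RGB"), (img.width, 0))
pair.show()  # original left, 256-color version right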

The reason for scanning 48-bit is for editing (I explained that
further down in the original message). And since you want to archive
images for possible later editing, using 48-bit would be essential.
While I want to get the "best" scan from my slides, there has to be a
certain point at which the benefit decreases to the point where it is...
well... pointless. I'm talking about practical application rather than a
theoretical one -- re "best" that is.

Yes, there is definitely a point of diminishing returns. Where that
point is, depends on each person's requirements.

However, making this decision based on viewing a 48-bit image with
24-bit monitor/eyes, doesn't make sense.
So, is it true that a 4800ppi scan produces unwanted noise/grain? And if it
produces grain/noise that may be wanted, what purpose would it be wanted
for? IOW, is there any _real_ value to scanning at 4800ppi? If there is,
tell me and I'll go out and buy a new scanner. :)

Go and buy a new scanner... ;o)

Seriously though, you're looking at it the wrong way. 4800 does not
"produce" grain, it only shows you what's there. 2400 is too "blind"
to see it. So, 4800 not only sees grain but it also sees lots of
extra image detail which 2400 can't see.

Now, if you don't care for that then that's fine and you should scan
at 2400. But if you insist on getting the highest quality, as you do,
then grain is the necessary evil/byproduct of higher resolution.
Thanks. Some people here talk of TIFF Compressed as being "lossless". Is it?
IOW, if I used that variation of TIFF, would I gain the advantage of smaller
file size without any loss in quality? How about PNG? Is that a lossless
compression? How does it compare to TIFF?

Yes, compressed TIFF is lossless. It's like zipping any file.
Compressed TIFF just does that automatically in one step.

However, there are some viewers which may have difficulty with
compressed TIFFs so it's probably safer to use regular TIFFs and
compress them afterwards with a compression program of your choice.

Don.
 