Minolta 5400 or Coolscan 5000

Don

NOTE: I skipped the rest because we're just running around in circles
and chasing our tails... In the same vein, consider this a response to
other messages in this sub-thread.

I have already answered that, Don, with several examples as I recall. The
most specific was that the first thing that happens to your raw image
upon entering Photoshop is that it is reduced from a 16-bit range to
15 bits. Now, that might not matter much to you with a scanner which
only produces 14 bits and has no capability to multiscan, although I
notice that you have already encountered the 15-bit limit in Photoshop
in your attempts to implement manual multiscan within it. However, I
remind you of the subject of this thread - Minolta 5400 or Coolscan
5000. These are *both* full 16-bit enabled scanners, and your workflow
will immediately result in a performance loss compared with implementing
the same set of functions in the scanner software.

A general comment first. It is quite clear from Jerry's (justified)
laments about the turns this thread has taken that the current
subthread is not really related to the subject line anymore.

But back to the subject matter (of this sub-thread). You did not
really give specific scenarios in the past but just made generic
statements like "Analog Gain" or "bit depth".

The comment above is more specific and comes closer to what I had in
mind. However, it focuses once again on the very specific case of
Photoshop, which is not under discussion. I'm sure you must see that. So,
to get things re-calibrated and consolidated, let's review the summary
and take it from there.

The whole thing can be distilled into these three lines:
Exactly the opposite is true.

Your objection boils down to my use of the word "exactly", correct?
You hold that although the statement is justified in general terms,
there is a minor case where this does not hold. Right?

Now, what I'm asking for is a detailed scenario where this exception
is demonstrated so we focus on that instead of constantly getting
sidetracked and digressing into sub-sub-threads.

Don.
 
Don

In which case you are using the wrong tool to do the job! Pixel noise
is a function of correct exposure in the scanner, and this can readily
be determined from the preview. You might like to view the results at
300-400% for the purpose of examining the noise, but not for controlling
it.

Yes, but how do you control it unless you can see it and identify it?
And to do that I, for one, need 300-400% magnification.

Indeed, I'm currently trying to determine the cutoff point empirically
(it appears to be around 32), but this is difficult because of the way
Threshold works in Photoshop (based on luminance rather than composite
RGB). Since all this is not as easy as it seems, I'm currently trying
alternatives such as Desaturate and the Channel Mixer. But that's
enough digression. I hope it illustrates what I'm talking about.
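To make the Threshold difference concrete, here is a rough Python sketch. The Rec. 601 luminance weights are what Photoshop's grayscale conversion is commonly said to use, and the cutoff of 32 is just the working value mentioned above - treat both as assumptions:

```python
# Sketch: a luminance-based threshold vs a composite (per-channel) cutoff.
# Rec. 601 weights and the cutoff of 32 are assumptions for illustration.
def luminance(r, g, b):
    # Rec. 601 luma weights
    return 0.299 * r + 0.587 * g + 0.114 * b

def below_cutoff_luminance(rgb, cutoff=32):
    return luminance(*rgb) < cutoff

def below_cutoff_composite(rgb, cutoff=32):
    # "composite RGB": every channel must be below the cutoff
    return max(rgb) < cutoff

noisy_blue = (10, 10, 120)                  # dark pixel with a noisy blue channel
print(below_cutoff_luminance(noisy_blue))   # True  - luminance ~ 22.5
print(below_cutoff_composite(noisy_blue))   # False - blue channel is 120
```

A luminance cutoff passes the dark-but-noisy pixel while a per-channel cutoff catches it, which is why the choice of conversion matters when hunting noise.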
So, the question still stands, please explain how and why the preview is
inadequate for controlling the scanner operation.

I have, and you are now redefining the question. Exposure is just one
aspect and I have recently wrestled with one specific slide for days.
The nominal scan (0% clipping) produced a histogram where 0-127
contained a nice, juicy histogram "mountain" while 128-255 was
basically a flat, 1-pixel-high line.

On the face of it, that line seemed like insignificant highlights
which could be safely clipped. Indeed, the pixel count was so low that
clipping a mere 0.3% and rescanning removed about 66% of these
highlights.

However, upon examining the image in Photoshop at full resolution
these highlights, although nominally a very small portion of the
image, were essential and I didn't want to lose them.

In short, relying on Preview alone would have lost me valuable image
data.
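The arithmetic behind that 0.3%-versus-66% result is easy to reproduce. This sketch uses made-up numbers chosen only to mirror the shape of the histogram described (a mountain in 0-127, a flat 1-pixel-high line in 128-255):

```python
# Hedged sketch with assumed numbers mirroring the scenario above.
total_pixels = 4_000_000
highlight_fraction = 0.0045          # highlights hold 0.45% of the image (assumed)
highlight_pixels = total_pixels * highlight_fraction

clip_fraction = 0.003                # a "mere 0.3%" white-point clip
clipped_pixels = total_pixels * clip_fraction

# The clip eats from the top of the histogram, i.e. out of the highlights first.
lost = min(clipped_pixels, highlight_pixels) / highlight_pixels
print(f"{lost:.0%} of the highlight pixels clipped")   # ~67%
```

If the highlights hold only about 0.45% of the pixels, a 0.3% white-point clip wipes out roughly two-thirds of them - tiny by pixel count, but potentially essential image content.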
In the article which you responded to, or more precisely, quoted in the
article you responded to!

Yes, by you, not me! If and when I mention Photoshop it's a shortcut
for "image editing" which is quite clear because I usually append "or
any image editing software of choice".

Anyway, instead of running around in circles, let's focus on the other
message where, hopefully, all these sub-threads are consolidated.

Don.
 
DavidTT

Toby, you are absolutely correct that a scanner's exposure plays a role
in getting a raw scan. But unlike cameras, the desktop scanners offer
little or no hardware control over the exposure. The Polaroid ss4000 has
no hardware exposure control. When I get a raw scan, the ss4000 uses its
fixed exposure. For those who use the user interface's Auto Exposure,
change a film type, etc., the scanner hardware will first scan with
the fixed exposure, then the scanner software will modify the scan's
data to "change" the exposure.

The Nikon scanner offers an analog gain control, which is a hardware
control for exposure. If I were using a Nikon, I would definitely use it
for a raw scan of an image that calls for it. (Don gave a much more
detailed explanation.) I would really like to hear from the others
whether the Minolta or Canon scanners also offer hardware exposure
controls.

In my posts, I try my best to separate a scanner's hardware from its
software. Once you understand what is doing what, you can decide what a
raw scan means. You will also come to the conclusion that a scanner's
hardware features for exposure and focus are no better than a drug
store's disposable camera with pre-loaded film, and far less than a
point-and-shoot. The scanner's software creates the illusion that you are
operating a Nikon F5.
 
Kennedy McEwen

Don said:
Yes, but how do you control it unless you can see it and identify it?
And to do that I, for one, need 300-400% magnification.
As explained in the text above, you don't need to see it to control it.
Noise is a direct function of the exposure used. It is intrinsic to
the scanner and can only change by changing the exposure time, i.e. the
analog gain. Too short an exposure and the signal-to-noise ratio
degrades, especially in the shadows where you are fighting excess read
noise over the photon noise. Too long an exposure and the image clips
in the highlights (both references assuming positive scans). Noise is
controlled by use of the histogram, and you can see the end result of
that more readily in a final scan that was adjusted using previewed
information than you can in a secondary scan based on a view of a first
full-resolution scan.
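The exposure-noise trade-off described above can be sketched with a toy model. Every number here (read noise, full well, shadow signal) is assumed, purely for illustration:

```python
import math

# Toy SNR model: photon shot noise = sqrt(signal electrons), plus a fixed
# read-noise floor. All numbers below are assumptions for illustration.
read_noise = 20.0       # electrons (assumed)
full_well = 60_000.0    # electrons (assumed)

def snr(signal_electrons):
    if signal_electrons >= full_well:
        return float("inf")   # clipped: noiseless, but the data is gone
    shot = math.sqrt(signal_electrons)
    return signal_electrons / math.hypot(shot, read_noise)

shadow = 400.0   # shadow signal at nominal exposure (assumed)
for gain in (0.5, 1.0, 2.0, 4.0):   # exposure / analog gain multiplier
    # shadow SNR rises with gain (roughly 8 -> 36 over this range)
    print(f"gain {gain}: shadow SNR = {snr(shadow * gain):.1f}")
```

Shadow SNR climbs with exposure because shot noise grows only as the square root of the signal while the read-noise floor stays fixed; the limit is the point where the highlights reach full well and clip.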
Indeed, I'm currently trying to determine the cutoff point empirically
(it appears to be around 32), but this is difficult because of the way
Threshold works in Photoshop (based on luminance rather than composite
RGB). Since all this is not as easy as it seems, I'm currently trying
alternatives such as Desaturate and the Channel Mixer. But that's
enough digression. I hope it illustrates what I'm talking about.
It admirably illustrates that you would be better off using the
histogram in NikonScan preview to do the job!
I have, and you are now redefining the question.

No, I am not redefining the question at all. I am addressing one aspect
that *you* raised as a dominant cause of deficiencies in the suggested
workflow. However, you have yet to prove that the histogram based on a
reasonable sized preview will produce the errors that you claim.
Exposure is just one
aspect and I have recently wrestled with one specific slide for days.
The nominal scan (0% clipping) produced a histogram where 0-127
contained a nice, juicy histogram "mountain" while 128-255 was
basically a flat, 1-pixel-high line.

On the face of it, that line seemed like insignificant highlights
which could be safely clipped. Indeed, the pixel count was so low that
clipping a mere 0.3% and rescanning removed about 66% of these
highlights.

However, upon examining the image in Photoshop at full resolution
these highlights, although nominally a very small portion of the
image, were essential and I didn't want to lose them.

In short, relying on Preview alone would have lost me valuable image
data.
No, relying on your misinterpretation of the histogram information would
have irretrievably lost the image data. Another example of the old
adage about workmen and tools!
Yes, by you, not me!

Where did you explain that you wished to change the issue from the
specific sentence that Toby quoted and responded to and which you
subsequently requoted in full?
If and when I mention Photoshop it's a shortcut
for "image editing"

More caveats!
which is quite clear because I usually append "or
any image editing software of choice".
As you have *clearly* not done above, for example!
Anyway, instead of running around in circles, let's focus on the other
message where, hopefully, all these sub-threads are consolidated.
I have done, and specified a test there which I hope you will replicate
for yourself and prove to yourself once and for all the error of your
ways. That error won't hurt too much with your current toolset, except
in the very problem you are facing at the moment with dense Kodachrome,
but it will hurt others who take your advice if they were prepared to
buy the better equipment that you baulked at.
 
Kennedy McEwen

Don said:
The whole thing can be distilled into these three lines:


Your objection boils down to my use of the word "exactly", correct?
You hold that although the statement is justified in general terms,
there is a minor case where this does not hold. Right?
Pretty much, which is why this entire episode began with my rebuttal
"Not exactly". You should also note that your objection specifically
referred to raw scans into Photoshop, so excusing Photoshop from the
comparison at this stage is shifting the goalposts - again!

Finally, as you are now encountering in your own work, it isn't such a
minor case after all.
Now, what I'm asking for is a detailed scenario where this exception
is demonstrated so we focus on that instead of constantly getting
sidetracked and digressing into sub-sub-threads.
As stated, a number of examples have previously been provided, but your
experience with manual multiscan demonstrates one issue which also
occurs, often to a greater degree, when implementing level shifts -
although you are unlikely to notice with only 14 bits of data to play
with in the first place.

However, to demonstrate that your specific problem is, in fact, likely
to be a manifestation of the limitations of your workflow resulting in
less visible errors, but errors just the same (I think that is the
phrase you used earlier, but frankly the post has now expired from my
server and I can't be bothered searching GoogleNews for it), try this
little test.

First, a little reminder: your concern is how the scanner software
manipulates the raw data from the scanner hardware. You believe that it
is always "better" to take that raw data into Photoshop for subsequent
processing, in accordance with the three lines you have quoted above,
because the scanner software always corrupts that data. I disagree,
and state that this is not always the case, that there are examples
where it is better to utilise some of the scanner software functions to
pre-process the data before passing it to Photoshop specifically. To
this previous statement I also add and *expand* my claim, being quite
clear that this is the first time I have done so in this thread, that
implementing those features in NikonScan is *never* any worse than
implementing those same functions in *any* image processing application.

Now, are you sitting comfortably? Then let us begin...

Once upon a time in a far, far distant land, populated by scanners,
16-bit analogue-to-digital convertors and charge-coupled devices,
someone created a *synthetic* 16bpc ramp image from 0 to, let's say, 255
with all levels equally populated (i.e. an integer multiple of 256 pixels
in that synthetic image). This could, if you implemented sufficient
multiscans, be the raw image a suitable scanner produced from a ramp
image on film. Now, if you open that image in either Photoshop or
NikonScan and view the standard 8-bit histogram, all of the data will
lie in pixel level 0. If it is significantly different, check that you
have colour management off and gamma set to 1.00 - we don't want either
piece of software corrupting our test image now, do we?
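A sketch of that synthetic image, assuming the 16bpc data really does occupy only levels 0-255 (i.e. the lower 8 bits):

```python
# Synthetic 16bpc ramp: values 0..255, each level equally populated.
# Viewed through an 8-bit histogram (v >> 8), everything lands in bin 0.
width = 256 * 16                        # an integer multiple of 256 pixels
ramp = [i % 256 for i in range(width)]  # 16-bit words holding levels 0..255

assert all((v >> 8) == 0 for v in ramp)
print("all", width, "pixels fall in 8-bit histogram bin 0")
```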

Having got this pristine very dense test image, create a copy of it. Now
open one of these test images in NikonScan4. Go to the Curves window in
the tools palette and select a peak white of 3 and a gamma (mid grey
pointer) of 3.00. Save the NikonScan processed image.

Now start Photoshop (I tried this in PS7.1, but I believe any version of
PS that handles 16bpc images will do the same). Open the copy of your
pristine test image making sure that you don't select any colour
management, go into Levels and, again, apply a peak white of 3 and a
gamma of 3.00, just the same parameters as you applied in NikonScan.

Now it may help at this point to run a few manual calculations for the
application of these functions to the test data, just so you have an
idea of the results you should expect. An Excel spreadsheet is quite
handy for this, but any calculator will do.
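For example, a minimal sketch of those manual calculations, assuming the white point of 3 is specified on the 0-255 input scale and that Levels-style gamma means raising the normalised value to 1/gamma:

```python
# Manual calculation of a white point of 3 plus gamma 3.00 applied to a
# 16bpc ramp whose data occupies levels 0-255. Scale conventions assumed.
def levels(v, white=3, gamma=3.0, out_max=65535):
    norm = min(v / white, 1.0)            # white point: level 3 maps to full scale
    return round(out_max * norm ** (1.0 / gamma))

for v in range(5):
    # 0 -> 0; 3 and above pin to 65535; 1 and 2 land on the gamma curve
    print(v, "->", levels(v))
```

Only levels 0, 1 and 2 survive as distinct values; everything from 3 up pins to full scale, which is what makes posterisation and missing codes easy to spot in the resulting histogram.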

Still in Photoshop, open the NikonScan4 processed image and compare it
to the Photoshop processed image. Which is more posterised? Look at
the histograms. Which has the most missing codes and peaks? Which image
and histogram actually best matches the manual calculations you did?
Explanation? Well, if you look back in this thread you will find that I
have been banging on about the issue almost since the thread started -
Photoshop is not sacrosanct and scanning raw just to get its benefits
ignores the potentially *superior* functions of the scanner software
itself.

NikonScan4, for example, is a true 16-bit platform, capable of processing
the raw output of 16-bit capable scanners correctly without corrupting
the data (i.e. data from the LS-5000 & 9000 as well as 4x or greater
multiscanned output from the LS-4000 & 8000). It has to be, otherwise
the superiority of the scanners it is used with over their competitors
would never be apparent. Photoshop, in contrast, is merely a 15-bit
platform masquerading as 16-bit capable, and as soon as you import a
raw image from any of those scanners (and others, such as the Minolta
unit in the subject of this thread) you irretrievably corrupt the image,
whether it is a raw image or one that has been previously processed by
the scanner software. It is therefore a more accurate workflow to apply
certain functions, including curves and levels but not limited to them,
in NikonScan *prior* to importing the resulting image into Photoshop for
further processing. That way your level-shifting procedures are
implemented with better precision and quality.
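The claimed corruption on import is easy to model. The exact mapping Adobe uses is not public, so the scaling below is an assumption, but any mapping of 65536 codes onto 32769 must collapse codes:

```python
# Sketch of the claimed 15-bit bottleneck: mapping true 16-bit data
# (0-65535) into a 0-32768 internal range and back. The rounding scheme
# here is an assumption; the point is the code collapse.
def to_ps15(v16):
    return round(v16 * 32768 / 65535)

def from_ps15(v15):
    return round(v15 * 65535 / 32768)

unique = {to_ps15(v) for v in range(65536)}
print(len(unique))        # 32769 codes survive out of 65536

round_trip_losses = sum(1 for v in range(65536) if from_ps15(to_ps15(v)) != v)
print(round_trip_losses)  # roughly half the 16-bit codes don't round-trip
```

Roughly half of the original 16-bit codes cannot survive the trip, which is the precision argument for doing large level shifts in true 16-bit scanner software first.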

Now, wasn't that essentially what Toby said? Whilst your statement that
the opposite was true is a general principle, the absolute truth is not
exactly the opposite of Toby's statement - which was my specific
criticism. I am in no doubt that you will not just take my word for it
and undertake this test yourself. When you have verified my claims I
expect a full apology for wasting my time, your time and generally
misleading the collective.
 
Bruce Graham

The Nikon scanner offers an analog gain control, which is a hardware
control for exposure. If I were using a Nikon, I will definitely use it
for a raw scan for an image that calls for it. (Don gave a much more
detailed explanation.) I would really like to hear from the others
whether the Minolta or Canon scanners also offer hardware exposure
controls.
The Canon lets you choose the exposure time. The scanning speed is
8, 12, 16, or 24 ms/line according to

http://www.canon.com.au/ftp/scanners/fs4000scannerhigh.pdf

Both Vuescan and Canon FilmGet provide this control (and you can hear the
scanner ticking over slowly when the exposure is longer than normal). I
have not investigated Silverfast, due to its lack of support for the IR
channel on the Canon scanner, and its cost.

With longer exposures, and maybe a second pass for IR cleaning, and maybe
a third pass if you are using the long-exposure option in Vuescan, you
have a long wait for a result. I do find the Long Exposure option useful
in situations such as sun in the frame (negative), to avoid gritty sky
around the sun, especially when +2 stops of compensation has been used
for the foreground. I remember that Ed Hamrick complained that the IR
lamp is very weak and that Vuescan often sets the IR exposure to maximum.
The Canon software does not.

A normal 4000 dpi scan of a transparency takes only about 40 seconds, however.

I can't comment on the noise performance. If the scanner is noisier than
others, then all this control probably just gets it up to the standard
achieved by others. Maybe the James photography test is the best measure
of this - it uses a standard target across multiple samples of most
scanners. I would be interested in other people's comments on that test,
especially as it relates to noise.

Bruce Graham
 
WD

Kennedy,

Where did you find out that Photoshop actually works in 15 bits
(vs. 16 bits), and why do you think they would write the code that way?

W
 
Don

Kennedy,

Where did you find out that Photoshop actually works in 15 bits
(vs. 16 bits), and why do you think they would write the code that way?

In 'comp.graphics.apps.photoshop', Chris Cox (one of the programmers on
the Photoshop team; his name is in the "About" credits) once posted
stating that Photoshop's "16-bit" mode actually uses 15 bits "+ 1".

I don't get the "+ 1", but that's what the man said. To handle the
"+ 1" they have to be using 16-bit words, and the word must therefore be
unsigned - so it all seems very odd (possibly an arcane Mac legacy).

It certainly looks like one of those "historical" reasons which I
always hear/read as "hysterical" reasons - as in "hysterically funny"
rather than the intended justification. ;o)

I'll append the saved message below.

Don.

--- cut ---
No, it is correct.



Yes, it can.



No, it represents 32769 values. I said the range was 0 to 32768.



So far none of them produce more than 14 bits/channel (until you get to
some really expensive scientific cameras).

Chris
--- cut ---
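The "15 bits + 1" arithmetic checks out if the internal range really is 0 to 32768. The power-of-two rationale in the last lines is an often-suggested explanation, not something confirmed here:

```python
# "15 bits + 1": the range 0..32768 holds 32769 values - one more than
# 15 bits can address, hence the "+ 1".
values = 32768 - 0 + 1
print(values)             # 32769
print(2 ** 15)            # 32768: 15 bits fall one value short

# One often-suggested (assumed) rationale: with white at 32768, exact
# mid-grey exists (16384) and scaling is a cheap shift: (a * b) >> 15.
a, b = 32768, 16384       # white x mid-grey
print((a * b) >> 15)      # 16384: stays exact with a power-of-two white
```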
 
Don

Pretty much, which is why this entire episode began with my rebuttal
"Not exactly". You should also note that your objection specifically
referred to raw scans into Photoshop, so excusing Photoshop from the
comparison at this stage is shifting the goalposts - again!

I can only respond with "not exactly"... ;o)

But seriously, considering both the magnitude of the error of stating
that raw "loses a lot of quality", as well as the rest of my reply
including subsequent messages, it is clear that Photoshop is a side
issue. The gist was, and is, the mistaken notion that raw loses ("a
lot of") quality.

As I have said, let's first dispel this error unambiguously before
descending into minutiae - no matter how important they may or may not
be in the final analysis.

Getting sidetracked into minor exceptions and shadings of meaning
(whether correct or not) is bound only to confuse anyone failing to
grasp even the most elementary concepts, as our sidelines and
sub-threads of arcane details have clearly demonstrated.
Finally, as you are now encountering in your own work, it isn't such a
minor case after all.

That was just a summary of what you yourself said, i.e. "he is more
wrong than you".
First, a little reminder: your concern is how the scanner software
manipulates the raw data from the scanner hardware. You believe that it
is always "better" to take that raw data into Photoshop for subsequent
processing, in accordance with the three lines you have quoted above,
because the scanner software always corrupts that data.

That's not entirely correct. I maintain that the notion that scanning
raw loses (a lot of) quality (regardless of what software one may use
later), as opposed to making modifications in scanner software, is
wrong.
I disagree,
and state that this is not always the case, that there are examples
where it is better to utilise some of the scanner software functions to
pre-process the data before passing it to Photoshop specifically. To
this previous statement I also add and *expand* my claim, being quite
clear that this is the first time I have done so in this thread, that
implementing those features in NikonScan is *never* any worse than
implementing those same functions in *any* image processing application.

And, as I have repeatedly stated, the quality of the relevant
algorithms (NikonScan, Photoshop or whatever...) is not the issue.
It's the fact that the scanner data has been modified before being
passed on, and stating that this modified data is "a lot" superior,
quality-wise, to the raw scanner data is simply wrong.

!!!
If you really want to dissect the details, then your response is not
"exactly" true either. David, who was part of the original
discussion, doesn't even use NikonScan; he has a Polaroid scanner!

So, once again, whether it's Photoshop or NikonScan is not the point.
The point is simply scanning "raw" vs. "cooked" and the mistaken
notion that raw "loses a lot of quality".
Now, are you sitting comfortably? Then let us begin...

Let me just get a cup of tea, first... Oh yes, and a biscuit! ;o)
Once upon a time in a far, far distant land, populated by scanners,
16-bit analogue-to-digital convertors and charge-coupled devices,
someone created a *synthetic* 16bpc ramp image from 0 to, let's say, 255
with all levels equally populated (i.e. an integer multiple of 256 pixels
in that synthetic image). This could, if you implemented sufficient
multiscans, be the raw image a suitable scanner produced from a ramp
image on film. Now, if you open that image in either Photoshop or
NikonScan and view the standard 8-bit histogram, all of the data will
lie in pixel level 0. If it is significantly different, check that you
have colour management off and gamma set to 1.00 - we don't want either
piece of software corrupting our test image now, do we?

Having got this pristine very dense test image, create a copy of it. Now
open one of these test images in NikonScan4. Go to the Curves window in
the tools palette and select a peak white of 3 and a gamma (mid grey
pointer) of 3.00. Save the NikonScan processed image.

Now start Photoshop (I tried this in PS7.1, but I believe any version of
PS that handles 16bpc images will do the same). Open the copy of your
pristine test image making sure that you don't select any colour
management, go into Levels and, again, apply a peak white of 3 and a
gamma of 3.00, just the same parameters as you applied in NikonScan.

There is a problem here... The Levels gamma calculations are incorrect.
Instead, I use manually calculated gamma curves (AMP files).

Indeed, comparing Levels gamma-corrected files with images corrected
using AMP curves revealed a significant difference. It's immediately
apparent even to the naked eye how inexact Levels gamma is (banding,
pixelization, etc.).
Now it may help at this point to run a few manual calculations for the
application of these functions to the test data, just so you have an
idea of the results you should expect. An Excel spreadsheet is quite
handy for this, but any calculator will do.

Still in Photoshop, open the NikonScan4 processed image and compare it
to the Photoshop processed image. Which is more posterised? Look at
the histograms. Which has the most missing codes and peaks? Which image
and histogram actually best matches the manual calculations you did?
Explanation? Well, if you look back in this thread you will find that I
have been banging on about the issue almost since the thread started -
Photoshop is not sacrosanct and scanning raw just to get its benefits
ignores the potentially *superior* functions of the scanner software
itself.

I have just addressed that above!!! There are two issues here: one, the
focus on Photoshop and, two, operator (in)competence. Photoshop is
just an example of an external image editing tool, and with judicious
testing and the use of proper tools (e.g. AMP curves) this can clearly
be handled. That's why it is not part of the "raw" vs. "cooked" scan
quality discussion.

!!!
Finally, it's also telling that in order to spot posterization you had
to load the image into Photoshop! Such posterization would not even be
detectable in NikonScan!
NikonScan4, for example, is a true 16-bit platform, capable of processing
the raw output of 16-bit capable scanners correctly without corrupting
the data (i.e. data from the LS-5000 & 9000 as well as 4x or greater
multiscanned output from the LS-4000 & 8000). It has to be, otherwise
the superiority of the scanners it is used with over their competitors
would never be apparent. Photoshop, in contrast, is merely a 15-bit
platform masquerading as 16-bit capable, and as soon as you import a
raw image from any of those scanners (and others, such as the Minolta
unit in the subject of this thread) you irretrievably corrupt the image,
whether it is a raw image or one that has been previously processed by
the scanner software. It is therefore a more accurate workflow to apply
certain functions, including curves and levels but not limited to them,
in NikonScan *prior* to importing the resulting image into Photoshop for
further processing. That way your level-shifting procedures are
implemented with better precision and quality.

Aside from the point that this is not tied to Photoshop exclusively but
is more of a conceptual discussion, conversion to gamma 2.2 is one of
the things I still do in NikonScan, for the simple reason that doing it
in an external editor does not really add anything to the process.

There is no loss of quality, and doing gamma in NikonScan simplifies
the process considerably because I don't have to keep changing gamma
all the time.
Now, wasn't that essentially what Toby said?

No.

At the time, that was well over Toby's head (I'm sorry, Toby, no offence
intended; it's just an example), and it is more than apparent from
the rest of the thread (both before and immediately after) that you
are just projecting your own knowledge.

The rest of the thread (before and immediately after) clearly shows a
lack of even basic knowledge, let alone such arcane details, so to
assume someone like that would even be aware of such obscure details,
let alone imply them, is just not a reasonable assumption or
conclusion.

That's why I said that in order to help someone lacking such basic
concepts - instead of confusing them with details from the start -
it's far better to make the elementary concepts clear first. Only
then indicate that there may be exceptions. And only then - and only
if they want to - explain, *in increasing levels of detail*, what they
are.

Otherwise, descending immediately into minutiae just confuses the
issue and makes it more difficult for them to grasp the basic concepts
hidden in the torrent of detail.
Whilst your statement that
the opposite was true is a general principle, the absolute truth is not
exactly the opposite of Toby's statement - which was my specific
criticism. I am in no doubt that you will not just take my word for it
and undertake this test yourself. When you have verified my claims I
expect a full apology for wasting my time, your time and generally
misleading the collective.

I'll let the collective speak for themselves but I certainly have
nothing to apologize for.

And there is also nothing to test, because you are still basing
everything on Photoshop as the external editor.

I suspect you will stick to that, so since we have said everything
already, instead of running around in circles let's just "agree to
disagree agreeably" on this one and let the reader make up their own mind.

Don.
 
Kennedy McEwen

WD said:
Kennedy,

From where did you find out the Photoshop actually works in 15 bits
(vs. 16 bits)
Some tests didn't give the results I expected, so I investigated further.
I assumed, as did others, that this was because they used signed integer
arithmetic rather than unsigned words. However, I recall this being
disputed by Chris Cox somewhere, although his explanation was somewhat
less than convincing for the 32769 discrete levels it copes with.
and why do you think they would write the code that way?

I am still convinced they are using standard signed integer arithmetic
(15 bits of data plus one sign bit) for ease of programming. The range
they claim could be achieved simply by offsetting the abstract datum, so
that the available positive integers represent 1 to 32768, with the sign
bit being used as a zero flag.
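That conjectured scheme does round-trip cleanly, for what it's worth. This is a sketch of the conjecture only, not of Adobe's actual code:

```python
# Sketch of the conjecture above (an assumption, not a documented
# implementation): store levels 1..32768 as offset positives 0..32767,
# and flag level 0 with the sign bit.
def encode(level):          # level in 0..32768
    if level == 0:
        return -1           # sign bit doubles as the "zero" flag
    return level - 1        # 1..32768 -> 0..32767, fits in 15 bits

def decode(word):
    return 0 if word < 0 else word + 1

assert all(decode(encode(v)) == v for v in range(32769))
print("round-trips all 32769 levels within 15 bits plus sign")
```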
 
Kennedy McEwen

Don said:
In the 'comp.graphics.apps.photoshop' Chris Cox (one of the
programmers on the Photoshop team, his name is in the "About" credits)
posted once stating that Photoshop "16-bit" mode actually uses 15-bits
"+ 1".
Sounds familiar, but I don't think the quote I snipped was where I read
it, so there is probably more than one instance of this being leaked
from Adobe Towers. A bit of an exaggeration, really: that extra
"+ 1" makes Photoshop a 15.0000440268868273167176441087067-bit
application, which is a lot closer to 15 bits than to 16. Marketing
and spin taken to excess, methinks.
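The figure is easy to verify:

```python
import math

# Bits needed to address 32769 discrete levels (the 0..32768 range).
bits = math.log2(32769)
print(f"{bits:.10f}")     # 15.0000440269
```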
 
Kennedy McEwen

Don said:
That's not entirely correct. I maintain that the notion that scanning
raw loses (a lot of) quality (regardless of what software one may use
later), as opposed to making modifications in scanner software, is
wrong.

And that is where the dispute lies because, whilst I am aware that this
may well be what you meant, it is not what you said or implied in the
lines I have retained above!

Exactly the opposite of Toby's "You lose a lot of quality this way (scan
raw and edit in Photoshop)" is "You lose a lot of quality editing in the
scanner software before passing to Photoshop". This was further
clarified (or reinforced interpretation) by your immediately succeeding
statement that the quality was lost "By doing image editing at the
scanning stage you irreparably *corrupt* the image at the earliest
possible stage and with the crudest possible "tools"."
And as I have repeatedly stated that the quality of relevant
algorithms (NikonScan, Photoshop or whatever...) is not the issue.
It's the fact that scanner data has been modified before being passed
on, and stating this modified data is "a lot" superior quality-wise to
the raw scanner data is just simply wrong.
As I have stated numerous times, that may well be wrong, but so is your
statement that the "exactly the opposite is true"! Exactly the opposite
is NOT true. Exactly the opposite is generally true, but NOT always.

If we quantify quality for a moment as a number and call editing in
scanner software prior to PS "A" and scanning raw and editing in PS "B"
then Toby effectively stated that A is greater than B. You stated that
the opposite is true, which means that you are claiming that B is
greater than A. Even if a single instance can be shown where A is
*equal* to B then your statement is wrong (as is Toby's) and more wrong
if situations exist where A really is greater than B. No matter how
many more cases of A>B than B>A exist, if a single case of B>=A exists,
then your statement is wrong.

Expressed in English rather than basic logic, this means that if a
single instance can be demonstrated where processing in the scanner
software prior to passing to Photoshop does NOT lose any quality, or
results in improved quality, then your statement is clearly wrong. The
test below demonstrates that, in the case of Photoshop, there are cases
where passing raw information to it for editing does produce inferior
results; consequently your statement is wrong. Even extending your
definition of "Photoshop" to any image processing package, even one
which correctly processes 16-bit data, the statement is still wrong,
because then both routes would produce identical results!
!!!
If you really want to dissect the details, then your response is not
"exactly" true either. David, who was part of the original
discussion, doesn't even use NikonScan, he has a Polaroid scanner!
Indeed, and the discussion prior to your involvement actually concerned
third party software and Toby's migration from NikonScan to Silverfast,
however the statements that *you* quoted and responded specifically to
addressed the merits of raw scan data being passed to Photoshop. It was
the absolute nature of *your* response to Toby's comment that I
recognised as being incorrect.
So, once again, whether it's Photoshop or NikonScan is not the point.
The point is simply scanning "raw" vs. "cooked" and the mistaken
notion that raw "loses a lot of quality".
Once again, that is what you may have meant. It is *NOT* what you said.
The exceptions to your statement prove that it is *NOT* absolute, in
short "Not exactly!".
There is a problem here... Levels gamma calculations are incorrect.
Instead, I use manually calculated gamma curves (AMP files).
OK, just use linear transforms to pull those lower 8 bits out, the
result is still the same - Photoshop loses because it only has 7 lower
bits, while NikonScan really has a total of 16.
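The lost low-order bit can be illustrated numerically. A minimal sketch, assuming a crude halving model of Photoshop's 0..32768 internal range, not its exact conversion:

```python
# Sketch, assuming a simplified model of Photoshop's "16-bit" mode,
# which actually uses a 15-bit-plus-one range (0..32768): dropping
# one bit collapses adjacent 16-bit scanner codes together.

def to_15_bit(v16):
    """Crude halving model; NOT Photoshop's exact conversion."""
    return v16 >> 1

a, b = 0x1234, 0x1235          # differ only in the lowest bit
print(to_15_bit(a), to_15_bit(b))  # both map to the same code
```

Under this model the full 16-bit input range lands on only 32768 distinct codes, so pairs of formerly distinct tones become indistinguishable.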
I have just addressed that above!!! There are two issues here: one,
focus on Photoshop

Is Photoshop no longer an example of an "image editing" suite? I am
sure that Chris Cox et al. would have a different view of things, just
as Ed's view of Vuescan differed somewhat from your implications and
statements.
and, two, operator (in)competence.

Gee, thanks for the compliment (not)!
Photoshop is
just an example of an external image editing tool, and by judicious
testing and use of proper tools (e.g. AMP curves) this can clearly be
handled.
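As a concrete illustration of the kind of transfer curve such custom-curve (AMP-style) files encode, here is a minimal Python sketch computing a gamma-2.2 lookup table over the full 16-bit range. The actual AMP file layout is not reproduced here; this is illustrative only:

```python
# Sketch only: a gamma-2.2 transfer curve as a 16-bit lookup table,
# the kind of arbitrary mapping an AMP-style curve file encodes.
# (The real AMP file format is not shown.)

GAMMA = 2.2
lut = [round(65535 * (v / 65535) ** (1.0 / GAMMA)) for v in range(65536)]

# Endpoints are preserved and mid-tones are lifted:
print(lut[0], lut[32768], lut[65535])
```

Applying such a table is a single indexed lookup per pixel, which is why precomputed curves are the usual mechanism for gamma conversion.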

But its limitation to 15-bits, as opposed to native scanner software's
ability to handle 16-bits, cannot be handled by any plugin processes or
options.
That's why it is not a part of the quality of "raw" vs.
"cooked" scan discussion.
However it is part of the "scan raw and edit in Photoshop" discussion,
which is this one, unless you are going to change those 3 lines quoted
above!

As already stated, even ignoring Photoshop's limitations, your statement
is still wrong because at least one scanner package (and possibly more,
and I haven't checked Vuescan or Silverfast for this issue specifically)
processes the data accurately, consequently no 16-bit "image editing"
*can* do any better. Now, if a package provided *more* than 16-bits
internally and you specifically restricted your comment to that package,
then I would concede that your statement was, absolutely, correct even
if the scanner package itself did so, since it could not transfer more
than 16-bits of data per channel. I am not aware of such a package, and
you have not restricted your comment to such software, so the situation
remains unchanged: your statement is wrong in the use of "exactly".
Finally, it's also telling that in order to spot posterization you had
to load the image into Photoshop! Such posterization would not even be
detectable in NikonScan!
Quote: "Just because these gaps are not displayed, it doesn't mean they
aren't there." Remember who said that?

Also, your statement is completely wrong. I did not *need* to load the
image into Photoshop to see the effect. I could have saved the
Photoshop processed image and compared the two images in Nikonscan. I
used Photoshop to demonstrate that the 16-bit image is corrupted as soon
as it is imported into Photoshop. Hence it is better to apply proper
16-bit arithmetic functions in NikonScan (and potentially other scanner
applications) *before* importing them to Photoshop (and potentially
other image editing applications).
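The "gaps are there even if not displayed" point can be checked numerically rather than by eye. A hypothetical sketch; the function and sample data are made up for illustration:

```python
# Hypothetical sketch: counting histogram gaps as an objective test
# for posterization, instead of relying on what a viewer displays.

from collections import Counter

def histogram_gaps(pixels):
    """Count empty value bins between the min and max pixel codes."""
    hist = Counter(pixels)
    lo, hi = min(pixels), max(pixels)
    return sum(1 for v in range(lo, hi + 1) if hist[v] == 0)

smooth = list(range(256))                    # continuous ramp: no gaps
posterized = [(v // 4) * 4 for v in smooth]  # crushed to every 4th code

print(histogram_gaps(smooth), histogram_gaps(posterized))
```

A comb of empty bins between occupied ones is the numeric signature of posterization, whether or not any particular application chooses to display it.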
Aside from the point of this not being tied to Photoshop exclusively,
but more of a conceptual discussion, conversion to gamma 2.2 is one of
the things I still do in NikonScan for the simple reason that doing it
in an external editor does not really add anything to the process.

There is no loss of quality and it simplifies the process considerably
by doing gamma in NikonScan because I don't have to keep changing
gamma all the time.
Dammit Don, this statement in itself contradicts your initial one!
No.

At the time that was so over Toby's head (I'm sorry Toby, no offence
intended, it's just an example) and that is more than apparent from
the rest of the thread (both before and immediately after) that you
are just projecting your own knowledge.
Only because such projection is necessary to explain to *you* the error
of your absolute statement.
The rest of the thread (before and immediately after) clearly shows a
lack of even basic knowledge let alone such arcane details, so to
assume someone like that would even be aware of such obscure details
let alone imply them is just not a reasonable assumption or
conclusion.
What they are assumed to be aware of is irrelevant. Your statement was
absolute and that is what was wrong with it. Had you said something
like "to all intents and purposes, the opposite is true" or "there are a
few exceptions that you don't need to know about at the moment but in
general the opposite is true" then you might have a case. You didn't,
you don't, and your continual twisting and cavorting to get off that
particular hook are to no avail.
Otherwise, descending immediately into minutiae just confuses the
issue and indeed makes it more difficult for them to understand the
basic concepts hidden in all the torrent of detail.
I well recall, as a mere 12-year-old, a particular experiment
undertaken by my physics teacher to demonstrate interference
between a radio wave and its reflection. I asked whether the
radio receiver itself was distorting the radio waves and affecting the
results of the experiment and, if it was, how we knew that we weren't
simply measuring those effects and that the interference itself did not
exist. His response partially addressed my concern: the radio
receiver was very small compared to the emitter or the reflector, so its
effect could be neglected. However, he went on to explain that, if I
continued to study physics beyond school, I would indeed encounter
situations where the very act of observing an experiment would influence
the results, but not to worry too much about that at the moment, just to
recognise it as a valid possibility in any experiment. It was another 8
years before I encountered the Heisenberg Uncertainty Principle to
which he alluded.

I doubt that anyone who spent even a small fraction of the time I spent
studying physics over those 8 years on the issues of scanning raw
versus editing would fail to encounter exceptions to your
statement. Consequently, your excuse for ignoring it is invalid.
I'll let the collective speak for themselves but I certainly have
nothing to apologize for.
You have, and I suspect you know it.
And there is also nothing to test, because you are still basing
everything on Photoshop as the external editor.
Photoshop is an example of an image editor to which raw scans can be
fed. As explained, it is neither unique nor necessary to the case, but
is a convenient example to demonstrate the error of your statement.
 

Don

We both pretty much said all we have to say on this, so we would only
be repeating ourselves at this point.

So, let the reader decide if the "Photoshop exception" is relevant to
them or, indeed, if it is an exception. They should have ample
information from both points of view by now.

Don.
 

DavidTT

Thanks for the inputs.

Bruce said:
The Canon lets you choose the exposure time
Scanning speed is either 8, 12, 16, or 24 ms/line according to

http://www.canon.com.au/ftp/scanners/fs4000scannerhigh.pdf

Both Vuescan and Canon Filmget provide this control (and you can hear the
scanner ticking over slowly when exposure is longer than normal). I have
not investigated Silverfast due to a lack of support for the IR channel
on the Canon scanner and cost.

With longer exposures and maybe a second pass for IR cleaning and maybe a
third pass if you are using the long exposure option in Vuescan, you have
a long wait for a result. I do find the Long Exposure option useful for
situations such as sun in the frame (negative) to avoid gritty sky around
the sun, especially when +2 stops comp has been used for the
foreground. I remember that Ed Hamrick complained that the IR lamp is
very weak and Vuescan often sets the IR exposure to max. The Canon
software does not.

A normal 4000dpi scan on a transparency is only about 40 sec however.
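Those per-line figures make the timing easy to sanity-check. A rough sketch, assuming a ~5000-line frame at 4000 dpi; the line count is an assumption for illustration, not taken from the spec sheet:

```python
# Back-of-envelope sketch: total scan time scales linearly with the
# per-line exposure.  LINES is an assumed figure for a 35 mm frame
# at 4000 dpi, not taken from the Canon spec sheet.

LINES = 5000
for ms_per_line in (8, 12, 16, 24):
    print(f"{ms_per_line} ms/line -> {LINES * ms_per_line / 1000:.0f} s")
```

At the fastest 8 ms/line setting this comes to about 40 seconds, consistent with the figure quoted above; the slowest setting triples it before any IR or long-exposure passes are added.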

I can't comment on the noise performance. If the scanner is noisier than
others, then all this control probably just gets it up to the standard
achieved by others. Maybe the James photography is the best measure of
this - it uses a standard target across multiple samples of most
scanners. I would be interested in other people's comments on that test
especially as it relates to noise.

Bruce Graham
 

Kennedy McEwen

Bruce said:
and speed of processing?
Maybe not so much these days. Most processors can process long integers
(31-bit + sign) in about the same time as integers (15-bit + sign).
Certainly a reason for not doing the work in reals though. ;-)
 
