Which is better, the Nikon Super Coolscan 4000 or the Coolscan V ED?

Hans-Georg Michna

it's dubious to me whether
you can actually achieve higher resolution even with misaligned scans: come
to think of it, scans are not just *misaligned* in your everyday scanner,
but they're misaligned in ways that vary *in* the image.

The top may have some amount of average misalignment, the bottom may have
some other amount, *and each row or column might have a different and
random amount of misalignment* due to errors in the motor steps
(vertically) and loose movement of the CCD array (horizontally).

How would software find out about all this?
No, I think that with scanners, multi-scanning will in practice be much
more useful for noise reduction.
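For the pure noise-reduction case, plain frame averaging is the whole
trick. A minimal numpy sketch; the file names, the frame count and the
8-bit depth are all assumed here:

import numpy as np
from PIL import Image  # Pillow assumed; any image loader would do

# Average four registered scans of the same slide: random sensor noise
# drops by roughly sqrt(4) = 2x while the image content is unchanged.
scans = [np.asarray(Image.open("scan%d.tif" % i), dtype=np.float64)
         for i in range(1, 5)]
mean = np.mean(scans, axis=0)
Image.fromarray(np.clip(mean, 0, 255).astype(np.uint8)).save("averaged.tif")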

Lorenzo,

we're having a highly speculative discussion now, but for the
sake of entertainment, I'd say that it would be interesting to
find out how scanners actually misalign successive scans of the
same photo. Perhaps it's not as bad as you think.

But even if, say, a slide slowly moves during a scan, software
could still transform the resulting slightly distorted images.
It could, for example, work not just with a linear model on the
entire picture, but could align pieces. Autostitch gives a vague
indication of what's possible.
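As a rough sketch of that piecewise idea, here is a per-tile shift
estimator using phase correlation. The integer-only shifts and the
fixed tile size are simplifying assumptions; ALE and Autostitch do far
more than this:

import numpy as np

def tile_shift(a, b):
    # Phase correlation: the peak of the inverse-FFT correlation surface
    # gives the relative integer (dy, dx) shift between tiles a and b.
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    if dy > a.shape[0] // 2: dy -= a.shape[0]   # unwrap negative shifts
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

def piecewise_shifts(img_a, img_b, tile=256):
    # One local shift estimate per tile instead of one global model.
    shifts = {}
    for y in range(0, img_a.shape[0] - tile + 1, tile):
        for x in range(0, img_a.shape[1] - tile + 1, tile):
            shifts[(y, x)] = tile_shift(img_a[y:y+tile, x:x+tile],
                                        img_b[y:y+tile, x:x+tile])
    return shifts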

It would be interesting to be able to obtain a higher resolution
from a relatively cheap scanner, even if it takes long, just for
the one very important photo.

Hans-Georg

p.s. Looked at ALE and liked what I saw, but I simply don't have
the time to learn and understand all those parameters. I would
need an automatic program that's easy to use.
 
Hans-Georg Michna

Most
flatbeds fall into the latter category because they are already
utilising a form of misaligned resolution enhancement through the
HyperCCD approach

Kennedy,

ah, I didn't know that. Must read up on it.

These flatbed scanners are really cheap these days. Just bought
a little Canon LiDE 60 for something around €60, which is
nothing compared to what scanners used to cost a couple of years
ago.

Hans-Georg
 
Hans-Georg Michna

in
case of LS-50 and other 4000 dpi scanners, in practical terms, they
already extract the maximum resolution from average film.

Don,

that doesn't coincide with my observations at all, based mostly
on my Nikon LS-4000 ED. I had already discussed that with
Kennedy, and if I recall the results correctly, all common
4000 ppi scanners are, to put it simply, unsharp. If they had the
required optics to make the best possible 4000 ppi picture file
from a photo or slide, then you would be right, but the required
optics would be extremely expensive and totally out of the
question.

Therefore all scanners, particularly the amateur ones like mine,
make compromises, yielding 4000 ppi pictures that are simply not
as sharp as the original film.

I guess that, to reproduce most of what's in a 35 mm slide,
you'd need to buy a scanner with a resolution of 8000 to 12000
ppi. You could then reduce the resolution of the resulting files
by means of a good filter, like Lanczos, without losing very
much. But a normal 4000 ppi scanner will not yield the same
quality by far.

Very unfortunate, but this is how it is. Of course the happy guy
who has just shelled out thousands to buy his wonderful 4000 ppi
scanner won't like to hear that at all. Sorry, folks.

Hans-Georg
 
Lorenzo J. Lucchini

Kennedy said:
Lorenzo J. Lucchini said:
Kennedy said:
[snip]

Yes, two perfectly aligned scans won't do it. But it's dubious to me
whether you can actually achieve higher resolution even with misaligned
scans

That isn't in any doubt, and I have built enough systems that have
proved it in side by side comparison to be convinced.

Kennedy, you snipped the part where I said *in what context* I don't think
resolution can be gained...
Having looked at your previous posting again Lorenzo, there doesn't seem
to be any additional context that I snipped. The text I quoted was your
in-line reply to Hans-Georg's statement - and I specifically kept the
part about correct alignment not providing any gain to keep your comment
in context. You did make additional comments over and above that, but
see below.

Hm well then, I don't remember exactly what I wrote; I could search back, of
course, but suffice it to say that my intention was not to make a general
statement that "higher resolution is not attainable by means of merging
multiple misaligned images", but just that "higher resolution is probably
not attainable in consumer scanners because of lack of mechanics
precision".

I do realize that the theory behind it all is valid, and that the
mathematics would indicate a resolution gain is possible.
Mainly military scanned imaging sensor systems, but one or two
industrial imaging sensor systems I designed used a systematic
misalignment between frames to achieve increased resolution, which was
measured in test rooms.


Yes, I snipped because it was "in addition to" your original comment as
covered by your "come to think of it" statement, not a contextual
qualifier.

Don't bet on it - military kit is reliable, but the precision is often
no better than commercial, particularly when the kit is operating at the
extreme of its mechanical design envelope, such as on a fast jet pulling
maximum g or a tank manoeuvring over rough terrain.

The question is whether it varies significantly enough to prevent
consistent alignment to half a pixel accuracy or so. That isn't so
demanding as your overarching statement, and many scanners are indeed
capable of this. Even for those where frame to frame alignment might be
out by more than a pixel, making dumb multipass multiscanning (as in
Vuescan) worthless, the variation across the field on each frame can
easily be less than half a pixel.

But if, say, the first scan line is "correct", and the last scan line is off
by half a pixel (or even more!), we can still do a geometrical
transformation to correct for this, can't we? (ALE does this.)

What I'm dubious about is what happens when *each* scan line is off by a
significant amount. At the end of the scan, the first scan line and the
last scan line could even both be in perfect alignment, if the previous
errors averaged out. But in this case, I can't think of a way ALE or any
other program could reasonably reconstruct alignment...

I think that's a big "may". I mean, it's certainly something that would be
interesting to try, I just think the probabilities of success are very
low, for the reasons I mentioned above. But then, perhaps some "dedicated
film desktop scanners" have really, really good mechanics; who am I to
know? (Or rather, what money do I have to know? ;-)
Let's just say that the alignment of consecutive scans on my Nikon
LS-4000 is certainly better than half a 4000ppi pixel across the frame,
although the frame to frame alignment itself certainly is a lot worse.
There may well be long-term variations, say if one scan was performed
several hours after the other, but consecutive scanning is pretty good.

But even if the alignment is better than half a pixel across the frame, what
is the average misalignment between a pixel (or scan line) and the next?
Isn't this the most important factor?
Again, *across-frame misalignment* can be corrected for anyway, I think, as
a program like ALE has enough data to deduce the amount of misalignment. I
cannot really think of a way to guess the amount of scanline-to-scanline
misalignment, as a program could only deduce it using the data from *one*
scanline!


by LjL
(e-mail address removed)
 
Lorenzo J. Lucchini

Hans-Georg Michna said:
Lorenzo,

we're having a highly speculative discussion now, but for the
sake of entertainment, I'd say that it would be interesting to
find out how scanners actually misalign successive scans of the
same photo. Perhaps it's not as bad as you think.
Perhaps.

But even if, say, a slide slowly moves during a scan, software
could still transform the resulting slightly distorted images.
It could, for example, work not just with a linear model on the
entire picture, but could align pieces. Autostitch gives a vague
indication of what's possible.

Yes, *if a slide slowly moves during a scan* (or rather, if the stepping
motor slowly drifts).
ALE can do it, and not just by "working with pieces", but by actually
automatically performing transformations (shrinking, enlargement, rotation
even) on each image. Sorry for sounding like an advert for ALE, I'm not
involved in it in any way, I just think it's a cool program.

But I'm more dubious about what could happen if the misalignment doesn't just
change slowly, but has significant errors at *each* scan line -- errors
which, in the end, could even average out to zero, giving the impression of
*no* error at all! And still, it would be worse than "a slide slowly
moving during the scan".

It would be interesting to be able to obtain a higher resolution
from a relatively cheap scanner, even if it takes long, just for
the one very important photo.

Yeah, though running ALE does take a *very* long time for big images, at
least here.
Hans-Georg

p.s. Looked at ALE and liked what I saw, but I simply don't have
the time to learn and understand all those parameters. I would
need an automatic program that's easy to use.

I think you guys here in CPS are simply scared by the command line :p
ALE does have a lot of parameters, as does any complex and powerful program, but
it also has reasonable defaults for the times you don't want to use all
those parameters.

Just specify:

ALE image_1 image_2 ... image_n output_image

and you'll be able to merge frames in a multi-pass multi-scan way (but
*with* the realignment).

This way, the output image will not have more resolution than the input
images, just (hopefully) less noise. If you want to try what we're talking
about in this thread, you can do

ALE --scale=x image_1 image_2 ... image_n output_image

where x is the scale factor (2 for example).


In both cases, you'll be using defaults (specifically, you'll default to
option "--q0") that try to make the process fast (sort of), but sacrifice
quality.

Add options "--qn", "--q1" or "--q2" (sorted by quality, from lowest to
highest) to change that. I think that, for experimenting with resolution
improving, one should use either "--q1" or "--q2", while "--qn" could be
adequate for normal multi-pass multi-scanning (the "n" in "--qn" stands for
"noise").


Also, by default, ALE only tries applying "Euclidean" (translation,
rotation) transformations to re-align images. You can specify
--projective
to get all the transformations ALE is capable of, or
--translation
if you are in a hurry.
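
Putting the pieces together, a resolution-enhancement run using only the
options described above might look like this (the file names are
placeholders, and whether this particular combination suits your scans
is untested):

ALE --scale=2 --q2 --projective image_1 image_2 image_3 output_image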


There are a ton more parameters that can be used, but hey, if you trust the
program's authors as far as the program code is concerned, you could trust
them on their choice of default parameters as well! At least if you don't
have long hours to waste.


by LjL
(e-mail address removed)
 
Hans-Georg Michna

Lorenzo,

thanks a lot for the quick user guide! That's very helpful.

Downloading ALE.

Hans-Georg
 
Kennedy McEwen

Lorenzo J. said:
But if, say, the first scan line is "correct", and the last scan line is off
by half a pixel (or even more!), we can still do a geometrical
transformation to correct for this, can't we? (ALE does this.)
That depends on whether the transition from "correct" at the start of
the scan to "incorrect" at the end of the scan is progressive or in one
or more discontinuous (whether monotonic or not) steps.
What I'm dubious about is what happens when *each* scan line is off by a
significant amount.

If each line is off by a significant amount then you won't get the
quoted resolution in the first place, let alone any enhancement of it.
At the end of the scan, the first scan line and the
last scan line could even both be in perfect alignment, if the previous
errors averaged out. But in this case, I can't think of a way ALE or any
other program could reasonably reconstruct alignment...
Quite - if all you have is random offsets between samples then you
really can't do much about it. However, where I disagree is the
occurrence of such conditions in commercial scanners - I spent some time
today looking for just this type of effect on my Epson flatbed (having
established that it isn't significant in the Nikon) and it also steps
consistently with much better than half pixel precision on each step. It
is relatively trivial to establish this just by presenting the scanner
with a high enough resolution test image that induces aliasing.
Relatively small phase shifts of the sampling points, of the type you
are concerned about, produce very large phase errors of the aliased
signal. Simple arithmetic scales the alias phase error back to the
original.
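
That "simple arithmetic" is easy to demonstrate numerically. A toy numpy
sketch with assumed numbers: 4000 ppi sampling of a 3800 cy/in test
target, which aliases to 200 cy/in, so sampling-grid errors appear
magnified 19x in the alias pattern:

import numpy as np

fs, f = 4000.0, 3800.0        # sampling rate (ppi) and target frequency
fa = fs - f                   # alias frequency: 200 cycles/inch
n = np.arange(4000)           # one inch worth of samples
true_offset = 0.125           # sampling grid error: 1/8 pixel

ref = np.cos(2 * np.pi * f * n / fs)
off = np.cos(2 * np.pi * f * (n + true_offset) / fs)

# The 1/8-pixel grid error appears as a (f/fa) = 19x larger shift of the
# alias pattern; measure its phase at the alias FFT bin and scale back.
k = int(fa * len(n) / fs)     # FFT bin of the alias frequency
dphi = np.angle(np.fft.rfft(off)[k] * np.conj(np.fft.rfft(ref)[k]))
print("recovered offset: %.3f pixels" % (-dphi * fs / (2 * np.pi * f)))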
But even if the alignment is better than half a pixel across the frame, what
is the average misalignment between a pixel (or scan line) and the next?
Isn't this the most important factor?

It is an important factor - whether it is the most important depends on
the magnitude. As described above though, you can work all of this out
by inducing aliasing at the appropriate scale and observing the error.
If the line to line variation is significant then you won't be able to
induce aliasing in the first place. From the tests I have run on the
scanners I have at hand, line to line error isn't as significant as
general drift or step changes (the latter presumably due to the build up
and release of stiction at some point in the scan process).
 
Don

Don,

sorry for being ambiguous,

No problem.
but I did indeed mean that your
message in question was correct, and Fred's unwarranted attack
on it only confirmed that he had nothing substantive to reply.

I usually take such attacks as a confirmation of the attacked
message.

Yes, unwarranted attacks like that usually mean they have a hard time
admitting they're wrong. Unfortunately, we've seen a lot of that here
from some Vuescan users.

In the end it just makes matters worse. In the long run, it's always
better just to admit when one is wrong and learn from it.

Don.
 
Don

that doesn't coincide with my observations at all, based mostly
on my Nikon LS-4000 ED. I had already discussed that with
Kennedy, and if I recall the results correctly, all common
4000 ppi scanners are, to put it simply, unsharp. If they had the
required optics to make the best possible 4000 ppi picture file
from a photo or slide, then you would be right, but the required
optics would be extremely expensive and totally out of the
question.

Therefore all scanners, particularly the amateur ones like mine,
make compromises, yielding 4000 ppi pictures that are simply not
as sharp as the original film.

I have not done any tests because I only have an LS-50 (and an LS-30
before that), so I can't tell if a higher-resolution scanner like Minolta's
5400 actually resolves additional detail.

I was indeed going on what Kennedy and others have written here
before.

The only first hand experience I have is going from LS-30's 2700 dpi
to LS-50's 4000 dpi and the difference between them is night and day.
LS-50 does resolve considerably more detail than the LS-30. But that
doesn't tell me if LS-50 actually resolves everything on the film.
I guess that, to reproduce most of what's in a 35 mm slide,
you'd need to buy a scanner with a resolution of 8000 to 12000
ppi. You could then reduce the resolution of the resulting files
by means of a good filter, like Lanczos, without losing very
much. But a normal 4000 ppi scanner will not yield the same
quality by far.

I've seen some comparison on a web site between a consumer scanner and
a drum scanner but such things are hard to tell from a tiny web image.

I didn't really put too much time into this (far too busy with dynamic
range and Kodachrome cast) but one problem I found is that the image sizes
are different, making a direct comparison difficult. For a side-by-side
view I had to reduce one or enlarge the other. In either case this
introduces distortions which may skew the comparison.

I'm sure there are more objective/scientific ways of going about this
but I found 4000 adequate and didn't explore it any further.
Very unfortunate, but this is how it is. Of course the happy guy
who has just shelled out thousands to buy his wonderful 4000 ppi
scanner won't like to hear that at all. Sorry, folks.

:)

Don.
 
Noons

Don said:
Therefore, blaming the user for such a sorry state of affairs is not
exactly fair. In such a case the blame lies squarely with the author.

It doesn't matter: that doesn't mean anyone can complain
about a "feature" that is simply not there, when the doco shows it
in black and white. My version of the doco is now months old.
It has said the same ever since...
Not really the same thing because complaints are about current VS
versions and the dynamic is totally different.

Yes it is the same thing. Like I said: my copy of the doco
has said the same thing for months and clearly warns of the
problems.
You'll be hard pressed to find a Linux user looking for old kernel
versions (which are freely available, BTW) because the new version is
worse.

Come again? How much have you tried to use 2.6 kernel releases
on prior 2.4 production boxes?
:)
 
Don

Yes it is the same thing. Like I said: my copy of the doco
has said the same thing for months and clearly warns of the
problems.

Yes, but comparing, say, new Linux, versions to new Vuescan versions
is really night and day. The dynamic is quite different.

But I do agree that people in general don't read the docs, although
this is in part due to bad documentation these days - if there is any!
I even have a saying: "If all else fails, read the documentation". ;o)
Come again? How much have you tried to use 2.6 kernel releases
on prior 2.4 production boxes?
:)

Upgrading Linux (I mean the process) is a whole different ball of wax.
Heck, even Windows upgrades more gracefully! fx: runs and hides...

Especially when it comes to "Red Soft", I mean "Micro Hat", no, no,
Red Hat, yes, that's it, Red Hat. It's so hard to tell them apart these
days. fx: tongue firmly in cheek... ;o)

Seriously, I had enough of circular rpm references so when the next
upgrade/install is due I'll probably switch to Debian.

Don.
 
Noons

Don said:
Yes, but comparing, say, new Linux, versions to new Vuescan versions
is really night and day. The dynamic is quite different.

Granted. I was just trying to find an analogy and Linux was
an easy target (in the middle of pushing 2.6 to do the same as
our 2.4 at the moment...)
Especially when it comes to "Red Soft", I mean "Micro Hat", no, no,
Red Hat, yes, that's it, Red Hat. It so hard to tell them apart these
days. fx: tongue firmly in cheek... ;o)

:)
Too right. Took me two weeks of poring through all the
patches just to get a coherent, working db server!
Unbelievable...
Seriously, I had enough of circular rpm references so when the next
upgrade/install is due I'll probably switch to Debian.

Hmmmm, I've been tempted. Currently using Suse on
my laptop and having a ball, but it has its quirks.
Debian is looking a lot more attractive now that the
latest kernel finally got out. If I could only convince the
powers that be to try something other than MicroHat!

And of course we have to stay with windoze for the
"desktops", because no one at HO knows how to use
anything other than outlook...

Ah well, enough of this IT crap: back to scanning.
Have you been able to bypass the blue cast problems
with your Nikon? I read somewhere the new 9000
scanner is supposed to overcome those problems with
Kodachrome, maybe it is indeed the light source itself?
 
Don

Hmmmm, I've been tempted. Currently using Suse on
my laptop and having a ball, but it has its quirks.
Debian is looking a lot more attractive now that the
latest kernel finally got out. If I could only convince
powers that be to try something other than MicroHat!

I heard good things about Suse too. I run Linux "for fun", i.e.
privately, so in that sense I'm free to change, but inertia is hard
to overcome. "Better the devil you know" type of thing, so there's
still a strong probability I'll grumble and install RedHat again. It's
all on hold because I'm about to make a major disk upgrade "soon".
And of course we have to stay with windoze for the
"desktops", because no one at HO knows how to use
anything other than outlook...

As my best friend calls it "LookOut". ;o)

He's also a Linux fan forced to use Windows at work.
Ah well, enough of this IT crap: back to scanning.
Have you been able to bypass the blue cast problems
with your Nikon? I read somewhere the new 9000
scanner is supposed to overcome those problems with
Kodachrome, maybe it is indeed the light source itself?

I just finished my KCs the other day. I'm not entirely happy so odds
are I may go back to it one day, but I just needed to get on with
archiving the rest. It's all a part of a long term project to
"digitize my life" and this includes all paper documents such as
letters and OCRing articles and... etc. with the idea of fitting
everything on a single USB disk I can take with me. Anyway...

What follows is probably "too much information" (so feel free to skip)
but for the benefit of others wrestling with KCs on a Nikon, here goes:

I scanned each slide twice and in order to reach into those deep
shadows the second scan was at +4 AG. The trouble is this left me with
very little "overlap" (too much clipping) so I used a modified boost
of R=+4, B=+3.4, G=+3.24 to maximize the gain. I then wrote a program
to combine these two scans by generating a lookup table. Without going
into too much detail (too late... ;o)) I create a "characteristic
curve" for each pair of scans which adds flexibility. There's a little
bit more to it than that but that's the gist of it.
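
What follows is a loose numpy illustration of that two-scan,
lookup-table idea; the 16-bit depth, the blend threshold and the
table-building details are all assumptions, not the actual program
described above:

import numpy as np

def build_lut(normal, boosted, bins=65536):
    # Estimate a per-channel "characteristic curve": for each level in the
    # boosted scan, the mean level the normal scan records at those pixels.
    lut = np.zeros((3, bins))
    for c in range(3):
        b = boosted[..., c].ravel()
        sums = np.bincount(b, weights=normal[..., c].ravel().astype(float),
                           minlength=bins)
        counts = np.bincount(b, minlength=bins)
        seen = np.flatnonzero(counts)
        # Interpolate across levels that never occur in this slide.
        lut[c] = np.interp(np.arange(bins), seen, sums[seen] / counts[seen])
    return lut

def combine(normal, boosted, lut, threshold=4096):
    # Wherever the normal scan sits deep in the shadows, substitute the
    # boosted scan's data remapped through the characteristic curve.
    remapped = np.stack([lut[c][boosted[..., c]] for c in range(3)], axis=-1)
    out = normal.astype(float)
    dark = out < threshold
    out[dark] = remapped[dark]
    return out.astype(np.uint16)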

I also noticed that KCs differ a *lot* from year to year. I only shot
KCs for a relatively short period of time (1982-1988, inclusive) and
the oldest ones are sheer torture to scan. Then, around early 1984,
"something happened" and the red cast which until then was only
lurking in shadows extended to midtones pushing the blue cast up!?

Anyway, after that upheaval KC seems to have stabilized and the last
few years (1985-1988) are actually very good needing only a minor
adjustment afterwards. I don't know if it's only me (or my labs at the
time) or if this is documented elsewhere, but that's my experience.

Finally, removing the blue cast is not a case of a simple curve with a
single graypoint setting but it needs a more complex curve with at
least 3 points (~32-64, 128, ~192). I find that really does wonders
but it's time consuming. Once I'm done scanning and start editing I'm
hoping to come up with a single curve by averaging multiple images. As
things stand now, I'll need three such "global" curves: early blue
cast, the transition red-blue cast, and the minimal late KC period.
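
Applying such a 3-point curve is itself simple; a minimal sketch, with
the output values made up purely for illustration:

import numpy as np

# Anchor points per the description above (~32-64, 128, ~192), with the
# endpoints pinned; the output values here are illustrative only.
xs = np.array([0,  48, 128, 192, 255])   # input levels, 8-bit
ys = np.array([0,  40, 120, 188, 255])   # corrected output levels
curve = np.interp(np.arange(256), xs, ys).astype(np.uint8)

# Applying it to the blue channel of an 8-bit RGB image:
# img[..., 2] = curve[img[..., 2]]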

Don.
 
Noons

Don said:
archiving the rest. It's all a part of a long term project to
"digitize my life" and this includes all paper documents such as
letters and OCRing articles and... etc. with the idea of fitting
everything on a single USB disk I can take with me. Anyway...

:) Did you read the "Personal Petabyte" thing from Jim Gray?
Jim is a research fellow for Microslop and he has some brilliant
insights. One of his presentations has to do with the concept that
you need around one Petabyte to digitize your entire life.
And that we are not too far away from that sort of capacity in
desktops, maybe around 2010-2015. What happens then to
indexing and cataloguing all that is where his research comes
in. Very interesting stuff, in an IT-geek kinda way.
I scanned each slide twice and in order to reach into those deep
shadows the second scan was at +4 AG. The trouble is this left me with
very little "overlap" (too much clipping) so I used a modified boost
of R=+4, B=+3.4, G=+3.24 to maximize the gain. I then wrote a program
to combine these two scans by generating a lookup table. Without going
into too much detail (too late... ;o)) I create a "characteristic
curve" for each pair of scans which adds flexibility. There's a little
bit more to it than that but that's the gist of it.

Very interesting stuff. Similar to the technique of "sandwiching" two
scans into one, with one scan geared for highlights and the other
for deep shadows, then combining the best of both?
I also noticed that KCs differ a *lot* from year to year. I only shot
KCs for a relatively short period of time (1982-1988, inclusive) and
the oldest ones are sheer torture to scan. Then, around early 1984,
"something happened" and the red cast which until then was only
lurking in shadows extended to midtones pushing the blue cast up!?


They certainly do differ. I've got some KCs dating from the 1950s and
they scan COMPLETELY differently from my 1980s KC25s! And that's
with the Epson 4990, let alone what something with real quality would
do. I wonder if Kodak would be able to dig up some characteristic
curves for different periods? Probably not even interested...

Anyway, after that upheaval KC seems to have stabilized and the last
few years (1985-1988) are actually very good needing only a minor
adjustment afterwards. I don't know if it's only me (or my labs at the
time) or if this is documented elsewhere, but that's my experience.

I wonder if it was with the introduction of KC200? It was around that
time it showed up and Kodak back then said they had changed a few
things in the process to accommodate the higher speed film. I stopped
using KC around 1988, went for E64 and Fujichrome for the underwater
stuff (deep blues) and then never came back. Pity, I really do miss
it.
My best definition slides are without a doubt the K25s I took back in
the early 80s. I've never been able to match the quality with Velvia,
although it got much better of late.

hoping to come up with a single curve by averaging multiple images. As
things stand now, I'll need three such "global" curves: early blue
cast, the transition red-blue cast, and the minimal late KC period.

Having a load of fun, eh? ;)
 
Don

:) Did you read the "Personal Petabyte" thing from Jim Gray?

I've heard of it but haven't actually read any of his stuff. Of course,
the size will vary not only depending on the amount of data but how
recent it is. Digitizing old photographs with only about 300 dpi worth
of data is not the same as a 10 megapixel camera image or whatever.

But the concept of digitizing everything and having it in one place is
a very intuitive thing many people instinctively want to do. My idea
is to archive the originals but then also generate compressed versions
when applicable (JPG, MP3, etc) which all just may fit on a single HD.
Very interesting stuff. Similar to the technique of "sandwiching" two
scans into one, with one scan geared for highlights and the other
for deep shadows, then combining the best of both?

Yes, that's basically what it is. Also known as "contrast masking"
(CM). The trouble is CM doesn't "color coordinate" as I call it, or
"tone map" as the official high dynamic range term goes. With CM the
images are just blended resulting in a clear border between the two
for any image with even a hint of a gradient.

So I spent a lot of time trying to figure this out, doing countless
scans and plotting the results of how each channel changes as the
exposure goes up. The end result is my procedure which allows me to
combine any two scans without worrying where the border is. It can be
in the middle of a gradient and there is still a seamless composite
image without having to use any feathering or blurring.
They certainly do differ. I've got some KCs dating from the 1950s and
they scan COMPLETELY different from my 1980s KC25s! And that's
with the Epson 4990, let alone what something with real quality would
do. I wonder if Kodak would be able to dig-up some characteristic
curves for different periods? Probably not even interested...

I did check their site and they only have one single JPG with the
characteristic curve plot. However, what I needed was actual data I
could use to edit the image. I even took apart the Nikon KC profile to
get the data out but the problem was twofold. First, it only contains
4K of points, if memory serves (I wanted the full 64K), but more
importantly their KC mode doesn't really go far enough anyway.
I wonder if it was with the introduction of KC200? It was around that
time it showed up and Kodak back then said they had changed a few
things in the process to accomodate the higher speed film. I stopped
using KC around 1988, went for E64 and Fujichrome for the underwater
stuff (deep blues) and then never came back. Pity, I really do miss
it.

I don't know if KC200 was the culprit but something definitely
changed. At the time I switched to Ektachrome and they're a piece of
cake to scan. Not only does ICE work but I don't need to boost the
shadows scan as much as I did for Kodachromes. I find that +3 EV (AG)
is more than enough to penetrate film base between the frames which
means it can take care of any shot. Besides, since I use this +3 on
top of the nominal highlights exposure I end up with actual +3.25 to
+3.5 on average for the shadow scan and that takes care of any noise.
My best definition slides are without a doubt the K25s I took back in
the early 80s. I've never been able to match the quality with Velvia,
although it got much better of late.

I went completely digital a few years back and never shot any Velvia
but people keep saying how great it is - for scanning in particular.
With all this focus on scanning of late, I sort of got nostalgic and
took out my old Canon A1 and it's like riding a bike! ;o) I'm really
tempted to go back to analog and shoot a few rolls "just for fun"!
Having a load of fun, eh? ;)

Oh, yeah... The fun never ends! ;o)

Don.
 
Noons

Don said:
I've heard of it but haven't actual read any of his stuff. Of course,

Go here:
http://research.microsoft.com/~gray/talks/NYLFtalk.ppt
I find it an amazing presentation. Very intriguing.
Watch out: 7MB of ppt file...
Yes, that's basically what it is. Also known as "contrast masking"
(CM).

That used to be a darkroom technique as well, if I recall correctly?
Mostly for B&W in those days. But sometimes used for slide
copying as well.

So I spent a lot of time trying to figure this out, doing countless
scans and plotting the results of how each channel changes as the
exposure goes up. The end result is my procedure which allows me to
combine any two scans without worrying where the border is. It can be
in the middle of a gradient and there is still a seamless composite
image without having to use any feathering or blurring.

Got it. Very interesting stuff. Didn't know the curves for each
colour would change significantly with different exposures.
With all this focus on scanning of late, I sort of got nostalgic and
took out my old Canon A1 and it's like riding a bike! ;o) I'm really
tempted to go back to analog and shoot a few rolls "just for fun"!

It *is* fun! I'm having an absolute ball after 5 years of
digital-only.
Velvia is very nice and forgiving. And some of Fuji's colour negative
stock is not bad at all, it scans real nice. Dying to get my RB67
back from the regular service, going to give it a go with Velvia and
scanning and see what gives!
 
Don

Go here:
http://research.microsoft.com/~gray/talks/NYLFtalk.ppt
Thanks!!!

I find it an amazing presentation. Very intriguing.
Watch out: 7MB of ppt file...

Uh-oh! I'm on a POTS (plain old telephone service) line right now so it'll
take a while. Thanks for the warning!
That used to be a darkroom technique as well, if I recall correctly?
Mostly for B&W in those days. But sometimes used for slide
copying as well.

Yes, it's exactly the same thing but adapted to the digital paradigm.
Got it. Very interesting stuff. Didn't know the curves for each
colour would change significantly with different exposures.

That caused me so much trouble! There is the nominal difference -
which is expected - i.e. if blue peaks at histogram bin 100, increasing
exposure by 1 EV will (theoretically but not in real life) boost this
to 200 (in gamma 1). On the other hand, if red is at 50, the same
boost will take it only to 100, etc.

Now that's bad enough (and the reason why I modify the exposure) but
the KC characteristic curve is *not* linear! That is to say, different
parts of the curve behave differently. Ugh...!
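
In gamma-1 terms the theoretical rule is simply v' = v * 2^dEV: one
stop doubles every linear value, which is exactly the 100 -> 200 and
50 -> 100 arithmetic above. A nonlinear characteristic curve means no
single rule of that form holds across the whole range.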
It *is* fun! I'm having an absolute ball after 5 years of
digital-only.
Velvia is very nice and forgiving. And some of Fuji's colour negative
stock is not bad at all, it scans real nice. Dying to get my rb67
back from the regular service, going to give it a go with Velvia and
scanning and see what gives!

Yes, it's exactly Velvia I want to try because I hear so much good
stuff about it. The trouble is life keeps getting in the way... Maybe
at year end I'll have some time over the holidays.

But shooting a few rolls of Velvia has definitely gone way up on my
"2DO" list after I held the trusty old A1 in my hands again! I know
it's all nostalgia and very subjective but it just felt so good! ;o)

Don.
 
John

Don said:
Yes, it's exactly Velvia I want to try because I hear so much good
stuff about it. The trouble is life keeps getting in the way... Maybe
at year end I'll have some time over the holidays.

But shooting a few rolls of Velvia has definitely gone way up on my
"2DO" list after I held the trusty old A1 in my hands again! I know
it's all nostalgia and very subjective but it just felt so good! ;o)

Don.

I'm slightly aghast at anyone describing Velvia as 'forgiving' - I would
have suggested the opposite! I've just got back a roll of Velvia 50: most of
the shots are underexposed - partly my fault but also partly Velvia's. (Many
people rate Velvia 50 (actually in the process of being discontinued now) at
around 40 ASA - they reckon that Fuji deliberately over-rate it to increase
the saturation by slight underexposure - fine if you are projecting it, but
not if you are going to scan it.) However, the shadows on these shots are
just about opaque - even HDR wouldn't rescue them.

Now all this is moot because the new 100ASA rated film which is replacing
the old RVP 50 is supposed to be correctly rated. However, the
charateristics of Velvia which people seem to like are (a) its saturation,
(b) its slight magentary cast (is that a word? :) ) and (c) its
contrastiness. (a) and (b) are irrelevant if you are going to scan it and
process it digitally, (c) is definitely not what you want.

By all means try it and form your own opinion; however, for scanning, I
suggest Provia is a better bet.

We're going a bit off topic here - I apologise!
 
Roger S.

Hi John,
I think Don's comparing Velvia to Kodachrome, but still "forgiving"
isn't a word I'd use to describe Velvia. I shot a few rolls of it and
then switched to Provia and Sensia as they are much easier to work with.
 
Don

(b) its slight magentary cast (is that a word? :) )

It is now! ;o)

Far better than "magentatious" I would've used! ;o)
By all means try it and form your own opinion; however, for scanning, I
suggest Provia is a better bet.

We're going a bit off topic here - I apologise!

No, no, right on topic!!

I often heard both Velvia and Provia being mentioned here but I really
read all that out of the corner of my eye because I switched to
digital, as I said. Actually, I may have confused them because when
you're not paying attention those names do sound somewhat similar.

Anyway, any specific ratings/speed you would recommend in particular
(for scanning)? Any favorites?

Don.
 
