color profile embedding (not conversion) utility?

false_dmitrii

I'm experimenting with color management now that Paint Shop Pro X
supports multiple color spaces. Unfortunately, the software lacks a
mechanism to assign input profiles to untagged files. It can convert
an embedded input profile to current working space, and it can embed
the working space in the image, but it can't use input profiles as
working spaces or convert from an arbitrary profile in the absence of
an embedded one.

Is anyone here aware of a cheap or (preferably) free utility that can
embed a profile into a 16-bit TIFF with minimal fuss and no color
conversion? I can't imagine it's especially difficult, but then I'm
not a programmer. So far I've tried TIFFICC (from LCMS) and the demo
of QImage, but I couldn't figure out a way to get either to just "embed
profile" (and neither seems to like PSPX's method of saving 16-bit
TIFFs, either). Am I overlooking the obvious?

All suggestions are welcome. Thanks as always. :)

false_dmitrii
 
Roy

false_dmitrii said:
I'm experimenting with color management now that Paint Shop Pro X
supports multiple color spaces. Unfortunately, the software lacks a
mechanism to assign input profiles to untagged files. It can convert
an embedded input profile to current working space, and it can embed
the working space in the image, but it can't use input profiles as
working spaces or convert from an arbitrary profile in the absence of
an embedded one.

Is anyone here aware of a cheap or (preferably) free utility that can
embed a profile into a 16-bit TIFF with minimal fuss and no color
conversion? I can't imagine it's especially difficult, but then I'm
not a programmer. So far I've tried TIFFICC (from LCMS) and the demo
of QImage, but I couldn't figure out a way to get either to just "embed
profile" (and neither seems to like PSPX's method of saving 16-bit
TIFFs, either). Am I overlooking the obvious?

All suggestions are welcome. Thanks as always. :)

false_dmitrii
Hi

What happens when you open an untagged image, and then "Save As"? Does it
have a Tagged Profile the next time you open it, or has it just been left
Untagged?

If you are unhappy then perhaps you should try Photoshop or even Photoshop
Elements.

Roy G
 
false_dmitrii

Roy said:
What happens when you open an untagged image, and then "Save As"? Does it
have a Tagged Profile the next time you open it, or has it just been left
Untagged?

PSPX can embed the current working space in a file. However, it cannot
use input profiles as its working space. There's no built-in way to
tag the file prior to opening it into the working space.
If you are unhappy then perhaps you should try Photoshop or even Photoshop
Elements.

Photoshop is too expensive. Elements 3 can't even provide working
spaces other than sRGB and AdobeRGB. Is Elements 4 that much better?
Picture Window Pro strikes me as a much better use of money for $100
color management, but I'd like to solve the profile embedding problem
without spending quite so much.

false_dmitrii
 
Roy

false_dmitrii said:
PSPX can embed the current working space in a file. However, it cannot
use input profiles as its working space. There's no built-in way to
tag the file prior to opening it into the working space.


Photoshop is too expensive. Elements 3 can't even provide working
spaces other than sRGB and AdobeRGB. Is Elements 4 that much better?
Picture Window Pro strikes me as a much better use of money for $100
color management, but I'd like to solve the profile embedding problem
without spending quite so much.

false_dmitrii

Hi.

I don't know about Elements 4, and only know about 2 because I got a free
one with my scanner.

Are you trying the high quality technique of scanning, without colour
managing in the scanner, and then Assigning the scanner profile at Import,
and Converting to Working Space?

From what I can gather that gives the best results, but if your program is
not advanced enough, then perhaps you should just stick to Colour Managing
the Scan process, and Converting to Working Space at Import.

I don't think there is anything to be gained by using anything other than a
recognised Working Space Profile as your actual Working Space. Which is
probably why your program does not seem to allow it. That would rather
defeat the object of having Device Independent Working Space Profiles and
tagging them to images.

Roy G
 
false_dmitrii

Roy said:

Are you trying the high quality technique of scanning, without colour
managing in the scanner, and then Assigning the scanner profile at Import,
and Converting to Working Space?

Trying to try. I don't have a target or custom profiles at the
moment...and I won't until I have a reliable way to use them. :)

I don't think there is anything to be gained by using anything other than a
recognised Working Space Profile as your actual Working Space. Which is
probably why your program does not seem to allow it. That would rather
defeat the object of having Device Independent Working Space Profiles and
tagging them to images.

The only time you would gain would be if your software can apply
profiles only by first having them set as the working space. Like
Paint Shop Pro X. :)

false_dmitrii
 
Winfried

Try Picture Window Pro from http://www.dl-c.com
They have a 4-week trial version (full function) in the download
section.
All functions work with 16-bit TIFFs (and 48-bit) and the program has
a good implementation of color management.
The program can be customized to ask you which profile should be
embedded if there is none in the file.
There is also a transformation "Change Color Profile" that has a
pulldown option "Profile setting only" that will also solve your problem.

Qimage may also work, but it is a bit more tricky and I am not sure
whether 16-bit is supported.

Winfried.
 
Bart van der Wolf

SNIP
Qimage may also work, but it is a bit more tricky and I am
not sure whether 16-bit is supported.

Qimage reads 16-bit/channel files, but output is 8-b/ch (printer
drivers are 8-b/ch).

Bart
 
mp

Two issues here:

1. Preserving Color Accuracy
I'm assuming that's the goal behind attaching a profile to an image.
How about switching to LAB color instead? Better photo processors will
make a print with no problems; others will choke on LAB. Once you use LAB
for a while it becomes second nature and much more intuitive than
popular color models.

2. High-bit?
I assume these are color images, so I don't see any benefit to keeping
16-bits around. You can't see them on the monitor and they don't print
even in the above-average production environment. So why do you keep
them?
 
Lorenzo J. Lucchini

mp said:
Two issues here:

I'm not sure what post you're replying to. I think you should provide
some quoting for clarity.
1. Preserving Color Accuracy
I'm assuming that's the goal behind attaching a profile to an image.
How about switching to LAB color instead? Better photo processors will
make a print with no problems; others will choke on LAB. Once you use LAB
for a while it becomes second nature and much more intuitive than
popular color models.

I don't really know much about LAB, but I think converting to LAB would
have the same issues as applying (not attaching, applying) a profile:
you would get quantization errors.

Unless you work with floating-point LAB or whatever, but that's usually
not too computationally feasible.
2. High-bit?
I assume these are color images, so I don't see any benefit to keeping
16-bits around. You can't see them on the monitor and they don't print
even in the above-average production environment. So why do you keep
them?

There's usually no point in keeping 16 bits per channel *in the final
image*, if it's going to be printed (or otherwise "finalized") and never
edited.

But as long as an image is still going to be edited (and note that
applying a profile that was just *attached* to the image would be an
editing operation, as would be any gamma correction the printer decides
to apply), 16 bits may well be quite useful.


by LjL
 
mp

you would get quantization errors

What's the benefit of working in device dependent color spaces when
one *still* has to run test prints? It's an uglier version of your
"quantization problem." Eliminate the device problem on your PC, work
in LAB. What's so hard about that?
There's usually no point in keeping 16 bits per channel *in the final
image*, if it's going to be printed (or otherwise "finalized") and never
edited.
Would your favorite image editor somehow use those 16-bits to make a
better looking image later? No. Now, keeping the original capture in
16-bits makes archival sense. But chances are pretty good we won't do
anything with them because we can't see them.
 
Lorenzo J. Lucchini

mp said:
What's the benefit of working in device dependent color spaces when
one *still* has to run test prints? It's an uglier version of your
"quantization problem. " Eliminate the device problem on your PC, work
in LAB. What's so hard about that?

I don't follow you at all. Why does LAB avoid having to run test prints?
Would your favorite image editor somehow use those 16-bits to make a
better looking image later? No.

Uh, yeah, if you decide (as an example) to boost the range of the
highlights or the shadows.
Now, keeping the original capture in
16-bits makes archival sense. But chances are pretty good we won't do
anything with them because we can't see them.

We can't see them until we make edits that let us see them, that's
pretty evident.

But I can't follow your line of reasoning: if we can't do anything with
the 16-bits, why does it make archival sense to keep them?


by LjL
 
mp

I don't follow you at all. Why does LAB avoid having to run test prints?
http://developer.apple.com/document...o/csintro_colorspace/chapter_3_section_5.html

When Adobe runs out of ideas, they'll probably get their Photoshop
gurus talking it up. For now, they fear LAB and tell you there are
"terrible problems" with blacks which are far worse than the terrible
problems of battling RGB/CMYK + paper + ink + light problems.

It's really great working in LAB. Especially with a photoprocessor who
can run your LAB images. Not all problems go away, but it's *so* much
simpler and easier. Until Adobe makes you pay for another version of
Photoshop with New LAB tools!!! I guess I'll be one of the few using
it.
Uh, yeah, if you decide (as an example) to boost the range of the
highlights or the shadows.
The shadow is rendering in 8-bit regardless of its boosted state. Are
you suggesting there is "hidden" high-bit shadow data? Hmm, well your
favorite image editor throws those out too when you "boost". It's just
not that elegant. I wish it was different too.
if we can't do anything with
the 16-bits, why does it make archival sense to keep them?
In theory, there is more data for future devices not yet invented to
render the image. But in practice, the only urgency lies around
preserving and make more pleasing what humans can see.
 
Lorenzo J. Lucchini

mp said:
I don't follow you at all. Why does LAB avoid having to run test prints?

[snip: LAB]

Look, I really don't know enough about LAB to take a stance.

All I can say is that you can *attach* a profile to an image instead of
*applying* it, and this has advantages (wrt quantization errors); I
assume you could do the very same if you (and your profiling program)
take LAB as the standard color space to work in.

But profiling would still be involved, whether you use LAB as the
standard space or not, though maybe using LAB would have advantages over
using something else -- again, I don't really know.
The shadow is rendering in 8-bit regardless of its boosted state. Are
you suggesting there is "hidden" high-bit shadow data?

Yeah. Or highlight, or midtone, or something. If there isn't any, well,
then your scanner is an 8-bit scanner that's sold with a 16-bit A/D just
for fooling customers.

Clearly, no current (and probably no future) scanner has real data in
*all* of the 16 bits, but I do make the assumption that most scanners
have *some* valid bits beyond the 8 most significant.

Otherwise, surely, doing *anything* 16-bit would be a *complete* waste
of time.
Hmm, well your
favorite image editor throws those out too when you "boost". It's just
not that elegant. I wish it was different too.

Throws out? Why?
Things are "thrown out" when and if I convert a 16-bit image to 8-bit,
that's just life.
But I can *decide* to some extent *what* is thrown out: how? precisely
by adjusting curves, for example to boost the shadows, as I said.

I don't quite see how my image editor (unless my image editor can only
work at 8-bit, of course) would throw out things that can (or are made
to) fit in the image's 8 bits.
In theory, there is more data for future devices not yet invented to
render the image.

I *really don't think* this is the reason. If it's data the eye can't
see, anyway, it would make no sense to print (or otherwise render) them.
But in practice, the only urgency lies around
preserving and make more pleasing what humans can see.

The current urgency, as I see it, lies around getting a final 8-bit
image that's as color-rich and as little posterized (quantized) as possible.
As soon as just about any editing is made on the original scanned image,
more than 8 bits of input are needed to fill the whole 8-bit output
range nicely.
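
That last point can be sanity-checked numerically: feed every level of an 8-bit scale and a 16-bit scale through a shadow-boosting tone curve and count how many of the 256 output levels each can still reach. The curve below (a plain square root) and the bit depths are purely illustrative:

```python
GAMMA = 0.5  # an illustrative shadow-boosting tone curve (output = input ** 0.5)


def reachable_output_levels(input_bits):
    """Count the distinct 8-bit output values the curve produces when fed
    every level of an input scale that is input_bits deep."""
    levels = 2 ** input_bits
    outputs = {round(((v / (levels - 1)) ** GAMMA) * 255) for v in range(levels)}
    return len(outputs)


print(reachable_output_levels(8))   # well short of 256: gaps, i.e. posterization
print(reachable_output_levels(16))  # all 256 output levels are filled
```

With 8-bit input the very first step already jumps from level 0 to about level 16, so output values 1 through 15 are simply unreachable; the 16-bit input has enough intermediate levels to fill every output value.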


by LjL
 
Don

Eliminate the device problem on your PC, work in LAB.

Like any color space LAB has both, advantages and disadvantages. It
really depends on the task at hand. Therefore, insisting that
everything be done in one color space (whatever that color space may
be) is a bit of a blunt instrument. It's that old adage: "Horses for
courses".
Would your favorite image editor somehow use those 16-bits to make a
better looking image later?

If you work in 16-bit on a difficult image, absolutely!
Now, keeping the original capture in
16-bits makes archival sense. But chances are pretty good we won't do
anything with them because we can't see them.

We may not see all of the colors in a 16-bit image but we can
certainly see the dynamic range i.e. the brightness!

There are already experimental monitors which can display high dynamic
range images without compression and, eventually, they'll work their
way into a consumer product at which point the final output for
viewing will be 16-bits (or more?) not 8-bits.

Don.
 
mp

All I can say is that you can *attach* a profile to an image instead of
*applying* it, and this has advantages (wrt quantization errors); I
assume you could do the very same if you (and your profiling program)
take LAB as the standard color space to work in.

LAB process is much simpler:
1. establish white point/black point
2. render image
Therefore, insisting that everything be done in one color space...
I'm not insisting, I'm making a suggestion. It's a good tool in the
toolbox for sure.

LAB can simplify color. Again, I guess you will wait for Adobe to get
on board before it's deemed "The next big thing."
Otherwise, surely, doing *anything* 16-bit would be a *complete* waste of time.
High-bit work is most relevant when working with monotone images (more
shades of a single color). Other than archiving the original scan or
perhaps multisampling a shadow-filled scan on a scanner, there's
little technical merit for working with high-bit images.

Do whatever you want, but please take this away from the thread:

Whatever common knowledge is out there on digital imaging much of it
was created by people with graphics production experience, not digital
imaging hardware/software development experience. If they worked with
anyone on the developer side, they worked with the Marketing dept. Not
engineering. Therefore, much of the common knowledge practice has no
basis in how the technology *really* works.
 
Roger S.

"Whatever common knowledge is out there on digital imaging much of it
was created by people with graphics production experience, not digital
imaging hardware/software development experience."

How do you know that, and what's your source of information on "how the
technology really works"?

Scanner hardware, monitors, and contone printers are RGB devices, so
why not work in RGB? Not that I have anything against LAB- use it if
you want to.
 
Lorenzo J. Lucchini

mp said:
LAB process is much simpler:
1. establish white point/black point
2. render image

Excuse me, how are you going to correct whatever color bias your scanner
(or printer, on the other side of the chain) has by doing *just that*?

A profile *must* somehow be involved, whether you work in LAB or not.
I'm not insisting, I'm making a suggestion. It's a good tool in the
toolbox for sure.

LAB can simplify color. Again, I guess you will wait for Adobe to get
on board before it's deemed "The next big thing."

LAB might be quite useful, I'm not trying to imply otherwise, I just
don't know much about it.
High-bit work is most relevant when working with monotone images (more
shades of a single color). Other than archiving the original scan or
perhaps multisampling a shadow-filled scan on a scanner, there's
little technical merit for working with high-bit images.

How is multi-sampling on a scanner different from single-sampling (at
16-bit) on another scanner that has less noisy sensors?

Or are you simply saying that *no* scanner has sensors good enough to
get you anything after the 8th bit, and that the *only* way to obtain
something in the 9th bit and beyond is by multi-sampling?
Do whatever you want, but please take this away from the thread:

Take away what?
Whatever common knowledge is out there on digital imaging much of it
was created by people with graphics production experience, not digital
imaging hardware/software development experience. If they worked with
anyone on the developer side, they worked with the Marketing dept. Not
engineering. Therefore, much of the common knowledge practice has no
basis in how the technology *really* works.

If you say so.
At this point, I'd really like some comments from someone else on this.


by LjL
 
mp

Excuse me, how are you going to correct whatever color bias...
Color management's task is to keep track of what the "real" (device
dependent) color of an image is supposed to be. With LAB, you always
have the "real" color encoded in the file. (device independent)

FYI: Color bias is normally corrected at the firmware/driver.
Multi-sampling:
Some scanners have the ability to multi-sample. They read the same
line X number of times, compare results and have some limited
intelligence as to which values to choose.
Scanning at a high-bit, there's not as much sampling going on. So, the
precision goes down a great deal.

I come by all this knowledge working for an OEM a little while ago.
I've been laughed at so many times by the Engineers before they kicked
me out of their offices that I've learned the hard way.
 
Lorenzo J. Lucchini

mp said:
Color management's task is to keep track of what the "real" (device
dependent) color of an image is supposed to be. With LAB, you always
have the "real" color encoded in the file. (device independent)

I don't see why this should be a good thing, while I can see at least
one reason (quantization errors) why it may become a bad thing.
FYI: Color bias is normally corrected at the firmware/driver.

Not if you don't correct it and just attach the scanner's profile to the
scanned image -- something that can have advantages and disadvantages,
as I see it.

The main advantage is, again, the avoidance of quantization errors when
you'll later have to apply a printer (or whatever) profile.

One disadvantage, as I was discussing with Don, is IMHO that, if you
scan at 8-bit, *applying* (and not just attaching) the scanner's profile
*at scan time* (i.e. internally, which is often done at more than 8-bit)
may (or may not, as Don notes) give better quantized results.
Some scanners have the ability to multi-sample. they read the same
line X number of times, compare results and have some limited
intelligence as to which values to choose.

I know what multi-sampling is. As far as I know, though, the "limited
intelligence" is usually just "take the average" (and I don't even think
one is able to do much better than that).
Scanning at a high-bit, there's not as much sampling going on. So, the
precision goes down a great deal.

Scanning at a high-bit depth *without multi-sampling*, I suppose you mean.

Yeah, of course, single-sampling will always have less precision than
multi-sampling (unless you don't have enough bits to hold the better,
multi-sampled data).

But what I asked you was: if multi-sampling on scanner A is capable of
producing data that go meaningfully beyond the 8th bit, why shouldn't
single-sampling on scanner B (which, assume, has a better, less noisy
CCD than A) be capable of doing the same?
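
The averaging behind multi-sampling is easy to simulate: the mean of four noisy reads has about half the spread of a single read (noise falls as the square root of the sample count), which is the same gain a single read from a sensor with half the noise would give. The brightness and noise figures below are made up for illustration:

```python
import random
import statistics

random.seed(42)

TRUE_LEVEL = 100.0  # the "real" brightness of one pixel
NOISE_SD = 8.0      # per-read sensor noise (modeled as Gaussian)


def measure(n_samples):
    """One measurement: the average of n noisy reads of the same pixel."""
    reads = [random.gauss(TRUE_LEVEL, NOISE_SD) for _ in range(n_samples)]
    return sum(reads) / n_samples


single = [measure(1) for _ in range(5000)]
multi4 = [measure(4) for _ in range(5000)]  # 4x multi-sampling

print(statistics.stdev(single))  # close to the per-read noise of 8
print(statistics.stdev(multi4))  # roughly half of that
```

Nothing in the simulation cares whether the halved noise comes from averaging four reads or from a quieter CCD: the resulting precision is the same either way.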

by LjL
 
Gordon Moat

mp said:
LAB process is much simpler:
1. establish white point/black point
2. render image

Agreed that LAB is much simpler, though at some point an output needs to be created
from an image. Picking an editing space, like Ektaspace, ProPhotoRGB, et al, can
greatly simplify the workflow, though if you have lots of time stick to LAB. There
are also some advantages for certain edits to be done in LAB only, since they
involve essentially imaginary colours, in other words colours that cannot be
recreated in a print.

I make an assumption of printing here because monitors are very limited. I doubt
many would have the latest EIZO on their desk, though another reality is that type
of monitor is just a proofing solution to get near SWOP. A real issue is going from
LAB, or some RGB working space, to some flavour of SWOP, or some sort of CMYK,
HiFiColour, HexaChrome, or custom Pantone combination for printing. With so many
printing possibilities now available, the device independent LAB choice is a good
choice for storage when future printing is not known.
I'm not insisting, I'm making a suggestion. It's a good tool in the
toolbox for sure.

LAB can simplify color. Again, I guess you will wait for Adobe to get
on board before it's deemed "The next big thing."

I remember several years ago the BruceRGB was all the rage. This was an attempt to
simplify workflow for printing outputs (commercial printing, not desktop inkjet).
Then it seemed that everyone jumped on AdobeRGB (1998), and lately more people
playing around with sRGB. There is a bit of that "flavour of the month" feeling to
all this . . . sort of makes me laugh sometimes.
High-bit work is most relevent when working monotone images. (more
shades of a single color) Other than archiving the original scan or
perhaps multisampling a shadow-filled scan on a scanner, there's
little technical merit for working with high-bit images.

It makes for slightly less destructive further editing of images. Unfortunately
most printing systems and layout software only handle 8-bit images. If someone
knows the output, and the scan needs no changes, then an 8-bit only workflow can
greatly streamline your work. Saving time is not a bad reason for doing 8-bit.
Do whatever you want, but please take this away from the thread:

Whatever common knowledge is out there on digital imaging much of it
was created by people with graphics production experience, not digital
imaging hardware/software development experience.

Sure, check out <http://www.gracol.org> for some of the latest. The idea is that
lots of scanning uses are for things that will be offset printed. Obviously there
is also scientific imaging, and another group of standards and experts. It really
depends upon the end uses for those scans.
If they worked with
anyone on the developer side, they worked with the Marketing dept. Not
engineering. Therefore, much of the common knowledge practice has no
basis in how the technology *really* works.

GraCOL is open for submissions of concepts that will improve printed results. If an
engineer or scientist really wanted to make a difference and improve the potential,
then contacting GraCOL is the way to help. The GraCOL standard is new and emerging,
though the intention is to do better than SWOP <http://www.swop.org>.

There are several colour scientists involved in the industry. Whether they are
listened to on everything sometimes comes down to cost and time. With high end
devices, the better concepts and ideas are more implemented than they are in more
general applications.

Time constraints limit commercial results more than any other factor. Those who do
this as a hobby, or without time constraints, have the luxury of achieving better
results.

The popularity of certain writers within the industry (like Bruce Fraser, Seth
Resnik, et al) determines more of how things get done in graphics production than
any science or engineering. If some engineers and scientists were better writers,
then there might be more people doing scanning in a better manner.

Ciao!

Gordon Moat
A G Studio
<http://www.allgstudio.com>
 
