Why is high resolution so desirable?


bxf

Dan said:
...

As long as the number of pixels in the source image is less than the number
being displayed, increased resolution doesn't buy you anything when viewing
the image. If the number of pixels in the source image is greater than the
current display setting, then a higher display "resolution" will improve the
picture because more of the source pixels can be represented.

For example:
Your screen is set at 800x600 = 480,000 pixels = 0.48 MegaPixels.
You have a digital camera that takes a 2 MegaPixel picture = 1600x1200.
You will only be able to see about 1/2 of the detail in the picture if you
display the picture full screen. However, your buddy has a "high resolution"
monitor capable of 1600x1200 pixels. When he views the picture full
screen, he will see it in all its glory. }:) Now given that 4, 5, 6, and even
8 MP cameras are common today, you can see why higher resolutions
can be convenient for displaying and working with digital images.

Sorry Dan, the above is incorrect.

If you view a large image on a screen set to 800x600, you will see only
a portion of the image. If you view the same image with a 1600x1200
setting, the image will be smaller and you will see a larger portion of
it. That's all. There's nothing here that implies better detail. The
image may appear SHARPER at 1600x1200, but that is simply because the
detail is smaller, just like small TV screens look sharper than large
ones.
In the case of a DVD, the picture is something like 852x480 (16:9 widescreen).
Your 800x600 display will display nearly all the information in every
frame of the DVD. On your buddy's system, either the picture will be smaller,
or interpolated to a larger size (likely causing a small amount of degradation).
You might argue that a screen setting just large enough to display a complete
852x480 window gives the best results for watching a movie.

Well, this makes sense to me, and I'm trying to confirm that I'm
understanding things correctly. In addition to less degradation, there
should also be less CPU overhead, due to the absence of interpolation.
That's fine for DVD, but what if you want to watch your HD antenna/dish/cable
feed? Then you might want 1280x720, or even 1920x1080 to see all the detail
in the picture.

Once again, the monitor setting does not improve the detail you can
see. If your IMAGE is larger (e.g. 1920x1080 vs 1280x720), THEN you are
able to see more detail. But this is not related to your MONITOR
setting, which is only going to determine the size of the image and
hence what portion of it you can see.
 

bxf

J. Clarke said:
Depends on the image. If it's 100x100 then you don't gain anything, if it's
3000x3000 then you can see more of it at full size or have to reduce it
less to see the entire image.

OK, but this is not a quality issue. You view the image at a size that
is appropriate for your purposes.

If I'm photoediting an image, I need to see a certain level of detail
in order to work. That means that, on any given monitor, I must have
the image presented to me at a size that is convenient for my intended
editing function. Does it matter whether this convenient size is
achieved by adjusting monitor "resolution" or by interpolation (either
by the video system or by the application)? If the "resolution" setting
is low, then I would ask the application to magnify the image, say,
20x, whereas at a higher "resolution" setting I may find it appropriate
to have the application magnify the image 40x (my numbers are
arbitrary). Is there a difference in the end result?
 

bxf

In addition to the above we have the question of the larger pixels, but
I don't know how to fit that into the equation.
 

J. Clarke

bxf said:
OK, but this is not a quality issue. You view the image at a size that
is appropriate for your purposes.

If I'm photoediting an image, I need to see a certain level of detail
in order to work. That means that, on any given monitor, I must have
the image presented to me at a size that is convenient for my intended
editing function. Does it matter whether this convenient size is
achieved by adjusting monitor "resolution" or by interpolation (either
by the video system or by the application)? If the "resolution" setting
is low, then I would ask the application to magnify the image, say,
20x, whereas at a higher "resolution" setting I may find it appropriate
to have the application magnify the image 40x (my numbers are
arbitrary). Is there a difference in the end result?

Suppose your monitor could display one pixel? How much more useful to you
would a monitor that can display two pixels be? How about four? See where
I'm going?

If the application magnifies 40x, whether you get a benefit from higher
resolution or not depends again on the image size. If the feature size on
the image is, at 40x, still smaller than the pixel size, then you gain from
the higher res. If not, then you don't. One thing you do gain if you use
the default settings for font size and whatnot is that there is more
available screen area to display your image and less of it taken up by
menus and the like.

If you're used to low resolution and you change to high resolution then you
may not notice much difference. But when you go back to low-res you almost
certainly will.
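A rough way to see this in numbers, assuming the monitor's physical size stays fixed and only the display setting changes (the 16-inch width and 10-inch image region below are made-up figures, and the arithmetic is a sketch, not anyone's exact claim):

```python
def screen_pixels_across(monitor_width_inches, display_width_px, region_width_inches):
    """Physical screen pixels available across a region of the given width."""
    return region_width_inches * display_width_px / monitor_width_inches

# Keep the image 10 inches wide on a 16-inch-wide monitor and compare settings:
for setting in (800, 1600):
    px = screen_pixels_across(16.0, setting, 10.0)
    print(setting, "->", round(px), "screen pixels across the image")
# 800 -> 500, 1600 -> 1000. If the portion of the image being viewed has more
# than 500 pixels across, the higher setting can show detail the lower one
# must throw away; if it has fewer, both settings show everything, and the
# higher one merely interpolates/replicates to fill the same physical area.
```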
 

bxf

J. Clarke said:
If the feature size on
the image is, at 40x, still smaller than the pixel size, then you gain from
the higher res. If not, then you don't.

I believe this statement is relevant, but I need to know what you mean
by "feature size". Also, by "pixel size" do you mean the physical size
of the pixel on the monitor?

J. Clarke said:
If you're used to low resolution and you change to high resolution then you
may not notice much difference. But when you go back to low-res you almost
certainly will.

While I'm writing as if I believe that "resolution" setting makes no
difference to the image we see, I am in fact aware that this is not the
case. I know that at low settings the image looks coarse.
 

Bob Myers

bxf said:
If I'm photoediting an image, I need to see a certain level of detail
in order to work. That means that, on any given monitor, I must have
the image presented to me at a size that is convenient for my intended
editing function. Does it matter whether this convenient size is
achieved by adjusting monitor "resolution" or by interpolation (either
by the video system or by the application)? If the "resolution" setting
is low, then I would ask the application to magnify the image, say,
20x, whereas at a higher "resolution" setting I may find it appropriate
to have the application magnify the image 40x (my numbers are
arbitrary). Is there a difference in the end result?

OK - I think I see what the basic question really is, now, and
also let me apologize for not having been able to keep up with the
conversation the last couple of days due to some business travel.

Unfortunately, the answer to the above is going to have to be "it
depends." Let's consider an original image with a pixel format
far beyond anything that could reasonably be accommodated, in
total, on any current monitor - say, something like a 4k x 3k image.
And all you have to view (and edit) this image on is a 1024 x 768
display. Clearly, SOMETHING has to give if you're going to
work with this image on that display.

You can, as noted, scale the full image down to the 1024 x 768
format of the display - which is effectively a combination of
resampling and filtering the high-resolution information available
in the original down to this lower format. Obviously, you
unavoidably lose information in presenting the image this way, since
you only have about 1/16 of the original pixels to deal with.
The other way is to treat the display as a 1024 x 768 "window"
into the original 4k x 3k space, which preserves all of the original
information but which means that you can't possibly see everything
at once. (We'll ignore intermediate combinations of these for the
moment.)

If you go with the latter approach, you can examine all of the detail
the original has to offer, but if you're trying to gauge qualities of
the original image which can't be observed by only looking at a
small part (the overall color balance or composition, say), then clearly
this isn't the way to go. Looking at the scaled-down image, on the
other hand, lets you see these "overall" qualities at the cost of not
being able to examine the fine details. So the answer to the question of
which one is "best" depends almost entirely on just what you're trying
to do with the image. For a lot of image-editing or creation work,
the optimum answer is going to be a combination of these two
approaches - showing a scaled-down but "complete" image for
doing color adjustments and so forth, and working with the raw
"full-resolution" version to observe and tweak the full details. As
long as you preserve the original 4k x 3k DATA somewhere, no
matter how you VIEW it, nothing is really lost either way.
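For what it's worth, the two strategies can be sketched in a few lines of Python/NumPy. The 4096x3072 array below is just a stand-in for the hypothetical "4k x 3k" original, and 4x4 block averaging stands in for a proper resample-and-filter:

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((3072, 4096))        # stand-in for the 4k x 3k data

# Strategy 1: scale the whole image down to 1024x768 (here: 4x4 block
# averaging, a crude resample-and-filter). All of the image, 1/16 of the pixels.
scaled = original.reshape(768, 4, 1024, 4).mean(axis=(1, 3))

# Strategy 2: treat the display as a 1024x768 window into the full data.
# All of the detail, but only about 1/16 of the image at a time.
window = original[:768, :1024]

print(scaled.shape, window.shape)          # (768, 1024) (768, 1024)
print(original.size // scaled.size)        # 16 -> the "1/16 of the pixels" figure
```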

Did that help more than the previous takes on this?

Bob M.
 

Bob Myers

J. Clarke said:
The size of a single pixel of the raw image.

OK, but this gets into another often-overlooked aspect of
digital imaging, or rather the spatial sampling of images.
While you can speak of the pixel "pitch" of the raw image
(in terms of pixels per inch or cycles per degree or whatever),
the "pixels" of that image technically do not have ANY physical
size. Sampling theory requires that we consider them as
dimensionless point samples; in other words, ideally we have
a set of "pixels" which represent luminance and color information
taken at zero-size sampling points across the image. When
DISPLAYING this information, we then have to deal with various
forms of filtering that are then imposed upon this array of sampled
values (the most basic being just a "rectangular" or "block" filter,
i.e., the values of that sample are considered as applying equally
over the full width and height of a given physical area), but it is
this process which then introduces error/noise into the representation
of the image. (Now, it may be that the original image information
was produced by a device which does, in fact, employ physical
"pixels" of a given size - but when dealing with samples images
from a theoretical or mathematical perspective, it's still important
to understand why a "pixel" in the image data is to be considered
as a point sample.)
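A tiny sketch of the same distinction, in one dimension (the numbers are arbitrary): the sampled data are just values at dimensionless sample points, and a physical "pixel size" only appears once a reconstruction filter is applied for display, here the basic block (zero-order hold) filter:

```python
import numpy as np

samples = np.array([0.1, 0.9, 0.4, 0.7])   # point samples: values only, no size

# Displaying them as "pixels" eight units wide imposes the basic block
# (zero-order hold) reconstruction filter -- each value replicated over a width:
block_reconstruction = np.repeat(samples, 8)

print(samples.shape, block_reconstruction.shape)   # (4,) (32,)
# The information is still just the four sampled values; the "size" belongs
# to the reconstruction done by the display, not to the samples themselves.
```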

Dr. Alvy Ray Smith, who was the first Graphics Fellow at
Microsoft and one of the cofounders of Pixar, wrote a very readable paper
explaining why this is an important distinction to make; it can be
found at:

ftp://ftp.alvyray.com/acrobat/6_pixel.pdf


Bob M.
 

bxf

Firstly, my apologies for the delay in responding. As I work away from
home, I have no web access over the weekend. Also, I am in a European
time zone.

Rather than make specific quotes from the last few posts, let me say
that all the provided info is useful and appreciated. I do find that
the questions in my own mind have been redefining themselves somewhat
as the thread progresses.

Bob, I'm glad you had comments to make about John's statement
"The size of a single pixel of the raw image", because I would not
have known how to tackle that. I was not able to associate the term
"size" with a pixel of an image. To me, it has no size. At least
not until I print it or display it on a monitor.

And yet, paradoxically, I can't help but feel that this statement is
relevant to the one issue that I feel still has not been fully answered
here. Specifically, what is the significance of large monitor pixels,
as opposed to small ones? I can see that if image pixels did in fact
have size, then one could express a relationship between image pixel
size and monitor pixel size. But, as Bob explains, image pixels have no
size of their own.

So, at the risk of repeating myself, let's see if the following will
help us pinpoint the question (my current question) more precisely: if
we have an image of 100x100 pixels, what is the difference, if any,
between displaying it at 200% magnification on a monitor set to
1600x1200, and displaying it at 100% magnification on a monitor set to
800x600? There is no issue here with resolution or detail, as in either
case all the pixels are visible in their entirety, nor is there an
issue with image size, as in either case the displayed image will be
exactly the same size. Are small pixels "better" than large ones?

If the answer to the above is "no real difference", then I would
have to wonder why not run at a lower monitor "resolution" setting
and relieve the video system of some of the hard work it must do when
coping with high "resolution" settings (ignoring, of course, the
need for a high setting when it is required in order to view the
desired portion of the image). I believe this question is valid for
those situations where one in fact has control over everything that is
displayed. Unfortunately, this is not often the case. We can control
the size of an image or a video clip, but we cannot usually control the
size of the application's user interface. Nor the size of the
desktop, explorer, or whatever. Because of this, it seems to me that my
questions have no potential practical benefit. Perhaps one day we will
have scalable GUIs, etc, at which time my points will have more
significance.
 

Bob Myers

bxf said:
Bob, I'm glad you had comments to make about John's statement
"The size of a single pixel of the raw image", because I would not
have known how to tackle that. I was not able to associate the term
"size" with a pixel of an image. To me, it has no size. At least
not until I print it or display it on a monitor.

Right - it has no size at all. What it DOES have, though - or rather,
what the entire set of image data represents - is a certain spatial
sampling frequency (or usually a pair of such frequencies, along
orthogonal axes, even though they are often the same value).
Nyquist's sampling theorem applies to image capture just as well
as to anything else - anything within the original which represents
a spatial frequency greater than 1/2 the sampling rate (in terms of
cycles per degree or however you choose to measure it) CANNOT
be captured in the sampled data or, worse, results in undesirable
artifacts through an "aliasing" process (which is precisely what the
infamous "Moire distortion" really is).

bxf said:
And yet, paradoxically, I can't help but feel that this statement is
relevant to the one issue that I feel still has not been fully answered
here. Specifically, what is the significance of large monitor pixels,
as opposed to small ones?

And if it's in those terms - "large" vs. "small" pixels, with no other
considerations - then there is no significance at all. You must know
the size of the image in question, and the distance from which it
will be observed, to make any meaningful comments about what
differences will result from different "pixel sizes." Concerns over
the "pixel size of the original image" are really bringing up a related
but distinct topic, which is the effect of resampling the image
data (if "scaling" is done) in order to fit it to the display's pixel
array. And as long as you are UPscaling (i.e., going to a higher
effective sampling frequency), this process can be done with
zero loss of actual information (which is not the same thing, of
course, as saying that the resulting upscaled image LOOKS the
same). Downscaling (downsampling) must always result in a
loss of information - it's unavoidable.
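That asymmetry is easy to demonstrate with the crudest possible resamplers (pixel replication going up, block averaging going down); real scalers use better filters, but the information argument is the same. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((4, 4))

# Upscale 2x by pixel replication: blocky, but every original value is still
# there and can be recovered exactly.
up = np.kron(img, np.ones((2, 2)))
print(np.array_equal(img, up[::2, ::2]))   # True: nothing lost

# Downscale 2x by 2x2 averaging: many different originals map to the same
# result, so the original cannot be recovered from it.
down = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(img.size, "->", down.size)           # 16 -> 4 values
```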

bxf said:
So, at the risk of repeating myself, let's see if the following will
help us pinpoint the question (my current question) more precisely: if
we have an image of 100x100 pixels, what is the difference, if any,
between displaying it at 200% magnification on a monitor set to
1600x1200, and displaying it at 100% magnification on a monitor set to
800x600?

Assuming that the display "pixels" are the same shape in both cases,
and that the image winds up the same size in both cases, then there is
virtually no difference between these two (assuming they have been done
properly, which is also not always the case).
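Concretely, assuming the same physical monitor in both cases (so the 1600x1200 pixels are half the linear size of the 800x600 ones) and plain pixel replication for the 200% zoom, the two cases put exactly the same values onto the same physical area. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((100, 100))

# Case A: 800x600 setting, 100% zoom -- each image pixel fills one large screen pixel.
case_a = image

# Case B: 1600x1200 setting, 200% zoom -- each image pixel fills a 2x2 block
# of screen pixels that are half the linear size, i.e. the same physical area.
case_b = np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)

print(case_a.shape, case_b.shape)                 # (100, 100) (200, 200)
print(np.array_equal(case_a, case_b[::2, ::2]))   # True: identical information
```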

bxf said:
There is no issue here with resolution or detail, as in either
case all the pixels are visible in their entirety, nor is there an
issue with image size, as in either case the displayed image will be
exactly the same size. Are small pixels "better" than large ones?

Ah, but the problem here is that you've brought an undefined (and
highly subjective!) term into the picture (no pun intended :)) - just
what does "better" mean? More or less expensive? Having a
certain "look" that a given user finds pleasing? Being the most
accurate representation possible? Having the best color and
luminance uniformity, or the brightest overall image? Again, there
is exactly ZERO difference between these two cases in the amount
of information (in the objective, quantifiable sense) that is available
or being presented to the viewer. But that does not mean that all
viewers will "like" the result equally well.

bxf said:
If the answer to the above is "no real difference", then I would
have to wonder why not run at a lower monitor "resolution" setting
and relieve the video system of some of the hard work it must do when
coping with high "resolution" settings (ignoring, of course, the
need for a high setting when it is required in order to view the
desired portion of the image).

And you're right, that would be the sensible thing to do IF this
were all there was to it. On the other hand, having more (but
smaller) pixels with which you can play also opens up the
possibility of certain tricks you could play with the original
imaging data (like "smoothing" or "anti-aliasing" things a little
better) which may make the image LOOK better, even though
they are not in any way increasing the objective accuracy of its
presentation. So it comes down to what you (or the particular
viewer in question) are most concerned about, and nothing more.
If it's just looking at the original data in an accurate presentation,
warts and all, and doing this in the most efficient manner possible,
you'd probably want to choose the "lower res" display setting.
If you want to play games to make it "look good" (and aren't
worried about what's actually in the original data), you may have
some reason to go with the "higher-res" setting.
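As a toy illustration of the kind of "trick" meant here (a sketch only, not anyone's actual rendering pipeline): the code below draws a hard diagonal edge once as an on/off staircase and once with supersampled, fractional-coverage pixels. The edge is the same information either way; the second version just looks smoother.

```python
import numpy as np

def render_edge(n, supersample=1):
    """Render the half-plane y < x onto an n x n grid, averaging a
    supersample x supersample set of sub-samples per pixel."""
    m = n * supersample
    ys, xs = np.mgrid[0:m, 0:m] + 0.5
    fine = (ys < xs).astype(float)
    return fine.reshape(n, supersample, n, supersample).mean(axis=(1, 3))

jagged   = render_edge(8)                   # plain sampling: pure 0/1 staircase
smoothed = render_edge(8, supersample=4)    # fractional coverage along the edge

print(np.unique(jagged))                    # [0. 1.]
print(np.unique(smoothed))                  # [0. 0.375 1.] -- grey along the edge
```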

Bob M.
 

bxf

Bob Myers said:
On the other hand, having more (but
smaller) pixels with which you can play also opens up the
possibility of certain tricks you could play with the original
imaging data (like "smoothing" or "anti-aliasing" things a little
better) which may make the image LOOK better, even though
they are not in any way increasing the objective accuracy of its
presentation. So it comes down to what you (or the particular
viewer in question) are most concerned about, and nothing more.
If it's just looking at the original data in an accurate presentation,
warts and all, and doing this in the most efficient manner possible,
you'd probably want to choose the "lower res" display setting.
If you want to play games to make it "look good" (and aren't
worried about what's actually in the original data), you may have
some reason to go with the "higher-res" setting.

Bob M.

I believe you've covered just about everything that could be said on
the subject (certainly at my level, and then some). The last paragraph
spells out some of the practical benefits of small pixels, which should
have been rather obvious, yet were not details that I had considered
while formulating the questions posed in this thread.

Thanks for the conversation and all contributions.

Bill
 
