flat panel monitors


J. Clarke

Mxsmanic said:
In other words, you disagree.


Yes. But the definition of both is arbitrary.

"Signal" is whatever information you are trying to transfer. "Noise" is
whatever is on the line that you did not intentionally put there. There is
nothing "arbitrary" about it.
Not if there is no noise obscuring it.

If there is "no noise obscuring it" then it is not dropping below the
noise threshold.
Same thing.

No, not the same thing. In a measurement system you don't control the
signal that you're trying to measure, in a communication system you do.
And how does the grain size compare to LCD pixel sizes?

What difference does it make? You're claiming that there is no limit.
Not unlike a vacuum tube.

You don't have a clue how a vacuum tube, a CRT display, or an electron
microscope work, do you? First, most vacuum tubes have no magnets of any
kind associated with them. Second, running an electron microscope with the
same beam energy as a CRT would destroy just about any sample you put in it
in short order. Third, the type of CRT used in computer monitors and
televisions has one set of magnets, not the several sets that are used in
focusing an electron microscope, and the cavity in the microscope is also
much smaller than the face of a CRT.

But you don't care about the facts, do you? Since you claim to be manic you
might look into a mood stabilizer.
 

Bob Myers

Mxsmanic said:
Pixels (or subpixels) on a flat panel have a constant luminosity
throughout their dimensions. On a CRT, the luminosity will vary
depending on the colors of the adjacent pixels and the bandwidth of the
monitor. This can make aliasing less obvious on a CRT. I'm not saying
that this is good or bad, just that it happens.

Exactly. That's part of that change in overall "look" I
was talking about. But it's not the same thing as
actually being able to resolve the "pixels" any finer than
the limit imposed by the mask/dot pitch. One thing you
clearly CAN get with CRTs is "softer" edges on things,
but on the other hand you're also having to deal with
luminance and color errors in single-pixel details (when
the size of the "pixel" within the image is getting down
to the same size as the dot triad or mask pitch).
It's interesting that ClearType makes text on an LCD look much better in
most cases, even though it makes the pixels "messier" with its low-level
antialiasing. I haven't tried ClearType on a CRT, so I don't know what
that does (I doubt that it works very well).

Here again is a case where "works well" is pretty much a
matter of personal preference; I don't care much for ClearType
on a CRT, but that's just because I'm overly sensitive to that
"messier" aspect you mentioned. (I can never use myself as
a test subject for display evaluations, since after all this time
I simply do NOT look at these things the same way the average
user will. Display engineers are the last people you want
subjectively evaluating displays. :))

Bob M.
 

Bob Myers

Mxsmanic said:
I'm not sure what you mean by "24-bit accuracy." How many bits per
second?

"24-bit accuracy" is not dependent on the data rate. It simply
means that your system can accurately produce, communicate,
and interpret levels, repeatedly and without error, to within
half of the implied LSB value - in this case, whatever the peak
expected signal would be, divided by (2^24-1). For instance,
in a typical analog video system (with 0.7 Vp-p signal swings),
"24-bit accuracy" would mean that you are confident you can
determine the signal amplitude to within about 21 nV - and
yes, that's NANOvolts. But this is simply not possible in any
real-world video system, since the noise in any such system
over the specified bandwidth is significantly higher than this
value. (The thermal noise across a 75-ohm termination
resistor at room temperature alone, over that bandwidth, is about 25 µV RMS.)
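
A rough back-of-the-envelope check of those figures (the 0.7 V swing and
75-ohm termination are from the post above; the ~400 MHz bandwidth and the
Python form of the check are illustrative assumptions, not part of the post):

import math

V_PP = 0.7                  # peak-to-peak video signal swing, volts (from above)
BITS = 24
half_lsb = V_PP / (2**BITS - 1) / 2        # required amplitude resolution

K_B = 1.38e-23              # Boltzmann constant, J/K
T = 300.0                   # room temperature, kelvin
R = 75.0                    # termination resistance, ohms
B = 400e6                   # assumed video bandwidth, Hz (illustrative)
v_noise = math.sqrt(4 * K_B * T * R * B)   # Johnson (thermal) noise, V RMS

print(f"half-LSB for 24 bits on 0.7 V: {half_lsb * 1e9:.1f} nV")            # ~21 nV
print(f"75-ohm thermal noise over {B/1e6:.0f} MHz: {v_noise * 1e6:.1f} uV")  # ~22 uV

The thermal noise alone comes out roughly a thousand times larger than the
half-LSB target, which is the point being made above.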
You can always maintain at least the accuracy of the equivalent digital
system.

Sure; but that's just it - you can always build an EQUIVALENT
digital system. You can't do better than the noise limit in
either case, and the noise limit sets the bound on accuracy -
and so information capacity - no matter how you encode the
information, whether it's in "analog" or "digital" form.
If you can push 200 Mbps through a digital channel, you can also get at
least 200 Mbps through the same channel with analog encoding (and
typically more). However, the analog equipment may cost more.

Sorry - not "typically more". You're still comparing
specific examples of "analog" and "digital"; "digital" does NOT
imply that straight binary coding, with the bits transmitted in
serial fashion on each physical channel, is your only option.
For instance, a standard U.S. TV channel is 6 MHz wide -
and yet, under the U.S. digital broadcast standard, digital
TV transmissions typically operate at an average data rate
of slightly below 20 Mbps. How do you think that happens?
(You should also not assume that straight binary, serial
transmission is all we will ever see in display interfaces; there
are published standards which employ more efficient coding
methods.)
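
For the curious, the Shannon-Hartley relation C = B * log2(1 + S/N) shows how
a 6 MHz channel can carry roughly 19.4 Mbps. A quick check (the channel width
and approximate payload rate are from the post above; the rest is an
illustrative sketch):

import math

B = 6e6        # U.S. TV channel bandwidth, Hz
C = 19.4e6     # approximate ATSC payload data rate, bits/s

snr_min = 2 ** (C / B) - 1   # minimum signal-to-noise ratio required in theory
print(f"minimum SNR: {snr_min:.1f} (about {10 * math.log10(snr_min):.1f} dB)")

About 8.4, or roughly 9 dB: multi-level modulation (8-VSB, in the U.S.
broadcast case) packs several bits into each symbol, so the bit rate can
exceed the bandwidth in hertz while staying under the Shannon limit.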
Information theory proves it.

Information theory proves exactly the opposite; it shows
that the maximum capacity of a given channel is fixed, and
that that limit is exactly the same for all possible general
forms of coding. Specific types within those general forms
may be better or worse than others, but the maximum limit
remains unchanged.
The basic limit is the fact that you declare anything below a certain
level to be noise.

Which is equally true in analog systems. No analog system
can provide "infinite" accuracy, or anything remotely approaching
it, and for the same fundamental reasons that limit digital. You
are also here again assuming that a digital system cannot be made
noise-adaptive, which is incorrect even in current practice.

What limits resolution in a monochrome CRT? Scanning electron
microscopes prove that electron beams can be pretty finely focused.

Yes, but there the beam does not have to pack enough punch
to light up phosphor. There is an inescapable tradeoff between
beam current and spot size, and also a point below which practical
phosphors simply cannot be driven to usable levels. This, along
with the unavoidable degradation in spot size and shape which
results from the deflection system (another little detail that SEMs
don't worry about in nearly the same way) results in some very
well-defined limits on any practical CRT.


Bob M.
 

Bob Myers

Yes. But the definition of both is arbitrary.

Not at all; "signal" is the information we desire to
communicate; "noise" is EVERYTHING else that gets
in the way of doing that. That's hardly arbitrary, given
that the goal of any communications system is to transfer
information.
Not if there is no noise obscuring it.

Ummm...how does your signal drop below the noise
threshold, and still have "no noise obscuring it"?

And how does the grain size compare to LCD pixel sizes?

Not relevant; the claim was that there was no inherent limit to
the resolution of a monochrome CRT, not how that technology
compared to others. Phosphor grain size DOES set one such
limit, but other limits have already come into play well before
that point is reached.

Also, please note that practical LCD devices have been
constructed with pixel sizes well below 10 microns. You HAVE
heard of LCOS, right? (And yes, those ARE usable as direct-
view displays.) It would be POSSIBLE to produce pixels of
that size on a larger display, even on a glass substrate (the
best polysilicon-on-glass process can now work with 0.8 micron
design rules) - there's just no practical reason for doing so.

Not unlike a vacuum tube.

Very much unlike the vacuum tube in question in any
practical form.

FWIW, the best CRT display I saw, in terms of resolution, got
ALMOST to 300 dpi, or a "pixel" size of about 0.085 mm.
It may still be in limited production, but it was used only in an
extremely expensive and very limited-volume monitor. This
level of performance HAS been duplicated within the past 5
years in a fairly large-screen monochrome LCD monitor, again
aimed at a very limited high-end market.
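
A quick dot-pitch-to-dpi conversion (a trivial sketch; the pitch values are
the ones that come up in this thread):

MM_PER_INCH = 25.4

def pitch_to_dpi(pitch_mm):
    """Dots per inch corresponding to a given dot pitch in millimetres."""
    return MM_PER_INCH / pitch_mm

for pitch in (0.085, 0.22, 0.15):
    print(f"{pitch} mm pitch is about {pitch_to_dpi(pitch):.0f} dpi")

which gives roughly 299, 115, and 169 dpi respectively, matching the figures
quoted in this thread.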

Bob M.
 

Bob Myers

Mxsmanic said:
What recent changes have been made to Trinitrons?

Sony continued to patent changes to the gun design,
manufacturing processes, etc., right up to the day that
monitor-type Trinitrons left production, and continues to do
so in the case of their TV tubes. Being able to build something
based on the technology described in the expired original
patents would let you duplicate Trinitrons the way they were
in the 1970s, which would mean an expensive and, by
current standards, lower-performance tube. No one is
going to invest in the equipment needed to produce these
things (which is different than a standard CRT process) just
to make 1970s tubes, when they can already make flat-screen
conventional-mask designs more cheaply and with better
performance.

I was considering the Diamondtron for a time, but I understand that,
although it's an aperture grille like a Trinitron, it apparently just
doesn't have the quality of a true Trinitron. I guess that's all
becoming increasingly academic now.
Absolutely.


I think they stopped making them out of misguided business decisions.

Time will tell. Having discussed this matter with both
companies, I personally am convinced that their decision was
made on a very sound basis and at (for them) the correct time,
although it certainly has caused some problems for those who
might still wish to purchase these products.
The vast majority of monitors being sold today are still CRTs. This is
even more true for television sets.

In the case of monitors, you are incorrect. LCDs took over
the #1 spot in the monitor market by unit volume this past
year. (They had already been #1 in terms of revenue for some
time, of course.) The TV market remains a good 85% or better
CRT, but with that share expected to decline over this decade
and into the next. The rate of decline may accelerate, depending
on what happens with the pricing of other technologies; it will
not be reversed.

Bob M.
 

Bob Myers

Mxsmanic said:
Neither, as I recall.

This year. Actually, several quarters ago. It was last
year if only certain markets (U.S., Europe, Japan) are
considered. This year, the switch occurred in the worldwide
numbers.

Same technology.

And the Saturn V is the same basic technology as my
kid's little Estes model rocket - stuff shoots out THIS end,
and it moves in the direction the OTHER end is pointing. :)
TV and monitor CRTs are VERY different beasts. (One
significant difference at this point is that the former remains
in production in a lot of places that have already given up
on the latter!)


Bob M.
 

Bob Niland

Bob Myers said:
FWIW, the best CRT display I saw, in terms of
resolution, got ALMOST to 300 dpi, or a "pixel"
size of about 0.085 mm.

Laser printing intro'd around 180 dpi. Inkjet printing
intro'd at 75 dpi. Both grew quickly, and are now at
1200 dpi or higher.

A 300 dpi monitor of any typical size would of course
impose a huge raster requirement, only recently economical,
and a substantial rendering compute load for games,
CAD, etc., something not necessarily desired even yet.
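
To put a rough number on that "huge raster requirement" (the 21" 4:3
geometry, 60 Hz refresh, and 24-bit color here are illustrative assumptions,
not figures from the post):

import math

DIAG_IN, ASPECT = 21.0, 4.0 / 3.0
DPI, REFRESH_HZ, BITS_PER_PIXEL = 300, 60, 24

height_in = DIAG_IN / math.sqrt(1 + ASPECT**2)   # ~12.6 in
width_in = height_in * ASPECT                    # ~16.8 in
px_w, px_h = round(width_in * DPI), round(height_in * DPI)
pixels = px_w * px_h
raw_gbps = pixels * BITS_PER_PIXEL * REFRESH_HZ / 1e9

print(f"{px_w} x {px_h} = {pixels / 1e6:.1f} Mpixel, "
      f"about {raw_gbps:.0f} Gbit/s uncompressed at {REFRESH_HZ} Hz")

That works out to roughly 5040 x 3780 (about 19 Mpixel) and on the order of
27 Gbit/s of raw pixel data.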

But the limitations on monitor res are set by Microsoft.

In the early days of Windows, Mr.Bill or one of his
minions made an assumption that displays would always
be in the 60-100 dpi range, and locked their icons and
early screen raster font pixel sizes to that assumption.

XP may finally have icon scaling, but the installed
base of pre-XP Windows does not, nor do many existing
and legacy apps.

If you go much above 100 dpi, Windows is nearly or
completely unusable. I had a .22 mm dot-pitch CRT (115 dpi).
A true 1600x1200 on such a 21" (20" viewable) CRT is just beyond
the limit for what's practical to use with Windows.

Sony demo'd a .15mm dp (169 dpi) CRT at COMDEX a few years
ago, but never released it, probably due to this. IBM has
a 200 dpi LCD (the T221), but doesn't promote to the general
market, partly on account of the Windows.ico problem. (It also
requires dual-link DVI.)

Display technology is not the reason why we don't today
have common higher-res displays.
 

Bob Myers

Bob Niland said:
Laser printing intro'd around 180 dpi. Inkjet printing
intro'd at 75 dpi. Both grew quickly, and are now at
1200 dpi or higher.

OK, but I'm not sure how that's relevant here. The
above represents the best monochrome CRT I have
ever seen or heard about - and there is also basically
zero development underway right now aimed at pushing
these devices beyond that point.
A 300 dpi monitor of any typical size would of course
impose a huge raster requirement, only recently economical,
and a substantial rendering compute load for games,
CAD, etc., something not necessarily desired even yet.

Well, again, this was a monochrome product, with a very
limited gray scale ("bit depth") capability, and aimed at
an extremely high-end niche market. It was shipped only
with a custom graphics system which, as I recall, produced
something like 4k x 3k at perhaps 4-6 bits/pixel. The
computational/rendering load was not all that significant
here, as it was aimed at such things as document review,
radiology, etc., where you're simply dealing with static images
scanned in from another source.
But the limitations on monitor res are set by Microsoft.

No; Bill Gates may sometimes be accused of thinking he's
God, but I seriously doubt that even he would claim
responsibility for the laws of physics. Microsoft is also not
the only game in town, and especially not in the markets
where such displays are of interest.

Sony demo'd a .15mm dp (169 dpi) CRT at COMDEX a few years
ago, but never released it, probably due to this. IBM has
a 200 dpi LCD (the T221), but doesn't promote to the general
market, partly on account of the Windows.ico problem. (It also
requires dual-link DVI.)

More precisely, QUAD-link DVI; monitors using that
9.2 Mpixel panel require two dual-link DVI outputs from
the graphics card. And at that, the delivered frame rate
at the monitor input is somewhat under 50 Hz. (The panel
itself doesn't necessarily run at that rate, due to some other
internal bizarreness.)
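
The arithmetic behind that (the 3840 x 2400 native resolution of the
9.2 Mpixel panel and the ~165 Mpixel/s ceiling per DVI link are the usual
published figures; blanking overhead is ignored, so treat this as a rough
sketch):

W, H = 3840, 2400          # native resolution of the 9.2 Mpixel panel
LINK_RATE = 165e6          # approx. max pixel rate of one DVI link, pixels/s

for hz in (25, 41, 48):
    pixel_rate = W * H * hz
    links_needed = pixel_rate / LINK_RATE
    print(f"{hz} Hz: {pixel_rate / 1e6:.0f} Mpixel/s, "
          f"about {links_needed:.1f} DVI links")

Even at about 48 Hz the pixel rate (roughly 440 Mpixel/s) exceeds what a
single dual-link connection (two links) can carry, hence the pair of
dual-link inputs.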
Display technology is not the reason why we don't today
have common higher-res displays.

Well, perhaps it's not the ONLY reason. There ARE
some markets that would very much like to have a full-color,
video-rate 300 dpi display, if only such things (and the
hardware to drive them) were available for a price less
than the contents of Ft. Knox, despite the limitations Windows
might also place on things.

Bob M.
 

Mxsmanic

Bob said:
This year. Actually, several quarters ago. It was last
year if only certain markets (U.S., Europe, Japan) are
considered. This year, the switch occurred in the worldwide
numbers.

What happens when individual countries and regions are examined?
 

Mxsmanic

J. Clarke said:
"Signal" is whatever information you are trying to transfer. "Noise" is
whatever is on the line that you did not intentionally put there. There is
nothing "arbitrary" about it.

You've just illustrated the arbitrary character of the distinction.
If there is "no noise obscuring it" then it is not dropping below the
noise threshold.

It can drop below your predetermined noise threshold, as in digital
systems.
What difference does it make?

That depends on the ratio between the two sizes.
But you don't care about the facts, do you?

I enjoy facts. Do you have any?
 

Mxsmanic

Bob said:
Not at all; "signal" is the information we desire to
communicate; "noise" is EVERYTHING else that gets
in the way of doing that.

And our desire is not arbitrary?
That's hardly arbitrary, given
that the goal of any communications system is to transfer
information.

What makes the distinction between information and noise, if not our
arbitrary decisions?
Ummm...how does your signal drop below the noise
threshold, and still have "no noise obscuring it"?

You define a specific threshold and call anything below it noise, and
anything above it signal. This is the basis of digital systems.

The advantage is that you have error-free transmission, as long as the
actual noise level stays below your predefined threshold. The
disadvantages are that you lose any real information below the
threshold, and that you sacrifice part of the channel's bandwidth in
order to eliminate errors.

Of course, for it all to work in a practical sense, you must define your
artificial threshold so that it is above the actual noise level in the
channel.
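
A minimal sketch of that thresholding idea (binary levels, a decision
threshold halfway between them, uniformly distributed noise; all illustrative,
not a model of any particular system):

import random

def send_and_slice(bits, noise_amplitude):
    levels = [1.0 if b else 0.0 for b in bits]                  # ideal levels
    received = [v + random.uniform(-noise_amplitude, noise_amplitude)
                for v in levels]                                # add channel noise
    return [1 if v > 0.5 else 0 for v in received]              # threshold at 0.5

random.seed(1)
bits = [random.randint(0, 1) for _ in range(10000)]
for noise in (0.2, 0.4, 0.6):
    errors = sum(a != b for a, b in zip(bits, send_and_slice(bits, noise)))
    print(f"noise amplitude {noise}: {errors} bit errors")

Noise amplitudes of 0.2 and 0.4 stay inside the 0.5 margin and every bit is
recovered exactly; at 0.6 the noise can cross the threshold and errors get
through, which is the trade-off described above.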
 

Mxsmanic

Bob said:
In the early days of Windows, Mr.Bill or one of his
minions made an assumption that displays would always
be in the 60-100 dpi range, and locked their icons and
early screen raster font pixel sizes to that assumption.

These problems had been considered long before "Mr. Bill" came around.

The actual limitations are imposed by human vision, not by any
engineering difficulties. There's no point in building displays that do
substantially better than human vision can perceive (if they are
intended for human eyes).
 

Mxsmanic

Bob said:
"24-bit accuracy" is not dependent on the data rate. It simply
means that your system can accurately produce, communicate,
and interpret levels, repeatedly and without error, to within
half of the implied LSB value - in this case, whatever the peak
expected signal would be, divided by (2^24-1). For instance,
in a typical analog video system (with 0.7 Vp-p signal swings),
"24-bit accuracy" would mean that you are confident you can
determine the signal amplitude to within about 21 nV - and
yes, that's NANOvolts. But this is simply not possible in any
real-world video system, since the noise in any such system
over the specified bandwidth is significantly higher than this
value.

Then there's not much point in using it in any system with analog
components, which includes all systems that interface with the physical
world, which in turn includes all display systems.
Sure; but that's just it - you can always build an EQUIVALENT
digital system.

Digital systems can never match analog systems, not even in theory.

But ironically, in practice, it's often cheaper and easier to build very
precise digital systems than it is to build equally precise analog
systems, at least where chains of components that accumulate errors are
required.
For instance, a standard U.S. TV channel is 6 MHz wide -
and yet, under the U.S. digital broadcast standard, digital
TV transmissions typically operate at an average data rate
of slightly below 20 Mbps. How do you think that happens?

It just depends on the noise level. If the noise level is zero, the
capacity of the channel is infinite.
Information theory proves exactly the opposite; it shows
that the maximum capacity of a given channel is fixed, and
that that limit is exactly the same for all possible general
forms of coding.

It also shows that the lower the noise level, the higher the capacity of
the channel, all else being equal, which means that a noise-free channel
has infinite capacity.

Thus, if you improve systems in a way that lowers noise, you can get
more capacity out of them. This is how dial-up modems have been doing
it for years.
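
For a concrete (and purely illustrative) sense of that relationship, the
Shannon-Hartley formula applied to a nominal voice telephone channel of about
3.1 kHz bandwidth:

import math

B = 3100.0                              # nominal voice channel bandwidth, Hz
for snr_db in (20, 30, 35, 40):
    snr = 10 ** (snr_db / 10)
    capacity = B * math.log2(1 + snr)   # Shannon-Hartley capacity, bits/s
    print(f"SNR {snr_db} dB: about {capacity / 1000:.1f} kbit/s")

Capacity grows with the signal-to-noise ratio (around 35 dB gives roughly
36 kbit/s, in the neighborhood of where analog modems topped out), and it
diverges only in the idealized limit of zero noise.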
 

J. Clarke

Bob said:
OK, but I'm not sure how that's relevant here. The
above represents the best monochrome CRT I have
ever seen or heard about - and there is also basically
zero development underway right now aimed at pushing
these devices beyond that point.


Well, again, this was a monochrome product, with a very
limited gray scale ("bit depth") capability, and aimed at
an extremely high-end niche market. It was shipped only
with a custom graphics system which, as I recall, produced
something like 4k x 3k at perhaps 4-6 bits/pixel. The
computational/rendering load was not all that significant
here, as it was aimed at such things as document review,
radiology, etc., where you're simply dealing with static images
scanned in from another source.


No; Bill Gates may sometimes be accused of thinking he's
God, but I seriously doubt that even he would claim
responsibility for the laws of physics. Microsoft is also not
the only game in town, and especially not in the markets
where such displays are of interest.



More precisely, QUAD-link DVI; monitors using that
9.2 Mpixel panel require two dual-link DVI outputs from
the graphics card. And at that, the delivered frame rate
at the monitor input is somewhat under 50 Hz. (The panel
itself doesn't necessarily run at that rate, due to some other
internal bizarreness.)


Well, perhaps it's not the ONLY reason. There ARE
some markets that would very much like to have a full-color,
video-rate 300 dpi display, if only such things (and the
hardware to drive them) were available for a price less
than the contents of Ft. Knox, despite the limitations Windows
might also place on things.

IBM and Viewsonic both used to sell monitors using that panel to end-users.
I notice that Viewsonic no longer lists it on their Web site and IBM only
sells it bundled with Intellistations and suitable video boards now.

Still, I would think that the $8000 price tag was more of an obstacle to its
widespread acceptance than any concern over the manner in which it displays
Windows icons.
 

Bob Niland

OK, but I'm not sure how that's relevant here.

Only in that the topic of higher display resolutions
has arisen. No one today would consider buying a
raster printer of less than 600 dpi (@ 1 bit depth).
"Photo" quality is widely considered to be at least
200 dpi (24b).

It would probably surprise most people, if they ran
the numbers, to discover that the majority of available
displays are "only" 100 dpi. The industry takes no pains
to point this out, of course :)
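
Running those numbers for a few common formats (sizes chosen for
illustration; dpi computed along the full diagonal, so CRT figures come out
slightly flattering):

import math

def dpi(diag_in, px_w, px_h):
    return math.hypot(px_w, px_h) / diag_in

for label, diag, w, h in (('17" 1280x1024', 17, 1280, 1024),
                          ('19" 1280x1024', 19, 1280, 1024),
                          ('21" 1600x1200', 21, 1600, 1200)):
    print(f"{label}: about {dpi(diag, w, h):.0f} dpi")

All of these land in the roughly 85-100 dpi range.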

Let me restate that. The limitations on what you can
sell into the retail market, in any economic quantity,
are set by MS. Computer artisans might well want to
have something closer to "photo" res on screen, but
if the GUI/dialogs are unusable ...
Microsoft is also not the only game in town, and
especially not in the markets
where such displays are of interest.

Specialty markets certainly exist. I'm speaking of
displays that CDW might offer.
More precisely, QUAD-link DVI;

Zounds. IBM doesn't make the technical details on the T221
very prominent on their Web site, so I didn't see that. My
guess, of course, is that they don't want affluent
but otherwise ordinary end users buying these things
and then discovering the gotchas (starting with: who
makes a quad-link card? Matrox?)
 

Bob Myers

Mxsmanic said:
What happens when individual countries and regions are examined?

Obviously, there are still areas where the CRT leads
the LCD in unit volume; these are pretty much
without exception the developing markets where
the cost advantage of the CRT outweighs all other
considerations. Still, the LCD is coming on surprisingly
strong even in these areas, and especially (according
to recent industry news) in mainland China. China
very much wants to move into being a "high-tech"
locale, joining Japan, Korea, and Taiwan in supplying
LCD panels and such to the world.

Bob M.
 

Bob Myers

Mxsmanic said:
You've just illustrated the arbitrary character of the distinction.

How is this arbitrary? It's fundamental to information theory -
signal is what you want, and noise is what you don't want.
It can drop below your predetermined noise threshold, as in digital
systems.

Well, assuming that the "digital" system in question could
not adapt to this, you would be correct. But that's not the
case in many digital systems, and certainly is not a restriction
which applies to "digital" per se.

Bob M.
 

Bob Myers

Mxsmanic said:
And our desire is not arbitrary?

Only if you believe that the distinction of
information from noise, which is fundamental
to information theory, is "arbitrary." But since
we ARE talking here about concepts which
are addressed in that field, you'd better be
willing to stick within the definitions used there,
whether or not you consider them "arbitrary."

Identifying a certain amount of electromotive
force to be a "volt" is also arguably arbitrary;
this does not at all mean that it is not useful to
phrase discussions of certain electrical phenomena
in terms of volts.
What makes the distinction between information and noise, if not our
arbitrary decisions?

Sorry, but we're discussing information theory
here. If you wish to discuss semantics, that's on
the other side of the campus. But don't expect
too many from THIS discussion to follow you
there.


Bob M.
 

Bob Myers

Mxsmanic said:
The actual limitations are imposed by human vision, not by any
engineering difficulties. There's no point in building displays that do
substantially better than human vision can perceive (if they are
intended for human eyes).

While this is certainly true, it is of academic interest
only. There is no practical display technology which
challenges the limits of human spatial acuity in all
situations. There is most certainly nothing available
for desktop monitors which does this, given the fact
that it's not all that hard to put my eyes six inches from
the screen.
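
A rough acuity calculation makes the point (the 1-arcminute figure is a
common rule of thumb, not a number from the post):

import math

ACUITY_RAD = math.radians(1 / 60)   # ~1 arcminute of visual acuity

for distance_in in (24, 12, 6):
    resolvable_in = distance_in * math.tan(ACUITY_RAD)   # smallest resolvable dot
    print(f"{distance_in} in viewing distance: about "
          f"{1 / resolvable_in:.0f} dpi to match acuity")

At a 24-inch viewing distance about 143 dpi already matches 1-arcminute
acuity, but at six inches the figure climbs to roughly 570 dpi, well beyond
any desktop display.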

Bob M.
 

Bob Myers

Mxsmanic said:
Then there's not much point in using it in any system with analog
components, which includes all systems that interface with the physical
world, which in turn includes all display systems.

Again, a nice assertion, but there's no evidence,
theory, or reasoning to back it up.
Digital systems can never match analog systems, not even in theory.

Ditto. You were just shown why this is not the case.

It just depends on the noise level. If the noise level is zero, the
capacity of the channel is infinite.

True. But since the noise level cannot be zero even in
theory, this is about as meaningful in terms of discussions of
real-world systems as the time-honored frictionless surfaces
and massless pulleys of freshman physics.
It also shows that the lower the noise level, the higher the capacity of
the channel, all else being equal, which means that a noise-free channel
has infinite capacity.

See above.
Thus, if you improve systems in a way that lowers noise, you can get
more capacity out of them. This is how dial-up modems have been doing
it for years.

I must have missed the part where the phone company
magically removed noise from the standard telephone
line. The standard voice subscriber line carries pretty much
the same bandwidth and noise specs today as it did thirty
years ago.

Bob M.
 
