Is a DVI connector worth it?


hona ponape

Yousuf Khan said:
A friend has got a new LCD monitor and he wants to know if connecting
via a DVI connector would improve the quality over the VGA connectors? I
would suspect it's imperceptible. He doesn't have a video card with a
DVI connector yet, but he wants to get one if there's any difference.

Yousuf Khan

In my experience it is imperceptible. I recently ordered 2 identical 19"
LCDs (some cheap brand, Viewera, I think) and hooked them up to a GeForce
6800 that had one analog and one DVI output. No difference. Changed the
connections on the LCDs and still no difference. I was expecting to see
some difference because this PC is in an area with a lot of stray RF.
All things being equal I would use the DVI connection, but only if it
doesn't cost you any extra money.
 

Mxsmanic

Not Gimpy Anymore said:
Historically there have been issues with integrity of passing the values
along the signal chain in analog fashion, so a lot of thought was put into
a way to maintain the numeric value integrity until the final conversion
is made. That resulted in "digital" interfaces, which in fact did
demonstrate the ability to maintain a higher level of integrity in
preserving the desired value.

Then one must wonder why the original video standards such as CGA were
digital, but were replaced by more "advanced" standards that were
analog, such as VGA.

Digital is less flexible than analog--conceivably a VGA cable can
carry just about any resolution or color depth. Digital provides
fewer errors in exchange for reduced bandwidth and flexibility.

Hence - ya gotta see it to decide!! In most cases for today it is not
easy to justify exchanging cards *only* to gain a "digital channel".

In my case both the card and the monitor support digital, but I can't
get it to work, so I can't really compare.
 

Bob Myers

Mxsmanic said:
Then one must wonder why the original video standards such as CGA were
digital, but were replaced by more "advanced" standards that were
analog, such as VGA.

Simplicity. The graphics systems used in the early days
of the PC had only a very limited "bit depth" (number of
shades per color), and the easiest interface to implement for
such a simple system was either one or two bits each, directly
from the graphics output to the input stage of the CRT - i.e.,
just run the bits out through a TTL buffer and be done with
it. With a greater number of bits/color, a D/A converter at the
output, shipping the video information out in "analog" form on three
coaxes (which is what the CRT is going to want at the cathode, anyway),
becomes a more cost-effective solution than trying to deliver the
information as a parallel-digital signal (which is basically what
the "CGA" style of interface was). If it were simply a question of
which method were more
"advanced," it would also be legitimate to ask why television
is now moving from "analog" to "digital" transmission, and
computer interfaces are beginning to do the same.

The answer to just about ANY question in engineering which
begins with "Why did they do THIS...?" is generally "because
it was the most cost-effective means of achieving the desired
level of performance." The first computer graphics system I
myself was responsible for used what I suppose would be
called a "digital" output in this discussion, for this very reason.
It was the first "high-resolution" display system in our product
line, way, way back around 1982 - all of 1024 x 768 pixels
at 60 Hz, and we didn't have the option of a D/A to make
"analog" video for us simply because we couldn't put enough
memory into the thing for more than 2 bits per pixel. So we used
a couple of open-collector outputs at the computer end of the
cable, and used the signal coming from those to switch a few
resistors in the monitor (which was also custom) and get a cheap
"D/A" effect - at all of four levels of gray! (Monochrome -
we weren't working on the color version, which would follow
a bit later.)

Digital is less flexible than analog--conceivably a VGA cable can
carry just about any resolution or color depth. Digital provides
fewer errors in exchange for reduced bandwidth and flexibility.

This is incorrect. For one thing, it assumes that simple binary
encoding is the only thing that could possibly be considered under
the heading "digital" (which is wrong, and in fact several examples
of other fully-digital systems exist even in the area of video
interfaces; for instance, the 8-VSB or COFDM encodings
used in broadcast HDTV). The other point where the above
statement is incorrect is the notion that the VGA analog interface
could carry "just about any resolution or color depth." The
fundamental limitations of the VGA specification, including
an unavoidable noise floor (given the 75 ohm system impedance
requirement) and the overall bandwidth, constrain the data
capacity of the interface, as is the case for ANY practical interface
definition. Over short distances, and at lower video frequencies,
the VGA system is undoubtedly good for better than 8 bits
per color; it is very unlikely in any case that it could exceed,
say, an effective 12 bits/color or so; I haven't run the math to
figure out just what the limit is, though, so I wouldn't want anyone
to consider that the final word.
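As a very rough cross-check (an illustrative sketch with assumed SNR figures, not Bob's math), the familiar quantization relation SNR ≈ 6.02 × N + 1.76 dB can be inverted to see how many effective bits a given analog noise floor leaves room for:

# Rough sketch; the SNR values below are assumptions chosen for illustration.
for snr_db in (50, 62, 74):
    effective_bits = (snr_db - 1.76) / 6.02
    print(f"{snr_db} dB SNR -> about {effective_bits:.1f} effective bits per color")

# 50 dB works out to roughly 8 bits and 74 dB to roughly 12, consistent
# with the 8-to-12-bit ballpark described above.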

But the bottom line is that the information capacity of any real-world
channel is limited, in terms of effective bits per second. (Note
that stating this limit in "bits per second" actually says nothing
about whether the channel in question is carrying "analog" or
"digital" transmissions; this is bit/sec. in the information theory
usage of the term. In practice, this limit (called the Shannon
limit) is generally more readily achieved in "digital" systems
than "analog," due to the simple fact that most analog systems
give more margin to the MSBs than the LSBs of the data.
Whatever data capacity is achieved may be used either for
greater bits/component (bits/symbol, in the generic case) or
more pixels/second (symbols/second, generically), but the limit
still remains.

The actual difference between "analog" and
"digital" systems here is not one of "errors" vs."bandwidth,"
but rather where those errors occur; as noted, the typical
analog system preserves the most-significant-bit data vs.
least-significant (e.g., you can still make out the picture, even
when the noise level makes it pretty "snowy") - or in other words,
analog "degrades gracefully." Most simple digital encodings
leave all bits equally vulnerable to noise, which makes for a
"cliff effect" - digital transmissions tend to be "perfect" up to a
given noise level, at which point everything is lost at once.
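To make the "cliff effect" concrete, here is a small illustrative sketch (a toy model under its own assumptions, not something from the post above): the same 8-bit value is carried over the same noisy channel either as one analog level or as eight equal-margin binary bits.

import random

def analog_rx(value, sigma):
    # The recovered level is off by roughly the channel noise itself:
    # the picture gets "snowy", but the most-significant bits survive.
    return min(255.0, max(0.0, value + random.gauss(0, sigma)))

def digital_rx(value, sigma):
    # Each bit is sent at full swing with a decision threshold at mid-scale.
    # Below that margin the value is exact; beyond it, any bit (MSB included)
    # can flip, so the failure is sudden and large - the "cliff effect".
    out = 0
    for i in range(8):
        level = 255 * ((value >> i) & 1) + random.gauss(0, sigma)
        out |= (1 if level > 127.5 else 0) << i
    return out

value = 200
for sigma in (5, 40, 150):   # noise standard deviation, same units for both paths
    print(f"sigma={sigma:3}: analog={analog_rx(value, sigma):6.1f}  "
          f"digital={digital_rx(value, sigma)}")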

Bob M.
 

Not Gimpy Anymore

Bob Myers said:
Simplicity. The graphics systems used in the early days
of the PC had only a very limited "bit depth" (number of
shades per color), and the easiest interface to implement for
such a simple system was either one or two bits each, directly
from the graphics output to the input stage of the CRT - i.e.,
just run the bits out through a TTL buffer and be done with
it. With a greater number of bits/color, a D/A converter at the
output, shipping the video information out in "analog" form on three
coaxes (which is what the CRT is going to want at the cathode, anyway),
becomes a more cost-effective solution than trying to deliver the
information as a parallel-digital signal (which is basically what
the "CGA" style of interface was). If it were simply a question of
which method were more
"advanced," it would also be legitimate to ask why television
is now moving from "analog" to "digital" transmission, and
computer interfaces are beginning to do the same.

The answer to just about ANY question in engineering which
begins with "Why did they do THIS...?" is generally "because
(snip)

The remaining issue with "VGA" (the analog signal path of
"choice" today) is the source and termination impedance
variations due to normal production tolerances. This can result
in a "unbalanced" white point, and colorimetric inaccuracies just
due to those tolerances. However, there are a host of other
contributors to the colorimetric accuracy, as Bob and others
are well aware - in the case of the DVI method, the idea was
to try and maintain the numerically encoded pixel value across
the transmission medium. Additionally, DVI includes a more
reliable way to recover the pixel clock, allowing closer-to-perfect
establishment of the proper numeric value in each "bucket".

This is incorrect. For one thing, it assumes that simple binary
encoding is the only thing that could possibly be considered under
the heading "digital" (which is wrong, and in fact several examples
of other fully-digital systems exist even in the area of video
interfaces; for instance, the 8-VSB or COFDM encodings
used in broadcast HDTV).
(snip)

To add another point to Bob's, the DVI signal *is* further
encoded (TMDS), which does include a probability that the
encoded value may have some (acceptable) error. Without
TMDS, there is still not enough bandwidth available today to
pass the entire bitmap of values within the refresh time. But
any error related to encoding is not perceptible to the typical
user - it's just a theoretical point to be aware of.
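For a rough sense of the numbers involved, here is a quick sketch using the usual published single-link DVI figures (a 165 MHz maximum pixel clock, three TMDS data channels, and 10 transmitted bits per 8 data bits); those figures are assumptions of the sketch, not something taken from the post above.

# Single-link DVI throughput, using commonly published figures.
pixel_clock_hz = 165e6   # maximum single-link pixel clock
channels = 3             # one TMDS pair per color component
bits_per_symbol = 10     # TMDS carries each 8-bit value as 10 bits on the wire

raw_rate = pixel_clock_hz * channels * bits_per_symbol   # ~4.95 Gbit/s on the wire
payload_rate = pixel_clock_hz * channels * 8             # ~3.96 Gbit/s of pixel data
print(f"raw: {raw_rate / 1e9:.2f} Gbit/s, payload: {payload_rate / 1e9:.2f} Gbit/s")

# Enough for 1600 x 1200 at 60 Hz and 24 bits/pixel, blanking included,
# but with no compression and little room to spare beyond that.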

Also, because the analog signal clock recovery method uses
phase locked loop technology, there does remain a possibility
of PLL types of errors within displays using a VGA interface,
separate from any scaling issue related to displaying images
in "non-native" formats.
To best see such artifacts, try displaying a half tone type
of image, and look for background noise in the image. Some
display companies may provide the user with a "calibration
image" in the CD version of the user documentation. This
image will contain some halftone details, and the user should
be directed to activate the "autoadjust" (AKA auto) control
which should put the PLL through its optimization process.
This portion of the "analog chain" technology is continuing
to improve, and we users can benefit from that improvement
when we use some of the more recent products available.
That aspect of performance is one that may be compromised
in "lower end" LCD monitors. For many users it is not
important, but discriminating users should be aware of the
possibility.

Regards,
NGA
 

Mxsmanic

Bob said:
This is incorrect. For one thing, it assumes that simple binary
encoding is the only thing that could possibly be considered under
the heading "digital" (which is wrong, and in fact several examples
of other fully-digital systems exist even in the area of video
interfaces; for instance, the 8-VSB or COFDM encodings
used in broadcast HDTV).

No. Digital is nothing more than analog with an arbitrary, non-zero
threshold separating "signal" from "noise." Digital systems always
have less bandwidth than analog systems operating over the same
physical channels, because they set a non-zero threshold for the noise
floor. This allows digital systems to guarantee errors below a
certain level under certain conditions, but it sacrifices bandwidth to
do so. It all comes out analog (or digital) in the end, depending
only on how you look at it.

But the bottom line is that the information capacity of any real-world
channel is limited, in terms of effective bits per second.

And all digital systems set their design capacities _below_ that
theoretical limit, whereas analog systems are constrained by precisely
that limit. If you set the noise threshold above zero, you reduce
bandwidth and you make it possible to hold errors below a threshold
that is a function of the noise threshold; your system is then
digital. If you set the noise threshold to zero, you have an analog
system, limited only by the absolute bandwidth of the channel but
without any guarantees concerning error levels.

The actual difference between "analog" and
"digital" systems here is not one of "errors" vs."bandwidth,"
but rather where those errors occur; as noted, the typical
analog system preserves the most-significant-bit data vs.
least-significant (e.g., you can still make out the picture, even
when the noise level makes it pretty "snowy") - or in other words,
analog "degrades gracefully." Most simple digital encodings
leave all bits equally vulnerable to noise, which makes for a
"cliff effect" - digital transmissions tend to be "perfect" up to a
given noise level, at which point everything is lost at once.

Same as above.
 

syka


For all of you arguing about analog and digital:

Analog != infinite different values. Know that when we talk about
energy, we have a "digital reality". According to quantum physics,
there is a smallest amount of energy, and every other amount of energy
is a whole-number multiple [0, 1, 2, 3, ..., 1245, 1246, ...] of that
smallest amount. Everything we interact with is energy, even mass
(E=mc2). Light, temperature, sound and all other things are based on
different amounts of energy.

So in fact, the "real world" isn't entirely analog; it is mostly
digital! So all the interfaces a person has with the world around him
are digital, which means that "the perfect analog system" would, in
fact, be digital. The only truly analog thing that comes to my mind is
distance, but even distance is measured digitally. I can't think of any
analog interface between a person and the world.

Infinite precision is not possible for any interface, but in truth
everything based on energy has a value so precise, compared to
macro-level energies, that it can be considered "infinitely precise".
But this amount is quantized, and so it is best put into digital form
for processing. Example: although we can't have infinite precision
about their mass, if we take two pieces of iron, it is possible that
the bigger one is exactly three times heavier than the other one, with
infinite precision! But according to Heisenberg's uncertainty principle
it can't be measured, even in theory. Given that, I must say that even
if interfaces are digital, the uncertainty of measurements makes it
impossible to make a perfect system, analog or digital.

Of course this theory will never be put into practice. So I would say
that DVI can be better, but in most cases the difference is too small
to be noticeable. In my work I see a lot of VGA-connected LCDs in use,
and most of the time there are no errors visible to the naked eye. A
couple of times we have had an LCD monitor that looked bad with VGA (in
the native resolution), and there the DVI interface has helped. My own
favourite is still digital; though some analog systems will provide
better resolution, the digital systems will have fewer errors and, with
technological advances, sufficient resolution.

And sorry for my bad English.
 

Bob Myers

syka said:
Analog != infinite different values. Know that when we talk about
energy, we have a "digital reality". According to quantum physics,
there is a smallest amount of energy, and every other amount of energy
is a whole-number multiple [0, 1, 2, 3, ..., 1245, 1246, ...] of that
smallest amount. Everything we interact with is energy, even mass
(E=mc2). Light, temperature, sound and all other things are based on
different amounts of energy.

Not quite. While I agree with you that analog does not by
any means imply "infinite different values" (or infinite capacity,
accuracy, or any other such nonsense), it is not the case that
we have a "digital reality." Such a statement makes the very
common error of confusing "digital" with "quantized"; while it
is true that almost all practical digital systems have fixed
limits imposed by quantization, the two words do not mean
quite the same thing. It is certainly possible to have a quantized
analog representation; it is possible in theory (although certainly
far less commonly encountered) to have a "digital" system in
which the quantization limit is not fixed by the system itself
(for instance, simply by allowing for a variable-bit-length
representation in which the maximum possible bit length is
greatly in excess of the accuracy of the information which can
even POSSIBLY be provided).

The "real world" is neither "digital" nor "analog"- it is simply
the real world, and represents the source of information that
these two encoding methods attempt to capture and convey.
The key to what "digital" and "analog" really mean is right there
in the words themselves. A "digital" representation is simply
any system where information is conveyed as numeric values
(symbols which are to be interpreted as numbers, rather than
having some quality which directly corresponds to the "level"
of the information being transmitted), whereas an "analog"
representation is just that. It is any system in which one
value is made to directly represent another, through varying
in an "analogous" fashion (e.g., this voltage is varying in a
manner very similar to the way that air pressure varied, hence
this is an "analog" transmission of sound). In the extreme of this
perspective, we might go so far as to say that there is truly no
such thing as "analog" or "digital" electronics, per se - there are
just electrical signals, which all obey the same laws of physics.
It is how we choose (or are intended) to interpret these signals
that classifies them as "analog" or "digital." (Power systems,
for instance, are in fact neither, as they are not in the normal
sense "carrying information.")

Our friend Mxsmanic is also in error regarding the capacity
limits of analog systems vs. digital; it is practically NEVER the
case that an analog system is actually carrying information at
anything even approaching the Shannon limit (in part due to the
extreme information redundancy of most analog transmission systems).
A perfect example of this is the case of "analog" vs."digital" television.
The Shannon limit of a standard 6 MHz TV channel at a 10 dB
signal-to-noise ratio is a little over 20 Mbits/sec, which is a
good deal greater than the actual information content of a
standard analog broadcast signal. The U.S. HDTV standard,
on the other hand, actually runs close to this limit (the standard
bit rate is in excess of 19 Mbit/sec), and through compression
techniques available only in a digital system manages to convey
an image that in analog form would require a channel of much
higher bandwidth (although these compression methods, since
some of the steps are lossy, should not really be seen as
somehow beating the Shannon limit).
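That figure is easy to check against the Shannon-Hartley formula, C = B log2(1 + S/N); a quick sketch using the 6 MHz bandwidth and 10 dB signal-to-noise ratio quoted above:

import math

bandwidth_hz = 6e6               # standard 6 MHz TV channel
snr = 10 ** (10 / 10)            # 10 dB signal-to-noise ratio, as a power ratio
capacity = bandwidth_hz * math.log2(1 + snr)
print(f"Shannon limit: {capacity / 1e6:.2f} Mbit/s")   # about 20.8 Mbit/s

# The U.S. HDTV (ATSC) payload rate of roughly 19.4 Mbit/s sits just under
# this limit, as described above.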

Bob M.
 

J. Clarke

Bob said:
(snip)

Our friend Mxsmanic is also in error regarding the capacity
limits of analog systems vs. digital; it is practically NEVER the
case that an analog system is actually carrying information at
anything even approaching the Shannon limit (in part due to the
extreme information redundancy of most analog transmission systems).
A perfect example of this is the case of "analog" vs."digital" television.
The Shannon limit of a standard 6 MHz TV channel at a 10 dB
signal-to-noise ratio is a little over 20 Mbits/sec, which is a
good deal greater than the actual information content of a
standard analog broadcast signal. The U.S. HDTV standard,
on the other hand, actually runs close to this limit (the standard
bit rate is in excess of 19 Mbit/sec), and through compression
techniques available only in a digital system manages to convey
an image that in analog form would require a channel of much
higher bandwidth (although these compression methods, since
some of the steps are lossy, should not really be seen as
somehow beating the Shannon limit).

In practice this sometimes shows--digital TV has no redundancy to speak
of--it either has a perfect image or it has dead air, there is no gradual
degradation like there is with analog.
 

Mxsmanic

Bob said:
Our friend Mxsmanic is also in error regarding the capacity
limits of analog systems vs. digital; it is practically NEVER the
case that an analog system is actually carrying information at
anything even approaching the Shannon limit (in part due to the
extreme information redundancy of most analog transmission systems).

True, but the key difference is that analog systems are theoretically
_capable_ of doing this (and can sometimes approach it quite closely,
with proper design), whereas digital systems, by their very nature,
sacrifice some of this theoretical capacity in exchange for holding
errors below a predetermined threshold.

A perfect example of this is the case of "analog" vs."digital" television.
The Shannon limit of a standard 6 MHz TV channel at a 10 dB
signal-to-noise ratio is a little over 20 Mbits/sec, which is a
good deal greater than the actual information content of a
standard analog broadcast signal. The U.S. HDTV standard,
on the other hand, actually runs close to this limit (the standard
bit rate is in excess of 19 Mbit/sec), and through compression
techniques available only in a digital system manages to convey
an image that in analog form would require a channel of much
higher bandwidth (although these compression methods, since
some of the steps are lossy, should not really be seen as
somehow beating the Shannon limit).

The HDTV standard isn't doing anything that analog can't do. It is,
after all, using analog methods to transmit its "digital" information.
The only advantage of digital is that the error rate can be carefully
controlled, whereas in analog, there is no "error rate"--everything is
signal, even when it's not.
 

Mxsmanic

J. Clarke said:
In practice this sometimes shows--digital TV has no redundancy to speak
of--it either has a perfect image or it has dead air, there is no gradual
degradation like there is with analog.

True for all digital systems. Analog systems always have some sort of
error, and this error increases gradually and gracefully as noise
increases. Digital systems draw an artificial line below which all is
noise and above which all is signal. As long as noise actually
remains below this line in the channel, digital transmission is
error-free. But if it rises above the line, there is a _sudden_ (and
often catastrophic) appearance of serious, uncorrectible errors in the
channel.

The whole idea of digital is to draw the line at the right place, so
that you always have error-free transmission. You sacrifice the bit
of channel capacity below the line in order to get error-free
transmission at a slightly slower rate than analog might provide.
 

Bob Myers

Mxsmanic said:
The HDTV standard isn't doing anything that analog can't do.

Actually, it's doing quite a bit that analog can't do (or more precisely,
it is easily doing things that would be extremely difficult, if not practically
impossible to do in analog form). Chief among these is permitting
a rather high degree of data compression while still keeping all
components of the signal completely independent and separable.
This points out one of the true advantages of a digital representation
over analog - it puts the information into a form that is easily
manipulated, mathematically, without necessarily introducing error
and loss into that manipulation. The 1920 x 1080, 60 Hz, interlaced-
scan format (which is the highest currently in use in the U.S. system)
would basically be impossible to transmit in an analog system. (The
best analog TV system ever put into use, Japan's MUSE (Multiple
sub-Nyquist Sampling Encoding), didn't achieve the same level of quality, and
sacrificed quite a lot to squeeze a signal of somewhat lower resolution
into quite a bit more bandwidth. As noted earlier, this isn't a violation
of the Shannon limit, but (as is the case with just about all compression
methods) trading off redundancy for capacity).

Your comments regarding "analog" systems having an inherent
capability to more readily approach the Shannon limit than "digital"
again are based on the conventional, common examples of specific
implementations of the two (i.e., a straight equal-weight binary
representation for "digital," and the sort of simple examples of "analog"
that we're all used to). But this isn't really an inherent limitation of
digital in general. Digital representations can be (and have been)
designed which do not place equal noise margin on all bits, etc.,
and provide a more "graceful" degradation in the presence of noise.
Shannon applies to ALL forms of information encoding, and none
have a particular advantage in theory in coming closest to this limit.

It is, after all, using analog methods to transmit its "digital" information.

Again, a confusion of "analog" and "digital" with other terms which
are closely associated but not identical (e.g., "continuous" vs.
"quantized," "linear," "sampled," and so forth. The methods used to
transmit HDTV are not "analog," they simply aren't the easy-to-grasp
examples of "digital" that we all see in classes or books on the basics.

Bob M.
 

J. Clarke

Bob said:
(snip)

Again, a confusion of "analog" and "digital" with other terms which
are closely associated but not identical (e.g., "continuous" vs.
"quantized," "linear," "sampled," and so forth). The methods used to
transmit HDTV are not "analog," they simply aren't the easy-to-grasp
examples of "digital" that we all see in classes or books on the basics.

In a sense. It's a digital signal transmitted by modulating an analog
carrier. The HD signal can be extracted from the output stream of a BT848,
which is designed to decode analog SD TV.

I don't expect the average non-engineer to grasp this level of subtlety
though.
 

Bob Myers

J. Clarke said:
In a sense. It's a digital signal transmitted by modulating an analog
carrier. The HD signal can be extracted from the output stream of a BT848,
which is designed to decode analog SD TV.

I don't expect the average non-engineer to grasp this level of subtlety
though.

Yes, we've definitely passed the point of general interest here, so I'm
inclined to close out my participation in the thread at this point.

But just to follow up on the above - in the sense I meant, meaning the
specific definitions of "analog" and "digital" covered earlier, I would
say that there's no such thing as an "analog carrier"- the carrier
itself, prior to modulation, carries no information (nor is the information
being impressed upon it in an "analog" form), so in what sense would
we call it "analog" to begin with?

Bob M.
 

J. Clarke

Bob said:
Yes, we've definitely passed the point of general interest here, so I'm
inclined to close out my participation in the thread at this point.

But just to follow up on the above - in the sense I meant, meaning the
specific definitions of "analog" and "digital" covered earlier, I would
say that there's no such thing as an "analog carrier"- the carrier
itself, prior to modulation, carries no information (nor is the
information being impressed upon it in an "analog" form), so in what sense
would we call it "analog" to begin with?

In the sense that a chip which is not designed to deal with digital signals
can nonetheless extract the signal.
 

Captin

A friend has got a new LCD monitor and he wants to know if connecting
via a DVI connector would improve the quality over the VGA connectors? I
would suspect it's imperceptible. He doesn't have a video card with a
DVI connector yet, but he wants to get one if there's any difference.

Yousuf Khan

I'm following threads on LCD monitors because I'm in the market. It
seems some opinion is that DVI is not a step forward.
The way I see it, even if DVI is no advantage, the better monitors
will have the option regardless?
Also, what I'm wondering about is how much the video card contributes
towards DVI performance? Are some cards simply holding back the
benefits of the DVI interface?
 

Praxiteles Democritus

Yeah, that's basically what I've been hearing from asking around
recently. Several people that I've asked said they couldn't tell the
difference between the DVI interface and the VGA one.

That's my experience so far. We tried analog compared to DVI on his
LCD and to the naked eye we couldn't tell the difference.
 

YKhan

Captin said:
I'm following threads on LCD monitors because I'm in the market. It
seems some opinion is that DVI is not a step forward.
The way I see it, even if DVI is no advantage, the better monitors
will have the option regardless?
Also, what I'm wondering about is how much the video card contributes
towards DVI performance? Are some cards simply holding back the
benefits of the DVI interface?

I doubt it, if anything DVI should be the great equalizer of video
cards (at least as far as picture quality goes, not performance-wise).
DVI bypasses a video card's own RAMDACs in favour of the RAMDACs built
into the LCD monitor. So no matter whether you have an expensive video
card or a cheap one, you're still going to be using the same RAMDACs,
i.e. those inside the monitor. In the past, the RAMDACs inside some
expensive video cards were probably slightly higher precision than
those inside cheap cards; this gained the expensive cards an advantage.

I guess what happens now is that the picture quality is going to be
dependent on the quality of the RAMDACs inside the monitors instead of
inside the video cards.

Yousuf Khan
 

Captin

Mxsmanic said:
True for all digital systems. Analog systems always have some sort of
error, and this error increases gradually and gracefully as noise
increases. Digital systems draw an artificial line below which all is
noise and above which all is signal. As long as noise actually
remains below this line in the channel, digital transmission is
error-free. But if it rises above the line, there is a _sudden_ (and
often catastrophic) appearance of serious, uncorrectible errors in the
channel.

The whole idea of digital is to draw the line at the right place, so
that you always have error-free transmission. You sacrifice the bit
of channel capacity below the line in order to get error-free
transmission at a slightly slower rate than analog might provide.

I’m asking everyone here. Does the performance of DVI vary a great
deal from video card to video card?
I mean is it possible we have a situation where DVI offers a step
forward with some video cards and not so with others?
 
