Could this ever work? Using another entire PC to do the job of a graphics/video card on a 'host' PC...

DAN

This may sound like complete rubbish - and it probably is. I'm
certainly NO expert! It's just a random thought I had.....

The PC - three of its main elements are: processor (CPU), memory and
chipset, right?

The graphics/video card - three of its main elements are: processor
(GPU), memory and chipset, again... (right?)

It seems to me that a PC and a video card are made up of similar
components, but their relative clock speeds are very different at
present.

E.G.

My current Athlon X2 has two cores running at 2000 MHz.
An Nvidia 8800 GTX video card, for example, only seems to have a single
core running at 575 MHz (is that right?)

If we have the technology to have a main processor core running well
in excess of 3000 MHz, why is it we can't do the same for the GPU in a
video card? (I understand, however, that a lot of video card speed has
more to do with the memory clock.)

Given that some video cards cost the earth, with fairly modest to
low core/memory speeds COMPARED to even a very low-end mobo/CPU/RAM
combo (that's if they CAN even be compared - I guess that's what I'm
really asking)... what if we could use an entire PC (say, one we
didn't use anymore, but which had a higher CPU clock and more RAM than
current video cards) connected in some way to our MAIN PC, solely to
do the job of a video card? It would have far faster processing
speeds and as much RAM as we could throw at it.

I guess the interface between the two PCs would be a problem
though... How would THAT work?

Someone tell me this is complete rubbish (as I'm sure it is!), but
more importantly, I'm interested in WHY! :)

Cheers,

Dan
 
Paul

DAN said:
[snip]

What if we could use an entire PC (say, one we didn't use anymore,
but which had a higher CPU clock and more RAM than current video
cards) connected in some way to our MAIN PC, solely to do the job
of a video card? It would have far faster processing speeds and as
much RAM as we could throw at it.

[snip]

Differences:

1) CPU makers own their own fab facilities. Intel is about to
ship 45nm, and has decent volume at 65nm. AMD has had 65nm
for a while, and their best clocks come from 90nm parts.

GPU makers (Nvidia and ATI) are fabless companies that don't
fork out $2B every time the geometry shrinks. They buy fab
capacity at places like TSMC or IBM, so they are at the mercy of
what size of chip will yield well in whatever the best process is
at TSMC or IBM.

Now, the very latest ATI parts (3850/3870) are supposed to
be at 55nm, so from that point of view they've finally caught
up. But there is still a lot of product being manufactured in
larger-geometry processes.

A company that owns its own fabs can do a lot more custom
design work than a company that doesn't own a fab.

2) One limiting factor for current technology is leakage current.
Intel seems to have done a good job of curing their problems
(compare Prescott to Conroe). AMD is pushing their stuff pretty
hard, and the top-bin parts have a TDP of 125W.

TSMC will be dealing with the very same leakage problems as
everyone else, but their rate of progress will be different.

You can only pack in as much circuitry, and run as high a clock,
as thermal limits and on-die electrical noise limits will
allow. The two kinds of chip design are entirely different.
The GPU has a very wide memory interface, for example, and
that doesn't help the noise issue.

3) The architectures are entirely different.

The GPU has some kind of central dispatcher feeding a large
number (hundreds) of functional units.

The multi-core CPUs have a small number of general-purpose
units.

The GPU has way more GFLOPs to offer, if a problem can be
fitted to the resources it provides.

And that is the reason it is alright to be at 575 MHz:
575 MHz times 320 functional units beats 3000 MHz times 4 functional
units, as long as the problem being solved is massively parallel.

As an example, I tried a search on "GPGPU speedup" and stopped
at the first thread I found. GPGPU stands for "general-purpose
computing on graphics processing units", and libraries exist today
that allow application programmers to use the GPU to do math.

GPGPU definition
http://en.wikipedia.org/wiki/Gpgpu

GPGPU speedup factors achieved
http://www.gpgpu.org/forums/viewtopic.php?t=110
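
As a minimal sketch of what that style of programming looks like
(my illustration, not from the links above - it assumes NVIDIA's
CUDA toolkit, which targets exactly this 8800-class hardware; the
kernel and variable names are made up):

    #include <cuda_runtime.h>

    /* Each GPU thread scales ONE array element. Thousands of slow
       "function units" working at once beat a few fast CPU cores. */
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x; /* this thread's element */
        if (i < n)
            data[i] *= factor;
    }

    int main(void)
    {
        const int n = 1 << 20;                   /* one million floats     */
        float *d;
        cudaMalloc(&d, n * sizeof(float));       /* array in video RAM     */
        cudaMemset(d, 0, n * sizeof(float));     /* stand-in for real data */
        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n); /* 4096 blocks x 256 threads */
        cudaDeviceSynchronize();                 /* wait for the GPU       */
        cudaFree(d);
        return 0;
    }

The launch line is the whole trick: instead of a loop over a million
elements, you ask for a million threads and let the dispatcher keep
the functional units fed.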

You can expect great things in the future, as application
programmers become more familiar with GPGPU programming.
I can imagine someone getting a 30x speedup transcoding a
movie, if the job is done on the GPU.

Even if it "only" runs at 575 MHz :)

If a problem doesn't have parallel elements to it, neither a
multi-core processor nor a GPU will make it go faster. Don't
expect to see Microsoft Office benefit from anything other than
pure clock speed.
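
To make the serial case concrete (my sketch, not anything from
Office itself): a loop where each step needs the previous result
has no parallelism to exploit, so only raw clock speed helps.

    /* Newton's iteration for sqrt(a): step i+1 needs the result of
       step i, so the work cannot be split across cores or GPU units. */
    float newton_sqrt(float a)
    {
        float x = a > 1.0f ? a : 1.0f;   /* crude starting guess          */
        for (int i = 0; i < 20; i++)
            x = 0.5f * (x + a / x);      /* each step depends on the last */
        return x;                        /* a faster clock is the only way
                                            to make this finish sooner    */
    }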

But for a lot of multimedia applications (video editing
or Photoshop), these are interesting times.

Paul
 
Howard Goldstein

: It seems to me that a PC and a video card are made up of similar
: components, but their relative clock speeds are very different at
: present.

I think you're right at a very high level, but the similarity ends
as soon as you scratch the surface. Your other PC has, at best, a
few SMP cores to throw at the graphics, whereas modern graphics
cards support many, many more parallel processes. The results
would be disappointing.

There was a news story earlier this week that runs in the other
direction: some guys were talking about (or actually did?) using a
graphics card in the PC as a speedy encryption-breaking peripheral.
As with graphics, there are elements of that task that can be
tackled quite well in parallel.
 
DAN

Cheers guys, that's really interesting. I had no idea of those
developments.

So it seems my question has been turned in another direction: what's
stopping AMD/Intel partnering with mobo/chipset manufacturers to adapt
a GPU-like system for the main system mobos?

I suppose I could try to answer that myself, though I'm not sure I'd
be right - but I'll have a go...

Using Paul's example of Microsoft Office... running this software
via a GPU (assuming, hypothetically, it was possible) would currently
slow it right down, as it's only programmed to use ONE of the GPU's
MANY 'function units', and one function unit on an 8800 GTX runs at
'only' 575 MHz, whereas a standard system CPU could run that single
function in excess of 3000 MHz. (And I assume that would currently
apply to most 'general purpose' software in use under Windows?)
HOWEVER, if MS Office (or whatever software) was reprogrammed to take
advantage of the GPU's many function units at the same time (in
'parallel'), it could theoretically run a lot faster.

Have I got that right?

And by the way, these 'function units' we're talking about - is that
the same as the 'pipelines' I hear a lot about in a GPU, or is that
something different entirely? :)

Dan
 
Frank McCoy

In alt.comp.hardware.pc-homebuilt DAN said:
So it seems my question has been turned in another direction: what's
stopping AMD/Intel partnering with mobo/chipset manufacturers to adapt
a GPU-like system for the main system mobos?

AMD bought out ATI, and is working on that.
 
Howard Goldstein

: [snip]
:
: HOWEVER, if MS Office (or whatever software) was reprogrammed to take
: advantage of the GPU's many function units at the same time (in
: 'parallel'), it could theoretically run a lot faster.
:
: Have I got that right?

Yes, I think you do. The problem goes all the way back to the
beginning of the software design process and the designer who didn't
give much thought to concurrency. Compilers can help somewhat, but to
really exploit the available parallelism you have to start out with a
pencil and a clean sheet of paper, and design for concurrency at the
front end.

I think the geek spooks in the codebreaking world, and the met
(weather) and nuke sim folks, are probably at the leading edge of
this sort of thing. And they build custom hardware.


:
: And by the way, these 'function units' we're talking about - is that
: the same as the 'pipelines' I hear a lot about in a GPU, or is that
: something different entirely? :)

If they're like the pipelines in a CPU, they're there for efficiency:
to avoid clock cycles spent waiting for hardware (like off-die memory
access). And that raises the question of how much of a good thing
they are, because real workloads aren't purely serial, and when
execution takes an unexpected turn, all that hard work filling the
pipeline has to get dumped, and you pay the price in cycles again
while it refills.
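
A small illustration of that refill cost (my sketch, not from the
posts above): a branch whose outcome the hardware cannot guess
forces exactly that dump-and-refill.

    /* Counting elements above a threshold. With random data the branch
       mispredicts about half the time, so the CPU keeps dumping the
       half-finished instructions behind it and refilling the pipeline. */
    int count_above(const int *data, int n, int threshold)
    {
        int count = 0;
        for (int i = 0; i < n; i++)
            if (data[i] > threshold)  /* unpredictable with random input */
                count++;
        return count;  /* sorting data[] first makes the branch predictable,
                          and the same loop runs dramatically faster */
    }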
 
Howard Goldstein

: I think the geek spooks in the codebreaking world, and the met
: (weather) and nuke sim folks, are probably at the leading edge of
: this sort of thing. And they build custom hardware.

And DUH, I left out a really obvious example close to home: SETI@home
and the fellow travellers in the distributed-computing scene.

/need more coffee
 
