Nv47: 24 pipes. Spring 05


hona ponape

It's not an argument. Anyone can find out for themselves.

If you want to go on arguing, you'll have to pay for another five minutes.

And this isn't the complaints department, it's 'Getting hit on the head
lessons' in here. Waa
 

assaarpa

The 486 was the last CPU to have one pipeline...
What is the difference?

Yadda yadda yadda... it doesn't MATTER. That's a red herring. The real
difference is that a CPU executes a linear sequence of instructions; yes, it
is possible to re-order instructions and execute them out of order when there
are no dependencies on other instructions, and so on. But that is grasping at
straws.

Now look at a GPU: the instructions that are executed are the SAME for each
pixel in a primitive being scan-converted. If a triangle covers 100 pixels,
yes, every single one of those 100 pixels executes precisely the same shader
instructions. The data that comes from the samplers varies, but the shader is
the same. This means it is feasible to throw N shader units at the problem
and get the job done N times faster (theoretically). In practice, fetching
values from samplers is I/O bound: there is a specific amount of memory
bandwidth the system can sustain, and beyond that the system is bandwidth
limited. To remedy this to a degree, the memory subsystem is divided into
multiple levels, where the level closest to the shader unit is faster but
smaller, and the slowest memory sits in the DDR3 (for example) modules, which
are the cheapest kind of memory, so there is the most of that type. This is
an oversimplification of a typical contemporary GPU, but it should do the
trick.
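To make the "same program, different data" point concrete, here is a toy
sketch in Python (not any real GPU API; the shader function, texel values
and unit count are all made up for illustration):

```python
# The same "shader" runs for every pixel of the triangle; only the
# per-pixel sampler data differs. That uniformity is what lets you split
# the pixels across N shader units and get the same result.

def shader(texel):
    # Identical instruction sequence for every pixel.
    return min(255, texel * 2)

# Pretend sampler results for a 100-pixel triangle.
texels = [i % 64 for i in range(100)]

# Serial execution: one shader unit handles everything.
serial = [shader(t) for t in texels]

# With N units, each unit takes every N-th pixel; the program is the
# same, so the work divides cleanly (until memory bandwidth saturates).
N = 4
chunks = [texels[i::N] for i in range(N)]
parallel = [shader(t) for chunk in chunks for t in chunk]

assert sorted(parallel) == sorted(serial)
```

The theoretical N-times speedup holds exactly because no pixel's result
depends on another's; the bandwidth limit mentioned above is what breaks the
scaling in practice, not the program structure.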

Now, this is a bit different from a pipelined CPU architecture... because the
term is simply abused by people who only almost know what they are talking
about. Generally, the people who are clued in talk about shader arrays (the
terminology depends on the corporate culture you come from) or similar. The
GPU IMPLEMENTATION can be pipelined or not; more likely it is pipelined than
not, because non-pipelined architectures are ****ing slow and inefficient.
But that applies to a single shader unit, not the array: an array of shaders
is not normally referred to as "pipelined" when it contains more than a
single shader. That is a completely different issue.
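The throughput argument for pipelining a single unit can be sketched with
back-of-the-envelope cycle counts (illustrative numbers, not from any real
shader or CPU design):

```python
# A unit whose work is split into S one-cycle stages.

def cycles_non_pipelined(ops, stages):
    # Each op occupies the whole unit for `stages` cycles before the
    # next op may enter.
    return ops * stages

def cycles_pipelined(ops, stages):
    # Fill the pipe once, then retire one op per cycle.
    return stages + (ops - 1)

print(cycles_non_pipelined(1000, 5))  # 5000
print(cycles_pipelined(1000, 5))      # 1004
```

For long instruction streams the pipelined unit approaches one result per
cycle regardless of stage count, which is why a non-pipelined implementation
is, as the post puts it, slow and inefficient.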

And to go back to things that annoy in the CPU discussion above...

The 486DX has an integrated FPU (on the 486SX it came as the separate 487);
by the definition above that would count as a 'pipeline', but the correct
terminology would be 'execution unit'. The real meat of this is that the 386
wasn't pipelined and the 486 was: it was the processor architecture from
*Intel* that introduced pipelining to the mainstream x86 product line.

It was the Pentium that introduced a multi-execution-unit ALU core to the
x86 product line; those units were literally called the U and V pipes. The
next core design was something completely different: it decoded the x86
instruction stream into micro-ops, which were executed on a number of
execution units (3, if I remember correctly!). Two of them were simpler and
executed only the simplest instructions, and one executed the more complex
instructions, such as division, multiplication and the like. This was the
PentiumPro architecture, which was used in the PentiumII and PentiumIII as
well, with the difference that MMX and SSE were added on the subsequent
processors.
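A toy sketch of that kind of dispatch, assuming the unit counts and op
classes from the (hedged) description above; this is an illustration of the
idea, not a faithful model of the real PPro core:

```python
# Hypothetical greedy scheduler: two units take only "simple" micro-ops,
# one unit also handles complex ones (div, mul, ...).

SIMPLE_UNITS = 2
COMPLEX_UNITS = 1

def dispatch(uops):
    """Assign micro-ops to free units cycle by cycle, in program order."""
    schedule = []
    i = 0
    while i < len(uops):
        simple_free, complex_free = SIMPLE_UNITS, COMPLEX_UNITS
        cycle = []
        while i < len(uops):
            if uops[i] == "simple" and simple_free:
                simple_free -= 1
            elif complex_free:            # complex op, or simple spillover
                complex_free -= 1
            else:
                break                     # no unit free: start next cycle
            cycle.append(uops[i])
            i += 1
        schedule.append(cycle)
    return schedule

sched = dispatch(["simple", "simple", "div", "simple", "mul"])
# Three units per cycle at most, so the five ops need two cycles.
```

A real out-of-order core would also reorder independent micro-ops rather
than issue strictly in program order; that part is omitted here for brevity.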

But the reason I am telling this is that the PPro architecture wasn't really
'multi-pipe' in the traditional sense: it was multiple execution units and
out-of-order execution of a single instruction stream at the micro-op level.
The next design, the NetBurst architecture, went a step further: the decoded
instruction streams were stored in a so-called trace cache, again feeding
multiple execution units, and the pipeline length was more than doubled
compared to the previous generations (I won't insult anyone by explaining
what pipelining means in practice; I assume the reader is familiar with
microprocessor design basics, for crying out loud). The pipeline was broken
into more distinct stages to reach higher operating frequencies: simpler
stages complete in less time, so the frequency can be increased while the
design still works reliably and predictably. This seems to have been a
market-driven decision rather than a purely engineering one, but that can be
speculated about, so everyone can draw their own conclusions; that is just
mine and not necessarily the Truth.
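The stages-versus-frequency trade-off is simple arithmetic; the delay
numbers below are invented for illustration and not taken from any real
core:

```python
# Splitting the same total logic delay into more, shorter stages lets
# the clock run faster (each stage also pays a fixed latch overhead).

LOGIC_DELAY_NS = 10.0    # total combinational delay through the core
LATCH_OVERHEAD_NS = 0.1  # per-stage flip-flop/latch overhead

def max_frequency_mhz(stages):
    stage_time_ns = LOGIC_DELAY_NS / stages + LATCH_OVERHEAD_NS
    return 1000.0 / stage_time_ns

for stages in (5, 10, 20):
    print(stages, round(max_frequency_mhz(stages)))
# 5  -> 476 MHz, 10 -> 909 MHz, 20 -> 1667 MHz
```

Doubling the stage count roughly doubles the attainable frequency until the
fixed per-stage overhead dominates, which is why the deep NetBurst pipeline
reads as a frequency (and marketing) play rather than a throughput one.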

Anyway, the point I am driving at is that 'pipelining' in a CPU -or- a GPU
is an implementation detail and not relevant to shader arrays per se. Merry
Xmas!
 

dvus

assaarpa said:
[snip: full post quoted above]

What'd he say?

 
