Wall Street Journal & Rahul Sood: ATI & AMD a done deal


Yousuf Khan

The Wall Street Journal is stating that the deal is now a done deal. The
WSJ link below is a registration site, so I'll just quote it below the
link. The Rahul Sood blog link is free, I'll just post the link with
just a single quote.

I'm not so sure this is such a good thing to have happened. I guess
we'll only know in a few years.

Yousuf Khan

Rahul Sood's Blog:
"I am busting at the seams here, I have been holding my opinions on this
potential ATi + AMD deal for many months now. Obviously everyone knows
Intel’s latest product offering is excellent and news on
Conroe/Woodcrest is humming along – but this deal with ATi + AMD has
positive cataclysmic effects on the entire industry. This deal slightly
overshadows any news going on in our industry at the moment."
http://voodoopc.blogspot.com/2006/07/amd-ati-one-one-three.html

WSJ: AMD Will Pay $5.4 Billion To Purchase ATI Technologies
http://us.rd.yahoo.com/finance/exte...115368512442514775.html?mod=yahoo_hs&ru=yahoo
 

George Macdonald

The Wall Street Journal is stating that the deal is now a done deal. The
WSJ link below is a registration site, so I'll just quote it below the
link. The Rahul Sood blog link is free, I'll just post the link with
just a single quote.

I'm not so sure this is such a good thing to have happened. I guess
we'll only know in a few years.

We'll certainly get no clue from the anal...yst babble on the subject...
some of the most clueless tripe I've read yet on tech. Those guys don't
understand that the collaboration between AMD and nVidia does not have to
be at the same level as nVidia and Intel. D'oh... HT is an open
standard!... Intel's FSB is NOT!

Then there's this one: " Markham, Ontario-based ATI sells its chips to
among others, AMD and Intel, AMD's much-bigger rival for PC and server
chips." from
http://www.marketwatch.com/News/Sto...1DFA7A}&source=blq/yhoo&dist=yhoo&siteid=yhoo
Those people actually get paid good $$ for this garbage?

Personally I dunno what to make of this move - did they make enough from
the sale of Alchemy and Geode to pay for this? I wouldn't have thought so.
I hope that AMD does not F/U ATI as much as Intel did with the graphics &
other chip businesses they absorbed. I see that nVidia's stock took a dump
on Friday too... more anal...yst FUD?

There are lots of possible pros & cons for AMD and ATI here but as you say
it'll take a bit of time for the dust to settle. It'll be interesting to
see how Intel reacts though: will they revoke ATI's license to their FSB?
 

Yousuf Khan

George said:
We'll certainly get no clue from the anal...yst babble on the subject...
some of the most clueless tripe I've read yet on tech. Those guys don't
understand that the collaboration between AMD and nVidia does not have to
be at the same level as nVidia and Intel. D'oh... HT is an open
standard!... Intel's FSB is NOT!


What "levels" are you talking about?
Then there's this one: " Markham, Ontario-based ATI sells its chips to
among others, AMD and Intel, AMD's much-bigger rival for PC and server
chips." from
http://www.marketwatch.com/News/Sto...1DFA7A}&source=blq/yhoo&dist=yhoo&siteid=yhoo
Those people actually get paid good $$ for this garbage?

I don't think that was an analyst saying it, I think it was just a reporter.
Personally I dunno what to make of this move - did they make enough from
the sale of Alchemy and Geode to pay for this? I wouldn't have thought so.
I hope that AMD does not F/U ATI as much as Intel did with the graphics &
other chip businesses they absorbed. I see that nVidia's stock took a dump
on Friday too... more anal...yst FUD?

They're also saying that the deal is mostly cash, $4.3B in cash, and the
rest is stock. Where is that cash coming from? Last I saw, they had
$2.5B in cash in the bank.
There are lots of possible pros & cons for AMD and ATI here but as you say
it'll take a bit of time for the dust to settle. It'll be interesting to
see how Intel reacts though: will they revoke ATI's license to their FSB?

I don't think there's any doubt that Intel will revoke ATI's chipset
license. But that's no big deal, ATI only had a small window of
opportunity to sell their Intel chipsets, namely in between Intel
chipset production crises. Once those crises are over, Intel typically
discards its partners.

Here's a (long) analysis of it by Charlie Demerjian at the Inq. He says
it's all about mini-cores on CPUs, not GPUs or chipsets. He's even
speculating that ATI will soon not be making GPUs (to placate Nvidia). I
don't know about that, $5.4B seems like a big price to pay just for some
engineering help. I don't think AMD can afford to simply let the revenue
from the graphics products wither away.

AMD has to buy ATI to survive
http://theinq.net/default.aspx?article=33219
 

Rod Speed

George Macdonald said:
We'll certainly get no clue from the anal...yst babble on the
subject... some of the most clueless tripe I've read yet on tech. Those
guys don't understand that the collaboration between AMD and nVidia
does not have to be at the same level as nVidia and Intel. D'oh...
HT is an open standard!... Intel's FSB is NOT!

Then there's this one: " Markham, Ontario-based ATI sells its chips to
among others, AMD and Intel, AMD's much-bigger rival for PC and server
chips." from
http://www.marketwatch.com/News/Sto...1DFA7A}&source=blq/yhoo&dist=yhoo&siteid=yhoo
Those people actually get paid good $$ for this garbage?

Personally I dunno what to make of this move - did they make enough
from the sale of Alchemy and Geode to pay for this? I wouldn't have
thought so. I hope that AMD does not F/U ATI as much as Intel did
with the graphics & other chip businesses they absorbed. I see that
nVidia's stock took a dump on Friday too... more anal...yst FUD?

Nope, just a recognition that AMD is unlikely to be buying them out now.
There are lots of possible pros & cons for AMD and ATI here but as
you say it'll take a bit of time for the dust to settle. It'll be interesting
to see how Intel reacts though: will they revoke ATI's license to their FSB?

Looks like they already have.
 

Tony Hill

What "levels" are you talking about?

Unless I'm misunderstanding George, I think he means that nVidia will
be allowed to keep building chipsets for AMD processors regardless of
what AMD does. On the other hand, if Intel wants, they can easily
cancel (or at least fail to renew) ATI's license to build chipsets for
Intel processors.
I don't think that was an analyst saying it, I think it was just a reporter.

Well ATI actually DOES sell chips to Intel, though not AMD. Intel
sells one or two motherboards with ATI chipsets on them as well as a
couple server boards with ATI video chips. However I really don't
think this is what the analyst and/or reporter were getting at.
They're also saying that the deal is mostly cash, $4.3B in cash, and the
rest is stock. Where is that cash coming from? Last I saw, they had
$2.5B in cash in the bank.

They are taking out a $2.5B loan to pay for a good chunk of it.
I don't think there's any doubt that Intel will revoke ATI's chipset
license. But that's no big deal, ATI only had a small window of
opportunity to sell their Intel chipsets, namely in between Intel
chipset production crises. Once those crises are over, Intel typically
discards its partners.

Perhaps, though from what I understand the motherboard chipset
business is now 25% of ATI's revenue and a good chunk of that will be
from Intel-based systems.
Here's a (long) analysis of it by Charlie Demerjian at the Inq. He says
it's all about mini-cores on CPUs, not GPUs or chipsets. He's even
speculating that ATI will soon not be making GPUs (to placate Nvidia). I
don't know about that, $5.4B seems like a big price to pay just for some
engineering help. I don't think AMD can afford to simply let the revenue
from the graphics products wither away.

I see very little other than withering happening from this deal, on
both the AMD and ATI side of things. This is definitely one of the
dumbest mergers I've seen in a while. It's not going to be quite as
big or as bad as the HPaq merger, but it'll be close.

That article focuses on a LOT of possibilities for the future without
answering some important questions. First and foremost in my mind,
how are these "GPU cores" going to get all their memory bandwidth?
Graphics is VERY dependent on high memory bandwidth; ATI's latest and
greatest video card has just shy of 50GB/s of memory bandwidth. AMD's
latest and greatest processors have 12.8GB/s. That's a very
substantial difference that isn't likely to shrink any time soon.
Let's not forget that the latest ATI and nVidia graphics chips are
currently more than twice the size of AMD's dual-core processors (384M
transistors for the Radeon X1900 vs. 154M for the Athlon64 X2). Even
if they don't grow at all and you can shave off 25% of the transistors
from duplication of duties, this still won't be very practical to
implement alongside a CPU on 45nm production. Maybe on a 32nm process
it'll start being practical, assuming no further advances in graphics
technology for the next ~6 years.
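
For what it's worth, those two figures are easy to sanity-check from bus
width times transfer rate. A rough sketch in Python (assuming dual-channel
DDR2-800 on the CPU side and a 256-bit GDDR3 interface at ~1.55 GT/s on the
X1900; those part choices are my assumptions, not quoted specs):

    # peak bandwidth = bus width (in bytes) x transfers per second
    def bandwidth_gb_s(bus_bits, transfers_per_sec):
        return bus_bits / 8 * transfers_per_sec / 1e9

    cpu = bandwidth_gb_s(128, 800e6)    # 2 x 64-bit channels at 800 MT/s -> 12.8
    gpu = bandwidth_gb_s(256, 1.55e9)   # 256-bit GDDR3 at ~1.55 GT/s -> ~49.6
    print(cpu, gpu, gpu / cpu)          # roughly the 4x gap described above

Soldered, wide, fast memory sitting right next to the GPU is what makes the
second number possible.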
 

Yousuf Khan

Tony said:
Perhaps, though from what I understand the motherboard chipset
business is now 25% of ATI's revenue and a good chunk of that will be
from Intel-based systems.

At their conference call this morning, they said it only amounts to
about $70m in profit for ATI, even if it is a lot of revenue.
I see very little other than withering happening from this deal, on
both the AMD and ATI side of things. This is definitely one of the
dumbest mergers I've seen in a while. It's not going to be quite as
big or as bad as the HPaq merger, but it'll be close.

Well, as far as discrete graphics goes, that will still be around, and
it will work with any platform, and there is nothing that anyone can do
to prevent it from being put onto the enemy's platform. Now, what will
need to be found out is whether ATI will pull Intel's Crossfire license,
now that Intel has pulled ATI's chipset license.
That article focuses on a LOT of possibilities for the future without
answering some important questions. First and foremost in my mind,
how are these "GPU cores" going to get all their memory bandwidth?
Graphics is VERY dependent on high memory bandwidth; ATI's latest and
greatest video card has just shy of 50GB/s of memory bandwidth. AMD's
latest and greatest processors have 12.8GB/s. That's a very
substantial difference that isn't likely to shrink any time soon.
Let's not forget that the latest ATI and nVidia graphics chips are
currently more than twice the size of AMD's dual-core processors (384M
transistors for the Radeon X1900 vs. 154M for the Athlon64 X2). Even
if they don't grow at all and you can shave off 25% of the transistors
from duplication of duties, this still won't be very practical to
implement alongside a CPU on 45nm production. Maybe on a 32nm process
it'll start being practical, assuming no further advances in graphics
technology for the next ~6 years.

Well, they did say, they don't want to put whole GPUs into the processor
die. They may want to put some mini-cores with GPU-derived features onto
the core. As far as getting the large memory bandwidths, that shouldn't
be a problem with AMD's integrated memory controller. In fact, a GPU
might benefit from it more than a CPU can, so they may upgrade the
memory controller for faster speeds just for the GPU.

Yousuf Khan
 

Peter Matthias

Tony said:
That article focuses on a LOT of possibilities for the future without
answering some important questions.  First and foremost in my mind,
how are these "GPU cores" going to get all their memory bandwidth?

How much memory bandwidth do GPUs need?
Graphics is VERY dependent on high memory bandwidth; ATI's latest and
greatest video card has just shy of 50GB/s of memory bandwidth.  

3D graphics need the bandwidth.
AMD's
latest and greatest processors have 12.8GB/s.  That's a very
substantial difference that isn't likely to shrink any time soon.

Again, how much bandwidth do /you/ need? I use an old Matrox G450 graphics
card with my A64. It fully supports my needs regarding memory bandwidth. I
don't know its bandwidth, but I suppose it has around 1GB/s. 12.8GB/s is
enough for 90% of the market. That counts.
Let's not forget that the latest ATI and nVidia graphics chips are
currently more than twice the size of AMD's dual-core processors (384M
transistors for the Radeon X1900 vs. 154M for the Athlon64 X2).

What is their market share?

With this deal, ATI will be able to reuse the huge development investment
they make in the top gamers' niche market across 90% of the market. Intel
will be able to do so as well; Nvidia will not.
Even
if they don't grow at all and you can shave off 25% of the transistors
from duplication of duties, this still won't be very practical to
implement alongside a CPU on 45nm production.  Maybe on a 32nm process
it'll start being practical, assuming no further advances in graphics
technology for the next ~6 years.

Personally I think a dual core with an integrated GPU at 65nm will be
very, very attractive.

Peter
 

George Macdonald

What "levels" are you talking about?

nVidia could do their chipset without even talking to AMD. For Intel, they
at least need to get the FSB license and depending on Intel's "needs" maybe
some engineering help; the platform may be nominally "open" but we know
that Intel just *hates* that.
I don't think that was an analyst saying it, I think it was just a reporter.

It's Marketwatch "by Dow Jones" - one would expect at least *some* market
expertise. OTOH, I'd missed that Intel was buying substantial(?) amounts
of ATI chipsets... possibly what he was referring to, but that's hardly a
well established industry paradigm.
They're also saying that the deal is mostly cash, $4.3B in cash, and the
rest is stock. Where is that cash coming from? Last I saw, they had
$2.5B in cash in the bank.


I don't think there's any doubt that Intel will revoke ATI's chipset
license. But that's no big deal, ATI only had a small window of
opportunity to sell their Intel chipsets, namely in between Intel
chipset production crises. Once those crises are over, Intel typically
discards its partners.

I believe ATI was also making some headway into the mobile market.
Here's a (long) analysis of it by Charlie Demerjian at the Inq. He says
it's all about mini-cores on CPUs, not GPUs or chipsets. He's even
speculating that ATI will soon not be making GPUs (to placate Nvidia). I
don't know about that, $5.4B seems like a big price to pay just for some
engineering help. I don't think AMD can afford to simply let the revenue
from the graphics products wither away.

AMD has to buy ATI to survive
http://theinq.net/default.aspx?article=33219

Interesting but I'm not convinced on the mini-core thing. The end of the
GPU? I think it'll be a long while yet. Now, in the short-term at least,
if AMD-ATI does a direct HT connect GPU, bypassing the PCI-E tunnel, that
could be interesting.:)
 

Tony Hill

Well, as far as discrete graphics goes, that will still be around, and
it will work with any platform, and there is nothing that anyone can do
to prevent it from being put onto the enemy's platform. Now, what will
need to be found out is whether ATI will pull Intel's Crossfire license,
now that Intel has pulled ATI's chipset license.

I doubt that it'll matter much; Intel will probably stop any new
Crossfire products anyway. The niche crowd that is willing to spend
$1000 on video cards is a market that Intel will probably hand to
nVidia.
Well, they did say, they don't want to put whole GPUs into the processor
die. They may want to put some mini-cores with GPU-derived features onto
the core.

You can toss the functionality between GPU cores and CPU cores all you
want, though it isn't likely to make much of a difference. Both are
large, expensive chips built using cutting-edge manufacturing processes,
so neither has an advantage on this side of things. Going from AGP
4x to 8x to PCI Express 16x has shown that increasing bandwidth does
basically nothing, so they won't gain an advantage here. There are a
few odd cases where you might be able to speed up the occasional
function, but we're talking about a few percentage points here and
there.
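
To put numbers on that interface scaling (per-direction peak figures, quoted
from memory, so treat them as approximate):

    # host-to-graphics link bandwidth over the last few generations, in GB/s
    links = {
        "AGP 4x":   1.07,   # 32-bit x 266 MT/s
        "AGP 8x":   2.13,   # 32-bit x 533 MT/s
        "PCIe x16": 4.00,   # 16 lanes x 2.5 GT/s, 8b/10b encoded
    }
    for name, gbs in links.items():
        print(name, gbs, "GB/s")

Each step roughly doubled the link and frame rates barely moved, which is the
point: the GPU feeds almost entirely from its own local memory.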
As far as getting the large memory bandwidths, that shouldn't
be a problem with AMD's integrated memory controller.

It most definitely will be a problem with AMD's multidrop bus and
memory hanging off little sticks instead of being soldered on a video
card 2cm away from the graphics processor. There is no way that a
CPU, with memory on DIMMs (or RDIMMs or FBDIMMs or whatever) is going
to come anywhere close to matching the bandwidth of a video card with
soldered memory.
In fact, a GPU
might benefit from it more than a CPU can, so they may upgrade the
memory controller for faster speeds just for the GPU.

A GPU *DEFINITELY* benefits a lot more from extra memory bandwidth
than a CPU does. That is the exact reason why they have 50GB/s of
bandwidth while processors are just now breaking into the
double-digits of GB/s.
 

Tony Hill

How much memory bandwidth do GPUs need?

For 3D graphics, a LOT.
3D graphics need the bandwidth.

Exactly, and ALL graphics are moving towards 3D. Windows Vista Aero
Glass takes 3D graphics to the desktop and GLX does the same for
XWindows on Linux.
Again, how much bandwidth do /you/ need? I use an old Matrox G450 graphics
card with my A64. It fully supports my needs regarding memory bandwidth. I
don't know its bandwidth, but I suppose it has around 1GB/s.

Depending on the version, either 1.3GB/s or 2.7GB/s. Ohh, and it
won't run Windows Vista Aero.
12.8GB/s is
enough for 90% of the market. That counts.

Sure, the integrated graphics that exists today is more than enough for 90%
of the market too. What's the point of changing things around if it's
going to cost more and not improve performance?
What is their market share?

With this deal, ATI will be able to reuse the huge development investment
they make in the top gamers' niche market across 90% of the market. Intel
will be able to do so as well; Nvidia will not.

nVidia, Intel and ATI are already doing this with their lower-end
chipsets and integrated chipsets.
Personally I think a dual core with an integrated GPU at 65nm will be
very, very attractive.

Why not just grab a dual-core with the graphics integrated onto the
chipset like we have now? Same performance, less cost.
 

bbbl67

George said:
nVidia could do their chipset without even talking to AMD. For Intel, they
at least need to get the FSB license and depending on Intel's "needs" maybe
some engineering help; the platform may be nominally "open" but we know
that Intel just *hates* that.

Oh, okay, I see what you mean.
It's Marketwatch "by Dow Jones" - one would expect at least *some* market
expertise. OTOH, I'd missed that Intel was buying substantial(?) amounts
of ATI chipsets... possibly what he was referring to, but that's hardly a
well established industry paradigm.

Maybe he knows markets, but certainly not technology.

I believe ATI was also making some headway into the mobile market.

Yeah, but that was all in the AMD mobile market. I don't think there is
even an ATI mobile chipset for the Intel market. However, not counting
integrated chipsets, ATI was selling some mobile discrete GPU's for
both the Intel and AMD markets.
Interesting but I'm not convinced on the mini-core thing. The end of the
GPU? I think it'll be a long while yet. Now, in the short-term at least,
if AMD-ATI does a direct HT connect GPU, bypassing the PCI-E tunnel, that
could be interesting.:)

Yeah, I have my doubts about the value of mini-cores too. Neither IBM
Cell nor Sun Niagara have proven themselves in the market yet to any
great extent. And I can't see scientists using GPUs to do floating
point calculations on, as they don't have enough precision.

An HT-connected GPU is a possibility, but I'm sure Nvidia would've
built AMD one of those things if they asked them; no need to purchase a
whole company for that. My feeling is that AMD purchased those ATI
engineers to get insight into how to bring out new products every one to
two years. Intel has said that they will be going to a two year
development cycle now too.

Yousuf Khan
 

Ryan Godridge

Yeah, I have my doubts about the value of mini-cores too. Neither IBM
Cell nor Sun Niagara have proven themselves in the market yet to any
great extent. And I can't see scientists using GPUs to do floating
point calculations on, as they don't have enough precision.

An HT-connected GPU is a possibility, but I'm sure Nvidia would've
built AMD one of those things if they asked them; no need to purchase a
whole company for that. My feeling is that AMD purchased those ATI
engineers to get insight into how to bring out new products every one to
two years. Intel has said that they will be going to a two year
development cycle now too.

Yousuf Khan

If you go look at www.gpgpu.org there's a fair old bit of stuff being
done in the scientific floating point space. Agreed, for some things
higher precision is required, but see the paper
http://hal.ccsd.cnrs.fr/ccsd-00021443/ - it gives IEEE-754 accuracy.
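
As a toy illustration of the software side of that - not the method from the
paper above, just classic Kahan compensated summation - you can coax
near-double-precision sums out of 32-bit floats, which is the general flavour
of trick the GPGPU folks rely on:

    import numpy as np

    def naive_sum32(xs):
        s = np.float32(0.0)
        for x in xs:
            s = np.float32(s + np.float32(x))
        return s

    def kahan_sum32(xs):
        s = np.float32(0.0)
        c = np.float32(0.0)              # compensation for lost low-order bits
        for x in xs:
            y = np.float32(np.float32(x) - c)
            t = np.float32(s + y)
            c = np.float32(np.float32(t - s) - y)
            s = t
        return s

    data = [0.1] * 1_000_000
    print(naive_sum32(data))    # drifts visibly away from 100000
    print(kahan_sum32(data))    # very close to 100000
    print(sum(data))            # double-precision reference

So "not enough precision" is partly a software problem rather than purely a
hardware one.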

Have a look here -
http://www.gpgpu.org/cgi-bin/blosxom.cgi/Scientific Computing/index.html

for the index of scientific stuff being done.

At the moment these would seem to be problems that can stand the
latency; it would be interesting to see what can be attacked when that
is brought down to on-chip levels, or even to HyperTransport links
between processors.

Cell and Niagara are not immediately available to everybody and his
dog, but gpus are, thus making them very popular for playing with.

Ryan
 

Ryan Godridge

For 3D graphics, a LOT.

In current incarnations that may be true, but there have been trade-offs
in the algorithms and implementations against the abominable
latencies, so what would be the bandwidth requirements with closer
coupled processing?

Is that necessarily true or an implementation artifact?

<snip>

Ryan
 

Tony Hill

In current incarnations that may be true, but there have been trade-offs
in the algorithms and implementations against the abominable
latencies, so what would be the bandwidth requirements with closer
coupled processing?

Just what "abominable latencies" are those? Video memory is soldered
on a board right next to a GPU that has an integrated memory
controller. Memory latency for GPUs is very similar to (probably better
than) that of CPUs. Ok, admittedly latency is still a problem even in
the best-case scenario, but moving parts of the GPU into the CPU isn't
going to help that.

So, to answer your question, the bandwidth requirements with closer
coupled processing would be exactly the same as with separate chips.
 

Tony Hill

Yeah, but that was all in the AMD mobile market. I don't think there is
even an ATI mobile chipset for the Intel market. However, not counting
integrated chipsets, ATI was selling some mobile discrete GPU's for
both the Intel and AMD markets.

ATI does indeed have a mobile chipset for Intel processors, their
Radeon x200M for Intel. It's actually still being sold in some
low-end models, though pretty much exclusively notebooks using Celeron
processors. They probably would be doing a whole lot better here
except that the whole "Centrino" marketing campaign pretty much killed
their chances for any Pentium or Core branded processors.

It is perhaps worth mentioning though that ATI has just recently
brought out a new version of their Radeon chipset, the Xpress 1100,
and they have only released an AMD version, not an Intel one. Given
the current state of things, I don't expect a new Intel chipset to be
forthcoming.
 

George Macdonald

Oh, okay, I see what you mean.


Maybe he knows markets, but certainly not technology.



Yeah, but that was all in the AMD mobile market. I don't think there is
even an ATI mobile chipset for the Intel market. However, not counting
integrated chipsets, ATI was selling some mobile discrete GPU's for
both the Intel and AMD markets.

Oh yeah, there are ATI mobile chipsets with integrated graphics for the Intel
platform - I thought they were apparently making headway... but maybe not
enough.
Yeah, I have my doubts about the value of mini-cores too. Neither IBM
Cell nor Sun Niagara have proven themselves in the market yet to any
great extent. And I can't see scientists using GPUs to do floating
point calculations on, as they don't have enough precision.

An HT-connected GPU is a possibility, but I'm sure Nvidia would've
built AMD one of those things if they asked them; no need to purchase a
whole company for that. My feeling is that AMD purchased those ATI
engineers to get insight into how to bring out new products every one to
two years. Intel has said that they will be going to a two year
development cycle now too.

The trouble was not getting nVidia or ATI to build it but getting the
motherboard manufacturers to design yet another platform to add to their
line card.
 

Ryan Godridge

Just what "abominable latencies" are those? Video memory is soldered
on a board right next to a GPU that has an integrated memory
controller. Memory latency for GPUs is very similar (probably better)
to that of CPUs. Ok, admittedly latency is still a problem even in
the best-case scenario, but moving parts of the GPU into the CPU isn't
going to help that.

Sorry, I didn't make myself clear - the latency between the cpu and the
gpu, i.e. the PCIe link.
So, to answer your question, the bandwidth requirements with closer
coupled processing would be exactly the same as with separate chips.

I'm not sure that such an assertion is yet proven.

Ryan
 

Tony Hill

Sorry, I didn't make myself clear - the latency between the cpu and the
gpu, i.e. the PCIe link.

The latency on this link really isn't all that bad, certainly measured
in nanoseconds. Also I can't think of any situation where reducing
this latency is actually going to result in ANY improvement for
anything. It's not like reducing latency to main memory where you
have frequent and very random access. Data being sent from CPU to GPU
is usually done in a mostly sequential fashion and usually at times
where a few microseconds here or there aren't going to make a big
difference (think loading up an application or in between levels in a
game). The GPU deals pretty much exclusively with stuff in local
memory.
 

Ryan Godridge

The latency on this link really isn't all that bad, certainly measured
in nanoseconds. Also I can't think of any situation where reducing
this latency is actually going to result in ANY improvement for
anything. It's not like reducing latency to main memory where you
have frequent and very random access. Data being sent from CPU to GPU
is usually done in a mostly sequential fashion and usually at times
where a few microseconds here or there aren't going to make a big
difference (think loading up an application or in between levels in a
game). The GPU deals pretty much exclusively with stuff in local
memory.

The latency on PCIe (depending on implementation) seems to be anywhere
from 100ns upwards for just the link. The link efficiency goes down
with decreasing packet size.

Across the metal, on-die latency is nowhere near this; my guess is that
there are 1 to 2 orders of magnitude difference here.

Data has historically been sent in a sequential fashion to the gpu to
hide the latencies in the transport - PCI, AGP, PCIe etc. This is not
because it's the only or best way to implement a graphics subsystem,
but because it was what the technology fitted best. The algorithms
used were built to fit the available technology, as always. The
technology is changing.

There is a possibility that algorithms exist for graphics that can
make use of smaller gpu cores within the cpu if they have low latency
/ high bandwidth connections between themselves and the cpu.

On die stream processors (gpus) may allow small transactions to be
processed by the gpu. This has never been worthwhile before because
setup and latency swamps any gain that dedicated stream processing
would have. If it became worthwhile to stream 10 data items for
processing things would change a lot. As an example it might be worth
sending a gross view of an underlying scene for pre-culling before
sending the whole thing to the gpus.
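
A rough sketch of what that pre-culling pass might look like, purely to
illustrate the idea (the object layout and the submit call at the end are
hypothetical, not any real API):

    # Coarse visibility pass: test bounding spheres against the view-frustum
    # planes on the tightly coupled core, and only stream the survivors'
    # full geometry out to the big GPU.
    def outside_plane(center, radius, plane):
        nx, ny, nz, d = plane            # plane stored as (normal, offset)
        cx, cy, cz = center
        return nx * cx + ny * cy + nz * cz + d < -radius

    def pre_cull(objects, frustum_planes):
        return [obj for obj in objects
                if not any(outside_plane(obj["center"], obj["radius"], p)
                           for p in frustum_planes)]

    # survivors = pre_cull(scene_objects, camera_frustum)
    # submit_to_gpu(survivors)           # hypothetical call

The win, if there is one, comes from the coarse pass being cheap enough to
run every frame without a round trip across PCIe.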

This could be an underlying architectural change that will have
effects on the algorithms used. Nothing is for certain.
 

Tony Hill

The latency on PCIe (depending on implementation) seems to be anywhere
from 100ns upwards for just the link. The link efficiency goes down
with decreasing packet size.

Across the metal, on-die latency is nowhere near this; my guess is that
there are 1 to 2 orders of magnitude difference here.

Data has historically been sent in a sequential fashion to the gpu to
hide the latencies in the transport - PCI, AGP, PCIe etc. This is not
because it's the only or best way to implement a graphics subsystem,
but because it was what the technology fitted best. The algorithms
used were built to fit the available technology, as always. The
technology is changing.

There is a possibility that algorithms exist for graphics that can
make use of smaller gpu cores within the cpu if they have low latency
/ high bandwidth connections between themselves and the cpu.

On die stream processors (gpus) may allow small transactions to be
processed by the gpu. This has never been worthwhile before because
setup and latency swamps any gain that dedicated stream processing
would have. If it became worthwhile to stream 10 data items for
processing things would change a lot. As an example it might be worth
sending a gross view of an underlying scene for pre-culling before
sending the whole thing to the gpus.

This could be an underlying architectural change that will have
effects on the algorithms used. Nothing is for certain.

It could be; I'm in no position to say that this won't happen. That
being said, it just doesn't seem at all likely to me. Usually, if no
one's tried it before (or, more to the point, those that have tried it
failed miserably... just think of all the failed SoC designs), there is
a good reason not to do things that way.

As usual, I'll believe it when I see it. Until then, I'm sticking
with my original impression: this was a thoroughly bad move on AMD's
part.
 
