AMD to produce ATI GPUs in Dresden


Tony Hill

This might be what AMD is using to fill the capacity at Fab 30 while it
converts it to Fab 38.

AMD to make GPUs in Dresden fab
http://www.theinquirer.net/default.aspx?article=33320

I doubt we'll see many, if any, ATI chips produced at AMD fabs
this year, but it only makes sense for AMD to eventually start
building the chips there. My personal guess (based mainly on
gut instinct) is that they'll start with the motherboard chipsets and
then work out from there. High-end graphics parts are probably at
least a year away from being produced in AMD fabs.
 

David Kanter

Tony said:
> I doubt we'll see many, if any, ATI chips produced at AMD fabs
> this year, but it only makes sense for AMD to eventually start
> building the chips there. My personal guess (based mainly on
> gut instinct) is that they'll start with the motherboard chipsets and
> then work out from there. High-end graphics parts are probably at
> least a year away from being produced in AMD fabs.

Two years. ATI has never targeted an SOI process, and they'd need to
start using whatever CAD tools AMD uses internally. That's a pretty
big shift, and it's not something you'd ever do in the middle of a
project...

Considering that GPUs take at least 3 years to design, I think the
earliest that this could happen is 2 years from now (assuming that
there is a design that started 6-12 months ago, and was still in
architectural phase and hadn't done any actual physical stuff).
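
In rough numbers (a Python sketch; the 3-year design time and the
6-12 month head start are the assumptions stated above, not data):

    # Assumed figures from the estimate above, not real project data.
    design_time_months = 36  # assumed start-to-finish GPU design time
    for elapsed in (6, 12):  # a design still in the architecture phase
        remaining = design_time_months - elapsed
        print(f"started {elapsed} months ago -> "
              f"ships in ~{remaining} months")

That puts the earliest candidate 24-30 months out, which is where the
two-year floor comes from.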

DK
 

krw

David said:
> Two years. ATI has never targeted an SOI process, and they'd need to
> start using whatever CAD tools AMD uses internally. That's a pretty
> big shift, and it's not something you'd ever do in the middle of a
> project...

There is no law that says AMD *must* do SOI in Dresden. Even if
they wanted to go with the current process, it's not that big of a
shift, at least for the logic designers. Pick one set of "books"
instead of another. Circuit design is different (presumably AMD
already has the necessary circuits), and processing is different,
but one logic chip looks pretty much like the next.
> Considering that GPUs take at least 3 years to design, I think the
> earliest that this could happen is 2 years from now (assuming that
> there is a design that started 6-12 months ago, and was still in
> architectural phase and hadn't done any actual physical stuff).

It would take less than a year to migrate a design from bulk to a
mature SOI process.
 

David Kanter

krw said:
>> Two years. ATI has never targeted an SOI process, and they'd need to
> There is no law that says AMD *must* do SOI in Dresden.

Well, considering that AMD has stated they will be doing 'copy exact'...
> Even if
> they wanted to go with the current process, it's not that big of a
> shift, at least for the logic designers.

Yes, but if you've already started on any PD, you're going to need to
redo that. I think from a project management standpoint it would be a
pretty bad idea. I would expect that if they wanted to switch over
they would start with designs that haven't done much PD. My rough
estimate is that the design time for a GPU is 3 years, and PD probably
starts after 1 year. After they start PD, I don't think they'd
retarget.
> Pick one set of "books"
> instead of another. Circuit design is different (presumably AMD
> already has the necessary circuits), and processing is different,
> but one logic chip looks pretty much like the next.

AMD has the necessary circuits, but ATI engineers have no experience
with them. That's just asking for trouble, even if you have AMD
circuits guys coaching them along. Delays for CPUs are bad, but for
GPUs they are brutal, since the lifetime is so much shorter.
> It would take less than a year to migrate a design from bulk to a
> mature SOI process.

Can you elaborate? Do you mean a design that is already in bulk
production? Or do you just mean a taped out bulk design?

DK
 

The little lost angel

David said:
> My rough estimate is that the design time for a GPU is 3 years, and
> PD probably starts after 1 year. After they start PD, I don't think
> they'd retarget.

Why 3 years though? It seems like nVidia and ATI are pushing out new
cores every 12 to 15 months. When nVidia got their asses whopped
during the FX5xxx series (late 02/early 03), they pretty much took
only about a year to answer with the 6xxx (Apr 2004), then the
7xxx (Jun 2005). They must necessarily have changed their design
after the FX5xxx concept bombed, and that would give only 2 years at
most between it and the 7xxx series.
 

David Kanter

The little lost angel said:
>> My rough estimate is that the design time for a GPU is 3 years [...]
> Why 3 years though? It seems like nVidia and ATI are pushing out new
> cores every 12 to 15 months. When nVidia got their asses whopped
> during the FX5xxx series (late 02/early 03), they pretty much took
> only about a year to answer with the 6xxx (Apr 2004), then the
> 7xxx (Jun 2005). They must necessarily have changed their design
> after the FX5xxx concept bombed, and that would give only 2 years at
> most between it and the 7xxx series.

Why three years? Because that's how long it takes from *start* to
*finish*.

Just because you release a product every 12-15 months doesn't mean it
only takes 12-15 months to design. Think about pipelining...

Intel has been releasing new cores every 2-3 years (Willamette,
Prescott, Conroe), but it is quite well known that design takes 5
years. How can that happen? Multiple design teams. Who says ATI and
Nvidia don't have multiple design teams?
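
To make the pipelining concrete, here is a toy sketch in Python (the
three-year design time and 15-month cadence are illustrative
assumptions, not inside information):

    import math

    DESIGN_YEARS = 3.00  # assumed start-to-finish design time per core
    CADENCE = 1.25       # assumed gap between launches (~15 months)

    teams = math.ceil(DESIGN_YEARS / CADENCE)
    print(f"overlapping teams needed: {teams}")

    # Staggered starts: each team kicks off CADENCE years after the
    # last, so finished designs also land CADENCE years apart.
    for t in range(teams):
        start = t * CADENCE
        print(f"team {t + 1}: starts year {start:.2f}, "
              f"ships year {start + DESIGN_YEARS:.2f}")

Three overlapping teams, each spending three years per design, still
ship something every 15 months.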

DK
 

Yousuf Khan

David said:
> Two years. ATI has never targeted an SOI process, and they'd need to
> start using whatever CAD tools AMD uses internally. That's a pretty
> big shift, and it's not something you'd ever do in the middle of a
> project...

I believe that the Xbox360 CPU and GPU are both produced at Chartered
Semiconductor. I don't know if they're using SOI or bulk wafers for
these chips, but that fab is an SOI fab.

Yousuf Khan
 

krw

David said:
> Well, considering that AMD has stated they will be doing 'copy exact'...

One can do bulk CMOS on a line already set up for SOI, is my point.
> Yes, but if you've already started on any PD, you're going to need to
> redo that. I think from a project management standpoint it would be a
> pretty bad idea.

This "bad idea" has been done before.
> I would expect that if they wanted to switch over
> they would start with designs that haven't done much PD. My rough
> estimate is that the design time for a GPU is 3 years, and PD probably
> starts after 1 year. After they start PD, I don't think they'd
> retarget.

Have you ever *done* this? Methinks you're talking through your
hat, again.
> AMD has the necessary circuits, but ATI engineers have no experience
> with them. That's just asking for trouble, even if you have AMD
> circuits guys coaching them along. Delays for CPUs are bad, but for
> GPUs they are brutal, since the lifetime is so much shorter.

One doesn't need "experience" with a circuit to use it. Logic is
logic, fer chrissake. The biggest problem with SOI is getting the
circuit designs right.
> Can you elaborate? Do you mean a design that is already in bulk
> production? Or do you just mean a taped out bulk design?

Either way. Assuming no silicon respins for logic errors (though
metal changes may be necessary), less than a year to SOI production
volumes. This would include hardware verification and perhaps two
passes (engineering, then production) through silicon. The switch may
be transparent for a design already in the pipe, though logic errors
are an issue on a new design.
 

David Kanter

Yousuf said:
> I believe that the Xbox360 CPU and GPU are both produced at Chartered
> Semiconductor. I don't know if they're using SOI or bulk wafers for
> these chips, but that fab is an SOI fab.

I'll ask if it was an SOI device...

DK
 

David Kanter

krw said:
> One can do bulk CMOS on a line already set up for SOI, is my point.

I don't know enough to evaluate whether that is true or not. However,
if you did bulk, you'd have to fully characterize that process, to get
all the physical parameters you want for design.
This "bad idea" has been done before.

When and where? Did they hit their release targets? Were there any
yield issues?
> Have you ever *done* this? Methinks you're talking through your
> hat, again.

No I haven't, I'm not an EE. If you have a timeline you'd like to
suggest, go ahead...
> One doesn't need "experience" with a circuit to use it.

I don't really buy this argument. Mainly because you are going to be
constantly tweaking your circuits to get better performance, lower
power or ideally both. That means you have to know what is going on in
circuit land...
> Logic is
> logic, fer chrissake. The biggest problem with SOI is getting the
> circuit designs right.

And yields...

> Either way. Assuming no silicon respins for logic errors (though
> metal changes may be necessary), less than a year to SOI production
> volumes. This would include hardware verification and perhaps two
> passes (engineering, then production) through silicon. The switch may
> be transparent for a design already in the pipe, though logic errors
> are an issue on a new design.

Also, how big a design are we talking about? GPUs are not as tricky as
CPUs, but they are some of the larger and more complex ASICs out
there...

If you are saying you can port a SCSI controller from bulk to SOI in a
year, then I'm not sure that means much. I'm hoping you are talking
about a substantial design...

DK
 

Keith

David said:
> I don't know enough to evaluate whether that is true or not.

It is.
> However, if you did bulk, you'd have to fully characterize that
> process, to get all the physical parameters you want for design.

So? It's not that big of a deal, compared to the other way.
> When and where? Did they hit their release targets?
> Were there any yield issues?

IBM. I'm not going to get into specifics.
> No I haven't, I'm not an EE. If you have a timeline you'd like to
> suggest, go ahead...

I already did. Less than a year, assuming a good design.
> I don't really buy this argument. Mainly because you are going to be
> constantly tweaking your circuits to get better performance, lower
> power or ideally both. That means you have to know what is going on in
> circuit land...

Of course you don't buy into reality. Logic designers don't tweak
circuits. They select books the circuit designers have already
qualified. One doesn't go messing with circuits once the logic is
cast. Can you say "chase your tail"?
> And yields...

Remember, AMD has a mature SOI process. The yield thing isn't part
of the equation.
> Also, how big a design are we talking about? GPUs are not as tricky as
> CPUs, but they are some of the larger and more complex ASICs out
> there...

My experience is with CPUs, so calibrate from there. According to
your assumption, migrating a GPU to SOI is a walk in the park.
> If you are saying you can port a SCSI controller from bulk to SOI in a
> year, then I'm not sure that means much. I'm hoping you are talking
> about a substantial design...

Now really, a SCSI controller in SOI would be rather silly, no?
 

The little lost angel

David said:
> Just because you release a product every 12-15 months doesn't mean it
> only takes 12-15 months to design. Think about pipelining...
> Intel has been releasing new cores every 2-3 years (Willamette,
> Prescott, Conroe), but it is quite well known that design takes 5
> years. How can that happen? Multiple design teams. Who says ATI and
> Nvidia don't have multiple design teams?

I didn't say they don't have multiple design teams. I had assumed
they have at least two teams.

Let me walk through it again: first they had the design concept that
gave rise to the 5xxx series. It bombed, so whatever next gen they
had based on the prevailing design philosophy would necessarily need
to be re-thought and redesigned.

Essentially, the new design would have begun at step 1 at the point
they realized the basic assumptions they had then were resulting in a
POS. Since it's likely they didn't have to redo everything, I
wondered why it couldn't take only 2 years for a new design,
especially since a GPU isn't that complicated and AMD essentially did
the Athlon in 3 years from start to first silicon, IIANW.
 

David Kanter

[snip]
> IBM. I'm not going to get into specifics.

So you won't say whether or not it turned out to be a horrible idea?
> I already did. Less than a year, assuming a good design.

What constitutes a 'good design'?
> Of course you don't buy into reality. Logic designers don't tweak
> circuits. They select books the circuit designers have already
> qualified. One doesn't go messing with circuits once the logic is
> cast. Can you say "chase your tail"?

So what happens if your target frequency is say, 600MHz, and it turns
out you can only hit 400MHz, but you have 3-4 months before tape out?
Do you just sit on your ass and let the circuits moulder? Or do you,
perhaps, start tweaking things to get improvements, track down critical
paths and fix them?

Or do you perhaps go back to the logic guys and say: "Hey, you blew a
lot of timing on this one chunk, could we try something quicker..."
> Remember, AMD has a mature SOI process. The yield thing isn't part
> of the equation.

I'm not really that sure. Acceptable yields for a 100mm^2 die may be
unreasonable for a 350mm^2 die, especially considering that GPUs have a
much higher density of logic, which is going to be more vulnerable to
defects. For a CPU, you at least have a pretty good chance of hitting
cache, which is easy to fix.
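
As a rough illustration, take the standard Poisson defect-yield model
Y = exp(-D*A) with an assumed defect density (a Python sketch; the
numbers are illustrative, and it ignores the repairable-cache effect,
which only widens the gap):

    import math

    DEFECT_DENSITY = 0.5  # defects per cm^2; assumed, not process data

    for area_mm2 in (100, 350):
        area_cm2 = area_mm2 / 100.0
        y = math.exp(-DEFECT_DENSITY * area_cm2)
        print(f"{area_mm2} mm^2 die: ~{y:.0%} yield")

At those assumptions, roughly 61% of the small dice are good versus
roughly 17% of the large ones.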
> My experience is with CPUs, so calibrate from there. According to
> your assumption, migrating a GPU to SOI is a walk in the park.

I think you're putting words in my mouth.

I'm saying that generally CPUs are more complex than GPUs; certainly an
ARM7 isn't as complex as the most modern GPU, but...as a general rule
of thumb, a CPU is going to be more complex than a GPU.

I would agree that porting a GPU to SOI should be easier than porting a
CPU, especially since *on average* you have less custom design work.
However, I think characterizing either as 'trivial' is silly.
> Now really, a SCSI controller in SOI would be rather silly, no?

Sure, but I don't know what kind of design you were working on, and you
haven't chosen to share that information. Therefore, I can't really
come to any conclusions about whether your experience porting is
closely correlated to that of a GPU, a lower bound, an upper bound,
etc...

In other words, I need more info!

DK
 

Yousuf Khan

krw said:
> One can do bulk CMOS on a line already set up for SOI, is my point.

Are there no common stages between bulk and SOI that require slightly
different variations of a tool? What I mean is that the purpose of
the processing stage may be the same in either case, but due to the
differences between bulk and SOI, you have to buy one version of a
tool for bulk and another version of the tool for SOI.

> One doesn't need "experience" with a circuit to use it. Logic is
> logic, fer chrissake. The biggest problem with SOI is getting the
> circuit designs right.

Isn't ATI's Xbox360 chipset already produced at Singapore's Chartered
Semi? That would likely make it SOI, wouldn't it?

Yousuf Khan
 

Yousuf Khan

David said:
> I think you're putting words in my mouth.

Well, you are the one saying that GPUs aren't as complex as CPUs, are
you not?
> I'm saying that generally CPUs are more complex than GPUs; certainly an
> ARM7 isn't as complex as the most modern GPU, but...as a general rule
> of thumb, a CPU is going to be more complex than a GPU.

In Keith's case, he's definitely not talking about ARM. He works for
IBM, and which processors do you think IBM makes?
> I would agree that porting a GPU to SOI should be easier than porting a
> CPU, especially since *on average* you have less custom design work.
> However, I think characterizing either as 'trivial' is silly.

The guys who have adopted SOI (IBM, AMD, Freescale, etc.) had their
major initial problems in getting the process for laying down circuits
on SOI right. Once that was taken care of, they could just use SOI
like bulk.

Yousuf Khan
 

Yousuf Khan

David said:
> I'll ask if it was an SOI device...

Also, Chartered has said that they're using their licensed version of
AMD's APM technology to produce these chips. Again, that doesn't prove
whether it's bulk or SOI, since APM can be used on any process, but I
would assume it's at least an indication one way or the other.

Yousuf Khan
 

Keith

Yousuf said:
> Are there no common stages between bulk and SOI that require slightly
> different variations of a tool? What I mean is that the purpose of
> the processing stage may be the same in either case, but due to the
> differences between bulk and SOI, you have to buy one version of a
> tool for bulk and another version of the tool for SOI.

Just make sure the operator has the pizza recipe book open to the
right page. ;-) Things are pretty well automated these days, and one
can run many different processes on the same tools in adjacent lots.
You really don't think it takes a complete line per process?
> Isn't ATI's Xbox360 chipset already produced at Singapore's Chartered
> Semi? That would likely make it SOI, wouldn't it?

I'm pretty sure it's bulk, but not 100%.
 

Keith


I really wish you wouldn't snip attributions. It's a really bad
habit.
David said:
> So you won't say whether or not it turned out to be a horrible idea?

It works fine, within the parameters I've mentioned (mature SOI
process, working design). Bringing up SOI is a PITA; going back is
trivial.
> What constitutes a 'good design'?

No respins to fix bugs.

> So what happens if your target frequency is say, 600MHz, and it turns
> out you can only hit 400MHz, but you have 3-4 months before tape out?
> Do you just sit on your ass and let the circuits moulder? Or do you,
> perhaps, start tweaking things to get improvements, track down critical
> paths and fix them?

If you're off that far, you're dead. It's time to shitcan the
entire design, fire all the architects, and sell flowers on the
street corner because you aren't cut out for this business. For
lesser "oopses" (happens all the time) the designer isolates a
critical path at a time then restructures the logic, picks faster
(higher power) "books", diddles with wiring geometry, reduces
loading, moves gates closer together, cheats, steals...
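
As a toy sketch of that closure loop (Python; the path names, delays,
and the fixed gain per pass are all invented for illustration):

    paths = {"fetch": 2.6, "decode": 2.2, "bypass": 2.9}  # delays, ns
    TARGET = 2.5                                          # goal, ns

    while max(paths.values()) > TARGET:
        worst = max(paths, key=paths.get)  # isolate the critical path
        paths[worst] *= 0.85               # faster books, restructuring
        print(f"reworked {worst}: now {paths[worst]:.2f} ns")

Each pass attacks the current worst path until everything fits in the
cycle.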
> Or do you perhaps go back to the logic guys and say: "Hey, you blew a
> lot of timing on this one chunk, could we try something quicker..."

Yes, but unless it's an unmitigated disaster (fire the
architects...), you don't go back to the circuits guys; too late.
> I'm not really that sure. Acceptable yields for a 100mm^2 die may be
> unreasonable for a 350mm^2 die, especially considering that GPUs have a
> much higher density of logic, which is going to be more vulnerable to
> defects. For a CPU, you at least have a pretty good chance of hitting
> cache, which is easy to fix.

That's irrelevant to the issue at hand. I'm not letting the goal
posts move, this time.


> I think you're putting words in my mouth.

No, my experience is with CPUs.

Your assumption (which I don't really believe, but no shock there):
CPU >> GPU

My statement:
CPU PD can be turned (bulk -> SOI) in a year (fact)

Therefore:
Turning a GPU from bulk to SOI is a walk in the park.
> I'm saying that generally CPUs are more complex than GPUs; certainly an
> ARM7 isn't as complex as the most modern GPU, but...as a general rule
> of thumb, a CPU is going to be more complex than a GPU.

We're not talking ARM. We're talking modern high-end widgets.
> I would agree that porting a GPU to SOI should be easier than porting a
> CPU, especially since *on average* you have less custom design work.
> However, I think characterizing either as 'trivial' is silly.

Three years is nuts.
> Sure, but I don't know what kind of design you were working on, and you
> haven't chosen to share that information. Therefore, I can't really
> come to any conclusions about whether your experience porting is
> closely correlated to that of a GPU, a lower bound, an upper bound,
> etc...

I told you, my experience is with CPUs.
> In other words, I need more info!

Evidently.
 
