ATI's R6xx GPU family to use 80nm and 65nm Processes

Radeon350

http://beyond3d.com/#news29612

R6xx to Utilise 80nm and 65nm Processes
31-Mar-2006, 01:09.28 Reporter : Dave Baumann

ATI have previously mentioned that their next generation, DirectX10
architecture would leverage much of the technology designed for
"Xenos", ATI's graphics processor developed for the XBOX 360, and we
surmised that would equate to the R6xx generation featuring a unified
shader architecture at the hardware level. In a conference call
discussing their latest quarter's financial results, ATI's CEO, Dave
Orton, more or less confirmed that the next generation architecture
will be unified, and suggested that they will be better off for it,
with the technology having had a proving ground in the XBOX 360 and
R6xx effectively being their second generation unified architecture.

A question was asked as to what process their next generation
architecture would be based on, and Dave Orton pointed out that the
80nm process comes in a number of flavours, including a cost reduction
option, which is currently in the process of being adopted for ATI's
low end parts, and also an 80nm HS process, to be used on more
expensive higher end solutions. Dave then went on to say that none of
the R6xx generation is likely to be 90nm based, instead split between
80nm and 65nm processes. This suggests that ATI may be adopting the
process choices they made with the R3xx and R4xx generations, by
introducing the new architecture first at the high end on a known
process, and moving the derivative, lower end parts to a newer, smaller
process. ATI didn't do this with R520, choosing to build the new
architecture on the new 90nm process simultaneously because their
Shader Model 3.0 choices quite clearly required it; however, this
ultimately ended up backfiring, with the chip being held up for several
quarters while a bug in a 90nm library needed chasing down.
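
For a back-of-envelope sense of what leaving 90nm behind buys, the sketch
below (an idealised estimate, not a figure from the call) assumes die area
scales with the square of the linear feature-size ratio; real shrinks save
less, since pads, analog blocks and wiring do not scale cleanly.

# Idealised die-area scaling between process nodes: area shrinks roughly with
# the square of the linear feature-size ratio. Real designs save less, so
# treat these numbers as optimistic upper bounds. Illustrative only.

def ideal_area_scale(old_nm: float, new_nm: float) -> float:
    return (new_nm / old_nm) ** 2

for new_node in (80, 65):
    scale = ideal_area_scale(90, new_node)
    print(f"90nm -> {new_node}nm: ~{scale:.2f}x area ({(1 - scale):.0%} smaller at best)")

On those idealised terms, an 80nm shrink saves only around a fifth of the
area, while 65nm roughly halves it, which fits the split between a quick cost
reduction option and the later, smaller parts.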
 
John Lewis

http://beyond3d.com/#news29612

R6xx to Utilise 80nm and 65nm Processes
31-Mar-2006, 01:09.28 Reporter : Dave Baumann

ATI have previously mentioned that their next generation, DirectX10
architecture would leverage much of the technology designed for
"Xenos", ATI's graphics processor developed for the XBOX 360, and we
surmised that would equate to the R6xx generation featuring a unified
shader architecture at the hardware level. In a conference call
discussing their latest quarter's financial results, ATI's CEO, Dave
Orton, more or less confirmed that the next generation architecture
will be unified, and suggested that they will be better off for it,
with the technology having had a proving ground in the XBOX 360 and
R6xx effectively being their second generation unified architecture.

A question was asked as to what process their next generation
architecture would be based on, and Dave Orton pointed out that the
80nm process comes in a number of flavours, including a cost reduction
option, which is currently in the process of being adopted for ATI's
low end parts, and also an 80nm HS process, to be used on more
expensive higher end solutions. Dave then went on to say that none of
the R6xx generation is likely to be 90nm based, instead split between
80nm and 65nm processes.

Not surprising, since ATi's "unified" GPU design is extremely wasteful
of silicon resources. The R580 (X1900 family) is 353 sq. mm. The
G71 (7900 family) is 192 sq. mm, both on the 90nm process from TSMC.
Plus the R580 gobbles about 40% more power than the G71, both running
at their design clock-rates. To compete with nVidia at GPU pricing
levels, ATi has to do something, and a shrink is the easiest apparent
solution. However, nVidia can match them step for step, since they
both have their designs at TSMC. Uncomfortable for ATi, but the vicious
competition is great for the consumer.
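
As a rough illustration of what that die-area gap means at the wafer level
(die sizes as quoted above, a 300 mm wafer, the usual gross-die
approximation, and no yield assumptions), the larger chip gives roughly half
as many candidate dies per wafer:

import math

# Gross candidate dies per 300 mm wafer, using the standard
# pi*r^2/A - pi*d/sqrt(2*A) approximation. Ignores yield, scribe lines and
# edge exclusion, so the numbers are only indicative.

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

for name, area in (("R580, ~353 mm^2", 353.0), ("G71, ~192 mm^2", 192.0)):
    print(f"{name}: roughly {gross_dies_per_wafer(area)} candidates per wafer")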
This suggests that ATI may be adopting the
process choices they made with the R3xx and R4xx generations, by
introducing the new architecture first at the high end on a known
process, and moving the derivative, lower end parts to a newer, smaller
process. ATI didn't do this with R520, choosing to build the new
architecture on the new 90nm process simultaneously because their
Shader Model 3.0 choices quite clearly required it; however, this
ultimately ended up backfiring, with the chip being held up for several
quarters while a bug in a 90nm library needed chasing down.

Most likely this happened because the ATi engineers short-cut a full
transistor-level timing simulation... which is a highly time-consuming
effort, even on the most powerful computers, but MANDATORY in the case
of a brand-new design architecture such as the X1xxx family. The
clock-speed error was propagated throughout the whole family. And it
was apparently also missed at first-run silicon testing, since a whole
bunch of production wafers had to be shelved or trashed. Some were no
doubt used for the X1800XL series, but that loss of production material
meant that ATi had to stand at the back of the production queue for the
corrected X1800XT material. More haste... less speed... no doubt
accelerated by management pressure to get the X1800 series out as soon
as possible.

There may have been a library error... such things do occur (after all,
software is not perfect), but the timing simulation would have caught
the error.

John Lewis
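
To make the kind of problem a timing check is meant to catch concrete, here
is a toy sketch (far simpler than the transistor-level simulation discussed
above, and with made-up numbers): the check reports negative slack when the
slowest path no longer fits the target clock period, which is exactly the
class of error that slips through when the check is skipped or run on an
incomplete model.

# Toy setup-timing check: the slowest register-to-register path must fit
# inside the clock period with some margin. All numbers are invented; a real
# flow runs transistor-level simulation and static timing analysis across
# process/voltage/temperature corners on millions of paths.

def worst_slack(path_delays_ns, clock_period_ns, setup_margin_ns=0.1):
    # Positive slack: the slowest path still meets the target clock.
    return clock_period_ns - setup_margin_ns - max(path_delays_ns)

critical_paths_ns = [1.45, 1.52, 1.61, 1.58]   # hypothetical path delays
target_period_ns = 1.6                          # i.e. a 625 MHz target clock
slack = worst_slack(critical_paths_ns, target_period_ns)
print(f"worst slack: {slack:+.2f} ns -> {'meets' if slack >= 0 else 'misses'} timing")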
 
