AMD Will Continue Intel Chipset Development

lyon_wonder

http://www.dailytech.com/AMD+Will+Continue+Intel+Chipset+Development/article6293.htm

AMD ATI chipset development continues for Intel platforms

AMD is continuing ATI chipset development for Intel processors despite
the recent AMD-ATI merger. AMD does not, however, intend to take the
market-share crown from Intel. Jochen Polster, sales and marketing
vice president for AMD, said their goal is to have a reasonable share
of the Intel chipset market -- nothing too large. Relationships with
NVIDIA will continue as well.

When asked if AMD has plans to launch an Intel Centrino-like mobile
platform, Polster denied such plans. “There is no such plan. In fact,
a Centrino-like platform is not a very good strategy for AMD. If we
limit our business partners to develop along the lines of a platform
we set, then all PC products will eventually develop into similar
solutions, which in the end would lead to a price war and minimize
profits for all our partners,” said Polster. “We believe in an open
platform so our business partners can build and develop products that
build on their strengths.”

AMD is currently readying its Trevally mobile reference design, though
it lacks Centrino-like branding. Trevally is based on a mobile variant
of the recently released AMD 690G chipset, the RS690T.
 
chrisv

lyon_wonder said:
When asked if AMD has plans to launch an Intel Centrino-like mobile
platform, Polster denied such plans. “There is no such plan. In fact,
a Centrino-like platform is not a very good strategy for AMD.

Hmm... Seems to me that the mobile area makes the most sense of all
for AMD/ATI. The limited expandability of mobile PCs means that many
buyers will want strong 3D graphics built into the chipset.
 
Yousuf Khan

chrisv said:
Hmm... Seems to me that the mobile area makes the most sense of all
for AMD/ATI. The limited expandability of mobile PCs means that many
buyers will want strong 3D graphics built into the chipset.

I think they're approaching the point where integrated graphics is
getting good enough for most gaming. Some boards are now even allowing
you to overclock the graphics core in an IGP chipset.

Legit Reviews - How To: Overclocking AMD Boards With 690G Integrated
Graphics - AMD 690G IGP Overclocking
http://www.legitreviews.com/article/468/1/

Yousuf Khan
 
Robert Redelmeier

Yousuf Khan said:
I think they're approaching the point where integrated graphics
is getting good enough for most gaming. Some boards are now even
allowing you to overclock the graphics core in an IGP chipset.

I cannot see this happening until IG has local memory, at least for
the active video framebuffer. That vidram gets hammered pretty heavily
to keep the screen refreshed.
72 Hz * 1200 * 1024 * 32 bpp = ~350 MByte/s
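
A quick sketch of that back-of-the-envelope arithmetic, using the same
example figures (72 Hz refresh, 1200x1024 visible pixels, 32 bpp):

# Rough 2D refresh bandwidth for the example mode above.
refresh_hz = 72
width, height = 1200, 1024
bytes_per_pixel = 32 // 8   # 32 bpp

bandwidth = refresh_hz * width * height * bytes_per_pixel
print(f"~{bandwidth / 1e6:.0f} MByte/s")  # ~354 MByte/s, i.e. roughly 350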

-- Robert
 
Yousuf Khan

Robert said:
I cannot see this happening until IG has local memory, at least for
the active video framebuffer. That vidram gets hammered pretty heavily
to keep the screen refreshed.
72 Hz * 1200 * 1024 * 32 bpp = ~350 MByte/s

Or possibly by the point where the graphics core is integrated into the CPU.

Yousuf Khan
 
Robert Redelmeier

Yousuf Khan said:
Or possibly by the point where the graphics core is
integrated into the CPU.

This will not relieve memory pressure. It might make interleaving of
accesses better. But the DRAM still has latency and video has
demanding timing requirements. It would be easier if the IGPU could
at least cache one scanline (8 KB) if not the whole active screen
(4-8 MB).
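
The rough sizes involved at 32 bpp, for a few common modes (the exact
numbers depend on the resolution in use, of course):

# Approximate scanline and full-frame buffer sizes at 32 bits per pixel.
bytes_per_pixel = 4
for width, height in [(1024, 768), (1280, 1024), (1600, 1200)]:
    scanline = width * bytes_per_pixel
    frame = width * height * bytes_per_pixel
    print(f"{width}x{height}: scanline ~{scanline / 1024:.1f} KB, "
          f"frame ~{frame / 2**20:.1f} MB")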

-- Robert
 
David Kanter

Yousuf Khan said:
Or possibly by the point where the graphics core is integrated into the CPU.

Considering the bandwidth required by a state of the art graphics
card, I don't see that really helping. Here's some bandwidth numbers:

42GB/s - GeForce 7900GS
32GB/s - Radeon Mobility x1800
86GB/s - Geforce 8800 GTX
15GB/s - Radeon Mobility x1600
10GB/s - Total memory bandwidth for "Barcelona"

I'm sorry, but given this reality, I don't see how integrating the GPU
helps. Perhaps if you integrate an additional memory controller that
supports GDDR4, that might do the trick, but then you're talking about
adding quite a few pins to the CPU. It's possible, but I wish AMD
would give some indication of what their plan actually is.
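
For what it's worth, the ~10GB/s socket figure falls straight out of a
dual-channel DDR2 interface; a quick sketch, assuming DDR2-667 as the
speed grade:

# Peak theoretical bandwidth of a dual-channel DDR2 memory interface.
# DDR2-667 is assumed here as a typical speed grade for the platform.
transfers_per_sec = 667e6   # transfers/s per channel
bytes_per_transfer = 8      # 64-bit wide channel
channels = 2

peak = transfers_per_sec * bytes_per_transfer * channels
print(f"~{peak / 1e9:.1f} GB/s")  # ~10.7 GB/s, in line with the figure above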

DK
 
Robert Redelmeier

David Kanter said:
Considering the bandwidth required by a state of the art graphics
card, I don't see that really helping. Here's some bandwidth numbers:

42GB/s - GeForce 7900GS
32GB/s - Radeon Mobility x1800
86GB/s - Geforce 8800 GTX
15GB/s - Radeon Mobility x1600
10GB/s - Total memory bandwidth for "Barcelona"

I'm sorry, but given this reality, I don't see how integrating
the GPU helps. Perhaps if you integrate an additional memory
controller that supports GDDR4, that might do the trick, but
then you're talking about adding quite a few pins to the CPU.
It's possible, but I wish AMD would give some indication of what
their plan actually is.

Agreed. My #s were for simple 2D refresh. These figures
make it obvious that the GPU is handling _enormous_ amounts
of data, most likely in various 3D functions.

Integrating vidram into the GPU would make more sense.

-- Robert
 
Yousuf Khan

David said:
Considering the bandwidth required by a state of the art graphics
card, I don't see that really helping. Here's some bandwidth numbers:
42GB/s - GeForce 7900GS
32GB/s - Radeon Mobility x1800
86GB/s - Geforce 8800 GTX
15GB/s - Radeon Mobility x1600
10GB/s - Total memory bandwidth for "Barcelona"
I'm sorry, but given this reality, I don't see how integrating the GPU
helps. Perhaps if you integrate an additional memory controller that
supports GDDR4, that might do the trick, but then you're talking about
adding quite a few pins to the CPU. It's possible, but I wish AMD
would give some indication of what their plan actually is.

It may not be necessary to run at those bandwidths. These days, "good
enough" can be done at lower resolutions, with various special effects
turned off. Perfect for the occasional gamer.

Yousuf Khan
 
David Kanter

Yousuf Khan said:
It may not be necessary to run at those bandwidths. These days, "good
enough" can be done at lower resolutions, with various special effects
turned off. Perfect for the occasional gamer.

You are redirecting the argument and missing the point entirely.

Here's what you said:

"I think they're approaching the point where integrated graphics is
getting good enough for most gaming."

Now I don't know what you meant by that statement, but here's my
definition and rationale. If you disagree, please feel free to
elaborate how and why.

If you buy a PC, I expect that it will have a two-year lifetime. Good
enough for gaming means that the integrated graphics provide > 30 fps
for the games you will play over the lifetime of the laptop. More
specifically, that level of performance will be achieved when
operating at no less than 1024x768 (or whatever the closest, but
slightly larger widescreen resolution is), but without anti-aliasing
or anisotropic filtering. I expect that initially you should be able
to run games at somewhat higher resolutions with more eye candy, but
by a year or a year and a half after purchase, you'll be doing
1024x768 with no effects.

Note that this is highly dependent on the type of game. Some games
are highly CPU-intensive (Civilization, for instance), some are
graphically intensive (FPS, some tactical strategy games), etc.

Anyway, my point is that even mid-range mobile graphics cards today
pack more bandwidth than what AMD or Intel offer on a single socket.
If you want to offer gaming performance, then you need to match that
bandwidth for the GPU alone, and have system bandwidth for the CPU.
Now this isn't impossible; the thing to do is to integrate a GDDR4/5/6
controller in the GPU. However, that brings the cost problems back --
graphics memory is expensive stuff, and it's not trivial to lay out on
a board either.
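
To put a rough number on that idea, here's a sketch of what a
dedicated graphics-memory interface could deliver; the 128-bit width
and 2.2 GT/s GDDR4-class data rate are just illustrative assumptions,
not any announced part:

# Hypothetical on-package GDDR4-class interface, purely for scale.
# Both parameters below are assumptions for illustration.
bus_width_bits = 128     # assumed interface width
data_rate = 2.2e9        # assumed transfers/s per data pin (GDDR4-class)

peak = (bus_width_bits / 8) * data_rate
print(f"~{peak / 1e9:.1f} GB/s")  # ~35 GB/s, comparable to the mobile parts above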

Put another way, the advantage of CPU/GPU integration is that you get
real computational transistors (compared to what Intel does with their
integrated graphics), but it doesn't solve the bandwidth problem.
Raster-based graphics still requires lots of bandwidth. If you were to
try other types of visualization algorithms which are more
cache-friendly, then you might have better results, since you could
keep the effective bandwidth high without expensive GDDRx.

DK
 
David Kanter

Robert Redelmeier said:
Agreed. My #s were for simple 2D refresh. These figures
make it obvious that the GPU is handling _enormous_ amounts
of data, most likely in various 3D functions.

Integrating vidram into the GPU would make more sense.

I think it would be very hard to justify that, unless you could get
something very high density. You'd also have to trade off that against
die area, which you want to keep down. I could see using eDRAM, if
IBM's new stuff for a logic process is really suitable for HVM. That's
a big IF.

DK
 
Yousuf Khan

David Kanter said:
You are redirecting the argument and missing the point entirely.
Here's what you said:
"I think they're approaching the point where integrated graphics is
getting good enough for most gaming."
Now I don't know what you meant by that statement, but here's my
definition and rationale. If you disagree, please feel free to
elaborate how and why.

How is that supposed to be redirecting the argument? I'm talking about
"good enough" graphics in both statements. However "good enough" is
achieved doesn't matter, but it will involve designing to a hardware
checkpoint. Let's say they design games for the PC like they design
them for consoles. In consoles, the hardware doesn't change for more
than 3 years usually. So the game developers have an unchanging
landscape for that many years to write their games for. That's
different than the PC landscape where there are constant hardware
improvements every few months, if not weeks; here they have to design
games to make use of the latest hardware features all of the time. So
hardware that was good enough a couple of years ago is no longer good
enough. If they turned off hardware optimizations and went with
designing for checkpoint hardware, then things would be simpler for
most people.

Yousuf Khan
 
David Kanter

Yousuf Khan said:
How is that supposed to be redirecting the argument? I'm talking about
"good enough" graphics in both statements. However "good enough" is
achieved doesn't matter, but it will involve designing to a hardware
checkpoint.

Checkpoint?

What does matter is what "good enough" means. I suggested that the
bandwidth required by even low-end graphics cards, fit only for
notebooks, exceeds that of a modern MPU. If good enough is more than
low-end embedded graphics, which it is to me, then it sounds like you
have problems with memory bandwidth.

As I said before, why don't you define for us what the phrase "good
enough" means. I proposed my definition already and I'll elaborate a
bit more:

good enough = capable of running 95% of mainstream games that will be
released over the next 2 years. By mainstream, I mean stuff like
Oblivion, Unreal, World of Warcraft, the real hit games...not crap
like Mavis Beacon Teaches Typing or Deer Hunter XIV.

Yousuf Khan said:
Let's say they design games for the PC like they design them
for consoles. In consoles, the hardware doesn't change for more than 3
years usually. So the game developers have an unchanging landscape for
that many years to write their games for.

Why would you want to? Why throw away 3 years of progress in
hardware...that also means that every 5 years, you have to totally
redesign your tools from scratch. Look at how much fun everyone is
having working with the PS3...

Yousuf Khan said:
That's different than the PC
landscape where there are constant hardware improvements every few
months, if not weeks; here they have to design games to make use of the
latest hardware features all of the time.

I don't see how they are at all different. I'm going to posit that
the cost of a game is proportional to the capabilities of the
GPU...meaning that it grows exponentially. You're much better off
with gradual growth in capabilities than all of a sudden finding that
you are sitting on 10x more hardware than before. Evolution is good,
revolution is bad.

The argument you are making is equivalent to arguing that a closed
platform controlled by a single vendor is better than an open one that
is constantly innovating. Judging by the success of x86 relative to
anything else, it's clear that argument doesn't hold water.

Yousuf Khan said:
So hardware that was good enough a couple of years ago is no longer
good enough. If they turned off hardware optimizations and went with
designing for checkpoint hardware, then things would be simpler for
most people.

Checkpoint hardware? What are you talking about?

The problem with the console model is that it relies on being able to
predict what will be affordable to produce in the future. The way
consoles are designed is that the first iteration will be really
expensive, because the point is that you want the console to be
'reasonable' in the middle and end of its lifecycle. This requires
predictions over the course of 5 years or so. It's quite easy to botch
those predictions...

Honestly, I think predicting just one year in advance, as most folks
do, is a lot easier.

DK
 
