AMD to integrate PCIe into CPU

epaton


Two thoughts occur: first, that motherboards are about to get a fair bit
cheaper, and second, that overclocking is about to get more complex.

Sounds like motherboards are basically going to get turned into sockets and
a few things that wouldn't work inside the CPU. I'm sure I've read somewhere
that wireless and sound will need to remain separate due to the way they
work.
 
YKhan

Wireless, sound, IDE/SATA, USB/Firewire. Too many features that just
don't deserve a piece of the CPU real estate, and don't really need the
full speed of the CPU dedicated to them. Just a CPU and a southbridge,
basically; one step removed from an SoC.

Another thing I see them possibly using the integrated PCI-e connector
for is integrated shared-memory video. They can use the PCI-e video
protocols to share memory with the integrated video chipset directly.

Another use would be to offer even faster full dual-x16 SLI/Crossfire
support. They can connect one high-end video card to a northbridge x16
connector while the other one uses the CPU's x16 connector.

On the server front, they can connect things like commodity PCI-e
Infiniband cards directly to the CPU for HPC clusters.

Yousuf Khan
 
Robert Myers

YKhan said:
On the server front, they can connect things like commodity PCI-e
Infiniband cards directly to the CPU for HPC clusters.
LOM..."Landed-on-motherboard" more likely.

RM
 
YKhan

Robert said:
LOM..."Landed-on-motherboard" more likely.

No, that is already done now, through Hypertransport. But built-in
Infiniband would be a very specialized requirement. That would make it
a very specialized subcategory of an already specialized subcategory.
Can't see the economies of scale being all that good for a motherboard
with built-in Infiniband. This way they can plug in a bog-standard PCIe
Infiniband adapter (as bog-standard as those things get, anyway) and
get slightly better latency out of it. It may not be as good as
on-motherboard Infiniband, but it's better than going through a chipset
PCIe connector.

Yousuf Khan
 
nobody

epaton said:
Two thoughts occur: first, that motherboards are about to get a fair bit
cheaper, and second, that overclocking is about to get more complex.

Sounds like motherboards are basically going to get turned into sockets and
a few things that wouldn't work inside the CPU. I'm sure I've read somewhere
that wireless and sound will need to remain separate due to the way they
work.

Poor VIA, SIS, and ULI - they will be relegated to making commodity
south bridges, or fight mighty Intel for a piece of the Pentium chipset
market. Nvidia and ATI have at least something to fall back on -
graphics. The high-end GPU will stay separate from the CPU for quite a
while at least. OTOH, the low-end GPU may find its way into south bridges,
making them a bit less of a cheap commodity.
 
Yousuf Khan

nobody said:
Poor VIA, SIS, and ULI - they will be relegated to making commodity
south bridges, or fight mighty Intel for a piece of the Pentium chipset
market. Nvidia and ATI have at least something to fall back on -
graphics. The high-end GPU will stay separate from the CPU for quite a
while at least. OTOH, the low-end GPU may find its way into south bridges,
making them a bit less of a cheap commodity.

Considering the cooling requirements of even a low-end GPU (cooling fins
coming out all over the place), it's unlikely that they'll try to
integrate the GPU with the southbridge. The video chip overheats and you
lose connection to your hard drives? :)

Yousuf Khan
 
nobody

Yousuf said:
Considering the cooling requirements of even a low-end GPU (cooling fins
coming out all over the place), it's unlikely that they'll try to
integrate the GPU with the southbridge. The video chip overheats and you
lose connection to your hard drives? :)

Yousuf Khan

A low-end GPU like the X300 can do with a passive heatsink, and quite a few
northbridges now need a fan even without graphics. So they'll slap a
fan on the southbridge/GPU combo. If a fan is not enough, a BIG fan
will do. After all, they'll need to sell something, and the market for
cheap integrated chipsets will always be there. Looks like nobody at
Intel is afraid of losing the connection to RAM because the integrated
Extreme Graphics could overheat ;-)
 
Yousuf Khan

nobody said:
A low-end GPU like the X300 can do with a passive heatsink, and quite a few
northbridges now need a fan even without graphics. So they'll slap a
fan on the southbridge/GPU combo. If a fan is not enough, a BIG fan
will do. After all, they'll need to sell something, and the market for
cheap integrated chipsets will always be there. Looks like nobody at
Intel is afraid of losing the connection to RAM because the integrated
Extreme Graphics could overheat ;-)

One thing nobody has mentioned yet is the sheer irony of this situation.
Intel created PCI-e as a competitor to Hypertransport, because they
refused to adhere to a standard that AMD came up with. AMD gave the
green light to PCI-e without even a fight, knowing full well that PCI-e
and HT would be compatible with each other (just slightly different
physical layers), and now AMD may come up with the first PCI-e integrated
into the CPU.

Yousuf Khan
 
Del Cecchi

Yousuf said:
One thing nobody has mentioned yet is the sheer irony of this situation.
Intel created PCI-e as a competitor to Hypertransport, because they
refused to adhere to a standard that AMD came up with. AMD gave the
green light to PCI-e without even a fight, knowing full well that PCI-e
and HT would be compatible with each other (just slightly different
physical layers), and now AMD may come up with the first PCI-e integrated
into the CPU.

Yousuf Khan

Sigh. Where do you guys get these fairy stories? PCI-E was invented as
an I/O expansion network to replace PCI-X, which was reaching the end of
its rope and took too many pins. InfiniBand was too server-oriented.

Is everybody in this group full of conspiracy theories? I am really
starting to wonder about you guys.
 
Yousuf Khan

Del said:
Yes, it says Intel got PCI-E adopted. Hypertransport is a totally
different thing, capable of driving a few inches. It is an FSB. Why the
doof who wrote the article even mentioned it isn't clear.

Because there was a time when HT was proposed as the next generation
PCI. It was initially going to allow PCI to get faster by simply
splitting each PCI slot into its own PCI bus, with each of the PCI buses
connected over HT. Then eventually they were talking about HT gaining
its own slot connector and people using HT directly.

Both of those scenarios actually did come true, in a way. HT has become
a very popular underlying layer for PCI, PCI-X and even PCI-E. There is
also a slot connector standard for HT called HTX, but it's not
necessarily all that popular.

Yousuf Khan
 
Tony Hill

There's no spec that shows exactly how far each could be driven, but I
suspect that you'll find Hypertransport and PCI-Express could achieve
comparable distances for similar data rates. My idea of "a few
inches" in computer designs is 2-3", and there are definitely HT
setups running at high data rates that go further than that (I would
guess that the furthest I've seen would be about 12" for a 16-bit,
2000MT/s link).

Yousuf said:
Because there was a time when HT was proposed as the next generation
PCI. It was initially going to allow PCI to get faster by simply
splitting each PCI slot into its own PCI bus, with each of the PCI buses
connected over HT. Then eventually they were talking about HT gaining
its own slot connector and people using HT directly.

Both of those scenarios actually did come true, in a way. HT has become
a very popular underlying layer for PCI, PCI-X and even PCI-E. There is
also a slot connector standard for HT called HTX, but it's not
necessarily all that popular.

To the best of my knowledge there is only ONE HTX add-in card, an
Infiniband card from Pathscale. This card was recently used to set
some world records for low-latency communication in clusters.

The slot is actually VERY similar to PCI-Express (same physical
connectors) and the specs are designed to make it easy to have both
PCI-E and HTX on the same board.

Really when you get right down to it, Hypertransport and PCI-Express
started out with rather different goals but the end result is
surprisingly similar. I guess there really are only so many ways to
skin a cat.
 
Del Cecchi

Tony said:
There's no spec that shows exactly how far each could be driven, but I
suspect that you'll find Hypertransport and PCI-Express could achieve
comparable distances for similar data rates. My idea of "a few
inches" in computer designs is 2-3", and there are definitely HT
setups running at high data rates that go further than that (I would
guess that the furthest I've seen would be about 12" for a 16-bit,
2000MT/s link).

To the best of my knowledge there is only ONE HTX add-in card, an
Infiniband card from Pathscale. This card was recently used to set
some world records for low-latency communication in clusters.

The slot is actually VERY similar to PCI-Express (same physical
connectors) and the specs are designed to make it easy to have both
PCI-E and HTX on the same board.

Really when you get right down to it, Hypertransport and PCI-Express
started out with rather different goals but the end result is
surprisingly similar. I guess there really are only so many ways to
skin a cat.


HT can go maybe a foot, if you are really lucky. Work out the skew
budgets. At 2000 MT/s, the board is allocated less than 100 ps, as I recall.

PCI-E on the other hand can go several meters.

Totally different approaches.

del
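
To put rough numbers on the skew argument above (illustrative arithmetic
only; the 100 ps board budget is as recalled above, and ~170 ps/inch is a
typical FR-4 stripline propagation delay, not a figure from this thread):

# Rough skew-budget arithmetic for a 2000 MT/s source-synchronous link.
transfer_rate = 2000e6                 # transfers per second
unit_interval = 1 / transfer_rate      # seconds per bit on each signal line
skew_budget = 100e-12                  # seconds allotted to the board (assumed)
fr4_delay_per_inch = 170e-12           # flight time per inch of trace (assumed)

print("unit interval: %.0f ps" % (unit_interval * 1e12))                              # 500 ps
print("skew budget as fraction of UI: %.0f%%" % (100 * skew_budget / unit_interval))  # 20%
print("allowed length mismatch: %.2f in" % (skew_budget / fr4_delay_per_inch))        # ~0.59 in

In other words, every data line and the clock have to stay matched to
within roughly half an inch over the whole route, which is why a
source-synchronous parallel link tops out at around a foot, while PCI-E
lanes carry an embedded clock and are deskewed per lane at the receiver.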
 
Yousuf Khan

Tony said:
To the best of my knowledge there is only ONE HTX add-in card, an
Infiniband card from Pathscale. This card was recently used to set
some world records for low-latency communication in clusters.

The slot is actually VERY similar to PCI-Express (same physical
connectors) and the specs are designed to make it easy to have both
PCI-E and HTX on the same board.

I didn't realize that PCI-E and HTX had similar connectors. Is one an
extension of the other (e.g. HTX has a few extra connector segments beyond
the PCI-E slot, like VESA Local Bus was compared to ISA), or is it something
like EISA was to ISA, with somewhat deeper slots? Or are they totally
incompatible and just look similar?

Really when you get right down to it, Hypertransport and PCI-Express
started out with rather different goals but the end result is
surprisingly similar. I guess there really are only so many ways to
skin a cat.

Yup.

Yousuf Khan
 
Yousuf Khan

Del said:
HT can go maybe a foot, if you are really lucky. Work out the skew
budgets. At 2000 MT/s, the board is allocated less than 100 ps, as I recall.

PCI-E on the other hand can go several meters.

Totally different approaches.

Which is why PCI-e never got adopted as a CPU to CPU interconnect.

Yousuf Khan
 
David Kanter

Yousuf said:
Which is why PCI-e never got adopted as a CPU to CPU interconnect.

Yousuf Khan

One never knows what the future holds. Anyway, it's pretty obvious
that parallel transmission (read HT) is the way of the past. If you
look at the high-performance interconnects, they are all serial. Talk
to the Rambus guys, they know what they are doing...

Now, as to whether serial connections between CPUs are a good idea, I am
not entirely sure; I suspect Del is far more qualified to discuss that
topic than I am. Generally, serial connections can be driven far
faster, but there is slightly longer latency for SERDES.

HT was never envisioned to replace PCI-X, PCI or anything else.
Yousuf, you should at least try to distinguish yourself from AMD PR
personnel...

David
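
To make the serial-versus-parallel latency trade-off concrete, here is a toy
model (the SERDES and per-end overhead figures are assumptions for
illustration, not numbers from this thread):

# Toy latency model for moving one 64-byte cache line across a link.
# All per-end figures below are assumed, round numbers for illustration.
payload_bytes = 64

# Serial link (PCI-E-style): high per-lane rate, but SERDES at each end.
serial_bw = 4e9               # bytes/s, e.g. x16 at 2.5 GT/s with 8b/10b
serdes_latency = 2 * 20e-9    # assumed ~20 ns of SERDES/encoding per end

# Parallel source-synchronous link (HT-style): no SERDES, short reach.
parallel_bw = 4e9             # bytes/s, e.g. 16 bits at 2000 MT/s
parallel_overhead = 2 * 2e-9  # assumed ~2 ns of clocking overhead per end

serial_total = payload_bytes / serial_bw + serdes_latency
parallel_total = payload_bytes / parallel_bw + parallel_overhead
print("serial:   %.0f ns" % (serial_total * 1e9))    # ~56 ns
print("parallel: %.0f ns" % (parallel_total * 1e9))  # ~20 ns

For a short transfer the fixed SERDES cost dominates, which is the latency
penalty being traded against the serial link's longer reach and easier
routing.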
 
Yousuf Khan

David said:
One never knows what the future holds. Anyway, it's pretty obvious
that parallel transmission (read HT) is the way of the past. If you
look at the high-performance interconnects, they are all serial. Talk
to the Rambus guys, they know what they are doing...

Not quite: HT is a set of multiple serial interfaces. You can go from
one to 16 unidirectional links in one direction, and one to 16 in the
other direction too. Exactly the same as PCI-e.

HT was never envisioned to replace PCI-X, PCI or anything else.
Yousuf, you should at least try to distinguish yourself from AMD PR
personnel...

It would be much easier if I didn't have to go around correcting
misinformation.

Yousuf Khan
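
A quick back-of-the-envelope makes the comparison above concrete (peak
figures per direction, assuming a 16-bit HT link at 2000 MT/s and a
first-generation PCI-E x16 link at 2.5 GT/s per lane with 8b/10b encoding):

# Peak bandwidth per direction, back-of-the-envelope.
ht_bits_per_sec = 16 * 2000e6                # 16 parallel bits at 2000 MT/s
pcie_bits_per_sec = 16 * 2.5e9 * (8.0 / 10)  # 16 lanes, 80% encoding efficiency

print("HT 16-bit @ 2000 MT/s: %.1f GB/s" % (ht_bits_per_sec / 8e9))    # 4.0 GB/s
print("PCI-E x16 @ 2.5 GT/s:  %.1f GB/s" % (pcie_bits_per_sec / 8e9))  # 4.0 GB/s

Both work out to the same 4 GB/s each way; the differences are in clocking
and reach rather than raw throughput.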
 
Kai Harrekilde-Petersen

David Kanter said:
One never knows what the future holds. Anyway, it's pretty obvious
that parallel transmission (read HT) is the way of the past. If you
look at the high-performance interconnects, they are all serial. Talk
to the Rambus guys, they know what they are doing...

While a serial (and encoded) link is way easier to handle, the sky is
not the limit. Consider that at 10 Gb/s, standard FR-4 board material
has quite frightening losses, which limits how far you can send the
signal. The several meters that Del talks about are over cables, I think.

And just exactly why would you want to go several meters on a CPU to
CPU interconnect (at least in the x86 mass market)?

Sure, the parallel link has other problems, as also pointed out by Del,
but my point here is that blindly claiming that either technology is
the "right thing" is not a good idea.

Latency, bandwidth, die area, power consumption, and maximum trace
length should all be considered.

Now, as to whether serial connections between CPUs are a good idea, I am
not entirely sure; I suspect Del is far more qualified to discuss that
topic than I am. Generally, serial connections can be driven far
faster, but there is slightly longer latency for SERDES.

Definitely - and as we know: money can buy you bandwidth, but latency
is forever.

Think of the performance of SDRAMs - while the DDRs have awesome peak
BW numbers, they rarely translate into real-world benefits that are
worth talking about.


Kai
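
To put a rough number on "latency is forever" (illustrative figures only,
assuming DDR-400 on a 64-bit bus and about 50 ns total latency for a
dependent access that returns one 64-byte cache line):

# Peak burst bandwidth versus latency-bound, dependent accesses.
peak_bw = 400e6 * 8          # bytes/s: DDR-400, 64-bit bus (assumed config)
line_bytes = 64
access_latency = 50e-9       # seconds per dependent access (assumed)

effective_bw = line_bytes / access_latency
print("peak burst: %.1f GB/s" % (peak_bw / 1e9))        # 3.2 GB/s
print("dependent:  %.2f GB/s" % (effective_bw / 1e9))   # ~1.28 GB/s

For pointer-chasing access patterns the achievable rate is set almost
entirely by the access latency; raising the peak transfer rate barely moves
it, which is Kai's point.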
 
