Intel found to be abusing market power in Japan


Keith R. Williams

George Macdonald wrote:

Mirrored pair sounds to me like they just flip the silkscreen upside
down and have it expose the next die in the opposite direction. Is this
possible? The transistors photograph properly onto the wafer whether the
mask is right-side up or upside down?

I'm not a process type, but I wouldn't think so. It would seem that
the mirroring could be done in the mask preparation (bits is bits).
Both images (multiples of each) could live in the same reticle.
Either way, it seems to be an ugly (and expensive) solution.
They're starting to bring out the 2.6 GHz parts now, and it's likely that
they'll get it all of the way up to 3.0 GHz or more by the end of the 90nm
process.

I don't think that's anything more than the normal process learning and
stabilization.
 

Robert Myers

Yousuf said:
Yeah, well, I remember those photos too, but most of that discussion was
centered around whether Prescott is going to have 64-bit or not. Nobody
really guessed about the additional pipeline stages until much later.

I think if we see even a single additional pipeline stage in a
processor, you can consider it a major redesign, because each of those
pipeline stages usually requires its own section of circuitry on the chip.
I don't want to go look for the photos, even if they could be found.
Intel marked them up to show that the data flow on the dies was
completely changed.

1. There was no mystery that the layout had been completely redone.

2. One was led to surmise that Intel was really proud of its circuit
design.

That's the marketing pizazz you loathe so much. They know how to lay
it on with a trowel. In this case, though, the plaster dried badly,
cracked, and fell off the wall. If you're saying you didn't think
Prescott was a major redesign after seeing Intel's marked-up photos,
though, I don't know what to say.

Had I known about the pipeline stages--surely a closely-guarded
secret--my reaction would have been, "What the...?", because anyone
acquainted with NetBurst's, ah, very special qualities would have known
that such a change would make NetBurst's most undesirable features
even more undesirable.
That's the Linux crowd. The typical Windows user wouldn't know what to
do with a compiler.
I wouldn't know what a typical Windows user looks like, but people who
develop software for Windows often use Microsoft's compiler, which has a
very good reputation, and which I believe supported x86-64 rather
quickly. "The Linux crowd," you realize, includes a substantial number
of RedHat users in the server space who could benefit from the added
registers almost immediately and without having to recompile
themselves. I suspect that the enterprise users who really *need* 64
bit pointers are in some cases still testing.

RM
 

YKhan

Robert said:
If it's an original insight, I'll be happy to take credit for it, but
I doubt that it's original.

I think you better take credit, because I've never heard anyone else
expound this theory before.
The lack of named registers is a significant architectural deficiency
for x86. From the point of view of current needs, it's a much more
serious deficiency than 32-bit pointers. If Intel hadn't wanted to
end-of-life x86, the problem would have been addressed long ago.
Don't go searching around Intel's web site to find that in a press
release.

Yeah sure, it's one of the big x86 flaws. But how do you know Intel
specifically targeted this area of deficiency and said, "Lord, let's
pray AMD doesn't improve this part of the architecture, or else Itanium
is in trouble"? I've only ever heard Intel disparage the whole 64-bit
extensions concept, never seen them disparage the specific feature of
the extra registers. If they were specifically worried about it, then
they might've said something like, "And those 8 extra registers, they
have no chance against our Itanium's 128 total registers, they haven't
got a chance!", followed by evil giggling.

Yousuf Khan
 

Robert Myers

I think you better take credit, because I've never heard anyone else
expound this theory before.
The relative importance of register pressure vs. pointer size as
limitations of x86 has been hashed through many times online. Just
for curiosity, I checked groups.google.com on

x86 limited "address space"

and got 932 hits vs. 2140 hits for

x86 limited registers

I'm surprised it was that close. Judging from the die photo exchange,
though, nothing short of an explicit declaration from Intel would
convince you that giving up that advantage for Itanium was something
they wouldn't have wanted to do.
Yeah sure, it's one of the big x86 flaws. But how do you know Intel
specifically targeted this area of deficiency and said, "Lord, let's
pray AMD doesn't improve this part of the architecture, or else Itanium
is in trouble"? I've only ever heard Intel disparage the whole 64-bit
extensions concept, never seen them disparage the specific feature of
the extra registers. If they were specifically worried about it, then
they might've said something like, "And those 8 extra registers, they
have no chance against our Itanium's 128 total registers, they haven't
got a chance!", followed by evil giggling.

I think you've got Microsoft mixed up with Intel.

While you've got me in this mode, though, I checked groups.google.com
on

"ia-64 or ia64 or itanium" "too many" registers

and got 299 hits. There is such a thing as too much of a good thing.

RM
 

George Macdonald

When the way size is greater than the page size? The page size is
measured in KB, while the way size is measured in "ways". How do you
compare? Oh, you mean cache size divided by the number of ways, compared
against the page size.

Yeah I meant the individual size of each way - dunno how else to express
it. The number of ways would be the associativity.
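The arithmetic being described can be sketched quickly (my own illustration and numbers, not from the thread): the size of one way is the total cache size divided by the associativity, and the interesting case is when that figure exceeds the page size, because then the cache index bits extend above the page-offset bits.

```python
# Sketch of the way-size arithmetic discussed above. The cache and page
# parameters are invented examples, not anything from the thread.

def way_size_bytes(cache_size_bytes: int, associativity: int) -> int:
    """Size of one way of a set-associative cache: total size / number of ways."""
    return cache_size_bytes // associativity

PAGE_SIZE = 4 * 1024  # a typical 4 KB page

# Example: a 64 KB, 2-way set-associative cache.
way = way_size_bytes(64 * 1024, 2)
print(way)              # 32768 bytes per way
print(way > PAGE_SIZE)  # True: index bits reach above the page offset,
                        # which is why the comparison against page size matters
```

When the way size exceeds the page size, a virtually indexed cache can no longer be indexed entirely from the page offset, which is the usual reason for making this comparison at all.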
 

Robert Myers

"The outside world" <> memory. Perhaps now you see why I think you're
purposely misleading. Certainly you *know* this. Hypertransport is an
I/O interface primarily, though is used for memory in a UMA sort of system.
It's a very different proposition from putting memory requests on a
front-side bus and letting a memory controller deal with it. The
traffic that would normally go onto a shared Intel front-side bus is
split between traffic for local memory and hypertransport. If I were
more accustomed to NUMA systems, I might have referred to it as an
interconnect, but it is carrying memory traffic, and the on-chip
management of that traffic cannot be trivial. And it is interfacing
to memory through that link. The fact that, with good locality, most
of the traffic might be going through the link to local memory changes
very little. You still have to interface to memory through
hypertransport and you have to manage the traffic on-die. Maybe
that's all easy. It certainly didn't sound easy to me at the time.

I don't think that's true. Few really did "know" this, until it was
over. *VERY* few saw that particular speed bump. After 130nm (a fairly
simple transition), everyone was cruising. Oops!
Strained silicon sounded perfectly straightforward to you?
Your posts speak for themselves. You seem to be distraught that Opteron
brought Itanic to its grave. ...when it was really Intel's senior
management that blew it (on both ends).
I have neither understood nor paid much attention to Intel's plans for
Itanium on the desktop, other than that I knew that Intel planned to
replace x86 completely with Itanium at some point. The space that I
am interested in is impacted by Opteron in a completely different way,
and I'm still not sure I understand what the impact is. As to my
being "distraught," I do occasionally allow myself to be upset by
things I have no control over, but this isn't one of them.

Neither you nor I know what money has traded places. I note that you don't
comment on the AMD bodies placed in IBM-EF as a joint venture. They
aren't exactly free either.
I'm just not going to go look for the news releases or the posts
(here). I think the actual figure was smaller, but I just can't be
bothered. IBM and AMD worked out a joint venture that was to their
mutual benefit. The terms of the deal were announced. I don't know
why you want to argue about it, and I don't understand why you think I
read something dark into it. I don't.

Your flip-flop on the technology "problem". First you say it's a
"leakage" problem, then say that P-M is better because it performs better
at lower frequency. Which is it?
If you consume less power for equivalent performance, the part of the
energy budget that goes into leakage is not so much of a problem, and
there is less leakage to begin with if you can operate at lower
voltage.
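That argument can be made concrete with a toy first-order model (the constants here are entirely invented for illustration; real leakage grows much faster than linearly with voltage): dynamic power scales roughly as C·V²·f, and a design that reaches equivalent performance at a lower clock can also run at lower voltage, shrinking both terms.

```python
# Toy power model for the argument above. All constants are made up for
# illustration; this is a sketch, not a characterization of real parts.

def dynamic_power(c_eff: float, volts: float, hertz: float) -> float:
    """Switching power, roughly C_eff * V^2 * f."""
    return c_eff * volts * volts * hertz

def leakage_power(volts: float, i_leak: float) -> float:
    """Static power, crudely V * I_leak (real leakage is worse than linear in V)."""
    return volts * i_leak

C_EFF = 5e-9   # effective switched capacitance, farads (invented)
I_LEAK = 10.0  # leakage current, amps (invented)

# A NetBurst-style part: high clock, high voltage.
hot = dynamic_power(C_EFF, 1.4, 3.4e9) + leakage_power(1.4, I_LEAK)
# A Pentium M-style part: equivalent work at lower clock and voltage.
cool = dynamic_power(C_EFF, 1.1, 2.0e9) + leakage_power(1.1, I_LEAK)

print(cool < hot)  # True: both the switching and the leakage terms shrink
```

The point of the sketch is only the direction of the inequality: lower voltage cuts the V² switching term and reduces absolute leakage at the same time.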

Then that's an even dumber response than I'd have expected. Certainly
someone has to sell boxes, but what I said *is* still true. There is no
invention in Dell. It is no more than Intel's box-making plant. It's
interesting that they couldn't even make a profit on white ones, since
that's all they do.

How Intel sells its processors does matter, and Dell, just like
everyone else, ultimately has to sell performance, especially into the
server space. If Intel caved on 64-bits for x86, it's because the
people who sell boxes for them said they needed it. You're the one
who claims Intel is marketing-driven. Who's flip-flopping now?

RM
 

Delbert Cecchi

At or about the same time, RapidIO came from Motorola. Several clock
forwarded interfaces came from IBM, including STI and RIO. OIF had SPI4.
Hypertransport wasn't anything to write home about. You seem to be more
impressed with it than most. Putting the memory controller on the chip
was a good idea for a desktop.
It's a very different proposition from putting memory requests on a
front-side bus and letting a memory controller deal with it. The
traffic that would normally go onto a shared Intel front-side bus is
split between traffic for local memory and hypertransport. If I were
more accustomed to NUMA systems, I might have referred to it as an
interconnect, but it is carrying memory traffic, and the on-chip
management of that traffic cannot be trivial. And it is interfacing
to memory through that link. The fact that, with good locality, most
of the traffic might be going through the link to local memory changes
very little. You still have to interface to memory through
hypertransport and you have to manage the traffic on-die. Maybe
that's all easy. It certainly didn't sound easy to me at the time.
(by the way, this paragraph confuses me. Too many pronouns with unclear
antecedents.)

Not clear to me what memory data is coming over HT in a normal type
desktop system. In a NUMA MP system yes.

del cecchi
Strained silicon sounded perfectly straightforward to you?

yep. What makes you think the strained silicon was the problem with the
90 nm transition?
I have neither understood nor paid much attention to Intel's plans for
Itanium on the desktop, other than that I knew that Intel planned to
replace x86 completely with Itanium at some point. The space that I
am interested in is impacted by Opteron in a completely different way,
and I'm still not sure I understand what the impact is. As to my
being "distraught," I do occasionally allow myself to be upset by
things I have no control over, but this isn't one of them.


I'm just not going to go look for the news releases or the posts
(here). I think the actual figure was smaller, but I just can't be
bothered. IBM and AMD worked out a joint venture that was to their
mutual benefit. The terms of the deal were announced. I don't know
why you want to argue about it, and I don't understand why you think I
read something dark into it. I don't.

<snip>

IBM did the AMD deal for reasons of its own. If Keith or I knew what
those reasons were, it would be unwise to go blabbing all over the net
If you consume less power for equivalent performance, the part of the
energy budget that goes into leakage is not so much of a problem, and
there is less leakage to begin with if you can operate at lower
voltage.



How Intel sells its processors does matter, and Dell, just like
everyone else, ultimately has to sell performance, especially into the
server space. If Intel caved on 64-bits for x86, it's because the
people who sell boxes for them said they needed it. You're the one
who claims Intel is marketing-driven. Who's flip-flopping now?

RM

Any rational company is marketing driven.

Intel caved on 64 bits for x86 because the handwriting was on the wall
for Itanium.

Here is my question for the day.... What is the effect on HP's servers
of Itanium becoming a niche product?
del
 

Robert Myers

At or about the same time, RapidIO came from Motorola. Several clock
forwarded interfaces came from IBM, including STI and RIO. OIF had SPI4.
Hypertransport wasn't anything to write home about. You seem to be more
impressed with it than most. Putting the memory controller on the chip
was a good idea for a desktop.
"The outside world" <> memory. Perhaps now you see why I think you're
purposely misleading. Certainly you *know* this. Hypertransport is an
I/O interface primarily, though is used for memory in a UMA sort of system.
[Hypertransport for shared memory is] a very different proposition from putting memory requests on a
front-side bus and letting a memory controller deal with [the request]. The
traffic that would normally go onto a shared Intel front-side bus is
split between traffic for local memory and hypertransport. If I were
more accustomed to NUMA systems, I might have referred to [the hypertransport interface] as an
interconnect, but it is carrying memory traffic, and the on-chip
management of that traffic cannot be trivial. And [the processor] is interfacing
to memory through that link. The fact that, with good locality, most
of the traffic might be going through the link to local memory changes
very little. You still have to interface to memory through
hypertransport and you have to manage the traffic on-die. Maybe
that's all easy. [Putting all that on the die] certainly didn't sound easy to me at the time.
(by the way, this paragraph confuses me. Too many pronouns with unclear
antecedents.) [All pronouns replaced by explicit antecedents]

Not clear to me what memory data is coming over HT in a normal type
desktop system. In a NUMA MP system yes.
The design work for SMP had to be done, and the 2 and 4 way SMP space
is what mattered. No big deal? How would I know? It's been a long
time since I designed my last microprocessor with on-die memory
controller, hypertransport interface, and crossbar.
yep. What makes you think the strained silicon was the problem with the
90 nm transition?

Oh, I don't know. Andy Grove at a Transmeta roll-out where Transmeta
was talking about a new way to manage leakage? My read was that the
device characteristics hadn't come out as planned. _Something_ took
Intel by surprise.

IBM did the AMD deal for reasons of its own. If Keith or I knew what
those reasons were, it would be unwise to go blabbing all over the net

Right. So why are we talking about it? :).
Any rational company is marketing driven.

Intel caved on 64 bits for x86 because the handwriting was on the wall
for Itanium.

Here is my question for the day.... What is the effect on HP's servers
of Itanium becoming a niche product?

Before I answered _that_ question, I'd want to know what is the effect
on HP's servers of Carly Fiorina being fired?

Fujitsu Siemens winds up with the enterprise server part of HP's
business either as the owner or as the maker of boxes to be rebranded.

But I think those boxes still use Itanium.

RM
 

Delbert Cecchi

Robert Myers said:
On Sun, 20 Mar 2005 18:01:14 -0500, Robert Myers wrote:

Since it is *not* a memory interface, it's not a new one, now is it.

You can call it whatever you like, Keith. AMD changed the way its
processors communicate with the outside world.

At or about the same time, RapidIO came from Motorola. Several clock
forwarded interfaces came from IBM, including STI and RIO. OIF had SPI4.
Hypertransport wasn't anything to write home about. You seem to be more
impressed with it than most. Putting the memory controller on the chip
was a good idea for a desktop.
"The outside world" <> memory. Perhaps now you see why I think you're
purposely misleading. Certainly you *know* this. Hypertransport
is
an
I/O interface primarily, though is used for memory in a UMA sort
of
system.
[Hypertransport for shared memory is] a very different proposition from putting memory requests on a
front-side bus and letting a memory controller deal with [the request]. The
traffic that would normally go onto a shared Intel front-side bus is
split between traffic for local memory and hypertransport. If I were
more accustomed to NUMA systems, I might have referred to [the hypertransport interface] as an
interconnect, but it is carrying memory traffic, and the on-chip
management of that traffic cannot be trivial. And [the processor] is interfacing
to memory through that link. The fact that, with good locality, most
of the traffic might be going through the link to local memory changes
very little. You still have to interface to memory through
hypertransport and you have to manage the traffic on-die. Maybe
that's all easy. [Putting all that on the die] certainly didn't
sound easy to me at the time.
(by the way, this paragraph confuses me. Too many pronouns with unclear
antecedents.) [All pronouns replaced by explicit antecedents]

Not clear to me what memory data is coming over HT in a normal type
desktop system. In a NUMA MP system yes.
The design work for SMP had to be done, and the 2 and 4 way SMP space
is what mattered. No big deal? How would I know? It's been a long
time since I designed my last microprocessor with on-die memory
controller, hypertransport interface, and crossbar.

Oh, well if you use an appropriate coherence protocol, using a link
isn't much different than a bus. Especially when it is a two node
system.

Look at all the rings and stuff on Power4.
Oh, I don't know. Andy Grove at a Transmeta roll-out where Transmeta
was talking about a new way to manage leakage? My read was that the
device characteristics hadn't come out as planned. _Something_ took
Intel by surprise.

<snip>

Yeah. And why do you think it was strained silicon? I know of several
effects in the last few generations that were found the hard way. :-(
There is one I still don't know why it happens. And never will. Gave
up. Moved on. Changed the circuit.

Right. So why are we talking about it? :).
You are talking about it because you don't know and have nothing to
lose. I know a small amount and need to eat. So I talk about talking
about it. :)
Before I answered _that_ question, I'd want to know what is the effect
on HP's servers of Carly Fiorina being fired?

Fujitsu Siemens winds up with the enterprise server part of HP's
business either as the owner or as the maker of boxes to be rebranded.

But I think those boxes still use Itanium.

RM

Do you think the HP enterprise servers have the critical mass to support
Itanium development, if Intel were to back away? IBM had to converge,
Sun is getting wobbly about SPARC, or so it seems.
 

Delbert Cecchi

Robert Redelmeier said:
Perhaps the misunderstanding is mine. I'm amazed that they
were that concerned 30 years ago. But perhaps your company
was big juicy target for the DoJ.

-- Robert
Perhaps? Only since the 30s or 40s.

IP was an issue in the 1956 consent decree.
 

Robert Myers

Robert Myers said:
On Sun, 20 Mar 2005 18:01:14 -0500, Robert Myers wrote:





Yeah. And why do you think it was strained silicon? I know of several
effects in the last few generations that were found the hard way. :-(
There is one I still don't know why it happens. And never will. Gave
up. Moved on. Changed the circuit.

Because that's the process Intel was using? They didn't get the
scaling they needed at 90nm. What would you like to blame it on?

It's true: the problems discussed at the Transmeta briefing were not
peculiar to strained silicon or to any other process. But it was
Intel, with its strained silicon, that had a problem big enough and
urgent enough to get Andy Grove to a public presentation.

Process is black art and nobody knows how it will come out? I think I
understand that. IBM's black art looks to have been more equal to its
needs and AMD's needs than Intel's black art was to its needs. Is
that sufficiently vague to acknowledge how little is really
understood?
You are talking about it because you don't know and have nothing to
lose. I know a small amount and need to eat. So I talk about talking
about it. :)

I know only what I read in the papers. I brought up the figure that
was previously reported and discussed here only to acknowledge that I
had previously made a comment that by some stretch could be construed
to mean that I thought there must be more to the deal than what was
published. I didn't mean to imply that.

I didn't mean to do anything other than to note what was reported in
the press and to observe that it didn't seem like a very large figure,
given the stakes. Period. The end. If Keith thinks I'm trying to
construct some kind of cabal out of no information at all, let him
think it.
Do you think the HP enterprise servers have the critical mass to support
Itanium development, if Intel were to back away? IBM had to converge,
Sun is getting wobbly about SPARC, or so it seems.
I have a very hard time imagining a scenario in which Intel walks away
from Itanium. Just because I can't imagine it doesn't mean it won't
happen.

RM
 

Delbert Cecchi

Robert Myers said:
On Wed, 23 Mar 2005 01:17:28 GMT, "Delbert Cecchi"
snip

Because that's the process Intel was using? They didn't get the
scaling they needed at 90nm. What would you like to blame it on?

It's true: the problems discussed at the Transmeta briefing were not
peculiar to strained silicon or to any other process. But it was
Intel, with its strained silicon, that had a problem big enough and
urgent enough to get Andy Grove to a public presentation.

It was Intel that was the elephant in the room, that was of enough
public interest.
There were all sorts of surprises in 90nm.
Process is black art and nobody knows how it will come out? I think I
understand that. IBM's black art looks to have been more equal to its
needs and AMD's needs than Intel's black art was to its needs. Is
that sufficiently vague to acknowledge how little is really
understood?


I know only what I read in the papers. I brought up the figure that
was previously reported and discussed here only to acknowledge that I
had previously made a comment that by some stretch could be construed
to mean that I thought there must be more to the deal than what was
published. I didn't mean to imply that.

I didn't mean to do anything other than to note what was reported in
the press and to observe that it didn't seem like a very large figure,
given the stakes. Period. The end. If Keith thinks I'm trying to
construct some kind of cabal out of no information at all, let him
think it.
snip
I have a very hard time imagining a scenario in which Intel walks away
from Itanium. Just because I can't imagine it doesn't mean it won't
happen.

RM

Why would Intel stay with it? Desktop and small servers belong to
x86-64 from Intel and AMD. Anybody but HP making serious noises about
using Itanium? HP is sort of stuck, having ported all that stuff from
Alpha and PA and nonstop to Itanium, but I don't see anyone else in that
boat.

Maybe IBM is way smart and their ventures with AMD led to the death of
Itanium and Intel's plan for hegemony. Wow, that would mean I really
misunderestimated IBM executives. :)
 

Robert Myers

Why would Intel stay with it? Desktop and small servers belong to
x86-64 from Intel and AMD. Anybody but HP making serious noises about
using Itanium?

http://uk.news.yahoo.com/050318/221/fehj1.html

<quote>

Within the past four years, there have probably been more stories
questioning the long-term viability of the Sparc and Itanium
architectures than any other architecture in the past several decades,
aside from the S/390 mainframe. Both Fujitsu Corp and Siemens AG
(Xetra: 723610.DE), the Japanese and German counterparts in
the Fujitsu-Siemens partnership, are long-term planners that move
slowly and methodically. And they both have every intention of making
some money selling Sparc and Itanium servers for the foreseeable
future.

</quote>

http://www.infoworld.com/article/05/03/16/HNfujitsuprimequest_1.html

<quote>

Fujitsu to launch new PrimeQuest Itanium servers
Slated to be announced April 5, PrimeQuest will be company's first
high-end Itanium 2 systems

</quote>

Then there is SGI, of course. I assume that the Altix line will
survive somewhere, and that it will use Itanium, if it's available.

Any of that amounts to critical mass? I don't think so.

Either Intel finds a way for Dell to sell Itanium systems that is
profitable for Dell, or that's it for Itanium, but don't underestimate
Dell/Intel. The incentive for Dell, other than being obedient to
Santa Clara, is that it wants to be in the higher margin businesses
just like everybody else does.

Opteron hasn't yet and may not ever penetrate much beyond the 2 and
4-way space. That's the next line of defense. If Dell as a volume
purveyor of bigger SMP boxes is the way it goes down, Dell will wind
up killing margin rather than capturing it, as it always does.

If Intel manages to establish Itanium as the worthy competitor to Power
it wants it to be, and Dell creates the value proposition for Itanium,
all hell breaks loose again. Or not.

One indicator will be market penetration by Opteron in the 8-way space
and higher, and how Intel reacts. x86 is already getting hardware
virtualization. If x86 starts to acquire the RAS features Intel now
intends only for Itanium, we will know that Itanium is dead.
HP is sort of stuck, having ported all that stuff from
Alpha and PA and nonstop to Itanium, but I don't see anyone else in that
boat.
The alternatives are: use x86 for mainframe applications (politically
unacceptable, IMHO), continue investing in Sparc, or become dependent
on IBM.

RM
 

George Macdonald

On Wed, 23 Mar 2005 01:17:28 GMT, "Delbert Cecchi"


I know only what I read in the papers. I brought up the figure that
was previously reported and discussed here only to acknowledge that I
had previously made a comment that by some stretch could be construed
to mean that I thought there must be more to the deal than what was
published. I didn't mean to imply that.

I didn't mean to do anything other than to note what was reported in
the press and to observe that it didn't seem like a very large figure,
given the stakes. Period. The end. If Keith thinks I'm trying to
construct some kind of cabal out of no information at all, let him
think it.

Robert, just to clarify what is being discussed against that $45.xM
reported here
http://yahoo.businessweek.com/magazine/content/03_10/b3823085_mz063.htm -
it was my impression that was for the initial bail-out on the SOI only. If
it really was a great bargain, possibly IBM wanted a little publicity
benefit from their superior tech... over what Moto -- also an IBM
collaborator -- had been able to provide. IOW, could be that it validates
the technology and puts Moto in their place. Any idea who is actually
making CPUs for Apple? :) The further (continuing) collaboration between
AMD and IBM on process technology was/is a separate deal with separate
financials.
I have a very hard time imagining a scenario in which Intel walks away
from Itanium. Just because I can't imagine it doesn't mean it won't
happen.

There's no doubt that, if it continues in current fashion, shareholders are
going to get antsy. Intel is about big enough to behave like govt when it
comes to pissing away $$ into a bottomless pit - eventually an Itanium tax
on the useful product is going to tell... for customers and shareholders.
From my POV, Intel's ambition to challenge big IBM systems is shooting at
the moon - I don't think even M. Dell believes he can go there.
 

chrisv

Robert said:
Opteron hasn't yet and may not ever penetrate much beyond the 2 and
4-way space. That's the next line of defense. If Dell as a volume
purveyor of bigger SMP boxes is the way it goes down, Dell will wind
up killing margin rather than capturing it, as it always does.

Intel manages to establish Itanium as the worthy competitor to Power
it wants it to be, Dell creates the value proposition for itanium, and
all hell breaks loose again. Or not.

One indicator will be market penetration by Opteron in the 8-way space
and higher, and how Intel reacts.

Well, with dual-core, Opteron will be 8-way capable, right? That's
getting to be a serious box!
 

Felger Carbon

Robert Myers said:
If Dell as a volume
purveyor of bigger SMP boxes is the way it goes down, Dell will wind
up killing margin rather than capturing it, as it always does.

"Killing margin" is an interesting phrase. A synonym is "eliminating
the middleman markup", which some would see as a desirable goal. Many
Americans deplore the disappearance of small outlets with high
markups, but then the same Americans do all their shopping at Walmart.
One indicator will be market penetration by Opteron in the 8-way space
and higher

I too favor the glueless NUMA SMP configurations made possible by
Opterons, including 8-way. But when I think of an 8-way motherboard,
I think of a farmer showing up with a John Deere tractor, planning to
plow the back 40. ;-)

And when I think of 8-way Opterons on more than one mobo, I worry
about high-speed link connections between boards. I believe I've been
reassured before on this NG that such connections are possible. Has
this been proven? Are production systems being shipped with 8-way
Opterons on multiple boards?
 

Rob Stow

Felger said:
"Killing margin" is an interesting phrase. A synonym is "eliminating
the middleman markup", which some would see as a desirable goal. Many
Americans deplore the disappearance of small outlets with high
markups, but then the same Americans do all their shopping at Walmart.




I too favor the glueless NUMA SMP configurations made possible by
Opterons, including 8-way. But when I think of an 8-way motherboard,
I think of a farmer showing up with a John Deere tractor, planning to
plow the back 40. ;-)

And when I think of 8-way Opterons on more than one mobo, I worry
about high-speed link connections between boards. I believe I've been
reassured before on this NG that such connections are possible. Has
this been proven? Are production systems being shipped with 8-way
Opterons on multiple boards?

I read an article somewhere about an 8-way Opteron system that
had two stacked 4P boards. I thought it was an HP system, but I
looked just now and couldn't find anything with more than 4P at
HP's site.

It was probably just a review of something demoed at a show like
LinuxWorld and not yet - if ever - available in the retail channel.
 
