Intel found to be abusing market power in Japan


keith

Perhaps the misunderstanding is mine. I'm amazed that they
were that concerned 30 years ago. But perhaps your company
was a big, juicy target for the DoJ.

AFAIK, lawyers have always been concerned with getting in the middle of
the heat. Besides, who says an old dog doesn't know better tricks? ;-)
 

Robert Myers

GMAFB, they did *not* change the instruction set. It is still x86, and
*backwards* compatible, which is the key. They did move the controller
on-die (an obvious move, IMO), but certainly did *not* develop a new
memory interface (what *are* you smoking?). They also did not go with a
new process. They started at 130nm, which was fairly well known. I don't
see any huge "risks" here at all. The only risk I saw was that Intel
would pull the rug out with their own x86-64 architecture. But no, Intel
had no such intentions, since that would suck the life out of Itanic.
Instead they let AMD do the job.
I don't know how to cope with the way you use language. They changed
the instruction set. Period. Backward-compatible != unchanged.

In what way was hypertransport not a new memory interface for AMD?

It's true, they introduced at 130nm, but at a time when movement to
90nm was inevitable, where new cleverness would be required..."They
had to...go to a new process." If Intel had been able to move to 90nm
(successfully) with Prescott and AMD was stuck at 130 nm, it would
have been a different ball game.
Nonsense. I was betting on the outcome, and unlike you, with real
greenies. ...and don't blame IBM for pulling it off. It was all AMD.
IBM is in the business of making money, nothing more.

Another odd choice of language. Blame IBM? Who? For what? Why?

IBM is in the business of making money? What else do you imagine I
think IBM is up to? Trying to put Intel out of business?

As to investment decisions, the only investment prospects I could see
for Intel or AMD were downside, and not in a way that was sure enough
to justify a short position, even were I in the habit of taking such
positions. I didn't like Intel's plans any better than I liked AMD's.
Bullshit! Intel wanted Itanic to be the end-all, and to let x86 starve to
death. AMD had other plans and anyone who had any clue of the history of
the business *should* have known that AMD would win. They won because
their customers won. Intel will make loads of money off AMD64, but they
don't like it.
You haven't shown in what way Intel's plans for Itanium affected the
success or failure of x86 offerings of AMD and Intel. Intel had the
money for the huge gamble it made on Itanium. It was not a bet the
company proposition.
Felger and I have been known to disagree. Because he agrees with you this
time, he's now the authority? I see.
Later on, you say that Intel's circuits are better than AMD's. If
they didn't put enough money into Prescott circuit design (maybe
because they put it into IA-64), they got better circuits, anyway?
Which is it, Keith?
You're both wrong. The money went into Itanic! Then an iceberg
happened. Prescott was what was left of the life-rafts. ...not pretty.
Dramatic imagery is not an argument.
Wrong, wrong, wrong! You and Intel have the same dark glasses on. Itanic
was the failure. "Netburst" was the lifeboat with the empty water
containers. It was too little and *way* too late.
Ignore Itanium. Intel had the money to gamble on Itanium and to
advance x86 technology. They chose the wrong road for x86, which was
to continue NetBurst.

If anybody at Intel ever thought NetBurst was going to be more than
Megahertz hype (except for certain kinds of problems), they should
have been disabused of that notion before it was too late to abandon
the architecture at 90nm...

Or maybe not. Maybe somebody at Intel understood the architecture was
dead-ending on heat and there just wasn't time to move something else
into that space. Nothing in Intel's public pronouncements indicated
they understood the megahertz race was over, though.
You can repeat yourself into next week, but you're still wrong.
Intel had to curtail plans for Prescott and kill other Netburst
projects because of leakage--physics and heat--physics. They thought
they could beat those problems with process improvements--physics.
They couldn't--physics.
You really should study microarchitecture some more. What broke the P4
was silly marketeering. It was too big to fit the die given (by marketing),
so they tossed overboard some rather important widgets.

Which generation are you talking about? The original P4 went on a
transistor reduction plan because of power consumption problems. That
Prescott would have had to go on a transistor budget for similar
reasons seems almost inevitable. The transistor budget was driven by
die-size considerations? That's a new one, and I'm skeptical, to put
it mildly.
The fact that
caused them to have to bring the silly crap to market was the
failure of Itanic, caused by, TA-DA, AMD64. P4 would never have seen the
light of day if Itanic hadn't taken a few well-placed (and self-inflicted)
icebergs.
You haven't made this connection. Intel had to find a way to blow AMD
away in the megahertz race, and the way they chose to do it was
NetBurst. That, for once, *was* a marketing decision, and a very bad
one.
You can disagree all you want. It's in the history books now. Physics
had *nothing* to do with this battle (AMD and Intel both are constrained
by the same physics, BTW).

Except that NetBurst is a dramatically different architecture that
runs into the teeth of the physics in a way that previous
architectures didn't. That's why the future is the Pentium-M branch,
and that's something _I_ was saying well before Prescott came out.
It was all marketeering arrogance. Intel
simply won the arrogance battle, and lost the architecture war.


Intel is a marketing-driven company. They are responsible for this mess.
It had *NOTHING* to do with circuits (yeesh). I'm quite sure (without
first-hand evidence) Intel's circuits are still superior to AMD's.
Intel's helm _was_ "frozen", though.
You can call it marketing arrogance all you want. If you make a plan
driven by marketing, you have to be able to execute it. Intel
couldn't execute. If you say "I'm going to do X," and X is technical,
I don't call that a marketing failure. If X could have been done but
wasn't, it's a technical failure. If X couldn't have been done and
that wasn't recognized but should have been, it's a management
failure.
"Contain" it in a casket, perhaps. Intel had no interest in having x86
survive. All the patents were either expired or were cross-licensed into
oblivion. Why do you think Intel and HP formed a separate company as a
holder of Itanic IP?
Whatever Intel's original hopes for Itanium were, they had to have
been almost constantly scaled back as it became more and more obvious
that they had undertaken a mission to Mars.
Dell is simply Intel's box marketing arm. No invention there. Who cares?

Keith, that's just ridiculous. _Where_ do you think the money in your
paycheck comes from?

RM
 

keith

I don't know how to cope with the way you use language. They changed
the instruction set. Period. Backward-compatible != unchanged.

Enhanced || added instructions != changed. Your use of language is very
(and I'm sure intentionally) misleading. "Changing the instruction set"
implies incompatibility.
In what way was hypertransport not a new memory interface for AMD?

Since it is *not* a memory interface, it's not a new one, now is it.
It's true, they introduced at 130nm, but at a time when movement to 90nm
was inevitable, where new cleverness would be required..."They had
to...go to a new process." If Intel had been able to move to 90nm
(successfully) with Prescott and AMD was stuck at 130 nm, it would have
been a different ball game.

They "had to go to a new process", whether Opteron came about or not.
Your argument is void. The world was going to 90nm and that had nothing
to do with Opteron or Prescott. It was time.
Another odd choice of language. Blame IBM? Who? For what? Why?

You're saying that IBM enabled Opteron, which is hokum.
IBM is in the business of making money? What else do you imagine I
think IBM is up to? Trying to put Intel out of business?

IBM is in business with AMD to make money for IBM, and AFAIK they do (as a
result of that alliance). You're the one who sees something sinister in
AMD. Everything there was obvious to anyone who has followed AMD for a
decade or so.
As to investment decisions, the only investment prospects I could see
for Intel or AMD were downside, and not in a way that was sure enough to
justify a short position, even were I in the habit of taking such
positions. I didn't like Intel's plans any better than I liked AMD's.

I don't do shorts either (all my non-real-estate investments are in
tax-deferred accounts, so no shorts allowed), but there was a *lot* there
to justify a long position on AMD several times.
You haven't shown in what way Intel's plans for Itanium affected the
success or failure of x86 offerings of AMD and Intel. Intel had the
money for the huge gamble it made on Itanium. It was not a bet the
company proposition.

Open your eyes, man! Intel attempted to choke x86. Intel *let* it be
enough of a gamble so AMD could take control of the ISA.
Later on, you say that Intel's circuits are better than AMD's. If they
didn't put enough money into Prescott circuit design (maybe because they
put it into IA-64), they got better circuits, anyway? Which is it,
Keith?

Perhaps they are, perhaps not. Intel has classically held the lead, but
it's not the point. Circuits don't define processors.

Besides botching the marketing, Intel botched the *micro-architecture*, or
more accurately the implementation of that micro-architecture, not the
circuits. Circuits have nothing to do with it.
Dramatic imagery is not an argument.

Imagery or not, that's exactly what happened. So far you haven't had
*any* argument, other than Intel == good, AMD == lucky that IBM happened
(and took pity on them).
Ignore Itanium. Intel had the money to gamble on Itanium and to advance
x86 technology.

Sure, but that was *not* their plan. They wanted to isolate x86, starve
it, and take that business private to Itanic. Bad plan.
They chose the wrong road for x86, which was to continue NetBurst.

Intel's preferred choice was *no* x86, but AMD didn't let that happen.
They answered with a poorly implemented, by all reports rushed, P4.
If anybody at Intel ever thought NetBurst was going to be more than
Megahertz hype (except for certain kinds of problems), they should have
been disabused of that notion before it was too late to abandon the
architecture at 90nm...

From what some insiders have said, Intel's architects *did* sound the
warning bells. No one listened, and if they did, they were told to dissent
and commit.
Or maybe not. Maybe somebody at Intel understood the architecture was
dead-ending on heat and there just wasn't time to move something else
into that space. Nothing in Intel's public pronouncements indicated
they understood the megahertz race was over, though.

Of course they knew. Their techies aren't stupid. Whether management
listened or not...
Intel had to curtail plans for Prescott and kill other Netburst projects
because of leakage--physics and heat--physics. They thought they could
beat those problems with process improvements--physics. They
couldn't--physics.

Oh, I see. You're moving the goal posts. I thought we were talking
about Opteron's position in the processor market and AMD's rise to the top
of x86.
Which generation are you talking about? The original P4 went on a
transistor reduction plan because of power consumption problems. That
Prescott would have had to go on a transistor budget for similar reasons
seems almost inevitable. The transistor budget was driven by die-size
considerations? That's a new one, and I'm skeptical, to put it mildly.

Cost = die size. It didn't fit the marketing plan. When the P4 came out
there was still some headroom for power (there still is, but no one wants
to go there). The shifter and fixed multiplier wouldn't have added all
that much more power.
You haven't made this connection. Intel had to find a way to blow AMD
away in the megahertz race, and the way they chose to do it was
NetBurst. That, for once, *was* a marketing decision, and a very bad
one.

Intel was betting on Itanic to conquer the world (they had a chance, but
executed miserably). P4 was an afterthought when it became clear that
Itanic was in deep yogurt and AMD was pulling away.
Except that NetBurst is a dramatically different architecture that runs
into the teeth of the physics in a way that previous architectures
didn't. That's why the future is the Pentium-M branch, and that's
something _I_ was saying well before Prescott came out.

I thought you said (above) the "physics problem" was leakage, not MHz.

You can call it marketing arrogance all you want. If you make a plan
driven by marketing, you have to be able to execute it. Intel couldn't
execute. If you say "I'm going to do X," and X is technical, I don't
call that a marketing failure. If X could have been done but wasn't,
it's a technical failure. If X couldn't have been done and that wasn't
recognized but should have been, it's a management failure.

The marketing failure was in Itanic. Ok, the fact they couldn't execute
may be called a technical failure, but it's marketing that sets the
schedule. Innovation usually doesn't follow M$ Planner.

Whatever Intel's original hopes for Itanium were, they had to have been
almost constantly scaled back as it became more and more obvious that
they had undertaken a mission to Mars.

...and panic sets in. "What have we got? Ah, P4! Ship it!"
Keith, that's just ridiculous. _Where_ do you think the money in your
paycheck comes from?

It's certainly not signed by Mike (or Andy). It doesn't come from
PeeCee wrench-monkeys either.
 

Yousuf Khan

Robert said:
Before you cause a joint dislocation patting yourself on the back,
consider this: at roughly the same time as AMD was retooling for its
next generation architecture, Intel was redoing Netburst. I would
have bet on Intel successfully rejiggering Netburst to get better
performance before I would have bet on AMD having the resources to
produce Opteron.

It didn't turn out that way. Intel's failure with Netburst is
probably a mixture of physics and poor execution, but, in the end,
physics won. If you want to claim that you understood those physics
well enough ahead of time to predict the failure of Netburst, you
shouldn't have any problem at all giving us a concise summary of what
it is you understood so well ahead of time. You might also want to
offer some insights into Intel's management of the design process.

Had Intel done what it expected to with Prescott and done it on
schedule, Opteron would have been in a very different position.

Not at all likely that Intel would've turned things around with
Prescott. First of all, nobody (except those within Intel) knew that they
were trying to increase the pipeline from 20 to 30 stages. So until that
point, all anybody knew about Prescott was that it was just a die-shrunk
Northwood, which itself was just a die-shrunk Willamette. Anyways,
longer pipeline or not, it was just continuing along the same
standard path -- faster MHz. There wasn't much of an architectural
improvement to it, unlike what AMD did with Opteron.
Your read of history is that 64-bit x86 and AMD won because of 64-bit
x86. My read of history is that IBM's process technology and AMD's
circuit designers won, and Intel's process technology and circuit
designers lost.

Not at all, AMD's 64-bit extensions had very little to do with it. I'd
say the bigger improvements were due to Hypertransport and memory
controller, and yes inclusion of SOI manufacturing techniques. In terms
of engineering, I'd rank the important developments as: 64-bit
(10%), Hypertransport (20%), process technology (20%), and memory
controller (50%). In terms of marketing, it was 100% 64-bit, which sort
of took the mantle of spokesman for all of the other technology also
included.
It's easy to understand why AMD took the long odds with Hammer. It
didn't really have much choice. Intel wanted to close off the 64-bit
market to x86, and it might well have succeeded.

I'm hearing, "I'd have gotten away with it too, if it weren't for you
meddling kids!" :)
As to "People were willing to wait..." who needed either, really?
Almost no one. This is all about positioning.

Not sure where you get that little piece of logic from. People weren't
waiting for just any old 64-bit processor, they could get those before.
They were looking for a 64-bit x86 processor. Itanium falls into the
category of "any old 64-bit processor", since it's definitely not x86
compatible.

Yousuf Khan
 

Yousuf Khan

Robert said:
I remember the exchanges very well, and I remember what the local AMD
chorus was saying. AMD took a gamble on very long odds, IMHO. The
fact that they were going to 64-bits didn't shorten those odds.

Sure, but if it was merely a move to 64-bit instructions, then AMD
would've had these processors out by 2001 or 2002, but it would've been
more of a K7.5 rather than a K8. They admitted that the addition of the
64-bit instructions only added 5% to the die area. They could've had an
Athlon 64 out three years ago. But we saw how little time it took Intel
to copy the 64-bit instructions, maybe just a year and a half. So I
think AMD decided to give their architecture a value proposition well
beyond just a 64-bit language upgrade. They piled on an additional two
years of development and came up with the Hypertransport and memory
controller.

Now we see that Intel, in the same amount of time, doubled the size of
its core just to add 64-bit instructions and ten additional pipeline
stages. And it's not expected to have its own version of Hypertransport
and memory controller till at least 2007. Even then the memory controller
won't be onboard; it'll stick to a separate memory controller model,
perhaps one memory controller chip per processor, but still distinctly
separate.
The AMD chorus here wanted: AMD win, x86 win, 64-bits. That, not any
realistic assessment of AMD actually succeeding, was what everybody
was betting on. Well done for AMD and IBM that they could make it
happen, but far from a safe bet.

There's not much reason to give IBM too much credit here; it only helped
out with one of the new technologies that were incorporated into AMD64,
namely the process technology.
I don't think Intel's plans for Itanium had much of an effect on the
success or failure of the x86 offerings of AMD and Intel. As Felger
pointed out, the money went into Prescott. For the return Intel got
on that investment, Intel might almost as well have put that money
into a big pile and burned it (yes, that's an overstatement). The
advice that Intel _should_ have followed would have been to have
canned NetBurst long before they did. Netburst, not Itanium, is the
marketing strategy that gave AMD the opening.

Well, it's true that AMD always wants you to compare it against Xeon
rather than Itanium. But Itanium was supposed to be the eventual
successor to x86 as we all know. Maybe not right away, but eventually,
and it's that strategy that is now in jeopardy.

Itanium sort of parallels a strategy Microsoft followed with its Windows
OSes. It created a series of legacy operating systems in the DOS ->
Windows 3.x -> Windows 9x/ME family of OSes, with the Windows NT ->
2000/XP/2003 family running in parallel until they were ready to take
over from the other family. Except Microsoft actually got it done, but
Intel won't.
I think it's safe to say that Intel didn't plan on spending all that
money for a redesign with such a marginal improvement in performance.
Somebody scoped out performance targets that couldn't be hit. Maybe
they hired a program manager from the DoD.

I can't disagree with the assessment about Prescott, but I don't think
it's quite as pivotal a problem as the one it faces with Itanium.
Intel certainly wanted to contain x86. That's something we can agree
on. Intel's major vendor, Dell, would have done just fine hustling
ia32 server hardware if the performance had been there. The
performance just wasn't there.

Thus the reason for AMD's gamble on more than just 64-bit for Opteron.
It's added dimensions of scalability to the x86 processor that never
existed before.

Yousuf Khan
 

keith

I have no idea where you get the idea I would be interested in
misleading you or anyone else. It's a really unattractive accusation.

That's the only conclusion I can come to. I cannot say you're stupid.
You can call it whatever you like, Keith. AMD changed the way its
processors communicate with the outside world.

"The outside world" <> memory. Perhaps now you see why I think you're
purposely misleading. Certainly you *know* this. Hypertransport is an
I/O interface primarily, though is used for memory in a UMA sort of system.

Every scale shrink requires work, but I think everybody understood that
90nm was going to be different.

I don't think that's true. Few really did "know" this until it was
over. *VERY* few saw that particular speed bump. After 130nm (a fairly
simple transition), everyone was cruising. Oops!

*Where* do you get the idea that I see anything sinister in AMD? That's
just bizarre. I don't particularly admire AMD, that's true, but I don't
think AMD sinister. I'm glad that IBM has been able to take care of
business, because I don't want to see IBM pushed out of
microelectronics.

Your posts speak for themselves. You seem to be distraught that Opteron
brought Itanic to its grave. ...when it was really Intel's senior
management that blew it (on both ends).

One thing Intel certainly did not want was for x86 to get those extra
named registers. If Prescott had done well enough without them, no
problem. But it didn't.

I wonder why? The fact is that they didn't want x86 to "grow up" in any
way. They wanted it dead. Oops, AMD had other ideas.

Clouds of billowing smoke.

There *is* a difference between "circuits", "micro-architecture", and
"process", you know. You really are fabricating your argument out of,
err, smoke. My bet is that you didn't inhale.
I don't see Intel as particularly good in all this. Their performance
has been exceptionally lame. The elevator isn't going all the way to
the top at Intel.

No, it's *trapped* at the top. They cannot see what's on the lower
floors. Intel == Itanic, except it sunk.
IBM took pity on AMD? Wherever did you get that idea?

Perhaps you want to read your posts again.
I did say here
that the money that changed hands ($45 million, if I recall correctly)
didn't sound like a great deal of money for a significant technology
play.

Neither you nor I know what money has traded places. I note that you don't
comment on the AMD bodies placed in IBM-EF as a joint venture. They
aren't exactly free either.

Unrealistic plan, to be sure, but, yes, that was their plan.

It *was* their plan. Who has 20-20 hindsight now?
They did something wrong, that's for sure. If they thought Itanium was
ready to take out of the oven... No, I don't believe that. Itanium was
in an enterprise processor division, or something like that. They had
to know that x86 had to live on the desktop for at least a while.

They did not. That is the point. They wanted x86 to be buried by right
about now. Itanic, uber alles!
Since I've been through this "Oh, you're moving the goalposts" gig of
yours before, I've left the conversation unsnipped 6 comments back and
I'll just invite you to review the exchange.

Nope. We *were* talking about Itanic and x86. *YOU* want to talk about
NetBurst/Opteron in a vacuum. Sorry, that's not the way the industry
works. Itanic (and thereby Intel's myopic marketing) is an important part
of this story.
Intel is between a rock and a hard place on x86, and Itanium does come
into play.

My, we're being generous (not).
Whatever they do, they can't afford to have x86 outshine
Itanium in the benchmarks. I'd believe that as a reason for a
deliberately stunted x86 before I'd believe anything else.

Oh, so they somehow *let* AMD kill 'em here? ...on purpose? Come on,
don't treat us as phools. I thought Intel was HQ'd in Santa Clara, not
Roswell.

Pentium-M manifestly runs at much lower power for equivalent
performance. What is there that you don't understand?

Your flip-flop on the technology "problem". First you say it's a
"leakage" problem, then say that P-M is better because it performs better
at lower frequency. Which is it?

I don't think it happened that way. They did botch the Prescott
redesign, and it may well have been deliberately hobbled, as well.

From all accounts (inside and out), it was. Prescott wasn't much more
than a tumor on an ugly wart. Remember, these wunnerful marketeers didn't
want to sell P-M into the desktop. Why? Perhaps they didn't want to be
seen as the idiots they *are*? That still doesn't explain (though I've
tried) the 64-bit thing, which *is* where we started here.
Maybe not, but somebody has to sell something to somebody so the
technical weenies can be paid.

Then that's an even dumber response than I'd have expected. Certainly
someone has to sell boxes, but what I said *is* still true. There is no
invention in Dell. It is no more than Intel's box-making plant. It's
interesting that they couldn't even make a profit on white ones, since
that's all they do.
 

Robert Myers

Enhanced || added instructions != changed. Your use of language is very
(and I'm sure intentionally) misleading. "Changing the instruction set"
implies incompatibility.
I have no idea where you get the idea I would be interested in
misleading you or anyone else. It's a really unattractive accusation.
Since it is *not* a memory interface, it's not a new one, now is it.
You can call it whatever you like, Keith. AMD changed the way its
processors communicate with the outside world.
They "had to go to a new process", whether Opteron came about or not.
Your argument is void. The world was going to 90nm and that had nothing
to do with Opteron or Prescott. It was time.
Every scale shrink requires work, but I think everybody understood
that 90nm was going to be different.
You're saying that IBM enabled Opteron, which is hokum.


IBM is in business with AMD to make money for IBM, and AFAIK they do (as a
result of that alliance). You're the one who sees something sinister in
AMD. Everything there was obvious to anyone who has followed AMD for a
decade or so.
*Where* do you get the idea that I see anything sinister in AMD?
That's just bizarre. I don't particularly admire AMD, that's true,
but I don't think AMD sinister. I'm glad that IBM has been able to
take care of business, because I don't want to see IBM pushed out of
microelectronics.

Open your eyes, man! Intel attempted to choke x86. Intel *let* it be
enough of a gamble so AMD could take control of the ISA.
One thing Intel certainly did not want was for x86 to get those extra
named registers. If Prescott had done well enough without them, no
problem. But it didn't.
Perhaps they are, perhaps not. Intel has classically held the lead, but
it's not the point. Circuits don't define processors.

Besides botching the marketing, Intel botched the *micro-architecture*, or
more accurately the implementation of that micro-architecture, not the
circuits. Circuits have nothing to do with it.

Clouds of billowing smoke.
Imagery or not, that's exactly what happened. So far you haven't had
*any* argument, other than Intel == good, AMD == lucky that IBM happened
(and took pity on them).
I don't see Intel as particularly good in all this. Their performance
has been exceptionally lame. The elevator isn't going all the way to
the top at Intel.

IBM took pity on AMD? Wherever did you get that idea? I did say here
that the money that changed hands ($45 million, if I recall correctly)
didn't sound like a great deal of money for a significant technology
play.
Sure, but that was *not* their plan. They wanted to isolate x86, starve
it, and take that business private to Itanic. Bad plan.
Unrealistic plan, to be sure, but, yes, that was their plan.
Intel's preferred choice was *no* x86, but AMD didn't let that happen.
They answered with a poorly implemented, by all reports rushed, P4.
They did something wrong, that's for sure. If they thought Itanium
was ready to take out of the oven... No, I don't believe that.
Itanium was in an enterprise processor division, or something like
that. They had to know that x86 had to live on the desktop for at
least a while.

Oh, I see. You're moving the goal posts. I thought we were talking
about Opteron's position in the processor market and AMD's rise to the top
of x86.
Since I've been through this "Oh, you're moving the goalposts" gig of
yours before, I've left the conversation unsnipped 6 comments back and
I'll just invite you to review the exchange.
Cost = die size. It didn't fit the marketing plan. When the P4 came out
there was still some headroom for power (there still is, but no one wants
to go there). The shifter and fixed multiplier wouldn't have added all
that much more power.
Intel is between a rock and a hard place on x86, and Itanium does come
into play. Whatever they do, they can't afford to have x86 outshine
Itanium in the benchmarks. I'd believe that as a reason for a
deliberately stunted x86 before I'd believe anything else.

I thought you said (above) the "physics problem" was leakage, not MHz.

Pentium-M manifestly runs at much lower power for equivalent
performance. What is there that you don't understand?
The marketing failure was in Itanic. Ok, the fact they couldn't execute
may be called a technical failure, but it's marketing that sets the
schedule. Innovation usually doesn't follow M$ Planner.



...and panic sets in. "What have we got? Ah, P4! Ship it!"
I don't think it happened that way. They did botch the Prescott
redesign, and it may well have been deliberately hobbled, as well.
It's certainly not signed by Mike (or Andy). It doesn't come from
PeeCee wrench-monkeys either.

Maybe not, but somebody has to sell something to somebody so the
technical weenies can be paid.

RM
 

Yousuf Khan

Robert said:
I don't know how to cope with the way you use language. They changed
the instruction set. Period. Backward-compatible != unchanged.

Tidied it up a bit, here and there. Added some extra registers. But why
is that such an important point? It's the least they could be expected
to do, considering the major _capability_ improvement they are making to
this architecture. The same opcodes that worked in 8, 16, or 32-bit are
still the same in 64-bit.
In what way was hypertransport not a new memory interface for AMD?

Well, it's not a memory interface; at least not exclusively for memory,
it's a generic I/O interface. The onboard memory controller is a
separate subsystem from Hypertransport. Hypertransport may be used
to access memory, especially in multiprocessor Opteron systems, where the
memory addressing duties are split up amongst several processors; each
processor feeds neighbouring processors with data from its own local
pool of memory as requested.
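
To make that local-versus-remote split concrete, here is a hypothetical
sketch (an editorial illustration, not AMD code or anything from this
thread; it assumes a Linux system with the libnuma library installed,
linked with -lnuma) of how software can ask for memory from a specific
processor's local pool on a system like that:

    /* Hypothetical sketch: explicit memory placement on a multi-socket
     * Opteron-style NUMA box, using the Linux libnuma library (an
     * assumption; link with -lnuma).  A CPU reaches memory on its own
     * node through the on-die controller; memory on another node is
     * reached over a Hypertransport hop, exactly the split described
     * above. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <numa.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return EXIT_FAILURE;
        }

        size_t len = 1024 * 1024;
        int far_node = numa_max_node();

        void *local  = numa_alloc_onnode(len, 0);         /* node 0's pool  */
        void *remote = numa_alloc_onnode(len, far_node);  /* a remote pool  */

        if (local && remote)
            printf("allocated on node 0 and node %d\n", far_node);

        if (local)  numa_free(local, len);
        if (remote) numa_free(remote, len);
        return EXIT_SUCCESS;
    }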
It's true, they introduced at 130nm, but at a time when movement to
90nm was inevitable, where new cleverness would be required..."They
had to...go to a new process." If Intel had been able to move to 90nm
(successfully) with Prescott and AMD was stuck at 130 nm, it would
have been a different ball game.

They had already made a successful transition to 130nm with the K7
Athlon XPs, the Bartons and Thoroughbreds. However, that was a 130nm
without SOI. With the K8's, they used a different 130nm process with
SOI. They then transitioned the 130nm SOI to 90nm SOI.
Another odd choice of language. Blame IBM? Who? For what? Why?

In this case, Keith is just being sarcastic. He's saying "blame" when he
really means "give credit to".
As to investment decisions, the only investment prospects I could see
for Intel or AMD were downside, and not in a way that was sure enough
to justify a short position, even were I in the habit of taking such
positions. I didn't like Intel's plans any better than I liked AMD's.

Another thing: Intel's stock is just as heavily manipulated as AMD's. It's
a fool's game to try to short either one. There are some very powerful
interests who manipulate their prices up and down. Neither stock seems to
be heavily affected by its own news. Instead, things like global
markets, oil prices, inflation rates, and interest rates affect their
prices more often. Occasionally, you'll notice one go up while the other
goes down -- that's just the powers-that-be having fun with each other.
You haven't shown in what way Intel's plans for Itanium affected the
success or failure of x86 offerings of AMD and Intel. Intel had the
money for the huge gamble it made on Itanium. It was not a bet the
company proposition.

Well, that's going to be a little difficult now, considering Itanium is
no longer considered a competitive threat.

Yousuf Khan
 

Yousuf Khan

Robert said:
It would have shipped, but to whom? AMD would have another also-ran
processor and Intel would be moving to displace as much of x86 as it
could with Itanium, rather than emphasizing that the competition for
Itanium is really Power.

AMD already had a non-SOI 130nm process running for their K7s. They
would've just shipped Opteron on that process. They may not have been
able to crank Opteron all of the way up to 2.4GHz at 130nm as they did
with SOI, but they might've been able to go 2.2GHz (like the K7s had
already reached). Also, I think if they weren't in a hurry to get to
90nm, they probably would've been able to crank the 130nm SOI up to 2.6
or even 2.8GHz.

Their next step would've been to get to 90nm, but without SOI they
would've run into the exact same issue that Intel did, albeit in a much
less severe form, because they weren't pushing up to 4.0GHz. We would've
been seeing non-SOI Opterons at around 100W, instead of the 65W that we
see them at now with SOI.

No doubt about it, AMD needed to get to SOI eventually, and without
IBM's help they may have gotten there by about the middle of this
year, rather than at this time two years ago (when the first Opteron
shipped). From what I've heard about SOI, this technology has been
around a long time; it's just recently that it's started to find its way
into mass-market ICs -- until now it was used mainly in electronics for
airplanes, because of the extra radiation they are exposed to up there.

Yousuf Khan
 

Robert Myers

Yousuf Khan wrote:


Not at all likely that Intel would've turned things around with
Prescott. First of all, nobody (except those within Intel) knew that they
were trying to increase the pipeline from 20 to 30 stages. So until that
point, all anybody knew about Prescott was that it was just a die-shrunk
Northwood, which itself was just a die-shrunk Willamette. Anyways,
longer pipeline or not, it was just continuing along the same
standard path -- faster MHz. There wasn't much of an architectural
improvement to it, unlike what AMD did with Opteron.
We did know it was a complete die redesign. Bigger cache, longer
pipeline, different latencies. In the end, big investment,
disappointing results. Maybe my spin-o-meter is malfunctioning, but I
got the sense that Intel management was just as mystified as everyone
else. Northwood, if you remember, was about a 10% improvement on
Willamette at the same clock (bigger cache, if nothing else).
Not at all, AMD's 64-bit extensions had very little to do with it. I'd
say the bigger improvements were due to Hypertransport and memory
controller, and yes inclusion of SOI manufacturing techniques. In terms
of engineering, I'd rank the important developments as: 64-bit
(10%), Hypertransport (20%), process technology (20%), and memory
controller (50%). In terms of marketing, it was 100% 64-bit, which sort
of took the mantle of spokesman for all of the other technology also
included.
The only thing out of the 64-bit that really mattered for most users
was more named registers.
I'm hearing, "I'd have gotten away with it too, if it weren't for you
meddling kids!" :)


Not sure where you get that little piece of logic from. People weren't
waiting for just any old 64-bit processor, they could get those before.
They were looking for a 64-bit x86 processor. Itanium falls into the
category of "any old 64-bit processor", since it's definitely not x86
compatible.
Almost no one really needed a processor with 64-bit pointers. If you
don't really need them, and most people don't, you might go to the
trouble to compile so you use 32-bit pointers, anyway. The increased
number of named registers is what most users are really ready to use
(with appropriate compiler support). That's the part Intel really
didn't like, and they wouldn't have had to swallow it if they could
have delivered the performance without them. They couldn't.
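
To make the register argument concrete, here is a hypothetical sketch
(the function and its name are invented for illustration, not taken from
the thread). IA-32 exposes only 8 general-purpose registers, several of
them already spoken for, while AMD64 adds r8 through r15 for 16 total; a
loop with this many live values tends to spill to the stack in a 32-bit
compile but can stay entirely in registers after a 64-bit recompile of
the same source:

    /* Hypothetical illustration of register pressure.  Four running
     * sums, two pointers, and a counter are live at once -- more than
     * IA-32's 8 general-purpose registers comfortably hold, but well
     * within AMD64's 16. */
    #include <stddef.h>

    long dot4(const long *a, const long *b, size_t n)
    {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        for (size_t i = 0; i + 4 <= n; i += 4) {
            s0 += a[i]     * b[i];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        return s0 + s1 + s2 + s3;
    }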

RM
 

Yousuf Khan

George said:
Isn't that enough? I didn't think you'd be the one to need convincing
about x86-64 as a necessary component of future PCs. Beyond that I'm not
sure but I'm pretty sure that AMD has some other patents which might be of
interest, e.g. large L1 cache efficiency.

"Large L1 cache efficiency" requires a patent?

Yousuf Khan
 

Yousuf Khan

Yeesh! You guys oughta really crop the quotes down to two levels at most.
You can call it whatever you like, Keith. AMD changed the way its
processors communicate with the outside world.

The majority of its memory accesses are done through its memory
controller, not through Hypertransport. Regardless, what is the point
you're trying to make here about AMD changing the way it communicates
with the outside world? What difference does it make how AMD does its
i/o, it still works the same way it's always worked -- the software
can't tell the difference.

Also, it was already doing things differently from Intel as of the K7
Athlons, when it was using the EV6 bus while Intel was using its own
bus. Software couldn't tell the difference back then either. Now it's
changed over from the EV6 to Hypertransport -- and software still
remains blissfully unaware.
One thing Intel certainly did not want was for x86 to get those extra
named registers. If Prescott had done well enough without them, no
problem. But it didn't.

Where does this insight into Intel's inner mind come from? I've never
heard Intel disparage extra registers. They've certainly disparaged the
whole x86-64 concept before (boy, have they!), but nothing specifically
about extra registers.
I don't see Intel as particularly good in all this. Their performance
has been exceptionally lame. The elevator isn't going all the way to
the top at Intel.

Or perhaps it's going too often to the top, but not spending enough time
at the ground floors?
Unrealistic plan, to be sure, but, yes, that was their plan.

Which is the point we're trying to make about why we had a feeling that
Intel's Itanium was not going to take off, while AMD's Opteron had a
great chance to take off.
They did something wrong, that's for sure. If they thought Itanium
was ready to take out of the oven... No, I don't believe that.

I don't think Itanium was ever ready to take out of the oven.
Itanium was in an enterprise processor division, or something like
that. They had to know that x86 had to live on the desktop for at
least a while.

They knew that x86 had to live on the desktop for a while; that's why
they were pursuing the same dual-path strategy of legacy and
new technology that Microsoft employed so well in the transition from
the DOS-family OSes to the NT-family OSes. NT was initially too advanced
for most home users and even most businesses. But they kept grooming it
until it became easy enough to use even in home settings.

Intel would've pursued a similar strategy with x86 (legacy) and IA64
(new-tech).
Intel is between a rock and a hard place on x86, and itanium does come
into play. Whatever they do, they can't afford to have x86 outshine
intanium in the benchmarks. I'd belief that as reason for a
deliberately stunted x86 before I'd believe anything else.

Benchmarks are the least of their worries. x86 has such a huge installed
base that it is basically immune to benchmarks. The only benchmarks
that matter are the ones comparing one x86 to another. Itanium quite
obviously could not be put into any x86 benchmark. If interprocessor
benchmarks mattered, then x86 would've been gone a long time ago. My
feeling is that if any architecture is able to emulate x86 at full
speed, then that architecture has a chance to take over from x86.

Yousuf Khan
 

Yousuf Khan

Robert said:
We did know it was a complete die redesign. Bigger cache, longer
pipeline, different latencies. In the end, big investment,
disappointing results. Maybe my spin-o-meter is malfunctioning, but I
got the sense that Intel management was just as mystified as everyone
else. Northwood, if you remember, was about a 10% improvement on
Willamette at the same clock (bigger cache, if nothing else).

When did we first know it was a complete redesign? I think we only found
out about the longer pipeline about a month before Prescott's release. We
all assumed a bigger cache size; that's done during most die-shrinks
anyways. When we found out about the extensive pipeline redesign, we all
knew it was a major job that they had done on it, and many of us
expressed genuine surprise about it. Northwood had actually been doing a
credible job keeping things competitive with AMD, and we were just
expecting Northwood II with Prescott.
The only thing out of the 64-bit that really mattered for most users
was more named registers.

I don't know if that even matters to most users -- even to programming
users. The only thing that's going to matter to most users is when they
start seeing games start using the additional features.
Almost no one really needed a processor with 64-bit pointers. If you
don't really need them, and most people don't, you might go to the
trouble to compile so you use 32-bit pointers, anyway. The increased
number of named registers is what most users are really ready to use
(with appropriate compiler support). That's the part Intel really
didn't like, and they wouldn't have had to swallow it if they could
have delivered the performance without them. They couldn't.

Not sure why it matters to you whether people needed extra registers now
or extra memory addressability now. It's not as if people have time to
go do major overhauls of architectures every day; they might as well have
added as much as they could fit in. Most of it is going to be needed
sooner or later anyways.

Yousuf Khan
 

George Macdonald

On Sat, 19 Mar 2005 08:54:58 -0500, George Macdonald wrote:

<snip>


Even so, I'll stick with my basic position, which is that people who
work on Wall Street are not stupid (even if they used their fraternity
brothers' problem sets to get through calculus), and while they may
not have much of a grasp of carrier mobility or strained silicon,
they can probably keep straight litigation, regulatory action, and
predatory marketing practices.

Not stupid... in their domain, but they insist on trying to play the tech
expert when some of the quotes from them are so transparently ignorant.

Have any favorites you'd like to pass along? I don't pay any
attention at all when such folks talk technical. Or rather, I tend to
listen for the marketing buzz they're trying to generate, realizing
full well that it has nothing to do with science or engineering.

My recent favorite gaffe would be the idiot at Credit Suisse First Boston
who downgraded AMD a couple of days before the stock jumped $5 or so. For
others, any of the guff they dish out at any IDF time fits the bill nicely.
Here's one:

"Multi-core processing will use up the growing transistor count
driven by Moore’s Law and allow Intel to maintain its ASPs as it integrates
additional capability such as communications, security and multimedia that
leverage its size advantage to reach for incremental growth above PC units,
and all at costs more competitive than what AMD can offer due to greater 65
nanometer/300mm capability," Goldman Sachs said in its report after the
first day of the conference.

I meant to give the URL for above quote:
http://www.reed-electronics.com/electronicnews/article/CA508052?spacedesc=latestNews
ROTFPIMP! Intel's announced "multi-core" processor is an MCM, for
crying out loud! What a mess. They didn't learn their lesson from the P6?

From what I read, Intel's going to do both: a "dual core" die (mirrored
pair ?) and a "twin core" "MCM", though it's modest in terms of MCM if I'm
not mistaken. Maybe they inherited someone from the Digital Equip.
DECStation 5K days?:)
I'm not convinced that AMD's transition to 90nm was so smooth either, but...

Yeah there was a time where it looked like they had a little stumble but
the new core does what it's supposed to in terms of power/performance
balance. Their stumble was certainly a lot less public than Intel's.
 

George Macdonald

"Large L1 cache efficiency" requires a patent?

When the way size is greater than the page size, you get some extra latency
because the TAG look-up can't be completed till the TLB result is available
- Intel has always kept their way size at or below the page size. I'm
pretty sure I came across a Sun patent on this a couple of years ago - no
idea whether AMD has one of their own.
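
Spelling out that arithmetic with a worked example (the cache figures
are the commonly published ones, quoted here from memory, so treat them
as approximate):

    /* Worked example of the way-size arithmetic above:
     * way size = cache size / associativity. */
    #include <stdio.h>

    static unsigned way_size(unsigned cache_bytes, unsigned ways)
    {
        return cache_bytes / ways;
    }

    int main(void)
    {
        const unsigned page = 4096;               /* x86 4KB page */
        unsigned k8 = way_size(64 * 1024, 2);     /* AMD K8 L1D: 64KB, 2-way */
        unsigned p6 = way_size(16 * 1024, 4);     /* Intel P6 L1D: 16KB, 4-way */

        /* When the way size exceeds the page size, some cache index bits
         * lie in the translated part of the address, so the tag lookup
         * must wait for the TLB result -- the extra latency described
         * above. */
        printf("K8 way size %u bytes: %s the page size\n", k8,
               k8 > page ? "exceeds" : "is at or below");
        printf("P6 way size %u bytes: %s the page size\n", p6,
               p6 > page ? "exceeds" : "is at or below");
        return 0;
    }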
 

Robert Myers

Yousuf Khan wrote:

Where does this insight into Intel's inner mind come from? I've never
heard Intel disparage extra registers. They've certainly disparaged the
whole x86-64 concept before (boy, have they!), but nothing specifically
about extra registers.

If it's an original insight, I'll be happy to take credit for it, but
I doubt that it's original.

The lack of named registers is a significant architectural deficiency
for x86. From the point of view of current needs, it's a much more
serious deficiency than 32-bit pointers. If Intel hadn't wanted to
end-of-life x86, the problem would have been addressed long ago.
Don't go searching around Intel's web site to find that in a press
release.

RM
 

Robert Myers

When did we first know it was a complete redesign?

I don't know when you knew, but Felger and I had an exchange here that
made me go look for die photos six months before Prescott was out,
during Fall IDF, probably. From the die photos themselves and from
annotations on the die photos, some from Intel, it was clear that
Intel was doing a major redesign. I discussed those photos and linked
to them in this forum.

It seemed clear to me that Intel wanted the world to expect big
things, and it also seemed clear that the message was: "Pay no
attention to Opteron. Wait 'til you see Prescott." Not only was
Intel doing a major redesign, it was advertising a major redesign.
The world waited, and it saw.

I don't know if that even matters to most users -- even to programming
users. The only thing that's going to matter to most users is when they
start seeing games start using the additional features.
All you have to do in many cases is to recompile with a compiler that
can exploit the extra registers.
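
For instance (assuming gcc's standard -m32/-m64 flags; this example is
an editorial illustration, not from the thread), rebuilding identical
source with "gcc -O2 -m64" instead of "gcc -O2 -m32" lets the register
allocator use AMD64's extra r8 through r15, and a register-starved loop
like the dot4() sketch earlier in the thread simply stops spilling to
the stack.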

RM
 

Yousuf Khan

Robert said:
I don't know when you knew, but Felger and I had an exchange here that
made me go look for die photos six months before Prescott was out,
during Fall IDF, probably. From the die photos themselves and from
annotations on the die photos, some from Intel, it was clear that
Intel was doing a major redesign. I discussed those photos and linked
to them in this forum.

It seemed clear to me that Intel wanted the world to expect big
things, and it also seemed clear that the message was: "Pay no
attention to Opteron. Wait 'til you see Prescott." Not only was
Intel doing a major redesign, it was advertising a major redesign.
The world waited, and it saw.


Yeah, well, I remember those photos too, but most of that discussion was
centered around whether Prescott was going to have 64-bit or not. Nobody
really guessed about the additional pipeline stages until much later.

I think if we see even a single additional pipeline stage in a
processor, you can consider it a major redesign, because each of those
pipeline stages usually requires its own section of circuitry on the chip.
All you have to do in many cases is to recompile with a compiler that
can exploit the extra registers.

That's the Linux crowd. The typical Windows user wouldn't know what to
do with a compiler.

Yousuf Khan
 

Yousuf Khan

George said:
From what I read, Intel's going to do both: a "dual core" die (mirrored
pair ?) and a "twin core" "MCM", though it's modest in terms of MCM if I'm
not mistaken. Maybe they inherited someone from the Digital Equip.
DECStation 5K days?:)

Mirrored pair sounds to me like they just flip the silkscreen upside
down and have it expose the next die in the opposite direction. Is this
possible? Do the transistors photograph properly onto the wafer whether
the mask is right-side up or upside down?
Yeah there was a time where it looked like they had a little stumble but
the new core does what it's supposed to in terms of power/performance
balance. Their stumble was certainly a lot less public than Intel's.

They're starting to bring out the 2.6GHz parts now, and it's likely that
they'll get it all of the way up to 3.0GHz or more by the end of the 90nm
process.

Yousuf Khan
 

Yousuf Khan

George said:
When the way size is greater than the page size, you get some extra latency
because the TAG look-up can't be completed till the TLB result is available
- Intel has always kept their way size at or below the page size. I'm
pretty sure I came across a Sun patent on this a couple of years ago - no
idea whether AMD has one of their own.

When the way size is greater than the page size? The page size is
measured in KB, while the way size is measured in "ways". How do you
compare? Oh, you mean cache size divided by the number of ways, compared
against the page size.

Yousuf Khan
 
