Intel COO signals willingness to go with AMD64!!


George Macdonald

Easy for you to say. If anyone accurately foresaw the importance of
OoO and just exactly _why_ it would be so important _before_ it was
introduced into common usage, I should be very much indebted to anyone
who can direct me to an appropriate link (who knows, maybe such a link
exists). Run-time scheduling may or may not prove in the long run to
play the critical role that it currently does, so I'm not going to
make any emphatic statements that purport to be true for all time.
People have tried every conceivable scheme for scheduling, and right
now, on-die runtime scheduling appears to be the winner.

I think it's pretty clear that OoO was recognized as important even before
Intel went off on its flight of fancy. The weight of evidence has only
increased since the early days of Merced, to the point that it boggles the
mind that they have so stubbornly persisted in this folly. I don't think
any "links" are required here. Besides, some of them might expose the
person passing, umm, judgement to some kind of career-threatening
retaliatory discipline.

Intel designed IA64 so that it would be very hard to clone. That, and
making sure that it could not be construed as subject to any of their
cross-licensing agreements, not performance, was their primary design
goal. As it stands _at_the_moment_, Intel seems to have succeeded
beyond its wildest expectations in those respects. It also happens to
have produced a world-beating processor for certain applications. It
can't be cloned, it isn't subject to cross-licensing agreements, and
it can be virtualized.

The "certain applications" are usually classified as "embarrassingly
suitable".:) As far as the other points, having supplied the world with
merchant processors, which fit within the infrastructure of an open system
architecture, for >20 years, the question is: can Intel finally succeed in
its desire to drag that same open system world down its privately defined
proprietary path?
What makes you think you're so smart?

I think it's because he designs hardware for a living. His views seem to
be shared by many of the other people who do and whose views on hardware
design are respected here and in other fora.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 

George Macdonald

I look at X86-64 in somewhat the same vein as Kidder's "The Soul of a
New Machine," where they managed to clean some dirt out of an old
architecture as they designed a superset. There's dirt that just can't
be taken out, but improvements can be made. (and were)

Yes there are some striking similarities here. I worked with both the
16-bit and 32-bit Data General systems at a fairly detailed level - quite a
bit of assembly coding - and the lack of a "mode bit" is one of the
technical similarities. As I recall, having observed the difficulties on
the PDP/VAX "switching", that was an edict issued by Edson deCastro at the
start of the MV/8000 project: "there will be no mode bit!" There *does*
seem to have been more err, commitment on the part of AMD's management
though than was accorded the D.G. guys... working in a basement dungeon...
having to scrounge around for analyzers... the first systems were built out
of PALs.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 

Robert Myers

I think it's pretty clear that OoO was recognized as important even before
Intel went off on its flight of fancy. The weight of evidence has only
increased since the early days of Merced, to the point that it boggles the
mind that they have so stubbornly persisted in this folly. I don't think
any "links" are required here. Besides, some of them might expose the
person passing, umm, judgement to some kind of career-threatening
retaliatory discipline.

Oh, come on. Let's see who said what when. Like the guy at the
McCarthy hearings waving a stack of meaningless paper. I have right
here a list! Yes, many people thought Intel was off on the wrong
track, but not for the reason that has proven to be critically
problematical. The view in the rear-view mirror is just great. If
the forward view, in the specific way that I mentioned, was so
obvious, it should take no effort whatsoever to find someone who said
it ahead of time, not after the fact.
The "certain applications" are usually classified as "embarrassingly
suitable".:) As far as the other points, having supplied the world with
merchant processors, which fit within the infrastructure of an open system
architecture, for >20 years, the question is: can Intel finally succeed in
its desire to drag that same open system world down its privately defined
proprietary path?

Intel plainly will not succeed at its plan of world domination.
Everyone can relax. We have a vital and competitive industry.
Keeping a vital industry, though, includes moving on from x86.
I think it's because he designs hardware for a living. His views seem to
be shared by many of the other people who do and whose views on hardware
design are respected here and in other fora.

Ah yes, the resume argument. Very persuasive. Does his resume
include running a very profitable company that has been dogged by
knock-offs?

No, Itanium is not an engineer's chip. It's turned out to have been
a dangerous gamble, and it _may_ turn out to have been a losing one.
It's been wildly unpopular with engineers, many of whom can see better
ways to build chips with less R&D. It being such a horrific chip, and
it being probably easy to design an equivalent or better chip with far
less money leaves the door open for something other than another
me-too chip from AMD. Good thing, too, because that leaves Sun a
whiff of a prayer of reentering a market it should be dominating
instead of on the verge of being shoved out of. Yeah, it would have
been a piece of cake, all right.

RM
 

George Macdonald

Oh, come on. Let's see who said what when. Like the guy at the
McCarthy hearings waving a stack of meaningless paper. I have right
here a list! Yes, many people thought Intel was off on the wrong
track, but not for the reason that has proven to be critically
problematical. The view in the rear-view mirror is just great. If
the forward view, in the specific way that I mentioned, was so
obvious, it should take no effort whatsoever to find someone who said
it ahead of time, not after the fact.

A pointer to a thread is tooo direct, I'm afraid. A search at Google News
on EPIC and VLIW and other appropriate keys will turn something up, back in
the 2K+/-1 timeframe I'm sure... assuming you know who the names really are
and where they played then and play now.

Intel plainly will not succeed at its plan of world domination.
Everyone can relax. We have a vital and competitive industry.
Keeping a vital industry, though, includes moving on from x86.

I *hope* you will allow that x86-64 is moving on for the current desktop
and even workstation. Certainly 14 instead of 6 visible registers to diddle
with and losing the FPU stack is a lot to me. IBM has done rather well for
the last 35+ years with such an arrangement in the mainframe space.

Ah yes, the resume argument. Very persuasive. Does his resume
include running a very profitable company that has been dogged by
knock-offs?

No, Itanium is not an engineer's chip. It's turned out to have been
a dangerous gamble, and it _may_ turn out to have been a losing one.
It's been wildly unpopular with engineers, many of whom can see better
ways to build chips with less R&D. It being such a horrific chip, and
it being probably easy to design an equivalent or better chip with far
less money leaves the door open for something other than another
me-too chip from AMD. Good thing, too, because that leaves Sun a
whiff of a prayer of reentering a market it should be dominating
instead of on the verge of being shoved out of. Yeah, it would have
been a piece of cake, all right.

Personally I've always thought Sun's ISA was not that good - gave RISC a bad
name really. As for Itanium, the engineer arguments may hint at waste of
real estate for the result, but I was thinking more of the derision poured
on VLIW. I know enough about compiling to say that I don't see static
scheduling as a worthwhile solution, and the (re)training run and feedback
compilation is just not generally practicable for several reasons. When I
hear hardware experts adding their weight against it, it's a pretty strong
reinforcement... from my POV.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 

Robert Myers

A pointer to a thread is tooo direct, I'm afraid. A search at Google News
on EPIC and VLIW and other appropriate keys will turn something up, back in
the 2K+/-1 timeframe I'm sure... assuming you know who the names really are
and where they played then and play now.
I will do the historical search to educate myself, but I won't be at
all surprised to find that, by then, people were saying that the
Itanium scheduling strategy was a disaster. I can even guess at a name
that will show up. By then, though, the value of OoO had already been
established in actual practice, the memory wall was an issue around
which conferences had already been organized and held, and there were
at least some people who understood why OoO was *so* important, but
most important, the ship and who knows how many hundred million dollars
had already left the dock starting many years before.

From discussions I've already had online, I know that there were by
the 2K time frame serious disagreements within Intel about the future
of static scheduling, but I'm reasonably certain that Intel has been
investigating strategies specifically aimed at that problem starting
from just about then.

All of the evidence available to me is that Intel was damned and
determined to impose IA-64 on the world, that the people with the
power to say yes or no felt that they had the money to do it one way
or another, and that even what many regard as an impending event
(64-bit x86) that will spell certain doom for IA-64 has done little or
nothing to change their minds.

I *hope* you will allow that x86-64 is moving on for the current desktop
and even workstation. Certainly 14 instead of 6 visible registers to diddle
with and losing the FPU stack is a lot to me. IBM has done rather well for
the last 35+ years with such an arrangement in the mainframe space.

Glass half full, glass half empty.

Glass half full: increasing the number of named registers is *such* an
obvious thing to do. FPU stack not so important to me because x87
arithmetic not so important to me, but I can see that many will regard
that as progress. Big flat memory space a nice extra and a real
relief for people who actually have to code above 4GB.

Glass half empty: can't virtualize, therefore can't build a true
sandbox. Stuck with emulators like Wine that will always be glitchy.
Sandboxes are really nice for people who are doing serious work and
should probably be mandatory as a security measure for net
applications.
Personally I've always thought Sun's ISA was not that good - gave RISC a bad
name really. As for Itanium, the engineer arguments may hint at waste of
real estate for the result, but I was thinking more of the derision poured
on VLIW. I know enough about compiling to say that I don't see static
scheduling as a worthwhile solution, and the (re)training run and feedback
compilation is just not generally practicable for several reasons. When I
hear hardware experts adding their weight against it, it's a pretty strong
reinforcement... from my POV.

VLIW has been at the very least controversial for a long, long time.
The point I was trying to drive home is that Intel's motivations in
choosing the design strategy are not and never have been driven
primarily by the considerations that would first (or perhaps ever)
come to the mind of an engineer.

If somebody can come up with a nice RISC chip that will keep us from
returning to a world owned by IBM on the high end, I'm in favor of it.
SPARC via Sun and Fujitsu are the only other games left in town.
People who talk down Itanium while counting on x86-64 as a competitor
to IBM's domination of high-end computing just aren't thinking clearly.

RM
 

Dale Pontius

Easy for you to say. If anyone accurately foresaw the importance of
OoO and just exactly _why_ it would be so important _before_ it was
introduced into common usage, I should be very much indebted to anyone
who can direct me to an appropriate link (who knows, maybe such a link
exists). Run-time scheduling may or may not prove in the long run to
play the critical role that it currently does, so I'm not going to
make any emphatic statements that purport to be true for all time.
People have tried every conceivable scheme for scheduling, and right
now, on-die runtime scheduling appears to be the winner.
I don't claim to have forecast OoO, or anything about it. To be
perfectly honest, I don't know exactly when OoO hit the mainstream. I
seem to remember seeing an overview of the PentiumPro architecture that
had what I now think of as OoO structures, but I'm honestly not sure.

But OoO came to maturity while IA64 was in development. You don't have
to have forecast the future; when you see it happening, you have to
review your current plans.

I've heard of an even bigger similar project than IA64 getting stopped
in its tracks when reality smacked it in the nose. Well, one such
bigger project happened before my time, and one slightly smaller during
my tenure. Sometimes reality makes you change your plans.

On a slightly different, but still related vein... A friend once
brought back one key piece of wisdom from a conference, "Software
is hard, and hardware is easy."
IA64 is saddled with an instruction set that makes OoO very hard, but
not impossible. OoO also makes nonsense of the premise of the IA64
ISA, which is that all the scheduling was to be preprogrammed, using
predicated instructions to make whatever run-time adjustments were
necessary. The scheme works _much_ better than people give it credit
for. The problem is that you only need a cache miss rate of less than
one percent to produce a factor of two slowdown in code execution
given the current mismatch between processor speed and memory latency.
Only the slightest miscalculation can bring you to ruin, and that's
why IA64 needs such a gigantic cache to perform decently.
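
To put rough numbers on that claim, here's a few lines of Python (the
base CPI and miss penalty are illustrative assumptions, not measurements
of any particular part):

    # Back-of-the-envelope effect of cache misses on execution time.
    # Assumed figures: a core that would otherwise retire one
    # instruction per cycle, and ~200 cycles lost per miss to memory.
    base_cpi = 1.0        # cycles per instruction with no misses
    miss_penalty = 200.0  # cycles stalled per miss to main memory
    miss_rate = 0.005     # misses per instruction (half a percent)

    effective_cpi = base_cpi + miss_rate * miss_penalty
    print(effective_cpi / base_cpi)   # 2.0 -- throughput cut in half
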
One further comment about compiler scheduling of instruction flow...
Isn't this one of the lessons of MIPS - that you'd have poor portability
of binaries from one generation to the next? We've really only seen
two generations of IA64, Merced and McKinley. All else has been shrinks
and cache enhancements. They've had to flog the compilers SOOOO hard to
get the levels of performance so far attained. What happens when the
next real IA64 architecture rev comes along? I truly doubt it will get
its best performance from even the best McKinley compiler, and what
will happen when code from the McKinley+1 compiler is used on McKinley?
Plus, how long will it take to reach the good McKinley+1 compiler? Will
it bring back the days of fat binaries?
Intel designed IA64 so that it would be very hard to clone. That, and
making sure that it could not be construed as subject to any of their
cross-licensing agreements, not performance, was their primary design
goal. As it stands _at_the_moment_, Intel seems to have succeeded
beyond its wildest expectations in those respects. It also happens to
have produced a world-beating processor for certain applications. It
can't be cloned, it isn't subject to cross-licensing agreements, and
it can be virtualized.
Here you hit the nail on the head.

You have to ask what problems Intel was trying to solve with IA64.
They're obviously *always* after performance. But in this case, IMHO
they were after clone-relief, too.

I may not be in the CPU business, but I've spent a lot of years in a
big company - a company once renowned for being self-absorbed. Maybe
I'm only starting to learn about caching issues in ccNUMA, but I've
seen internal corporate politics before, and I think I can recognize
the signs. IA64 reeks of it.

Comparison: Sometimes you get execs who want a magic bullet to solve
their problem. Sometimes you get someone pushing what should be an
academic solution, but they make a convincing claim that they have
the magic bullet. I've seen several projects of this sort started,
and at least two within my tenure go to hardware and conferences; I
even worked on one of them. (The other predates the web.) Again, I see
similarities in IA64.
What makes you think you're so smart?
Squat. George gives me too much credit. I'm a DRAM designer of many
years, though most of the time, including now, I can't or shouldn't
comment on what I'm really doing. (I don't even feel good about
telling any Rambus stories, and that was YEARS and projects ago.) I
just like to dabble in this stuff on the side.

I have some experience seeing technical proposals whose biggest merit
is that they solve a 'political' problem. My assessment of IA64 is
more based on that than any technical expertise in the field, where
I'd have to defer to many others.

Dale Pontius
--
 

Tony Hill

Oh, come on. Let's see who said what when. Like the guy at the
McCarthy hearings waving a stack of meaningless paper. I have right
here a list! Yes, many people thought Intel was off on the wrong
track, but not for the reason that has proven to be critically
problematical. The view in the rear-view mirror is just great. If
the forward view, in the specific way that I mentioned, was so
obvious, it should take no effort whatsoever to find someone who said
it ahead of time, not after the fact.

This history predates my interest in CPU architecture a bit, but
didn't the Alpha have OOO way back in its first revision or two?
Certainly in the early 90's the Alpha was seen by many (most?) to be
leading the way for processor design of the future. It seems to me
like Intel needed only to look at what the Digital chip designers were
doing (while hopefully totally ignoring what the Digital marketing and
management people were doing) to get some clues as to what they should
do. As I recall, the Alpha first started shipping in '91, while
Intel/HP didn't even start designing IA64 until '93.
 

daytripper

This history predates my interest in CPU architecture a bit, but
didn't the Alpha have OOO way back in its first revision or two?
Certainly in the early 90's the Alpha was seen by many (most?) to be
leading the way for processor design of the future. It seems to me
like Intel needed only to look at what the Digital chip designers were
doing (while hopefully totally ignoring what the Digital marketing and
management people were doing) to get some clues as to what they should
do. As I recall, the Alpha first started shipping in '91, while
Intel/HP didn't even start designing IA64 until '93.

Well, yeah - there was that little legal tiff about Intel treading upon the
Digital ip portfolio ;-)
 

Robert Myers

This history predates my interest in CPU architecture a bit, but
didn't the Alpha have OOO way back in its first revision or two?
Certainly in the early 90's the Alpha was seen by many (most?) to be
leading the way for processor design of the future. It seems to me
like Intel needed only to look at what the Digital chip designers were
doing (while hopefully totally ignoring what the Digital marketing and
management people were doing) to get some clues as to what they should
do. As I recall, the Alpha first started shipping in '91, while
Intel/HP didn't even start designing IA64 until '93.

http://www.byte.com/art/9612/sec6/art3.htm

1993: First out-of-order execution microprocessor:
IBM and Motorola PowerPC 601

1995: Pentium Pro
[My words] first x86 with OoO core, very controversial because it
didn't execute legacy 16-bit applications well, but it was the first
implementation of the P6 Core. Introduced many other innovations as
well, including register renaming, which is essential to OoO having
any impact at all.
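
[Still my words] A toy Python sketch of why renaming matters -- the
register names and the allocation scheme are made up for illustration,
not a description of the P6 hardware. Two instructions that both write
EAX look serialized to the ISA; give each write a fresh physical
register and an OoO core is free to run them in parallel:

    from itertools import count

    phys = count()    # allocator for "physical" register names
    rename = {}       # architectural register -> current physical register

    def issue(dest, srcs):
        """Rename one instruction; return its physical dest and sources."""
        srcs_p = [rename[s] for s in srcs]  # read the current mappings
        rename[dest] = "p%d" % next(phys)   # fresh dest kills WAW/WAR hazards
        return rename[dest], srcs_p

    rename.update(ebx="p100", ecx="p101")
    print(issue("eax", ["ebx"]))  # ('p0', ['p100'])
    print(issue("eax", ["ecx"]))  # ('p1', ['p101']) -- no false wait on p0
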

Extracted from a 2001-05-29 06:01:54 PST post to this very newsgroup
by one Yousuf Kahn, quoting a Wall Street Journal Story about the
history of Itanium:

"In June 1994, the companies announced a partnership, with Intel
leading the design of a 64-bit processor that would use many of H-P's
ideas."

[My words] Those ideas had already been cooking in HP's labs for
several years.

It is clear that the broad outlines of the architecture (VLIW) and
probably many details of the ISA for Itanium were in motion _before_
the world of microprocessors even had a look at OoO, never mind having
a grasp of why it was so important.

Could someone have peered into the future and seen that processor
speeds and memory latencies would diverge so widely and then also
accurately predicted the consequences for Itanium? Possibly. If
someone did and a record was made of that person's prescience, could I
be pointed to the details?

RM
 

Felger Carbon

Robert Myers said:
Glass half full, glass half empty.

The glass is twice as large as it needs to be.

Felger Carbon
Nuts and bolts engineering dun rite
 

Robert Myers

I don't claim to have forecast OoO, or anything about it. To be
perfectly honest, I don't know exactly when OoO hit the mainstream. I
seem to remember seeing an overview of the PentiumPro architecture that
had what I now think of as OoO structures, but I'm honestly not sure.

But OoO came to maturity while IA64 was in development. You don't have
to have forecast the future; when you see it happening, you have to
review your current plans.

I've heard of an even bigger similar project than IA64 getting stopped
in its tracks when reality smacked it in the nose. Well, one such
bigger project happened before my time, and one slightly smaller during
my tenure. Sometimes reality makes you change your plans.
To an ordinary mortal on the street, Intel certainly looks to be
guilty of hubris. The P6 architecture (which was introduced as the
PentiumPro) started producing almost immediate evidence of the impact
of OoO, but it wasn't until the PIII that Intel had the formula down
to something attractive. Even so, at some point between 1994 and now
there was time to recognize that staking so much on static scheduling
was a very high-risk bet.
On a slightly different, but still related vein... A friend once
brought back one key piece of wisdom from a conference, "Software
is hard, and hardware is easy."
Software is the true Achilles heel of Itanium, at least as things
stand now.

One further comment about compiler scheduling of instruction flow...
Isn't this one of the lessons of MIPS - that you'd have poor portability
of binaries from one generation to the next? We've really only seen
two generations of IA64, Merced and McKinley. All else has been shrinks
and cache enhancements. They've had to flog the compilers SOOOO hard to
get the levels of performance so far attained. What happens when the
next real IA64 architecture rev comes along? I truly doubt it will get
its best performance from even the best McKinley compiler, and what
will happen when code from the McKinley+1 compiler is used on McKinley?
Plus, how long will it take to reach the good McKinley+1 compiler? Will
it bring back the days of fat binaries?
The problem is even worse than that. Because the critical issue is
cache misses, a binary tuned for a 6MB cache will probably flop
miserably with even a 3MB cache, not even to think about a 1.5MB
cache, and a binary tuned for a 1.5MB cache will make suboptimal use
of a 6MB cache unless (as is very likely) the cache is being shared by
a completely unpredictable mix of jobs.
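
Toy numbers again (the cache sizes and miss rates below are assumptions
for illustration, not measurements): block a loop so its working set
just fits a 6MB cache and the miss rate stays low; run the same binary
on a 3MB part and the working set spills, the miss rate jumps, and the
CPI arithmetic from earlier in the thread takes over.

    def rough_slowdown(working_set_mb, cache_mb,
                       fit_miss_rate=0.001, spill_miss_rate=0.01,
                       base_cpi=1.0, miss_penalty=200.0):
        """Crude model: one flat miss rate if the working set fits, another if not."""
        miss_rate = fit_miss_rate if working_set_mb <= cache_mb else spill_miss_rate
        return (base_cpi + miss_rate * miss_penalty) / base_cpi

    print(rough_slowdown(6.0, 6.0))   # 1.2 -- binary tuned for the big cache
    print(rough_slowdown(6.0, 3.0))   # 3.0 -- same binary, smaller cache
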

I've said it so many times that I've begun to sound like a broken
record, but one side-benefit for computing is that Intel has become a
patron of fundamental research as a result of choosing such a hard
strategy. Whether that's good for Intel stockholders or not is
another question, but much good work has come out of Intel's misery.

I may not be in the CPU business, but I've spent a lot of years in a
big company - a company once renowned for being self-absorbed. Maybe
I'm only starting to learn about caching issues in ccNUMA, but I've
seen internal corporate politics before, and I think I can recognize
the signs. IA64 reeks of it.
I'm sure that there have been some real shoot-outs inside Intel over
Itanium. From what I've heard of Intel's corporate style, it doesn't
sound like a fun place to work by any measure. My read, from a very
small bit of exposure and from comments that have been made publicly,
is that Intel is run top-down with a very firm hand. Whatever
politics there are that matter appear to be confined to a very small
circle.
Comparison: Sometimes you get execs who want a magic bullet to solve
their problem. Sometimes you get someone pushing what should be an
academic solution, but they make a convincing claim that they have
the magic bullet. I've seen several projects of this sort started,
and at least two within my tenure go to hardware and conferences; I
even worked on one of them. (The other predates the web.) Again, I see
similarities in IA64.
Intel is being very stubborn on this one. The reasons why IA64 has
turned out to be so very hard are, in my judgment, actually fairly
subtle. Everybody knew it would require a compiler like no one had
ever seen before, and both Intel and Microsoft have gone after it
hammer and tongs. They have succeeded in doing amazing things, and,
if private correspondence I've had is to be believed, at least one of
the players has advanced the state of compiler art in a way that is
nothing short of amazing.

I persist in thinking they will make it work. Whether they ever get
their investment back is another matter entirely.

RM
 

Keith R. Williams

I've heard of an even bigger similar project than IA64 getting stopped
in its tracks when reality smacked it in the nose. Well, one such
bigger project happened before my time, and one slightly smaller during
my tenure. Sometimes reality makes you change your plans.

It wasn't before my time. I was hired to work on that particular nose,
and then went on to the "AMD" version of reality. The customers spoke:
"NO FREAKING WAY".
On a slightly different, but still related vein... A friend once
brought back one key piece of wisdom from a conference, "Software
is hard, and hardware is easy."

Cute. Clueless, but cute. ;-)
Here you hit the nail on the head.

You have to ask what problems Intel was trying to solve with IA64.
They're obviously *always* after performance. But in this case, IMHO
they were after clone-relief, too.

I may not be in the CPU business, but I've spent a lot of years in a
big company - a company once renouned for being self-absorbed. Maybe
I'm only starting to learn about caching issues in ccNUMA, but I've
seen internal corporate politics before, and I think I can recognize
the signs. IA64 reeks of it.

It has since Intel first disclosed the architecture. ...actually
before, if you count the rumor mills.

<snip>
 

George Macdonald

I will do the historical search to educate myself, but I won't be at
all surprised to find that, by then, people were saying that the
Itanium scheduling strategy was a disaster. I can even guess at a name
that will show up. By then, though, the value of OoO had already been
established in actual practice, the memory wall was an issue around
which conferences had already been organized and held, and there were
at least some people who understood why OoO was *so* important, but
most important, the ship and who knows how many hundred million dollars
had already left the dock starting many years before.

Oh yes, OoO was well established by then; the arguments against
EPIC/VLIW had been around for a while, but certainly in that time frame
there were some notable names who had come *out* against it very openly...
with some attached risk, I thought. I guess they had been hoping that it
would wither on the bough, or they had been uninterested 3rd parties to
that point.
From discussions I've already had online, I know that there were by
the 2K time frame serious disagreements within Intel about the future
of static scheduling, but I'm reasonably certain that Intel has been
investigating strategies specifically aimed at that problem starting
from just about then.

All of the evidence available to me is that Intel was damned and
determined to impose IA-64 on the world, that the people with the
power to say yes or no felt that they had the money to do it one way
or another, and that even what many regard as an impending event
(64-bit x86) that will spell certain doom for IA-64 has done little or
nothing to change their minds.

So Intel wanted to change its place in the scheme of computing "things"...
and jump the ship which had been feeding them for umpteen years. Others
have tried that too... forgotten names.:) Success in (any) business often
leads to delusions of grandeur which are the path to the final downfall.
Intel needs to keep sight of who pays its bills. If the upcoming rumored
"64-bit" turns out to be x86-64 it'll at least be a sign that there is
still some sanity... and that they learned something from the DRDRAM
debacle.
Glass half full, glass half empty.

Glass half full: increasing the number of named registers is *such* an
obvious thing to do. FPU stack not so important to me because x87
arithmetic not so important to me, but I can see that many will regard
that as progress. Big flat memory space a nice extra and a real
relief for people who actually have to code above 4GB.

Glass half empty: can't virtualize, therefore can't build a true
sandbox. Stuck with emulators like Wine that will always be glitchy.
Sandboxes are really nice for people who are doing serious work and
should probably be mandatory as a security measure for net
applications.

Everything's a compromise. For the desktop/workstation space x86-64 is
quite a leap forward from my POV - it also feels right to me. I don't know
enough to say for certain but I'd think with some further loss of legacy
baggage and a bit of evolutionary clean up it could become a very nice
future platform.

I wonder how much the large systems, which fit somewhere between Grid
computing and Super computing, were an intended market for AMD. They're
certainly helping AMD with ASPs... much to the chagrin of the adversarial
anal...ysts.
VLIW has been at the very least controversial for a long, long time.
The point I was trying to drive home is that Intel's motivations in
choosing the design strategy are not and never have been driven
primarily by the considerations that would first (or perhaps ever)
come to the mind of an engineer.

If somebody can come up with a nice RISC chip that will keep us from
returning to a world owned by IBM on the high end, I'm in favor of it.
SPARC via Sun and Fujitsu are the only other games left in town.
People who talk down Itanium while counting on x86-64 as a competitor
to IBM's domination of high-end computing just aren't thinking clearly.

Hmmm, Compaq, HP and Intel combined just destroyed any such threat from
the most likely candidate.:) What a waste.:-(

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 

Tony Hill

http://www.byte.com/art/9612/sec6/art3.htm

1993: First out-of-order execution microprocessor:
IBM and Motorola PowerPC 601

Yeah, I just noticed in an unrelated post that someone mentioned the
first revision or two of the Alpha were in-order designs.
1995: Pentium Pro
[My words] first x86 with OoO core, very controversial because it
It is clear that the broad outlines of the architecture (VLIW) and
probably many details of the ISA for Itanium were in motion _before_
the world of microprocessors even had a look at OoO, never mind having
a grasp of why it was so important.

So it seems that the beginnings of the Itanium were set in motion
before chips had OoO implemented, but not by much. The real work
didn't start until '93 or '94, and as you mentioned above, the first
OoO processor hit the market in '93. Even Intel obviously knew a
thing or two about OoO chips since the design for their P6 core would
surely have been nearly complete by '93 or '94 when they started work
on Itanium.
Could someone have peered into the future and seen that processor
speeds and memory latencies would diverge so widely and then also
accurately predicted the consequences for Itanium? Possibly. If
someone did and a record was made of that person's prescience, could I
be pointed to the details?

I suppose hindsight is always 20/20, but it seems to me like the
writing was on the wall when the very first real work started on the
chip. If Motorola and IBM came out with an OoO PowerPC
chip in '93 then there MUST have been some people researching it and
talking about this back in the early '90s or maybe even the late '80s.

Even if the ISA was set before OoO was commonplace, I'd say that it
looks VERY shortsighted not to have seen that OoO might become very
important real-soon-now. Throughout the '90s nearly every chip design
around had moved to OOO execution, so there MUST have been some
research prior to that which suggested this would be beneficial.

I just dug up a bit of research, and among other things I came up with
this page:

http://www.cs.clemson.edu/~mark/metaflow.html

Talking about OoO SPARC v8 chips where the design was started in '88.
This seems to be the beginnings of out-of-order execution on
processors. So certainly there were SOME people out there as early as
'88 talking about OoO before the Itanium design got off the ground,
and that movement was growing fast. From about '95 onwards nearly
every chip produced was OoO (including those from Intel and HP, the
PPro in '95 and the PA-RISC 8000 in '96). Surely these chips were
well underway in the early 90's.

It seems to me at least that there was enough proof that OoO was THE
way to go as early as '92 or '93, enough so that ALL new processors
being designed at that time were OoO chips (those released in the '93
- '96 timeframe). Yet Intel still decided to get together with HP to
continue work they had done on a chip that was never intended to be
OoO capable.
 

Robert Myers

I suppose hindsight is always 20/20, but it seems to me like the
writing was on the wall when the very first real work started on the
chip. If Motorola and IBM came out with an OoO PowerPC
chip in '93 then there MUST have been some people researching it and
talking about this back in the early '90s or maybe even the late '80s.

Even if the ISA was set before OoO was commonplace, I'd say that it
looks VERY shortsighted not to have seen that OoO might become very
important real-soon-now. Throughout the '90s nearly every chip design
around had moved to OOO execution, so there MUST have been some
research prior to that which suggested this would be beneficial.

I just dug up a bit of research, and among other things I came up with
this page:

http://www.cs.clemson.edu/~mark/metaflow.html

Talking about OoO SPARC v8 chips where the design was started in '88.
This seems to be the beginnings of out-of-order execution on
processors. So certainly there were SOME people out there as early as
'88 talking about OoO before the Itanium design got off the ground,
and that movement was growing fast. From about '95 onwards nearly
every chip produced was OoO (including those from Intel and HP, the
PPro in '95 and the PA-RISC 8000 in '96). Surely these chips were
well underway in the early 90's.

It seems to me at least that there was enough proof that OoO was THE
way to go as early as '92 or '93, enough so that ALL new processors
being designed at that time were OoO chips (those released in the '93
- '96 timeframe). Yet Intel still decided to get together with HP to
continue work they had done on a chip that was never intended to be
OoO capable.

You are confusing the herd being headed in a particular direction with
evidence that it could be known by then that OoO was THE way to go.
Even an architect who played a big part in making OoO work on P6
doesn't say that. (For the same reason George doesn't want to cite
actual quotes, neither do I. The players still have careers.)

The actual hubris that Intel/HP was guilty of was in relying on a
compiler that everyone said could not be written. As usual, people
muttered something about NP-hard or similar and said it couldn't be
done. In large measure, they have proven their critics wrong. The
compilers have been written and perform miracles, and the world is
better off for their stubbornness.

The reason it doesn't work as well as one might like doesn't have to
do with NP-hard or anything like it. It doesn't even have anything to
do with the predictable unpredictability of codes (the fact that you
*are* going to mispredict branches and that you *are* going to
misspeculate loads).

As I understand things currently, the problem is the unpredictability
about which *no* information is available. It's not in the source
code, and it's not in the training run file because it has nothing to
do with the code itself. It comes from the fact that, short of
working on bare iron, computers do not present a predictable run-time
environment.

It's only the *huge* cost of an unpredictable cache miss that makes it
so painful, and Intel is working on it.

Intel arrogant: yes. Intel stubborn: yes. Intel stupid: no.

RM
 

Tony Hill

You are confusing the herd being headed in a particular direction with
evidence that it could be known by then that OoO was THE way to go.

Perhaps I am giving the herd too much credit, but it certainly seems
to me that there was enough evidence that OoO was THE way to go to
convince EVERY major chip designer out there to put in the effort.
These are multi-multi-million dollar projects, so usually you don't get
too many people taking risks if they don't think they'll pan out.

It certainly looks to me like Intel/HP were the ones taking the risk
with IA64 not supporting OoO, rather than everyone else doing a design
that did support OoO.
The actual hubris that Intel/HP was guilty of was in relying on a
compiler that everyone said could not be written. As usual, people
muttered something about NP-hard or similar and said it couldn't be
done. In large measure, they have proven their critics wrong. The
compilers have been written and perform miracles, and the world is
better off for their stubbornness.

Probably true, though it makes you wonder what other miracles might
have been performed if the resources had been directed elsewhere.
The reason it doesn't work as well as one might like doesn't have to
do with NP-hard or anything like it. It doesn't even have anything to
do with the predictable unpredictability of codes (the fact that you
*are* going to mispredict branches and that you *are* going to
misspeculate loads).

As I understand things currently, the problem is the unpredictability
about which *no* information is available. It's not in the source
code, and it's not in the training run file because it has nothing to
do with the code itself. It comes from the fact that, short of
working on bare iron, computers do not present a predictable run-time
environment.

If these sorts of things were predictable, we probably wouldn't really
need processors, just ASICs for everything! :>
It's only the *huge* cost of an unpredictable cache miss that makes it
so painful, and Intel is working on it.

Intel arrogant: yes. Intel stubborn: yes. Intel stupid: no.

Perhaps not stupid in general, though it does sometimes look like
they've made stupid mistakes due to their arrogance and stubbornness.

What I really see in all this is that Intel doesn't design with
software in mind. Every other chip designer out there seems to design
their chips with an eye to the software they are going to run. This
is obvious with companies like Sun and IBM which are designing chips
for their own servers, their own operating system and even their own
apps. However even AMD seems to spend a lot more effort trying to
figure out what they can do in their processors to make the software
run better.

Intel seems to take the opposite approach: they design their
processors based on their own notion of how a processor should work.
Then after the fact they go to the software side of things and tell
them how software should work best on their platforms. Even on the PC
side of things we sometimes see this, with the P4 tending to need
better compilers for optimal performance than AMD's chips. With IA64
they took this to the extreme, hardware first and then figure out how
the heck you can get software to work on the thing.

If you build it they will come?
 

James Van Artsdalen

Goose said:
Concede-to-amd Technology?
Caught-with-our-pants-down Technology?
Catch-up Technology?
Cough-amd-cough Technology?
Change-course Technology?
...

Competitor's Technology
 

Robert Myers

Intel seems to take the opposite approach: they design their
processors based on their own notion of how a processor should work.
Then after the fact they go to the software side of things and tell
them how software should work best on their platforms. Even on the PC
side of things we sometimes see this, with the P4 tending to need
better compilers for optimal performance than AMD's chips. With IA64
they took this to the extreme, hardware first and then figure out how
the heck you can get software to work on the thing.

Yes. That's exactly what they've been doing, and I don't think it
bothers Intel very much that it makes them unpopular with hobbyists.
I don't think it bothers Microsoft, either. Both Intel and Microsoft
have in-house compiler teams staffed by the smartest people they can
buy.

It has taken the combined talent of nearly the entire rest of the
world working on gcc even to make a horse-race of it. It has pushed
gcc so hard that they have realized that they are going to have to
face up to modernizing the internals of the compiler rather than
continuing forever to fiddle with RTL.

I don't see these as bad things.
If you build it they will come?

Engineers hate marketing. IBM, Intel, and Microsoft all have three
things in common: they are or have been top of the heap, they are or
have been the object of widespread enmity within the community of
technical professionals, and they know how to market.

Nobody _needs_ a computer; certainly not the kind of computer that
corporations routinely ante up for. For a while, I worked for a
penny-pinching sourpuss who thought that green-screen serial terminals
off a Unix server were just fine. You know what? For the purposes he
needed served, he was right.

By creating the perception (in this case largely Microsoft's doing)
that somehow you _need_ a computer with 32-bit color, a hundred or so
fonts, and the ability to play Beethoven's fifth symphony from a midi
file, the cost of such an implausible device is now far less than the
cost of a large-screen TV. The cost is _so_ low, in fact, that it's
hard to build a professional-quality thin client that can compete in
price. That's marketing.

An example of a company that thought that if you build it they will
come was DEC.

Okay folks, load up the baskets with rotten fruit and vegetables.
Haul out the flamethrowers, 'cause here it comes. AMD usually rides
on Intel's marketing coattails. In the case of x86-64, they have
beaten Intel at its own game by creating a desire where there is no
actual need. That's marketing.

RM
 

Keith R. Williams

Easy for you to say. If anyone accurately foresaw the importance of
OoO and just exactly _why_ it would be so important _before_ it was
introduced into common usage, I should be very much indebted to anyone
who can direct me to an appropriate link (who knows, maybe such a link
exists). Run-time scheduling may or may not prove in the long run to
play the critical role that it currently does, so I'm not going to
make any emphatic statements that purport to be true for all time.
People have tried every conceivable scheme for scheduling, and right
now, on-die runtime scheduling appears to be the winner.

....late to the party, but:

Gee, even the much maligned Cyrix 6X86 was an OoO processor, sold
in what, 1996? Evidently Cyrix thought it was a winner, and they
weren't wrong.
 

Keith R. Williams

I wouldn't write off the EV gang quite yet. More than one rumour has them
designing dual core ia64 parts...

....another reason to write them off. At least they're employed.
Gainfully?
 
