old Itanium articles

Yousuf Khan

Interesting old articles from the dawn of the Itanium age. I found it
interesting reviewing it now that we're probably nearing its twilight.

These two came from the time of the first Merced release in 2001:
http://zdnet.com.com/2100-11-529889.html
http://www.g4techtv.com/techtvvault/features/30631/Intel_Launches_NextGen_Chip.html

They were predicting that Itanium would be competing against RISC processors
in both servers and workstations. Not to mention predictions of Itanium
becoming a consumer product by 2004. They were even predicting that Itanium
would eventually be able to make Star Trek-style holograms. But there were
some naysayers: David House, an Intel manager who had long since left,
predicted that this would be one of the world's worst investments.

This is an old Intel press release from 1997:
http://tinyurl.com/5c9rn

This press release was announcing plans to release Merced by 1999. But of
course it didn't come out till 2001.

Yousuf Khan
 
Grumble

Yousuf said:
Interesting old articles from the dawn of the Itanium age. I found it
interesting reviewing it now that we're probably nearing its twilight.

Yousuf,

Do you ever take a break from trolling IPF? Seriously.
 
Robert Myers

Yousuf said:
Interesting old articles from the dawn of the Itanium age. I found it
interesting reviewing it now that we're probably nearing its twilight.

These two came from the time of the first Merced release in 2001:
http://zdnet.com.com/2100-11-529889.html

<quote>

At a preliminary technical exchange, says WideWord architect Rajiv
Gupta, "I looked Albert Yu in the eyes and showed him we could run
circles around PowerPC [an IBM processor], that we could kill PowerPC,
that we could kill the x86. Albert, he's like a big Buddha. He just
smiles and nods."

</quote>

No matter how accurate your prediction about the twilight of Itanium, I
hope we get a chance to understand the details behind statements like that.

IBM, HP, Intel, and Elbrus (among others) all thought they could do
amazing things with instruction-level parallelism via a long instruction
word--so amazing that they thought (at least in the case of HP, Intel,
and Elbrus) that they could blow the competition away.

It hasn't turned out that way, and it would be enlightening to be able
to see what evidence they were looking at that ended up misleading them.
For Intel, that has to be more than an idle exercise. For the rest of
us, there is probably insight to be gained.
http://www.g4techtv.com/techtvvault/features/30631/Intel_Launches_NextGen_Chip.html

They were predicting that Itanium would be competing against RISC processors
in both servers and workstations.

The issue of the moment, on the other hand, just isn't all that
interesting (to me, at least). Intel has misjudged the market for Itanium
in numerous ways? At some point, that has to stop being news.

RM
 
Benjamin Gawert

Yousuf said:
This press release was announcing plans to release Merced by 1999.
But of course it didn't come out till 2001.

Interesting. We got our first production systems with Itanium 733MHz and
800MHz in Q1/Q2 2000, and the preproduction systems already in 1999 (Itanium
667MHz)...

Benjamin
 
Yousuf Khan

Robert said:
Yousuf said:
Interesting old articles from the dawn of the Itanium age. I found it
interesting reviewing it now that we're probably nearing its
twilight. These two came from the time of the first Merced release in
2001:
http://zdnet.com.com/2100-11-529889.html

<quote>

At a preliminary technical exchange, says WideWord architect Rajiv
Gupta, "I looked Albert Yu in the eyes and showed him we could run
circles around PowerPC [an IBM processor], that we could kill PowerPC,
that we could kill the x86. Albert, he's like a big Buddha. He just
smiles and nods."

</quote>

No matter how accurate your prediction about the twilight of Itanium,
I hope we get a chance to understand the details behind statements
like that.

Couldn't it just be "trying to make a pitch to management"?
IBM, HP, Intel, and Elbrus (among others) all thought they could do
amazing things with instruction-level parallelism via a long
instruction word--so amazing that they thought (at least in the case
of HP, Intel, and Elbrus) that they could blow the competition away.

It's probably still possible to achieve incredible performance, but maybe
that's not the type of performance that's so important for customers?

I just don't see Itanium as being done in by its performance. I think it's
simply that Itanium didn't address any computational needs. People had
existing code that they wanted to run, and Itanium would've made them
rewrite their code just to run it.
The issue of the moment, on the other hand, just isn't all that
interesting (to me, at least). Intel has misjudged the market for
Itanium in numerous ways? At some point, that has to stop being news.

Well, no, that's not the point. They were clearly hoping that Itanium would
be big on workstations just as much as on servers, because workstations are
only one step away from being PCs too.

Yousuf Khan
 
David Wang

In comp.sys.intel Yousuf Khan said:
I just don't see Itanium as being done in by its performance. I think it's
simply that Itanium didn't address any computational needs. People had
existing code that they wanted to run, and Itanium would've made them
rewrite their code, just to run.

I wrote some code that mostly ran on x86 boxes.

I had some memory issues, and was given an Itanium box to play
with. I moved my code onto the Itanium box, recompiled and ran.

Everything worked as before. I couldn't tell that I was running on
an Itanium box, except that I knew I was.
 
Yousuf Khan

David said:
I wrote some code that mostly ran on x86 boxes.

I had some memory issues, and was given an Itanium box to play
with. I moved my code onto the Itanium box, recompiled and ran.

Everything worked as before. I couldn't tell that I was running on
an Itanium box, except that I knew I was.

What if you didn't have the source code?

Yousuf Khan
 
Robert Myers

Yousuf said:
Robert said:
Yousuf Khan wrote:

Interesting old articles from the dawn of the Itanium age. I found it
interesting reviewing it now that we're probably nearing its
twilight. These two came from the time of the first Merced release in
2001:
http://zdnet.com.com/2100-11-529889.html

<quote>

At a preliminary technical exchange, says WideWord architect Rajiv
Gupta, "I looked Albert Yu in the eyes and showed him we could run
circles around PowerPC [an IBM processor], that we could kill PowerPC,
that we could kill the x86. Albert, he's like a big Buddha. He just
smiles and nods."

</quote>

No matter how accurate your prediction about the twilight of Itanium,
I hope we get a chance to understand the details behind statements
like that.


Couldn't it just be "trying to make a pitch to management"?

I don't think so. Legend has it that Grove went to Russia and came back
convinced that, if Intel didn't do it, Elbrus would. I think I've got
those details right.
It's probably still possible to achieve incredible performance, but maybe
that's not the type of performance that's so important for customers?

I just don't see Itanium as being done in by its performance. I think it's
simply that Itanium didn't address any computational needs. People had
existing code that they wanted to run, and Itanium would've made them
rewrite their code, just to run.

Your post created an image of bulldozers pushing mountains of c into the
ocean with wheeling seagulls picking away at the rotting garbage. I
like it. You have some idea why I don't share everyone else's apparent
enthusiasm for granting x86 immortality?

In any case, I don't think anyone ever expected that much code would be
rewritten. Recompiled and retuned, yes. Rewritten, no.
Well, no, that's not the point. They were clearly hoping that Itanium would
be big on workstations just as much as servers, because on workstations they
were one step away from being PCs too.

Yeah, if I worked at it, I think I could find articles not just mentioning
workstations as a target market in passing, but going on elaborately
about them (a market that was dominated by RISC, a market of typically early
adopters, stuff I can't remember, I'm sure). But so what?

It's like the sunk costs which, as Keith pointed out, don't figure into
ROI calculations. Obsessing over who Intel _thought_ they were going to
sell the chip to just doesn't accomplish that much. Who do they think
they're going to sell it to now? That's what matters.

RM
 
David Wang

Yousuf Khan said:
What if you didn't have the source code?

Then you couldn't "re-write" the application, which you
claimed was needed for x86 to Itanium migration.

The fact of the matter was that I didn't even have to
change the makefile. I ran the exact same code on
Mandrake + x86 as I did on Redhat + Itanium.
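
(As an aside, a minimal sketch -- hypothetical code, not the code discussed in
this post -- of why a plain recompile can be enough: C that never assumes
pointer or integer widths builds unchanged on 32-bit x86 and on LP64 Itanium,
while code that casts pointers to int is what usually needs touching.)

  /* portable.c - illustrative only, assumes gcc or similar on both boxes */
  #include <stdio.h>
  #include <stdint.h>
  #include <stdlib.h>

  int main(void)
  {
      size_t n = 1000000;                  /* sizes kept in size_t, not int */
      int *buf = malloc(n * sizeof *buf);
      if (buf == NULL)
          return 1;

      /* uintptr_t is 32 bits wide on IA-32 and 64 bits on Itanium,
         but this source line is identical on both. */
      printf("buffer at %#lx, %lu bytes\n",
             (unsigned long)(uintptr_t)buf,
             (unsigned long)(n * sizeof *buf));

      free(buf);
      return 0;
  }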
 
Yousuf Khan

Robert said:
I don't think so. Legend has it that Grove went to Russia and came
back convinced that, if Intel didn't do it, Elbrus would. I think
I've got those details right.

Was Elbrus that much of a benchmark to Intel?
Your post created an image of bulldozers pushing mountains of c into
the ocean with wheeling seagulls picking away at the rotting garbage.
I like it. You have some idea why I don't share everyone else's
apparent enthusiasm for granting x86 immortality?

Well, that wasn't the image I was trying to convey, but now that you've told
me that's the image you had, I can't get it out of my head. :)

As for x86's immortality, it stays important by evolving to fill modern
needs. Eventually, you might find that x86 has evolved so much that it's
become hidden behind a completely different architecture. AMD64 seems to be
one small step towards hiding away x86. I personally thought that 32 bits
was all that could be had from x86; I couldn't imagine anyone adding much
to it to extend it out to 64 bits. But I was wrong: AMD64 actually does a
little bit of creative subtracting to extend x86 -- I never imagined
that was one of the available options.
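
(One concrete instance of that subtracting, as a hedged illustration rather
than anything from the articles: the one-byte INC/DEC opcodes 0x40-0x4F were
removed from 64-bit mode and reused as REX prefixes, so the same bytes decode
differently depending on the mode.)

  /* rex_reuse.c - illustrative only */
  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
      /* 32-bit mode:  48        dec eax
                       01 D8     add eax, ebx
         64-bit mode:  48 01 D8  add rax, rbx   (0x48 is now the REX.W prefix) */
      const unsigned char code[] = { 0x48, 0x01, 0xD8 };

      for (size_t i = 0; i < sizeof code; i++)
          printf("%02X ", code[i]);
      printf("\n");
      return 0;
  }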

Actually, when I first heard of IA64, and how it was going to maintain
compatibility with x86, I thought it was a winner for sure. But of course, I
was also pretty puzzled by how Intel was going to extend x86, since as I
said I couldn't imagine it, but I knew that Intel must have some plan --
it's their own design after all. Then eventually I heard IA64 was a
completely different architecture between its 32- and 64-bit modes. I thought
that still made some sense, and I still thought it was a winner; at this
point, I was thinking it would be some kind of RISC architecture with
enough in common with x86 encodings to work both ways. It was only after
I started finding out that IA64 was so alien to x86 that it could only
emulate it that I started changing my mind about it. Without full-speed
x86, it was going to be a loser.
In any case, I don't think anyone ever expected that much code would
be rewritten. Recompiled and retuned, yes. Rewritten, no.

Actually, I was using the term "rewritten" rather loosely, to also include
simple recompiles. After all, you may still have to add a line to your
compiler makefile. So that still sort of counts as a rewrite -- anything that
even minorly inconveniences the programmer. :)
Yeah, if I worked at it, I think I could find articles not just mentioning
workstations as a target market in passing, but going on elaborately
about them (market that was dominated by RISC, market of typically
early adopters, stuff I can't remember, I'm sure). But so what?

Yeah, well I brought that point up because, after that HP workstation product
announcement, Intel asserted that workstations were never really a big part of
Itanium's picture.
It's like the sunk costs which, as Keith pointed out, don't figure
into ROI calculations. Obsessing over who Intel _thought_ they were
going to sell the chip to just doesn't accomplish that much. Who do
they think they're going to sell it to now? That's what matters.

You can still force the VMS and NonStop people to migrate. There was an
announcement yesterday that MIPS has been EOL'ed on the NonStop
architecture, to be replaced by Itanium. Unix people can migrate pretty much
anywhere they want.

Yousuf Khan
 
Yousuf Khan

David said:
Then you couldn't "re-write" the application, which you
claimed was needed for x86 to Itanium migration.

Yeah, I did say rewrite. But I said "forced to rewrite", which meant that if
that wasn't a practical option, then these people would be forced to not
choose Itanium.

Yousuf Khan
 
chrisv

Robert Myers said:
Your post created an image of bulldozers pushing mountains of c into the
ocean with wheeling seagulls picking away at the rotting garbage. I
like it. You have some idea why I don't share everyone else's apparent
enthusiasm for granting x86 immortality?

I don't think that x86 will be immortal. If for no other reason, it's
not very suitable for battery/solar-powered applications, which is
where the world is going.
 
Robert Myers

chrisv said:
I don't think that x86 will be immortal. If for no other reason, it's
not very suitable for battery/solar-powered applications, which is
where the world is going.

Things do change, quickly and dramatically, but look at Cobol, Fortran,
and System/360. x86 will be at least as immortal as Cobol, and I just
came into possession of an Itanium box that had had Cobol installed on
it. The x86 code base is just too big and too valuable, and x86-64
gives it many more years of growth.

RM
 
Robert Redelmeier

In comp.sys.ibm.pc.hardware.chips chrisv said:
I don't think that x86 will be immortal. If for no other
reason, it's not very suitable for battery/solar-powered
applications, which is where the world is going.

An interesting contention. Why is x86 unsuitable?

True, most current CPUs are fast power-hogs, but that
doesn't stop anyone from making slow (400MHz) sippers.
The large caches come in handy with slow flash.

-- Robert
 
chrisv

Robert Redelmeier said:
An interesting contention. Why is x86 unsuitable?

True, most current CPUs are fast power-hogs, but that
doesn't stop anyone from making slow (400MHz) sippers.

Would not its MIPS/Watt still be inferior to more modern designs?
The large caches come in handy with slow flash.

Caches can be added to any CPU, of course...
 
Dorothy Bradbury

chrisv said:
Would not its MIPS/Watt still be inferior to more modern designs?

P-M does ok - derived from P3 lower power embedded-application work.
o Everyone said that Intel made little progress in that area
o Perhaps, but the R&D/knowledge gave it a great mobile chip
o Together with a route-map replacing Prescott

Mobile is as much about CPU cost tho - not just power, plus the O/S.

The world is going mobile and could well move there more quickly:
o We can argue 50$ oil is short-term
---- peak oil is often argued - but the projection is as reliable as W/S analysis
o Longer term we know oil-heavy industry is going off-balance
---- USA will probably never sign Kyoto, but India/China have exemption
---- USA tax incentives outsource manuf/prodn/oil-heavy to India/China
---- so incentive for oil efficiency is outweighed by outsourcing efficiency
o So mobile in the West may indeed focus more on lower power
---- consumer energy prices are a favourite excuse for "Green" & "Indirect Taxation"
---- discretionary spend still has energy to pay for, pump or elec meter

Outside of energy each year we seem to collect more e-organiser tools, from the
2-way blueberry thro to i-tunes, to e-organisers. Desktop avg power is still about
80-120W, (lower from TFT, higher from modern CPU even at idle). Wearable tech
and convergence of existing devices isn't going away - cameras into phones.

For a lot of that the early offerings don't suit x86, later, dunno.

If you are willing to sacrifice a bit of performance, you can save a lot of watts.
Who, or what, wins the embedded platform will be interesting. Emphasis on the
mobile/embedded market is high - lots of sales, automatic obsolescence, and a
lot of customer-spend interface points, as Nokia's incremental change history shows.

If fundamentalists take the House of Saud offline, I guess we'll soon notice with
90% of the world's cheapest oil being there - as well as in Iraq. Perhaps the
focus is not whether technology is strategic, but the energy to power it :)

Mini-ITX is still somewhat obscure, but Nano-ITX is coming along soon with a
growing focus on mobility/size/energy. Energy prices in EU/UK could see an
easy 30-60% rise over the next 5yrs, with indirect taxation rises on top. Moving
the upgrade cycle from just cpu-power/features to energy is a good one re cost.
o Make a car heavier, and brakes/wheels/tyres/suspension go heavier, so repeats
o Make a cpu draw less current, and so the package & usability can go up

Battery technology as much as CPUs/OS made mobile-phones & laptop uptake.
To keep growing big-# revenue even single digits you need big-# sales.

Still at the gimmick stage with a lot of mobile products tho, a case of "can do",
but the key innovator is not who makes the tool, it's who builds-on-it/uses it.
I don't like windows on mobile, or on tablets for that much - feels too much like
a committee designed horse (camel). I also don't believe that so much of the
desktop-app compatibility matters for many uses. Xbox showed that MSFT
could design a stable (sw/hw) product, but Sony etc still keep selling theirs.
The big players can burn cash, but to create profit the law of big numbers
makes even a successful new product's profit minor against their o/all P&L.

Still not sure I want a Phone/TV/Camera all combined - I'd just like the actual
phone part to be reliable, but I guess I'm supposed to "feel the features"...

If the x86 industry moves to annual/periodic licence revenue stream, that is
an interesting promoter of change itself. Energy costs do matter - if mobile
tech is to avoid the running-cost irritation of digital cameras and such like.
 
Yousuf Khan

Dorothy said:
P-M does ok - derived from P3 lower power embedded-application work.
o Everyone said that Intel made little progress in that area
o Perhaps, but the R&D/knowledge gave it a great mobile chip
o Together with a route-map replacing Prescott

Mobile is as much one of CPU cost tho - not just power, plus the O/S.

Tom's Hardware just did an overclocking article with a difference. They
underclocked an Athlon XP to use only 4.5W.

http://www.tomshardware.com/cpu/20041001/index.html
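
(Rough arithmetic on why such a drop is plausible -- illustrative numbers of
mine, not figures from the article: dynamic power scales roughly with
frequency times the square of the core voltage, so cutting the multiplier and
Vcore together compounds.)

  P_{\mathrm{dyn}} \approx C V^{2} f
  \;\Rightarrow\;
  \frac{P_2}{P_1} \approx \left(\frac{V_2}{V_1}\right)^{2} \frac{f_2}{f_1}
  \approx \left(\frac{1.1}{1.65}\right)^{2} \times \frac{1}{2} \approx 0.22

That is roughly a factor of 4-5 before counting static leakage, which is how an
undervolted, underclocked chip can land in single-digit watts.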

Yousuf Khan
 
Dorothy Bradbury

Yousuf said:
Tom's Hardware just did an overclocking article with a difference.
They underclocked an Athlon XP to use only 4.5W.
http://www.tomshardware.com/cpu/20041001/index.html

Interesting article.
o Showed how throttling voltage/multipliers far enough is difficult on most boards
o Also showed how VIA Mini-ITX can be neared with AMD XP-M & desktop board
---- altho Mini-ATX and even Pico-BTX are somewhat larger than a Mini-ITX

A pity they didn't do the same tests with P3-Celerons:
o Plenty of those were passively cooled to ~600MHz
o Architecture-wise, the idle & leakage dissipation are far less than on P4s

Cost wise, the differences are quite small:
o Typical desktop PC draws 80-100W
---- increasingly the CPU (eg, P4 Prescott) is a very major component even when idle
o The power dissipation elsewhere on the board was not considered
---- eg, Graphics, RAM, HD & so on still add up beyond pure CPU dissipation
o Despite that, 2.5" & Flash are well suited to a low power processor

For the UK, 100W is ~£80/yr in electricity bill for a "2nd-seat-PC" be it router, home
server, or entertainment server. Have 3 of them and that's £240/yr in running cost.
Dropping the headline figure to 50W cuts those figures to £40/yr & £120/yr respectively.
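
(Sanity check on those numbers, assuming the box runs 24/7 and a tariff of
roughly 9p/kWh -- the tariff is my assumption, not stated above:)

  0.1\,\mathrm{kW} \times 8760\,\mathrm{h/yr} = 876\,\mathrm{kWh/yr},
  \qquad
  876\,\mathrm{kWh} \times \pounds 0.09/\mathrm{kWh} \approx \pounds 79/\mathrm{yr}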

Interesting to calculate the global cost in electricity re Northwood to Prescott :)
At least Prescott LGA775 is re-introducing "speed-step-2" re thermal dissipation.
 
George Macdonald

Dorothy said:
P-M does ok - derived from P3 lower power embedded-application work.
o Everyone said that Intel made little progress in that area
o Perhaps, but the R&D/knowledge gave it a great mobile chip
o Together with a route-map replacing Prescott

For people doing real work, pounding on the CPU all day every day, I don't
think P-M makes a jot of difference... and there are other things about
mobile which make the experience unpleasant... like HDDs which crawl.
Mobile is as much about CPU cost tho - not just power, plus the O/S.

The world is going mobile and could well move there more quickly:
o We can argue 50$ oil is short-term
---- peak oil is often argued - but the projection is as reliable as W/S analysis

Why bother bringing it up then?... because some political hack uses it to
hang his hat?:)

Peak oil is another fraud... up there with the err, "hydrogen economy".
S'funny really - I had a limo driver telling me just recently how all we
had to do was "get the hydrogen thing going and we'd be free of them Arabs".
Things are that bad with the usual culprits in the media.
o Longer term we know oil-heavy industry is going off-balance
---- USA will probably never sign Kyoto, but India/China have exemption
---- USA tax incentives outsource manuf/prodn/oil-heavy to India/China
---- so incentive for oil efficiency is outweighed by outsourcing efficiency
o So mobile in the West may indeed focus more on lower power
---- consumer energy prices are a favourite excuse for "Green" & "Indirect Taxation"
---- discretionary spend still has energy to pay for, pump or elec meter

Maybe about time to figure how much energy is wasted on studying "Green"
and how to save energy - no? Kyoto?.... Bah - too many parasites... not to
mention corruption.:)
Outside of energy each year we seem to collect more e-organiser tools, from the
2-way blueberry thro to i-tunes, to e-organisers. Desktop avg power is still about
80-120W, (lower from TFT, higher from modern CPU even at idle). Wearable tech
and convergence of existing devices isn't going away - cameras into phones.

For a lot of that the early offerings don't suit x86, later, dunno.

If you are willing to sacrifice a bit of performance, you can save a lot of watts.

Compared with 2-3 light bulbs, a TV and water heater as an "average load"
in your typical home, I don't see "a lot" figuring in here, even for a
workstation or server.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
