Might be a book that even R. Myers can love :-)


Robert Myers

Dale Pontius wrote:

The question for IA64 becomes can it bring enough to the table on future
revisions to make up for its obstacles. Will >8-way become compelling,
and at what price? At this point, AMD is trying to push its Opteron ASPs
up, but probably has more flex room than IA64 or Xeon.

At this point, Itanium is _still_ mostly expectation. My point in
commenting on the book that started the thread is that Intel seemed to
have no interest in lowering expectations about Itanium.

Intel will do _something_ to diminish the handicap that Itanium
currently has due to in-order execution. The least painful thing that
Intel can do, as far as I understand things, is to use speculative
slices as a prefetch mechanism. That gets a big piece of the advantages
of OoO without changing the main thread control logic at all. Whether
that strategy works at an acceptable cost in transistors and power is
another question.

That single change could rewrite the rules for Itanium, because it will
take much of the heat off compilation and allow people to see, far more
often, the kind of performance that Itanium now seems to produce mostly
only in benchmarks.
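
To make the mechanism concrete, here is a minimal sketch in C of the
software analogue of a speculative slice: the address-generating
instructions of a loop are distilled out and run ahead of the main
computation, so loads are warm in the cache when the main thread needs
them. This is only an illustration under assumptions of mine - real
speculative slices would be spawned by the hardware from the
instruction stream itself rather than rely on a compiler-visible
prefetch, RUN_AHEAD is an arbitrary tuning choice, and
__builtin_prefetch is the GCC builtin:

#include <stddef.h>

#define RUN_AHEAD 16  /* prefetch distance; an arbitrary tuning assumption */

/* Sum table entries selected by an index array.  The address of
 * table[idx[i]] can't be known at compile time, which is exactly
 * where static scheduling stalls on a cache miss.  The "slice"
 * here is the address computation idx[i + RUN_AHEAD], executed
 * early so the dependent load is (one hopes) a cache hit by the
 * time the main computation reaches it. */
long sum_indexed(const long *table, const int *idx, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + RUN_AHEAD < n)
            __builtin_prefetch(&table[idx[i + RUN_AHEAD]], 0, 1);
        sum += table[idx[i]];
    }
    return sum;
}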

As to cost, Intel have made it clear that they are prepared to do
whatever they have to do to make the chip competitive.

As to how the big (more than 8-way) boxes behave, that's up to the
people who build the big boxes, isn't it? The future big boxes will
depend on board level interconnect and switching infrastructure, and if
anybody knows what that is going to look like in Intel's PCI Express
universe, I wish they'd tell me.

It gets harder to stick with the position all the time, but you still
have to take a deep breath when betting against Intel. The message
Intel wants you to hear is: IA-64 for mission critical big stuff, IA-32
for not-so-critical, not-so-big stuff.

No marketing baloney for you and you don't care what Intel wants you to
hear? That's reasonable and to be expected from technical people.
Itanium is where they intend to put their resources and support for
high-end applications, and they apparently have no intention of backing away
from that. Feel free to ignore what they're spending so much money to
tell you. It's your nickel.

RM
 

daytripper

One simple question about IA64...

What and whose problem does it solve?

As far as I can tell, its prime mission is to solve Intel's problem, and
rid it of those pesky cloners from at least some segments of the CPU
marketplace, hopefully an expanding portion.

It has little to do with customers' problems, in fact it makes some
problems for customers. (Replace ALL software? Why is this good for
ME?)

I love the smell of irony in the evening...

The need for humongous non-segmented memory space is a driver for "wider
addressing than ia32 provided" architectures.

The real irony is, after years of pain for everyone involved, the ia64 may
just find itself in the dustbin of perpetual non-starters because the pesky
CLONER came up with a "painless" way to extend memory addressing!
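
For what it's worth, the "painless" part is easy to show in code: under
x86-64 the flat address space simply grows, so one allocation can exceed
everything ia32 could address, with no PAE windows or segment games. A
minimal sketch for a 64-bit Linux box, with the 8 GiB figure picked
arbitrarily for illustration:

#define _GNU_SOURCE            /* for MAP_ANONYMOUS / MAP_NORESERVE */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* 8 GiB in a single flat mapping: impossible in any one ia32
     * address space, unremarkable under a 64-bit kernel. */
    size_t len = 8UL << 30;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("mapped %zu bytes at %p\n", len, p);
    munmap(p, len);
    return 0;
}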

/daytripper (simply delicious stuff ;-)
 

Dale Pontius

Robert Myers wrote:



At this point, Itanium is _still_ mostly expectation. My point in
commenting on the book that started the thread is that Intel seemed to
have no interest in lowering expectations about Itanium.

Intel will do _something_ to diminish the handicap that Itanium
currently has due to in-order execution. The least painful thing that
Intel can do, as far as I understand things, is to use speculative
slices as a prefetch mechanism. That gets a big piece of the advantages
of OoO without changing the main thread control logic at all. Whether
that strategy works at an acceptable cost in transistors and power is
another question.

That single change could rewrite the rules for Itanium, because it will
take much of the heat off compilation and allow people to see, far more
often, the kind of performance that Itanium now seems to produce mostly
only in benchmarks.
Development cost is a different thing to Intel than to most of the rest
of us. I've heard of "Intellian Hordes" (my perversion of "Mongolian
hordes"), and it sounds tough to me to coordinate the sheer number of people
they have working on a project. I contrast that with the small team we
have on projects, and our perpetual fervent wish for just a few more
people.
As to cost, Intel have made it clear that they are prepared to do
whatever they have to do to make the chip competitive.

As to how the big (more than 8-way) boxes behave, that's up to the
people who build the big boxes, isn't it? The future big boxes will
depend on board level interconnect and switching infrastructure, and if
anybody knows what that is going to look like in Intel's PCI Express
universe, I wish they'd tell me.
Actually, it's none of my business, except as an interested observer. I
don't ever foresee that kind of hardware in my home, and I don't oversee
purchases of that kind of equipment.
It gets harder to stick with the position all the time, but you still
have to take a deep breath when betting against Intel. The message
Intel wants you to hear is: IA-64 for mission critical big stuff, IA-32
for not-so-critical, not-so-big stuff.
My one stake in the IA-64 vs X86-64/IA-32e debate is that I have some
wish to run EDA software on my home machine. I like to have dinner with
the family, and it's about a half-hour each way to/from work. Having
EDA on Linux at home means I can do O.T. after dinner without a drive.

I currently have IA-32 and run EDA software, but that stuff is moving to
64-bit. I can foresee having X86-64 in my own home in the near future,
which keeps me capable. I can't see the horizon where I'll have IA-64
in my home, at the moment. In addition to EDA software, my IA-32
machine also does Internet stuff, plays Quake3, and other clearly
non-work-related things. Actually, the work is the extra mission.
No marketing baloney for you and you don't care what Intel wants you to
hear? That's reasonable and to be expected from technical people.
Itanium is where they intend to put their resources and support for
high-end applications, and they apparently have no intention of backing away
from that. Feel free to ignore what they're spending so much money to
tell you. It's your nickel.
Marketing baloney or not, it's really irrelevant at the moment. I'm a
home user, and Intel's roadmap doesn't put IA-64 in front of me within
the visible horizon. Nor do I have anything to say about purchasing
machines of that calibre at work. I *have* expressed my preference about
seeing EDA software on X86-64 - for the purpose of running it on a home
machine. So not only is it my nickel, they're not even asking me for
it. Any ruminations about IA-64 vs X86-64 are merely that - technical
discussion and rumination. Anything they're spending money to tell me
now is simply cheerleading.

For that matter, since IA-64 isn't on the Intel roadmap for home users
yet, I could well buy an X86-64 machine in the next year or two. When
it's time to step up again, I can STILL examine the IA-64 decision vs
whatever else is on the market, then.

Put simply, at the moment my choices are IA-32, X86-64, and Mac.
Period. Any discussion of IA-64 is just that - discussion, *because* I'm
a technical person.

Dale Pontius
 

Robert Myers

Dale Pontius wrote:

Development cost is a different thing to Intel than to most of the rest
of us. I've heard of "Intellian Hordes" (my perversion of "Mongolian
hordes"), and it sounds tough to me to coordinate the sheer number of people
they have working on a project. I contrast that with the small team we
have on projects, and our perpetual fervent wish for just a few more
people.

No matter how it turns out, Itanium should be safely in the books for
case studies at schools of management. To my eye, the opportunities and
challenges resemble the opportunities and challenges of big aerospace.
NASA isn't the very best example, but it's the easiest to talk about.
If you have unlimited resources and you're damned and determined to put
a man on the moon, you can do it, no matter how many people you have to
manage to get there. In the aftermath of Apollo, though, with shrinking
budgets and a chronic need to oversell, NASA delivered a Shuttle program
that many see as poorly conceived and executed. Intel and Itanium are
still in the Apollo era in terms of resources.
My one stake in the IA-64 vs X86-64/IA-32e debate is that I have some
wish to run EDA software on my home machine. I like to have dinner with
the family, and it's about a half-hour each way to/from work. Having
EDA on Linux at home means I can do O.T. after dinner without a drive.

I currently have IA-32 and run EDA software, but that stuff is moving to
64-bit. I can foresee having X86-64 in my own home in the near future,
which keeps me capable. I can't see the horizon where I'll have IA-64
in my home, at the moment. In addition to EDA software, my IA-32
machine also does Internet stuff, plays Quake3, and other clearly
non-work-related things. Actually, the work is the extra mission.

For that matter, since IA-64 isn't on the Intel roadmap for home users
yet, I could well buy an X86-64 machine in the next year or two. When
it's time to step up again, I can STILL examine the IA-64 decision vs
whatever else is on the market, then.

Put simply, at the moment my choices are IA-32, X86-64, and Mac.
Period. Any discussion of IA-64 is just that - discussion, *because* I'm
a technical person.

The one thing you might care about would be the possibility that the
standard environment for EDA went from x86/Linux to ia64/Whatever. That
could still happen, but it seems like a distant prospect right now.
Itanium seems most likely to prevail over x86-64 in proprietary
software with high license fees, but that kind of software isn't
generally running next to Quake3 now and probably won't ever be.

RM
 

Robert Myers

K said:
Robert Myers wrote:




I guess I'm trying to figure out exactly *what* you're driving at.
Performance comes with arrays of processors or complex processors.
Depending on the application, either may win, but there aren't any
simple uniprocessors at the high end. We're long past that
possibility.




Ok, now we're back to arrays. ...something which I thought you were
whining about "last" week.

If by an array you mean a stream of data and instructions, I suppose
that's general enough.

As to what I want...I think Iain McClatchie did well enough at
presenting what I thought might have been done with Blue Gene when
talking about his "WIZZIER processor" on comp.arch. You can do it for
certain classes of problems...no one doubts that. You can do it with ASICs if
you've got the money...no one doubts that. Can you build a
general-purpose "supercomputer" that way? Not easily.

We are, in any case, a long way from exhausting the architectural
possibilities.
...and neither does anyone else. Many people are hard at work
re-inventing physics. The last time I remember a significant
speed bump, IBM invested ten figures in a synchrotron for x-ray
lithography.

I thought the grand illusion was e-beam lithography.
Smarter people came up with the diffraction masks.
Sure, some of these smarter people will come around again, but the
problems go up exponentially as the feature size shrinks.

I'm looking for improvements from: low power operation (the basic
strategy of Blue Gene), improvements in packaging (Sun's slice of the
DARPA pie being one idea, albeit one I'm not crazy about), using
pipelines creatively and aggressively, and more efficient handling of
the movement of instructions and data. If we get better or even
acceptable power-frequency scaling with further scale shrinks,
naturally, I'll take it, but I'm not counting on it.

RM
 

K Williams

Dale said:
George Macdonald said:
As for JCL, I once had a JCL evangelist explain to me how he
could use JCL in ways which weren't possible on systems with
simpler control statements - conditional job steps, substitution
of actual file names for dummy
parameters, etc... "catalogued procedures"? [hazy again] The guy
was stuck in his niche of "job steps" where data used to be
massaged from one set of tapes to another and then on in another
step to be remassaged into some other record format for storing
on another set of tapes... all those steps
being necessary, essentially because of the sequential tape
storage. We'd had disks for a while but all they did was emulate
what they used to do with tapes - he just didn't get it.
I used to do JCL, back when I ran jobs on MVS. After getting used
to it, and to the fact that you allocated or deleted files using the
infamous IEFBR14, there were things to recommend it.

I didn't have much problem with JCL either, and found it rather
powerful. (and one only needed IEFBR14 for cleanup detail).
At the very
least, you edited your JCL, and it all stayed put. Then you
submitted, and it was in the hands of the gods. None (or very
little, because there were ways to kill a running job) of this
Oops! and hit Ctrl-C.

If it was your job, it was rather easy to kill. Of course I
remember when even MVS was about as secure as MSDOS. I learned
much of my MVS stuff (including what initiators were "hot") by
walking through others' JCL and code. Even the "protection"
wasn't. Simply copy the file to another pack and delete it from
the VTOC where it was originally and re-catalog it. Of course RACF
ruined all my fun. ;-) Then there were ways of "hiding" who one
was (starting TSO in the background and submitting a job from there
hid one's identity). ...much more fun than the incomprehensible *ix
stuff. ;-)
I never had to deal with tapes, fortunately. It was also
frustrating not having dynamic filenames. There were ways to
weasel around some of those restrictions, though.

Dynamic file names weren't a problem, AFAIR.
 

K Williams

daytripper said:
I love the smell of irony in the evening...

I rather like my wife doing that in the morning, so I have crisp
shirts to wear (and if you believe that...).
The need for humongous non-segmented memory space is a driver for
"wider addressing than ia32 provided" architectures.

But, but, bbbb, everyone *knows* there is no reason for 64b
processors on the desktop! Intel says so.
The real irony is, after years of pain for everyone involved, the
ia64 may just find itself in the dustbin of perpetual non-starters
because the pesky CLONER came up with a "painless" way to extend
memory addressing!

Are you implying that Intel dropped a big ball? ...or a little one,
BIG-TIME!
/daytripper (simply delicious stuff ;-)

Indeed. ...though remember: no one needs 64bits. no one needs
64bits. no one needs 64bits. no one, no one, no...
 

K Williams

Robert said:
Dale Pontius wrote:



No matter how it turns out, Itanium should be safely in the books
for
case studies at schools of management.

Rather like the Tacoma Narrows Bridge movie is required viewing for
all freshman engineers? ;-)
To my eye, the
opportunities and challenges resemble the opportunities and
challenges of big aerospace. NASA isn't the very best example, but
it's the easiest to talk about. If you have unlimited resources
and you're damned and determined to put a man on the moon, you can
do it, no matter how many people you have to
manage to get there.

....but Intel hasn't gotten there yet, if they ever will.
In the aftermath of Apollo, though, with
shrinking budgets and a chronic need to oversell, NASA delivered a
Shuttle program
that many see as poorly conceived and executed. Intel and Itanium
are still in the Apollo era in terms of resources.

No. IMHO, Intel missed the moon and the Shuttle, and went directly
to the politics of the International Space Station. ...A mission
without a requirement.
<snip>

The one thing you might care about would be the possibility that
the
standard environment for EDA went from x86/Linux to ia64/Whatever.
That could still happen, but it seems like a distant prospect
right now. Itanium seems most likely to prevail over x86-64 in
proprietary software with high license fees, but that kind of
software isn't generally running next to Quake3 now and probably
won't ever be.

I know several EDA folks have been reluctant to support Linux and
instead support Windows, for at least the low-end stuff (easier to
restrict licensing). I don't see anyone seriously going for IPF
though. It is *expensive* supporting new platforms. ...which is
why x86-64 is so attractive.
 

K Williams

Robert said:
I'm not sure what kind of complexity you are imagining. Garden
variety microprocessors are already implausibly complicated as far
as I'm concerned.

I guess I'm trying to figure out exactly *what* you're driving at.
Performance comes with arrays of processors or complex processors.
Depending on the application, either may win, but there aren't any
simple uniprocessors at the high end. We're long past that
possibility.
I have some fairly aggressive ideas about what *might* be done
with computers, but they don't necessarily lead to greatly
complicated
machines. Complicated switching fabric--probably.

Ok, now we're back to arrays. ...something which I thought you were
whining about "last" week.
No. I started out, in fact, in the building across the boneyard
from
MRL. I understand the physical limitations well enough. What I
don't know about is what might be done to get around those
limitations.

...and neither does anyone else. Many people are hard at work
re-inventing physics. The last time I remember a significant
speed bump, IBM invested ten figures in a synchrotron for x-ray
lithography. Smarter people came up with the diffraction masks.
Sure, some of these smarter people will come around again, but the
problems go up exponentially as the feature size shrinks.
I don't know what would make you think I am so naive.

You want magic, but don't seem to define what it is you even want!
Maybe I've missed your wish-list.
Process miracles would be nice, but I'm not counting on them.

What the hell *do* you want? You keep dodging the issues, but
continue to dream about a better time!
Not sure what brute force means. Faster isn't happening as far as
I'm concerned. If it does happen, that'll be nice, but I'm not
counting on
it. We are a long way from exhausting the architectural
possibilities, though.

Please, tell us more...
My crystal ball is completely dark.

It seemed to be rather brighter a couple of posts up.
 

Robert Myers

K said:
Robert Myers wrote:




Rather like the Tacoma Narrows Bridge movie is required viewing for
all freshman engineers? ;-)




...but Intel hasn't gotten there yet, if they ever will.




No. IMHO, Intel missed the moon and the Shuttle, and went directly
to the politics of the International Space Station. ...A mission
without a requirement.

The comparison to the International Space Station doesn't seem
especially apt. I made the comparison to Apollo only to make the point
that neither ambitious objectives nor the need to bring enormous
resources to bear dooms an enterprise to failure. Who knows how the
Shuttle, which was not a well-conceived undertaking to begin with, might
have fared without the ruinous political and budgetary pressure to which
the program was subjected? By comparison, Intel seems not to have
followed the path of publicly-funded technology, which is to starve
troubled programs, thereby guaranteeing even more trouble.

One is tempted to make the comparison to hot fusion, a program that,
after decades of lavish funding, has entered an old-age pension phase.
Both hot fusion and Itanium had identifiable problems involving basic
science, and in neither case have those problems yet been solved. With
Itanium, the misconception (that static scheduling can do the job) may
be so severe that the problem can't be fixed in a satisfactory way. As
to hot fusion, who knows...the physics are infinitely more complicated
than the bare Navier-Stokes equations, which themselves are the subject
of one of the Clay Institute's Millennium Problems.

Both Itanium and hot fusion have been overtaken by events. Hot fusion
has become less compelling as other less Faustian schemes for energy
production have become ever more attractive. In the case of Itanium,
who would ever have imagined that x86 would become so good? In
retrospect, an easy call, but if it were so easy in prospect, lots of
things might have happened differently. Should one fault Intel for not
foreseeing the attack of the out-of-order x86? Quite possibly, but I
wouldn't claim to understand the history well enough to make that judgment.

I know several EDA folks have been reluctant to support Linux and
instead support Windows, for at least the low-end stuff (easier to
restrict licensing).

Right now, Linux is hostile territory for compiled binaries because of
shared libraries ("What's the problem? Just recompile from source.").
Windows has an equivalent issue with "DLL hell," but Microsoft never
pretended it wasn't a problem and has been working at solving it, not
completely without success. I'm sure the Free Software Foundation would
be just as happy if the problem were never addressed (the biggest
problems I've encountered have been with GLIBC), but with Linux spending
so much of its time playing a real OS on TV, it seems inevitable that it
will be addressed. For the moment, though, companies like IBM can't be
completely unhappy that professional support or hacker status is almost
a necessity for using proprietary applications with Linux.
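
To put something runnable behind the shared-library point: a binary
compiled against one glibc can find quite another at run time, which is
most of what makes compiled binaries fragile on Linux. A small
glibc-specific sketch - the API calls are real glibc ones, but the
scenario of an installer refusing a too-old libc is my own invented
example:

#include <stdio.h>
#include <features.h>          /* __GLIBC__, __GLIBC_MINOR__ */
#include <gnu/libc-version.h>  /* gnu_get_libc_version() */

int main(void)
{
    /* What the binary was compiled against vs. what it found at
     * run time; a vendor's installer could compare the two and
     * bail out rather than crash mysteriously later. */
    printf("built against glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
    printf("running against glibc %s\n", gnu_get_libc_version());
    return 0;
}
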
I don't see anyone seriously going for IPF
though. It is *expensive* supporting new platforms. ...which is
why x86-64 is so attractive.

Intel's real mistake with Itanium, I think. It's a problem even for
PowerPC.

RM
 

Dale Pontius

Robert Myers wrote:


No. IMHO, Intel missed the moon and the Shuttle, and went directly
to the politics of the International Space Station. ...A mission
without a requirement.
Every now and then, I have to pop up and defend the ISS.

I must agree that at the moment, the ISS has practically NO value to
science. But I must disagree that it has NO value, at all.

At one point it had, and perhaps may have again, value in diplomacy
and fostering international cooperation.

But IMHO the real value of the ISS is not as a SCIENCE experiment, but
as an ENGINEERING experiment. The fact that we're having such a tough
time with it indicates that it is a HARD problem. It's clearly a third
generation space station. The first generation was preassembled, like
Skylab and Salyut, perhaps with a little unfurling and maybe a gizmo or
two docked, but primarily ground-assembled, and sent up. The second
generation was Mir, with a bunch of ground-assembled pieces sent up and
docked. There's some on-orbit assembly, but it's still largely a thing
of the ground.

The ISS has modules all built on the ground, obviously. But the
on-orbit assembly is well beyond that of Mir. It's the next step of a
logical progression.

Some look and say it's hard, let's stop. I say that until we solve the
'minor' problems of the ISS, we're NEVER going to get to anything like
Von Braun's (or 2001: ASO) wheels. Zubrin's proposal, in order to avoid
requiring an expensive space station, went to the extreme of having
nothing to do with one, even if it already were to exist. But until we
get to some sort of on-orbit, or at least off-Earth, assembly
capability, we're going to be limited to something within the
30ft-or-less diameter that practically everything we've ever sent up
has had.

Oh, the ISS orbit is another terrible obstacle. But at the moment, it
clearly permits Russian launches, and the program would be in even
worse trouble without them.

But IMHO, the ENGINEERING we're learning, however reluctantly and
slowly, is ESSENTIAL to future steps in space.

Dale Pontius
 

K Williams

Dale said:
Every now and then, I have to pop up and defend the ISS.

Ok, I'll play devil. ;-)
I must agree that at the moment, the ISS has practically NO value
to science. But I must disagree that it has NO value, at all.

At one point it had, and perhaps may have again, value in
diplomacy and fostering international cooperation.

Where's the beef? I *did* say "to the *politics* (emphasis added)
of the International Space Station". ;-)
But IMHO the real value of the ISS is not as a SCIENCE experiment,
but as an ENGINEERING experiment. The fact that we're having such
a tough time with it indicates that it is a HARD problem. It's
clearly a third generation space station. The first generation was
preassembled, like Skylab and Salyut, perhaps with a little
unfurling and maybe a gizmo or two docked, but primarily
ground-assembled, and sent up. The second generation was Mir, with
a bunch of ground-assembled pieces sent up and docked. There's
some on-orbit assembly, but it's still largely a thing of the
ground.

It's absolutely an engineering experiment. We already knew the
"science". Though there are problems, it went together more easily
than most erector-set projects (surprising all). The problems,
IMO, have been mostly political (and as a subset, financial).
The ISS has modules all built on the ground, obviously. But the
on-orbit assembly is well beyond that of Mir. It's the next step
of a logical progression.

Progression to what? I see no grand plan that requires ISS.
Freedom was cut down to "Fred" because of the massive costs, then
morphed into ISS when it turned into a political tool.
Some look and say it's hard, let's stop. I say that until we solve
the 'minor' problems of the ISS, we're NEVER going to get to
anything like Von Braun's (or 2001: ASO) wheels. Zubrin's
proposal, in order to avoid requiring an expensive space station,
went to the extreme of having nothing to do with one, even if it
already were to exist. But until we get to some sort of on-orbit,
or at least off-Earth, assembly capability, we're going to be
limited to something within the 30ft-or-less diameter that
practically everything we've ever sent up has had.

I simply don't see ISS as interesting science or engineering. It's
a cut-down compromise done on the cheap with a very foggy mission
statement. It seems politics rules any possible science. There
was a good article (titled "1000 days", or some such) on this in
the last issue of _Air_and_Space_.
Oh, the ISS orbit is another terrible obstacle. But at the moment,
it clearly permits Russian launches, and the program would be in
even worse trouble without them.

Sure. A 57 degree inclination is useful for other reasons, as well.
The 25 degree orbit out of the cape would save little, other than
fuel. A polar or even sun-synchronous orbit would be "interesting"
too, but for "other" reasons, which wouldn't be in the spirit of
the ISS. ;-)
But IMHO, the ENGINEERING we're learning, however reluctantly and
slowly, is ESSENTIAL to future steps in space.

I disagree, in that ISS isn't doing what was promised. It is not
providing anything essential to the progress, since we don't even
know what we're progressing to.
 
