Might be a book that even R. Myers can love :-)

K

K Williams

Felger Carbon said:
According to the first sentence of Chapter 43 of Siewiorek, Bell, and
Newell's "Computer Structures: Principles and Examples", the
first
6600 was delivered in Oct 64. The 6600 project was begun in the
summer of 1960.

Ok, so they were contemporaries. ...it's hardly the case that IBM was
somehow shocked by the 6600 and so came out with the '360. The design point
for the '360 was to have a consistent ISA from top to bottom, even
though the underlying hardware was *quite* different. *That* was
the stroke of genius. Anyone can do amazing things with hardware
if one has a clean sheet of paper. ...and that was the norm at the
time. S/360 acknowledged that there was something more important
than hardware. ...and that is why it's still here.
 
K

K Williams

George said:
I guess it's likely folklore but I know that when the 7074 was to
be replaced in a certain office of a multinational corp in 1967,
the S/360 was the obvious and natural replacement for the DP side
of things; OTOH there was serious consideration given to Univac
1108 or CDC 6600 for technical & scientific work, which had often
been done on a 7094... and often at
extortionate time-lease terms. IOW it wasn't clear that the S/360
could hack it for the latter - turned out that it was dreadfully
slow but near
tolerable... if people worked late:-( and got much better later.
Certainly the performance of S/360 fell way short of expected
performance as "sold" - I can bore you with the details if you
wish.:)

Sure. Perhaps some people learned that no one key fits all locks.
The whole purpose of the S/360 was to make the *software* uniform,
from top to bottom. Yes, that is far more important to business
than to geeks. The company's name isn't an accident. Indeed, that's
who has the $$. ;-)

....OTOH, the oil exploration geeks rather liked the S/370VF.

The CDC 6000 Series didn't become Cyber Series till ~1972[hazy];
before that there was 6200, 6400, 6500 and 6600... and there was the
notorious 7600 in between.

Ok, I admit that I was hazy about what the difference was between a
"6600" and a "Cyber-6600" (really crappy marketing;). At UIUC the
students used a 360/75 (a "scientific" version of the S/360). The
business stuff was run across campus on something-or-other. The CS
weenies did have 6600's and 7600's, and were indeed anti-IBM weens.
Plato was CDC based. Go figure, people had different needs.

Dates of working hardware are difficult to pin down - supposedly
secret confidential data often went astray, and announced
availability and deliverables were, umm, fungible. The story is
probably a bit folklorish but no doubt that IBM was seriously
threatened by Univac and CDC in the technical computing arena.

The S/360 was announced on April 7, 1964, IIRC. It was shipped
shortly after. I'm quite sure it took a little time to design and
build the widgets. ;-)
 
K

K Williams

Robert said:
George said:
I guess it's likely folklore but I know that when the 7074 was to
be replaced in a certain office of a multinational corp in 1967,
the S/360 was the obvious and natural replacement for the DP side
of things; OTOH there was serious consideration given to Univac
1108 or CDC 6600 for technical & scientific work, which had often
been done on a 7094... and often at
extortionate time-lease terms. IOW it wasn't clear that the
S/360 could hack it for the latter - turned out that it was
dreadfully slow but near
tolerable... if people worked late:-( and got much better later.
Certainly the performance of S/360 fell way short of expected
performance as "sold" - I can bore you with the details if you
wish.:)

The CDC 6000 Series didn't become Cyber Series till ~1972[hazy];
before that there was 6200, 6400, 6500 and 6600... and there was
the notorious
7600 in between. Dates of working hardware are difficult to pin
down - supposedly secret confidential data often went astray and
announced
availability and deliverables were, umm, fungible. The story is
probably a bit folklorish but no doubt that IBM was seriously
threatened by Univac and CDC in the technical computing arena.

Threatened? :). The outline of the folklore you report is the
folklore I started my career with: CDC (later Cray) for hydro
codes, IBM
for W-2 forms. Lynn Wheeler's posts to comp.arch have helped me
to understand how it was that IBM sold machines at all, because,
as far as I could tell, they were expensive and slow, JCL was
descended from some language used in Mordor, and the batch system
was designed for people who knew ahead of time what resources a
job would need (that is to say, it was designed for people
counting W-2 forms and not for people doing
research). My impression of IBM software was fixed by my early
experience with the Scientific Subroutine Package, and even the
compilers were buggy for the kinds of things I wanted to use--no
problem for financial applications, where there was (as yet) no
requirement for double precision complex arithmetic.

One is tempted to summarize the Stretch/360 experiences as: "How
IBM
learned to love banks and to hate the bomb." In retrospect, IBM's
misadventure with Stretch might be regarded as a stroke of luck.
An analyst too close to the action might have regarded IBM's being
pushed out of technical computing in the days of the Space Race as
a disaster, but the heady days of "If it's technical, it must be
worth doing" were actually over, and IBM was in the more lucrative
line of business, anyway.

Ok, answer this question: Where is the money?

....even John Dillinger knew the answer! ;-)
 
K

K Williams

Robert said:
It is the style of business and not the plentiful supply of money
that makes banks and insurance companies attractive as clients for
IBM.

Certainly. ...and that's *exactly* my point.
Under the right circumstance, money can pour from the heavens
for national security applications, and it will pour from the
heavens for
biotechnology and entertainment. Whatever you may think of that
kind of business, IBM wants a piece of it.

Nonsense. IBM does a coupla tens-o-$billions in commercial stuff
each year. There is no defined "government" market that's even
close. Even most government problems can be refined down to
"counting W2's".
From a technical standpoint, there is no company I know of better
positioned than IBM to dominate high performance computing, the
future
of which is not x86 (and not Itanium, either). Will IBM do it?

IMO, no. ...unless it fits into one of the research niches. The
HPC market is so muddled that IBM would be crazy to risk major
money jumping in. Certainly there is dabbling going on, and if
Uncle is going to fund research there will be someone to soak up
the grant.
If the past is any guide, IBM will be skunked again, but there is
always a first time.

You see it differently than the captains of the ship do. The money
is where, well, the money is. It's a *lot* more profitable selling
what you know (and have) to customers you know (and who need what
you have) than to risk developing what someone thinks is needed but
isn't willing to pay for.

As much as you (and indeed I) may wish otherwise, IBM is *not* in
the risk business these days. If it's not a sure thing it will
simply not be funded. Sure a few bucks for another deep-purple or
a letterbox commercial works...
 
R

Robert Myers

K said:
As much as you (and indeed I) may wish otherwise, IBM is *not* in
the risk business these days. If it's not a sure thing it will
simply not be funded. Sure a few bucks for another deep-purple or
a letterbox commercial works...

I'm not smart enough to understand what's IBM and what's Wall Street,
but I agree with you that bold initiatives are something we should not
be looking for from IBM, and the wizards in Washington are as keen as
everyone else to buy off the shelf these days.

RM
 
R

Robert Myers

K said:
Exactly. Off-the-shelf is "cheap". ...even if it doesn't work. ;-)

Is it too optimistic to imagine that we may be coming to some kind of
closure? That you can do so much with off-the-shelf hardware is both an
opportunity and a trap. The opportunity is that you can do more for
less. The trap is that you may not be able to do enough or nearly as
much as you might do if you were a bit more adventurous.

It apparently didn't take too many poundings from clusters of boxes at
supercomputer shows to drive both the customers and the manufacturers of
big iron into full retreat. The benchmark that has been used to create
and celebrate those artificial victories was almost _designed_ to create
such an outcome, and the Washington wizards, understandably tired of
being made fools of, have run up the white flag--with the exception of
the Cray X-1, which didn't get built without significant pressure.

I'm hoping that AMD makes commodity eight-way Opteron work and that it
is popular enough to drive significant market competition. Then my
battle cry will be: don't waste limited research resources trying to be
a clever computer builder--what can you do with whatever you want to
purchase or build that you can't do with an eight-way Opteron?

The possibilities for grand leaps just don't come from plugging
commodity boxes together, or even from plugging boards of commodity
processors together. If you can't make a grand leap, it really isn't
worth the bother (that's the statement that makes enemies for me--people
may not know how to do much else, but they sure do know how to run cable).

Just a few years ago, I thought commodity clusters were a great idea.
The more I look at the problem, the more I believe that off the shelf
should be really off the shelf, not do-it-yourself. It's not that the
do it yourself clusters can't do more for cheap--they can--they just
don't do enough more to make it really worth the bother.

Processors with *Teraflop* capabilities are a reality, and not just in
artificially inflated numbers for game consoles. Not only do those
teraflop chips wipe the floor with x86 and Itanium for the problems you
really need a breakthrough for, they don't need warehouses full of
routers, switches, and cable to get those levels of performance.

Clusters of very low-power chips, a la Blue Gene, were not a dumb idea;
the idea just isn't bold enough--you still need those warehouses, a separate
power plant to provide power and cooling, and _somebody_ is paying for
the real estate, even if it doesn't show up in the price of the machine.
_Maybe_ some combination of Moore's law, network on a chip, and a
breakthrough in board level interconnect could salvage the future of
conventional microprocessors for "supercomputing," but right now, the
future sure looks like streaming processors to me, and not just because
they remind me of the Cray 1.

Streaming processors a slam dunk? Apparently not. They're hard to
program and inflexible. IBM is the builder of choice for them at the
moment. Somebody else, though, will have to come up with the money.

RM
 
K

K Williams

Robert said:
I'm not smart enough to understand what's IBM and what's Wall
Street, but I agree with you that bold initiatives are something
we should not be looking for from IBM, and the wizards in
Washington are as keen as everyone else to buy off the shelf these
days.

Exactly. Off-the-shelf is "cheap". ...even if it doesn't work. ;-)
 
R

Rupert Pigott

Robert said:
big iron into full retreat. The benchmark that has been used to create
and celebrate those artificial victories was almost _designed_ to create
such an outcome, and the Washington wizards, understandably tired of
being made fools of, have run up the white flag--with the exception of
the Cray X-1, which didn't get built without significant pressure.

Not pressure, $$$.
Clusters of very low-power chips, a la Blue Gene, were not a dumb idea;
the idea just isn't bold enough--you still need those warehouses, a separate
power plant to provide power and cooling, and _somebody_ is paying for
the real estate, even if it doesn't show up in the price of the machine.

BG significantly raises the bar on density and power consumption. The
real issue with it is: can folks make use of it effectively? As far as
the mechanicals go, the papers say that BG/L is scalable from a single
shelf to the full warehouse.

In fact the things which stand out about BG/L for me are how lean it is,
and how they've designed the thing from the ground up with MTBF and
servicing in mind. A bunch of whiteboxes hooked up by some 3rd party
interconnect just can't beat that.

BG/L is built to be practical; from what I've read about how those
folks approached the problem, I don't think sex appeal really came into
it. :)

"Compared with today's fastest supercomputers, it will be six times
faster, consume 1/15th the power per computation and be 10 times more
compact than today's fastest supercomputers"

Quite some goal. In any other frame of reference that would be a huge
stride forward... Imagine if Intel or AMD could claim that for the
successor to the P4 or K8 ! :)

Cheers,
Rupert
 
R

Robert Myers

Rupert said:
BG significantly raises the bar on density and power consumption. The
real issue with it is: can folks make use of it effectively? As far as
the mechanicals go, the papers say that BG/L is scalable from a single
shelf to the full warehouse.

In fact the things which stand out about BG/L for me are how lean it is,
and how they've designed the thing from the ground up with MTBF and
servicing in mind. A bunch of whiteboxes hooked up by some 3rd party
interconnect just can't beat that.

I think we're agreed on that.

"Compared with today's fastest supercomputers, it will be six times
faster, consume 1/15th the power per computation and be 10 times more
compact than today's fastest supercomputers"

Those are compelling numbers, even by the harsh standard I use, which is
to take the fourth root of the claimed miracle as the real payoff
(because that's how much more hydro you can really do).
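
To spell out where the fourth root comes from (the usual 3-D-plus-time
scaling argument): refining the grid of an explicit hydro calculation by
a factor k costs

\[
k^{3}\ (\text{space}) \times k\ (\text{CFL-limited time step}) = k^{4}
\quad\Longrightarrow\quad
k \sim (\text{speedup})^{1/4},
\]

so a claimed factor-of-six speedup buys only about 6^(1/4), roughly 1.6
times, finer resolution.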

We need to be aiming at qualitative changes in how we do business,
though. With network on a chip and significant improvements in
board-level packaging, maybe we can get there with conventional
microprocessors in a Blue Gene architecture--but unless there is some
miracle I don't know about in the offing, we're going to need those
improvements and more, especially since, if scaling really hasn't fallen
apart at 90nm, nobody is saying so.

By comparison, we can do teraflop on a chip _now_ with streaming
technology. That's really hard to ignore, and we do need those
teraflops, and more.

RM
 
R

Rupert Pigott

Robert Myers wrote:

[SNIP]
By comparison, we can do teraflop on a chip _now_ with streaming
technology. That's really hard to ignore, and we do need those
teraflops, and more.

Yes, but can you do anything *useful* with that streaming pile of
TeraFLOP ? :)

I still can't see what this Streaming idea is bringing to the table
that's fundamentally new. It still runs into the parallelisation
wall eventually; it's just Yet Another Coding Paradigm. :/

Cheers,
Rupert
 
K

K Williams

Robert said:
Is it too optimistic to imagine that we may be coming to some kind
of closure? That you can do so much with off-the-shelf hardware
is both an opportunity and a trap. The opportunity is that you
can do more for
less. The trap is that you may not be able to do enough or nearly
as much as you might do if you were a bit more adventurous.

Gee, fantasy meets reality, once again. The reality is that what we
have is "good enough". It's up to you softies to make your stuff
fit within the hard realities of physics. That is, it's *all*
about algorithms. Don't expect us hardware types to bail you out
of your problems anymore. We're knocking on the door of hard
physics, so complain to the guys across the Boneyard from MRL.
It apparently didn't take too many poundings from clusters of
boxes at supercomputer shows to drive both the customers and the
manufacturers of
big iron into full retreat.

Perhaps because *cheap* clusters could solve the "important"
problems, given enough thought? Of course the others are deemed to
be "unimportant", by definition. ...at least until there is a
solution. ;-)
The benchmark that has been used to
create and celebrate those artificial victories was almost
_designed_ to create such an outcome, and the Washington wizards,
understandably tired of being made fools of, have run up the white
flag--with the exception of the Cray X-1, which didn't get built
without significant pressure.
Ok...

I'm hoping that AMD makes commodity eight-way Opteron work and
that it
is popular enough to drive significant market competition. Then
my battle cry will be: don't waste limited research resources
trying to be a clever computer builder--what can you do with
whatever you want to purchase or build that you can't do with an
eight-way Opteron?

I'm hoping for the same. ...albeit for a different reason.
The possibilities for grand leaps just don't come from plugging
commodity boxes together, or even from plugging boards of
commodity
processors together. If you can't make a grand leap, it really
isn't worth the bother (that's the statement that makes enemies
for me--people may not know how to do much else, but they sure do
know how to run cable).

IMHO, we're not going to see any grand leaps in hardware. We have
some rather hard limits here. "186,000mi/sec isn't just a good
idea, it's the *LAW*", sort of thing.

No doubt we're currently running into what amounts to a technology
speed bump, but there *are* some hard limits we're starting to see.
It's up to you algorithm types now. ;-)
Just a few years ago, I thought commodity clusters were a great
idea. The more I look at the problem, the more I believe that off
the shelf
should be really off the shelf, not do-it-yourself. It's not that
the do it yourself clusters can't do more for cheap--they
can--they just don't do enough more to make it really worth the
bother.

Why should the hardware vendor anticipate what *you* want? You pay,
they listen. This is a simple fact of life.
Processors with *Teraflop* capabilities are a reality, and not
just in
artificially inflated numbers for game consoles. Not only do
those teraflop chips wipe the floor with x86 and Itanium for the
problems you really need a breakthrough for, they don't need
warehouses full of routers, switches, and cable to get those
levels of performance.

So buy them. I guess I don't understand your problem. They're
reality, so...
Clusters of very low-power chips, a la Blue Gene, were not a dumb
idea; the idea just isn't bold enough--you still need those warehouses,
a separate power plant to provide power and cooling, and
_somebody_ is paying for the real estate, even if it doesn't show
up in the price of the machine.
_Maybe_ some combination of Moore's law, network on a chip, and
a
breakthrough in board level interconnect could salvage the future
of conventional microprocessors for "supercomputing," but right
now, the future sure looks like streaming processors to me, and
not just because they remind me of the Cray 1.

Yawn! So go *do* it. The fact is that it would be there if there
was a market. No, likely not from IBM, at least until someone else
proved there was $billions to be made. IBM is all about $billions.
Streaming processors a slam dunk? Apparently not. They're hard
to
program and inflexible. IBM is the builder of choice for them at
the
moment. Somebody else, though, will have to come up with the
money.

Builder, perhaps. Architect/proponent/financier? I don't think
so. ...at least not the way this peon sees things. I've had many
wishes over the years; this doesn't even come close to my list of
"good ideas wasted on dumb management".
 
R

Robert Myers

Rupert said:
Robert Myers wrote:

[SNIP]
By comparison, we can do teraflop on a chip _now_ with streaming
technology. That's really hard to ignore, and we do need those
teraflops, and more.


Yes, but can you do anything *useful* with that streaming pile of
TeraFLOP ? :)

The long range forces part of the molecular dynamics calculation is
potentially a tight little loop where the fact that it takes many cycles
to compute a reciprocal square root wouldn't matter if the calculation
were streamed.
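
To make the shape of that loop concrete, here's a rough C sketch
(illustrative only -- the names and the inverse-square force law are
invented for the example, not taken from any particular MD code). Each
(i,j) pair is independent, so the long-latency reciprocal square root in
the middle is exactly the sort of thing a streamed pipeline can hide as
long as the pairs keep flowing:

  /* Illustrative only: pairwise inverse-square-law force accumulation.
     Every iteration of the inner loop is independent, so the latency of
     1/sqrt(r^2) can be hidden if the loop is pipelined/streamed. */
  #include <math.h>

  void long_range_forces(int n, const double *x, const double *y,
                         const double *z, const double *q,
                         double *fx, double *fy, double *fz)
  {
      for (int i = 0; i < n; i++) {
          double axi = 0.0, ayi = 0.0, azi = 0.0;
          for (int j = 0; j < n; j++) {
              if (j == i) continue;
              double dx = x[j] - x[i];
              double dy = y[j] - y[i];
              double dz = z[j] - z[i];
              double r2   = dx*dx + dy*dy + dz*dz;
              double rinv = 1.0 / sqrt(r2);      /* the slow step */
              double s    = q[i] * q[j] * rinv * rinv * rinv;
              axi += s * dx;
              ayi += s * dy;
              azi += s * dz;
          }
          fx[i] = axi; fy[i] = ayi; fz[i] = azi;
      }
  }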

There are many such opportunities to do something useful. There are
circumstances where you can't do streaming parallelism naively because
of well-known pipeline hazards, but, as always, there are ways to cheat
the devil.
I still can't see what this Streaming idea is bringing to the table
that's fundamentally new. It still runs into the parallelisation
wall eventually; it's just Yet Another Coding Paradigm. :/

In a conventional microprocessor, the movement of data and progress
toward the final answer are connected only in the most vaguely
conceptual way: out of memory, into the cache, into a register, into an
execution unit, into another register, back into cache,... blah, blah,
blah. All that chaotic movement takes time and, even more important,
energy. In a streaming processor, data physically move toward the exit
and toward a final answer.
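
A crude way to see the difference in plain C (an analogy only, not how a
real streaming processor is actually programmed): the same three-stage
computation written as three separate passes over memory and as one
fused pass. The arithmetic is identical; what changes is how many times
each datum crosses the memory interface before it becomes an answer.

  #include <stddef.h>

  /* Conventional shape: every stage is a full round trip to memory. */
  void three_passes(size_t n, const float *in, float *tmp, float *out)
  {
      for (size_t i = 0; i < n; i++) tmp[i] = in[i] * 2.0f;     /* pass 1 */
      for (size_t i = 0; i < n; i++) tmp[i] = tmp[i] + 1.0f;    /* pass 2 */
      for (size_t i = 0; i < n; i++) out[i] = tmp[i] * tmp[i];  /* pass 3 */
  }

  /* Streaming shape: each datum is read once, flows through the whole
     chain of operations "toward the exit", and is written once. */
  void one_pass(size_t n, const float *in, float *out)
  {
      for (size_t i = 0; i < n; i++) {
          float v = in[i] * 2.0f + 1.0f;
          out[i]  = v * v;
      }
  }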

Too simple a view? By a country mile to be sure. Some part of almost
all problems will need a conventional microprocessor. For problems that
require long range data movement, getting the streaming paradigm to work
even in the crudest way above the chip level will be... challenging.

Fortunately, there is already significant experience from graphics
programming with what can be accomplished by way of streaming
parallelism, and we don't have to count on anybody with a big checkbook
waking up from their x86 hangover to see these ideas explored more
thoroughly: Playstation 3 and the associated graphics workstation will
make it happen.

Yet Another Coding Paradigm? I can live with that, but I think it's a
more powerful paradigm than you do, plainly.

RM
 
R

Robert Myers

K said:
Gee, fantasy meets reality, once again. The reality is that what we
have is "good enough". It's up to you softies to make your stuff
fit within the hard realities of physics. That is, it's *all*
about algorithms. Don't expect us hardware types to bail you out
of your problems anymore. We're knocking on the door of hard
physics, so complain to the guys across the Boneyard from MRL.

You seem to think that the complexity of the problems to be solved is
arbitrary, but it's not. It would be naive to assume that everything
possible has been wrung out of the algorithms, but it would be equally
naive to think that problems we want so badly to be able to solve will
ever be solved without major advances in hardware.

As to the physics...I wish I even had a clue.
Perhaps because *cheap* clusters could solve the "important"
problems, given enough thought?

That's been the delusion, and that's exactly what it is: a delusion.
Of course the others are deemed to
be "unimportant", by definition. ...at least until there is a
solution. ;-)

And that's why us "algorithm" types can't afford to ignore hardware: the
algorithms and even the problems we can solve are dictated by hardware.

IMHO, we're not going to see any grand leaps in hardware. We have
some rather hard limits here. "186,000mi/sec isn't just a good
idea, it's the *LAW*", sort of thing.

For the purpose of doing computational physics, the speed of light is a
limitation on how long it takes to satisfy data dependencies in a single
computational step. For the bogey protein-folding calculation in Allen
et al., we need to do 10^11 steps. One microsecond is 300 meters (3x10^8
m/s x 10^-6 s). If we can jam the computer into a 300 meter sphere,
then a calculation that took one crossing time per time step would take
10^5 seconds, or about 30 hours. The Blue Gene document estimates 3
years for such a calculation, thereby allowing for more like 1000 speed
of light crossings per time step. To make the calculation go faster, we
need to reduce the number of speed of light crossings required or to
reduce the size of the machine.
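
Just to spell the arithmetic out, the same back-of-the-envelope in C
(nothing here beyond the numbers already quoted above):

  #include <stdio.h>

  int main(void)
  {
      double steps    = 1e11;          /* time steps for the bogey folding run */
      double c        = 3e8;           /* speed of light, m/s                  */
      double size     = 300.0;         /* machine jammed into a ~300 m sphere  */
      double crossing = size / c;      /* one light crossing: 1e-6 s           */

      double one_crossing_per_step = steps * crossing;       /* 1e5 s          */
      double three_years           = 3.0 * 365.0 * 86400.0;  /* ~9.5e7 s       */

      printf("one crossing per step: %.0f s (~%.0f hours)\n",
             one_crossing_per_step, one_crossing_per_step / 3600.0);
      printf("a 3-year estimate implies ~%.0f crossings per step\n",
             three_years / one_crossing_per_step);
      return 0;
  }
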
No doubt we're currently running into what amounts to a technology
speed bump, but there *are* some hard limits we're starting to see.
It's up to you algorithm types now. ;-)

All previous predictions of the end of the road have turned out to be
premature, so I'm hesitant to join the chorus now, no matter how clear
the signs may seem to be.

So buy them. I guess I don't understand your problem. They're
reality, so...

Before silicon comes a simulation model, and there are, indeed, better
ways to be approaching that problem than to be chatting about it on csiphc.

Builder, perhaps. Architect/proponent/financier? I don't think
so. ...at least not the way this peon sees things. I've had many
wishes over the years; this doesn't even come close to my list of
"good ideas wasted on dumb management".

IBM, and those who might be concerned with what might happen to the
technical capabilities it might possess, have more pressing concerns
than whether IBM should be going into supercomputers or not, and I don't
think IBM should, so we seem to be agreed about that.

RM
 
D

Dale Pontius

George Macdonald said:
As for JCL, I once had a JCL evangelist explain to me how he could use JCL
in ways which weren't possible on systems with simpler control statements -
conditional job steps, substitution of actual file names for dummy
parameters etc... "catalogued procedures"?[hazy again] The guy was stuck
in his niche of "job steps" where data used to be massaged from one set of
tapes to another and then on in another step to be remassaged into some
other record format for storing on another set of tape... all those steps
being necessary, essentially because of the sequential tape storage. We'd
had disks for a while but all they did was emulate what they used to do
with tapes - he just didn't get it.
I used to do JCL, back when I ran jobs on MVS. After getting used to it,
and the fact that you allocated or deleted files using the infamous
IEFBR14, there were things to recommend it. At the very least, you edited
your JCL, and it all stayed put. Then you submitted, and it was in the
hands of the gods. None (or very little, because there were ways to kill
a running job) of this Oops! and hit Ctrl-C.

I never had to deal with tapes, fortunately. It was also frustrating not
having dynamic filenames. There were ways to weasel around some of those
restrictions, though.

Dale Pontius
 
D

Dale Pontius

I think the technical merits were right up there as well.
What other system had a control store that required an air-pump to operate?;-)
When a former boss had a service anniversary, they brought him a 'gift'.
It was one of those thingies that needed an air pump to operate, also
known as CCROS. I suspect it meant Capacitive-Coupled Read-Only Storage.
The slick thing was that it was a ROM you could program with a keypunch.
Not very dense, though. 36KB in 2 or 3 cubic feet.

Dale Pontius
 
D

Dale Pontius

The technical merits may not be all that important one way or the other.
If enough software developers feel they can forego Itanium, Itanium
will die. If a software developer decides to forgo Itanium and loses
enough important clients in the enterprise space because of it, the
software developer may be shut out of the most lucrative part of the market.
One simple question about IA64...

What and whose problem does it solve?

As far as I can tell, its prime mission is to solve Intel's problem, and
rid it of those pesky cloners from at least some segments of the CPU
marketplace, hopefully an expanding portion.

It has little to do with customers' problems, in fact it makes some
problems for customers. (Replace ALL software? Why is this good for
ME?)

People often make much of making technical merit subservient to
marketing considerations. IMHO as long as the proposed solution is
'good enough' and helps customers solve their problems, that's true.
The IA64 has thus far not been 'good enough' in that it did well in some
specialized benchmarks and applications, but was not 'nutritionally
complete' for anything other than a few scientific missions. It is just
possible that X86-64 has closed the window of opportunity on IA64,
simply because of the software replacement issue. It's a heck of a lot
easier to step into X86-64 from the software perspective. It solves
customers' migration problems better.

The question for IA64 becomes can it bring enough to the table on future
revisions to make up for its obstacles. Will >8-way become compelling,
and at what price? At this point, AMD is trying to push its Opteron ASPs
up, but probably has more flex room than IA64 or Xeon.

Dale Pontius
 
R

Robert Myers

K said:
You seem to think that computers can become arbitrarily complex and
still be realizable. ...though "useful" and "economical" will run
out far faster. Yes, I believe there are real limits to our
knowledge, and tools. Perhaps that limit is a couple of decades
(though I believe somewhat nearer) away, but it is real.

I'm not sure what kind of complexity you are imagining. Garden variety
microprocessors are already implausibly complicated as far as I'm
concerned.

I have some fairly aggressive ideas about what *might* be done with
computers, but they don't necessarily lead to greatly complicated
machines. Complicated switching fabric--probably.
Gee, I thought you were plugged into that "physics" stuff too.
Perhaps you just like busting concrete? ;-)

No. I started out, in fact, in the building across the boneyard from
MRL. I understand the physical limitations well enough. What I don't
know about is what might be done to get around those limitations.

Perhaps, but you *must* deal with the hardware that is real. There
is no incentive to create random hardware, hoping that it will be
good at solving some random problem. Face it, we're not back in
the '60s. This is a mature business now. You have to pay to play,
or pick the bones of the market drivers. Since no one on your side
(with any $$) even knows what they want...

I don't know what would make you think I am so naive.

It must be grand to ignore realities, like power and atomic sizes.
Sure there is work to be done, but the features are already down to
a few atomic thicknesses, and tunneling is already a *BITCH*. It's
not going to get better. If the world was a vacuum it couldn't
suck worse. ;-)

Process miracles would be nice, but I'm not counting on them.
It's up to you folks to tell us how to get around the power
problems. They ain't going to go away by brute force.
Not sure what brute force means. Faster isn't happening as far as I'm
concerned. If it does happen, that'll be nice, but I'm not counting on
it. We are a long way from exhausting the architectural possibilities,
though.

I'm not saying we're at the end of the line today, but things don't
look good for Moore over the next decade. We've seen other issues
fall, but the atom isn't getting much smaller.

My crystal ball is completely dark.

RM
 
K

K Williams

Robert said:
You seem to think that the complexity of the problems to be solved
is
arbitrary, but it's not. It would be naive to assume that
everything possible has been wrung out of the algorithms, but it
would be equally naive to think that problems we want so badly to
be able to solve will ever be solved without major advances in
hardware.

You seem to think that computers can become arbitrarily complex and
still be realizable. ...though "useful" and "economical" will run
out far faster. Yes, I believe there are real limits to our
knowledge, and tools. Perhaps that limit is a couple of decades
(though I believe somewhat nearer) away, but it is real.
As to the physics...I wish I even had a clue.

Gee, I thought you were plugged into that "physics" stuff too.
Perhaps you just like busting concrete? ;-)
That's been the delusion, and that's exactly what it is: a
delusion.

Note "Important". There will always be problems left for the next
generation. Meanwhile there are important problems that can be
solved using the tools we have.
And that's why us "algorithm" types can't afford to ignore
hardware: the algorithms and even the problems we can solve are
dictated by hardware.

Perhaps, but you *must* deal with the hardware that is real. There
is no incentive to create random hardware, hoping that it will be
good at solving some random problem. Face it, we're not back in
the '60s. This is a mature business now. You have to pay to play,
or pick the bones of the market drivers. Since no one on your side
(with any $$) even knows what they want...
For the purpose of doing computational physics, the speed of light
is a limitation on how long it takes to satisfy data dependencies
in a single
computational step. For the bogey protein-folding calculation in
Allen
et al., we need to do 10^11 steps. One microsecond is 300 meters
(3x10^8
m/s x 10^-6 s). If we can jam the computer into a 300 meter
sphere, then a calculation that took one crossing time per time
step would take
10^5 seconds, or about 30 hours. The Blue Gene document estimates
3 years for such a calculation, thereby allowing for more like
1000 speed
of light crossings per time step. To make the calculation go
faster, we need to reduce the number of speed of light crossings
required or to reduce the size of the machine.

It must be grand to ignore realities, like power and atomic sizes.
Sure there is work to be done, but the features are already down to
a few atomic thicknesses, and tunneling is already a *BITCH*. It's
not going to get better. If the world was a vacuum it couldn't
suck worse. ;-)

It's up to you folks to tell us how to get around the power
problems. They ain't going to go away by brute force.
All previous predictions of the end of the road have turned out to
be premature, so I'm hesitant to join the chorus now, no matter
how clear the signs may seem to be.

I'm not saying we're at the end of the line today, but things don't
look good for Moore over the next decade. We've seen other issues
fall, but the atom isn't getting much smaller.
Before silicon comes a simulation model, and there are, indeed,
better ways to be approaching that problem than to be chatting
about it on csiphc.

Ok, that's what I do when I'm not doing something else. ;-)

....have a good night!

<snip>
 
