Xbox 2 is an IBM & SGI supercomputer


Per Ekman

MS said:
Well, you should learn how to read:

"The Xbox has 80 Gigaflops of computing power. That's equivalent to the
power found in a Cray C94 supercomputer."

It's bullshit, the C94 had a peak of 4GFLOPS (double precision, which
the Xbox certainly can't match). The C94 does 35GB/s on STREAM TRIAD,
I'd be surprised if the Xbox can do 3GB/s. And the C94 is from
1991...
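
For reference, the TRIAD kernel in question is just the following (a
minimal C sketch of the standard STREAM definition; the real benchmark
adds a timing harness and repeat runs):

#include <stddef.h>

/* STREAM TRIAD: a[i] = b[i] + scalar * c[i]
 * Reported bandwidth = 3 arrays * n * sizeof(double) / elapsed time. */
void triad(double *a, const double *b, const double *c,
           double scalar, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + scalar * c[i];
}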

*p
 

kevin getting

You took everything I said and just restated it - are you a consultant? ;)

That will be $20 please. :)

I agree with your point but I got there along a different train of
thought.
 

xTenn

kevin getting said:
You took everything I said and just restated it - are you a consultant? ;)

That will be $20 please. :)

I agree with your point but I got there along a different train of
thought.

Well, you did state that:

"The vector units in the PS2 are fully programable, though lack in
performance compared to modern video cards. However, modern video cards
are not as programable as the vector units on a PS2. This puts the vector
units on the PS2 as a general purpose CPU side, thus falling under
regulations of the time."

The difference you seem to be stating is that the greater programmability,
even though it is weaker than modern video cards (as you state), gives it
some status as a possible (though not plausible) supercomputer participant
for the year 2000. On this point we do differ, since I would put forth
that even in the year 2000 video cards were supporting microcode that
controlled the functionality and characteristics of the GPU. After all,
how many times did you have to change a video driver? :)
 

Tony Hill

What is the definition of supercomputer then?

The definition of a "supercomputer" is, and always has been, a moving
target. What's more, the definition depends a lot on who you ask;
even within the community of people who actually work on such things
there is significant disagreement about just what it takes to be
called a "supercomputer".

Legally speaking though, the US has export controls based on "Millions
of Theoretical Operations per Second", or MTOPS. This is, of course,
a totally meaningless measure of a computer's performance (possibly
even a tiny bit worse than MIPS) and it dates back to the 1970s (or
perhaps even earlier?). The US also defines a few different tiers of
countries, each tier having a maximum number of MTOPS for computers
being sold to them.
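
For the curious, the number is a "Composite Theoretical Performance"
figure. As I recall the formula (the real regulations add pages of
special cases, so treat this as a from-memory sketch, not the legal
definition), it is roughly peak operations scaled by a word-length
factor:

#include <stdio.h>

/* Rough CTP estimate in MTOPS, as I recall the old US export regs:
 * peak theoretical Mops scaled by a word-length factor
 * L = 1/3 + WL/96, so 64-bit ops count fully and 32-bit ops at 2/3.
 * This is a from-memory sketch, not the legal formula. */
static double mtops(double peak_mops, int word_length_bits)
{
    double l = 1.0 / 3.0 + word_length_bits / 96.0;
    return peak_mops * l;
}

int main(void)
{
    /* e.g. a hypothetical machine doing 1000 Mops of 32-bit work */
    printf("%.0f MTOPS\n", mtops(1000.0, 32)); /* ~667 */
    return 0;
}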

In the late 1990s the regulations had become TOTALLY out of whack.
Common, every-day desktop PCs and game consoles had indeed started to
surpass the MTOPS figure for the most high-risk countries (which
included places like India, Russia, China, Vietnam, etc.). When Apple
brought out their PowerMac G4, they used this totally ridiculous
regulation as an advertising claim that they were selling a
"supercomputer", which was of course total bullshit. Fortunately the
MTOPS maximum has been increased once or twice, though they are still
using that pointless measure of performance from what I can tell.

Of course, since most supercomputers being built these days are now
superclusters, the regulations have become even more meaningless than
before. Now a company can freely ship thousands of computers to a
distributor in some other country who will then assemble them
together into a cluster-style supercomputer. Throw in an extra
level or two in the distribution chain and this sort of thing becomes
more or less impossible to enforce.
 

Tony Hill

Get real. If that was the case then every NVidia and ATI graphics card
available at the time (2000) would have been a supercomputer. You are
propagating more of the Sony propaganda machine BS. There are SOOOOO many
reasons why a single PS2 game console would not be considered one, common
sense not the least of them.

Before the regulations were changed in 1999, the PS2 would have been
pushing the limits of what was legally called a "supercomputer". You
are quite correct in saying that this defies common sense; you have
that pillar of defying common sense, the US government, to thank for
that one!

As you guessed, desktop PCs were also starting to meet or exceed the
regulations. Apple made a big advertising campaign about this
when they released their PowerMac G4 systems.

The low water mark for what gets the "supercomputer" label has been
pushed up a few times now, though they're still using the same
measuring stick and are still defying common sense.
There has been an attempt (at NCSA, at that) to create what would qualify as
a supercomputer from PS2 shells, but it takes 70 (yes, 70) consoles to
qualify. For the record, it takes fewer PCs to reach the same threshold.
The major reason the PS2 was used is because of the Linux kit (which
thankfully allows access to the vector units) and cheap hardware, NOT
because of extremely powerful hardware.

Actually it's really rather useless hardware for the majority of
supercomputing tasks.
There are quite a few good resources on the web about supercomputing, not
the worst of which is from the projects here at the University of Tennessee
and Oak Ridge National Laboratory - but then you are probably not familiar
with BLAS or LAPACK, are you? At least check out the LINPACK tests on
common computing hardware to become familiar with how things really rank
from a simplistic linear-equation standpoint. Check out this PDF if the
topic of performance interests you WITHOUT the hype:

http://www.netlib.org/benchmark/performance.pdf

To the best of my knowledge, the PS2 is just not capable of doing
double-precision floating point calculations. That, combined with
extremely limited memory, lack of ECC on memory, no local storage,
terrible I/O capabilities and the total lack of any meaningful
high-speed interconnect for the system makes the PS2 more than a bit
useless as a real supercomputer. If anyone was trying to make a
"supercomputer" out of PS2s they were doing it as 1-part joke, 1-part
neat little toy experiment. Even if the boxes were free it wouldn't
be at all worthwhile wasting one's time on such a design, regardless of
what any US export regulations said at the time.
 

Mikael Sillman

"The Xbox has 80 Gigaflops of computing power. That's equivalent to the
It's bullshit, the C94 had a peak of 4GFLOPS (double precision, which
the Xbox certainly can't match). The C94 does 35GB/s on STREAM TRIAD,
I'd be surprised if the Xbox can do 3GB/s. And the C94 is from
1991...

-So you think that nVidia has just been lying on its PUBLIC WEB-PAGE for
nearly 3 years without anyone but you noticing and figuring out that
they're lying?

Right...
 

M.C.D. Roos

Mikael said:
-So you think that nVidia has just been lying on its PUBLIC WEB-PAGE for
nearly 3 years without anyone but you noticing and figuring out that
they're lying?

So you think that whatever the marketing department says is the truth?


Right again :).

greetings,
Michiel
 

Tony Hill

It's bullshit, the C94 had a peak of 4GFLOPS (double precision, which
the Xbox certainly can't match).

The XBox has a peak of 733MFlops double precision, or just shy of
3GFlops single precision in the CPU (SSE boosts single precision
performance a lot, but the chip doesn't support SSE2, so no double
precision).

The 80 gigaflops number is, as you mentioned, complete bullshit. It's
all from the GPU, which can't be used for general purpose programming.
It also can't do double precision, and it doesn't have an 80GFlops
peak even if it could do all of those things.

The GPU of the XBox runs at 233MHz and has 4 pipelines. Therefore, to
get the 80GFlop number, nVidia is saying that each pipeline can do 85
floating point instructions at a time. I have absolutely no idea how
they managed to get such a ridiculous number, but it has absolutely no
bearing on reality.

At an absolute maximum you're looking at 233MHz x 4 pipelines, each
capable of handling 4 chunks of single precision data at a time
(128-bit wide vector) and maybe being able to do two flops at once (e.g.
a multiply-add). That would give you some sort of theoretical maximum
of 7.4 GFlops. Of course, the real number is actually zero flops
since it's not programmable. Also there is no possibility of doing
any double precision on this, so it gets a fat 0 GFlops there.
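
Spelling that arithmetic out (the 4-wide vectors and the two flops per
cycle are my assumptions from above, not published specs):

#include <stdio.h>

/* Theoretical peak = clock * pipelines * vector width * flops per op.
 * The vector width (4 floats) and counting a multiply-add as two
 * flops are assumptions, not published XBox GPU specs. */
int main(void)
{
    double clock_hz  = 233e6;
    double pipelines = 4.0;
    double vec_width = 4.0;  /* assumed: 128-bit single precision */
    double flops_op  = 2.0;  /* assumed: one multiply-add per cycle */

    printf("peak: %.2f GFlops\n",
           clock_hz * pipelines * vec_width * flops_op / 1e9); /* ~7.46 */
    return 0;
}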

In any case, the end result is that the total processing oomph of the XBox
CPU+GPU is a theoretical 10 GFlops single precision, or 0.73 GFlops
double precision. The PS2 gets 6.2GFlops single precision and almost
nothing double precision.
The C94 does 35GB/s on STREAM TRIAD,
I'd be surprised if the Xbox can do 3GB/s.

XBox has 400MT/s memory (200MHz DDR) with a 128-bit interface. Max
theoretical bandwidth is 6.4GB/s. But most of that bandwidth goes to
the graphics processor (makes sense, that's where the bandwidth is
needed). Max theoretical bandwidth to the CPU is 133MT/s and 64-bit,
or 1.06GB/s. If you could run some sort of STREAM TRIAD on the GPU,
it could probably get well over 3GB/s, but on the CPU you aren't even
going to hit 1GB/s.
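
In code, with the numbers above:

#include <stdio.h>

/* Peak bandwidth = transfer rate (MT/s) * bus width in bytes. */
static double peak_gbs(double mts, int bus_bits)
{
    return mts * 1e6 * (bus_bits / 8) / 1e9;
}

int main(void)
{
    printf("memory bus: %.2f GB/s\n", peak_gbs(400.0, 128)); /* 6.40 */
    printf("CPU FSB:    %.2f GB/s\n", peak_gbs(133.0, 64));  /* 1.06 */
    return 0;
}
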
And the C94 is from 1991...

Err, wasn't it from 1994? Hence the 'C94' name? Still hardly a
current product.
 

Per Ekman

Mikael Sillman said:
-So you think that nVidia has just been lying on its PUBLIC WEB-PAGE for
nearly 3 years without anyone but you noticing and figuring out that
they're lying?

I _know_ that the statement is misleading and I know that I'm not the
only one who knows it. I also know that marketing and reality seldom
connect, so this is hardly something particular to nVidia.

Do your research and prove me wrong then.

*p
 

Linux on SGI User

mosys said:
my take on this is:

In terms of floating point performance and graphics muscle, the Xbox 2
should outdo a 16-pipe SGI InfiniteReality2 or IR3 machine from the late
1990s.

Even Silicon Graphics themselves have turned to ATI for the high-end
Onyx4 UltimateVision systems, which will employ up to -32- ATI R3XX VPU
cores.

I am guessing the Xbox 2 should have at least 5-10 times the graphics
muscle of an R300 / Radeon 9700, or perhaps 3-4 times that of the upcoming
R420.


The reality is that any machine which runs Linux is already a
supercomputer. Linux makes supercomputers for everyone.

Also, the fact that Linux on the PlayStation with Beowulf outperforms
every Sun, SGI, and HP machine is proof of this.

So with Linux on the Xbox, the supercomputer claim is very true.
 

Andrew

The reality is that any machine which runs Linux is already a
supercomputer. Linux makes supercomputers for everyone.

So a POS Pentium 60 PC is a supercomputer if it has Linux installed on
it? Mmmmkay.
Also, the fact that Linux on the PlayStation with Beowulf outperforms
every Sun, SGI, and HP machine is proof of this.

You mean a cluster of machines outperforms one single machine?
So with Linux on the Xbox, the supercomputer claim is very true.

ROFL, please tell me you are trolling and you really aren't that
stupid.
 

Per Ekman

Tony Hill said:
Err, wasn't it from 1994? Hence the 'C94' name? Still hardly a
current product.

The models in the T90, C90 and J90 series are named according to the
number of CPUs in the system, so a C94 is a 4-CPU C90, a J916 is a
16-CPU J90 and so on.

*p
 

Benjamin Gawert

Per said:
The PS2 does 6.2GFLOPS (single precision)

Yes, single precision, and AFAIR it's just a theoretical value, as the
GFLOPS number comes mostly from the gfx hardware, which isn't freely
programmable like a CPU.

And the double precision numbers are even under 1GFLOPS, with the same
limitations.
which was quite impressive
at the time (and still is if you ask me). And IIRC the export
restrictions specified systems with more than 1GFLOPS as a
supercomputer.

Wasn't the 1GFLOPS limit a double precision number?

Besides this, I can't see how US export restrictions should apply to
Asian-made game consoles.

Benjamin
 

Benjamin Gawert

MS said:
Well, you should learn how to read:

"The Xbox has 80 Gigaflops of computing power. That's equivalent to
the power found in a Cray C94 supercomputer."

http://www.nvidia.com/page/console.html

Maybe you should start to think first?

Let's forget the fact that Nvidia didn't even specify whether it's single
or double precision; it should be quite obvious that the 733MHz Extended
Celeron used in the XBox can in no way do 80GFLOPS. This number certainly
comes from the gfx hardware, which can't actually do it either (and neither
can almost all the gfx chips on current gfx cards, which are even faster
than the XBox's). Saying the XBox does 80GFLOPS has the same quality as the
"120W PMPO" stickers on little PC speakers that run off a weak 5V/150mA
power supply. Also, one little problem here is that the use for this
"computing power" is somewhat limited, as GPUs aren't as flexibly
programmable as CPUs. This, btw, is one of the reasons that scientific
institutions all over the world didn't run out to get a bunch of PS2s or
XBoxes when they came out but still settled on supercomputers or clusters
made of real computers. Of course GPUs can be used for computing tasks, but
they are very, very limited.

It's typical of Nvidia to just publish a plain number on the website
without explaining the context. Saying the XBox does 80GFLOPS so it must
be a supercomputer is like saying a 2GHz CPU is faster than a 1GHz CPU -
both statements show a lack of background knowledge.

Benjamin
 

Per Ekman

Benjamin Gawert said:
Yes, single precision, and AFAIR it's just a theoretical value, as the
GFLOPS number comes mostly from the gfx hardware, which isn't freely
programmable like a CPU.

No, it's programmable alright. The Emotion Engine in the PS2 has two
vector co-processors that do 9 FMACs and 3 FDIVs per cycle in
addition to the 1 FMAC and 1 FDIV of the regular FPU, for a total of 24
FLOP/cycle at 294.912MHz (which comes out to about 7.1GFLOPS, so there
are presumably some issue restrictions somewhere).
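
Tallied up (counting an FMAC as two flops, a multiply plus an add,
which is my assumption about how the 24 is reached):

#include <stdio.h>

/* Emotion Engine flops per cycle: FMAC = 2 flops, FDIV = 1.
 * Unit counts as given above (VU0 + VU1 + the regular FPU). */
int main(void)
{
    int fmacs = 9 + 1;                  /* vector units + regular FPU */
    int fdivs = 3 + 1;
    int per_cycle = fmacs * 2 + fdivs;  /* 24 flops/cycle */

    printf("%d flops/cycle -> %.2f GFLOPS at 294.912 MHz\n",
           per_cycle, per_cycle * 294.912e6 / 1e9); /* ~7.08 */
    return 0;
}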

*p
 

Jonathan

Rolf said:
But like I asked...

Can it make you a cup of coffee on those long nights when playing
Midtown Madness 3 Live against the best and the worst?

I don't think so.

Jud

You haven't heard of the Microsoft Coffee Mate? I hear most
businesses have them these days. :)
 

Robert Myers

To the best of my knowledge, the PS2 is just not capable of doing
double-precision floating point calculations. That, combined with
extremely limited memory, lack of ECC on memory, no local storage,
terrible I/O capabilities and the total lack of any meaningful
high-speed interconnect for the system makes the PS2 more than a bit
useless as a real supercomputer. If anyone was trying to make a
"supercomputer" out of PS2s they were doing it as 1-part joke, 1-part
neat little toy experiment. Even if the boxes were free it wouldn't
be at all worthwhile wasting one's time on such a design, regardless of
what any US export regulations said at the time.

Dunno about PS2s in particular, but some fairly bright people are
playing around with the high-throughput potential of GPUs:

http://www.gpgpu.org

While the Aaron Spinks of this world and other wholly-owned
subsidiaries of TWTA (The Way Things Are) are saying that nobody knows
how to program streaming processors and that they are completely
useless for anything, an army of more adventurous souls is proving
him and others like him completely wrong by doing serious things
on GPUs.

The lack of double precision is a serious limitation, but when people
are doing computational chemistry, or even better, simulating nuclear
devices at unacceptably low precision on a cluster of GPUs at a cost
that puts the latest acquisition of LLNL to shame, somebody at the DoE
will wake up from his permanent afternoon nap and give Aaron Spink or
someone like him even more money to correct the deficiency he has been
helping to perpetuate.

RM
 

Tony Hill

-So you think that nVidia has just been lying on its PUBLIC WEB-PAGE for
nearly 3 years without anyone but you noticing and figuring out that
they're lying?

No, plenty of people have noticed that they are lying. Most people
just don't care because they know that all companies lie about this
sort of stuff. Per is right though, the number is total bullshit.
 

joe smith

So with Linux on the Xbox, the supercomputer claim is very true.

Potentially.
ROFL, please tell me you are trolling and you really aren't that
stupid.

Actually it was more perceptive and open-minded than your attitude, but
also naive. 100Mbit Ethernet is a high-latency / low-bandwidth
interconnect, so the kind of applications this "cluster" is most able to
run are distributed-computing-style packetized workloads, which reduces
such a system's ability to do efficient random access to a large dataset
and so reduces the number of applications it is capable of running.

It would make a good SETI@home "supercomputer", for instance. I don't
think the OP was stupid, but you definitely are an arrogant mofo.
 

Andrew

Actually it was more perceptive and open-minded than your attitude, but
also naive. 100Mbit Ethernet is a high-latency / low-bandwidth
interconnect, so the kind of applications this "cluster" is most able to
run are distributed-computing-style packetized workloads, which reduces
such a system's ability to do efficient random access to a large dataset
and so reduces the number of applications it is capable of running.

I am not saying that a cluster of consoles couldn't potentially be
powerful and useful. What is BS is saying that a single console is
equivalent to a supercomputer.
 
