65nm news from Intel

Jouni Osmala

Yes. No problem. I humbly submit the source of Emacs as evidence,
and claim that the conclusion is obvious.

Note that Stefan Monnier did not say that Emacs could not be
parallelised well, at least in theory, but was responding to a
comment that it was going to be.

I disagree. Jouni's post began...

I have a better reason why emacs is a great candidate for
parallelization.

...which is certainly starting from a "could" rather than "would"
viewpoint.

It's written in Lisp, and in reality it's a Lisp operating system
with an embedded word processor included as a major app. Now the
Lisp code could be auto-parallelized by an auto-parallelizing compiler,
so you would need to do some work to improve the underlying Lisp
compiler/OS to handle multiprocessing needs.

Here he makes a specific supporting argument for his claim. When I
asked for rebuttals, I was rather hoping that someone would address
this one. Auto-parallelisation of Lisp may be significantly easier
than the same task for C (which I happily accept hasn't really
happened yet, despite efforts) so emacs may be much better placed
than "the average app".

BTW: I think that EMACS is going to be one of the desktop
applications that are going to be parallelized well. [If it
hasn't been already.]

OK, here he switches to "could" mode, but if he blows both ways in the
same post I think it's unfair to claim he went in just one direction.

Simply because parallelizing it is a geeky enough trick that someone
in OSS development may want to do it just for kicks [...]

Here's a second line of argument, differentiating emacs from the average
app. It is surely undeniable that "cult" OSS software gets ported and
twisted in far more ways than its intrinsic quality would justify. If
I had to place money on which applications would get ported first and
best to any new architecture, I'd bet on emacs and GNU C.

Okay. As Engrish is my 2nd language, Finnish is my first, AND my
expression of ideas is not the clearest: in team work I typically have
to spend over 10 hours talking before the other students get how my
algorithms work, and that's live, with pen & paper as an aid, in my
native language, even if they are excellent students near graduation
who DO coding as part of their studies. [Or is the problem that my
algorithms are so weird that others have a hard time understanding
them?]

Let's make some simple claims.
a)
I think LISP is great for parallelization.
b)
The Emacs operating system has several applications running on top of
it, and at least SOME of them benefit from parallelized Lisp execution.
c)
Someone is going to write a parallelized Lisp interpreter (or JIT) for
eLisp, just for kicks, after desktop multiprocessing becomes
mainstream.
d)
After that, others will improve the underlying Lisp code for better
parallel execution IF there is a need for performance in that code.

Now I don't claim WHEN (c) happens, or how quickly (d) is going to
happen, nor that ALL the code is going to be parallelized. Heck, it
might be, after reading the posts of the other people on this matter,
that very little current Lisp code is going to be useful for
parallelization, but after the parallelization back end has been done
there will be gradual improvement in that matter, or great jumps in
different areas. And new elisp applications will be written in a more
functional form, perhaps even a DOOM clone written in elisp that
parallelizes across n processors ;)


Jouni
 
Jouni Osmala

Or even used significantly in that area! Yes, PLEASE tell me about
those languages, as it really is rather relevant to my work.

Those things are used as sub-blocks by several BIG companies. And
the company that delivers those wasn't interested in making a parallel
language, just something that made expressing their problems easier,
and the result was something that could parallelize well as long as
you kept everything in a single memory space. And I doubt that they
want their corporate secrets leaked by something that came up in a
discussion in our university cafe after a lecture one of their
researchers gave. There are limitations: it still won't scale to big
systems, because it cannot cope with multiple memory domains, but it
scales excellently with SMT and CMP and vector extensions, and can mix
them in any way needed to handle the code. [Now that's part of the
reason I'm optimistic about CMP, while not too optimistic about
clusters.] And I remember very little about it, as it was years ago,
but they still actively license the product they used it with.
I must still reiterate my view of parallelism on the desktop as the
future. Utilizing CMP is much easier than clusters, simply because the
synchronization latencies are 3 orders of magnitude smaller than
Myrinet's, for instance, and you can use shared memory where useful.

Jouni Osmala
 
beliavsky

Simple...

Let's all dust off our old APL manuals, and then practically ALL of
our code will be vectorizable/parallel.

Why not buy a book on Fortran 95 and learn about array operations and
ELEMENTAL functions? There are many commercial compilers and a free
compiler called G95 for Linux and Unix at http://www.g95.org .
 
Stephen Fuld

snip
Those things are used as sub-blocks by several BIG companies. And
the company that delivers those wasn't interested in making a parallel
language, just something that made expressing their problems easier,
and the result was something that could parallelize well as long as
you kept everything in a single memory space. And I doubt that they
want their corporate secrets leaked by something that came up in a
discussion in our university cafe after a lecture one of their
researchers gave. There are limitations: it still won't scale to big
systems, because it cannot cope with multiple memory domains, but it
scales excellently with SMT and CMP and vector extensions, and can mix
them in any way needed to handle the code. [Now that's part of the
reason I'm optimistic about CMP, while not too optimistic about
clusters.] And I remember very little about it, as it was years ago,
but they still actively license the product they used it with.

OK, given that you can't violate even an implied NDA, can you at least tell
us the name of the product that the company still licenses (while still not
telling us anything about how it is written)? Perhaps that might help us
progress further in the discussion.
 
Stefan Monnier

Has anyone even done JIT to native code for elisp yet? That would be
much easier, and would provide more broadly applicable performance
gains. (At the cost of portability, though there are some fairly
portable JIT systems now. And it is an active area of research.)

The problem is that elisp is a very dynamic language. E.g. dynamic scoping
together with buffer-local variables makes most optimizations very difficult
to perform.
And a naive approach results in very disappointing speedups (because the
interpretive overhead is often dwarfed by the slowness of even the most
basic operations such as "get the value of variable `foo'" or "create new
local var `bar'").
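
As a tiny concrete illustration of that lookup problem (the names
`my-indent-width' and `my-report-width' are invented for the example,
not from Emacs): the value the function sees depends both on any
enclosing dynamic `let' and on which buffer is current, so a compiler
cannot resolve the variable reference to a fixed slot at compile time.

(defvar my-indent-width 4
  "Globally 4, but any buffer may shadow this buffer-locally.")

(defun my-report-width ()
  (message "width here is %d" my-indent-width))

;; Buffer-local shadowing: the same code sees 8 in this buffer only.
(with-temp-buffer
  (setq-local my-indent-width 8)
  (my-report-width))                 ; => "width here is 8"

;; Dynamic rebinding: the same code sees 2 for the extent of the `let'.
(let ((my-indent-width 2))
  (my-report-width))                 ; => "width here is 2"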


Stefan
 
Sander Vesik

In comp.arch Jouni Osmala said:
Okay. As Engrish is my 2nd language, Finnish is my first, AND my
expression of ideas is not the clearest: in team work I typically have
to spend over 10 hours talking before the other students get how my
algorithms work, and that's live, with pen & paper as an aid, in my
native language, even if they are excellent students near graduation
who DO coding as part of their studies. [Or is the problem that my
algorithms are so weird that others have a hard time understanding
them?]

Let's make some simple claims.
a)
I think LISP is great for parallelization.

Many dialects of Lisp are not. elisp is very probably
one such.
b)
The Emacs operating system has several applications running on top of
it, and at least SOME of them benefit from parallelized Lisp execution.

s/benefit/might benefit/
c)
Someone is going to write a parallelized Lisp interpreter (or JIT) for
eLisp, just for kicks, after desktop multiprocessing becomes
mainstream.

See, you need a parallelised eLisp engine; just a "generic" Lisp one
won't do you any good. The Lisps are legion.
d)
After that, others will improve the underlying Lisp code for better
parallel execution IF there is a need for performance in that code.

Now I don't claim WHEN (c) happens, or how quickly (d) is going to
happen, nor that ALL the code is going to be parallelized. Heck, it
might be, after reading the posts of the other people on this matter,
that very little current Lisp code is going to be useful for
parallelization, but after the parallelization back end has been done
there will be gradual improvement in that matter, or great jumps in
different areas. And new elisp applications will be written in a more
functional form, perhaps even a DOOM clone written in elisp that
parallelizes across n processors ;)

You have left the real world and gotten way lost in the dream one.
 
Jouni Osmala

Sander said:
In comp.arch Jouni Osmala said:
Okay. As Engrish is my 2nd language, Finnish is my first, AND my
expression of ideas is not the clearest: in team work I typically have
to spend over 10 hours talking before the other students get how my
algorithms work, and that's live, with pen & paper as an aid, in my
native language, even if they are excellent students near graduation
who DO coding as part of their studies. [Or is the problem that my
algorithms are so weird that others have a hard time understanding
them?]

Let's make some simple claims.
a)
I think LISP is great for parallelization.

Many dialects of Lisp are not. elisp is very probably
one such.
Ok.
b)
The Emacs operating system has several applications running on top of
it, and at least SOME of them benefit from parallelized Lisp execution.


s/benefit/might benefit/

c)
Someone is going to write a parallelized Lisp interpreter (or JIT) for
eLisp, just for kicks, after desktop multiprocessing becomes
mainstream.


See, you need a parallelised eLisp engine; just a "generic" Lisp one
won't do you any good. The Lisps are legion.

OK. I'm not specialized in eLisp; it looked like Scheme but with more
inline things. But still, if there are 16 cores in every "normal" home
computer, then eLisp will be extended/subsetted to something that could
use them. (Of course the old unparallelisable code in emacs will live
on.) But there will be an eLisp mode set that runs in parallel some time
in the future, even if current eLisp is not parallelisable. So the
transition will take time. Let's hope that people find some use for
their 4 or 8 core CPUs before eLisp gets parallelized ;)

You have left the real world and gotten way lost in the dream one.

Why would this be a dream? There are plenty of emacs games, including
Elite. When Elite was new, NO-ONE probably thought of making it run
inside emacs. But these days it's ported to it.
If Intel/AMD find that the biggest improvement comes from increasing
the number of cores, then the elisp version of Doom that will happen in
the next 3 decades will probably use whatever number of cores was
available two years before its creation...

Jouni Osmala
 
Guest

|> >
|> > Sigh. You are STILL missing the point. Spaghetti C++ may be about
|> > as bad as it gets, but the SAME applies to the cleanest of Fortran,
|> > if it is using the same programming paradigms. I can't get excited
|> > over factors of 5-10 difference in optimisability, when we are
|> > talking about improvements over decades.
|> >
|> Simple...
|>
|> Let's all dust off our old APL manuals, and then practically ALL of
|> our code will be vectorizable/parallel.

Hmm. Do you have a good APL Dirichlet tessellation code handy?
I have two main memories of APL, both about 2.5 decades old.

To the APL programmer, every problem looks like a vector/matrix.
(To the man with a hammer, every problem looks like a nail.)

You can apply every monadic operator, in the correct sequence, to
zero, and the result is 42. (HHGTG reference)

And a few other snippets, like the general flavor of the language.
I could probably relearn it in short order, if I had a set of APL
keycaps and a manual.

Dale Pontius
 
Nick Maclaren

I have two main memories of APL, both about 2.5 decades old.

To the APL programmer, every problem looks like a vector/matrix.
(To the man with a hammer, every problem looks like a nail.)

Yes, quite. Which is why, when faced with the problem of unscrewing
a fitting, the solution is to smash the unit it is attached to,
thus freeing the fitting.


Regards,
Nick Maclaren.
 
Jan Vorbrüggen

ftp://download.intel.com/pressroom/kits/events/idffall_
Most interesting. Unfortunately, that failed to download.

Worked for me. The line break at the underscore is unfortunate, however.

Jan
 
Nick Maclaren

|> >>ftp://download.intel.com/pressroom/kits/events/idffall_
|> >>2004/otellini_presentation.pdf#page=38
|> >
|> > Most interesting. Unfortunately, that failed to download.
|>
|> Worked for me. The line break at the underline is unfortunate, however.

I got it eventually. 3 of 5 downloads failed, however, which
indicates some sort of problem with the server.


Regards,
Nick Maclaren.
 
Nick Maclaren

|>
|> > (2 threads means "2 threads per core" in case it is not clear. Slide
|> > elsewhere indicates SMT.)
|>
|> Multi-threaded: yes. SMT: no. Montecito uses a different version of
|> multithreading than SMT. I know that's been discussed before. Search
|> for it if you want details.

Hmm. I have seen no details worth a damn. Yes, it is known that
it does something different, but I haven't seen a clear statement
of what. And there are a lot of possibilities. Of course, I might
have missed some actual information in the morass of buzzwords and
general waffle.


Regards,
Nick Maclaren.
 
Chris Morgan

Robert Myers said:
Maybe by the time Whitefield and Niagara are available, Transmeta will
have a similar product, too. A ULV Whitefield is where I'd want to
start, and I don't think I'd be too bothered by the separate
controller, which I'd get to amortize over at least four cores. By
the time Whitefield is available, Intel should have more complete
infrastructure like Advanced Switching as an interconnect.

By the way, Niagara has reached silicon and boots Solaris. Lots of
work still to do, I'm sure.

http://blogs.sun.com/roller/page/jonathan/20040910#the_difference_between_humans_and


Chris
--
Chris Morgan
"Post posting of policy changes by the boss will result in
real rule revisions that are irreversible"

- anonymous correspondent
 
Gavin Scott

In comp.arch Nick Maclaren said:
Yes, quite. Which is why, when faced with the problem of unscrewing
a fitting, the solution is to smash the unit it is attached to,
thus freeing the fitting.

Unfortunately in software this may be a perfectly viable design :)

G.
 
Rick Jones

In comp.arch Nick Maclaren said:
I got it eventually. 3 of 5 downloads failed, however, which
indicates some sort of problem with the server.

Or the intervening network. My download came through on the first
try. Of course all _that_ might mean is I was lucky :)

rick jones
 
Nick Maclaren

Or the intervening network. My download came through on the first
try. Of course all _that_ might mean is I was lucky :)

Yes and no. FTP uses TCP/IP, and I downloaded the file many times
to both IRIX and Linux, and got the same unreliability. Now, despite
common belief, FTP is a VERY unreliable protocol, but TCP/IP isn't.
I am 90% certain (based on that and previous experience) that the
server was using the FTP protocol in one of its many unreliable ways.


Regards,
Nick Maclaren.
 
Rick Jones

Yes and no. FTP uses TCP/IP, and I downloaded the file many times
to both IRIX and Linux, and got the same unreliability. Now,
despite common belief, FTP is a VERY unreliable protocol, but TCP/IP
isn't. I am 90% certain (based on that and previous experience)
that the server was using the FTP protocol in one of its many
unreliable ways.

TCP is "reliable" only in that it will tell you if it believes that
the data has not arrived at the desired destination. TCP can only
overcome so much in the way of packet loss and the like, so if there
was nasty packet loss between you and the other end, or routing
instability somewhere...

rick jones
--
The computing industry isn't as much a game of "Follow The Leader" as
it is one of "Ring Around the Rosy" or perhaps "Duck Duck Goose."
- Rick Jones
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to raj in cup.hp.com but NOT BOTH...
 
Nick Maclaren

TCP is "reliable" only in that it will tell you if it believes that
the data has not arrived at the desired destination. TCP can only
overcome so much in the way of packet loss and the like, so if there
was nasty packet loss between you and the other end, or routing
instability somewhere...

Yes, but there are SUPPOSED to be checksums and sequence counts.
If those were used properly, the chances of error are low. Yes,
I know that regrettably many systems don't check them correctly,
or even run with no checking by default, but still ....

In a previous task, I had to investigate FTP, and that is nothing
like as solid. In particular, it makes it too easy to truncate a
transfer early and think that was EOF. I think that was what was
happening - the last window wasn't being pushed, and so the last
few KB of the file weren't always arriving.


Regards,
Nick Maclaren.
 
