Is Itanium the first 64-bit casualty?


Terje Mathisen

Stephen said:
64-bit pointers are, pardon the pun, pointless for the vast majority of
desktops. 64-bit ints are somewhat useful, but the big benefit to x86

I disagree: when even our standard high-end PCs (including the laptop
I'm using) come with 2 GB today, the 32-bit wall is getting
uncomfortably close. I.e. the real limit for a 32-bit OS isn't 4 GB, it
is more like 2 to 3 GB, right?
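
A crude way to see where that practical ceiling sits is to just keep
asking malloc() for memory until it gives up; a minimal sketch (the
64 MB chunk size is arbitrary, and the exact result depends on the OS
and its user/kernel split):

    #include <stdio.h>
    #include <stdlib.h>

    /* Probe how much address space a single 32-bit process can actually
       get. On typical 32-bit OSes this fails somewhere around 2-3 GB,
       well short of the 4 GB a 32-bit pointer can name. */
    int main(void)
    {
        const size_t chunk = 64u * 1024 * 1024;   /* 64 MB per attempt */
        size_t total = 0;

        while (malloc(chunk) != NULL)             /* deliberately leaked */
            total += chunk;

        printf("got roughly %lu MB before malloc failed\n",
               (unsigned long)(total / (1024 * 1024)));
        return 0;
    }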
Stephen said:
machines is the extra eight GPRs that amd64 offers -- which is orthogonal to
64-bitness. It just makes sense to add everything at the same time, since
we only get an opportunity for such a radical ISA change once a decade.

Rather the opposite: The extra GPRs are a relatively small improvement,
but as you said, it made a lot of sense to do it at the same time.

Terje
 

RusH

Tony Hill said:
Not this time (though it could be in other situations). FAT32 has
a limit on file sizes of 4GB (unsigned 32-bit int)

Ha, I was thinking about disk, and that's where this FAT idea came from,
but now I remember: it's an AVI file format limitation :) 2GB, no more.
There is a newer version of the format that supports larger files:

http://www.google.pl/search?q=cache:_a-m3edLsnIJ:www.marcpeters.co.uk/videoeditingforum/software003.htm+avi+2gb+limit&hl=pl&lr=lang_en|lang_pl

guess I'm not the only one
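
Both ceilings drop straight out of 32-bit sizes, by the way: a signed
32-bit offset tops out just under 2 GB, and an unsigned 32-bit size field
(which is what FAT32 uses, per the quote above) just under 4 GB. A trivial
check:

    #include <stdio.h>

    int main(void)
    {
        /* 2^31 - 1: the ceiling for anything using a signed 32-bit offset */
        printf("signed   32-bit limit: %ld bytes\n", 2147483647L);
        /* 2^32 - 1: the ceiling for an unsigned 32-bit size (e.g. FAT32) */
        printf("unsigned 32-bit limit: %lu bytes\n", 4294967295UL);
        return 0;
    }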


Regards.
 

Nick Maclaren

|>
|> > But a single flat, linear address space is almost equally ghastly,
|> > for different reasons. It is one of the surest ways to ensure
|> > mind-bogglingly revolting and insecure designs.
|> >
|> Smile when you say that, pardner. AS400/iseries has exactly that.
|> And has since S/38.

Eh? One of us is seriously confused.

When I was being told about it at Santa Teresa in the context of
implementing C, I was told that one of the worst problems was that
pointers were semi-capabilities. That is "problem" in the sense
of implementing C, not "problem" in the sense of insecurity. And
typed and checked pointers to memory segments (which is what I
understood they were) do not provide a flat address space!

I can believe that, at a lower level, there is a flat address
space. Equally well, most mainframes (and similar) with a flat
address space have a structured one underneath (banking etc.)

The point is what is made visible by the ISA.


Regards,
Nick Maclaren.
 

Rupert Pigott

Nate said:
Plenty of Windows XP (especially XP Home) users still use FAT32, which is
still limited in terms of its maximum file sizes.

I'm aware of that, but I think it's *highly* unlikely in the case of
this particular guy. He had successfully created > 2GB files (and
verified them). I think it's far more likely that the software was
FUBARed or that NT was simply refusing a > 2GB mmap.


Cheers,
Rupert
 

Rupert Pigott

RusH said:
Ha, I was thinking about disk, and that's where this FAT idea came from,
but now I remember: it's an AVI file format limitation :) 2GB, no more.
There is a newer version of the format that supports larger files.

AVI was his *output* format, not the *input* format. What's more, he had
verified that the input files were OK. I think he had the same idea, that
perhaps they were junk after the 2GB mark. :)

Cheers,
Rupert
 

Bernd Paysan

Judd said:
What is taking them so long? Answer = Intel!

Intel's 64-bit workstation chip has been officially released now. It's in
short supply (or the release date was pushed a bit too much - another
Unobtanium ;-), but at least theoretically you could get an EM64T-enabled
processor in the new socket format (with lands instead of pins). No NX bit
yet, though.

I think the answer is "Microsoft". They don't seem to be able to dig through
the bad code they've written in the past.
 

Peter Dickerson

Stephen Sprunk said:
64-bit pointers are, pardon the pun, pointless for the vast majority of
desktops. 64-bit ints are somewhat useful, but the big benefit to x86
machines is the extra eight GPRs that amd64 offers -- which is orthogonal to
64-bitness. It just makes sense to add everything at the same time, since
we only get an opportunity for such a radical ISA change once a decade.


More GPRs are not a benefit as far as nearly all customers are concerned.
They add maybe a 10-15% speed improvement, if you recompile. That's similar
to waiting 3 months for clock/memory speed improvements. We are approaching
the point with 64-bit addressing that 32-bit reached with EMS/XMS on the
original PC. Most people didn't need 32-bit, they just needed a bit more
than was available (20-bit for the PC). Soon many people will need a bit
more than 32-bit addressability. We can either add kludges that make
32-bit+ possible but inconvenient for the developer, or we can go to the
next logical stage. The consensus for that is 64-bit addressing. OK, so
most devices do less than 64-bit physical addressing, but extending that is
little more than adding pins/balls to the chip.

Peter
 

Sander Vesik

In comp.arch David Schwartz said:
I have to disagree here. File sizes in serious applications have long

File size does not relate to pointer size in any way.

David Schwartz said:
exceeded the size where you could 'mmap' them with 32-bit pointers. Without
64-bit pointers, the usable memory space for typical applications is between
.5Gb and 3.5Gb depending upon how the system is set up. Raising it to 3.5Gb

If it's under 2GB, you have some serious questions to ask of your OS
developer.

David Schwartz said:
or so has other serious compromises. The current generation of software
doesn't use this type of capability simply because it hasn't been available.

Sure, 64-bit anything is pointless for the vast majority of anything
*today* simply because the vast majority of software is 32-bits. That
doesn't mean that the software couldn't be better if it had access to a
massive address space.

Think about things like hash tables and sparse arrays. Their
implementation could be much simpler if the address space were massive and
the OS allocated memory upon usage.
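
As a sketch of what that could look like on a 64-bit system (Linux-style
mmap flags, arbitrary sizes): reserve an enormous anonymous mapping up
front and let the OS hand out physical pages only for the parts that
actually get touched.

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Reserve 1 TB of virtual space; nothing is committed yet. */
        size_t len = (size_t)1 << 40;
        int *sparse = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (sparse == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Only the two pages touched here consume physical memory. */
        sparse[0] = 1;
        sparse[(size_t)200 * 1000 * 1000 * 1000 / sizeof(int)] = 2;

        munmap(sparse, len);
        return 0;
    }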

Oh, and just what would that do to the caches?
 

Dan Pop

Sander Vesik said:
File size does not relate to pointer size in any way.

If you have a comment to make, wait until you reach the end of the
sentence:

    exceeded the size where you could 'mmap' them with 32-bit pointers
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

How come you ignored this part of the sentence? It shows a clear
relation between file size and pointer size, despite your flat claim that
there is no such thing.

Dan
 

Dan Pop

Rupert Pigott said:
Marketing and the perceptions of PHBs with the chequebooks.

Since you were the one who made the distinction, are you a marketroid or
a PHB with the chequebooks?

Rupert Pigott said:
Not my call.

Yet, it was *your* statement.

Rupert Pigott said:
I think we should play the Marketoids at their own game: Let's start
referring to IA-64 as "Legacy" now that we have a dual-sourced 64-bit
architecture in the x86 world.

Wasn't Alpha dual-sourced long ago (Samsung, IIRC, being the alternate
supplier)?

Dan
 

Rupert Pigott

Dan said:
Since you were the one who made the distinction, are you a marketroid or
a PHB with the chequebooks?

No, just a programmer. One of the guys who has to make good out of
the stuff that the Marketroids have conned the PHBs with.

The PHBs who bought kit at the time made that distinction though.
The minimal spec Alpha was quite definitely "The Workstation" while
the dual Pentium Pro 200, with 256Mbytes of RAM, triple UW SCSI stuffed
with sundry PCI I/O cards, that I used as a seat in order to keep my
bum warm on cold sites was definitely the desktop. ;)

I think the distinguishing features for the PHBs were :
1) It didn't run x86 binaries native (unlike PCs aka "Desktops")
2) It cost more for a given level of performance.

One of my colleagues said of the PHB (a nice guy) who did most of the
purchasing: "Knows enough to be dangerous." :)

I have a suspicion that DEC Salesmen were actually pushing them as a
cut above a "Desktop" (aka PC) in order to justify the price, which was
quite high compared to the white-box Pentium Pros we were running. They
had the added advantage that we could get hold of NT apps for them too.
Dan said:
Yet, it was *your* statement.

Wasn't Alpha dual-sourced long ago (Samsung, IIRC, being the alternate
supplier)?

Alpha did not execute x86 binaries natively. FX!32 was a wonder to
behold, but trust me: running NT 3.51 and x86 apps under it really
was a chronic waste of a great machine. WinZIP ran about 3x faster on
a Pentium Pro 200 than on the low-end Alpha under FX!32. I think that
must have been around late 1996.

To be honest I was really pissed off that I didn't have access to a
toolchain for that box, I had a particular bit-bashing app that would
have screamed on it... :/

Cheers,
Rupert
 

Dan Pop

Rupert Pigott said:
I have a suspicion that DEC Salesmen were actually pushing them as a
cut above a "Desktop" (aka PC) in order to justify the price, which was
quite high compared to the white-box Pentium Pros we were running.

A low end desktop Alpha PC had the same price as a high end desktop Intel
PC. The performance depended on what you wanted to do with these systems,
of course.

Rupert Pigott said:
They had the added advantage that we could get hold of NT apps for them too.

What good is an NT app to a machine running Linux, be it Alpha or Intel?
;-)

Rupert Pigott said:
Alpha did not execute x86 binaries natively.

Entirely irrelevant in an open source world.

Dan
 

Casper H.S. Dik

Dan Pop said:
Wasn't Alpha dual-sourced long ago (Samsung, IIRC, being the alternate
supplier)?

I think "x86" is the operative word here; dual-sourced 64-bit SPARC
has been around for quite a while (since '95, IIRC).

Casper
 

Yousuf Khan

Rupert Pigott said:
Alpha did not execute x86 binaries natively. FX!32 was a wonder to
behold but trust me : Running NT 3.51 and x86 apps under it really
was a chronic waste of a great machine. WinZIP ran about 3x faster on
a Pentium Pro 200 than on the low-end Alpha under FX!32. I think that
must have been around late 1996.

I can recall that, back in the early 90's, sales droids were already
claiming that X architecture was Y times faster than a PC, even while
running PC apps! It was always a little hard to believe, and fortunately
none of the purchasers at the places I worked took it seriously either.

Yousuf Khan
 

Zalman Stern

Terje Mathisen said:
Rather the opposite: The extra GPRs is a relatively small improvement,
but as you said, it made a lot of sense to do it at the same time.

I recently helped a friend with performance optimization of some string
searching code. They were using strstr, which was acceptably fast, but
needed a case insensitive version and the case insensitive versions of
strstr were much slower. The platform where this was discovered was
Win32 using VC++ libraries, but Linux, Solaris, AIX, HP-UX, etc. are
also targets for the product.

I suggested using Boyer-Moore-Gosper (BMG) since the search string is
applied to a very large amount of text. A fairly straightforward BMG
implementation (including case insensitivity) in C is ~3 times faster
than strstr on PowerPC and SPARC for the test cases they use. On PIII
class hardware it is faster than strstr by maybe 50%. On a P4 it is a
little slower than strstr.
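
For anyone who hasn't seen the trick, here is a rough sketch of that kind
of search in C: a plain Horspool-style bad-character skip table with
tolower() folding, standing in for the BMG variant discussed above rather
than reproducing it exactly.

    #include <ctype.h>
    #include <stddef.h>

    /* Case-insensitive skip-table search. Returns a pointer to the first
       match of pat in text, or NULL if there is none. */
    static const char *ci_skip_search(const char *text, size_t tlen,
                                      const char *pat, size_t plen)
    {
        size_t skip[256];
        size_t i;

        if (plen == 0)
            return text;
        if (plen > tlen)
            return NULL;

        for (i = 0; i < 256; i++)          /* unseen char: skip whole pattern */
            skip[i] = plen;
        for (i = 0; i + 1 < plen; i++) {   /* seen char: skip to align it */
            skip[tolower((unsigned char)pat[i])] = plen - 1 - i;
            skip[toupper((unsigned char)pat[i])] = plen - 1 - i;
        }

        for (i = 0; i + plen <= tlen;
             i += skip[(unsigned char)text[i + plen - 1]]) {
            size_t j = plen;
            while (j > 0 && tolower((unsigned char)text[i + j - 1]) ==
                            tolower((unsigned char)pat[j - 1]))
                j--;
            if (j == 0)
                return text + i;           /* whole pattern matched */
        }
        return NULL;
    }

The skip table is what gives the long average stride: any window whose last
character does not appear in the pattern advances by the full pattern
length.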

The Microsoft strstr is a tight hand coded implementation of a fairly
simple algorithm: fast loop to find the first character, quick test
for the second character jumping to strcmp if success, back to first
loop if failure.

The BMG code had ridiculous average stride. Like 17 characters for
this test corpus. But the inner loop did not fit in an x86 register
file. (It looked like I might be able to make it fit if I hand coded
it and used the 8-bit half registers, and took over BP and SP, etc.
But it wasn't obvious it could be done and at that point, they were
happy enough with the speedup it did give so...)

It turns out that the P4 core can run the first character search loop
very fast. (I recall two characters per cycle, but I could be
misremembering; it was definitely at least one per cycle.) The BMG code,
despite theoretically doing a lot less work, stalls a bunch. A good part of
this is that it has a lot of memory accesses that are dependent on
loads of values being kept in memory, because there are not enough
registers. Sixteen registers is enough to hold the whole thing. (I plan on
redoing the performance comparison on AMD64 soon.)

The point is this: having "enough" registers provides a certain
robustness to an architecture. It allows register based calling
conventions and avoids a certain amount of "strange" inversions in the
performance of algorithms. (Where the constant factors are way out of
whack compared to other systems.) As a practical day to day matter, it
makes working with the architecture easier. (Low-level debugging is
easier when items are in registers as well.)

I expect there are power savings to be had avoiding spill/fill
traffic, but I'll leave that to someone with more knowledge of the
hardware issues.

AMD64 is a well executed piece of practical computer architecture
work. Much more in touch with market issues than IPF ever has been or
ever will be.

-Z-
 

Rupert Pigott

Dan said:
A low end desktop Alpha PC had the same price as a high end desktop Intel
PC. The performance depended on what you wanted to do with these systems,
of course.

Was OK running a RIP, but quite frankly PPros were just as swift. It
was very hard to justify its existence on the basis of price/performance
with the shrink-wrapped *NT* apps that mattered to us.

Dan said:
What good is an NT app to a machine running Linux, be it Alpha or Intel?

It's kinda handy if the machine is running NT though. Horrible waste
of said machine IMO, but that's PHB thinking for you.

Here's a list of comments I received in response to suggesting using
Linux :

"It's too expensive !"
"It's unsupported !"
"There's no hardware support !"
"People don't understand it !"
"Customers won't like it !" (Customers wanted gear that worked
appliance style 24x7, no UI)
;-)

Dan said:
Entirely irrelevant in an open source world.

Very relevant to that PPOE at the time. The PHBs were highly allergic to
the idea of UNIX in any shape or form (the only Open Source choice for
the Alpha in 1996).

You just can't help some people. About 6 months after I had left I
heard that they had stopped paying their staff... A shame, they had
some good people. :(

Cheers,
Rupert
 

Stephen Sprunk

Rupert Pigott said:
I think the distinguishing features for the PHBs were :
1) It didn't run x86 binaries native (unlike PCs aka "Desktops")
2) It cost more for a given level of performance.

By both measures, a Mac is a workstation and not a desktop.

S
 

Stephen Sprunk

Yousuf Khan said:
I can recall that, back in the early 90's, sales droids were already
claiming that X architecture was Y times faster than a PC, even while
running PC apps! It was always a little hard to believe, and fortunately
none of the purchasers at the places I worked took it seriously either.

There was a year or so when an Alpha running x86 binaries on FX!32 did
indeed outpace the fastest x86 machines available, though by less than a 50%
margin. I believe it was right before the P6 core came out.

S
 

Rupert Pigott

Yousuf said:
I can recall that, back in the early 90's, sales droids were already
claiming that X architecture was Y times faster than a PC, even while
running PC apps! It was always a little hard to believe, and

I don't think it makes sense to buy a swift machine and hobble it by
emulating a crufty ISA.

Yousuf said:
fortunately none of the purchasers at the places I worked took it
seriously either.

As it happens, running native, that Alpha could well have blitzed the
P-Pro by a factor of 2-3x, which would have been possible if that
PPOE had added Linux to their portfolio. That would have made a huge
difference to some of our customers too; it could have genuinely made
a contribution to their bottom line. Like I said elsewhere, running
NT + FX!32 on that machine was a terrible waste. :(

Cheers,
Rupert
 

Stephen Sprunk

Peter Dickerson said:

More GPRs are not a benefit as far as nearly all customers are concerned.
They add maybe a 10-15% speed improvement, if you recompile. That's similar
to waiting 3 months for clock/memory speed improvements.

It totally depends on the app. One program I work on sees a 79.8%
performance boost when moved from i686 to amd64.
Peter Dickerson said:
We are approaching the point with 64-bit addressing that 32-bit reached
with EMS/XMS on the original PC. Most people didn't need 32-bit, they just
needed a bit more than was available (20-bit for the PC). Soon many people
will need a bit more than 32-bit addressability. We can either add kludges
that make 32-bit+ possible but inconvenient for the developer, or we can go
to the next logical stage. The consensus for that is 64-bit addressing. OK,
so most devices do less than 64-bit physical addressing, but extending that
is little more than adding pins/balls to the chip.

I've never run across a desktop app that needs more than 2GB of address
space; for that matter my 512MB machine (with no VM) handles anything I
throw at it, though I'm sure it'd be a bit faster if the motherboard
accepted more than that.

Sure, in 2-3 years some users (or more correctly, bad developers) will
require more than 2GB in a single process, but the hype about 64-bit on the
desktop today is way premature. The only point I see in putting 64-bit on
the desktop today is that the volume will drive down prices for 64-bit parts
in servers -- where it is unquestionably needed already.

S
 
