Interesting read about upcoming K9 processors

Patrick Schaaf

Casper H.S. Dik said:
It's not really all that simple: while you can run a pure 64 bit OS on
AMD64, there are many Linux applications only available as "32-bit, IA32"
binaries.

I don't see many of them on the ~1000 Linux systems I've got at work.
There are bound to be some commercial apps where that's the case,
but I'm sure Stephen was thinking about the ease for _open_source_software_
to be ported over.

Why worry about commercial apps? They are supposed to make money, let their
developing companies spend some on porting. If the open source developers
already could do it for free, where should the problem lie? :)

Casper H.S. Dik said:
Solaris for AMD64 will follow the same route as Solaris for 64-bit SPARC:
dual boot, and the ability to run unchanged 32 bit binaries in the 64 bit
environment.

Which is, of course, what Linux on AMD64 already and always provided,
decisions of _distributions_ wrt their development / library support
notwithstanding.

best regards
Patrick
 
Andi Kleen

Casper H.S. Dik said:
It's not really all that simple: while you can run a pure 64 bit OS on
AMD64, there are many Linux applications only available as "32-bit, IA32"
binaries. MS Windows for AMD64 has an even greater need to support all
those 32 bit apps. Solaris for AMD64 will follow the same route as
Solaris for 64-bit SPARC: dual boot, and the ability to run unchanged 32
bit binaries in the 64 bit environment.

Most Linux/x86-64 distributions support 32bit applications just
fine. The kernel certainly can run 32bit applications with very good
compatibility (I'm typing this on an Athlon64 running a 64bit kernel
with an old 32bit SuSE 8.1 userland). The 32bit emulation subsystem
is also very stable, because it is mostly shared now between many
other 64bit architectures, some of which use 32bit as their
primary userland.

There are unfortunately some more x86-64 Linux distributions (left
unnamed) that chose to go the slightly easier route of not providing
32bit compatibility. This means they don't ship the 32bit libraries
and don't use the /lib64 <-> /lib separation to make it easy to
install 32bit programs into the 64bit system.
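
As a rough sketch of what that separation looks like (paths follow the
usual multilib convention; the outputs shown are illustrative):

# Both flavours of a library can be installed side by side:
file /lib64/libc.so.6
#   /lib64/libc.so.6: ELF 64-bit LSB shared object, x86-64 ...
file /lib/libc.so.6
#   /lib/libc.so.6: ELF 32-bit LSB shared object, Intel 80386 ...
# Each binary's ELF interpreter (/lib/ld-linux.so.2 for 32bit,
# /lib64/ld-linux-x86-64.so.2 for 64bit) searches the matching library
# directory, so 32bit programs just work on the 64bit system.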

Of course even with them you can install a 32bit root into a chroot
and run your old programs in there, but it is not a very nice solution
and makes administration more difficult.
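
That workaround looks roughly like this (a sketch; the /chroot32 path
and the program name are placeholders, not anything a distribution
actually ships):

# unpack or install a complete 32bit root filesystem under /chroot32:
mount --bind /proc /chroot32/proc   # give the chroot a working /proc
mount --bind /dev  /chroot32/dev    # ...and the device nodes
chroot /chroot32 /usr/bin/oldapp    # run the legacy 32bit program inside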

I suppose the users will choose with their feet depending on whether
they value 32bit compatibility or not.

-Andi
 
Douglas Siebert

I don't see many of them on the ~1000 Linux systems I've got at work.
There are bound to be some commercial apps where that's the case,
but I'm sure Stephen was thinking about the ease for _open_source_software_
to be ported over.


Do the AMD64 versions of Redhat and SuSE recompile everything? It seems
kind of silly to have a 64 bit /bin/ls, for instance. They always left
most stuff for which performance didn't matter compiled as i386, and for
stuff where performance mattered (the kernel, openssl libraries, etc.)
there were i686 versions. I would assume it is the same way for AMD64
stuff, but perhaps I'm wrong. If I am, it sure seems like they'd have
to do a lot more versions if they bugfix /bin/ls and have to compile a
64 bit version to go along with the i386 version!
 
Andi Kleen

Douglas Siebert said:
Do the AMD64 versions of Redhat and SuSE recompile everything?

Yes, they do.

(Well, mostly: the 32bit compat libraries are 32bit of course. And
there are a few programs left that are 32bit only, the most notable
of them being OpenOffice. Also, usually only 32bit Java is available,
because the 64bit version wasn't ready when the last releases were cut.)

Douglas Siebert said:
It seems kind of silly to have a 64 bit /bin/ls, for instance.

One reason is that a 32bit /bin/ls uses a 32bit time_t and will return
bad dates after January 2038.
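
A quick way to see where a signed 32bit time_t runs out (assuming GNU
date, as these distributions ship): 2^31-1 seconds after the 1970 epoch
is

date -u -d @2147483647
#   Tue Jan 19 03:14:07 UTC 2038    (the last representable second)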

And unlike on some other architectures, 64bit AMD64 programs are not
significantly slower than their 32bit counterparts.

Douglas Siebert said:
They always left most stuff for which performance didn't matter
compiled as i386, and for stuff where performance mattered (the kernel,
openssl libraries, etc.) there were i686 versions. I would assume it is
the same way for AMD64 stuff, but perhaps I'm wrong.

You are wrong.

Douglas Siebert said:
If I am, it sure seems like they'd have to do a lot more versions if
they bugfix /bin/ls and have to compile a 64 bit version to go along
with the i386 version!

And a s390 version and a ia64 version and a ppc64 version and ...

Linux distributions already have the infrastructure in place to handle
multi-architecture updates. Adding another architecture is only a very
small amount of additional work.

-Andi
 
Zalman Stern

Douglas Siebert said:
Do the AMD64 versions of Redhat and SuSE recompile everything?

The SuSE distro I last worked with (8.1 enterprise?) has 64-bit as the
default compilation environment. If you do a "make configure" you get
a 64-bit app. Thus the natural process of building a distro produces
all 64-bit commands.

(E.g. if one installs a 32-bit devel package RPM, it will not be found
for config purposes in the default 64-bit build environment.)
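
A quick check of that default (assuming gcc on such a system; the
output shown is what a SuSE/AMD64 toolchain would typically print):

gcc -dumpmachine
#   x86_64-suse-linux
echo 'int main(void) { return 0; }' > t.c
gcc t.c -o t && file t
#   t: ELF 64-bit LSB executable, x86-64 ...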

Douglas Siebert said:
It seems kind of silly to have a 64 bit /bin/ls, for instance.

"Silly" or "not necessary"? One would have to decide which commands
stay 32-bit on a command-by-command basis, as some do benefit from the
better performance and increased address space. And having ls be
32-bit doesn't gain one much except allowing the same bits to be used
for the 64-bit and 32-bit versions. If one wants that, one can install
the 32-bit distro on top of the 64-bit kernel.

But if one values the executable bits being widely useable across
architectures, it seems silly to have native code for ls at all. All
the basic commands could be Java class files, .NET assemblies, dis
code, or Perl or Python scripts or whatever.

-Z-
 
Anton Ertl

Douglas Siebert said:
Do the AMD64 versions of Redhat and SuSE recompile everything?

Sure, with some exceptions; on Fedora Core 1:

file /bin/*|grep ELF|grep x86-64|wc -l
77
file /bin/*|grep ELF|grep -v x86-64|wc -l
0
file /usr/bin/*|grep ELF|grep x86-64|wc -l
649
file /usr/bin/*|grep ELF|grep -v x86-64|wc -l
68

And these are the 68 binaries in /usr/bin that are still 386:

align bcat berkeley_db31_svc berkeley_db32_svc berkeley_db33_svc
berkeley_db40_svc ch_lab ch_track ch_utt ch_wave db2_archive
db2_checkpoint db2_deadlock db2_dump db2_load db2_printlog db2_recover
db2_stat db31_archive db31_checkpoint db31_deadlock db31_dump
db31_load db31_printlog db31_recover db31_stat db31_upgrade
db31_verify db32_archive db32_checkpoint db32_deadlock db32_dump
db32_load db32_printlog db32_recover db32_stat db32_upgrade
db32_verify db33_archive db33_checkpoint db33_deadlock db33_dump
db33_load db33_printlog db33_recover db33_stat db33_upgrade
db33_verify db40_archive db40_checkpoint db40_deadlock db40_dump
db40_load db40_printlog db40_recover db40_stat db40_upgrade
db40_verify design_filter dp festival festival_client fringe_client
g++296 gcc296 i386-redhat-linux7-c++ i386-redhat-linux7-g++
i386-redhat-linux7-gcc

Douglas Siebert said:
It seems kind of silly to have a 64 bit /bin/ls, for instance.

The 64-bit programs are supposedly smaller and faster. What's probably
more important is that, if all the programs used are 64-bit, you need
only the 64-bit libraries in RAM.
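
As a hedged illustration (output abbreviated; festival is taken from
the 32-bit list above): each class of binary maps its own complete set
of shared libraries, so mixing the two keeps both copies of libc and
friends resident.

ldd /bin/ls
#   libc.so.6 => /lib64/libc.so.6 (...)      the 64-bit copy
ldd /usr/bin/festival
#   libc.so.6 => /lib/libc.so.6 (...)        a separate 32-bit copy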

Douglas Siebert said:
If I am, it sure seems like they'd have to do a lot more versions if
they bugfix /bin/ls and have to compile a 64 bit version to go along
with the i386 version!

I don't think that's a problem for them.

Followups to comp.arch

- anton
 
Rupert Pigott

George Macdonald wrote:

[SNIP]
As usual the Kentster's way of impolitely calling someone a liar... and not
only me. The roadmaps *did* exist! Were they official roadmaps like those
issued to the i-Stooges in your quixotic, privileged position?... nope!
Were they published in magazines and Web sites?... yup! The evidence has
vanished along with bubble memory cheers and i860 effervescence - seems
like you were not paying attention.

Actually... Not quite. I am pretty sure I'd have some Personal Computer
Worlds that presented this road map... I recall my friend "advising" me
on my K7 purchase... He was saying "Don't get it, wait for Merced, it'll
all be over for x86 then.". Not that I believed him because I had some
kinda handle on what the realities of silicon and compilers are, whereas
the press, roadmaps and he didn't. ;)

Cheers,
Rupert
 
Rupert Pigott

Dean Kent wrote:

[SNIP]
And, as I said - I have official Intel roadmaps from 1996 thru 2000. On
paper and in electronic form. You have recollections. Excuse me if I
doubt your powers of recall unless and until some better evidence is
presented. Sorry.

Sure, I'll buy that Intel lies to the public but gives the real dope to
you. :p

I guess the press could have made it up too and that might explain why
Intel's later roadmaps (1998/99) differed so markedly from people's
recollections.

In truth I don't think the history really matters.

IA-64 is just too much like effort to support and the pay-off for
supporting it is minimal. There are a bunch of large IA-64 boxes on
the order book (e.g. SGI, and I suspect that this is where the
end-of-year spike came from). However those boxes will mostly be running
fairly specialised apps and will be nursemaided by very skilled
people with deep pockets.

Being realistic about it, there is sod all market there for ISVs to
port mainstream stuff to. Even worse, Intel can't stick & carrot ISVs
and customers with 64bit addressing because of AMD64. For those
reasons I believe that IA-64 will remain niche for the foreseeable
future.

Further, while Intel will be moving to 64-bit x86 too, AMD is making a
far bigger shift towards it: its mainstream roadmap screams 64bit.
So sure, IA-64 may *perhaps* have outshipped Opteron to date, BUT
its volumes are likely to remain chickenfeed for a long time while
K8 sells by the shedload.

Ignoring the market & ISV issues, consider the fact that IA-64 silicon
relies on HUGE caches for its performance (and the roadmap does not
show any change coming either), so even if its performance does
become persuasive, its cost per unit of performance will remain high...

It's a dead duck as far as volume goes at the moment and it will
remain so. Hell, take a look at the high-street vendors: compare and
contrast the AMD64 box count to the IA-64 box count...

To be honest, I *hate* x86, I wish it would die (I was bitten by a
rabid 286 many years ago)... However I don't think IA-64 is a valid
solution, it's just ugly compared to the classic RISCs like Alpha
and MIPS (ironically both killed by IA-64).

Cheers,
Rupert
 
Yousuf Khan

Rupert said:
Actually... Not quite. I am pretty sure I'd have some Personal
Computer Worlds that presented this road map... I recall my friend
"advising" me on my K7 purchase... He was saying "Don't get it, wait
for Merced, it'll all be over for x86 then.". Not that I believed him
because I had some kinda handle on what the realities of silicon and
compilers are, whereas the press, roadmaps and he didn't. ;)

It's called spin. Spin is the art of turning something truthful into
something that is at the edge of a lie without crossing over. The truth is
Intel was at one time saying that Itanium was going to take over from x86
pretty much by about now. The spin is that Intel never said it in so many
words.

Anyways, there was another article questioning what the purpose of Itanium
really is anymore and what its advantages are:

http://www.vnunet.com/analysis/1157294

<quote>
Priestley added that chips such as Opteron and EM64T will not be as
appealing as Itanium for 64bit computing in the long term, because they will
offer inferior reliability and scalability. "What we'll see is all 32bit
processors [like Opteron and Xeon] being able to address more than 4GB but
it doesn't fundamentally change the nature of the processor. It will still
be 32bit, and still used in the same applications."

Priestley said there will be less overlap between Itanium and the hybrid
32bit/64bit chips than some analysts suggest - even where 64bit versions of
business applications can run on Xeon or Opteron platforms. "The platform
capabilities [of these hybrid processors] don't deliver the reliability or
scalability," he argued. This last claim is the centrepiece of Intel's
argument in favour of Itanium, but critics say that Intel has provided
little evidence of Itanium's superior reliability or scalability.
</quote>

So basically, the press is starting to call Intel out on its claims that
Itanium has superior reliability and scalability.

Yousuf Khan
 
Sander Vesik

In comp.arch Yousuf Khan said:
The move from 16-bit to 32-bit Windows was doubly difficult, because of the
need to replace segment-based addressing with linear addressing. That is not
an issue with the 32-bit to 64-bit port. I think people are justifiably
disappointed in MS, because this one should've been smooth as silk. Their
kernel was ported quickly, but nothing else is being ported quickly.

Umm... Note that WinNT was never 16 bit and that parts of code were
AFAICT always shared with Win9x.

Yousuf Khan said:
I don't think it's entirely as a result of spaghetti code. I think it's also
as a result of some dufusy arbitrary decisions that MS made. For example,
they decided not to support 16-bit protected mode apps in Win64, even though
the AMD64 architecture has no problems running either 16-bit or 32-bit
protected-mode apps in compatibility mode. They've also decided not to allow
the use of x87 FPU under 64-bit mode, relying solely on SSE, even though
AMD64 is perfectly fine with either or both; now I don't know if this is
actually going to be a problem to developers, but it does show that MS is
taking arbitrary decisions. Then it hasn't created a 32-bit to 64-bit device
driver thunking layer, which would've given device driver manufacturers an
additional amount of time to port full 64-bit drivers to Windows.

It's not an *arbitrary* decision - once you look at it from the POV of
"we are doing a new ABI, hey compiler folks, what's the best thing?",
doing it this way makes sense.
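
A small sketch of what that means in practice (assuming an x86-64 Linux
box with gcc; output abbreviated): scalar floating point compiles to
SSE2 instructions, and the x87 stack simply never appears.

echo 'double add(double a, double b) { return a + b; }' > add.c
gcc -O2 -S -o - add.c | grep addsd
#   addsd   %xmm1, %xmm0      (SSE2 scalar add; no x87 faddp emitted)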
 
Sander Vesik

In comp.arch Yousuf Khan said:
I don't think that's been the case since Windows 3.1. That's the version of
Windows that brought the "32-bit file system", which replaced all calls to
DOS and BIOS with protected mode Windows ones. Even if the app made a direct
call to DOS or BIOS, the 32-bit FS would handle the call.

It was slightly more complex given that you could have "16 bit disk drivers"
so the call might still make it back to some non-32bit gunk after the FS.
 
Yousuf Khan

Sander Vesik said:
It was slightly more complex given that you could have "16 bit disk drivers"
so the call might still make it back to some non-32bit gunk after the FS.

I doubt that. Even if they were 16-bit disk drivers, they were *protected
mode* 16-bit disk drivers, so there was no need to transition through a
Virtual-8086 task gate.

The call to V86 was the real killer of performance. An OS call would result
in going to Ring 0 protected mode kernel and drivers, but then that driver
would really be a wrapper of a DOS or BIOS function, and that would result
in executing code in Ring 3 with continuous back-and-forth monitoring by the
Ring 0 code.

Yousuf Khan
 
Sander Vesik

In comp.arch Douglas Siebert said:
Do the AMD64 versions of Redhat and SuSE recompile everything? It seems
kind of silly to have a 64 bit /bin/ls, for instance. They always left
most stuff for which performance didn't matter compiled as i386, and for
stuff where performance mattered (the kernel, openssl libraries, etc.)
there were i686 versions. I would assume it is the same way for AMD64
stuff, but perhaps I'm wrong. If I am, it sure seems like they'd have
to do a lot more versions if they bugfix /bin/ls and have to compile a
64 bit version to go along with the i386 version!

It really depends on whether AMD64 is being treated as an extension of
x86 or not. Deciding that no, you are just going to treat it as a 64-bit
platform where the native ABI is x86-64 and that is what userland uses
by default, is a valid decision. Especially if you don't plan on
running anything legacy.
 
Yousuf Khan

Sander Vesik said:
Umm... Note that WinNT was never 16 bit and that parts of code were
AFAICT always shared with Win9x.

The move from 16-bit to 32-bit Windows didn't happen with Windows NT, it
happened during the time of Windows 3.x. Windows 3.x was the bridge between
16-bit and 32-bit. The Windows 9X/ME series was a continuation of that
bridging system.

Windows NT was supposed to be what those operating systems bridged
themselves to. As it turned out, that is exactly what happened, but it
didn't happen right away. The bridging OSes survived for a long, long time,
and Windows NT evolved into Windows 2000 and finally XP, before the
transition was finally complete.

Yousuf Khan
 
Nick Roberts

Yousuf Khan said:
The move from 16-bit to 32-bit Windows didn't happen with Windows NT,
it happened during the time of Windows 3.x. Windows 3.x was the
bridge between 16-bit and 32-bit. The Windows 9X/ME series was a
continuation of that bridging system.

I don't think that's strictly correct. My memory is that Windows 95
was itself the first Windows which supported 32-bit code, when it was
launched (in 1995 ;-) The immediately prior version of Windows was
3.11, which was 16-bit only (and ran on top of MS-DOS).

Microsoft was banking on most customers immediately switching to
Windows 95. However, a lot of people (including corporate customers)
did not do this, so demand for a way to run Win32 programs under
Windows 3.1x built, and Microsoft quite quickly brought out the
Win32s API (which thunks the 32-bit calls to 16-bit ones).

Yousuf Khan said:
Windows NT was supposed to be what those operating systems bridged
themselves to. As it turned out, that is exactly what happened, but
it didn't happen right away. The bridging OSes survived for a long,
long time, and Windows NT evolved into Windows 2000 and finally XP,
before the transition was finally complete.

I think that's about right, but greatly shortens a very long and
convoluted story. Apparently the NT project actually began in the
late 1980s, and the theme of the ensuing saga seems to be that
Microsoft were permanently struggling to find a place for NT in
their marketing strategies (and failing, until XP).

However, my comments are those of a punter, and not even vaguely authoritative.
 
Dean Kent

Nick Roberts said:
I don't think that's strictly correct. My memory is that Windows 95
was itself the first Windows which supported 32-bit code, when it was
launched (in 1995 ;-) The immediately prior version of Windows was
3.11, which was 16-bit only (and ran on top of MS-DOS).

Microsoft was banking on most customers immediately switching to
Windows 95. However, a lot of people (including corporate customers)
did not do this, so demand for a way to run Win32 programs under
Windows 3.1x built, and Microsoft quite quickly brought out the
Win32s API (which thunks the 32-bit calls to 16-bit ones).

I seem to recall that Win32S was made available prior to Win95, but I may be
mistaken. It seemed to be a 'transition' tool so that developers could
start writing '32 bit' code that would run under Win95 when it arrived (and
perhaps WinNT).

Nick Roberts said:
I think that's about right, but greatly shortens a very long and
convoluted story. Apparently the NT project actually began in the
late 1980s, and the theme of the ensuing saga seems to be that
Microsoft were permanently struggling to find a place for NT in
their marketing strategies (and failing, until XP).

I seem to recall that the WinNT effort followed the failed OS/2 partnership
with IBM. I think it was specifically meant to replace OS/2 1.1 (or 1.2,
whichever was the last MS release). This brings me back to my original
assertion - WinNT was not written to be portable, nor to be upgradable. It
was written to be 32-bit. I find it difficult to believe that in their
haste to come out with WinNT the MS developers took into consideration
the chance that they might have to run on different platforms. If there
was one code base, I think there would not be the 'problem' of supporting
multiple platforms. Consider DB2, which *was* written with portability in
mind. It took several weeks (or perhaps several days) to port it to x86-64,
and it runs on virtually every platform imaginable specifically because of
this.
 
Florian Laws

Dean Kent said:
I seem to recall that the WinNT effort followed the failed OS/2 partnership
with IBM. I think it was specifically meant to replace OS/2 1.1 (or 1.2,
whichever was the last MS release). This brings me back to my original
assertion - WinNT was not written to be portable, nor to be upgradable. It
was written to be 32-bit. I find it difficult to believe that in their
haste to come out with WinNT that the MS developers took into consideration
the chance that they might have to run on different platforms.

WinNT did in fact run on four different platforms:
i386, MIPS, PowerPC and Alpha (all 32-bit little-endian, though).

Regards,

Florian
 
Rob Stow

Dean said:
I seem to recall that Win32S was made available prior to Win95, but I may be
mistaken. It seemed to be a 'transition' tool so that developers could
start writing '32 bit' code that would run under Win95 when it arrived (and
perhaps WinNT).

I have a slightly different recollection of Win32S.

When Win95 came out, Microsoft told developers that if they
wanted to put the Windows logo on their packaging or to
call their software Windows compatible, then it had to be
able to run on all three platforms: 3.x, 95, and NT.
Win32S was more or less a thunking layer that let 32 bit
code made for 95 and NT run on the 16 bit Windows 3.x.

Personally I thought W2K was a roaring success for Microsoft.

Dean said:
I seem to recall that the WinNT effort followed the failed OS/2 partnership
with IBM.

In some ways it is the *cause* of that failed partnership.
Microsoft made no secret of the fact that they were
*simultaneously* working on NT and OS/2 - which led to
a lot of acrimony between IBM and MS. IBM resented
that MS was diverting resources from the OS/2 project,
and IBM also did what they could to prevent MS from using
OS/2 ideas/techniques/code/etc in the development of NT.

Some of the many long delays for OS/2 were attributed to
the fact that the IBMers did not want to share any more
code than they had to with MS - for fear that MS would
abuse their relationship with IBM and use that code for
NT. This meant that often the MS developers working on
OS/2 were kept in the dark about things they needed to know.


As well, I have heard another side of the story about MS
"rushing" to beat OS/2. Seems that perhaps it was not
so much that MS was rushing, but that IBM was doing too
much foot dragging - which is one of the things that caused
MS to start their NT project. Apparently IBM was aiming
for cross-platform perfection while MS wanted to settle for
a "good enough" x86 version, put the OS on the market, and
start recovering some of those development costs. Microsoft's
pockets weren't nearly as deep back then as they are today -
they were deeply in the hole over OS/2 and badly needed to start
seeing revenue from the investment they had made.

Dean said:
I think it was specifically meant to replace OS/2 1.1 (or 1.2,
whichever was the last MS release). This brings me back to my original
assertion - WinNT was not written to be portable, nor to be upgradable. It
was written to be 32-bit. I find it difficult to believe that in their
haste to come out with WinNT that the MS developers took into consideration
the chance that they might have to run on different platforms. If there
was one code base, I think there would not be the 'problem' of supporting
multiple platforms. Consider DB2, which *was* written with portability in
mind. It took several weeks (or perhaps several days) to port it to x86-64,
and it runs on virtually every platform imaginable specifically because of
this.

Mine aren't any more authoritative. I was a programmer back
in those days, and IBM and MS were stabbing each other in
the back as they tried to lure programmers towards developing
for either OS/2 or NT. Each side spread a lot of nasty
rumours about the other side, and it was pretty hard to find
a few nuggets of truth in all of that sh*t.
 
Yousuf Khan

Nick said:
I don't think that's strictly correct. My memory is that Windows 95
was itself the first Windows which supported 32-bit code, when it was
launched (in 1995 ;-) The immediately prior version of Windows was
3.11, which was 16-bit only (and ran on top of MS-DOS).

Microsoft was banking on most customers immediately switching to
Windows 95. However, a lot of people (including corporate customers)
did not do this, so demand for a way to run Win32 programs under
Windows 3.1x built, and Microsoft quite quickly brought out the
Win32s API (which thunks the 32-bit calls to 16-bit ones).

No, Win32S was available long before Windows 95 ever came out (remember,
Windows 95 itself was greatly delayed). It sort of served as a "preview" of
the functionality to come with Windows 95 and beyond. Not that Win32S was an
API greatly used during its time, but it was there nonetheless. It allowed
programs to use more than 16MB of memory, the ceiling of 16-bit 286
Protected Mode (24-bit addressing: 2^24 bytes = 16MB).

Also, even ignoring the Win32S API, Windows 3.1 and beyond itself had quite a
few 32-bit drivers, such as the filesystem and the disk system. There was a
combination of 16-bit and 32-bit protected mode drivers running under
Windows 3.x already. Plus, the Windows 3.x kernel itself was 32-bit in
Enhanced Mode (remember that Standard Mode vs. Enhanced Mode stuff that used
to exist back then?). In Enhanced Mode, it was able to make use of paging and
Virtual-8086 mode for multitasking DOS programs underneath it; both of these
Enhanced Mode features required that the OS be designed for the 386 or later
processors; it wouldn't work on earlier non-32-bit processors.

Yousuf Khan
 
Yousuf Khan

Dean said:
I seem to recall that Win32S was made available prior to Win95, but I
may be mistaken. It seemed to be a 'transition' tool so that
developers could start writing '32 bit' code that would run under
Win95 when it arrived (and perhaps WinNT).

Yup.

Dean said:
I seem to recall that the WinNT effort followed the failed OS/2
partnership with IBM. I think it was specifically meant to replace
OS/2 1.1 (or 1.2, whichever was the last MS release). This brings me
back to my original assertion - WinNT was not written to be portable,
nor to be upgradable. It was written to be 32-bit. I find it
difficult to believe that in their haste to come out with WinNT that
the MS developers took into consideration the chance that they might
have to run on different platforms.

Actually, I think it was meant to be the next generation of VMS. Remember it
was Dave Cutler who spearheaded this project after coming over from DEC to
Microsoft. At DEC he spearheaded VMS, and at Microsoft he wanted to do it
one better.

The "one better" in this case meant that he wanted NT to be more portable
than VMS was at the time (it was originally just a VAX OS, then at great
effort it was ported to become an Alpha OS, now it's been ported to
Itanium).

And yes, Microsoft's own goal was to make NT one better than OS/2 too. So
each conspirator (Cutler & Microsoft) had their own goal posts in mind.

Yousuf Khan
 
