Interesting read about upcoming K9 processors

Paul Repacholi

WinNT did in fact run on four different platforms: i386, MIPS,
PowerPC and Alpha. (All 32-bit little-endian, though.)

NT, aka MICA was 64 bit from day one. It was a VMS derivative for
Prism, the Alpha predecessor. (An inside joke: AXP stands for Almost
eXactly Prism. Not totally far from the truth!)

Windows was DEFINED to be 32 bit. Except for the many 16 bit
leftovers! PPros showed them up very quickly. So all of the Windows
defined APIs had to also be 32 bit, and still are.

--
Paul Repacholi 1 Crescent Rd.,
+61 (08) 9257-1001 Kalamunda.
West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.
 

Nick Roberts said:
I don't think that's strictly correct. My memory is that Windows 95
was itself the first Windows which supported 32-bit code, when it was
launched (in 1995 ;-) The immediately prior version of Windows was
3.11, which was 16-bit only (and ran on top of MS-DOS).

That's pretty oversimplified. There were things running before Win95
that took advantage of the 386. EMM386.exe (and HiMem.sys ?) did some
funky things to give DOS apps more breathing room. Plus I seem to
recall that the actual implementation of the local/global heap code
was different on a 386. There was definitely a different kernel module
(krnl286.exe vs. krnl386.exe) if you were on a 386.

But all that was hidden at the API level, which was strictly 16-bit.
Pointers to API calls were defined as DWORD I believe? I suppose that
you could argue from an academic perspective that "handles" were
bit-independent though.

Microsoft was banking on most customers immediately switching to
Windows 95. However, a lot of people (including corporate customers)
did not do this, so demand for a way to run Win32 programs under
Windows 3.1x built, and Microsoft quite quickly brought out the
Win32s API (which thunks the 32-bit calls to 16-bit ones).

Actually I don't think there was any external demand for this.
Microsoft wanted to force everyone to upgrade to the new 32-bit
development environment, and end-of-life VC++ 1.52. They couldn't get
corporate developers who were working in a Win 3.x only shop to
abandon good old 1.52 without the ability to run what they built on
their users' machines. So you're correct that it was indirectly
related to the slow uptake of Win95, but not driven by market demand
as much as MS's own agenda to jump start 32 bit development. They knew
only too well from OS/2 what happens when people don't write native
apps for your shiny new OS, but just use it as a more convenient way
to run older apps.

I think that's about right, but greatly shortens a very long and
convoluted story. Apparently the NT project actually began in the
late 1980s, and the theme of the ensuing saga seems to be that
Microsoft were permanently struggling to find a place for NT in
their marketing strategies (and failing, until XP).

I think it also had to do with there being no "clean" cut-off point.
A lot of the crappier stuff in Win9x was there for the greatest
possible backward compatibility. Remember this was a time when there
was no DirectX, and games wrote directly to the hardware (and often
required "quitting" Windows entirely in order to run). Device drivers
stayed 16 bit for a LONG time. I bought a brand new HP scanner in 1998
that *still* used a 16 bit driver and wrote directly to the parallel
port.
 
Dean Kent

WinNT did in fact run on four different platforms:
i386, MIPS, PowerPC and Alpha. (All 32-bit little-endian, though.)

Yes, but were they written with a 'port' to 64 bit addressing in mind? (It
seems from your comment that the answer would be no.) I am
also curious as to whether all of these were entirely different code bases,
or if they shared a lot of code with some 'flags' to determine which
libraries/routines/includes/macros/whatever to compile with (not sure how it
works with C, as I am more familiar with COBOL or S/390 assembler).

Regards,
Dean
 
Dean Kent

Paul Repacholi said:
(e-mail address removed) (Florian Laws) writes:


NT, aka MICA was 64 bit from day one. It was a VMS derivative for
Prism, the Alpha predecessor. (An inside joke: AXP stands for Almost
eXactly Prism. Not totally far from the truth!)

Windows was DEFINED to be 32 bit. Except for the many 16 bit
leftovers! PPros showed them up very quickly. SO all of the Windows
defined APIs had to also be 32 bit, and still are.

Apologies for being dense - but when you say "Windows was defined to be 32
bit" are you referring to Win95, or all flavors of Windows? If the latter,
how is that to be reconciled with your first statement?

Regards,
Dean
 
Ken Hagan

Nick said:
I don't think that's strictly correct. My memory is that Windows 95
was itself the first Windows which supported 32-bit code, when it was
launched (in 1995 ;-) The immediately prior version of Windows was
3.11, which was 16-bit only (and ran on top of MS-DOS).

The first version of NT (1993) called itself Windows NT 3.1. It had its
faults, but it was 32-bit all the way through and more robust than
any DOS based Windows either before or after. On anything other than
minimum hardware it was faster too. Since any sane application ran
fine on it, I consider it a version of Windows.

Sadly, drivers were another matter. Every version of Windows 3x or
9x from that point on existed merely to support hardware that didn't
have an NT driver. (IIRC, for each of Win31, 95, 98 and ME, Microsoft
stated at the time of release that it would be the last such version.)

Microsoft designed Win31 (the DOS version) with some hooks in the
loader which allowed them to offer Win32s very soon after. Win32s
made it attractive for people like me to write 32-bit software even
though none of my customers had a 32-bit OS. As a result, Win95 had
an installed base of applications even before it was released and
OS/2 never had one, at all, ever. (Ducks for cover.)

All of which preceded Win95 by 2 or 3 years.
 
David Brown

Dean Kent said:
I seem to recall that Win32S was made available prior to Win95, but I may be
mistaken. It seemed to be a 'transition' tool so that developers could
start writing '32 bit' code that would run under Win95 when it arrived (and
perhaps WinNT).

Win32S was available before Win95, as was NT 3.51 (I believe NT 3.5 was the
first easily-available version of NT - marketing's idea of persuading people
that it was a mature product). Win32s implemented some of the Win32 API
from NT - it allowed programs to use a full 32-bit address space, but did
not support multi-tasking. It was mainly used for "big" programs, like CAD,
or development tools, which could take advantage of the better memory
management. Back in the days when MS made at least a token attempt at
pretending they could co-operate with other people, the aim was that Win32s
would be a portable api for use on Win3.1, NT, and Win95 (which was due out
"real soon now"), along with other systems including OS/2 and unix systems.
Of course, after giving IBM a license to put win32s 1.25 into OS/2 Warp, MS
immediately upped the version number to 1.30 (the version number being the
only real change) so that new win32s programs would refuse to run on OS/2.

Incidentally, the only win32s program MS ever made, AFAIK, was freecell.

I seem to recall that the WinNT effort followed the failed OS/2
partnership

WinNT caused the failure - MS realised that they did not have full control
of OS/2, so they started the WinNT project behind IBM's back, with a fair
amount in common at the base (which they were legally allowed to do). NT
has far more in common with OS/2 in its guts than it does with VMS.
with IBM. I think it was specifically meant to replace OS/2 1.1 (or 1.2,
whichever was the last MS release). This brings me back to my original
assertion - WinNT was not written to be portable, nor to be upgradable. It
was written to be 32-bit. I find it difficult to believe that in their

WinNT was written to be 32-bit - it followed mainly from OS/2 2.0 (the
original OS/2 1.x was 16-bit, in common with Win3.x). But it was written to
be portable, within certain restrictions - it required a 32-bit
little-endian cpu. It ran on PowerPC, MIPS (this was in fact the main
platform for NT), Alpha, and x86. Even though all but the x86 had 64-bit
variants, and all ran best in big-endian rather than little-endian mode,
WinNT stuck to 32-bit little-endian.
haste to come out with WinNT that the MS developers took into consideration
the chance that they might have to run on different platforms. If there
was one code base, I think there would not be the 'problem' of supporting
multiple platforms. Consider DB2, which *was* written with portability in
mind. It took several weeks (or perhaps several days) to port it to x86-64,
and it runs on virtually every platform imaginable specifically because of
this.

Porting applications between architectures is not nearly as much work as
porting an OS. Original windows (Win3.x, then Win9x) has had lots of
architecture-specific code and assembly language scattered randomly about
the code base, making them a porting nightmare. Original NT 3.5 was nicely
modular, with the main code in C and only a few specific bits being
architecture-specific. This made it fairly easily portable. The same
applies to Linux, *bsd, etc. NT 4.0 onwards re-introduced the spagetti
organisation, as code (in particular, graphics code) was made
architecture-specific in the name of performance. Thus the NT platforms
died one after one as the maintainance costs sky-rocketed, and the cpu
manufacturers refused to pay MS' development costs.

Porting applications, on the other hand, is far easier. You have to
consider the effect of having 64-bit integers, and you might have to
consider endian issues, but mostly (if the original code is well-written)
it's a matter of re-compiling and testing the new binary.
 
Paul Repacholi

Apologies for being dense - but when you say "Windows was defined to
be 32 bit" are you referring to Win95, or all flavors of Windows?
If the latter, how is that to be reconciled with your first
statement?

WindowsNT on, not Windows 3.x or earlier; they were 16-bit
throughout as far as I could see.

It was not a clean cut though; there was a lot of 16 bit code in
Windows NT 3.5 and 3.51. NT 4.0 had lots less, but also NT and
Windows were now totally welded together. My memory of the times
was that NT was the way of the future, and all the others were to go.

Win 95 was still a DOS based system, but with a curtain pulled over the
DOS parts. You have to dig to find it, but much of it is still there.
Don't know about ME, thank god!

2000 was ready to go to the Win64 interface on Alpha but was stopped
from proceeding by M$.

 
Dean Kent

David Brown said:
Porting applications, on the other hand, is far easier. You have to
consider the effect of having 64-bit integers, and you might have to
consider endian issues, but mostly (if the original code is well-written)
its a matter of re-compiling and testing the new binary.

Well, I am not an OS developer - but I do work on system level applications
(requiring hooks into the OS, use of low-level OS services, etc.). It
seems to me that the biggest problem for the OS is the compatibility with
existing applications. There will be control blocks (structures, whatever)
and APIs that include addresses to data areas, calls to communicate with
other programs, etc. You can't just change the size of the address areas
for two reasons - the offsets for following data areas would change
(requiring all applications using that structure/control block to be
recompiled), and you have no idea if the applications using it can actually
use the 'bigger' addresses. It also seems to me that you would have to
include a mechanism whereby the OS knows whether the application is running
32-bit or 64-bit to make sure it doesn't allocate memory where the
application can't access it - and it can't load the program in an address
range it can't handle. The calls to other modules would have to include a
mechanism whereby programs running in different addressing modes can still
pass parameters and data to each other without causing major problems and
crashes.

IOW, moving an OS to 64-bit would not only require changing integer sizes
and recompiling, but actually redefining the interfaces to work with both
32-bit and 64-bit programs (perhaps even mixed addressing modes). Making
sure all of this works would be a huge undertaking, and could create
situations where 95% of the code works fine, but a few critical apps have
major problems that delay the OS release.

Backwards compatibility matters, particularly in an environment where the OS
vendor has support issues to deal with and wants to control what is running
in the field. It may not be as big an issue with Linux (or may be), since
there are no support issues to deal with and there is no attempt to control
what is running in the field. "Want a 32-bit OS? Fine, run an older
kernel - or someone else's 64-bit kernel that has support for it."

At least, it works that way in the environment I am familiar with (not
Windows or Unix... :).

Regards,
Dean
 
Dean Kent

Paul Repacholi said:
2000 was ready to go to the Win64 interface on Alpha but was stopped
from proceding by M$.

Is it your understanding that this is the basis for the current 64-bit
Windows implementation? Also, out of curiosity (as I don't work on
Windows), are there updated APIs for 64-bit applications that are backwards
compatible with the 32-bit ones - or has MS somehow designed their
interfaces/control blocks/etc. so that this isn't a concern?

Regards
Dean
 
Roger Binns

Dean said:
Windows implementation? Also, out of curiosity (as I don't work on
Windows), are there updated APIs for 64-bit applications that are
backwards compatible with the 32-bit ones - or has MS somehow
designed their interfaces/control blocks/etc. so that this isn't a
concern?

Short answer is yes. Long answer follows:

Microsoft of all companies is very proficient at word size changes,
backwards compatibility, and API design borne of necessity rather
than purist design.

These are the transitions they went through:

- 8 bit to 16 bit (early machines, Basics, cloning CP/M)
- 16 bits with slightly wider memory (DOS, extended memory, 286
and 386 flavours of early Windows 1/2)
- 16 to 32 bits (but mostly 16) in Windows 3
- 32 bits with lots of 16 (Windows 95)
- "Clean" design 32 (Windows NT)
- 32 bits with longer addresses (PAE, Alpha, MIPS)
- 64 bits (Itanic, AMD64)

In general a binary is marked for a particular subsystem/version
which implies sizes. When the underlying OS uses a different
size various thunking or stub mechanisms are used to translate
to the right sizes, as well as lots of backwards compatibility
shims.

Note that backwards compatibility doesn't just mean matching the
documented behaviour. It also means preserving bugs and other quirks.
For example Windows 95 specifically recognised the Windows 3 version
of SimCity, which had a bug that used memory just after it had been
freed. For that program, Windows 95 runs the memory manager in a
different mode that ensures no crash happens.

I recommend reading the historical articles in Raymond Chen's blog.

http://weblogs.asp.net/oldnewthing/category/2282.aspx?Show=All

A selection:

Why 16-bit DOS and Windows are still with us
http://weblogs.asp.net/oldnewthing/archive/2004/03/01/82103.aspx

History of calling conventions:
http://weblogs.asp.net/oldnewthing/archive/2004/01/02/47184.aspx
http://weblogs.asp.net/oldnewthing/archive/2004/01/07/48303.aspx
http://weblogs.asp.net/oldnewthing/archive/2004/01/08/48616.aspx
http://weblogs.asp.net/oldnewthing/archive/2004/01/13/58199.aspx
http://weblogs.asp.net/oldnewthing/archive/2004/01/14/58579.aspx

Sometimes an app just wants to crash:
http://weblogs.asp.net/oldnewthing/archive/2003/12/19/44644.aspx

When programs grovel into undocumented structures:
http://weblogs.asp.net/oldnewthing/archive/2003/12/23/45481.aspx

Why not just block the apps that rely on undocumented behaviour:
http://weblogs.asp.net/oldnewthing/archive/2003/12/24/45779.aspx

Why are structure sizes checked strictly:
http://weblogs.asp.net/oldnewthing/archive/2003/12/12/56061.aspx

What do the letters W and L stand for in WPARAM and LPARAM?
http://weblogs.asp.net/oldnewthing/archive/2003/11/25/55850.aspx

What about BOZOSLIVEHERE and TABTHETEXTOUTFORWIMPS?
http://weblogs.asp.net/oldnewthing/archive/2003/10/15/55296.aspx

Why is address space allocation granularity 64K?
http://weblogs.asp.net/oldnewthing/archive/2003/10/08/55239.aspx

My all time favourite about why Win95 didn't use the HLT instruction:
http://weblogs.asp.net/oldnewthing/archive/2003/08/28/54719.aspx

Roger
 
David Brown

Dean Kent said:
Well, I am not an OS developer - but I do work on system level applications
(requiring hooks into the OS, use of low-level OS services, etc.). It
seems to me that the biggest problem for the OS is the compatibility with
existing applications. There will be control blocks (structures, whatever)
and APIs that include addresses to data areas, calls to communicate with
other programs, etc. You can't just change the size of the address areas
for two reasons - the offsets for following data areas would change
(requiring all applications using that structure/control block to be
recompiled), and you have no idea if the applications using it can actually
use the 'bigger' addresses. It also seems to me that you would have to
include a mechanism whereby the OS knows whether the application is running
32-bit or 64-bit to make sure it doesn't allocate memory where the
application can't access it - and it can't load the program in an address
range it can't handle. The calls to other modules would have to include a
mechanism whereby programs running in different addressing modes can still
pass parameters and data to each other without causing major problems and
crashes.

IOW, moving an OS to 64-bit would not only require changing integer sizes
and recompiling, but actually redefining the interfaces to work with both
32-bit and 64-bit programs (perhaps even mixed addressing modes). Making
sure all of this works would be a huge undertaking, and could create
situations where 95% of the code works fine, but a few critical apps have
major problems that delay the OS release.

That's all very true - that's more reasons to back up my "porting an
application is relatively easy, porting an OS is hard" post. The OS has to
sit between the hardware and the apps - I only mentioned that the low-level
stuff was hard to port, but you are correct that there are issues on the app
side to consider. Hence Win-on-win for 16-bit app support on 32-bit WinNT,
and 32-bit loaders and libraries as well as 64-bit versions on 64-bit linux
and Win64.
Backwards compatibility matters, particularly in an environment where the OS
vendor has support issues to deal with and wants to control what is running
in the field. It may not be as big an issue with Linux (or may be), since
there are no support issues to deal with and there is no attempt to control
what is running in the field. "Want a 32-bit OS? Fine, run an older
kernel - or someone else's 64-bit kernel that has support for it."

Most people get their linux from OS vendors - what else would you call
Mandrake, Suse (Novell), Red Hat, etc.? Support and backwards
compatibility are a big issue - in the linux world, if something works then
people use it; they don't re-write it and expect people to pay for upgrades
just so that it will run on the latest version of the OS. There are programs
in continuous use on linux systems (and any other *nix system) that haven't
been changed in years - decades, maybe - simply because they already do the
job they were meant to do. Most of these can, of course, simply be
re-compiled as 64-bit for a 64-bit linux, but there are certainly plenty of
occasions when you would want to run 32-bit binaries on 64-bit linux.
 
Stephen Sprunk

There are a few, yes, but most of the folks developing Linux couldn't care
less if they cause closed-source developers a little pain. I haven't heard
any complaints about problems running i386 binaries on amd64 kernels, so it
appears to be a non-issue -- unlike with WinXP64.

Actually, it applies to commercial software too. I worked at a company
where the main product (an embedded OS) was shipped on a dozen different CPU
types and once the cost of porting was paid (over a decade ago) for the
second arch, the cost of adding a new arch was nearly zero.

The key is making sure all new code is written portably, which so far has
not been necessary in the Windows world. Some ISVs may decide it's cheaper
to develop non-portable code over time; they'll pay dearly for porting once
a decade and usually drop support for the older arch (as happened in the
Win16-to-Win32 conversion).

Do the AMD64 versions of Redhat and SuSE recompile everything? It seems
kind of silly to have a 64 bit /bin/ls, for instance. They always left
most stuff for which performance didn't matter compiled as i386, and for
stuff where performance mattered (the kernel, openssl libraries, etc.)
there were i686 versions. I would assume it is the same way for AMD64
stuff, but perhaps I'm wrong. If I am, it sure seems like they'd have
to do a lot more versions if they bugfix /bin/ls and have to compile a
64 bit version to go along with the i386 version!

If /bin/ls were patched, somebody has to recompile it for dozens of other
platforms, so what's the marginal cost in recompiling for amd64? It seems
to be less than the cost of deciding which binaries on an amd64 installation
should be kept as i386 and compiling your distribution appropriately.

S
 
Nate Edel

In comp.sys.intel David Brown said:
Win32S was available before Win95, as was NT 3.51 (I believe NT 3.5 was the
first easily-available version of NT - marketing's idea of persuading people
that it was a mature product).

NT 3.1 was the first version available, with the 3.1 selected to be in
parallel with the version of (non-NT) Windows available at the time.

not support multi-tasking. It was mainly used for "big" programs, like CAD,
or development tools, which could take advantage of the better memory
management.

And for a lot of the Windows 3.x web browsers. I remember deploying a ton
of copies of Win32s so the lawyers I was supporting could use some
now-archaic version of netscape while the higher-ups in IT planned how to
transition to Win 95 (and then to WinNT 4, which was what we ended up moving
to, right around when I left.)
 
