64 bit processors


CBFalconer

Aardvark J. Bandersnatch said:
That has always puzzled me. Why in the heck did the micro designers
take that route?

It started with the 8008, which had seven 8 bit registers designated a,
b, c, d, e, h, l. h and l were manipulated only as 8 bits each,
but the combination could be used to address up to a glorious 16k
of external memory. Note that hl pair, introducing register
specialization. Addressing was via a 3 bit field, which could
specify the 7 registers and the indirect via hl address, known as
m. 1/4 of the instructions simply moved data from one register to
another. Arithmetic was done with the a register (accumulator)
implied, which is also a register specialization.

The architecture was continued, for marketing and familiarity
reasons, into the 8080, which added a flags register (8 bit), an sp
register (16 bit) for stack, and instructions to combine the bc,
de, and hl registers into 16 bits and do arithmetic with them.
This was a real computer, with 16 bit addressing and an 8 bit path to
external memory. The whole personal computer explosion was really based on
this chip. Now the sp register joined the hl register as
specialized. A set of 16 bit arithmetic instructions was added
using the hl register as the 16 bit accumulator. The 6502 was
competition, and the Z80 was an enhancement (the Z80 added more
specialized registers). Other chips had little influence.

The next step was the 8086 (and its 8 bit bussing clone, the
8088). Again, the register architecture was continued, with added
specialized registers and usages. The bc pair became the counting
register for string operations. The si and di indexing registers
(special purpose) were added. The bp (base register for stack
scopes) was added, all specialized. Also the various segment
registers. The adaptations kept the actual code size short yet
greatly expanded the addressing capabilities.

Other major steps were to the 80286 (not really significant) and
the 80386, which is the architectural base of most PC class
machines today.
 

General Schvantzkoph

The Intel 8088 - upon which all this is built - was the biggest pile of crap
ever invented. Certainly compared to the marvellous, wonderful, brilliant,
fabulous Motorola 68000 that was around at the same time. No contest. The
Motorola architecture was a programmer's dream, with loads of 32-bit
multipurpose registers, the ability to address loads of memory in contiguous
chunks (no segment registers, yuk) and a great instruction set. And it ran
like the wind.

Unfortunately, some idiot in IBM decided - for reasons unknown to me - to
implement the 8088 in the first IBM PC. It might have had something to do with
wanting to keep the memory bus 8-bits wide to keep the cost down. The 8088
was 16-bit internally, with an 8-bit data bus. Joke. A 68000
implementation would have meant a more complex motherboard with 16 bit
memory.

Another possibility is the lack of a standard 68000 OS at the time. Before
the PC was born, the only "standard" OS was CP/M and this ran on Zilog Z80s
and on Intel. CP/M 68K hadn't been written yet. I think IBM's early plans
were to sell PCs with CP/M, but as we know a certain Bill Gates changed all
that by writing the diabolical MS-DOS in his garage.

Anyway, the rest is history. Motorola didn't get the IBM PC, Intel did and
we have had to put up with the consequences ever since.

Of course the x86 architecture got better. The 8086 (as featured in the IBM
XT) at least had a 16-bit data bus and the PC-AT with the Intel 80286
finally brought some decent speed, even if programming it was still a
nightmare.

I could go on. But I'll stop there.

Chip

IBM wanted to use the 68K but Motorola was late and Intel had the 8088
ready to go. Nobody thought the original PC was going to be anything more
than a one-off type of machine, so it didn't matter that it used an awful
processor. The 8088 was better than the Z80 which powered the competing
CP/M machines of the time and that's all that mattered. If IBM had thought
that the PC was going to become a standard that would last for decades
they never would have allowed Microsoft to own the operating system and
they probably would have used a microprocessor of their own rather than
use an Intel part. Even Intel didn't think the x86 architecture was going
to last, they were working on a part called the 432 which they thought was
the machine of the future. The 432 was a fiasco. The 432 was what was
called a capability based architecture, it used hardware to manage
objects. It was a terrible idea that was way beyond the technology of the
time. Intel tried to kill the x86 again in the late 80s with the i860
which was a RISC machine of sorts. The i860 was more successful than the
432, it was a pretty good DSP and it found its way into a lot of parallel
processors. Intel sold thousands of them, which means they lost their
shirts on it because a chip company needs to sell millions of parts, not
thousands, but at least it wasn't the embarrassment that the 432 had been.
The latest attempt to kill the x86 was the Itanium, which has been even less
successful than the i860. Intel hasn't officially thrown in the towel on
the Itanium but everyone else has. Microsoft isn't supporting it in their
upcoming clustering version of XP Server which is the first step towards
dropping support altogether. Intel has a couple more generations of
Itaniums still on their roadmap but they no longer talk about desktop
Itaniums, only high end servers. So the x86, a baby so ugly that its own
mother has tried to drown it three times, continues to live on as the
x86-64 and will be with us for years to come.
 

VWWall

General Schvantzkoph said:
The Intel 8088 - upon which all this is built - was the biggest pile of crap
ever invented. Certainly compared to the marvellous, wonderful, brilliant,
fabulous Motorola 68000 that was around at the same time. No contest. The
Motorola architecture was a programmer's dream, with loads of 32-bit
multipurpose registers, the ability to address loads of memory in contiguous
chunks (no segment registers, yuk) and a great instruction set. And it ran
like the wind.

Unfortunately, some idiot in IBM decided - for reasons unknown to me - to
implement the 8088 in the first IBM PC. It might have had something to do with
wanting to keep the memory bus 8-bits wide to keep the cost down. The 8088
was 16-bit internally, with an 8-bit data bus. Joke. A 68000
implementation would have meant a more complex motherboard with 16 bit
memory.

We are doomed to live with the decisions of our ancestors!
Another possibility is the lack of a standard 68000 OS at the time. Before
the PC was born, the only "standard" OS was CP/M and this ran on Zilog Z80s
and on Intel. CP/M 68K hadn't been written yet. I think IBM's early plans
were to sell PCs with CP/M, but as we know a certain Bill Gates changed all
that by writing the diabolical MS-DOS in his garage.

Actually, Gary Kildall wrote CP/M. IBM did offer it as an OS, but MS
issued its own version as MS-DOS, and sold it to IBM.
Anyway, the rest is history. Motorola didn't get the IBM PC, Intel did and
we have had to put up with the consequences ever since.

Segmented memory anyone!
Of course the x86 architecture got better. The 8086 (as featured in the IBM
XT) at least had a 16-bit data bus and the PC-AT with the Intel 80286
finally brought some decent speed, even if programming it was still a
nightmare.

The Z-80 had some features that had to be used by writing ASM code with
"DB" calls. Adam Osborne was the only one to offer it in a machine.
 

Wes Newell

The Z-80 had some features that had to be used by writing ASM code with
"DB" calls. Adam Osborne was the only one to offer it in a machine.

Radio Shack TRS-80 users would probably disagree with this.
 

Aardvark J. Bandersnatch, BLT, MP, PBJ, LSMFT

CBFalconer said:
It started with the 8008, which had seven 8 bit registers designated a,
b, c, d, e, h, l. h and l were manipulated only as 8 bits each,
but the combination could be used to address up to a glorious 16k
of external memory. Note that hl pair, introducing register
specialization. Addressing was via a 3 bit field, which could
specify the 7 registers and the indirect via hl address, known as
m. 1/4 of the instructions simply moved data from one register to
another. Arithmetic was done with the a register (accumulator)
implied, which is also a register specialization.

The architecture was continued, for marketing and familiarity
reasons, into the 8080, which added a flags register (8 bit), an sp
register (16 bit) for stack, and instructions to combine the bc,
de, and hl registers into 16 bits and do arithmetic with them.
This was a real computer, with 16 bit addressing and an 8 bit path to
external memory. The whole personal computer explosion was really based on
this chip. Now the sp register joined the hl register as
specialized. A set of 16 bit arithmetic instructions was added
using the hl register as the 16 bit accumulator. The 6502 was
competition, and the Z80 was an enhancement (the Z80 added more
specialized registers). Other chips had little influence.

The next step was the 8086 (and its 8 bit bussing clone, the
8088). Again, the register architecture was continued, with added
specialized registers and usages. The bc pair became the counting
register for string operations. The si and di indexing registers
(special purpose) were added. The bp (base register for stack
scopes) was added, all specialized. Also the various segment
registers. The adaptations kept the actual code size short yet
greatly expanded the addressing capabilities.

What a kludge! We designed better stuff in hardware classes. Argh.
 

Roy

Chip said:
IBM wanted to use the 68K but Motorola was late and Intel had the 8088
ready to go. Nobody thought the original PC was going to be anything more
than a one-off type of machine, so it didn't matter that it used an awful
processor. The 8088 was better than the Z80 which powered the competing
CP/M machines of the time and that's all that mattered. If IBM had thought
that the PC was going to become a standard that would last for decades
they never would have allowed Microsoft to own the operating system and
they probably would have used a microprocessor of their own rather than
use an Intel part. Even Intel didn't think the x86 architecture was going
to last, they were working on a part called the 432 which they thought was
the machine of the future. The 432 was a fiasco. The 432 was what was
called a capability based architecture, it used hardware to manage
objects. It was a terrible idea that was way beyond the technology of the
time. Intel tried to kill the x86 again in the late 80s with the i860
which was a RISC machine of sorts. The i860 was more successful than the
432, it was a pretty good DSP and it found its way into a lot of parallel
processors. Intel sold thousands of them, which means they lost their
shirts on it because a chip company needs to sell millions of parts, not
thousands, but at least it wasn't the embarrassment that the 432 had been.
The latest attempt to kill the x86 was the Itanium, which has been even less
successful than the i860. Intel hasn't officially thrown in the towel on
the Itanium but everyone else has. Microsoft isn't supporting it in their
upcoming clustering version of XP Server which is the first step towards
dropping support altogether. Intel has a couple more generations of
Itaniums still on their roadmap but they no longer talk about desktop
Itaniums, only high end servers. So the x86, a baby so ugly that its own
mother has tried to drown it three times, continues to live on as the
x86-64 and will be with us for years to come.

So how do the G5 and G4 chips that Apple uses compare to the the x86?
 

General Schvantzkoph

So how do the G5 and G4 chips that Apple uses compare to the the x86?

The PowerPC architecture is a clean, well designed architecture. IBM had a
research program in the 1970s called the 801 project (after a room number
at Watson Research) which helped define the concept of RISC (reduced
instruction set computer) architectures. The PPC is an outgrowth of the 801
program. Stanford had a similar project called MIPS which was
commercialized, and Berkeley had the Berkeley RISC project which became the
Sun SPARC, and Digital did the Alpha architecture. All of those machines,
with the exception of SPARC, consistently outperformed the x86 in each
generation. However performance isn't nearly as important as available
software so the x86 has driven all of the RISC machines, with the
exception of the PPC which survives in the Mac and in IBM servers, into
the grave. What Intel ended up proving is that if you throw enough money
at the problem you can overcome the inherent disadvantages of the
instruction set. Because of the x86's market share Intel can afford to
spend billions on each new processor and more importantly on each new
semiconductor process. The G5 is still faster in scientific applications
(because of its vastly superior vector instruction set) but it's no
faster than the P4 in most general purpose applications.
 

Roy

General Schvantzkoph said:
The PowerPC architecture is a clean, well designed architecture. IBM had a
research program in the 1970s called the 801 project (after a room number
at Watson Research) which helped define the concept of RISC (reduced
instruction set computer) architectures. The PPC is an outgrowth of the 801
program. Stanford had a similar project called MIPS which was
commercialized, and Berkeley had the Berkeley RISC project which became the
Sun SPARC, and Digital did the Alpha architecture. All of those machines,
with the exception of SPARC, consistently outperformed the x86 in each
generation. However performance isn't nearly as important as available
software so the x86 has driven all of the RISC machines, with the
exception of the PPC which survives in the Mac and in IBM servers, into
the grave. What Intel ended up proving is that if you throw enough money
at the problem you can overcome the inherent disadvantages of the
instruction set. Because of the x86's market share Intel can afford to
spend billions on each new processor and more importantly on each new
semiconductor process. The G5 is still faster in scientific applications
(because of its vastly superior vector instruction set) but it's no
faster than the P4 in most general purpose applications.


What a shame!
 

CBFalconer

Aardvark J. Bandersnatch said:
What a kludge! We designed better stuff in hardware classes. Argh.

But were they backwards compatible? After writing a few macros, I
could probably run most 8008 source programs on a Pentium.
Remember, at each step, the biggest customer base is the users of
the previous generation. IBM showed the efficiency of this 40
years ago with the 360 instruction set, which preserved binary
compatibility.

Actually I am quite happy to be able to run code from 25 years ago,
for which I have lost the source, yet it executes correctly.
 

John Smithe

IBM wanted to use the 68K but Motorola was late and Intel had the 8088
ready to go. Nobody thought the original PC was going to be anything
more than a one-off type of machine, so it didn't matter that it used an
awful processor. The 8088 was better than the Z80 which powered the
competing CP/M machines of the time and that's all that mattered. If IBM
had thought that the PC was going to become a standard that would last
for decades they never would have allowed Microsoft to own the operating
system and they probably would have used a microprocessor of their own
rather than use an Intel part. Even Intel didn't think the x86
architecture was going to last, they were working on a part called the
432 which they thought was the machine of the future. The 432 was a
fiasco. The 432 was what was called a capability based architecture, it
used hardware to manage objects. It was a terrible idea that was way
beyond the technology of the time. Intel tried to kill the x86 again in
the late 80s with the i860 which was a RISC machine of sorts. The i860
was more successful than the 432, it was a pretty good DSP and it found
its way into a lot of parallel processors. Intel sold thousands of them
which means they lost their shirts on it because a chip company needs to
sell millions of parts, not thousands, but at least it wasn't the
embarrassment that the 432 had been. The latest attempt to kill the x86
was the Itanium, which has been even less successful than the i860. Intel
hasn't officially thrown in the towel on the Itanium but everyone else
has. Microsoft isn't supporting it in their upcoming clustering version
of XP Server which is the first step towards dropping support
altogether. Intel has a couple more generations of Itaniums still on
their roadmap but they no longer talk about desktop Itaniums, only high
end servers. So the x86, a baby so ugly that its own mother has tried
to drown it three times, continues to live on as the x86-64 and will be
with us for years to come.

Way back when I heard that IBM wanted assurances of backward compatibility in
future microprocessors. The claim was that Intel said yes and Motorola said
no. Can you comment about this?

TIA
 

Mitch Crane

Actually I am quite happy to be able to run code from 25 years ago,
for which I have lost the source, yet it executes correctly.

Not to argue with your point, which is correct, but you don't really need a
compatible instruction set for 25 year-old code--you can emulate. It's
really a matter of being able to run the apps you are using now when you
change hardware. Apple did a pretty good job making current 68k apps run on
the PPC, too.
 

General Schvantzkoph

Way back when I heard that IBM wanted assurances of backward compatibility in
future microprocessors. The claim was that Intel said yes and Motorola said
no. Can you comment about this?

TIA

I doubt that they were thinking about backward compatibility when they did
the original PC. The PC was an experiment to see if IBM could get
something out the door quickly and cheaply like a startup. At the time
Apple was taking off with the Apple II and IBM wanted to show that they
could compete in that market too. None of the established computer
companies was enthusiastic about personal computers because they didn't see
any way of making any money selling cheap machines. IBM was making
billions selling multi-million dollar mainframes; how could they possibly
make as much selling thousand dollar machines (the original PC was around
$3000). Nevertheless, they wanted to cover their bets just in case
personal computers became important. I remember talking to an engineer
from the PC division at a conference in the early 80s. He said that they
had been forbidden to use any IBM resources to build the PC, they had to
go outside for everything just like a startup would have to do. The result
was the PC which used an Intel processor and Microsoft operating system.
Like a startup they were primarily concerned with getting it out the door
as quickly as possible. Motorola had slipped the 68K's schedule by 6
months so they used the Intel 8088 instead. I'm sure that they were
thinking that they could always switch processors in a future machine.
What they didn't count on was the huge success of the PC and more
importantly the rise of the PC clones which happened almost immediately.
Prior to the PC it wasn't practical to clone someone else's machine. Data
General and DEC sued the makers of Nova and PDP-11 clones out of business.
There were a couple of Japanese clones of the 370 but they never achieved
enough market share to really trouble IBM. The PC was different because it
was possible for a smaller and quicker competitor to build a truly
identical machine. Compaq came out very soon after the launch of the PC.
It was the clone makers that locked in the architecture and took away IBM's
ability to change the architecture. If you remember the machine after the
AT used an IBM proprietary bus instead of the AT bus. It was IBM's attempt
to recapture control of the PC, it failed. IBM was never again able to
influence the course of PC development. As it turned out they were right
to doubt that they could make money with PCs; they never did. Last week's
sale of the PC division to Lenovo puts an end to an era. IBM's
competitors from that era are almost all gone now, Digital, Data General,
Prime, Control Data, none of them could survive in the PC era. Only HP
remains and they make almost all of their money from ink, their PCs are
probably a near breakeven business as they were for IBM.
 

VWWall

CBFalconer said:
Actually I am quite happy to be able to run code from 25 years ago,
for which I have lost the source, yet it executes correctly.

It's even more amazing how the BIOS can still accommodate old code.
I've got stuff that was originally ASM for an x80 CPU that still works
on Win98 in a DOS window. Most was written in "C" (K&R style and
compiled with TurboC). I have one program called "Circus" that uses
channel 2 of the present CMOS version of the old programmable timer. It
still plays calliope music through the PC speaker, if you can find a
case with a real speaker. The evolution through ever larger HDs has
been interesting, and the 48-bit BIOSes look like they'll keep up with
new HDs for a while! (WD giveth and MS taketh away.)

Virg Wall (Old CP/M ASM programmer.)
 

Aardvark J. Bandersnatch, BLT, MP, PBJ, LSMFT

Mitch Crane said:
Not to argue with your point, which is correct, but you don't really need a
compatible instruction set for 25 year-old code--you can emulate.

Eggzackly.
 

CBFalconer

Aardvark J. Bandersnatch said:
Eggzackly.

Possible, but that 25 year old code written in assembly seems to
handle anything in roughly zero clock time. Loading is a paltry
5 or 10 KB, if that; no problem with locality and caches, etc.
 

John F. Regus

There are already 64 bit versions of Windows XP and Windows 2003. Do you
know what 32 bit or 64 bit processing is for?
It is the amount of memory that can be accessed: 2 to the 32nd power or 2 to
the 64th power of addressable virtual memory. This is useful for sites running
terabyte size databases. It doesn't really buy you any additional
processing power, because although the operating system can address 2 to the
64th power of virtual memory, do you have a database that size? Also, the
exponent means that certain virtual memory management modules are going to
be loaded farther into virtual memory. I know this from having been a
mainframe (IBM). If PC manufacturers wanted to make their processor
motherboards faster, they should have multiple IO paths to the disk drives.
If one path is busy to a disk, and you have another disk then another path
is used. This would reduce IO load and pass data faster through to the
processor. I also use a Promise raid card now, with its own IO processing
ability to offload IO operations from the cpu.

I hope AMD comes out with at least one if not two more faster Socket A(462)
processors, because 32 bit software is going to be here a lot longer.

Save the money. Look for fast IO mobo, and go to a SATA raid with its own
IO processor.

BTW, just doing normal processing you would probably only infrequently use
the full 2 to the 32nd power of virtual memory, because you are not running
programs requiring that much virtual memory. Half of the cpu is being used,
but it is only running around looking for something to do.
 

Mitch Crane

There are already 64 bit versions of Windows XP and Windows 2003. Do
you know what 32 bit or 64 bit processing is for?
It is the amount of memory that can be accessed.

I keep reading that, but it seems bogus to me. Are you saying that these
64-bit processors only have 64-bit address registers and nothing for data?

Assuming it can do each operation in the same clock cycles, a 64 bit
operation should be faster than two 32-bit operations on the same data. So
there should be situations where this could be taken advantage of by 64-bit
software. Otherwise we may as well have stuck to 8-bit registers and an
8-bit data path.
 

kony

There are already 64 bit versions of Windows XP and Windows 2003. Do you
know what 32 bit or 64 bit processing is for?
It is the amount of memory that can be accessed: 2 to the 32nd power or 2 to
the 64th power of addressable virtual memory. This is useful for sites running
terabyte size databases. It doesn't really buy you any additional
processing power, because although the operating system can address 2 to the
64th power of virtual memory, do you have a database that size?

There have been real performance benefits seen from 64-bit;
one does not have to speculate, benchmarking alone shows it.

Also, the
exponent means that certain virtual memory management modules are going to
be loaded farther into virtual memory. I know this from having been a
mainframe (IBM).

I'd keep quiet about that, if the MAN realizes you've gone
organic he'll hunt you down. ;-)
If PC manufacturers wanted to make their processor
motherboards faster, they should have multiple IO paths to the disk drives.
If one path is busy to a disk, and you have another disk then another path
is used. This would reduce IO load and pass data faster through to the
processor. I also use a Promise raid card now, with its own IO processing
ability to offload IO operations from the cpu.

That was true for awhile, but the CPU is not a bottleneck
anymore, and with DMA you're looking at more of a PCI bus
bottleneck (up until PCI Express) and physical (mechanical)
storage medium bottleneck.
I hope AMD comes out with at least one if not two more faster Socket A(462)
processors, because 32 bit software is going to be here a lot longer.

As an upgrade path for current socket A motherboard owners
it seems a good idea, but otherwise not. The Athlon 64 is
faster at 32-bit code due to its integrated memory controller. If one
doesn't want to pay the overhead for 64 bit, there are 32
bit Semprons. There is no advantage to socket A which would
entice one to buy a motherboard for that platform except the
maturity of the technology, maturity of the motherboards,
their BIOSes, and the (currently) lower cost.
Performance-wise socket A is dead already.
 

Chip

John F. Regus said:
There are already 64 bit versions of Windows XP and Windows 2003. Do you
know what 32 bit or 64 bit processing is for?
It is the amount of memory that can be accessed. 2 to 32th power or 2 to
64th power of addressable virtual memory.

Did you read ANY of this thread before posting this?

Not only has it been discussed *extensively* already, it's also not really
correct. Yes, you can address more memory with larger address registers
and a larger address bus. But what about the data registers and data bus? What
about the fact that you can grab 8 bytes of data in one memory access and
process all 8 bytes at once - as opposed to only 4 bytes in a 32-bit
architecture?

I suggest you read the rest of the thread before typing again.

Chip
 
