The death of non-x86 is now at hand?


Dorothy Bradbury

It's probably not going to be necessary until 2050 though.

A bit like no-one requiring more than 640KB :)

Microsoft Law: software has an uncanny knack of partly negating
any jump in performance or memory nearly as fast as it occurs.


I think I used WP5.1 for DOS on an 8MHz 8086 with a 20MB HD;
memory was 640KB - an Amstrad 1640 ECD (colour EGA). It
was lousy since it wasn't VGA, which made such a big difference.

Today I need an 80786 at 2200MHz, with RAM that is 13x bigger
than that machine's entire *hard-drive*, and a HD 1,000x bigger. The
O/S has gone from a small fraction of 640KB to one that is a small
fraction of 640MB. OK, not that small a fraction of 640MB.

I can see a trend: if it continues, we might create a "human machine"
which will have monumental capability, but can't finish painting a house.
A machine eventually as inefficient & unproductive as humanity :))

Perhaps, as inefficient & unproductive as bank staff or politicians...
 

Keith R. Williams

: In article <1CxVb.13543$R6H.1791@twister01.bloor.is.net.cable.rogers.com>,
: news.20.bbbl67@spamgourmet.com says...
:: http://www.theinquirer.net/?article=14038
:
: Simple tabloid fluff. Nothing new, much wrong.

Please elaborate and set us all straight then, smarty man. ;-)

In a future installment of "PeeCee History for Dummies", perhaps.
...but too much work, too little time now.
 

Nate Edel

In comp.sys.intel Yousuf Khan said:
ongoing anyways, whether x86 spurred it or not. In fact, I'd hazard a guess
that there were more x86 assembly programmers than for any other
architecture simply because of the numbers of x86 hardware sold.

Especially given that there were a lot of 8080/Z80 assembly programmers, and
while it wasn't quite 1:1, if you knew 8080/Z80 first, learning 8086 was
EASY.

There were a lot of 6502 assembly programmers of my generation too, but
knowing assembly on that one gave you nowhere to go, until the 65816 came
out (and unless you got an IIGS or were programming for the SNES, not much
chance to program those).
 

Nate Edel

In comp.sys.ibm.pc.hardware.chips David Schwartz said:
If there's no other way to make processors keep getting faster, adding
more bits will allow them to perform at least some types of computations
more rapidly. If we have to go to 128 bits to do this, we will. And that
could happen long before 2050.

Well, that's register/ALU/FP width, rather than address space. Adding width
is easy, given the transistor budgets we have today...

Few computations don't fit in 32 bits, so going to 64 bits won't speed up
computation in general by very much (10%?). Almost everything fits in 64
bits, so going to 128 bits for computational speed will likely be a
non-starter.

Depends on what sort of computations; most FP has been 64 or 80 bit for a
long time.
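
A minimal C sketch to check those widths on a given toolchain (the 80-bit
x87 extended format behind long double is typical of x86 compilers, but not
guaranteed everywhere, so treat that part as an assumption):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* IEEE 754 double precision: 64 bits total, 53-bit significand. */
    printf("double: %zu bytes, %d significand bits\n",
           sizeof(double), DBL_MANT_DIG);

    /* On x86, long double usually maps to the 80-bit x87 extended
       format (64-bit significand), though it is stored padded out to
       12 or 16 bytes depending on the ABI. */
    printf("long double: %zu bytes, %d significand bits\n",
           sizeof(long double), LDBL_MANT_DIG);

    return 0;
}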
 

The little lost angel

than that machine's entire *hard-drive*, and a HD 1,000x bigger. The
O/S has gone from a small fraction of 640KB to one that is a small
fraction of 640MB. OK, not that small a fraction of 640MB.

I'm not sure it's even less than 640MB now! The last I looked, a bare
Win2K installation ate up almost 1GB :pppP

--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
 

Alex Johnson

The little lost angel said:
I'm not sure it's even less than 640MB now! The last I looked, a bare
Win2K installation ate up almost 1GB :pppP

She meant memory, not disk space. The 640KB Amstrad probably ran DOS
3.3, which took about 1-1.5MB of disk and 10-100KB of RAM by the time you
got a prompt. My Windows installations take about 2-3GB (including extras),
and while running, Task Manager reports 120-150MB of memory used after
startup.

Alex
 

Yousuf Khan

Dorothy Bradbury said:
I think I used WP5.1 for DOS on an 8MHz 8086 with a 20MB HD;
memory was 640KB - an Amstrad 1640 ECD (colour EGA). It
was lousy since it wasn't VGA, which made such a big difference.

You used WP5.1 on an 8086? I couldn't imagine that; I used WP4.2 on it, and
I didn't switch to 5.1 until I got myself a 386. :)

Yousuf Khan
 

Yousuf Khan

Nate Edel said:
Especially given that there were a lot of 8080/Z80 assembly programmers, and
while it wasn't quite 1:1, if you knew 8080/Z80 first, learning 8086 was
EASY.

There were a lot of 6502 assembly programmers of my generation too, but
knowing assembly on that one gave you nowhere to go, until the 65816 came
out (and unless you got an IIGS or were programming for the SNES, not much
chance to program those).

I dabbled in 6502 assembly when I had a Commodore VIC-20, and used to play
with the school's Commodore PETs and SuperPETs, as well as friends' C64s.
Assembly language on those was done through machine language monitors.

When I then got a PC, I was amazed to find that instead of machine language
monitors they had those highly convenient assemblers, which allowed you to
create machine language offline and run it only once you were completely
done! Wow, now that was convenience. :)

Yousuf Khan
 

Tony Hill

A bit like no-one requiring more than 640KB :)

Microsoft Law: software has an uncanny knack of partly negating
any jump in performance or memory nearly as fast as it occurs.


I think I used WP5.1 for DOS on an 8MHz 8086 with a 20MB HD;
memory was 640KB - an Amstrad 1640 ECD (colour EGA). It
was lousy since it wasn't VGA, which made such a big difference.

Today I need an 80786 at 2200MHz, with RAM that is 13x bigger
than that machine's entire *hard-drive*, and a HD 1,000x bigger. The
O/S has gone from a small fraction of 640KB to one that is a small
fraction of 640MB. OK, not that small a fraction of 640MB.

So what you're saying is that memory use has gone up by a factor of a
thousand since the 8086 days. That's 2^10, i.e. it's doubled 10 times
in about 20 years, or, more simply, it's doubled every two years.

Now, if 32-bit systems are at their limit now, going by historical
trend with memory use doubling every 2 years, that means that we've
got another 64 years before we start hitting the limits of 64-bit
machines.

So the comment about 64-bit machines being good enough until 2050 is
erring on the side of caution. In reality 64-bit machines will
probably be enough until closer to 2070.
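
Putting that back-of-the-envelope arithmetic into a minimal C sketch
(assuming the doubling-every-two-years trend holds, and taking 2004,
roughly when this thread was written, as the point where 32 bits run out):

#include <stdio.h>

int main(void)
{
    /* Assumption: the 32-bit address space (2^32) is the practical
       limit "now" (circa 2004), and memory demand doubles every
       2 years, i.e. one more address bit is consumed every 2 years. */
    const int start_year = 2004;
    const int years_per_bit = 2;
    const int extra_bits = 64 - 32;

    int years_left = extra_bits * years_per_bit;
    printf("64-bit limit reached after %d years, around %d\n",
           years_left, start_year + years_left);
    return 0;
}

Which lands at 2068 - hence "closer to 2070".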
 

Tony Hill

Well, that's register/ALU/FP width, rather than address space. Adding width
is easy, given the transistor budgets we have today...

Easy but pointless. Even 64-bit integer registers aren't really that
useful except in very rare cases, and 128-bit registers would
basically never be used. All you would do is have a whole lot more
zeros to toss around. You can probably count on one hand the total
number of applications in all the world that use integers with a range
larger than 10^19. Basically you'll end up with cryptography.

Also, having wider registers has a tendency to hurt performance a bit.
Take a look at the Athlon64/Opteron some time: a number of the integer
operations have higher latency in 64-bit mode than they do in 32-bit
mode. It's not a very significant difference, but given that having
128-bit-wide integer registers buys you absolutely nothing in terms of
performance, the net effect would be to make things slower. So you
end up with a slower design that costs more money, i.e. pointless.

Depends on what sort of computations; most FP has been 64 or 80 bit for a
long time.

Floating point is used for slightly different things than integer
math. It's not a simple matter of "well, we need more range than a
32-bit integer provides so let's use floating point instead". If you
need a range of more than 4 billion for integers, you use a 64-bit
integer (compilers will do this for you with no trouble at all on a
32-bit architecture, albeit with a performance hit). If you need
floating point, you use floating point.
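
A minimal C sketch of that last point (the add/adc detail in the comment
is what a typical compiler emits for a plain 32-bit x86 target; nothing in
the source has to change):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* 2^32 is only about 4.3 billion, so anything larger needs a
       64-bit integer type. */
    uint64_t big = 10000000000ULL;   /* 10 billion: too large for 32 bits */
    uint64_t sum = big + big;

    /* On a 32-bit x86 build the compiler quietly lowers this to pairs
       of 32-bit operations (add/adc), which is the performance hit
       mentioned above; on a 64-bit build it is a single add. */
    printf("sum = %" PRIu64 " (uint64_t tops out near 1.8e19)\n", sum);
    return 0;
}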
 

Robert Myers

So the comment about 64-bit machines being good enough until 2050 is
erring on the side of caution. In reality 64-bit machines will
probably be enough until closer to 2070.

Only through modesty have you failed to mention Hill's law: All
processes of relevance to computing can be represented by a straight
line on a semi-log plot. ;-).

RM
 

Neil Maxwell

I would love to see this happen too... Unfortunately, for the many
things Intel is, they are also a very greedy corporation. Let's not
forget that one of the big driving motivations behind IA-64 was so Intel
could once again have (almost) complete ownership over a CPU ISA, no
3rd-party licences, and then force that down on the average Joe so
they can take another step towards becoming a true CPU monopoly.

I don't think they're really interested in becoming a monopoly, as
that would involve even more attention from government legal types.
Based on the behavior of the last 6 years or so, I believe Intel is
reasonably happy with 80% market share (as long as it includes the
fastest, high-margin CPUs). When the competition approaches or
exceeds 20%, they crank the competitive muscle into high gear;
otherwise, they seem to pace themselves and the market.

Just an observation...


Neil Maxwell - I don't speak for my employer
 

Yousuf Khan

Neil Maxwell said:
I don't think they're really interested in becoming a monopoly, as
that would involve even more attention from government legal types.
Based on the behavior of the last 6 years or so, I believe Intel is
reasonably happy with 80% market share (as long as it includes the
fastest, high-margin CPUs). When the competition approaches or
exceeds 20%, they crank the competitive muscle into high gear;
otherwise, they seem to pace themselves and the market.

In the eighties, they had control of more than 90% of the PC chip market,
obviously. That was before they had steady competition from competitors in
the same marketplace. Now they have gone down to 80% of the market, after
the steady competition came in, and it's a major battle for them to get
over 85% these days.

The majority of the Intel advantage comes from having products competing
in markets not covered by the competition -- yet. Especially higher-margin
markets like servers and laptops. However, the competition is now starting
to cover some of those other markets -- and heavily. Intel's ability to
crank up the competitive muscle is now being severely hampered, since those
muscles are being attacked themselves. One of the competitive muscles is
laptops, where Intel still has a lot of strength. But the other major
muscle was servers, which is now starting to look weaker.

For the servers, it looks like trouble for the two ends of the market that
Intel was targeting for itself: the Xeon and Itanium. The Opteron seems to
truly have the Xeon's number: it does everything that the Xeon does, and
more. When the Xeon CT comes out, it might close the 64-bit gap slightly on
Opteron, but it still loses out to Opteron's other major advantages: the
integrated RAM controller and HyperTransport. These advantages of Opteron
have very little to do with greater performance, and much more to do with
greater simplicity in developing systems around them.

Now, in all of these Opteron vs. Xeon battles, you'd think Itanium is
safely tucked away in a higher end of the market, but it isn't; people
don't consider it to be higher end at all, except for its price. It is said
that Xeon is for lower-priced servers while Itanium is for big-iron servers.
However, what features does Itanium have that distinguish it for big-iron
use from the Xeon, apart from the IA-64 instruction set? Itanium and Xeon
both use the same shared bus for their I/O operations, although the Itanium
might get a slightly faster version of the bus; Itanium and Xeon use the
same shared I/O bus to access their memory as well. As a matter of fact,
Opteron is technically more suitable for big iron than Itanium is.

Then we get to the laptop muscle of Intel, which Intel is still currently
strong in. Laptop sales are entirely based around marketing. Last year,
Intel cleaned up with a marketing campaign based around slightly lower power
consumption and built-in WiFi, known as Centrino. This year, the "latest
thing" may very well be a neat color scheme and 64-bit chips. Witness how
Acer is marketing the Ferrari notebook: the only thing it's got going for it
seems to be a fluorescent red paint job and a famous brand name, but people
are absolutely nuts about it. Now Intel might be able to market a Toyota
Itanium notebook, but I doubt it. :)

Yousuf Khan
 

RusH

Interesting article; I honestly don't think they're way too far off
base... I wouldn't be surprised to see the vast majority of
diversity in CPU architecture disappear over the next few years.

On the PC market, yes, but forget about embedded x86.


Regards.
 

Keith R. Williams

Only through modesty have you failed to mention Hill's law: All
processes of relevance to computing can be represented by a straight
line on a semi-log plot. ;-).

...and more importantly: "past performance is no guarantee of
future growth".
 

Carlo Razzeto

RusH said:
(e-mail address removed) (Carlo Razzeto) wrote :


On the PC market, yes, but forget about embedded x86.


Regards.
--
RusH //
http://kiti.pulse.pdi.net/qv30/
Like ninjas, true hackers are shrouded in secrecy and mystery.
You may never know -- UNTIL IT'S TOO LATE.

If I'm not mistaken, I believe they mentioned that x86 has not been, and
probably will continue not to be, very successful in the embedded market.

Carlo
 

Nate Edel

In comp.sys.intel Yousuf Khan said:
You used WP5.1 on an 8086? I couldn't imagine that; I used WP4.2 on it, and
I didn't switch to 5.1 until I got myself a 386. :)

I used WP 5.1 on 4.77MHz 8088s and V20s, nothing so quick as an 8MHz 8086.
By and large, the performance was fine as long as you had a hard drive or
LAN; ISTR that running 5.1 with dual floppies was a royal pain in the
neck with disk swapping.

Wouldn't want to have tried WP 6 on a machine that slow.
 

Tony Hill

Only through modesty have you failed to mention Hill's law: All
processes of relevance to computing can be represented by a straight
line on a semi-log plot. ;-).

Who's limiting it to just processes of relevance to computing? Isn't
a straight-line plot how *everything* works?! :>
 

Tony Hill

In the eighties, they had control of more than 90% of the PC chip market,
obviously. That was before they had steady competition from competitors in
the same marketplace. Now they have gone down to 80% of the market, after
the steady competition came in, and it's a major battle for them to get
over 85% these days.

I seem to remember AMD actually competing very effectively in the
late 386 days with their 40MHz 386DX chip while Intel was just
starting out with the (at the time) very expensive 486s. And back in
the 286 days there were quite a number of competitors (including AMD
back then).

Intel's peak, from what I can tell, was around mid-1995 through to
mid-1997. At that time everyone had pulled out of the x86 market except
Intel, AMD and Cyrix. AMD was floundering around, having all kinds of
trouble with the K5 (first it was a year+ late, then it underperformed
compared to what had been expected when the chip was supposed to be
released). Cyrix was doing OK with their 6x86 line, though compatibility
problems, first with Microsoft disabling its cache, then later with
motherboards not supporting 75MHz bus speeds properly, kept this chip
in the real low end and in fairly low quantities.

It was only when AMD got their K6 production up and running that they
really started competing again.

The majority of the Intel advantage comes from having products competing
in markets not covered by the competition -- yet. Especially higher-margin
markets like servers and laptops.

The larger volume also helps a lot. A HUGE amount of the costs
involved in CPUs are tied up in R&D and capital costs. But where the
up-front costs are large, the unit costs are relatively small. Any
money made from the first few million CPUs goes to pay off the R&D
costs, and it's only later that they start really making a profit.

For the servers, it looks like trouble for the two ends of the market that
Intel was targeting for itself: the Xeon and Itanium. The Opteron seems to
truly have the Xeon's number: it does everything that the Xeon does, and
more. When the Xeon CT comes out, it might close the 64-bit gap slightly on
Opteron, but it still loses out to Opteron's other major advantages: the
integrated RAM controller and HyperTransport. These advantages of Opteron
have very little to do with greater performance, and much more to do with
greater simplicity in developing systems around them.

I'd say that it's a bit of both there, particularly if you look at
4-way servers. The Opteron seems to totally smack the XeonMP
around any time you start playing with 4P systems. On 2P systems the
shared bandwidth of the Xeon doesn't seem to hurt as much, though the
Opteron does almost always win here as well.

Now, in all of these Opteron vs. Xeon battles, you'd think Itanium is
safely tucked away in a higher end of the market, but it isn't; people
don't consider it to be higher end at all, except for its price. It is said
that Xeon is for lower-priced servers while Itanium is for big-iron servers.
However, what features does Itanium have that distinguish it for big-iron
use from the Xeon, apart from the IA-64 instruction set? Itanium and Xeon
both use the same shared bus for their I/O operations, although the Itanium
might get a slightly faster version of the bus; Itanium and Xeon use the
same shared I/O bus to access their memory as well. As a matter of fact,
Opteron is technically more suitable for big iron than Itanium is.

The glue around the Itanium is currently allowing it to perform a lot
better in very large servers than anything we've seen from the Xeon.
Of course, we haven't really had a chance to see what the Opteron can
really do in large servers since no one has made anything more than a
4P system.

It is a bit of a coup though that AMD has managed to compete VERY well
with the Itanium in a LOT of 2P and 4P benchmarks.

Then we get to the laptop muscle of Intel, which Intel is still currently
strong in. Laptop sales are entirely based around marketing. Last year,
Intel cleaned up with a marketing campaign based around slightly lower power
consumption and built-in WiFi, known as Centrino. This year, the "latest
thing" may very well be a neat color scheme and 64-bit chips. Witness how
Acer is marketing the Ferrari notebook: the only thing it's got going for it
seems to be a fluorescent red paint job and a famous brand name, but people
are absolutely nuts about it. Now Intel might be able to market a Toyota
Itanium notebook, but I doubt it. :)

Hehe, I'd like to see that Toyota notebook, complete with nondescript
styling and a boring paint job :> Actually, a Centrino Toyota notebook
might just work: "sure, it doesn't look very exciting, but it's
extremely reliable and gets excellent mileage (low power
consumption)".

I think Intel is pretty well positioned in the laptop market for the
time being. AMD/Acer might have a bit of a win on their hands with
the Ferrari notebook, but really Intel has a great base of technology
in their Pentium-M and i855 chipset. AMD does have some options here,
particularly if they can do something with the AthlonXP-M line on a
90nm fab process. If they could combine some of the features of the
Athlon64/Opteron and the very low power consumption of the AthlonXP-M
(that chip is actually in the same basic power range as the
Pentium-M), they could have a decent competitor. I'm just not sure
that AMD has the resources to develop two completely separate cores
like Intel does (err, I guess Intel develops 3 cores).

Of course, VIA could start eating into the low-end here if they can
follow through on their plans effectively. Their chips are getting
some pretty impressive power consumption numbers and, perhaps more
importantly, combining that with VERY low costs. VIA has yet to get
the marketing going well, but the opportunity is there. VIA could
potentially start leading a low-cost notebook revolution in much the
same way that the K6 did on the desktop. I'm sure there are a lot of
people who would be willing to sacrifice some performance for a $500
laptop instead of a $1000 one. Intel's Celeron-M seems to be a
non-starter so far (though it's still early), while the Celeron Mobile
consumes a fair chunk of power while offering terrible performance.
 

Tony Hill

If I'm not mistaken, I believe they mentioned that x86 has not been, and
probably will continue not to be, very successful in the embedded market.

x86 isn't doing all that bad in the embedded market, actually. I've
used it before in an embedded system design, for the simple reason that
software development was MUCH easier. Now, admittedly, we weren't all
that constrained in terms of size or power; the box was going to sit
on a machine that was half the size of an American football field.
Still, the real strength of x86 was that it was EASY to develop
software for it. We could do essentially all the development and
testing on a plain old desktop system running Linux. We could even
get an OS image for it all done up on a desktop as well. This all
ended up being VERY handy since we didn't receive the hardware until
just over a week before the final product had to be shipped out.

You can start to see this sort of thing happening for a lot of embedded
projects. The ease of development for embedded x86 systems often
outweighs any potential loss in the performance/watt measure. I don't
expect to see any x86 chips in smoke detectors any time soon, but
things like set-top boxes and many industrial processes can benefit a
lot here.
 
