Intel Inside no more


Yousuf Khan

David said:
SPECcpu is more than just one "thing". It's a collection of many.

Well, SPEC has been shown in the past to be highly manipulable by
compiler tricks and by architectures geared towards achieving high SPEC
scores without necessarily being faster in real-life situations. That
is, unless your real-life situation is running SPEC benchmarks.

I can think of several situations where SPEC has shown one architecture
to be faster than another, but real-world applications never ran any
faster. For example, games which are very FPU-dependent somehow seem to
get greater benefit out of the Athlons than the Pentiums, yet Pentiums
still somehow show up higher in SPECfpu. And this is not a complaint
that's been levelled at SPEC only since the x86 processors started
competing; it's been a complaint since the days when the RISC server
chips competed with each other.
Lower IPC is not slower. I never said the P4 had higher IPC; I said it
was faster. That means better performance, not better per-clock
performance. Nobody cares about the latter. Performance is what
matters; if you build a 10GHz CPU with low IPC, that's fine. If you
build a 1GHz CPU with high IPC, that's fine. Ultimately, it doesn't
matter except for the power/heat issues.

I was talking about real performance too. The P4 didn't really start
breaking away from the P3 until the P3 stopped at 1.3GHz while the P4
was up to 2.0GHz. Similarly, the P4 didn't really start breaking away
from the Athlon XP until the XP stopped at 2.1GHz and the P4 was at
3.2GHz. Since Intel couldn't really go much beyond 3.6GHz with the P4,
the Athlon 64 completely picked it off, because it had the higher IPC
and a better thermal cushion to work with.
Yes and no. Even before there was serious competition in the server
markets, Intel had better ASPs and margins in the mobile segment than
in servers. AMD has had very good desktop market share starting with
the K6 (in fact, AMD has less market share today than it did when the
K6 was out). The problem was getting into the more valuable markets.

Today, AMD has made inroads into the server market, but you really
want to ask yourself: how much of a threat is this to Intel? The moron
I responded to seems to think that Intel is doomed and will be gone. I
seriously doubt AMD can hold > 25-30% of the x86 server marketshare.

I don't think the other guy said that Intel itself is doomed. However,
it does seem like he's saying that Intel will have a tough time catching
up technologically.

As for server marketshare, that's exactly the marketshare that they are
aiming for right now, approximately 30%. As for how much of it is really
a threat to Intel? I would assume it's quite a threat. Intel had 100%
of the x86 server marketshare; 70% is a huge step down. That market was
accounting for a large portion of their profits due to the high margins
and monopoly status.

And there is no refuge in the laptop market either. Retail sales of
laptops were 30% AMD just before Christmas, again up from nearly 0%
before. The servers and laptops were where Intel had counted on making
its profits, because it was making noises about how old-fashioned the
desktop PC market was, and how the laptop was the way of the future.
They caught Intel at a very bad point in time WRT product lineups.
Intel has rectified this flaw; Bensley will be a reasonable start, at
least putting them in the right neighborhood for performance and
price/performance. It certainly won't get them ahead, but when
Woodcrest comes out, things will be interesting.

Yeah, but AMD didn't just catch Intel at a bad time; AMD created the bad
time for them. Intel was doing just fine with all of its old-fashioned
chips, selling them without trouble, because nobody had figured out that
they were old-fashioned yet. AMD had gone quiet for a few years while it
worked on these technologies. It's not just the chips themselves; the
manufacturing technology was upgraded at the same time. Intel never
thought about taking a breather and working on their future directions
during the quiet time. And now they can't even think about taking a
breather; they're caught up in a full-scale bombardment and have to
spend time just shoring up their defenses.

So, up until recently AMD had a strong technical advantage over Intel.
I think history has shown that when AMD can present a significantly
stronger product (say ~30-100% better, not just 10-15%, by whatever
your metric for better is), they tend to do well. The issue is that
historically, when AMD and Intel's products are very close (under 30%
difference) Intel has done very well. A large part of this is due to
marketing, channels of distribution, etc.

I think that's right. In the last generation the Intel and AMD
technologies were very close to each other. The Athlon XP and the P3
were almost identical in IPC, with the Athlon getting ahead due to
higher frequency. But in the current generation, AMD does hold a big
performance and technology lead of greater than 30%.
If AMD wants to be able to take and hold marketshare they need to have
a plan for dealing with Intel when they have no technical advantages.
They also need to be able to deal with Intel when they have technical
disadvantages. Intel is currently a year ahead of AMD in process
technology. They will have an advantage for that year or so, and then
AMD will likely end up ahead when they finally get 65nm worked out.

Not likely; Intel had the same time advantage over AMD at 90nm, but it
never worked out for them. The only thing that smaller process nodes
give anyone nowadays is a manufacturing cost advantage, not a
performance advantage. It was starting to get obvious from the 180nm
node on down that performance was no longer automatically scaling like
it used to.
The question is how will AMD fare this next year? I remember when
Intel had Northwood out, and AMD was still using 180nm parts...it sure
wasn't pretty and Intel took back all their marketshare and then some.
Of course, the cycle then reversed itself with 90nm.

Yeah, but 180nm was pretty much the end of it for performance
improvements. Intel always brings smaller process technology out six
months or more ahead of AMD. When Intel transitioned to 90nm, AMD was
still at 130nm for at least six months, but AMD's performance was still
increasing. The days when miniaturization was proportional to
performance increases are over.
Ultimately, AMD wants to be able to break the cycle, but I'm not really
seeing how they can.

It seems to me that it already has broken that cycle now. It's been
helped by physics: Intel can't use miniaturization as a crutch to help
it get away from AMD anymore.

Yousuf Khan
 

Tony Hill

Yes, and one certainly could say the same thing about you.

While I'm hardly "new" to these boards, when it comes to Intel I am
quite dogmatic. I've been watching them since the beginning. You
follow their religion of numbers, and d*mn the clock speed. Well,
the clock speed has just done them in, and many of us have known this
for a great many years. How many more power plants would you have us
build for your house burners? I say none... I'd be very curious:
just how much electricity has Intel wasted in the aggregate?
Aggregate meaning every machine out there that a low-power AMD could
have handled instead. If some numbers guru reads this, he'll figure
it out and post.

One doesn't need to be much of a guru to figure this one out; a bit of
simple math will suffice. Figure that there are roughly 200M PC
processors sold each year, AMD has about 17% of the market, Intel has
about 81% of the market (VIA and Transmeta combine for the remaining
1-2% of the market). Given that we're referring only to
Athlon64/Sempron vs. Pentium4/Celeron here, we're really only looking
at the last two years' worth of CPUs, and only desktop CPUs. Roughly
50% of all CPUs sold are for desktops, 40% for laptops and 10% for
servers (note: these are very rough approximations).

So, total CPUs in question is 400M, of which 324M are Intel chips and
162M are Intel desktop P4/Celeron chips that theoretically could have
been AMD desktop Athlon64/Sempron chips instead.
Then what will you say? Hello? It's called thermal heat and
I believe there might be almost a 50 to 70 watt difference for like
performance in the high end.

The difference is probably an average of about 50W at full load.
Intel sells a LOT of lower-end Celeron chips that don't consume all
that much more power than an AMD Sempron of similar performance
levels.

So, with all our numbers in place it's quite a simple calculation:

50W/processor * 162M processors = 8,100MW worldwide (roughly one
decent-sized power plant)

But there's more. The above assumes that the computers are powered on
24 x 7 and are running at full load that whole time. In reality we
can probably approximate to having computers powered on only half of
the time (on average) and idle 99% of the time (even while someone is
actually USING their computer it's going to be idle pretty close to
100% of the time). At idle the difference in power consumption
between the chips would tend to drop somewhat, maybe down to about
30W.

So, 30W/processor * 162M processors * 0.5 = 2,430MW

I would guess that this figure is on the high-end of the real-world
figure, but it should at least be in the right order of magnitude.
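
If anyone wants to check the arithmetic, here's a quick Python version
of the same back-of-envelope math (a minimal sketch using only the
rough approximations above, not measured data):

# Back-of-envelope check of the power estimate above; all inputs are
# the rough approximations from this post, not measured figures.
cpus_per_year = 200e6       # ~200M PC processors sold per year
years = 2                   # the Athlon64/Sempron vs. P4/Celeron era
intel_share = 0.81          # Intel's approximate share of the market
desktop_fraction = 0.5      # ~50% of CPUs go into desktops

intel_desktop_cpus = cpus_per_year * years * intel_share * desktop_fraction
print("Intel desktop CPUs in question: %.0fM" % (intel_desktop_cpus / 1e6))

full_load_delta_w = 50      # ~50W difference at full load
idle_delta_w = 30           # ~30W difference at idle
duty_cycle = 0.5            # machines powered on roughly half the time

print("Worst case: %.0f MW" % (full_load_delta_w * intel_desktop_cpus / 1e6))
print("Realistic:  %.0f MW" % (idle_delta_w * intel_desktop_cpus * duty_cycle / 1e6))
# Prints 162M CPUs, 8100 MW worst case, 2430 MW realistic.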


Of course, if you look back a little bit, then the shoe is on the other
foot. It was only a year or two ago that MOST chips AMD sold were
AthlonXP chips, and those tended to consume a fair chunk of power,
particularly when comparing idle power consumption of the AthlonXP vs.
the P4. Certainly they consumed a lot more power than the PIII. There's
also the laptop market, where Intel has done quite well with their
Pentium-M chip while AMD has mostly sold laptop chips that consume a
lot more power (the Turion MT line is the only chip in the same range
as the Pentium-M).
People who blindly support Intel as you do, and don't see what's
happening in the market right now, are just as blind as Intel is. One
thing is certain: AMD will never be "fringe" again. As each day passes,
Intel is the one who's seen as, and truly is, on the fringe. This can
only go on for so long.

Umm.. Intel does still outsell AMD roughly 5:1. Fortune lists them as
#53 in their Fortune 500 list, AMD just makes the bottom of the list
at #473.

I would hardly call Intel a "fringe" company.
Intel is much further ahead of you though; they've been in a panic for
quite some time. BTX was an example of this.

BTX is an effort to put a bit more thought into case design. Most
cases really don't put much thought into airflow, and that's generally
a bad thing. Now, that's not to say that BTX is perfect by any
stretch, but in many ways it IS an improvement on ATX.
 

George Macdonald

Who is McKinsey?

http://www.mckinsey.com/aboutus/whatwedo/workexamples/ Been around for
decades - well known for turning somewhat inefficient companies into total
****-ups with reorgs (divisionalize/consolidate, rationalize/diversify,
etc., according to the status quo) based on corporate shrink analysis.

OTOH maybe it's just that Otellini got Jobs' religion... since they seem to
have struck up a great friendship.
As for distributing the engineers among the product groups, I've seen
management do a lot of unorthodox things, which then get reversed when
the next generation of managers comes in. Carly Fiorina combining the HP
printer and PC groups together and then Mark Hurd reversing that, for
example.

Ah so maybe Otellini == Fiorina??:) I wonder how long he'll get?
It does sound like something out of Hitchhiker's Guide to the Galaxy,
doesn't it? For example in Hitchhiker's they had a race called the
Golgafrinchans who decided to get rid of all of their useless people. So
they sent all of their hairdressers, management consultants, telephone
sanitizers and marketing people up into a big spaceship telling them
that a giant space goat was coming to eat their world. Those
Golgafrinchans eventually landed on a primitive Earth, where they were
told to invent fire and the wheel. They broke up into subcommittees to
study what consumers want from fire and how they relate to it. The wheel
subcommittee broke up because they couldn't decide on what color the
wheel should be. Ironically, the original Golgafrinchans back on their
homeworld died out due to complications arising from dirty telephones.

Hmm, it does have a ring to it. I have to think though that it's maybe
just the P4 guys who are being punished and sent out to the trenches for
doing such a shitty job of it. Then again, it was Barrett who said "They
buy the MHz" - they should cancel his stock options.:)
As long as AMD keeps their telephones clean, I can't see how they can't
help but be successful. :)

Yeah but I kinda like cheap CPUs.:)
 

dannysdailys

Yousuf said:
Yeah, but AMD didn't just catch Intel at a bad time; AMD created the bad
time for them. Intel was doing just fine with all of its old-fashioned
chips, selling them without trouble, because nobody had figured out that
they were old-fashioned yet. AMD had gone quiet for a few years while it
worked on these technologies. It's not just the chips themselves; the
manufacturing technology was upgraded at the same time. Intel never
thought about taking a breather and working on their future directions
during the quiet time. And now they can't even think about taking a
breather; they're caught up in a full-scale bombardment and have to
spend time just shoring up their defenses.

Yes, and that's exactly what I was trying to say. Thanks.
 

Johannes

Tony said:
[snip]

So, 30W/processor * 162M processors * 0.5 = 2,430MW

I would guess that this figure is on the high-end of the real-world
figure, but it should at least be in the right order of magnitude.

However, it's the peak-time power from power plants that matters; that is
what decides the size of the infrastructure and what causes blackouts.
There is no efficient way to store power that is generated at a fixed
rate but not demanded. (just my 1c worth)
 

Rob Stow

I read something about this at an American federal gov web site
back when California was having their power crisis a couple of
years ago: power requirements by computers in the USA,
including related A/C costs, were increasing by 0.8 GW per year.

The same report also said that energy saving technologies - such
as power-saving modes in CPUs and switching from CRT to LCD
monitors - were being more than offset by things like
faster/hotter CPUs, chipsets, and video cards; more RAM; a
dramatic increase in the amount of time spent on recreational
computer usage; and the continuing rise in the number of
computers being used.
However, it's the peak-time power from power plants that matters; that is
what decides the size of the infrastructure and what causes blackouts.
There is no efficient way to store power that is generated at a fixed
rate but not demanded. (just my 1c worth)

Shortly after the time of the California crisis I remember
reading about a few businesses that switched from 9-to-5
operation to things like 6am-to-8pm, six-days-a-week shiftwork in
order to reduce their peak-time power consumption.
Unfortunately, once people started to think the crisis had passed,
most went back to business as usual. The initial principle was
valid though: many people working with computers don't do jobs
that have an integral requirement that they be done on a 9-to-5,
Monday-to-Friday basis.
 

Yousuf Khan

George said:
http://www.mckinsey.com/aboutus/whatwedo/workexamples/ Been around for
decades - well known for turning somewhat inefficient companies into total
****-ups with reorgs (divisionalize/consolidate, rationalize/diversify,
etc., according to the status quo) based on corporate shrink analysis.

Ah, management consultants.
Ah so maybe Otellini == Fiorina??:) I wonder how long he'll get?

So far, I haven't seen Otellini do nearly as much damage as Fiorina did
in her first year. He'll probably be a long-term damager.

Yousuf Khan
 

David Kanter

Yousuf said:
Well, SPEC has been shown in the past to be highly manipulable by
compiler tricks and by architectures geared towards achieving high SPEC
scores without necessarily being faster in real-life situations. That
is, unless your real-life situation is running SPEC benchmarks.

Point the way to a better cross-platform benchmark...

Seriously, compilers improve and SPEC takes that into account. There's
nothing wrong with that, as long as the compiler tricks can be broadly
applied. I think Sun's art hack is really dicey, but most compiler
improvements benefit classes of programs, not just benchmarks.
I can think of several situations where SPEC has shown one architecture
to be faster than another, but real-world applications never ran any
faster.

So what? I can think of several situations like that for any
benchmark. That doesn't mean crap. If there is a better alternative,
why isn't the industry embracing it? There is one better alternative,
which is to use applications. However, very few people use the same
applications...therefore application benchmarks aren't quite as
meaningful to as many users.
For example, games which are very FPU-dependent somehow seem to
get greater benefit out of the Athlons than the Pentiums, yet Pentiums
still somehow show up higher in SPECfpu.

That would be because it's called SPEC CPU, not SPEC video games...
And this is not a complaint that's been levelled at SPEC only since the
x86 processors started competing; it's been a complaint since the days
when the RISC server chips competed with each other.

Yes, but you notice how everyone is still a SPEC member? That's
because it serves a very valuable purpose.
I was talking about real performance too. The P4 didn't really start
breaking away from the P3 until the P3 stopped at 1.3GHz while the P4
was up to 2.0GHz.

Hrmmm...yes, that's right. The P4 hit 2GHz shortly after it was
launched though...seems like a good product introduction then.
Similarly, the P4 didn't really start breaking away from
the Athlon XP until the XP stopped at 2.1GHz and the P4 was at 3.2GHz.

I think Northwood probably outperformed it even at 2.8GHz, but either
way, the point is that as a result, AMD got shanked and had big problems
until they could get the K8 out. That's a success. That it didn't
work at 90nm was a big mistake, but the core used in Prescott was not
the same as in Northwood.
Since Intel couldn't really go much beyond 3.6GHz with the P4, the
Athlon 64 completely picked it off, because it had the higher IPC and a
better thermal cushion to work with.
Yup.


I don't think the other guy said that Intel itself is doomed. However,
it does seem like he's saying that Intel will have a tough time catching
up technologically.

Perhaps, perhaps not. I think Merom will do wonders.
As for server marketshare, that's exactly the marketshare that they are
aiming for right now, approximately 30%.

Let's see if they make it. AMD has never been able to hold on to their
market share gains or goals. They were supposed to hit 30% of the
general market, but they certainly aren't there now.
As for how much of it is really
a threat to Intel? I would assume it's quite a threat. Intel had 100%
of the x86 server marketshare; 70% is a huge step down. That market was
accounting for a large portion of their profits due to the high margins
and monopoly status.

I've talked with quite a few financial analysts about this, and since
Centrino came out, margins were higher for laptops ==> Intel made more
off laptops (since they have more volume there than in servers I
believe).

Ok, so are you saying that AMD will hit 30%? I want to see someone
here make an actual prediction, with times and stuff. I don't want "we
will accept whatever AMD says". Obviously companies have been known to
be wrong about product acceptance and marketshare projections.
And there is no refuge in the laptop market either. Retail sales of
laptops were 30% AMD just before Christmas, again up from nearly 0%
before.

How much of the laptop market is retail? I suspect not that much...

Moreover, AMD has always had retail laptops. Can you provide any
evidence that they had 1-2% of the retail market before? I was
certainly under the impression that they were doing around 5-10% of the
laptop market.
The servers and laptops were where Intel had counted on making
its profits, because it was making noises about how old-fashioned the
desktop PC market was, and how the laptop was the way of the future.

Well, it is to some degree true.
Yeah, but AMD didn't just catch Intel at a bad time; AMD created the bad
time for them.

Yes, just like Intel did to the K7 with Northwood, and to the K5/6
before that. AMD's competitive pressure played a role, but Intel also
screwed up with Prescott.
Intel was doing just fine with all of its old-fashioned
chips, selling them without trouble, because nobody had figured out that
they were old-fashioned yet. AMD had gone quiet for a few years while it
worked on these technologies. It's not just the chips themselves; the
manufacturing technology was upgraded at the same time. Intel never
thought about taking a breather and working on their future directions
during the quiet time.

That's the funniest shit I've ever heard. You think the company whose
motto is "Only the Paranoid Survive" was just sitting on its laurels? LOL.
And now they can't even think about taking a breather; they're caught up
in a full-scale bombardment and have to spend time just shoring up their
defenses.

Hurray, irrelevant military metaphors.
I think that's right. In the last generation the Intel and AMD
technologies were very close to each other. The Athlon XP and the P3
were almost identical in IPC, with the Athlon getting ahead due to
higher frequency. But in the current generation, AMD does hold a big
performance and technology lead of greater than 30%.

Is it really greater than 30%? Are you talking about the desktop
market, laptop market, servers or what?
Not likely; Intel had the same time advantage over AMD at 90nm, but it
never worked out for them. The only thing that smaller process nodes
give anyone nowadays is a manufacturing cost advantage, not a
performance advantage. It was starting to get obvious from the 180nm
node on down that performance was no longer automatically scaling like
it used to.

No, I think 130nm had fine scaling. The problems started at 90nm.
Yeah, but 180nm was pretty much the end of it for performance
improvements.

AMD had a 130nm shrink of the K7.
Intel always brings smaller process technology out six
months or more ahead of AMD. When Intel transitioned to 90nm, AMD was
still at 130nm for at least six months, but AMD's performance was still
increasing. The days when miniaturization was proportional to
performance increases are over.

I know that.
It seems to me that it already has broken that cycle now. It's been
helped by physics: Intel can't use miniaturization as a crutch to help
it get away from AMD anymore.

Perhaps...we shall see.

DK
 

Tony Hill

However, it's the peak-time power from power plants that matters; that is
what decides the size of the infrastructure and what causes blackouts.
There is no efficient way to store power that is generated at a fixed
rate but not demanded. (just my 1c worth)

True enough, but I would toss out a guess that power consumption by
processors will be pretty close to directly proportional to the power
consumption of the grid as a whole. Peak use is going to be mid-day
while people are at the office, followed by a slow decrease throughout
the evening, and then a minimal level overnight when most computers are
likely to be idle or turned off.

I think it's also quite safe to assume that the usage patterns between
computers with AMD processors and those with Intel processors are not
going to deviate significantly (or at least not as a directly-related
cause).
 

Tony Hill

Well, SPEC has been shown in the past to be highly manipulable by
compiler tricks and by architectures geared towards achieving high SPEC
scores without necessarily being faster in real-life situations. That
is, unless your real-life situation is running SPEC benchmarks.

Like any other benchmark, it's important to recognize the limitations
of SPEC CPU2000. As you mention, it IS very compiler-dependent. Not
always just "tricks" in the compiler (and, in fact, the SPEC rules do
forbid any SPEC-specific optimizations), but also optimizations in
general. This may or may not be of much relevance to people looking
at the benchmark. If you own the code and can use any compiler you
like, then it is definitely appropriate. On the other hand, if you're
purchasing off-the-shelf software, or even if you own the code but are
limited in your choice of compilers, then SPEC might not be a 100%
accurate representation.

One situation that jumps to mind here is the HUGE performance jump Sun
was able to get in the 179.art test a few years back. A new revision
of their compiler improved the performance in this test by 800%. The
result greatly inflated the SPARC III's CFP score, at a quick glance
temporarily putting it up there with the best on the market. However,
when one examined things more carefully, it turned out that the overall
performance of the chip was still pretty ho-hum; it was just so hugely
outperforming everyone else in that one test that its overall score
was high. Now, this was all achieved by using perfectly valid compiler
optimizations, and other companies have since implemented similar things
in their compilers, but for about 6 months the scores were skewed. If your
application was nearly identical to 179.art, then the SPARC III was
killer, but otherwise the score really didn't reflect the performance
you were going to get.
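
To put a rough number on how much a single outlier can move the
composite: SPEC's overall score is a geometric mean of the component
tests, so in the 14-test CFP2000 suite a 9x gain (~800%) on one test
lifts the overall score by 9^(1/14), or about 17%. A little Python
sketch with hypothetical flat scores shows the effect:

# Hypothetical illustration of one outlier skewing a geometric mean.
from math import prod

def geomean(xs):
    return prod(xs) ** (1.0 / len(xs))

scores = [1000.0] * 14     # 14 tests in the CFP2000 suite, flat ratio of 1000
print("%.0f" % geomean(scores))   # 1000
scores[0] *= 9.0                  # one test (think 179.art) improves ~800%
print("%.0f" % geomean(scores))   # ~1170 -- one test lifts the composite ~17%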
I can think of several situations where SPEC has shown one architecture
to be faster than another, but real-world applications never ran any
faster. For example, games which are very FPU-dependent somehow seem to
get greater benefit out of the Athlons than the Pentiums, yet Pentiums
still somehow show up higher in SPECfpu. And this is not a complaint
that's been levelled at SPEC only since the x86 processors started
competing; it's been a complaint since the days when the RISC server
chips competed with each other.

I'm not sure that games are necessarily the best example here, since
gaming performance was never the goal of SPEC CPU2000; however, there
are some criticisms that could be made. The idea of tuning an
architecture to get good SPEC scores was rather more popular with SPEC
CPU95, particularly in regards to cache. With CFP2000 they tried to
correct this, though I personally think that they might have
over-corrected some things. It seems like a lot of the tests ended up
being more of a slight variation on STREAM than they were floating
point benchmarks. Now, admittedly memory bandwidth DOES play an
important role in floating point calculations, but it seemed like the
tests were intentionally ignoring the many cases where your data can be
broken up to run entirely from cache.
I was talking about real performance too. The P4 didn't really start
breaking away from the P3 until the P3 stopped at 1.3GHz while the P4
was up to 2.0GHz. Similarly, the P4 didn't really start breaking away
from the Athlon XP until the XP stopped at 2.1GHz and the P4 was at 3.2GHz.

One thing to consider here though is that the P4 was able to get to
2.0GHz on a 180nm fab process. The PIII, on the exact same fab
process, topped out at 1.13GHz (and it took them two tries to reach
that clock rate). When comparing those two clock rates it became
fairly easy to see why Intel went with the P4.

When compared to AMD the clock speed difference wasn't as large; AMD
did eventually get their 180nm process and "Palomino" AthlonXP chips
up to 1.73GHz. However, they did so 7 months after Intel had hit
2.0GHz on their 180nm process. If you compare things in the same
time-frame it was more like 1.4GHz for AMD vs. 2.0GHz for Intel, or
1.73GHz for AMD vs. 2.4GHz for Intel. The writing was really on the
wall at this point: the AthlonXP just wasn't going to keep up for very
long.
Since Intel couldn't really go much beyond 3.6GHz with the P4, the
Athlon 64 completely picked it off, because it had the higher IPC and a
better thermal cushion to work with.

Yup, the shoe was on the other foot for Intel for a while, and it
still is today. However, this has been a bit of a cyclical thing for
the past few years. If AMD just sits on their laurels for the next
while, Intel will blow by them in the performance and performance/watt
race. The fact that Intel is producing chips on a 65nm fab process
now, and AMD doesn't plan on doing so for at least 6 months (possibly a
year), should be of some concern to the execs in AMD's boardroom.
As for server marketshare, that's exactly the marketshare that they are
aiming for right now, approximately 30%. As for how much of it is really
a threat to Intel? I would assume it's quite a threat. Intel had 100%
of the x86 server marketshare; 70% is a huge step down. That market was
accounting for a large portion of their profits due to the high margins
and monopoly status.

AMD ain't there yet though. I don't think they've broken the 10% mark
(though they're close). Server marketshare is a tough nut to crack.
And there is no refuge in the laptop market either. Retail sales of
laptops were 30% AMD just before Christmas, again up from nearly 0%
before. The servers and laptops were where Intel had counted on making
its profits, because it was making noises about how old-fashioned the
desktop PC market was, and how the laptop was the way of the future.

AMD continues to do well in retail sales, both of desktop and
notebook chips, but their overall sales figures continue to lag well
behind. For notebook chips AMD is again hovering around 10% overall
marketshare, hoping to get up to 15%.
Not likely; Intel had the same time advantage over AMD at 90nm, but it
never worked out for them. The only thing that smaller process nodes
give anyone nowadays is a manufacturing cost advantage, not a
performance advantage. It was starting to get obvious from the 180nm
node on down that performance was no longer automatically scaling like
it used to.

I would say that you're greatly exaggerating things here. While
initially it seemed like 90nm didn't gain Intel anything, a lot of
that seemed to be due to the fact that they more than doubled the
number of transistors without an immediately apparent improvement in
performance.

Looking back though, 90nm didn't turn out all that badly for Intel.
They went from 3.4GHz chips with 512KB of cache made on a 130nm
process up to 3.8GHz chips with 2MB of cache. For SPEC
CFP2000_base the result went from 1300 to 1976, or a 52% increase in
performance. Not quite as good as the 77% increase in performance
when going from 180nm to 130nm (735 vs. 1300), but not nearly as far
off as you make it sound.

For SPEC CINT2000_base the numbers are similar: 681 for the 180nm P4
running at 2.0GHz/256KB cache/400MT/s bus, 1342 for the 130nm P4
running at 3.4GHz/512KB cache/800MT/s bus, and 1793 for the 90nm P4 at
3.8GHz/2MB cache/800MT/s bus (this last number is courtesy of Dell,
since Intel doesn't have their own result for this chip).
Percentage-wise that is 97% for 180nm -> 130nm and 34% for 130nm ->
90nm. A much smaller gain, but definitely still measurable.
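
If anyone wants to double-check those percentages, here's a short
Python snippet that recomputes them from the SPEC scores quoted above
(the scores are just the ones cited in this post, not re-measured):

# Recompute the process-shrink scaling from the quoted SPEC scores.
cfp = {"180nm": 735, "130nm": 1300, "90nm": 1976}    # CFP2000_base
cint = {"180nm": 681, "130nm": 1342, "90nm": 1793}   # CINT2000_base

for name, s in (("CFP2000_base", cfp), ("CINT2000_base", cint)):
    gain_130 = (s["130nm"] / s["180nm"] - 1) * 100
    gain_90 = (s["90nm"] / s["130nm"] - 1) * 100
    print("%s: 180nm->130nm +%.0f%%, 130nm->90nm +%.0f%%" % (name, gain_130, gain_90))
# CFP2000_base: +77% then +52%; CINT2000_base: +97% then +34%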

With the move to 65nm the goal is more multicore performance than
single-threaded stuff, but there are some changes Intel plans to bring
in that will help the latter. If nothing else, they should be improving
the CPU bus design a bit, something that (not counting the slightly
meaningless Extreme Edition chips) didn't happen with their 90nm
chips.

So, to say that we're just not going to see any further
performance increases due to process enhancements is VERY shortsighted
IMO.
 

George Macdonald

However, it's the peak-time power from power plants that matters; that is
what decides the size of the infrastructure and what causes blackouts.
There is no efficient way to store power that is generated at a fixed
rate but not demanded. (just my 1c worth)

I believe that, where local geography/facilities permit, pumped hydro
storage is in regular use in various parts of the world. I don't know
what came of the plan to pump compressed gas into impermeable rock
cavities - it seemed a bit harebrained way back when it was proposed.
For peak load, however, I believe that gas turbines, being relatively
inexpensive to build, are often used to cover peak loads over the
base load provided by the large infrastructure plants.

Somebody had better figure this one out if wind turbines are to become
viable in any sense at all.:)
 

Johannes

George said:
I believe that, where local geography/facilities permit, pumped hydro
storage is in regular use in various parts of the world.

Yes, there are places where you can pump water up to a higher lake. Hence,
I put in the little word 'efficient'. A certain nuclear power plant did
this; I was told by the people there that the fish grew quicker because of
the nice warm water...
I don't know what came of the plan to pump compressed gas into
impermeable rock cavities - it seemed a bit harebrained way back when it
was proposed. For peak load, however, I believe that gas turbines, being
relatively inexpensive to build, are often used to cover peak loads over
the base load provided by the large infrastructure plants.

Somebody had better figure this one out if wind turbines are to become
viable in any sense at all.:)

I know that some water companies will incur penalties if they exceed
their peak power allocation. Even switching on an extra heater in the
office during the winter could trigger the penalty - something a
domestic customer wouldn't think twice about.
 

dannysdailys

Asking for predictions?

How about this one: I predict the new Pentium Extreme dual core will
be a pig for power, and anyone stupid enough to buy one will wind up
with an orphan. I said this 2 months ago, maybe 4. Intel can only
orphan so many people. Maybe Intel should design an internal ice box
for these processors.

Yes, 65nm with no extra performance. I believe someone already
mentioned that would happen. This thing isn't even a true dual core.
It's just two processors on the same card, sharing one channel of the
memory bus. Oh, that's breathtaking; let's clock it up some
more...

Another barn burner that barely holds up. This one even has "hot"
spots. No front side bus to speak of, and this is where you want to
leave your faith? This is their latest and greatest?! This is what
you think gains marketshare? I told you 2006 would be an interesting
year, and it hasn't even started yet.

Oh, and by the way, so much for your SPEC test:

http://www.extremetech.com/article2/0,1697,1907202,00.asp

This thing has almost a 60 watt larger heat signature than the
Athlon X2-4800, and can barely run with it.

Yeah, better pull out the BTX's. They're the only thing going to save
this pig. By the way, no one ever told me what happens when one of
those tiny BTX air filters plugs up. I think they should only sell
them in Minnesota in the winter. No really... That way, if you
leave it lying outside and it's 45 degrees below 0 F, it may have
half a chance of not melting down. That should cool it. LOL

I really feel sorry for you Intel guys...

I remember when Intel first came out with heat-related down-stepping
on board. We Athlon people couldn't figure out why. Now we know...
Good thing they did it too, huh...

"Crank up the generators, Intel's in town." LOL

Yes, vindication is sweet...
 

George Macdonald

Yes, there are places where you can pump water up to a higher lake. Hence,
I put in the little word 'efficient'. A certain nuclear power plant did
this; I was told by the people there that the fish grew quicker because of
the nice warm water...

Compared with running a supplementary gas turbine, they are relatively
efficient.
I know that some water companies will incur penalties if they exceed
their peak power allocation. Even switching on an extra heater in the
office during the winter could trigger the penalty - something a
domestic customer wouldn't think twice about.

Why water companies?
 

chrisv

dannysdailys said:
Intel is the laughing stock of the tech industry
and will remain so for the foreseeable future.

Yeah, that $9B a year in profits is something that the "tech industry"
considers to be quite laughable.

Idiot.
 

Del Cecchi

Yousuf Khan wrote:
snip
IBM is a schizophrenic, disjointed organization. Some parts of IBM are
helping Intel out more; for example, their chipset division makes the X3
NUMA chipset for Xeon processors, and this group hopes to see AMD fail.
Meanwhile, another division of IBM is collaborating with AMD, helping them
out with their manufacturing technology. And of course other parts of
IBM are competing against Intel, such as their processor division; the
processor division was the one affected when Intel stole away the Apple
contract from IBM.

Yousuf Khan

You are an idiot. The X3 chipset was designed by the E&TS group for the
Xseries servers and is only sold as part of a server box. So far as I
know, there is no desire by Xseries or E&TS to see AMD fail. In fact, I
think Xseries sells boxes with AMD processors.

IBM doesn't have a processor division. IBM has the I and P series
servers that use PowerPC processors. And IBM has the Microelectronics
division which manufactures chips.

These are all part of Technology Group. And while I don't know the
whole story, I think "Intel stole.... Apple" is a poor description.

IBM technology group is also doing chips for Sony and Microsoft.

You really should calm down and get a grip on yourself. If you can't
handle the stress, sell your AMD stock and buy I-bonds.
 

max

Asking for predictions?

How about this one: got any predictions on what the quarterly/annual
results, to be released shortly, will be?

Since you believe Intel's crashing and burning, and AMD is laughing
all the way to the bank, it should be fairly easy...

max
 

dannysdailys

max wrote:
Asking for predictions?

How about this one: got any predictions on what the quarterly/annual
results, to be released shortly, will be?

Since you believe Intel's crashing and burning, and AMD is laughing
all the way to the bank, it should be fairly easy...

Hey, the bottom line is Intel had 100% of servers. It also had 100%
of laptops. Any inroads AMD makes are going to hurt Intel. They
already have. As this trend continues, Intel stock will take a hit.
The trend has to continue as long as Intel keeps coming out with
these ancient processors. Have you read the review on the new
"Extreme?" The only thing extreme about this chip is how extreme the
heat gets.

It doesn't matter about profits; the market could care less when it
smells a loser.

And speaking of the market, we don't have a clue what the extent of
the lawsuit will bring out. This alone can't be good for Intel.

Fairly easy? If it was that easy, I certainly wouldn't be wasting my
time here. Now would I?
 
