so Jobs gets screwed by IBM over game consoles, thus Apple-Intel ?


TravelinMan

YKhan said:
It has a research component. No one is going to do experiments on a
working production line.

Sorry to disappoint you, but it happens all the time in the real world.
 

Del Cecchi

Keith said:
Hasn't the ASTC been closed down?

News to me, but that doesn't mean much. Easily could have been and they
forgot to notify me.
YEEHAWW Howie wasn't invited to the auction. No water. No
electricity. No land. No workers. Lotsa taxes buys lotsa bureaucracy
though. It's taken forty years to build a two-lane road, and it'll
likely be another forty before it's finished.

No Water? You can damn near throw a rock into Lake Champlain from
there. Or is Champy covered by the Endangered Species Act?
If people only knew... You shoulda seen people turn white when there
was a b*mb scare on the main site some ten years back (long before
9/11).
I been to Vermont. Most of the people were pretty white anyway. :)
 

keith

News to me, but that doesn't mean much. Easily could have been and they
forgot to notify me.

I'm not a process type, but I haven't seen anything out of there for at
least two years, likely three. Politics AIUI, but...
No Water? You can damn near throw a rock into Lake Champlain from
there. Or is Champy covered by the Endangered Species Act?

Nope. No water or sewer. Just because there is a (not-so) great lake
10mi away doesn't mean there is (a *LOT* of) water to be had for a fab.
EF has *unlimited* water. IBM sunk a kabillion wells in the '80s a
few miles away and then topped the wells with grass. They then gave
the surface to the town for soccer fields, keeping the mineral rights.
I been to Vermont. Most of the people were pretty white anyway. :)

Snow white, in fact. ;-) However, you've not *seen* white. Shutting
the place down for a few days wasn't taken lightly! Hell, it's never been
shut down for weather. ...unlike those wussies in NC. ;-)
 

keith

It has a research component. No one is going to do experiments on a
working production line.

Wrong. One doesn't duplicate $billion lines. Experiments are run on
production lines all the time.
But each manufacturer has its own history to draw upon when it creates
its own automated process management scheme. AMD's history (including
all of its failures) is particularly relevant to the manufacture of x86
microprocessors.

...and IBM has never made an x86 processor? Also, perhaps you can
clarify why you think x86 is so special. Processor technologies are
processor technologies. IBM has different goals, sure.

? Most of this is learned experience, not theoretical.
AMD has much more experience at producing high-speed processors at good
yields than IBM. This is borne out by IBM's severe yield problems at
Fishkill compared to AMD's relative lack of them using nearly identical
equipment in Dresden.

You're on drugs, Yousuf!
AMD needed some help from IBM about integrating new materials into its
processes. But once that knowledge was gained, it would be AMD that
would have the better chance at getting it going on a big scale.

Geez! What a load of horse-hockey!
 

YKhan

keith said:
Wrong. One doesn't duplicate $billion lines. Experiments are run on
production lines all the time.

Yet, no production silicon came out of there.
...and IBM has never made an x86 processor? Also, perhaps you can
clarify why you think x86 is so special. Processor technologies are
processor technologies. IBM has different goals, sure.

No, IBM hasn't made an x86 processor in the longest time. But it's not
really the fact that it's an x86 processor that makes it an issue.
It's the fact that IBM hasn't made massive amounts of processors in a
long, long time, so it doesn't have the background anymore. Sure, it
can feed its own needs with Power processors, but that's not a large
quantity of processors.

Nvidia was about to use Fishkill as its centre of manufacture for
GeForce chips, but IBM was completely unprepared for the task. Nvidia
is back with TSMC again. Cray wanted Fishkill to produce some router
chips for its XT3/Red Storm computers, and IBM couldn't even handle
that small job, ending up delaying a supercomputer project at Sandia.
And to top it off, IBM was able to produce its own supercomputers,
which beat Red Storm to surpassing the Earth Simulator. So Cray gave
the job to TI to complete it.

So far, it's been a dismal record of incompetence. IBM might have some
great theoreticians around, but not enough of the practitioners.
? Most of this is learned experience, not theoretical.

You're on drugs, Yousuf!

If I am, then you should get on them right away too; it'll bring you
back to planet Earth. You're not seeing the big commercial picture
here: IBM is getting a reputation for manufacturing incompetence, pure
and simple.
Geez! What a load of horse-hockey!

Yes, sometimes the blunt truth hurts.

Yousuf Khan
 

websnarf

Yousuf said:
When did Tom's become a "real" benchmarks site?

When they disclosed the software, hardware, and specifications of their
testing, and when people on the outside were able to reproduce their
results.

If you have BIAS problems (*cough* ads from nVidia *cough*) with Tom's,
that's fine, but you have all the other sites I listed which keep him
in check.

Compare this to Apple, which hired Veritest (formerly ZD Labs, notorious
for bias towards their sponsors and just generally bad benchmarking in
the past), who told Apple that they *LOST* on SPEC CPU Int (even after
rigging the whole test scenario in Apple's favor, and ignoring the
faster Opteron/Athlon-based CPUs), and then Apple interpreted this to
mean that their computer was the fastest on the planet.

There are two completely different standards here.
 

websnarf

Well, IBM had PPC750 parts running at over 1.0GHz when Moto was stuck
at 450MHz.

Why? Dunno.

I thought you "knew" that greater pipelining was how you got higher
clock rate.
I had made the argument that Intel's idea of pipelining for greater
clock has been abandoned.

Really? By who?

Intel may have given up on the P4, but that, I think, is just a
reflection that the P4 didn't have as much life in it as Intel
originally thought, and they didn't have a next generation technology
ready to go. But they did have a re-implemented P-Pro architecture
that seems to have fit their current product needs.

Trust me, the next Intel processor (after the Pentium-M) will be very
deeply pipelined. Just going by their previous generation design
times, I would say that in about 12-24 months they should be
introducing a completely new core design. (On the other hand, the tech
bubble burst and the time they wasted on Itanium may have truly thrown
things off kilter over at Intel. Who knows, we'll see.)
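
(For anyone wondering why deeper pipelines buy clock rate at all, here's a
rough back-of-the-envelope sketch in C. All the delay figures are invented
purely for illustration: splitting a fixed amount of logic across more
stages shrinks the cycle time, but the per-stage latch overhead caps the
gains, which is the tradeoff being argued about here.)

/* Illustrative only: invented delay figures, not any real process. */
#include <stdio.h>

int main(void)
{
    const double total_logic_ns = 10.0;     /* assumed total logic delay  */
    const double latch_overhead_ns = 0.15;  /* assumed per-stage overhead */

    for (int stages = 5; stages <= 40; stages += 5) {
        double cycle_ns = total_logic_ns / stages + latch_overhead_ns;
        printf("%2d stages -> %.2f ns cycle -> %.2f GHz max\n",
               stages, cycle_ns, 1.0 / cycle_ns);
    }
    return 0;
}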
You post 3 paragraphs of trivia about AMD crap that I couldn't care
less about,

It was meant as background details to help you understand what is
really going on. But as an Apple-head, I should have realized you
don't care about details.
[...] so in the interests of avoiding a totally pointless
argument (too late!) I helpfully modified my assertion to just be that
Intel went for frequency over IPC with the P4, something I should hope
you would not find controversial or otherwise RDF-influenced.

They did, but their intention was not to have low IPC. They just
couldn't design around this. Their real goal was to deliver high
overall performance. A goal they did achieve. Just not quite as well
as AMD achieved it. If Intel could have figured out how to truly
deliver higher IPC with their half-width double pumped integer ALU
architecture, they would have ruled the CPU universe and have
completely rewritten the rules on CPU design.

But AMD had been putting too much pressure on Intel, and Prescott did
not impress. Furthermore, Intel seems to be shifting to the Pentium-M.
So presumably, it's not an easy (if not impossible) problem to solve.

The reason why I bring up AMD is that you cannot understand where
Intel's technology has been going without understanding what their
strongest competitor is doing.
Wheel the goalposts as far back as you like, but if you can't find
anything wrong or inconsistent with those tests they're credible to me.

And you must be one of those who still believe there are WMDs in Iraq.
Apple does not disclose sufficient details to reproduce the benchmark
numbers. Furthermore, there are no third parties auditing their
results. The few benchmarks where they did give sufficient details, I
have personally debunked here:

http://www.pobox.com/~qed/apple.html

Scroll down to the 08/20/03, 07/18/03 entries. Other useful debunking
includes the 06/15/02 entry.
yeah, one that does back my claim that a dual 2.7 murders a single
athlon in MP-enabled, computation-intensive code. Thanks.

Right. Because dual-processor Athlons don't exist. Oh, wait a sec,
what's this:

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2397

Barbarically throwing a whole CPU's worth of extra computation power to
win in *some* benchmarks with yet more provisos was a nice trick, but
ultimately, AMD (and thus, eventually, a desperately following Intel)
will win that one too.
No, IIRC Otellini publicly confirmed that disinclination recently.

You are extremely trusting of authority figures, aren't you? You should
never take the statements of a CEO or PR person at face value. Only
statements for which there is legal accountability should be believed
from such people.
Sure.

Still, IBM has designed a triple-core 3.2GHz CPU for Microsoft's $500
box, compared to Intel's dual-core 3.2GHz P-D (which, as you surely
know, is nothing more than two Prescotts duct-taped together) that they
are selling for over $530 in qty 1000:

http://www.intel.com/intel/finance/pricelist/

It's called marketing and positioning. I thought I posted this already.
which sorta puts paid to your assertion:

"IBM, not feeling any competitive pressure from anyone, just decided to
crank the power consumption through the roof"

Seems to this RDF-challenged observer that IBM is ahead of Intel here.

I don't know what "RDF" stands for (I'm not a regular in this group)
but I'll just read that as "challenged". There is no credible sense in
which IBM can be considered ahead of Intel (let alone AMD). Except
with the Power architecture on Spec FP CPU which is really a memory
bandwidth test, but that's a different matter altogether.
http://www.tsmc.com/english/technology/t0113.htm

is what they'll be using. Doesn't seem that generic to me.

Ok, so they have copper. Look, there is a reason why AMD made a big
deal about announcing a fab technology sharing agreement with IBM --
the multiple billions IBM spends on research develop techniques and
technologies that lesser fabricators like TSMC cannot match.

If TSMC is making a highly clocked Microprocessor with 3 cores, it
means they are using a low gate count core design and tweaking the hell
out of it at the circuit level for higher clock rates. Maybe these are
those G3s that ran at 1GHz but had no AltiVec units. That would
make the most sense (one of the lessons of the video game industry is
that SIMD instruction sets don't matter if you have a fast graphics
card.)
Well, the triple SMT cores will probably mitigate TSMC's 'woeful'
fabbing capabilities.

I don't even know what that means. You mean by having a steady
customer, they will be able to fund future process technology? That's
great, but their problem is what they can deliver with their *current*
technology. And they have other steady customers, like ATI and nVidia
who will pay just as well as Microsoft for fab capacity.
from Microsoft? huh?

*Sigh* ... Microsoft doesn't *OWN* the design.
yeah, pay IBM hundred(s) of millions of dollars.

For licensing a core? That's not the way it works. Beyond a nominal
up-front cost, they pay a percentage royalty on each chip shipped.
For a design like this, I'm guessing about $5 per chip.
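
(A toy calculation, using the guessed $5/chip royalty above plus an
invented up-front fee and console volume, just to show how per-unit
royalties are a very different proposition from a giant lump-sum payment
for a core license:)

/* All figures here are hypothetical, for illustration only. */
#include <stdio.h>

int main(void)
{
    const double upfront_fee      = 10e6; /* assumed nominal up-front cost   */
    const double royalty_per_chip = 5.0;  /* the $5/chip guess from the post */
    const double chips_shipped    = 30e6; /* hypothetical console volume     */

    double total = upfront_fee + royalty_per_chip * chips_shipped;
    printf("total cost to Microsoft: $%.0f million\n", total / 1e6);
    printf("cost per console: $%.2f\n", total / chips_shipped);
    return 0;
}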
True. I think Apple was getting tired of worrying about the 3-5 year
horizon, not just the immediate problems they were having with
Freescale. Enough to make anyone say, "**** it!", and that's not even
considering the immense cost and time-to-market advantages Apple is
getting from not having to design its own chipsets any more.

Exactly ... this is the "deep reason" so many people are missing. The
stress at Apple over IBM/Mot/Freescale being unable to deliver must be
driving them nuts on a constant basis. By going with Intel, they get
to benefit from the all out war between AMD and Intel (both from a
price and technology perspective) without even having to deal with more
than one CPU supplier.

The reason why you see Dell openly calling on Apple to license OS X to
them is because Dell realizes that ultimately Apple will come out of
this much stronger (something they could have done a long time ago, BTW.)
Of course, Dell may be scared for no reason at all -- Apple still has
the problem of proving that their value add is superior to that of a
typical Windows machine; something they have so far only been able to
translate into 3% market share. The other 97% are not *ALL* just
obsessed with benchmark performance.
Apple only found need to mention IEEE fp in the briefest of notes in
their 100+ page transition document.

Ok, that's nice that Apple doesn't feel the need to emphasize it. That
has nothing to do with whether or not a developer will use the endian
sensitive tricks described there.
Right. Apple's frameworks were cross-platform to begin with. That
people weren't smart enough to take advantage of them isn't Apple's
fault.

Yes, but whose fault it is doesn't change the reality. Software which
makes endianness assumptions exists in essentially the same quantity as
software that is not cross-platform.
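
(For a concrete taste of what such an endianness assumption looks like,
here's a minimal C sketch of the classic trick: reading the first byte of
a 32-bit word. It yields 0x12 on big-endian PowerPC and 0x78 on
little-endian x86, so code that bakes in one answer silently breaks when
recompiled for the other.)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t word = 0x12345678;
    uint8_t first_byte = *(uint8_t *)&word; /* result depends on byte order */

    printf("first byte = 0x%02x (%s-endian behaviour)\n",
           first_byte, first_byte == 0x12 ? "big" : "little");
    return 0;
}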
Nah. Dubious assertion to me. Adobe people are pros, they know how to
architect code.

One thing has nothing to do with the other. You are too quick to defer
to perceived authority. It is in fact your statement which is dubious,
because you have no evidence to back it up.

On the other hand, one thing we know for sure is that reams of raw x86
assembly code linked up with an OS X front end don't exist in any
form anywhere. Photoshop ported to OS X on x86 will probably be the
first (and possibly only) application ever to do this.
Assuming they're still on Codewarrior... moving to Xcode/gcc, now THAT
will be an immense amount of work.

Codewarrior on the PC? Pulease! For people with even the most minor
concerns about performance on Windows/x86, your choices are basically
gcc, MSVC, and Intel C/C++.
True, CoreImage doesn't do Adobe any good in maintaining pixel-perfect
compatibility with the x86 code.

God, you really don't have any clue at all, do you? The assembly is
used to make it run with high performance. Pixel correctness isn't the
issue -- "does it even compile" is the issue.
 

Randy Howard

Trust me, the next Intel processor (after the Pentium-M) will be very
deeply pipelined. Just going by their previous generation design
times, I would say that in about 12-24 months they should be
introducing a completely new core design.

It would be nice to see them do something tangible to approach
Hypertransport on the high end. Maybe Apple is even asking
for this in return for not entertaining AMD proposals.
Apple does not disclose sufficient details to reproduce the benchmark
numbers. Furthermore, there are no third parties auditing their
results. The few benchmarks where they did give sufficient details, I
have personally debunked here:

http://www.pobox.com/~qed/apple.html


I don't know what "RDF" stands for (I'm not a regular in this group)

It's the very same "reality distortion field" that you mention in long
form on the link above.
The reason why you see Dell openly calling on Apple to license OS X to
them is because Dell realizes that ultimately Apple will come out of
this much stronger (something they could have done a long time ago, BTW.)

Dell answered an email inquiry with a bit of political correctness, and
probably just to yank Gates' chain a bit. I'm sure he doesn't really
want to train another 5000 Indians in Bangalore to support it.

It was a far cry from "calling for" Apple to license it to them.
 

Del Cecchi

keith said:
I'm not a process type, but I haven't seen anything out of there for at
least two years, likely three. Politics AIUI, but...
ASTC was shut down YE 2004, according to my source. Must have moved the
experiments to the other lines.
Nope. No water or sewer. Just because there is a (not-so) great lake
10mi away doesn't mean there is (a *LOT* of) water to be had for a fab.
EF has *unlimited* water. IBM sunk a kabillion wells in the '80s a
few miles away and then topped the wells with grass. They then gave
the surface to the town for soccer fields, keeping the mineral rights.
The enviro folks wouldn't want anyone to suck the water out of the lake
anyway.
And Pataki came up with the $$$$$$.
Snow white, in fact. ;-) However, you've not *seen* white. Shutting
the place down for a few days wasn't taken lightly! Hell, it's never been
shut down for weather. ...unlike those wussies in NC. ;-)

Ice storms are nothing to mess with. We've had a couple and they were a
pain. But we don't shut down for weather either. "It is up to the
individual and their manager" etc.

del
 

YKhan

Trust me, the next Intel processor (after the Pentium-M) will be very
deeply pipelined. Just going by their previous generation design
times, I would say that in about 12-24 months they should be
introducing a completely new core design. (On the other hand, the tech
bubble burst and the time they wasted on Itanium may have truly thrown
things off kilter over at Intel. Who knows, we'll see.)

They've announced the next two generations of desktop processors
already: they will be Conroe and Merom, both Pentium-M derivatives.
Intel has given up on deep pipelining; it's now gun-shy about it. It'll
slowly increase pipeline depth a little bit at a time from now on (like
it should've been doing all along). We probably won't see Intel trying
to hit 4GHz again for another five years.

Intel's got more pressing issues to deal with now than pipelines, like
how to match AMD's Direct Connect Architecture (HyperTransport and an
integrated memory controller).

Yousuf Khan
 

YKhan

When they disclosed the software, hardware, and specifications of their
testing, and when people on the outside were able to reproduce their
results.

If you have BIAS problems (*cough* ads from nVidia *cough*) with Tom's,
that's fine, but you have all the other sites I listed which keep him
in check.

Compare this to Apple, which hired Veritest (formerly ZD Labs, notorious
for bias towards their sponsors and just generally bad benchmarking in
the past), who told Apple that they *LOST* on SPEC CPU Int (even after
rigging the whole test scenario in Apple's favor, and ignoring the
faster Opteron/Athlon-based CPUs), and then Apple interpreted this to
mean that their computer was the fastest on the planet.

There are two completely different standards here.

Still don't see the difference between them and Tom's. Have you seen
their latest attempt at "stress testing" AMD and Intel dual-core
systems? At one point the Intel system burned through three
motherboards until they finally found one that was stable. Stable is
relative in this case, as all it means is that it hasn't burned up yet;
however, it has suffered four reboots. At the same time, the AMD system
had no reboots. So, to make it fairer for Intel, they decided to reboot
both systems and start the tests over again. Again the reboots started
happening for Intel. So eventually they just decided to call it an
"unfair" test because the "AMD systems were production systems", while
the "Intel systems were preproduction". Now, is it my imagination, or
didn't Intel introduce its dual-core desktop processor at least a month
before AMD? So why are Intel's systems still preproduction? Tom's
doesn't have an answer for that.

Yousuf Khan
 

keith

ASTC was shut down YE 2004, according to my source. Must have moved the
experiments to the other lines.

My source says at least a year before that. Nothing came out of there in
'04, AIUI. Well, at least nothing in 90nm. Perhaps some antiques?

The enviro folks wouldn't want anyone to suck the water out of the lake
anyway.
And Pataki came up with the $$$$$$.

In the form of electricity, sure. They basically screwed Central Hudson
in the process, but what's politics for? The other biggie *was* water.
There isn't any to be had in Deaniac-land, amazingly 'nuff.

Ice storms are nothing to mess with. We've had a couple and they were a
pain. But we don't shut down for weather either. "It is up to the
individual and their manager" etc.

Hell, we've had ice storms; nope, no shutdown. They are a tad scared of
T-boomers though. When I was in P'ok we shut down a couple of times,
once for the better part of a week. Yeah, the Yanks are a hardier breed. ;-)
 

Tony Hill

Yet, no production silicon came out of there.


No, IBM hasn't made an x86 processor in the longest time. But it's not

Err.. IBM is currently fabbing VIA's x86 processors... do those not
count?
really the fact that it's an x86 processor that makes it an issue.
It's the fact that IBM hasn't made massive amounts of processors in a
long, long time, so it doesn't have the background anymore. Sure, it
can feed its own needs with Power processors, but that's not a large
quantity of processors.

They do manage to turn out a fair share of power processors for the
higher end of the embedded market, and then they have things like the
PPC970 and the Power4/Power5 for the high-end. They do have an
interesting hole in the middle where Intel and AMD tend to live, but I
don't think it's THAT much of a stretch for them.

Still, I DO believe that IBM has a fair bit to learn from AMD, the
information flow between the two companies is definitely not a one-way
street.
 

Tony Hill

Really? By who?

Intel may have given up on the P4, but that, I think, is just a
reflection that the P4 didn't have as much life in it as Intel
originally thought,

The P4 was introduced in 2000. We're now in 2005 and the P4 is
Intel's mainstream core for this year and well into next year at the
very least. Their next generation core isn't expected until at least
late-2006 and probably not until well into 2007.

A 7-year lifespan for a processor core is EXTREMELY good.
and they didn't have a next generation technology
ready to go. But they did have a re-implemented P-Pro architecture
that seems to have fit their current product needs.

Calling the Pentium-M a "re-implemented P-Pro" is a VERY large
stretch. Basically every component of the processor has been
significantly modified from the original PPro. It may be a more
evolutionary design than the P4, but it's still worlds away from the
original.

For comparison's sake, AMD's Opteron is much more closely related to the
original Athlon than the Pentium-M is to the P-Pro.
[...] so in the interests of avoiding a totally pointless
argument (too late!) I helpfully modified my assertion to just be that
Intel went for frequency over IPC with the P4, something I should hope
you would not find controversial or otherwise RDF-influenced.

They did, but their intention was not to have low IPC. They just
couldn't design around this. Their real goal was to deliver high
overall performance. A goal they did achieve. Just not quite as well
as AMD achieved it. If Intel could have figured out how to truly
deliver higher IPC with their half-width double pumped integer ALU
architecture, they would have ruled the CPU universe and have
completely rewritten the rules on CPU design.

My view is that Intel created some sort of think-tank group to come up
with what would be the dominant application that people were really
going to need processing power for during the P4's lifetime. This
think-tank came back and said "streaming media is it!". Intel then
went about creating a processor with a significant focus on streaming
media performance. The resulting P4 processor really IS quite good at
streaming media, but unfortunately it can be a bit ho-hum at some
other tasks. The real problem here is that streaming media just isn't
the real killer-app that Intel thought it would be. Sure, we do
stream some media here and there, but more often than not we end up
being limited by something other than the processor.
It's called marketing and positioning. I thought I posted this already.

Also, as I've mentioned in other threads, companies like Dell and HP
pay significantly less than the above-mentioned prices due to the
quantities they buy in. I wouldn't be at all surprised if Dell
could pick up a 3.2GHz Pentium-D for about $150.

As for the comparison of the 3-core PPC CPU of the Xbox 360 vs. the
dual-core Pentium-D, the two chips really aren't in the same ballpark.
IBM really stripped out a lot of features from their PPC core to allow
it to clock higher. Clock-for-clock and core-for-core, performance of
the chip will most likely be lower than that of the Pentium-D. Not
that it really matters, of course; the console industry is quite
different from the PC world and it's difficult to make direct
comparisons.
I don't know what "RDF" stands for (I'm not a regular in this group)
but I'll just read that as "challenged". There is no credible sense in
which IBM can be considered ahead of Intel (let alone AMD). Except
with the Power architecture on Spec FP CPU which is really a memory
bandwidth test, but that's a different matter altogether.

Or just about any other high-end server benchmark: SPEC, TPC,
Linpack, you name it. The Power5 is a beast of a processor. It also
costs a boatload to produce and really isn't competing in the same
sort of market as Intel and AMD for the most part (though IBM does
have some rather interesting 2 and 4 processor servers at attractive
price-points).
Ok, so they have copper. Look, there is a reason why AMD made a big
deal about announcing a fab technology sharing agreement with IBM --
the multiple billions IBM spends on research develop techniques and
technologies that lesser fabricators like TSMC cannot match.

TSMC has some very advanced technology, but their business is rather
different from that of AMD, IBM and Intel. Where those companies
might concentrate on high performance above all else, TSMC has to
focus more on low cost of production. Manufacturing processes can be
tweaked in many different ways, but performance and cost are two of
the main variables that can be optimized for.
If TSMC is making a highly clocked Microprocessor with 3 cores, it
means they are using a low gate count core design and tweaking the hell
out of it at the circuit level for higher clock rates.


TSMC is quite capable of making damn near anything that you throw at
them, but they might not be able to hit the same clock frequencies as
Intel, IBM or AMD. That being said, they'll get higher yields and
lower costs, at least as compared to IBM and AMD (Intel can probably
match them on a cost basis due to sheer volume if nothing else).
I don't even know what that means. You mean by having a steady
customer, they will be able to fund future process technology? That's
great, but their problem is what they can deliver with their *current*
technology. And they have other steady customers, like ATI and nVidia
who will pay just as well as Microsoft for fab capacity.

ATI and nVidia are hardly their only customers. TSMC is a BIG
manufacturing company. For 2004 they had a total capacity of 5
million 8-inch equivalent wafers, split between their 9 fabs. For
comparison, once AMD gets their new 12-inch (300mm) Fab36 up and
running at full steam, combined with their current 8-inch (200mm)
Fab30, they will have a total capacity of about 650,000 8-inch
equivalent wafers per year.

Intel is the only company in the world with capacity to match TSMC.
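
(If anyone wants to sanity-check the "8-inch equivalent" bookkeeping:
wafers are normally scaled by area, so one 12-inch/300mm wafer counts as
(300/200)^2 = 2.25 eight-inch wafers. The sketch below just plugs in the
capacity figures quoted above; they're the post's numbers, not
independently verified.)

#include <stdio.h>

int main(void)
{
    /* area scaling: one 300mm wafer in 8-inch (200mm) equivalents */
    double scale_300mm = (300.0 / 200.0) * (300.0 / 200.0); /* = 2.25 */

    double tsmc_2004  = 5.0e6; /* 8-inch equiv. wafers, figure from the post */
    double amd_future = 650e3; /* Fab30 + Fab36 estimate, figure from the post */

    printf("one 300mm wafer = %.2f 8-inch equivalents\n", scale_300mm);
    printf("TSMC : AMD capacity ratio ~ %.1f : 1\n", tsmc_2004 / amd_future);
    return 0;
}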
The reason why you see Dell openly calling on Apple to license OS X to
them is because Dell realizes that ultimately Apple will come out of
this much stronger (something they could have done a long time ago, BTW.)

Or perhaps because Dell sees the opportunity for themselves to come
out of this much stronger? I know I'm not the only one who would love
to run MacOS X on a (non-Apple) PC.
 

YKhan

Tony said:
Err.. IBM is currently fabbing VIA's x86 processors... do those not
count?

I think that only starts with the new C7 processor; the current C3s
are still being fabbed at TSMC, I believe.

It would be worthwhile actually seeing how quickly IBM can ramp up to
produce C7s. With its problems producing GeForce chips and Cray
routers, among others, it's getting a bad reputation for
manufacturing.
They do manage to turn out a fair share of power processors for the
higher end of the embedded market, and then they have things like the
PPC970 and the Power4/Power5 for the high-end. They do have an
interesting hole in the middle where Intel and AMD tend to live, but I
don't think it's THAT much of a stretch for them.

Possibly, but the embedded PPCs would be the equivalent of AMD's
Geode/Alchemy processors or Intel's XScale: low-frequency parts not
requiring a lot of hard work, where even a line with slightly wonky
yields would still churn out working processors.
Still, I DO believe that IBM has a fair bit to learn from AMD, the
information flow between the two companies is definitely not a one-way
street.

Yup.

Yousuf Khan
 

Yousuf Khan

Tony said:
ATI and nVidia are hardly their only customers. TSMC is a BIG
manufacturing company. For 2004 they had a total capacity of 5
million 8-inch equivalent wafers, split between their 9 fabs. For
comparison, once AMD gets their new 12-inch (300mm) Fab36 up and
running at full steam, combined with their current 8-inch (200mm)
Fab30, they will have a total capacity of about 650,000 8-inch
equivalent wafers per year.

Intel is the only company in the world with capacity to match TSMC.

Not all of their 9 fabs are state-of-the-art. Some of them are pretty
old, used mainly for creating very cheap components. Similarly, AMD has
got considerably more than just the Dresden fabs; it's also got less
state-of-the-art fabs in Austin (Fab 28), and various ones in Japan
through its Spansion subsidiary.

Yousuf Khan
 

Yousuf Khan

keith said:
You think this is novel? Processes are tweaked on the fly to produce
what's needed all the time.

This is what I was talking about:

AMD Lifts Its Veil - Forbes.com
http://www.forbes.com/home/intellig.../14/amd-semiconductor-ruiz_cx_ah_0613amd.html
"Intel likes to save up all its changes and do them all at once," he said. "Since AMD has to do a lot with less, it makes incremental changes. If you took someone from an Intel factory and put him in an AMD factory, after a few days he'd run out of there thinking, 'These people are crazy.' "


Yousuf Khan
 

Del Cecchi

Yousuf Khan said:
This is what I was talking about:

AMD Lifts Its Veil - Forbes.com
http://www.forbes.com/home/intellig.../14/amd-semiconductor-ruiz_cx_ah_0613amd.html



Yousuf Khan

Intel has a whole bunch of fabs that are considered essentially
identical. They clone the process to start up the new fab. So allowing
each fab to wander off in whatever direction could be a bad idea.

How would you manage several fabs, all building the same products in the
same process? Please share your knowledge.

del
 

Robert Myers

Yousuf said:
AMD Lifts Its Veil - Forbes.com

http://www.forbes.com/home/intellig.../14/amd-semiconductor-ruiz_cx_ah_0613amd.html

A fascinating article.

<quote>

Nor can AMD afford to make mistakes. Intel's PC chip unit turned in
gross margins above 49% in 2004, while AMD's gross margins for its PC
chips were just shy of 12% in the same period.

</quote>

You don't have to go any further to understand why big vendors are
reluctant to depend on AMD. A manufacturing gross margin of
_Twelve_percent_?

Little wonder that Intel isn't too flustered.

RM
 

Travelinman

Robert Myers said:
http://www.forbes.com/home/intelligentinfrastructure/2005/06/14/amd-semiconductor-ruiz_cx_ah_0613amd.html

A fascinating article.

<quote>

Nor can AMD afford to make mistakes. Intel's PC chip unit turned in
gross margins above 49% in 2004, while AMD's gross margins for its PC
chips were just shy of 12% in the same period.

</quote>

You don't have to go any further to understand why big vendors are
reluctant to depend on AMD. A manufacturing gross margin of
_Twelve_percent_?

Maybe you should show that to all the AMDtrolls who insist that AMD is a
much more worthy partner for Apple.
 
