AMD64 = IA-32e

Alex Johnson

Samuel said:
Don't be. Intel thought of it, but didn't implement it - probably
because management didn't want to "confuse" the market with an Itanium
competitor.

Are you guys all too young to remember the 1990s? In 1994 Intel
announced they were doing the 786, which would be a 64-bit processor. At
the time it was exactly what Opteron and Prescott are, an extension on
top of x86 (probably excluding the extra registers and including V86
mode). Intel wanted to grab the big money for selling servers, and this
strategy was deemed a dead end. Why would that be? Because every fancy
server feature put into their 64b server would be cannibalized by the
desktop line to eke out that little percent improvement to sell another
chip. Once that happened there would be no reason anyone would pay
server prices for the server chip. So they went looking for alternative
architectures and found HP waving its EPIC flag. They made a deal in
1997 and Intel dropped the idea of x86-64 in favor of Itanium. Intel
left the x86-64 opening and AMD struck against them.

Alex
 
chrisv

Robert Myers said:
Your belief, I suppose, is that Intel management is whistling past the
graveyard. Their strategy was to get big customers locked into an
absolutely unique architecture. To the extent that Intel has
succeeded, those customers are not going back to x86.

Both of them? 8) In any case, even if true, it doesn't mean they'll
stay Intel customers...
 
Robert Myers

I think he means 64-bit code on Opteron/Athlon64 vs. 32-bit code on
Opteron/Athlon64, i.e. identical hardware; the only difference is the
software. He's also almost certainly right, since 64-bit should, if
all else were equal, be slower than 32-bit. However, because AMD
doubled the number of registers, we sometimes (~70-80% of the time from
what I've seen so far) see a performance benefit.

There is no pure way that I can see to measure the performance benefit
of having more register names.

If you take 32 bit code and rewrite it so that it can take advantage
of the extra registers, you're looking at a performance increase
that's roughly equivalent in its root cause to the better SPEC numbers
going from Northwood to Prescott. Going from Northwood to Prescott,
the compiler got better and/or the architecture became more tolerant
of the infirmities of the compiler for SPEC benchmarks.

You don't know, and you'll never know, exactly what the benefit of
having more named registers is except that, since you are solving a
problem with fewer constraints, it will always be easier to write code
that is near optimum if you have more named registers. The only
indisputable benefit is to compiler writers and hand coders, who
plainly don't have to work as hard.

You give the compiler or hand-coder more optimization space, and he
comes up with a better result. Is that because the compiler or
hand-coder found a solution that doesn't exist in the smaller
optimization space, or because the compiler or hand-coder more
frequently came up with a solution that was near optimum because the
set of near-optimum solutions is much larger? There is no way that I
can see to tell for sure.

Let me propose another way to frame the question so it might not seem
like such a silly cavil to you. If doubling the number of registers
is good, wouldn't tripling or quadrupling the number be even better?
We both know very well that there is a cost to more registers: more
die area, more transistors, more complicated control circuitry, more
power, longer traces. We also know that having more register names
makes it easier to find near-optimal coding solutions.

Six apples >= three oranges? Six apples = three oranges? Six apples
<= three oranges?

RM
 
Robert Myers

On Fri, 20 Feb 2004 07:31:26 -0600, steve harris

How about Intel is about making money and Merced is nothing but a money
making proprietary processor that AMD would not be allowed to produce?

That's about the size of it. Fortunately, we live in a free-market
economy, so you will have the option of buying something you like
better from IBM, AMD, or perhaps Transmeta, or even Via, if you don't
care for what Intel makes.

No one will sell competing products because Intel is all-powerful?
Doesn't even pass the laugh test. Even if Intel gets out of hand in
the high end, which would happen only if IBM threw in the towel and
left the processor part of that market to Intel, such a move would
only open an opportunity for someone else to come up with a clean RISC
design that's suitable for mainframes. Who knows, maybe AMD could
invent something of its own (for once...he mutters. AMD is capable of
*much* better than more me-too chips).
Rambus was an Intel opportunity to corner the market. Intel's Rambus
shareholdings came back to bite them in the butt.

Intel's mistake was in not understanding or foreseeing the effects of
the rage of the memory industry at Rambus' overreaching patent claims.

If Rambus had stuck to RDRAM, we might be using RDRAM today, and we
might be better off for it. Intel surely was told how the memory
manufacturers felt, and it's clear that their response was, "That's
nice. We're going to do this, anyway. We can because we're Intel."
That's a mistake I don't think they're likely to make again.
Intel could care less what the customer wants IMO.

On that point, I must beg to differ. Intel doesn't score well on
balance with Usenet posters, but they do manage to sell more
processors than anybody else. Do you really think they manage to do
that without taking into consideration what customers want? Could
you entertain the possibility that they understand the market better
than you do? Could you entertain the possibility that they understand
the market better than the average Usenet poster?

RM
 
steve harris

On that point, I must beg to differ. Intel doesn't score well on
balance with Usenet posters, but they do manage to sell more
processors than anybody else. Do you really think they manage to do
that without taking into consideration what customers want? Could
you entertain the possibility that they understand the market better
than you do? Could you entertain the possibility that they understand
the market better than the average Usenet poster?

RM

Robert,
I believe DELL could sell a box with a Via processor in it. DELL's
reseller ratings are horrible, they moved their service and support
offshore for home customers. I do think Intel believes perception is
cheaper than reality and 90% of consumers could care less who made the
processor in their computer. It will catch up with Intel if Intel
doesn't respond. It appears Intel is responding with Banias and AMD64.
DELL will change or their problems will catch up with them. We saw it in
the American automotive companies in the 70s and 80s.

I think Banias will kill off the current P4 processors and it will not
be long before the Banias will have AMD64 built in.
Steve
 
Robert Myers

On Fri, 20 Feb 2004 08:47:59 -0600, steve harris

I believe DELL could sell a box with a Via processor in it.

Not too sure about that, but Walmart obviously can. Dell ordinarily
charges a large fraction of the price of Walmart's box just to ship
one of theirs.
DELL's
reseller ratings are horrible, they moved their service and support
offshore for home customers.

I'm the guy that trashes Dell regularly. Despite my advice to
consumers, they just keep buying. (I hope that concession makes you
happy, Felger).
I do think Intel believes perception is
cheaper than reality and 90% of consumers could care less who made the
processor in their computer.

Intel's "Intel Inside" campaign was a shrewd and very successful
campaign to get customers, especially corporate customers, to be aware
of what processor is in the box. The guy with the big desk doesn't
shop in the brand x aisle at the supermarket, and he doesn't expect to
find brand x beside his desk at the office.

Opteron is a *huge* credibility breakthrough for AMD...maybe. We'll
see. Markets change.

As to perception and reality...that's just the way it goes. As it
happens, that nostrum is working in AMD's favor just at the moment.
Sixty-four bits must be twice as good as thirty-two, wouldn't you
think?
It will catch up with Intel if Intel
doesn't respond. It appears Intel is responding with Banias and AMD64.
DELL will change or their problems will catch up with them. We saw it in
the American automotive companies in the 70s and 80s.

Intel has managed to stay on top of this wave for a long time, and
they have never had the opportunity to develop the complacency of the
post-WWII American automobile manufacturers.
I think Banias will kill off the current P4 processors and it will not
be long before the Banias will have AMD64 built in.

Wow! Don't waste your time talking to me. Go buy some AMD stock.
;-).

RM
 
Robert Myers

Both of them? 8) In any case, even if true, it doesn't mean they'll
stay Intel customers...

The cost of retooling a major database to new software has got to
exceed the cost of the hardware it runs on. How transparent is
database software to the hardware it runs on? I suspect that Itanium
scores in this department have been Oracle scores. If you call Oracle
as a prospective customer, they will, after having spoken with you
about your needs, offer to connect you directly with a corporate
account executive at Dell.

Oracle software isn't going to run on IBM boxes. AMD and Opteron
aren't going to help Oracle compete with IBM. You can add this up
horizontally, vertically, or diagonally. It doesn't spell AMD. I
would hesitate to call this a conspiracy, but it is clear that some
vendors are more, er, comfortable working with some partners than with
others.

RM
 
Wolfgang S. Rupprecht

Robert Myers said:
There is no pure way that I can see to measure the performance benefit
of having more register names.

Couldn't you just tell the compiler to not use half the registers?

If my understanding of how gcc does things is correct, it already
compiles things to some virtual machine with an infinite register
space. Then in a subsequent pass it assigns data to either real
registers or stack depending on the lifetimes of the variables etc.

One would still take the (slight) hit of having an extra register
selection bit taking up space in the instructions themselves, possibly
slowing things down by increasing the size of the instructions a tiny
bit. So like you said, it won't be a pure test, but one could get a
darn good idea of how register starved the architecture really is.
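
For anyone who wants to actually try it, GCC has a -ffixed-<reg> switch
that tells the allocator to leave a named register alone, so you can
roughly starve an AMD64 build back down to the IA-32 integer register
budget. A rough sketch of such an experiment (my own toy kernel; the
register spellings are what I believe the x86-64 backend uses, so
double-check them):

/* regpressure.c -- toy kernel with many simultaneously live values.
 * Build it twice and compare timings (x86-64 target, GCC assumed):
 *
 *   gcc -O2 -o full regpressure.c
 *   gcc -O2 -ffixed-r8  -ffixed-r9  -ffixed-r10 -ffixed-r11 \
 *           -ffixed-r12 -ffixed-r13 -ffixed-r14 -ffixed-r15 \
 *           -o starved regpressure.c
 *
 * The second build forbids the allocator from touching the eight
 * registers AMD64 added, so it has to make do with roughly the IA-32
 * budget -- not a pure test, as noted above, but close enough to see
 * how much spilling the extra names are saving.
 */
#include <stdio.h>

#define N 1000000

static long a[N], b[N];

static long kernel(void)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    long s4 = 0, s5 = 0, s6 = 0, s7 = 0;

    /* Eight independent accumulators keep many values live at once,
     * which is what makes register starvation visible. */
    for (int i = 0; i < N; i += 8) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
        s4 += a[i + 4] * b[i + 4];
        s5 += a[i + 5] * b[i + 5];
        s6 += a[i + 6] * b[i + 6];
        s7 += a[i + 7] * b[i + 7];
    }
    return s0 + s1 + s2 + s3 + s4 + s5 + s6 + s7;
}

int main(void)
{
    for (int i = 0; i < N; i++) {
        a[i] = i;
        b[i] = i ^ 1;
    }
    printf("%ld\n", kernel());
    return 0;
}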

Perhaps a good thesis topic for some bright young student?

-wolfgang
 
joe smith

I think most of the performance gains we are seeing with the
How would you tell the difference?

Easily. Most software uses int, not "long long" or __uint64, so the
additional optimization opportunities the compiler could possibly fathom
from the availability of wider registers are limited. The opportunities
to organize code so that it has fewer dependencies on previous
instructions, without reads/writes to memory in between, are more
numerous, even if the L1 cache is 'fast', and even if the code is
dynamically translated to the 'micro-ISA' (for lack of a better term
coming to mind for what is being done these days) that the ALUs
internally process.

Average speed differences from one batch of tests run on AMD64 (x86_64 in
GCC terminology) showed an average 15% speed increase from mere
recompilation with early versions of GCC for x86_64. As time goes by, the
optimizations the compiler can employ are probably going to increase, but
that's what early tests show. I can't recall the link; could be I picked
it up from slashdot.org a while ago. Could be remembering wrong.

I don't have AMD64 at this time, but I _do_ develop for the MIPS IV ISA and
x86 at this time and have a little practical knowledge behind the above
educated guess (yup, I don't claim it's the Truth or that I have the
ultimate clue, but I base the opinion on previous experience in programming
and micro-architecture in general and the effect it has on C/C++ code
generation -- yes, I do check compiler output and adjust the source code
accordingly for the segments of code which are important, which are not
very many, but that still puts me on this particular job now and then even
as of 2004).

The across-the-board winner, practically all problems, practically all
coding styles, has almost got to be reduced latency. In the same
category as increasing the size of the cache, only much better,
because you don't have to work so hard to suck stuff into the cache.

L2 cache avoids very expensive reads, but as long as the working set fits
into L2 anyway, the way L1 is implemented can give a 1-2 clock cycle
improvement in latency for VERY *TIGHT* inner loops. The P4's latest
incarnation does appear to favour more (space) over less (latency), though.

Cache efficiency is not just a matter of fitting the working set into the
cache; for a dataset which DOESN'T fit, how the cache is used is still an
all-important issue, and the cache DEFINITELY DOES NOT guarantee an
across-the-board win for all coding styles. First, merely having a cache
does help on average; that has been demonstrated, and that's why there IS
cache on modern systems. But using it in specific ways goes the extra
mile. For one, how many ways the cache has affects how many working sets
the code can use at one time without a performance dive. Secondly, the
pattern used for storing data can have a substantial effect on AVERAGE
performance. FATMAP2 is a classic document on the issue, showing how the
storage pattern can improve performance by an average of 50% (and even
more). In fact, tiling is a VERY COMMON pattern for storing texels in
modern 3D accelerators. It's not black magic or a mystery why this is
being done. The same technique benefits applications written for a
general-purpose CPU.
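
To make the tiling point concrete, here is a bare-bones sketch of the idea
(my own illustration with a made-up 4x4 tile size; no claim that this
matches FATMAP2's or any particular accelerator's exact layout):

/* tiles.c -- row-major vs. 4x4-tiled addressing for a W-wide image of
 * 4-byte texels.  With the tiled index, any 4x4 neighbourhood lives in
 * one contiguous 64-byte block (16 texels * 4 bytes), i.e. one or two
 * cache lines, no matter which direction the access pattern walks.
 */
#include <stddef.h>

#define W    1024   /* image width; must be a multiple of TILE */
#define TILE 4      /* 4x4 texels of 4 bytes = 64 bytes per tile */

/* Classic row-major index: a one-texel vertical step jumps W*4 bytes. */
size_t linear_index(unsigned x, unsigned y)
{
    return (size_t)y * W + x;
}

/* Tiled index: pick the tile first, then the offset inside the tile. */
size_t tiled_index(unsigned x, unsigned y)
{
    size_t tiles_per_row = W / TILE;
    size_t tile   = (y / TILE) * tiles_per_row + (x / TILE);
    size_t offset = (y % TILE) * TILE + (x % TILE);
    return tile * (TILE * TILE) + offset;
}

/* Usage: texels[tiled_index(x, y)] instead of texels[linear_index(x, y)]
 * whenever the access pattern is rotated or diagonal rather than plain
 * scanline order. */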

More register names. My guess is that the biggest benefit is to
compiler writers and hand coders.

Hand coders are a dying breed, but coders who do think a bit about what
they want the computer to do, rather than what they want an abstract
virtual language to do, are sometimes what separates a good implementation
from a bad one. Some program "C++"; some program machine code USING C++,
even if they use templates, partial specialization, namespaces,
inheritance, etc. etc. A subtle difference in theory, but it can lead to
more efficient code in practice. However, dwelling on the performance
issues ALL THE TIME is a waste of time, and therefore a waste of money and
brainpower, life and all that naturally results from that. Whenever given
the choice, I'd write the clear and easy-to-read solution rather than an
obfuscated one, even if the latter were 50% faster. The chances are that
such code is easier to refactor and to switch between algorithms that give
ORDER OF MAGNITUDE better performance in the long haul anyway, even though
I wrote "slow code". But it doesn't mean I'd go writing code I know to be
crappy just because it gets the job done; the point I had in mind was that
experience leads to automatically writing things the 'right way'.

Experience in programming is proficiency in applying 'patterns' you've
learned to the practical problems you are solving at the time. Programming
is problem solving and pattern matching at the same time. I don't care if
someone disagrees; this is just my opinion. And I am taking drugs and
unemployed, so perhaps no one should take my advice after all; see where
that would get you?

But I hope the regulars here got a good chuckle out of this, and I hope
even more satisfaction in pointing out what a clueless git I am. This
one's on me. Enjoy.
 
Jan Panteltje

The cost of retooling a major database to new software has got to
exceed the cost of the hardware it runs on. How transparent is
database software to the hardware it runs on? I suspect that Itanium
scores in this department have been Oracle scores. If you call Oracle
as a prospective customer, they will, after having spoken with you
about your needs, offer to connect you directly with a corporate
account executive at Dell.

Oracle software isn't going to run on IBM boxes.
I think you are plain wrong about that; three years ago RedHat gave
free PC Oracle CDs away with Linux.
I asked for one, but it was only for the US :-(
Anyway, these days if CEOs can save some $$$$$$ they will HAVE
to change, or else those shareholders (who all run AMD PCs, hehe) will ask
many many questions.
JP
 
Jan Panteltje

There is no pure way that I can see to measure the performance benefit
of having more register names.

If you take 32 bit code and rewrite it so that it can take advantage
of the extra registers, you're looking at a performance increase
that's roughly equivalent in its root cause to the better SPEC numbers
going from Northwood to Prescott. Going from Northwood to Prescott,
the compiler got better and/or the architecture became more tolerant
of the infirmities of the compiler for SPEC benchmarks.

You don't know, and you'll never know, exactly what the benefit of
having more named registers is except that, since you are solving a
problem with fewer constraints, it will always be easier to write code
that is near optimum if you have more named registers. The only
indisputable benefit is to compiler writers and hand coders, who
plainly don't have to work as hard.

You give the compiler or hand-coder more optimization space, and he
comes up with a better result. Is that because the compiler or
hand-coder found a solution that doesn't exist in the smaller
optimization space, or because the compiler or hand-coder more
frequently came up with a solution that was near optimum because the
set of near-optimum solutions is much larger? There is no way that I
can see to tell for sure.

Let me propose another way to frame the question so it might not seem
like such a silly cavil to you. If doubling the number of registers
is good, wouldn't tripling or quadrupling the number be even better?
We both know very well that there is a cost to more registers: more
die area, more transistors, more complicated control circuitry, more
power, longer traces. We also know that having more register names
makes it easier to find near-optimal coding solutions.

Six apples >= three oranges? Six apples = three oranges? Six apples
<= three oranges?

RM
It is not a linear increase, I think; the question is how many extra
variables you need to declare 'register' to still get a useful speed
increase.
These will be mainly in inner loops, and maybe not that many.
The main point of having enough registers is not having to go to memory / stack.
Hey, I have been coding asm for several days now.
But not for x86 ....
So it also depends on how the algos are done; maybe things could be more
'unraveled' and spread over more registers, but the performance versus
register curve will flatten out real soon, I think.
JP
 
Never anonymous Bud

On Fri, 20 Feb 2004 08:47:59 -0600, steve harris



Not too sure about that, but Walmart obviously can. Dell ordinarily
charges a large fraction of the price of Walmart's box just to ship
one of theirs.

A friend called Dell about a system.
They wanted $119 just for shipping, on a $700 system!

When he questioned that, the order-taker hung up on him.

He came out ahead, I found him a better system at a lower price
right here in town, and I'll take care of any software installation
he needs help with.





To reply by email, remove the XYZ.

Lumber Cartel (tinlc) #2063. Spam this account at your own risk.

This sig censored by the Office of Home and Land Insecurity....
 
Tony Hill

There is no pure way that I can see to measure the performance benefit
of having more register names.

Of course not, but can you think of any other performance enhancements
that would explain it? The only real differences between IA32 and
AMD64 code are the extra registers and the ability to use more native
64-bit integers. The latter can improve performance in rare cases,
but long long ints are pretty darn rare.
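
For what it's worth, the rare case where the wider registers themselves pay
off looks something like the 64-bit mixing step below (a throwaway example
of the kind of arithmetic 64-bit hashes and counters use, not taken from
any benchmark). Comparing "gcc -O2 -m64 -S" against "gcc -O2 -m32 -S"
output for it shows the difference directly:

/* mix64.c -- one of the rare spots where native 64-bit integers win.
 * Compiled for AMD64 the multiply and shifts below are single
 * instructions on 64-bit registers; compiled for IA-32 the same source
 * becomes a sequence of 32-bit multiplies, adds-with-carry and paired
 * shifts/moves.
 */
#include <stdint.h>

uint64_t mix64(uint64_t x)
{
    x ^= x >> 33;
    x *= UINT64_C(0xff51afd7ed558ccd);   /* full 64x64 -> 64 multiply */
    x ^= x >> 33;
    return x;
}
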
You don't know, and you'll never know, exactly what the benefit of
having more named registers is except that, since you are solving a
problem with fewer constraints, it will always be easier to write code
that is near optimum if you have more named registers. The only
indisputable benefit is to compiler writers and hand coders, who
plainly don't have to work as hard.

You give the compiler or hand-coder more optimization space, and he
comes up with a better result. Is that because the compiler or
hand-coder found a solution that doesn't exist in the smaller
optimization space, or because the compiler or hand-coder more
frequently came up with a solution that was near optimum because the
set of near-optimum solutions is much larger?

Perhaps a more important question here is "does it matter?" When you
get right down to it, AMD64 has more registers and often AMD64 code is
faster than IA32 code, despite the fact that 64-bit code is normally
slower if everything else were equal.
Let me propose another way to frame the question so it might not seem
like such a silly cavil to you. If doubling the number of registers
is good, wouldn't tripling or quadrupling the number be even better?

It would be. Why do you think damn near every other ISA out there has
32 GPRs (or more, in the case of IA64)?
We both know very well that there is a cost to more registers: more
die area, more transistors, more complicated control circuitry, more
power, longer traces. We also know that having more register names
makes it easier to find near-optimal coding solutions.

My understanding is that for AMD64 the maximum number of registers AMD
could use while still keeping the instruction set basically unchanged
was 16. The costs associated with the extra registers would seem to
be rather small given today's transistor budget, but changing all the
op-codes around would make things tricky on the software side.
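
That squares with how the REX prefix ended up being defined, as far as I
understand it: a repurposed one-byte encoding (the old single-byte INC/DEC
opcodes, if memory serves) donates one extra bit to each 3-bit register
field, which is exactly why the count stops at 16 without reshuffling the
opcode map. A little sketch of that byte (my own illustration, not lifted
from AMD's manuals):

/* rex.c -- the AMD64 REX prefix is one byte of the form 0100WRXB.
 * R, X and B each add a fourth (high) bit to a 3-bit register field,
 * growing every field from 8 possible registers to 16 -- and no
 * further, which is why 16 was the ceiling without new opcodes.
 */
#include <stdio.h>

int main(void)
{
    unsigned char rex = 0x48;   /* the common REX.W prefix */

    if ((rex & 0xF0) == 0x40) { /* high nibble 0100 marks a REX byte */
        printf("REX.W = %d (64-bit operand size)\n", (rex >> 3) & 1);
        printf("REX.R = %d (extends ModRM.reg)\n",   (rex >> 2) & 1);
        printf("REX.X = %d (extends SIB.index)\n",   (rex >> 1) & 1);
        printf("REX.B = %d (extends ModRM.rm/base)\n", rex & 1);
    }
    return 0;
}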
 
Samuel Barber

Alex Johnson said:
Are you guys all too young to remember the 1990s? In 1994 Intel
announced they were doing the 786, which would be a 64-bit processor. At
the time it was exactly what Opteron and Prescott are, an extension on
top of x86 (probably excluding the extra registers and including V86
mode). Intel wanted to grab the big money for selling servers, and this
strategy was deemed a dead end. Why would that be? Because every fancy
server feature put into their 64b server would be cannibalized by the
desktop line to eke out that little percent improvement to sell another
chip. Once that happened there would be no reason anyone would pay
server prices for the server chip. So they went looking for alternative
architectures and found HP waving its EPIC flag. They made a deal in
1997 and Intel dropped the idea of x86-64 in favor of Itanium. Intel
left the x86-64 opening and AMD struck against them.

Your history is very flawed. In 1994 Intel was already working with HP
to define IA-64. The development of IA-64 had little to do with
64-bitness, actually; it was motivated by a desire for higher
performance. At the time, x86 performance lagged the RISCs, and this
was viewed as a barrier to entering the high-end markets. HP's
architectural ideas seemed to offer a way to leapfrog the RISCs in the
performance race.

As it happens, architecture has proven not to be a significant
performance lever. The irony is thick. If only Intel had believed its
own anti-RISC propaganda, it would be better off.

Sam
 
Rob Stow

Never anonymous Bud said:
A friend called Dell about a system.
They wanted $119 just for shipping, on a $700 system!

When he questioned that,the order-taker hung up on him.

Doesn't surprise me at all - mirrors an experience I've
had:

Call Dell again about a system. Ask how much for
an upgrade from 256 MB to 512 MB (or from 512 to 1024).
Then ask why it costs about twice as much as if you just
bought a DIMM at the local computer store. CLICK.


I've long thought that Dell makes no money - perhaps even
takes a loss - on their basic systems. Where they make
their profits is on the shipping, "upgrades", extended
warranties, etc.
 
George Macdonald

There is no pure way that I can see to measure the performance benefit
of having more register names.

You're right of course - no way to come up with a general rule... but it
won't stop some from trying.:)
If you take 32 bit code and rewrite it so that it can take advantage
of the extra registers, you're looking at a performance increase
that's roughly equivalent in its root cause to the better SPEC numbers
going from Northwood to Prescott. Going from Northwood to Prescott,
the compiler got better and/or the architecture became more tolerant
of the infirmities of the compiler for SPEC benchmarks.

You don't know, and you'll never know, exactly what the benefit of
having more named registers is except that, since you are solving a
problem with fewer constraints, it will always be easier to write code
that is near optimum if you have more named registers. The only
indisputable benefit is to compiler writers and hand coders, who
plainly don't have to work as hard.

If you add the fact that you now have named FP registers, the extra named
integer registers come in awful handy for things like loop unrolling -
something which just did not really work with the FP stack. Even for hand
coding, the FP stack was a royal PITA. I expect to see some benefit and
I'll maybe even try to measure it for my stuff.:)
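
For what it's worth, the kind of unrolling the FP stack made so painful is
trivial to express now; a throwaway sketch (whether a given compiler really
keeps all four partial sums in XMM registers is something you'd have to
verify in the generated code):

/* dot4.c -- four-way unrolled dot product.  With the flat SSE2 register
 * file that AMD64 mandates, each partial sum can sit in its own xmm
 * register for the whole loop; doing the same trick on the x87 stack
 * means constant fxch shuffling.
 */
double dot(const double *a, const double *b, int n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i;

    for (i = 0; i + 4 <= n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; i++)            /* leftover elements */
        s0 += a[i] * b[i];

    return (s0 + s1) + (s2 + s3);
}
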
You give the compiler or hand-coder more optimization space, and he
comes up with a better result. Is that because the compiler or
hand-coder found a solution that doesn't exist in the smaller
optimization space, or because the compiler or hand-coder more
frequently came up with a solution that was near optimum because the
set of near-optimum solutions is much larger? There is no way that I
can see to tell for sure.

Let me propose another way to frame the question so it might not seem
like such a silly cavil to you. If doubling the number of registers
is good, wouldn't tripling or quadrupling the number be even better?
We both know very well that there is a cost to more registers: more
die area, more transistors, more complicated control circuitry, more
power, longer traces. We also know that having more register names
makes it easier to find near-optimal coding solutions.

It's *more* than doubling the integer register count though. With IA32 you
basically have 6 registers to fiddle with, once you take away EBP and
ESP - now you have 14... I think. DEC once did a universal optimizing
compiler - one that carried registers across function/subroutine calls - for
the VAX, and it had fewer registers available than AMD64: 16 integer
registers overlapped with the FP registers - stupid design... almost as bad
as the page size blunder.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
George Macdonald

The cost of retooling a major database to new software has got to
exceed the cost of the hardware it runs on. How transparent is
database software to the hardware it runs on? I suspect that Itanium
scores in this department have been Oracle scores. If you call Oracle
as a prospective customer, they will, after having spoken with you
about your needs, offer to connect you directly with a corporate
account executive at Dell.

Are you sure about that? Michael D and Larry E seem to me to be the
immutable force meeting the immovable object. I'd be surprised if Oracle
is all that enchanted with Dell. Oracle has already been ported to AMD64
anyway - took 2 days according to the "reports".
Oracle software isn't going to run on IBM boxes.

Of course it does. IBM is divisionalized enough that those kinds of
issues don't arise. Ya think that SAP - bless its cotton socks - doesn't
run on IBM and interface to Oracle?
AMD and Opteron
aren't going to help Oracle compete with IBM. You can add this up
horizontally, vertically, or diagonally. It doesn't spell AMD. I
would hesitate to call this a conspiracy, but it is clear that some
vendors are more, er, comfortable working with some partners than with
others.

You know, if you call up IBM Consulting Services (whatever it's actually
called) and tell them you need some *BIG* help with your IT solutions, show
them a fistful of $$ and say you're committed to Oracle as a database,
they'll say "yessir - we can do that".

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
Robert Myers

Are you sure about that? Michael D and Larry E seem to me to be the
immutable force meeting the immovable object.

In this case, I wasn't speculating, but reporting actual experience.
I mentioned that I wasn't all that enchanted with Dell, and the rep
asked me what I thought would be a better vendor. I mentioned IBM and
the rep's response was something to the effect that I would just be
paying a needlessly higher price for the same hardware.
I'd be surprised if Oracle
is all that enchanted with Dell. Oracle has already been ported to AMD64
anyway - took 2 days according to the "reports".

Enormous egos aside, why would Dell and Ellison not be natural allies?
IBM, OTOH, has its own software it wants to sell you.

As to what it actually takes to move a database from one hardware
vendor to another, I have no experience. What I do have experience
with is a database project that turned into a financial hole in the
bottom of the boat.
Of course it does. IBM is divisionalized enough that those kinds of
issues don't arise. Ya think that SAP - bless its cotton socks - doesn't
run on IBM and interface to Oracle?

The statement as I made it is preposterous. I meant to say that
Oracle wouldn't be ported to the Power architecture, but even on that
point I would have been wrong:

http://www.oracle.com/partnerships/...ux/intro.html?src=1952614&Act=7&kw=powerlinux

You know, if you call up IBM Consulting Services (whatever it's actually
called) and tell them you need some *BIG* help with your IT solutions, show
them a fistful of $$ and say you're committed to Oracle as a database,
they'll say "yessir - we can do that".

You're right, of course. The strategic focus of IBM is software, but,
as you correctly point out, the new IBM is divisionalized to the
extent that corporate strategic focus is subordinate to the divisional
bottom line. Doesn't sound like a very smart way to run a company,
but what do I know?

RM
 
Robert Myers

I've long thought that Dell makes no money - perhaps even
takes a loss - on their basic systems. Where they make
their profits are on the shipping, "upgrades", extended
warranties, etc.

Yes, and a very nice scam they have going there, too. My war with
Dell started over a non-Dell network card. No matter what I tried,
the system blew up. Dell's position: if it works the way we delivered
it to you, it works.

The only reason Dell *ever* backed down is that it became impossible
even to reinstall the operating system--without any offending non-Dell
equipment in the box. Were it not for that, I probably would have had
to take them to court.

If you can get past *that* reality, and the thought of having a
motherboard with a non-standard power connector, there are bargains to
be had from Dell, but you won't get one by logging into their system
and custom-ordering a box because the fancy strikes you.

You have to check the web-site frequently and scan all their #&!%*
print ads. Sooner or later, the system you want or something very
close will turn up at a price that seems unbelievable. As Dorothy
Bradbury has pointed out, the wild price fluctuations are probably the
result of Dell being an aggressively opportunistic buyer and seller.
If they get a good deal on something, they don't want to hold onto it
for more margin. They want to get rid of it, and they're happy to
pass the savings on to whoever is lucky enough to happen along at the
right moment.

Or you could just skip the whole Dell thing and purchase from a vendor
that doesn't have the corporate mentality of a used-car salesman.

RM
 
Grumble

Robert said:
Wow! Don't waste your time talking to me. Go buy some AMD stock.

Why do you say that?

Why would AMD's stock rise if the Pentium M microarchitecture replaced
the NetBurst microarchitecture? Why would AMD's stock rise if Intel
implemented AMD64 in the Pentium M processor family?

Regards,

Grumble
 
