HP plugs two Itaniums into one card


Yousuf Khan

http://www.xbitlabs.com/news/cpu/display/20040505131140.html

This looks like an interim step that HP has taken to compete against
dual-core IBM Power processors already on the market, since dual-core
Itaniums aren't expected until next year at the earliest.

But how much can you save by plugging two Itaniums into a single daughter
card? You still have to pay for _two_ Itaniums, regardless of whether you're
plugging them into the same daughter card or different daughter cards.

Yousuf Khan
 

Alex Johnson

Yousuf said:
http://www.xbitlabs.com/news/cpu/display/20040505131140.html

This looks like an interim step that HP has taken to compete against
dual-core IBM Power processors already on the market, since dual-core
Itaniums aren't expected until next year at the earliest.

But how much can you save by plugging two Itaniums into a single daughter
card? You still have to pay for _two_ Itaniums, regardless of whether you're
plugging them into the same daughter card or different daughter cards.

Yousuf Khan

You save by buying a smaller system. Take a look at big tin systems and
you'll see that a system that can support 16 sockets costs 20% of a
system that supports 32 sockets which is 20% of a system that supports
64 sockets (or some similar numbers). So you lay down $200K for a
low-end 32P chassie instead of $1M for a 64P chassie. Then you install
dual-die parts instead of traditional parts and your cheaper chassie now
holds the same number of chips. Even though a chip may cost you $4K
each, filling the system is usually less expensive than buying the
"potential to fill the system"; ie, ever notice how a 4P/4-socket system
costs a fraction of a 4P/16-socket system even though you bought the
same number of processors?
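
To put rough numbers on that (all prices hypothetical, just to
illustrate the scaling), here's the comparison in a few lines of Python:

CHIP_PRICE = 4_000  # assumed price per processor module, not a real quote

def system_cost(chassis_price, sockets, chips_per_socket):
    # A system costs the chassis plus the processors to fill every socket.
    return chassis_price + sockets * chips_per_socket * CHIP_PRICE

# 64 processors as single-die parts in a 64-socket chassis...
big_tin = system_cost(1_000_000, sockets=64, chips_per_socket=1)
# ...vs. 64 processors as dual-die modules in a 32-socket chassis.
small_tin = system_cost(200_000, sockets=32, chips_per_socket=2)

print(f"64-socket chassie, single-die: ${big_tin:,}")    # $1,256,000
print(f"32-socket chassie, dual-die:   ${small_tin:,}")  # $456,000

Same 64 processors either way; the difference is almost entirely the
"potential to fill the system" that you didn't have to buy.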

Alex
 

Robert Myers

Alex said:
You save by buying a smaller system. Take a look at big tin systems and
you'll see that a system that can support 16 sockets costs 20% of a
system that supports 32 sockets which is 20% of a system that supports
64 sockets (or some similar numbers). So you lay down $200K for a
low-end 32P chassie instead of $1M for a 64P chassie. Then you install
dual-die parts instead of traditional parts and your cheaper chassie now
holds the same number of chips. Even though a chip may cost you $4K
each, filling the system is usually less expensive than buying the
"potential to fill the system"; ie, ever notice how a 4P/4-socket system
costs a fraction of a 4P/16-socket system even though you bought the
same number of processors?

Nothing wrong with your logic, but I think that a simpler explanation
will do:

<quote>

"The milestone is achieved with a new dual-processor module, called mx2,
which features two industry-standard Intel Itanium 2 processors on a
single module that can plug into ->existing<- [emphasis added] systems –
delivering up to 35% lower acquisition costs than similar IBM systems."

HP's move is aimed at delivering speeds closer to, or higher than, those
delivered by dual-core IBM processors and servers today. Since dual-core
IA64 “Montecito” chips from Intel are not due until next year or later,
HP needs something available now to compete with IBM. From a technology
standpoint, the mx2 is simply a module that allows two microprocessors
to share the same processor system bus.

</quote>

The real savings for HP here is avoiding the effectively infinite cost
of developing an entirely new system in zero time: it can instead use an
existing system to compete against IBM merely by plugging in a new
daughter card. In such a scenario, the cost of a second processor is
completely incidental, and the cost of an imaginary, entirely new system
that utilizes a second processor in some other way is completely
irrelevant. The potential cost to HP of doing nothing is losing market
share and, the most expensive cost of all, losing existing customers to
a competitor.

The strategy requires the two processors to split the memory bandwidth
of one, leading one to expect that the intended market is OLTP, where
the strategy might work reasonably well, based on benchmarks I've seen,
as opposed to HPC, where it would be a move of pure desperation. Looked
at another way, the non-processor costs of "big tin" are the memory
subsystem and I/O infrastructure. Your argument amounts to saying that
you can get more performance by adding processors without beefing up the
memory and I/O infrastructure.
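
A back-of-the-envelope model makes that concrete (an Amdahl-style toy
of my own, with made-up fractions, not anything from HP):

def dual_speedup(bus_bound_fraction):
    # Speedup of two processors sharing one bus over one processor.
    # The part of run time that saturates the bus can't be overlapped
    # by a second processor; everything else effectively doubles.
    f = bus_bound_fraction
    return 1.0 / (f + (1.0 - f) / 2.0)

print(round(dual_speedup(0.1), 2))  # OLTP-ish, latency-stalled: ~1.82x
print(round(dual_speedup(0.8), 2))  # HPC-ish, streaming memory: ~1.11x

Which is roughly why splitting the bus is tolerable for OLTP and pure
desperation for HPC.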

That the strategy works at all (more processors, no more memory and I/O)
is probably an oddity of trying to have one system that will cover
multiple markets. The memory subsystem required to do well for an HPC
benchmark is probably overspec'd as compared to the memory subsystem
required to do OLTP. As discussed in these forums and others,
processors for OLTP wind up spending much of their time stalled, and the
usual cleverness that allows a processor not to be stalled in most
applications doesn't work for OLTP. You can try to be clever with SMT,
or you can stop worrying about it and just add more processor cores,
which, for "big tin," aren't all that big a deal, anyway.

Completely off the subject of the thread, is there some new accepted
spelling of the English word, "chassis?"

RM
 

Alex Johnson

Robert said:
Completely off the subject of the thread, is there some new accepted
spelling of the English word, "chassis?"

No, I'm just a lowly engineer with poor speling skilz. :)

Alex
 
