4-way Opteron vs. Xeon-IBM X3 architecture

daytripper

Ok, then why don't you find some AMD systems which can run 128GB of
memory at PC3200 speeds? My claim at least has some evidence; you have
been spouting all sorts of denials, but with NO EVIDENCE to back it up.


Absolutely, but I'll wager money that Bensley will do 1066MT/s when it
comes out.

Expect 1333MT/s busses to appear on the Blackford chipset before the summer
solstice...
 
David Kanter

daytripper said:
Expect 1333MT/s busses to appear on the Blackford chipset before the summer
solstice...

Interesting, I had heard early 3Q06... but I suppose end of 2Q06 isn't
that much different.

DK
 
George Macdonald

Ok, then why don't you find some AMD systems which can run 128GB of
memory at PC3200 speeds? My claim at least has some evidence; you have
been spouting all sorts of denials, but with NO EVIDENCE to back it up.

"MY EVIDENCE" is in AMD's docs which are freely available and which you
appear not to have read - I'm not going to give you lessons on reading them
but the rank counts and speed considerations are spelled out... and *again*
it has been demonstrated that they can be exceeded.
Absolutely, but I'll wager money that Bensley will do 1066MT/s when it
comes out.

Of course, as I already said it's a DIB system. You know, your defensive
posture here on behalf of Intel seems to betray an excess of umm,
empathy... zeal?;-) I believe there's still some element of "if" involved
with Bensley... even if released, I'm not sure whether the world is ready
to accept DIMMs which put out as much heat as a CPU, need heatpipes and
active cooling... which Intel invested a substantial sum of money in
developing and for which we don't have a price. In case you've seen my
previous posts on the subject, I'm having 2nd thoughts on FBDIMM.
A pattern? 2 data points don't form a pattern, be serious.

Formally, if you want to split hairs... no, but there *is* strong evidence
here, which you want to ignore completely with piddling, insignificant
*possible* changes in hardware. There *are* more than two data points
which are close enough, for me, in configuration.
 
David Kanter

"MY EVIDENCE" is in AMD's docs which are freely available and which you
appear not to have read - I'm not going to give you lessons on reading them
but the rank counts and speed considerations are spelled out... and *again*
it has been demonstrated that they can be exceeded.

When and where has it been so demonstrated?
Of course, as I already said it's a DIB system. You know, your defensive
posture here on behalf of Intel seems to betray an excess of umm,
empathy... zeal?;-) I believe there's still some element of "if" involved
with Bensley... even if released, I'm not sure whether the world is ready
to accept DIMMs which put out as much heat as a CPU, need heatpipes and
active cooling... which Intel invested a substantial sum of money in
developing and for which we don't have a price. In case you've seen my
previous posts on the subject, I'm having 2nd thoughts on FBDIMM.

You're entitled to your second thoughts. However, quite clearly the
industry agrees, even Sun will be embracing FBD. Ask any memory
subsystem expert and they will probably tell you that FBD is a VERY
good design for servers. It's probably not appropriate for desktops,
but that is hardly relevant given that more Gbits of DRAM are shipped
in servers than desktops/laptops.
Formally, if you want to split hairs...

That's hardly splitting hairs...more like calling a spade a spade. 2
observations != a trend.
no, but there *is* strong evidence
here, which you want to ignore completely with piddling, insignificant
*possible* changes in hardware. There *are* more than two data points
which are close enough, for me, in configuration.

Ok, how about you point out the comparisons and the differences then?
That might actually convince me...

DK
 
George Macdonald

When and where has it been so demonstrated?

Several Web sites, including THG (not my choice either but the one that
comes quickly to mind), have been able to run 4 ranks at DDR400 on each
channel of an Athlon64. It gets coverage regularly in various forum
discussions; from my POV this is common knowledge, which you are apparently
lacking... because you are not paying attention to details of AMD-based
systems?
You're entitled to your second thoughts. However, quite clearly the
industry agrees, even Sun will be embracing FBD. Ask any memory
subsystem expert and they will probably tell you that FBD is a VERY
good design for servers. It's probably not appropriate for desktops,
but that is hardly relevant given that more Gbits of DRAM are shipped
in servers than desktops/laptops.

We'll see how many times customers are going to be convinced to upgrade
their A/C systems to cope with this thermal load - Intel has already hit
them fairly recently on this issue. Until they can lose the heatpipes
and/or active cooling I have doubts. The way I see it, this could easily
go the same way as DRDRAM.
That's hardly splitting hairs...more like calling a spade a spade. 2
observations != a trend.

I don't need a trend - one pair is enough to provoke the question: "why did
2 near-identical systems perform so differently?". A 2nd pair, with no
counter-cases is good confirmation. Show me a convincing counter-case and
I'll look again.
Ok, how about you point out the comparisons and the differences then?
That might actually convince me...

The results are well presented and the filtering provided is sufficient for
the purpose. I do not have the time - in fact I'm convinced you are
unconvincable in your stubbornness here.
 
Keith

but that is hardly relevant given that more Gbits of DRAM are shipped
in servers than desktops/laptops.


I'd like to see a cite for this, considering that there are .1 to
.2 billion laptops/desktops sold each year. It's been a *long*
time since server technology pulled desktops around by the nose,
rather than the other way around.
 
David Kanter

but that is hardly relevant given that more Gbits of DRAM are shipped
I'd like to see a cite for this, considering that there are .1 to
.2 billion laptops/desktops sold each year. It's been a *long*
time since server technology pulled desktops around by the nose,
rather than the other way around.

Sorry there, I was wrong; servers use less DRAM than desktops.

download.micron.com/pdf/presentations/jedex/memory_trends_micron_2004.pdf

According to page 20, desktops use about 45%, notebooks 15% and servers
20% of the WW DRAM production. So by unit volumes, servers do
not...wag the dog, so to speak. I did see a presentation which showed
that server DRAM was a greater percentage of the market than desktops/laptops, but it
could be that this was measured in $, and possibly end-user dollars
rather than OEM dollars.

DK
 
Keith

Sorry there, I was wrong; servers use less DRAM than desktops.

download.micron.com/pdf/presentations/jedex/memory_trends_micron_2004.pdf

According to page 20, desktops use about 45%, notebooks 15% and servers
20% of the WW DRAM production. So by unit volumes, servers do
not...wag the dog, so to speak. I did see a presentation which showed
that server DRAM was a greater percentage of the market than desktops/laptops, but it
could be that this was measured in $, and possibly end-user dollars
rather than OEM dollars.

Thanks. I'm shocked servers use as much as 20%. That's certainly
enough to support a separate design, though servers won't get the
desktop memory pricing advantage. OTOH, that might be in the
server manufacturer's interest.
 
David Wang

Several Web sites, including THG (not my choice either but the one that
comes quickly to mind), have been able to run 4 ranks at DDR400 on each
channel of an Athlon64. It gets coverage regularly in various forum
discussions; from my POV this is common knowledge, which you are apparently
lacking... because you are not paying attention to details of AMD-based
systems?

I would simply like to point out that there is a difference between
getting it demonstrated to work by THG and getting it shipped in a
medium/large-ish 4P server that costs tens of thousands of dollars.
Getting 4 ranks of DDR(1) to work @ 400 Mb/s probably doesn't take
rocket science, as I am reasonably certain that I am using it in the
Intel based box that I sit in front of right now. The difference is
in guaranteeing that it'll work with sufficient margins and reliability
that HP or anyone would feel comfortable in putting their name on it...
in a server. Not to belabor the point, but it has also been demonstrated
that with enough cooling and enough voltage, Pentium xx can be
pushed to 6+ GHz. So having it demonstrated to work doesn't mean that
you can see it in a server box anytime soon.

HP is being conservative and drops the capacity down to 2 ranks when
running DDR(1) @ 400 Mb/s. That shouldn't be a surprise. IMO, seeing
DDR(1) @ 400 Mb/s in a server is itself a bit of a surprise,
since DDR(1) @ 400 Mb/s is really "overclocked" memory with the 2.6 V
spec. The 400 Mb/s bin was supposed to have been reserved for DDR2,
but market demand from the desktop segment pushed 400 Mb/s down into
DDR(1) territory and made it happen. So no other server that I know of
uses DDR(1) @ 400 Mb/s, and they're all waiting for DDR2 to run 4 ranks
@ 400 Mb/s.

FWIW, AMD is going to talk about Opteron II w/DDR2 support at ISSCC 2006.

http://www.isscc.org/isscc/2006/ap/2006_AP_Final.pdf

Session 5.4
 
George Macdonald

What is this?... a RealWorldTech gang-bang?:)
I would simply like to point out that there is a difference between
getting it demonstrated to work by THG and getting it shipped in a
medium/large-ish 4P server that costs tens of thousands of dollars.

Maybe but IF YOU"D ONLY READ, there's more than THG - this stuff is also
common knowledge in discussion forums. OTOH Micron/Crucial, among other
mfrs, for what is now a small premium, is producing PC4200/DDR500 DIMMS for
the enthusiast market; so the server market is asleep?... err, not
enthusiastic enough to pursue this?:) If this DDR500 needs 2.8V... so
what?... specs can be changed to adapt to evolution of silicon.
Getting 4 ranks of DDR(1) to work @ 400 Mb/s probably doesn't take
rocket science, as I am reasonably certain that I am using it in the
Intel based box that I sit in front of right now. The difference is
in guaranteeing that it'll work with sufficient margins and reliability
that HP or anyone would feel comfortable in putting their name on it...
in a server.

Let's hope someone else might get more umm, enthusiastic.:)
Not to belabor the point, but it has also been demonstrated
that with enough cooling and enough voltage, Pentium xx can be
pushed to 6+ GHz. So having it demonstrated to work doesn't mean that
you can see it in a server box anytime soon.

Such an extreme point is a bit of a red herring.
HP is being conservative and drops the capacity down to 2 ranks when
running DDR(1) @ 400 Mb/s. That shouldn't be a surprise. IMO, seeing
DDR(1) @ 400 Mb/s in a server is itself a bit of a surprise,
since DDR(1) @ 400 Mb/s is really "overclocked" memory with the 2.6 V
spec.

DDR(1)400 is no more of an overclock than PC100 was over PC66 and PC133
after that - it's just progress and enhancement of the processes... happens
all the time.
The 400 Mb/s bin was supposed to have been reserved for DDR2,
but market demand from the desktop segment pushed 400 Mb/s down into
DDR(1) territory and made it happen. So no other server that I know of
uses DDR(1) @ 400 Mb/s, and they're all waiting for DDR2 to run 4 ranks
@ 400 Mb/s.

Not expecting some evolution and progress in functional specs is a bit
myopic IMO. No need to slavishly follow specs which are >3 years old.
FWIW, AMD is going to talk about Opteron II w/DDR2 support at ISSCC 2006.

http://www.isscc.org/isscc/2006/ap/2006_AP_Final.pdf

Session 5.4

Yeah it's been on the roadmap for a while now.
 
David Wang

Maybe but IF YOU"D ONLY READ, there's more than THG - this stuff is also
common knowledge in discussion forums. OTOH Micron/Crucial, among other
mfrs, for what is now a small premium, is producing PC4200/DDR500 DIMMS for
the enthusiast market; so the server market is asleep?... err, not
enthusiastic enough to pursue this?:) If this DDR500 needs 2.8V... so
what?... specs can be changed to adapt to evolution of silicon.

1. I did not see any concrete reference other than to THG. If you have
anything other than citing "common knowledge from the enthusiast market"
then automatically applying it to the server market, please reference
that.

2. I don't read THG, so I am not familiar with details of their test.
However, there is a difference between "memory targeted for the enthusiasts"
and "memory targeted for servers". For one thing, servers use RDIMMS
while desktop folks use UDIMMS. The difference is that servers tend
to go for capacity, so they'd want to load up on the ranks, and you'll
usually see 16 x4 devices crammed into a single rank, then the DIMM
provides the buffering to electrically isolate the load. The "enthusiast
memory" have to go for speed, so AFAICT, they don't use 16 x4 devices
in any single rank, and there's no buffering there. So even if THG or
whatever website may have succeeded in running 4 ranks of DDR(1) devices
@ 400 Mb/s, we have to ask the next question, "What is the configuration
of each rank?" If those are not fully loaded 16 device ranks, then the
capacity per channel is still limited to 32 devices @ 400 Mb/s.
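
As a rough back-of-the-envelope sketch of that device math (assuming, purely
for illustration, 1 Gbit parts and a 64-bit non-ECC channel with 4 ranks):

    # Rough per-channel capacity arithmetic for a 64-bit DDR channel.
    # Assumptions (mine, illustrative): 1 Gbit DRAM devices, 4 ranks per channel.
    DEVICE_GBIT = 1        # density of each DRAM device, in Gbit
    RANKS = 4              # ranks hung off one channel

    def channel_capacity(devices_per_rank):
        devices = RANKS * devices_per_rank
        return devices, devices * DEVICE_GBIT / 8.0   # 8 Gbit = 1 GB

    print(channel_capacity(8))    # x8 "enthusiast" ranks -> (32 devices, 4.0 GB)
    print(channel_capacity(16))   # x4 server ranks       -> (64 devices, 8.0 GB)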

3. The server market is rather conservative, and also rather slow to change.
No one is going to risk reliability to support an over-spec'ed
product. Certainly not DDR(1) @ 500 Mb/s. Even if you show that it is
capable of running DDR(1) with 64 devices @ 400 Mb/s on Opteron/A64,
the next question to ask is how many simulation and test hours do you
have behind that configuration. The point here is that no one is doing
that sort of study to get DDR(1) to work @ 400 Mb/s in the big servers,
because DDR2 is here, and all of the energy is going into qualifying
those parts and getting the 64 device configurations to run @ 400 Mb/s.
Let's hope someone else might get more umm, enthusiastic.:)

That's the enthusiast market, not the server market. I believe the
topic of discussion is the 4P Opteron box and its memory capacity.
If gaming enthusiasts start shelling out the dough to cram 16 GB of
memory into those boxes, then some vendor will certainly step up, do
the required engineering support and get it to work, but until then,
the Opteron is limited to running DDR(1) with only 32 devices @ 400 Mb/s,
because that's the maximum amount of memory that HP is willing to
stand behind at that datarate.
DDR(1)400 is no more of an overclock than PC100 was over PC66 and PC133
after that - it's just progress and enhancement of the processes... happens
all the time.

There are sets of baseline spec's for SDRAM, DDR SDRAM, DDR2 SDRAM,
respectively, and DDR(1) 400 is a push product that broke the
baseline spec for DDR SDRAM. PC133 (and even PC166) uses the same
voltage specs as PC100. DDR(1) 400 does not use the same set of
spec's as DDR 266 or DDR 333.
Not expecting some evolution and progress in functional specs is a bit
myopic IMO. No need to slavishly follow specs which are >3 years old.

If you're selling servers and your reputation depends on the reliability
of said servers, you too would gladly follow specs regardless of their
age. If you're pushing out a product that has better spec than others
in the market, that's great, but I'd want to know how many test hours
you had behind it. I wouldn't want to hear that you just came up with
the product with Micron's help last week.
 
Rob Stow

George said:
What is this?... a RealWorldTech gang-bang?:)


Maybe but IF YOU"D ONLY READ, there's more than THG - this stuff is also
common knowledge in discussion forums. OTOH Micron/Crucial, among other
mfrs, for what is now a small premium, is producing PC4200/DDR500 DIMMS for
the enthusiast market; so the server market is asleep?... err, not
enthusiastic enough to pursue this?:) If this DDR500 needs 2.8V... so
what?... specs can be changed to adapt to evolution of silicon.

FWIW, eight 2 GB PC3200 ECC Reg DIMMs (total of 16 GB) from
either Crucial or Corsair on a Tyan S2880, S2882, S2885, or S2895
seems to work every time.

I did, however, have issues when I tested a mix of Crucial and
Corsair DIMMs. MemTest would run OK for a while and then the
systems would crash, and the length of the pre-crash interval
seemed to be random. Having each bank of four DIMM slots filled
with matching DIMMs seemed to be the only reliable way to go -
two matching pairs in each bank of four slots did not seem to be
good enough.
 
David Kanter

Keith said:
Thanks. I'm shocked servers use as much as 20%. That's certainly
enough to support a separate design, though servers won't get the
desktop memory pricing advantage. OTOH, that might be in the
server manufacturer's interest.

I think that depends. FBD really is not much more expensive to make than a
regular DIMM; the question is whether the savings get passed along or
not.

I'd love to know how large each market (desktop, laptop, server) is by
$s, I think that would be much more interesting.

DK
 
Keith

Rob Stow wrote:


FWIW, eight 2 GB PC3200 ECC Reg DIMMs (total of 16 GB) from
either Crucial or Corsair on a Tyan S2880, S2882, S2885, or S2895
seems to work every time.
Interesting.

I did, however, have issues when I tested a mix of Crucial and
Corsair DIMMs. MemTest would run OK for a while and then the
systems would crash, and the length of the pre-crash interval
seem to be random. Having each bank of four DIMM slots filled
with matching DIMMs seemed to be the only reliable way to go -
two matching pairs in each bank of four slots did not seem to be
good enough.

Could this be a Tyan BIOS issue? Perhaps BIOS (or the memory
controller) can't keep the PD data straight with much mixed memory.
Did you try it in different configurations (Crucial first then
Corsair, etc.)? I thought we were through with this crap. :-(
 
Keith

I think that depends. FBD really is not much more expensive to make than a
regular DIMM; the question is whether the savings get passed along or
not.

Precisely. What's the premium of registered memory over
unbuffered? :-(
I'd love to know how large each market (desktop, laptop, server) is by
$s, I think that would be much more interesting.

The total market, or the memory slice of that market? The total
market should be more-or-less public information. Memory costs are
likely buried so deep in corporate spreadsheets you'll never tease
them free.
 
Rob Stow

Keith said:
Precisely. What's the premium of registered memory over
unbuffered? :-(

Looking at PC3200 ...

For 512 MB DIMMS, about 40%.
For 1 GB DIMMs, about 55%.

For 2 GB DIMMs there is no direct comparison.
PC3200 ECC Reg from Crucial is about $650 - which is about 115%
more *per GB* than 1 GB unbuffered non-ECC DIMMs.
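
Roughly, the per-GB arithmetic behind that figure (the ~$150 street price for
a 1 GB unbuffered non-ECC DIMM is an assumption on my part, in line with the
Crucial upgrade delta quoted later in the thread):

    # Illustrative per-GB premium arithmetic (prices approximate)
    reg_2gb = 650.0      # 2 GB PC3200 ECC Reg from Crucial, as above
    unbuf_1gb = 150.0    # assumed price of a 1 GB unbuffered non-ECC DIMM

    reg_per_gb = reg_2gb / 2                       # ~$325/GB
    premium = (reg_per_gb - unbuf_1gb) / unbuf_1gb
    print(round(premium * 100))                    # ~117, in the ballpark of the ~115% above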

The RAM cost per desktop is typically about $60 or $120.
The RAM cost for a 2P server will seldom be less than $1K and can
easily hit $5K.

The total market, or the memory slice of that market? The total
market should be more-or-less public information. Memory costs are
likely buried so deep in corporate spreadsheets you'll never tease
them free.

And where do you "measure" ? At the memory manufacturer ?
Server/desktop/laptop manufacturer ? Consumer ?
 
George Macdonald

1. I did not see any concrete reference other than to THG. If you have
anything other than citing "common knowledge from the enthusiast market"

While it may suit your agendum, that is not a quote of what I said.
then automatically applying it to the server market, please reference
that.

I did not give any "concrete reference" URL... not even to THG - that was a
casual mention of one I remembered among several I've seen and I do *not*
keep bookmarks for such things... as common as they are. You need to get
out more - the "common knowledge" is in discussion groups all over the
place - I visit several Web Forums to gather info and troubleshoot and I've
seen such discussion of memory configs in almost all.
2. I don't read THG, so I am not familiar with details of their test.
However, there is a difference between "memory targeted for the enthusiasts"
and "memory targeted for servers". For one thing, servers use RDIMMS
while desktop folks use UDIMMS. The difference is that servers tend
to go for capacity, so they'd want to load up on the ranks, and you'll
usually see 16 x4 devices crammed into a single rank, then the DIMM
provides the buffering to electrically isolate the load.

With the right devices and with registering, the server market should be
able to do better - that's all I'm saying. There's nothing about an x4
device which would prohibit making the higher speed versions in x4 form.
The "enthusiast
memory" have to go for speed, so AFAICT, they don't use 16 x4 devices
in any single rank, and there's no buffering there. So even if THG or
whatever website may have succeeded in running 4 ranks of DDR(1) devices
@ 400 Mb/s, we have to ask the next question, "What is the configuration
of each rank?" If those are not fully loaded 16 device ranks, then the
capacity per channel is still limited to 32 devices @ 400 Mb/s.

Ah so the servers don't "go for speed"? What I'm saying is that with the
devices available now I don't see why they couldn't do better. You know
damned well that UDIMMs are 8 devices per rank so why do you have "to ask
the next question"?
3. The server market is rather conservative, and also rather slow to change.
No one is going to risk reliability to support an over-spec'ed
product. Certainly not DDR(1) @ 500 Mb/s.

Canard - I did not even come close to suggesting that.
Even if you show that it is
capable of running DDR(1) with 64 devices @ 400 Mb/s on Opteron/A64,
the next question to ask is how many simulation and test hours do you
have behind that configuration. The point here is that no one is doing
that sort of study to get DDR(1) to work @ 400 Mb/s in the big servers,
because DDR2 is here, and all of the energy is going into qualifying
those parts and getting the 64 device configurations to run @ 400 Mb/s.

Ah now we have it: "conservative":)... that's my complaint.;-)
That's the enthusiast market, not the server market. I believe the
topic of discussion is the 4P Opteron box and its memory capacity.
If gaming enthusiasts start shelling out the dough to cram 16 GB of
memory into those boxes, then some vendor will certainly step up, do
the required engineering support and get it to work, but until then,
the Opteron is limited to running DDR(1) with only 32 devices @ 400 Mb/s,
because that's the maximum amount of memory that HP is willing to
stand behind at that datarate.

No, the topic was *not* limited to the 4P Opteron box, not that it makes much
difference with Opteron anyway. I'm not sure how you're counting your "32
devices" but HP only makes the rules for its boxes... not Opteron in
general.
There are sets of baseline spec's for SDRAM, DDR SDRAM, DDR2 SDRAM,
respectively, and DDR(1) 400 is a push product that broke the
baseline spec for DDR SDRAM. PC133 (and even PC166) uses the same
voltage specs as PC100. DDR(1) 400 does not use the same set of
spec's as DDR 266 or DDR 333.

So a recommended .1V difference for DDR400 makes it overclocked and a
different "set of specs"?<boggle> As I've already tried to point out the
"baseline spec's" for DDR are >3 years old - if you're going to deny the
ability to push as the silicon opportunity presents itself, I have to ask why? We
"allow" that CPUs, GPUs, etc. increase voltage and speed over a silicon
design lifecycle... why not memory?
If you're selling servers and your reputation depends on the reliability
of said servers, you too would gladly follow specs regardless of their
age. If you're pushing out a product that has better spec than others
in the market, that's great, but I'd want to know how many test hours
you had behind it. I wouldn't want to hear that you just came up with
the product with Micron's help last week.

Oh you mean like FBDIMMs... with AMBs which "will brown a burger better
than a George Foreman Grill can" [quote from this NG]?:)
 
Rob Stow

Keith said:
Could this be a Tyan BIOS issue? Perhaps BIOS (or the memory
controller) can't keep the PD data straight with much mixed memory.
Did you try it in different configurations (Crucial first then
Corsair, etc.)? I thought we were through with this crap. :-(

I e-mailed and snail-mailed Tyan about this back in July or
August and I have yet to get a reply. Forget about phoning -
with my hearing I can't handle badly accented English
face-to-face, let alone over the phone.

Crucial and Corsair were more helpful - or at least tried to be.
Other than suggesting that I try different BIOS versions they
didn't have much to offer - but at least they replied.

I have also discussed this in other forums but have only gotten
very limited confirmation. There just aren't enough people in
those forums who have Opty dualies and lots of 2 GB DIMMs to play
with. (Far more often than not I don't have that kind of stuff
to experiment with.)

Also: this seems to be a 2 GB DIMM issue - I have successfully
done mix'n'match with 1 GB PC3200 ECC Reg from a wide variety of
manufacturers - especially including Crucial, Corsair, Mushkin,
and Kingston. They might not play together if I throw them in at
random, but they do if I stick with singles or matched pairs.

This is not really that big a deal for me - knowing that there
is an easily avoidable problem is sufficient. It might be a
different story if the RAM manufacturers stopped selling at more
or less the same prices.
 
David Wang

While it may suit your agendum, that is not a quote of what I said.

My "agenda" is only to report facts and data. The facts on the ground
is that I have not seen any references that has shown that you can
reliably add more memory to the Opteron system @ 400 Mb/s.

The point here is that the issue concerns both speed AND capacity.
The references given suggest that higher speeds are possible, but
none shows that higher speeds are possible with full capacity.
I did not give any "concrete reference" URL... not even to THG - that was a
casual mention of one I remembered among several I've seen and I do *not*
keep bookmarks for such things... as common as they are. You need to get
out more - the "common knowledge" is in discussion groups all over the
place - I visit several Web Forums to gather info and troubleshoot and I've
seen such discussion of memory configs in almost all.

As you may suspect, I read plenty about memory systems, and I would
vigorously challenge your memory in regards to the "common knowledge"
here. I do not believe it's common knowledge for anyone to have
demonstrated running 64 DDR(1) SDRAM devices reliably @ 400 Mb/s.
If you can, please do cite a concrete reference URL. What you remember
having seen may not actually be what it was.
With the right devices and with registering, the server market should be
able to do better - that's all I'm saying. There's nothing about an x4
device which would prohibit making the higher speed versions in x4 form.

The issue that prohibits making the higher speed version with x4 devices
is that you have to hang 16 of them (18 with ECC) on the same address
and command busses per rank. That's a rather heavy electrical load to
run @ 200 MHz. So, no, you can't just look at "enthusiast memory" built
with x8 parts and automatically assume that x4 parts will work just the
same at the same data rate, because you're going to need even faster
parts to meet the same timing.
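
Put as bare numbers (just restating the loading described above for a 64-bit
rank, 72 bits with ECC):

    # Devices hanging on a rank's address/command net, by device width
    x8_rank = 64 // 8    # 8 devices  (typical unbuffered "enthusiast" rank)
    x4_rank = 64 // 4    # 16 devices (registered server rank)
    x4_ecc  = 72 // 4    # 18 devices once the ECC devices are counted
    print(x8_rank, x4_rank, x4_ecc)
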
Ah so the servers don't "go for speed"? What I'm saying is that with the
devices available now I don't see why they couldn't do better. You know
damned well that UDIMMs are 8 devices per rank so why do you have "to ask
the next question"?

Because you cited some rather nebulous references in regards to memory
from the enthusiast market and assumed that it would work in the server
world. I was simply pointing out that's not going to work here because
of configuration differences and electrical loading considerations.
Ah now we have it: "conservative":)... that's my complaint.;-)

Which is what is shipping in HP's Opteron server, and guaranteed to
work by HP. That guarantee provides the effective upper limit to
the maximum memory capacity of an Opteron server as of today. That
limit cannot be exceeded or changed arbitrarily. This, I believe, was
the crux of the contentious point...
No, the topic was *not* limited to the 4P Opteron box, not that it makes much
difference with Opteron anyway. I'm not sure how you're counting your "32
devices" but HP only makes the rules for its boxes... not Opteron in
general.

32 devices is not counting ECC. 36 counting ECC.

The limit is 2 ranks of 18 x4 DDR(1) devices running @ 400 Mb/s, and
that's the same number I am seeing over and over again. Not just HP,
but Tyan as well. So the limitation isn't just HP Opterons, or even
Opterons of any brand of servers, but DDR(1) SDRAM memory controllers
@ 400 Mb/s. The Opteron just happens to have a DDR(1) SDRAM memory
controller that has to follow the same constraints as everyone else.
If you claim that the limit can be exceeded, please show me where
you're getting your impression from, because I'd certainly like to
see where someone is getting a fully loaded DDR(1) memory system to
run @ 400 Mb/s. **

** By fully loaded, I mean 4 ranks of 16 x4 devices, for a total of
64 devices (not counting ECC) per channel. With 1 Gbit devices, you'll
get 8 GB of memory per channel, and 16 GB of memory per Opteron
processor. In a 4P config, that would push your total memory
capacity to 64 GB instead of the current limit of 32 GB @ 400 Mb/s.
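
Spelled out (same assumptions as the footnote: 1 Gbit x4 parts, ECC devices
not counted, and the Opteron's two channels per processor):

    # Arithmetic behind the footnote above (1 Gbit x4 parts, ECC not counted)
    ranks = 4
    devices_per_rank = 16          # 16 x4 devices per 64-bit rank
    channels_per_cpu = 2           # dual-channel Opteron memory controller
    cpus = 4                       # 4P box

    devices_per_channel = ranks * devices_per_rank       # 64
    gb_per_channel = devices_per_channel * 1 / 8          # 8 GB with 1 Gbit parts
    gb_per_cpu = gb_per_channel * channels_per_cpu        # 16 GB
    print(devices_per_channel, gb_per_channel, gb_per_cpu, gb_per_cpu * cpus)  # ... 64 GB
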
So a recommended .1V difference for DDR400 makes it overclocked and a
different "set of specs"?<boggle> As I've already tried to point out the
"baseline spec's" for DDR are >3 years old - if you're going to deny the
ability to push as the silicon opportunity presents itself, I have to ask why? We
"allow" that CPUs, GPUs, etc. increase voltage and speed over a silicon
design lifecycle... why not memory?

The difference is that CPU's and GPU's are single vendor to single customer
parts. That is, Intel can change the specs of these devices to whatever
it wants, whenever it wants, as long as its customers don't mind
following the new spec. Same with AMD, NVIDIA, ATI, etc. DRAM doesn't
work that way. The dynamics of the commodity market means that the parts
are supposed to be completely interchangeable. So the "interchangeable"
aspect of things greatly limits the standards definition process.

For example, Samsung can probably crank out much faster DDR parts because
it has excellent process tech, but some of the less-well-funded fabs
can barely meet the spec, and they would be hard pressed to produce these
push-spec parts, so they would be resistant to changes in the JEDEC
standards definition.

The limitation of the JEDEC standard means that the faster guys can't
really run ahead of the slow guys, although they're finding some ways
around that with the push spec parts designed for the "enthusiast
market". So, no, you can't just take advantage of opportunities
made available with faster process technology to make your own faster
DRAM parts. You have to wait until sufficient number of DRAM manufacturers
can agree with you on the new addendum to the spec, and a sufficient
number of design houses (Intel, IBM, Sun, AMD, etc.) agree to the
same set of proposed addenda to the spec, before the standard can be
created, and you can sell your part as (JEDEC) DDRx xyz MHz, and customers
can be reasonably certain that your DDRx xyz MHz parts can operate
interchangeably with parts from Infineon, Samsung, Micron, Elpida,
etc.
Oh you mean like FBDIMMs... with AMBs which "will brown a burger better
than a George Foreman Grill can" [quote from this NG]?:)

FBD's have been in development for more than 2 years, and they're still in
development/testing/tweaking.

They'll enable servers with incredible amounts of memory, and the power
headache that comes with it. The AMB is just part of the problem. With
16 devices per FBD, you can get the ratio of AMB device power to DRAM
device power down to 15~20%.

As we've discussed, the practical limit for DDR(1) @ 400 Mb/s is 32 devices
per channel, and for DDR2 devices it's 64 devices @ 400 Mb/s per
channel. Each channel will cost you about 100 control/data pins.

FBD's can get you 256 devices per channel with far fewer pins. You can
basically hang 10x more DRAM bits per pin. Now imagine a memory system with
10x more devices, and the amount of power that memory system can consume.
AMB is a (relatively) small problem.
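
For a rough sense of the per-pin density gap, using only the round numbers
above (the implied FBD pin count is just what falls out of the 10x claim, not
a spec figure):

    # Devices per pin, from the figures cited above
    ddr1, ddr2, fbd = 32, 64, 256      # practical devices per channel
    parallel_pins = 100                # "about 100 control/data pins" per DDR/DDR2 channel

    ddr1_per_pin = ddr1 / parallel_pins            # 0.32 devices/pin
    ddr2_per_pin = ddr2 / parallel_pins            # 0.64 devices/pin
    implied_fbd_pins = fbd / (10 * ddr2_per_pin)   # ~40 pins, implied by "10x more bits per pin"
    print(ddr1_per_pin, ddr2_per_pin, implied_fbd_pins)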
 
Tony Hill

Thanks. I'm shocked servers use as much as 20%. That's certainly
enough to support a separate design, though servers won't get the
desktop memory pricing advantage. OTOH, that might be in the
server manufacturer's interest.

Server buyers, at the least, already miss out on the desktop pricing
advantage. First off, they're already on a slightly different
design, using ECC/registered memory vs. non-ECC/unregistered memory
(ok, some workstation-class systems use ECC/reg. stuff as well).
Second, there is simply the price that the server makers tend to charge for
memory. Now, as has been repeated many times before, cost and price
are only marginally related. However, given that HP charges over
$300/GB of memory (vs. $183 that Crucial charges for the same stuff)
in the DL585 server that started this whole discussion, that gives
them quite a bit of room to play with on the pricing/cost side.

For comparison's sake, on one of their consumer desktop PCs they charge
only $160 for an upgrade from 1GB to 2GB. This is much closer to the
$151 price difference for a similar upgrade from Crucial (ie price
difference between 2x512MB vs. 2x1GB).
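
Putting rough percentages on those gaps (taking "over $300/GB" as $300 even):

    # Markup arithmetic from the prices quoted above (all approximate)
    hp_server, crucial_server = 300.0, 183.0       # $/GB, DL585 vs. Crucial
    hp_desktop, crucial_desktop = 160.0, 151.0     # 1GB -> 2GB upgrade price

    print((hp_server - crucial_server) / crucial_server)     # ~0.64, i.e. ~64% over Crucial
    print((hp_desktop - crucial_desktop) / crucial_desktop)  # ~0.06, i.e. ~6% over Crucial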
 
