Xeon Woodcrest Preys On Opteron


YKhan

David said:
Not a chance, look at the gap in SPECint scores. There is no way that
AMD can catch up with a simple die shrink, and you're deceiving
yourself if you think that is so.

Ditto for TPC-C or SPECjbb2005.

As I said, AMD will get it close enough so that it doesn't matter; it
may still be behind, but it won't be anything that will be important to
people. In the server world, there is only a specific segment that
worries about absolute performance, and that would be the HPC/animation
crowd; everybody else is interested in infrastructure and balance of
performance. Benchmark superiority didn't win AMD any sales in its
first year. However, it got AMD's name in the news which eventually won
it sales in the next bunch of years, when it became clear that Intel
couldn't come up with an answer. If AMD doesn't have a good answer
within the first year, then it will have to worry. But AMD wasn't
caught off-guard with the exact wrong architecture at the exact wrong
time, so it's not going to suffer the three-year catch-up lag that Intel did.
See, you're missing the point. Right now, AMD is not performance
competitive with Intel in servers. They won't be able to reduce power
consumption on 65nm; they will be too busy ramping up clock speed to try
to get back to performance parity.

Either ramping up speed or increasing the cache size. Woodcrest has a
2:1 cache size advantage on Opteron. More than likely AMD will be using
the 65nm transition to ramp up cache size more than speed.
Yes it will. AMD's first design will be a compaction, and after that
they will do K8L.

Actually, before K8L they have Rev G coming out. Rev G will be the
first one at 65nm, and there is some talk that there are some
incremental architectural improvements in the works for that, even
before K8L (Rev H) comes. One of the chip sites believes that there is
an additional integer execution unit inside Rev G.
Hello? Nobody in the server world uses out of spec modules. DDR2-800
is the top of the line for servers. Do you really think AMD will be
able to out perform Intel with a simple upgrade to HT and the memory
controller? If so you are ignoring reality. AMD's modifications might
get them 10%...

The point is that these are coming down the line -- soon. The memory
manufacturers are holding competitions among themselves to see how
fast they can get their DDR2 modules going, and who will get there
first, so they're chomping at the bit to introduce these things. The
DRAM makers want to sell some high-end modules to make some gross
margins (any gross margins!) -- something they don't seem to do very
often. We've seen a fairly lethargic pace of spec improvement on DDR2
so far; now that AMD is onboard, everybody is onboard, so it's time
to open up the ride.

Who cares about Conroe? AMD makes their money in the server market,
and that is where Intel is going to hit the hardest.

I don't think so; my feeling is that it's the desktop market where
Intel is going to hit the hardest. Intel won't have enough time to get
enough momentum going before AMD is firstly close enough, and then
secondly at par again. Not with the server market as slow to react as
it is. The desktop market is extremely dynamic, so that's where we're
going to see the biggest ups and downs for both Intel and AMD.
Who cares about out of spec overclocked modules?

Will Intel be able to upclock their FSB enough to take advantage of
these, even when they are official spec modules? And even if they can
increase the clocks on the FSB to take advantage of the bandwidth,
would they be able to take advantage of the latency? The faster these
modules get the lower their absolute latencies are. Will Core 2's magic
latency hiding technology be able to keep hiding the real latency, as
the real latencies keep going down? Or is there a tipping point where
it can no longer hide the real latencies?
Prove it. I've seen presentations from AMD that claim 15% for consumer
markets WW, and it goes down from there.

Well the stories are a little convoluted here depending on which site
you read.

In this story, they don't mention any specific sub-market here (e.g.
consumer and sub-10-employee business market), so it's likely they're
talking about the overall desktop and laptop marketshares:

"The company's growth in the desktop and mobile markets was just as
strong in the second half of 2005. AMD's desktop processor share went
from 20.4 percent in the third quarter to 24.3 in the fourth, and its
mobile share went from 12.2 percent to 15.1 percent."
http://news.com.com/AMD+once+again+hits+the+roaring+20s/2100-1006_3-6030509.html

In this story they say that the 15.1% was the retail marketshare in
2005, but in 2006, AMD's notebook retail share had gone up to 44.7%:

"Intel has traditionally had a massive advantage in market share, with
83.13 percent compared to AMD's 15.14 percent of the U.S. retail market
for notebook PCs in April 2005, not counting sales by Dell or Wal-Mart
Stores, according to a survey of national retailers by Current
Analysis.

By April 2006, that lead had nearly vanished, with Intel at 54.71
percent compared to AMD's 44.66 percent."
http://www.infoworld.com/article/06/05/17/78397_HNamdchallenges_1.html

Anyways, I'm willing to concede that between the two stories one
mentions a sub-market and one doesn't, so it's possible that in the
absence of detail in one of the stories, the other story with the
greater detail is right. So 15.1% may have been AMD's retail notebook
marketshare in 2005. But that same story now says that in 2006, AMD's
retail notebook marketshare is about 45%, which is higher than the
estimate I gave, which said 30% retail marketshare! So AMD quite
definitely has a presence in the retail notebook market.

And Intel still has 4 million Sonomas to get rid of, from two years
ago.

Try 99%, according to AMD.

So where exactly are we disagreeing here?
No it's not. But you are welcome to believe whatever you want.

Oh, please do keep feeling sorry for yourself, "oh woe is me, nobody
believes me!"

I've done a fair amount of Googling for you up above to show you
stories that I've read in the past which show why I'm led to believe
certain things. It's quite obvious that you have been blinded to the
huge upheaval that's happened in the consumer notebook market. The
consumer notebook market has finally taken off, but no thanks to the
efforts of Intel. You're not going to see how popular AMD notebooks
have become if all you see are corporate notebooks. I know very few
people who own personal Centrino notebooks at home, but I know a lot of
people who have got a Centrino notebook from work (including myself).
At the consumer level, if it's an Intel notebook, I see more people
owning Celeron or even Pentium 4 notebooks than Centrino-class.

It's also been said that over 50% of HP notebooks are now AMD based,
although HP doesn't break it out itself. Again, that's likely the retail
consumer marketshare, but it's impressive nonetheless.

Yousuf Khan
 

Ryan Godridge

You're wrong. Woodcrest is quite available. Conroe won't be released
for a little while.


Go to an OEM website.

DK

How exactly will going to an OEM website help with getting, at least
approximately objective, data about shipping systems? The clue is in
the 'shipping systems' part of the question. The jam-tomorrow
scenario cuts no ice with me.

I'll repeat it - I won't believe it until I see it. When I do I'll
believe it.

Ryan
 

David Kanter

How exactly will going to an OEM website help with getting, at least
approximately objective, data about shipping systems? The clue is in
the 'shipping systems' part of the question. The jam-tomorrow
scenario cuts no ice with me.

Let me give you a hint, most OEMs will give you an availability or
shipping date.

This is something a judicious use of the internet could solve.
However, apparently that particular tool is beyond your ken.

DK
 

Ryan Godridge

Let me give you a hint, most OEMs will give you an availability or
shipping date.

This is something a judicious use of the internet could solve.
However, apparently that particular tool is beyond your ken.

DK

You're absolutely right this web thingy is beyond my grasp. Perhaps
you can help me. I think I must have been unclear in my question.
I'll try and restate it, then with your knowledge, I'm sure you'll be
able to help me.

Where can I find benchmarks for 3D rendering in Lightwave (8.5 or 9
beta I don't mind) which were done on generally available machines -
as in I could buy one and have it delivered tomorrow with Woodcrest
processors. I'm not interested in ES chips or vapour chilled etc. If
not for Lightwave then 3DS Max or Maya, but I would treat them as
indicative only not proof as yet.

Thank you for your help - I look forward to your reply.

Ryan
 

David Kanter

You're absolutely right this web thingy is beyond my grasp. Perhaps
you can help me. I think I must have been unclear in my question.
I'll try and restate it, then with your knowledge, I'm sure you'll be
able to help me.

You forgot the sarcasm tags somewhere in there...
Where can I find benchmarks for 3D rendering in Lightwave (8.5 or 9
beta I don't mind) which were done on generally available machines -
as in I could buy one and have it delivered tomorrow with Woodcrest
processors.

So here's the first question. Why does it have to be a GA machine?
You're clearly a little flexible about the version of the software,
which will introduce a fair amount of uncertainty.

My general opinion, which most folks will agree with, is that an HP
system using Chipset A and processor P will perform within 2-5% of an
IBM platform using the same chipset and processor. Naturally, this
holds unless you hit a boundary condition; for instance, some HP
Opteron servers hold more memory than competitors, but if you need 64GB
of memory or less, it probably doesn't matter.

So, do you really need to know how a particular system from HP or Sun
behaves? Or do you just need to know how the chipset+CPU combo
performs? If you think so, I'd love to hear why. Are your workloads
very storage intensive? Does the performance improve substantially by
going from 16-->32GB of memory?

I don't think you appreciate how few differences there are between
different server systems using the same chipset and CPU. The main
differentiators are storage options, DIMMs, warranty, support,
management software, etc. Only two of these will impact performance,
and that largely depends on your data set and what % of the working set
fits in memory.
I'm not interested in ES chips or vapour chilled etc.

Nobody in their right mind benchmarks vapor cooled server chips, unless
the systems which these chips ship in tend to use vapor cooling.
If
not for Lightwave then 3DS Max or Maya, but I would treat them as
indicative only not proof as yet.

Well, 3DS max and maya are just as real workloads as lightwave. They
just happen not to be your workload.

So I guess my questions are:

1. Why do you care about benchmarking the precise machine that is
available? Do you have any evidence that there will be a large
performance variation between an HP, IBM and Supermicro implementation
of Woodcrest?

2. Can you find shipping dates on OEM websites? What about resellers?

DK
 

Ryan Godridge

You forgot the sarcasm tags somewhere in there...

I suspect you spotted the sarcasm tags quite adequately.
So here's the first question. Why does it have to be a GA machine?
You're clearly a little flexible about the version of the software,
which will introduce a fair amount of uncertainty.

I think you miss the point - there are NO benchmarks on retail boxes
now. There are only preview benchmarks, due to the fact that there
are no boxes available today.
My general opinion, which most folks will agree with, is that an HP
system using Chipset A and processor P will perform within 2-5% of an
IBM platform using the same chipset and processor. Naturally, this
holds unless you hit a boundary condition; for instance, some HP
Opteron servers hold more memory than competitors, but if you need 64GB
of memory or less, it probably doesn't matter.
Agreed

So, do you really need to know how a particular system from HP or Sun
behaves? Or do you just need to know how the chipset+CPU combo
performs? If you think so, I'd love to hear why. Are your workloads
very storage intensive? Does the performance improve substantially by
going from 16-->32GB of memory?

Depends on the scene file; it can make the difference between single
passes and multiple passes, or a requirement for compositing, and then
even a 40% performance difference becomes irrelevant. Apart from these
cases, memory beyond a certain point becomes less important.
I don't think you appreciate how few differences there are between
different server systems using the same chipset and CPU. The main
differentiators are storage options, DIMMs, warranty, support,
management software, etc. Only two of these will impact performance,
and that largely depends on your data set and what % of the working set
fits in memory.
I don't think you appreciate what I appreciate.
Nobody in their right mind benchmarks vapor cooled server chips, unless
the systems which these chips ship in tend to use vapor cooling.


Well, 3DS max and maya are just as real workloads as lightwave. They
just happen not to be your workload.

So I should buy servers or workstations based on information from
preview benchmarks of applications I don't use on machines I can't
buy. I think the term due diligence is appropriate here.
So I guess my questions are:

1. Why do you care about benchmarking the precise machine that is
available? Do you have any evidence that there will be a large
performance variation between an HP, IBM and Supermicro implementation
of Woodcrest?
There are no boxes available NOW; when there are, I suspect any
differences will be minimal. Further, there are no benchmarks that
give me the information I need on any of these systems either, so we go
back to preview benchmarks of questionable use.
2. Can you find shipping dates on OEM websites? What about resellers?

DK

I think that your point was that Woodcrest was available right here,
right now - not in fact the case.


My interest is in Woodcrest's performance in Lightwave given that it
is my heaviest continuous use of processing power where I have the
power to choose my platform. My flexibility in version was to make
the task of finding benchmarks easier, but still not possible it
seems. In reality I'm only interested in Lightwave 9 on 64 bit XP,
but this might be a bit harder to find.

Let's be crystal clear about this. Woodcrest looks as if it will beat
Opteron in certain applications, but not in others - e.g. RSA
encryption (signs) at high bit lengths. Now this is not an area that
I'm interested in, but it shows a weakness - or less strength - in
Woodcrest.

There seems to be some sort of hysteria whipped up by Intel marketing
that Woodcrest will wipe the floor with the current generation of
Opterons in every benchmark known to man. This is patently untrue, so
the question becomes: in reality, where are the areas where Woodcrest
is good, and where are the ones where it is not so good?

To sum our positions up

Yours:-

Right now AMD is not performance competitive with Intel in servers.
Woodcrest is quite available.

Mine:-

Woodcrest may be ahead of Opteron for the tasks you wish to achieve on
your server, or it may not. It depends on the task.
There is insufficient information available to determine this beyond a
few preview benchmarks on unavailable machines.
Woodcrest will become available in the near future - just not right
now.
Price/performance analyses are nowhere near complete.


To suggest the case is closed is just plain bad science, but hey
that's what Intel wants everybody to believe - just that some are less
gullible than others.
 

David Kanter

[snip]
I think you miss the point - there are NO benchmarks on retail boxes
now. There are only preview benchmarks, due to the fact that there
are no boxes available today.

That would be an excellent point, if it were in fact true. However...

http://h71016.www7.hp.com/dstore/Mi...=2424&BaseId=19127&oi=E9CED&BEID=19701&SBLID=

Ship date for a 2S system is 7/2/06. That sounds suspiciously like
today...

There are no performance reviews for these systems because Intel
already seeded the market.

I personally think your claim that Woodcrest is unavailable is wrong.
I have not seen any data to support it. I tried going through the Dell
and IBM order system, but they don't give you ship dates unless you buy
the server. I'm curious, but not $10K worth of curious : )

OK, I'm glad we are seeing eye to eye on this.
Depends on the scene file, it can make the difference between single
passes and multiple passes or a requirement for compositing, then even
a 40% performance difference becomes irrelevant. Apart from these
cases memory beyond a certain point becomes less important.

Right, that is my expectation. Basically if your memory needs are 10%
peak average memory capacity, you should be fine and IBM and HP
performance will be very similar.
I don't think you appreciate what I appreciate.

That could be, but why don't you try and communicate it in a convincing
manner?
So I should buy servers or workstations based on information from
preview benchmarks of applications I don't use on machines I can't
buy. I think the term due diligence is appropriate here.

Well, it really depends. To what extent does the performance of Maya
correlate with 3DS Max and with Lightwave? Is the coefficient 1? Is
it 0.90, is it 0.80?


There are no boxes available NOW, when there are I suspect any
differences will be minimal.

See, I wish before you would make absolute claims you would actually
investigate them fully. As I demonstrated above, it is quite possible
to get a system that will ship to you TODAY.
Further, there are no benchmarks that
give me the information I need on any of these systems either, so we go
back to preview benchmarks of questionable use.

You already conceded that woodcrest+blackford should perform almost
identically no matter what server is used, right? Therefore
benchmarking ANY woodcrest+blackford server will provide all the
information you need. So if someone has benchmarked WC+BF with
lightwave, then you are fine (no matter which server it is).

Now, earlier I discussed the fact that Lightwave will probably be
highly correlated with Maya. If you can figure out to what degree they
are correlated, you could use a two-step estimation process to find out
what the performance will be like:

You measure the performance of Maya on WC+BF. Then you measure or
estimate the R^2 between Maya and Lightwave performance. Now you use
that to estimate the performance of Lightwave on WC+BF. This will
introduce a bit of inaccuracy, but it should be on the order of 5-10%.
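As a rough illustration of that two-step estimate, here is a minimal sketch in Python. The render times and the system list are made up purely for illustration, and it assumes a simple linear fit between the two applications is good enough, which is itself part of what would need checking:

```python
# Two-step estimation sketch: fit Lightwave time against Maya time on systems
# where both were measured, then project from a Maya measurement on WC+BF.
# All numbers below are hypothetical, not real benchmark results.
import numpy as np

maya_times      = np.array([36.0, 42.0, 55.0, 61.0])   # seconds, hypothetical
lightwave_times = np.array([48.0, 57.0, 71.0, 80.0])   # seconds, hypothetical

# Step 1: fit Lightwave ~ a * Maya + b and report R^2 so we know how far to trust it.
a, b = np.polyfit(maya_times, lightwave_times, 1)
pred = a * maya_times + b
ss_res = np.sum((lightwave_times - pred) ** 2)
ss_tot = np.sum((lightwave_times - lightwave_times.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Step 2: measured Maya time on the new platform -> estimated Lightwave time.
maya_on_wc_bf = 38.0                                    # hypothetical measurement
lightwave_estimate = a * maya_on_wc_bf + b
print(f"R^2 = {r_squared:.3f}, estimated Lightwave time = {lightwave_estimate:.1f}s")
```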

Now naturally, I need to find performance numbers for WC+BF with Maya
or 3DS max. Since you already agreed that any combination of WC+BF is
suitable, there is no reason why we cannot use
http://www.techreport.com/etc/2006q2/woodcrest/index.x?pg=6
as a data source. Yes it is a system that is not sold commercially
(PDK), but it is in fact a viable data source for WC+BF. Moreover,
using Scott's results would, if anything, bias the performance of
Woodcrest down. IOW, b/c it's preproduction hardware, commercialized
hardware will almost certainly have higher performance.


I think that your point was that Woodcrest was available right here,
right now - not in fact the case.

Actually yes, it is available, see above. Most of your argument
appears to be predicated on a single fact which is not actually true.
My interest is in Woodcrest's performance in Lightwave given that it
is my heaviest continuous use of processing power where I have the
power to choose my platform. My flexibility in version was to make
the task of finding benchmarks easier, but still not possible it
seems. In reality I'm only interested in Lightwave 9 on 64 bit XP,
but this might be a bit harder to find.

Don't you need Windows Server 2003 for a 2S/4P system?
Let's be crystal clear about this. Woodcrest looks as if it will beat
Opteron in certain applications, but not in others - e.g. RSA
encryption(Signs) at high bit lengths.

Do you have any references for this?
Now this is not an area that
I'm interested in, but shows weakness - or less strength in the
Woodcrest.

Sure, as always, YMMV. It's not hard to construct a workload where
Woodcrest would lose to K8, nor is it hard to do the reverse. The real
question is, what happens on average, and what happens in your app?
There seems to be some sort of hysteria whipped up by Intel marketing
that Woodcrest will wipe the floor with the current generation of
Opterons in every benchmark known to man.

Woodcrest will win the vast majority of commercial server benchmarks.
It will win some HPC stuff and lose some, although I haven't really
evaluated any benchmarks to give much insight. However, HPC is much
better suited to AMD's memory pipeline.
This is patently untrue, so the question becomes: in reality, where are
the areas where Woodcrest is good, and where are the ones where it is
not so good?

I think what you're trying to get at is that it depends on your
workload, and I totally agree. However, I think that the majority of
workloads will be better on WC than the current K8.
To sum our positions up

Yours:-

Right now AMD is not performance competitive with Intel in servers.
Woodcrest is quite available.
Mine:-

Woodcrest may be ahead of Opteron for the tasks you wish to achieve on
your server, or it may not. It depends on the task.

This sort of 'YMMV' is always true and is implicit in any benchmark
pronouncements. As I said before, it's not hard to produce a
workload/benchmark that shows a particular outcome. All it takes is
knowledge of caches and other characteristics.

The point is that on average Woodcrest will beat K8.
There is insufficient information available to determine this beyond a
few preview benchmarks on unavailable machines.
Woodcrest will become available in the near future - just not right
now.
Price/performance analyses are nowhere near complete.

Again, your assertions are based on factual inaccuracies. WC is
available right now. Moreover, pricing information is available, hence
perf/$ is available.
To suggest the case is closed is just plain bad science, but hey
that's what Intel wants everybody to believe - just that some are less
gullible than others.

Of course it's not. However, I think it has been rather emphatically
shown that for the majority of workloads woodcrest is a superior
product.

DK
 

Ryan Godridge

[snip]
I think you miss the point - there are NO benchmarks on retail boxes
now. There are only preview benchmarks, due to the fact that there
are no boxes available today.

That would be an excellent point, if it were in fact true. However...

http://h71016.www7.hp.com/dstore/Mi...=2424&BaseId=19127&oi=E9CED&BEID=19701&SBLID=

Ship date for a 2S system is 7/2/06. That sounds suspiciously like
today...

Estimated ship date - but I'll concede that NOW HP may have
availability - unless of course estimated ship date and actual ship
date are different - no, never happens.
There are no performance reviews for these systems because Intel
already seeded the market.

I personally think your claim that Woodcrest is unavailable is wrong.
I have not seen any data to support it. I tried going through the Dell
and IBM order system, but they don't give you ship dates unless you buy
the server. I'm curious, but not $10K worth of curious : )


OK, I'm glad we are seeing eye to eye on this.


Right, that is my expectation. Basically if your memory needs are 10%
peak average memory capacity, you should be fine and IBM and HP
performance will be very similar.


That could be, but why don't you try and communicate it in a convincing
manner?
Leave it out - I'm not the one making assumptions as to experience or
knowledge - caveat emptor.
Well, it really depends. To what extent does the performance of Maya
correlate with 3DS Max and with Lightwave? Is the coefficient 1? Is
it 0.90, is it 0.80?
Who knows what the correlation is between these and Lightwave on a new
architecture? Given that Max and Maya don't correlate that well:

Source GamePC Labs - no endorsement from me.

Discreet 3D Studio Max 7.0 - Radiosity Render (Lower is better)

Intel Xeon 5140 (2.33 GHz) 106
AMD Opteron 285 (2.6 GHz) 116

Win to Woodcrest

Alias Maya 6.5 - High Definition Software Render (Lower is better)

Intel Xeon 5140 (2.33 GHz) 38
AMD Opteron 285 (2.6 GHz) 36

Win to Opteron

So I suggest that extrapolation from these two is only mildly
indicative and in opposite directions to boot.

However, there seems to be a trend of better performance at comparable
clock rates with Woodcrest, which we haven't seen for a while from
Intel.
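For what it's worth, here is a quick bit of arithmetic on the two GamePC results quoted above (lower scores are better). It is only an illustrative back-of-the-envelope calculation of that per-clock trend, nothing more:

```python
# Per-clock comparison from the two GamePC results quoted above.
# time * GHz is proportional to cycles spent, so the ratio below is
# Woodcrest's per-clock advantage over the Opteron (>1 means Woodcrest
# does more work per cycle). Purely illustrative arithmetic.
results = {
    "3DS Max 7.0 radiosity": {"Xeon 5140": (106, 2.33), "Opteron 285": (116, 2.60)},
    "Maya 6.5 HD SW render": {"Xeon 5140": (38,  2.33), "Opteron 285": (36,  2.60)},
}

for bench, chips in results.items():
    wc_time, wc_ghz = chips["Xeon 5140"]
    k8_time, k8_ghz = chips["Opteron 285"]
    per_clock_advantage = (k8_time * k8_ghz) / (wc_time * wc_ghz)
    print(f"{bench}: Woodcrest per-clock advantage = {per_clock_advantage:.2f}x")
```

On these two data points Woodcrest comes out ahead per clock in both tests, even in the Maya run where it loses on absolute time.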
See, I wish before you would make absolute claims you would actually
investigate them fully. As I demonstrated above, it is quite possible
to get a system that will ship to you TODAY.
Ok so absolute claims are out then:-
And I quote -

"Right now AMD is not performance competitive with Intel in servers".

Now correct me if I'm wrong but your position as previously stated is
as above.

If you want to now say that it will depend on workload and we don't
really have all of the information then by all means go ahead, but it
means dropping the slogans and being a little more reflective in
claims. Or to put it another way I wish you'd investigate absolute
claims fully before making them - haven't I heard that somewhere
before?
You already conceded that woodcrest+blackford should perform almost
identically no matter what server is used, right? Therefore
benchmarking ANY woodcrest+blackford server will provide all the
information you need. So if someone has benchmarked WC+BF with
lightwave, then you are fine (no matter which server it is).
No Lightwave benchmarks.
Now, earlier I discussed the fact that Lightwave will probably be
highly correlated with Maya. If you can figure out to what degree they
are correlated, you could use a two-step estimation process to find out
what the performance will be like:

Your discussion was interesting but unfortunately not yet provable
without actual measurement.
You measure the performance of Maya on WC+BF. Then you measure or
estimate the R^2 between Maya and Lightwave performance. Now you use
that to estimate the performance of Lightwave on WC+BF. This will
introduce a bit of inaccuracy, but it should be on the order of 5-10%.
I'll come back to you with the figures when benchmarks have been
performed, but I think your figure of 5-10% is wildly optimistic.
That's not the way uncertainty works.
Now naturally, I need to find performance numbers for WC+BF with Maya
or 3DS max. Since you already agreed that any combination of WC+BF is
suitable, there is no reason why we cannot use
http://www.techreport.com/etc/2006q2/woodcrest/index.x?pg=6
as a data source. Yes it is a system that is not sold commercially
(PDK), but it is in fact a viable data source for WC+BF. Moreover,
using Scott's results would, if anything, bias the performance of
Woodcrest down. IOW, b/c it's preproduction hardware, commercialized
hardware will almost certainly have higher performance.

Ok now you're really pushing it - a pair of tests with two data points
on 3DS Max alone with what looks like the 32 bit version at that.

But I've seen one person jump out of a plane with a parachute like mine,
so why do I need to check this one! OK, different plane, different
parachute, but surely it must be the same.
Actually yes, it is available, see above. Most of your argument
appears to be predicated on a single fact which is not actually true.
No my argument is predicated on a few facts, one you've pointed out to
be incorrect - availability. Now for the others - there are no
Lightwave benchmarks available at the moment. The average does not
inform the specific. There is no average until the specifics have
been calculated, only uninformed speculation.
Don't you need Windows Server 2003 for a 2S/4P system?
No I think you'll find that XP Pro supports 2 sockets, with as many
cores as you like.

The difference between XP Pro and Pro 64 is mainly the amount of ram
supported and then virtual address space for each process etc.

Win2K Pro takes cores and sockets into account and thus requires
server for dual dual.

http://support.microsoft.com/kb/888732/en-us

Though rendering performance on Linux would be of interest.

I'll refrain from commenting about use of the internet and absolute
claims.
Do you have any references for this?
Yep http://www.anandtech.com/IT/showdoc.aspx?i=2772&p=5

Once again no endorsement and on Gentoo this time.

Sure, as always, YMMV. It's not hard to construct a workload where
Woodcrest would lose to K8, nor is it hard to do the reverse. THe real
question is, what happens on average, and what happens in your app?

No the real question has absolutely nothing to do with the average in
any individual case.
Woodcrest will win the vast majority of commercial server benchmarks.
It will win some HPC stuff and lose some, although I haven't really
evaluated any benchmarks to give much insight. However, HPC is much
better for AMD's memory pipeline.
So it begins - if the benchmarks haven't been performed and you
haven't evaluated them, then, as you say, your insight in this area is
limited. I'd also be a little more circumspect about inferences covering
a complete area of applications from one portion of the memory
architecture; the thing about HPC is that it is pretty broad.
I think what you're trying to get at is that it depends on your
workload, and I totally agree. However, I think that the majority of
workloads will be better on WC than the current K8.
It's an opinion, not backed up by a whole lot of evidence currently,
but that's your right.
This sort of 'YMMV' is always true and is implicit in any benchmark
pronouncements. As I said before, it's not hard to produce a
workload/benchmark that shows a particular outcome. All it takes is
knowledge of caches and other characteristics.

The point is that on average Woodcrest will beat K8.

The 'implicit in any benchmark' argument is shorthand for marketing
speak. We're talking analysis here, not marketing, so let's be
accurate, specific, and deal with known quantities.

Average would only be important to me if I shared my machine with
other users - I don't. The average is not useful, nor is it yet
calculated.

Ooh now you've confused me with your technical talk of caches and
other characteristics - I thought that we'd agreed that I can't use
this web thingy let alone that sort of stuff.

I'm not suggesting contrived pathological cases - merely a fairly well
known 3D package.
Again, your assertions are based on factual inaccuracies. WC is
available right now. Moreover, pricing information is available, hence
perf/$ is available.

Obviously unsound - performance is not verified yet. Plus the slew of
price reductions around the corner. I'm not ready to do the calcs yet.

Unless you think that price/perf is another of those 'average' figures
that can be pulled out of thin air.

I will do the calcs when I have clear performance data and can work
out price/performance for my needs.
Of course it's not. However, I think it has been rather emphatically
shown that for the majority of workloads woodcrest is a superior
product.

DK

Okay - if you'd said that it looks like Woodcrest will take the lead
in a bunch of workloads and it will be available off the shelf real
soon now so it's worth having a look to see if it suits your workload,
I'd emphatically agree with you.

A blanket "Right now AMD is not performance competitive with Intel in
servers" is just plain wrong, and worse it is misinformation.

This statement is a real shame because it looks like Woodcrest could
be a very good chip, but this cheerleading really does no favours to
the perceived veracity of any reasonable claims made.

I understand it's all about marketing, and Intel are smarting from a
fairly complete drubbing for the last 2 to 3 years, but that doesn't
mean we have to join in with this sort of hype in this newsgroup.

Marketing relies on the consumer not performing any critical thinking,
let's not fall into that trap.
 

George Macdonald


This is really annoying you know.
That would be an excellent point, if it were in fact true. However...

http://h71016.www7.hp.com/dstore/Mi...=2424&BaseId=19127&oi=E9CED&BEID=19701&SBLID=

That is not a Woodcrest system - DC Xeon 5050... though there are a couple
of such for more $$.
Ship date for a 2S system is 7/2/06. That sounds suspiciously like
today...

Hmm, 7/23 is close enough I suppose but for Ryan, better check
www.hp.co.uk.:-(
See, I wish before you would make absolute claims you would actually
investigate them fully. As I demonstrated above, it is quite possible
to get a system that will ship to you TODAY.

Hmm, not quite... in this case.
 

Ryan Godridge


This is really annoying you know.
That would be an excellent point, if it were in fact true. However...

http://h71016.www7.hp.com/dstore/Mi...=2424&BaseId=19127&oi=E9CED&BEID=19701&SBLID=

That is not a Woodcrest system - DC Xeon 5050... though there are a couple
of such for more $$.
Ship date for a 2S system is 7/2/06. That sounds suspiciously like
today...

Hmm, 7/23 is close enough I suppose but for Ryan, better check
www.hp.co.uk.:-(
See, I wish before you would make absolute claims you would actually
investigate them fully. As I demonstrated above, it is quite possible
to get a system that will ship to you TODAY.

Hmm, not quite... in this case.

I figured I'd let that one go - what's a week or three amongst
friends.
 

David Kanter

[snip]

Who knows what the correlation is between these and Lightwave on a new
architecture? Given that Max and Maya don't correlate that well:

Source GamePC Labs - no endorsement from me.

Discreet 3D Studio Max 7.0 - Radiosity Render (Lower is better)

Intel Xeon 5140 (2.33 GHz) 106
AMD Opteron 285 (2.6 GHz) 116

Win to Woodcrest

Alias Maya 6.5 - High Definition Software Render (Lower is better)

Intel Xeon 5140 (2.33 GHz) 38
AMD Opteron 285 (2.6 GHz) 36

Win to Opteron
So I suggest that extrapolation from these two is only mildly
indicative and in opposite directions to boot.

Aha, now we've gotten to some serious analysis (sort of). However, it
looks to me like radiosity rendering is a different technique than HD
Software render (can you clarify this?). In other words, the
comparison may or may not be valid. What you really want to know is
how radiosity performance on one app compares to a different app using
the same techniques.

That being said, I am growing slightly more skeptical that the two are
not well correlated. Moreover, I think everyone here would appreciate
more evidence.
However there seems to be a trend of better performance at comparable
clock rates with the Woodcrest, which we haven't seen for a while from
Intel.

I'd also point out that if you care about performance, you should be
buying a 3GHz Woodcrest; GamePC, for some reason, is only using slower
binned ones. Unless there is a good reason (i.e. you want clock, price,
or form-factor normalized performance), you should always use the
highest speed grade (2.8GHz dual-core Opteron against 3GHz Woodcrest).
Ok so absolute claims are out then:-
And I quote -

"Right now AMD is not performance competitive with Intel in servers".

Now correct me if I'm wrong but your position as previously stated is
as above.

That's my position. However, anyone who knows about benchmarking would
understand that there is always a YMMV attached to that.

Let's think about when the K8 was king of the roost, and Paxville was
stuck at 2.8GHz dual core. Ouch, pretty tough situation, right? I
think anyone in their right mind would agree that Intel's performance
was not competitive; I mean it was pretty terrible. On top of that,
their thermal and power use was insane.

However, it wouldn't be hard to design a benchmark where Paxville ended
up ahead of the K8. The easiest thing to do would be to design one
with a working set of 1.8MB, so that it fit in Paxville's L2 cache, but
not the K8's. Every chip has a weakness, be it cache size or
associativity. It is really not that hard to exploit said weaknesses
to twist a benchmark to show whatever you want.
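To make that concrete, here is a crude sketch of the kind of working-set sweep that exposes a cache-size cliff. It is written in Python with numpy purely for illustration, so the absolute numbers depend entirely on the machine it runs on; the sizes and pass counts are arbitrary, and a serious microbenchmark would be written closer to the metal:

```python
# Sweep the working-set size and watch throughput fall once the data no
# longer fits in a given cache level. A 1.8MB working set is the kind of
# size that fits a 2MB L2 but not a 1MB one.
import time
import numpy as np

for size_kb in (256, 1024, 1800, 4096, 16384):
    n = size_kb * 1024 // 8                 # number of 8-byte elements
    data = np.arange(n, dtype=np.int64)
    idx = np.random.permutation(n)          # random gather defeats the prefetcher
    start = time.perf_counter()
    for _ in range(20):
        data[idx].sum()                     # touches the whole working set each pass
    elapsed = time.perf_counter() - start
    print(f"{size_kb:6d} KB working set: {elapsed:.3f} s for 20 passes")
```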

So, what is the moral of the story:
Benchmarking is always a YMMV sort of discussion. Most people who pay
attention to the computer industry realize this.
If you want to now say that it will depend on workload and we don't
really have all of the information then by all means go ahead, but it
means dropping the slogans and being a little more reflective in
claims. Or to put it another way I wish you'd investigate absolute
claims fully before making them - haven't I heard that somewhere
before?

Again, anyone who really pays attention to benchmarking would realize
there is a YMMV attached to any statement. I'll stand by what I said,
which is that the K8 is not competitive for server workloads.

I'd further note I never said that WC will ALWAYS beat the K8 for ANY
server benchmark.
Your discussion was interesting but unfortunately not yet provable
without actual measurement.

Yes, and the GamePC measurements are hardly relevant.
I'll come back to you with the figures, when benchmarks have been
performed but I think your figure 5-10% is wildly optimistic. That's
not the way uncertainty works.

Again, this all depends on how different LW and Maya are. The first
thing you'd want to do is evaluate the correlation between the two,
which will require far more than one sample.
Ok now you're really pushing it - a pair of tests with two data points
on 3DS Max alone with what looks like the 32 bit version at that.

Again, work with the data you have. Or, to be perfectly honest, go ask
your vendor for a demo. If you buy a significant number of servers, they
will do that.
But i've seen 1 person jump out of a plane with a parachute like mine
so why do I need to check this one! Ok different plane, different
parachute, but surely it must be the same.

Again, if you can quantify the differences, then it's not a problem.
Since you seem to be the closest thing to an expert on the subject,
perhaps you could illuminate (no pun intended) what sort of
performance variation you'd see due to 64b v. 32b?
No my argument is predicated on a few facts, one you've pointed out to
be incorrect - availability. Now for the others - there are no
Lightwave benchmarks available at the moment. The average does not
inform the specific. There is no average until the specifics have
been calculated, only uninformed speculation.

LOL. Thank you for stating the obvious. The average actually does
tell you a lot about the specific data point, if you happen to be aware
of the residuals.
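Here is a minimal sketch of what "average plus residuals" buys you, with speedup ratios that are invented for illustration rather than measured:

```python
# Made-up ratios of WC time / K8 time per benchmark (<1 means WC is faster).
# The mean gives the central tendency; the spread of the residuals tells you
# how far any one workload is likely to stray from it.
import statistics

speedups = [0.78, 0.85, 0.92, 0.81, 1.05, 0.88]   # hypothetical ratios
mean = statistics.mean(speedups)
stdev = statistics.pstdev(speedups)

# Rough two-sigma range for an unseen workload, assuming it behaves like the
# sampled ones -- which is exactly the assumption being argued about here.
low, high = mean - 2 * stdev, mean + 2 * stdev
print(f"mean ratio {mean:.2f}, roughly expect {low:.2f}-{high:.2f} for a new workload")
```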
No I think you'll find that XP Pro supports 2 sockets, with as many
cores as you like.

OK, then XP 64b it is.
The difference between XP Pro and Pro 64 is mainly the amount of ram
supported and then virtual address space for each process etc.

Win2K Pro takes cores and sockets into account and thus requires
server for dual dual.

http://support.microsoft.com/kb/888732/en-us

Though rendering performance on Linux would be of interest.

I'll refrain from commenting about use of the internet and absolute
claims.

Yep http://www.anandtech.com/IT/showdoc.aspx?i=2772&p=5

Once again no endorsement and on Gentoo this time.



No the real question has absolutely nothing to do with the average in
any individual case.

Um, actually the average does matter. Try and think about what drives
sales of computers...that would be customers. Your particular
benchmark is only of relevance to you and the class of users who
either:

1. Use the same benchmark
2. Use a class of benchmarks that is known to be correlated with yours

If a given product wins 99% of all benchmarks, but loses on your
workload, then it all comes down to how important your workload is. If
it constitutes $1B/year in spending, it's probably important, if it
constitutes $20K/year, then nobody really cares except you.

And like it or not, an average is made up of individual samples...
So it begins - if the benchmarks haven't been performed and you
haven't evaluated them, then as you say your insight in this area is
limited.

The benchmarks have been performed though...of course, when taking
benchmarks from a vendor, it's far more "Take this with a salt shaker"
than YMMV.
I'd also be a little more circumspect about inferences about
a complete area of applications and one portion of the memory
architecture, the thing about HPC is that it is pretty broad.

Eh, I talk with architects from AMD and Intel; they will mostly say the
same things. Look at AMD's memory pipeline, and you should see why it
is beneficial for HPC.
It's an opinion, not backed up by a whole lot of evidence currently,
but that's your right.

To be quite frank, there is quite a bit of evidence; there are plenty
of industry-standard benchmarks where a comparison is available.
Woodcrest wins most, if not all. There are websites that have done
reviews.
The 'implicit in any benchmark' argument is shorthand for marketing
speak.
We're talking analysis here, not marketing, so let's be
accurate, specific, and deal with known quantities.

Hey, I've got news for you: there will always be YMMV in any
discussion of benchmarking, until you have sampled the whole space of
applications. Benchmarking is important for far more than marketing,
as it allows architects to calibrate design decisions to simulated
results. Benchmarking will always deal with unknown quantities, and
that's the way it is... and that's why people use statistics.
Average would only be important to me if I shared my machine with
other users - I don't. The average is not useful or in yet
calculated.

Averages are always useful. If you know how your application varies
from 'the average', it's fine. If you don't then you need to guess as
best you can. Not everyone has the luxury of seeing their app
benchmarked publicly.
Ooh now you've confused me with your technical talk of caches and
other characteristics - I thought that we'd agreed that I can't use
this web thingy let alone that sort of stuff.

That's your problem, not mine.
I'm not suggesting contrived pathological cases - merely a fairly well
known 3D package.

Well, perhaps someone will care and use Lightwave. I don't, and I
won't.
Obviously unsound - performance is not verified yet.

Your performance isn't available, your $/perf isn't available. There
are plenty of performance numbers out there, and they may or may not
apply to you. You can always choose to wait, but things will always
change (especially pricing). There are times when it is obvious to
wait (a product launch occurring in a week or so), and there are times
when it isn't.
Plus the slew of
price reductions around the corner. I'm not ready to do the calcs yet.

Price reductions for who?
Unless you think that price/perf is another of those 'average' figures
that can be pulled out of thin air.
I will do the calcs when I have clear performance data and can work
out price/performance for my needs.

Sure, but don't confuse your needs with what everyone else requires. I
fully agree that you are best served by benchmarking your application,
with your data sets. No questions asked; however, the interesting
aspect is to what extent you can rely on industry-standard benchmarks
or other samples.

You're basically ignoring all the data out there, which IMHO is silly.
Now, you can take a reasoned look at it and say: "I don't care about
webserving, ignore this one", and that makes sense. But I have only
heard the beginning of a good discussion about:

1. How/why Maya, 3DS max and lightwave are all so different
2. How/why Lightwave is significantly different than an average
workload
3. What the performance characteristics are of Lightwave
Okay - if you'd said that it looks like Woodcrest will take the lead
in a bunch of workloads and it will be available off the shelf real
soon now so it's worth having a look to see if it suits your workload,
I'd emphatically agree with you.

A blanket "Right now AMD is not performance competitive with Intel in
servers" is just plain wrong, and worse it is misinformation.

You're certainly entitled to your opinions. As always YMMV, and that
was hardly a blanket statement if you think about it:

Did I ever say "Right now AMD is not performance competitive with Intel
in ALL server workloads"? I don't think I did. Did I ever say "Intel
will win all server benchmarks"? I don't think so.
This statement is a real shame because it looks like Woodcrest could
be a very good chip, but this cheerleading really does no favours to
the perceived veracity of any reasonable claims made.

Heh, you want to pick on my statements for cheerleading? If you were
to bring up everything that George and Yousuf have said that is
cheerleading, you'd have arthritis...
I understand it's all about marketing, and Intel are smarting from a
fairly complete drubbing for the last 2 to 3 years, but that doesn't
mean we have to join in with this sort of hype in this newsgroup.

Have you read this newsgroup? There is so much BS hype flying around
here it's painful.

DK
 

David Kanter

George said:

This is really annoying you know.
That would be an excellent point, if it were in fact true. However...

http://h71016.www7.hp.com/dstore/Mi...=2424&BaseId=19127&oi=E9CED&BEID=19701&SBLID=

That is not a Woodcrest system - DC Xeon 5050... though there are a couple
of such for more $$.

Ooops, sorry about that. I will concede the point that Woodcrest may
not be available for a week or two. IBM's ETAs were around 5-10
business days, but they were more like general guidelines than actual
ship dates.
Hmm, 7/23 is close enough I suppose but for Ryan, better check
www.hp.co.uk.:-(

It's also worth checking Dell and IBM...although their US sites don't
give out good ETAs.
Hmm, not quite... in this case.

Yup, you're right.

DK
 

Ryan Godridge

George said:

This is really annoying you know.
Where can I find benchmarks for 3D rendering in Lightwave (8.5 or 9
beta I don't mind) which were done on generally available machines -
as in I could buy one and have it delivered tomorrow with Woodcrest
processors.

So here's the first question. Why does it have to be a GA machine?
You're clearly a little flexible about the version of the software,
which will introduce a fair amount of uncertainty.


I think you miss the point - there are NO benchmarks on retail boxes
now. There are only preview benchmarks, due to the fact that there
are no boxes available today.

That would be an excellent point, if it were in fact true. However...

http://h71016.www7.hp.com/dstore/Mi...=2424&BaseId=19127&oi=E9CED&BEID=19701&SBLID=

That is not a Woodcrest system - DC Xeon 5050... though there are a couple
of such for more $$.

Ooops, sorry about that. I will concede the point that Woodcrest may
not be available for a week or two. IBM's ETAs were around 5-10
business days, but they were more like general guidelines than actual
ship dates.
Hmm, 7/23 is close enough I suppose but for Ryan, better check
www.hp.co.uk.:-(

It's also worth checking Dell and IBM...although their US sites don't
give out good ETAs.
Hmm, not quite... in this case.

Yup, you're right.

DK

I'm not usually one to give up the fight but I've lost interest in
this one.

In the UK Woodcrest is not currently available - you said it was -
stupid mistake.

You cocked up the O/S info - a schoolboy error.

You weasel-worded around your claims about - "Right now AMD is not
performance competitive with Intel in servers"

You treat everybody as if they know nothing and you know everything,
you've got some information - you've got very little knowledge.

Take your empty boasts somewhere else, we've lost interest - come back
when you've grown up and you've got a tenth of the experience of
George and Yousuf and any of the other regulars on this group.

Woodcrest may be good, not for everything, not ever.

Give my regards to the architects you speak to on a daily basis.

Now you can go back to being John Corse.
 

George Macdonald

I figured I'd let that one go - what's a week or three amongst
friends.

Yeah but I don't see any 5100s at the UK site yet... nor do I see direct HP
sales there so delivery date depends on the distributors/e-tailers stock
situation.
 

Ryan Godridge

Yeah but I don't see any 5100s at the UK site yet... nor do I see direct HP
sales there so delivery date depends on the distributors/e-tailers stock
situation.

I suspect that the point will be moot - I'll look at it and think
bugger that for a game of skittles and I'll build again.

I have this fantasy that it will work out somehow easier if I buy
rather than build - never works.

Must admit I like my Proliant 5500 - Quad PPro though, chugs along
nicely.

Ryan
 

David Kanter

I'm not usually one to give up the fight but I've lost interest in
this one.

In the UK Woodcrest is not currently available - you said it was -
stupid mistake.

To be perfectly honest, I hardly consider this a settled issue. All we
have is your word that WC isn't available in the UK. It appears to be
available within 2-3 weeks in the States from HP. I'll readily admit
my example was wrong (which is what I get for trying to type coherent
arguments late at night, while playing video games). However, you are
trying to claim that something does not exist or is not available.
That's a rather difficult claim, because it is, as you have said
earlier, a blanket statement. Now perhaps you are right and Woodcrest
is not available in the UK, that could be. However, so far I just have
your word for it, considering I haven't looked at any UK server
providers.
You cocked up the O/S info - a schoolboy error.

Excuse me? I cited the best sources of information that were there.
When you don't have perfect information (which never happens), you work
with what you have. Is it really that hard to try and figure out the
difference in performance between 32b and 64b windows?

What you are looking for is perfect information, and you reject
anything that doesn't fit what you want. In some ways, that is
commendable, because you need to be aware of the shortcomings of the
information which you have. However, outright discarding information
is rather stupid unless you have a really good reason.
You weasel worded around your claims about - "Right now AMD is not
performance competitive with Intel in servers"

No, you just can't read between the lines. As I've said before, any
discussion of benchmarks or performance always include YMMV. Sometimes
I don't bother stating it because I assume my audience would understand
that; and sometimes I'm wrong.
You treat everybody as if they know nothing and you know everything,
you've got some information - you've got very little knowledge.

Eh, perhaps. I never claimed to know everything, and I've readily
admitted my mistakes. In fact, I seem to recall pointing out that you
are the one who knows about LW and rendering applications and asking
you questions about it (for instance, sensitivity to memory size,
similarity of different rendering methods), so I think there is in fact
rather little to back up your claims here. I'm still waiting to hear
about whether the radiosity test was in any way similar to the
software rendering test in the GamePC review...

On the other hand you seem to hold some deeply seated views which you
are unwilling to challenge or re-evaluate regarding statistics,
benchmarking and performance. AFAICT, there is very little to back
them up.

DK
 

George Macdonald

I suspect that the point will be moot - I'll look at it and think
bugger that for a game of skittles and I'll build again.

I have this fantasy that it will work out somehow easier if I buy
rather than build - never works.

I know what you mean - DIY is less trouble in the long run but *can* have
its additional aggro. Right now I'm trying to get F/W updates for Seagate
7200.9 SATA drives, due to a known "incompatibility" with nForce4 - Seagate
support knows about this but they're telling me to run the diagnostics "to
determine if there is a problem"... escalation is hell.Ô_ô
Must admit I like my Proliant 5500 - Quad PPro though, chugs along
nicely.

Wow - amazing how old powerhouses can have extended useful lifecycles.
 

George Macdonald

George said:

This is really annoying you know.
Where can I find benchmarks for 3D rendering in Lightwave (8.5 or 9
beta I don't mind) which were done on generally available machines -
as in I could buy one and have it delivered tomorrow with Woodcrest
processors.

So here's the first question. Why does it have to be a GA machine?
You're clearly a little flexible about the version of the software,
which will introduce a fair amount of uncertainty.


I think you miss the point - there are NO benchmarks on retail boxes
now. There are only preview benchmarks, due to the fact that there
are no boxes available today.

That would be an excellent point, if it were in fact true. However...

http://h71016.www7.hp.com/dstore/Mi...=2424&BaseId=19127&oi=E9CED&BEID=19701&SBLID=

That is not a Woodcrest system - DC Xeon 5050... though there are a couple
of such for more $$.

Ooops, sorry about that. I will concede the point that Woodcrest may
not be available for a week or two. IBM's ETAs were around 5-10
business days, but they were more like general guidelines than actual
ship dates.
Hmm, 7/23 is close enough I suppose but for Ryan, better check
www.hp.co.uk.:-(

It's also worth checking Dell and IBM...although their US sites don't
give out good ETAs.

Hmmm, turns out 7/23 is a Sunday - they've corrected it:) to 7/24.
 

Ryan Godridge

On Mon, 03 Jul 2006 22:12:16 +0100, Ryan Godridge <Ryan> wrote:


I know what you mean - DIY is less trouble in the long run but *can* have
its additional aggro. Right now I'm trying to get F/W updates for Seagate
7200.9 SATA drives, due to a known "incompatibility" with nForce4 - Seagate
support knows about this but they're telling me to run the diagnostics "to
determine if there is a problem"... escalation is hell.Ô_ô
Sometimes the long run just seems to be a bit too long :(

Seagate will be telling you to turn the machine off and on again in a
bit to see if that fixes it.
I wish you well in your endeavours.
Wow - amazing how old powerhouses can have extended useful lifecycles.

I saw it on Ebay and just had to spend the £35 (including shipping!)
to get it:

Quad PPro, 640 MB RAM, 4 * 10K SCSI, 2 extra Xeon 700s thrown in, all
the usual SCSI RAID cards etc.

I've added 2 40 GB ide drives and a dvdrom, away she goes - lovely,
gonna set it up as a file server. I added an ide card for the disks
so I can put 4 in - if I can fit them in.

I had a problem with the ide card which had a fix for Win2K pro but
not win2K server so currently only 2 PPros in. I'll see how Linux
likes it with the ide card and 4 cpus or failing that a better ide
card, seems like the Promise cards may play nicely with the older
servers.

It's completely suited for its final purpose - sitting under the house
serving files day in and day out. Parts are cheap, and it's a good
toy:)

Ryan
 

willbill

George said:
I know what you mean - DIY is less trouble in the long run but *can* have
its additional aggro. Right now I'm trying to get F/W updates for Seagate
7200.9 SATA drives, due to a known "incompatibility" with nForce4 - Seagate
support knows about this but they're telling me to run the diagnostics "to
determine if there is a problem"... escalation is hell.Ô_ô


with what nVidia Northbridge/Southbridge?

I mean, I've got the nForce Pro 2200/2050
(Northbridge/Southbridge).

I mean, is it the nForce4 software, or the hardware,
or both? The most recent d/l of the nForce4 software
for mine was a huge 41MB!

I mean, I've got 3 Seagate 7200.9 SATA drives and
was planning to hook them up in my SuperMicro H8DCE,
but I'll be using them RAID 1E with a somewhat high-end
(Areca ARC-1210) RAID adapter, so maybe I'll be OK?

kindly provide a bit more detail on your situation

and any suggestions (for me) will be very welcome

of course, there's always "try it". :)

bill
 
