Xeon 533 FSB or P4 800 FSB ? (Corrected post)


David Schwartz

If all you want is a faster mass storage subsystem, use RAID.

No, I want a smoother mass storage subsystem.
From what
I've seen of the benchmarks at tomshardware.com, programs that don't or
can't use dual processors don't benefit much (and, like you said,
it's probably the OS that got sped up so the apps tag along on the coat
tails of the OS speedup).

Humans don't just do one thing at a time. It's called "Windows", not
"Window".
Lots of money for little benefit. Remember
not just to look at the raw numbers but at the percentage difference in
performance versus the percentage difference in what you pay.

Try using a dual-CPU system running Windows for a day. Do ordinary
tasks. Then switch back.

DS
 

Nate Edel

In comp.sys.intel David Schwartz said:
inherits a server that was obsoleted and uses it as a desktop machine. I have a
dual P3-1Ghz and dual P3-750 machine that I inherited in just this way.
They're more usable desktops than a single CPU P4-3Ghz.

For most normal desktop use, any of those are so overprovisioned that one is
unlikely to notice the difference in processor speed ... although Longhorn
may change that.

A better question is how much RAM, what sort of disks? Given that the duals
were servers, it's plausible -- indeed, likely -- that they'd have better
disk systems. Given the same latency/bandwidth issue, if they've got
low-access-time SCSI, and the single-CPU P4 has a commodity IDE, the
response-time improvement is likely noticeable.
 

Bill Davidsen

rec.arts.anime.misc*Vanguard* said:
The Pentium 4 Extreme is *NOT* Intel's next Intel Pentium 5 (dubbed
Prescott). It's a repackaged processor they already had! They wanted
to fold the Xeon family under the much better known Pentium 4 brand
name. You haven't gotten used to Microsoft's hype yet? Athlon finally
comes out with their 64-bit processor and steals the news, so Microsoft
takes an existing 64-bit processor that they've had for awhile and slaps
on a new name to give it some press to dampen Athlon's thunder. They
wanted something to announce for the Christmas crowd.

You lost me... where does Microsoft come in on this?
 

Bill Davidsen

Your "strong anecdotal evidence" is voodoo superstition _IF_ it isn't
based on benchmarks appropriate to the use of the system. There are
benchmarks of real applications that clearly show only a minor
performance increase (or decrease) in most uses. These are not
isolated synthetic benchmarks, but reproducible (and reproduced) many
many times. If you have benchmarks to the contrary that are confirmed
by a 3rd party then supply them, we're always hungry for more data.

Unfortunately it is very hard to benchmark human factors like
responsiveness. And like quantum physics, measuring changes the results.
You can come closer to a non-impact measure with a system like Linux,
where you can put low-impact data points in the kernel source, but even
there you need to get the delta between events, and that has some impact
just by doing a high-resolution time-of-day operation.
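The event-timing idea above can be sketched in a few lines. This is a hypothetical userspace illustration (not the kernel-level probes being described): record timestamps around events, subtract to get deltas, and note that the timer call itself has a measurable cost, which is the measurement impact the post warns about.

```python
import time

def timer_overhead(n: int = 100_000) -> float:
    """Average cost of one perf_counter() call, in seconds."""
    start = time.perf_counter()
    for _ in range(n):
        time.perf_counter()
    return (time.perf_counter() - start) / n

# Low-impact data points around a sequence of "events":
stamps = [time.perf_counter() for _ in range(5)]
deltas = [b - a for a, b in zip(stamps, stamps[1:])]

print(f"timer overhead: ~{timer_overhead() * 1e9:.0f} ns per call")
print(f"inter-event deltas (s): {deltas}")
```

Even a sub-microsecond timer call perturbs the very gaps it measures, which is the quantum-physics point.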

Having a few dozen people try A then B and give an opinion on which is
better, and whether by "a little" or "a lot", is more reliable, but harder because
you can't run one test on one machine and then say "the numbers prove
it!" You don't benchmark which food tastes better, and it's equally hard
to prove "feels smoother" with a benchmark.
 

Nate Edel

In comp.sys.intel *Vanguard* said:
If all you want is a faster mass storage subsystem, use RAID. From what
I've seen of the benchmarks at tomshardware.com, programs that don't or
can't use dual processors don't benefit much (and, like you said,
it's probably the OS that got sped up so the apps tag along on the coat
tails of the OS speedup).

Running on their own, no. And in the case of gaming, one usually DOES run
them on their own. In terms of real _work_, or general desktop use, one
usually does NOT run only one program at a time, and under heavy multiple
application loads, duals really shine.

I'm not aware of a system benchmark that simulates that kind of use yet;
perhaps one way to do it would be to take a dual monitor system, run a
multimedia benchmark on one, and an office apps benchmark on the other.
Lots of money for little benefit. Remember not just to look at the
raw numbers but at the percentage difference in performance versus the
percentage difference in what you pay.

Lower-end duals can come in a good bit cheaper than higher-end single-CPU
systems; the delta in motherboard prices is only around $150 (for desktop/workstation
dual mobos -- server duals, w/ SCSI, have a much bigger difference), and the
rest of the system costs essentially the same.

Example: dual Xeon 2.4/533s cost less than one P4 3.2, by about $80.
I'm still playing Thief Gold and Thief 2. I like stealth.

Well, for gaming there's little advantage to duals (unless you're prone to
leaving significant background tasks running while gaming, and assuming that
the particular games you play are still stable when doing that...)
 

Nate Edel

In comp.sys.intel David Schwartz said:
No, I want a smoother mass storage subsystem.

RAID will help with that. A really big cache will help more. A non-volatile
write cache will help a lot as well, and a lot of RAID boards can take a
large battery-backed RAM cache. If you don't want to use RAID, the
easiest way (albeit not an inexpensive one), if running Linux, is to get a BBRAM
board and put your journal on BBRAM rather than on a physical hard drive.

Duals may help slightly with mass-storage performance, but the storage
system architecture matters far more.
Humans don't just do one thing at a time. It's called "Windows", not
"Window".

Depends on what they're doing. Gamers, casual home users,
productivity/office users, and software developers/scientific users have
very different needs... for the latter categories, duals help a lot. For
gamers, not so much, although that may change as games start taking advantage
of system-level support for threading. For casual home users, who cares?
Most of them are fine with <1GHz systems anyway.
 

Ghazan Haider

I was considering Xeon vs P4 for a webserver a while ago. The only
thing that really differentiates a Xeon from a P3 is the cache. I
found P3s with 512kb cache, and thought they should be sufficient for
my web and application server. Haven't done any benchmarks, but going
Xeon shouldn't give you a very big difference. Remember the Xeon came
out with the P3, and was for the people who would pay a lot more for a
little more power.

As for the P4 vs P3, I really don't know. There's the P4 Xeon, and then
there's the good but hot Athlon.

I had also been browsing some Ultra160 SCSI cards and their disks,
along with their response times, cache, throughput etc. Turns out they're
equivalent to the cheaper and much larger SATA150 disks. I looked into
some 15K rpm Ultra320 disks but could never justify the cost. If you
plan to go Ultra160, might as well head for SATA and some 7200rpm disk
with 8MB cache and low response time.

Above 400 FSB I think the bottleneck is the disk and CPU, and other PCI
cards in the system. I wouldn't recommend going all the way to 800 FSB
while getting weaker CPU performance.
 
M

~misfit~

Ghazan said:
As for the P4 vs P3, I really don't know. There's the P4 Xeon, and then
there's the good but hot Athlon.

Huh? Man you're way behind the times. The latest P4s run a lot hotter than
the latest Athlons.
 

Nate Edel

In comp.sys.intel Ghazan Haider said:
I was considering Xeon vs P4 for a webserver a while ago. The only
thing that really differentiates a Xeon from a P3 is the cache. I
found P3s with 512kb cache, and thought they should be sufficient for
my web and application server. Haven't done any benchmarks, but going
Xeon shouldn't give you a very big difference. Remember the Xeon came
out with the P3, and was for the people who would pay a lot more for a
little more power.

Also, the P3 Xeons could be used in Quads, while the P3s topped out at
duals. Not a consideration for most of us, however.
As for the P4 vs P3, I really don't know. There's the P4 Xeon, and then
there's the good but hot Athlon.

P4 Xeon is basically the same core as the P4, with a few exceptions in
what's enabled:
- P4 doesn't allow duals, P4 Xeon does (P4 Xeon doesn't do quads, you need
the P4 Xeon MP for those)
- In some cases the Xeons have a larger cache than P4s. Some Xeons have a
large (1-2MB) L3 cache in addition to the L2.
- Prior to the 3.06 P4 coming out, P4 Xeons had hyperthreading enabled, and
P4s didn't. Then for a while, the P4 Xeons had hyperthreading and only
the fastest 3.06 P4 did also. Now the 2.4/2.6/2.8 800MHz-FSB P4s have
hyperthreading as well as the 3GHz+ models.
- Xeons haven't kept up with the FSB on desktop P4s/Athlons, and at only
266MHz ("533MHz" in Intelspeak) have a performance disadvantage.
Above 400 FSB I think the bottleneck is the disk and CPU, and other PCI
cards in the system. I wouldn't recommend going all the way to 800 FSB
while getting weaker CPU performance.

I've benchmarked the work I do before and after a motherboard swap between a
2.4/533/(845?) and a 2.4C/800/865PE (I haven't tried a 2.4/400) with
otherwise identical hardware except for the cpu/ram/main board. The
difference was significant, and since the amount of ram and the hard drive
remained the same, as did the core clock rate...
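For context on why the FSB swap alone can matter, the raw numbers are simple arithmetic: the P4's front-side bus is 64 bits (8 bytes) wide, so peak bandwidth scales directly with the effective (quad-pumped) clock Intel quotes. A back-of-the-envelope sketch:

```python
# Peak theoretical FSB bandwidth. Assumes the P4's 64-bit (8-byte)
# front-side bus; "effective MHz" is the quad-pumped figure Intel
# quotes (400/533/800).

def fsb_bandwidth_mb_s(effective_mhz: int, bus_width_bytes: int = 8) -> int:
    return effective_mhz * bus_width_bytes  # megatransfers/s * bytes = MB/s

for fsb in (400, 533, 800):
    print(f"{fsb} MHz FSB -> {fsb_bandwidth_mb_s(fsb)} MB/s peak")
# 400 -> 3200, 533 -> 4264, 800 -> 6400 MB/s
```

Real workloads see far less than these peaks, but the 533-to-800 jump is a 50% increase in headroom at the same core clock.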
 

Bill Davidsen

Ghazan said:
I had also been browsing some Ultra160 SCSI cards and their disks,
along with their response times, cache, throughput etc. Turns out they're
equivalent to the cheaper and much larger SATA150 disks. I looked into
some 15K rpm Ultra320 disks but could never justify the cost. If you
plan to go Ultra160, might as well head for SATA and some 7200rpm disk
with 8MB cache and low response time.

Under heavy database write load SCSI does have advantages: SCSI
drives accept the data and return status later, when the physical write
has been done. ATAPI drives can (a) accept and cache the data then tell
you the write is complete (it's not, bad for database work) or (b) have
the write cache disabled, in which case the performance will really rot.

There are other cases, but db is the one where you depend on the writes
really being done (via fsync or similar).
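The "writes really being done" guarantee is exactly what fsync provides. A minimal sketch (the filename and record are just illustrative): the record is not durable until fsync() returns, and a drive with write-back caching may acknowledge before the data hits the platters, which is the ATAPI hazard described above.

```python
import os

def durable_append(path: str, record: bytes) -> None:
    """Append a record and block until the OS reports it written out."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, record)
        os.fsync(fd)  # flush to the device before reporting success
    finally:
        os.close(fd)

durable_append("journal.log", b"commit txn 42\n")
```

With a lying write cache, fsync returns early and the database's commit guarantee silently evaporates; that is why disabling the cache (option b) is the safe but slow path.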
Above 400 FSB I think the bottleneck is the disk and CPU, and other PCI
cards in the system. I wouldn't recommend going all the way to 800 FSB
while getting weaker CPU performance.


It depends a lot on what you do; compute-intensive applications can use
a LOT of memory bandwidth. Not just engineering calculations, but
graphics, games, etc. Add high video update rates to that and memory
bandwidth does make a difference.
 

kony

blah, blah, blah. Complete bullshit.

LOL.

Didn't get enough trolling hours in this week so you decided to start
at 1/2 month old posts? Perhaps you wasted a couple hundred on a
dually and are too stubborn to admit it to yourself?

Anyone who makes theories and rests upon them without verification
(like benchmarks, even real-world/use 'marks) is a fool. How
seriously can we take someone who only provides verbal diarrhea?

There are SOME cases where the 2nd CPU is of benefit, but not for
typical PC use. Test your preferred apps.... if they run better then
by all means throw a 2nd CPU at 'em, but don't pretend a 2nd CPU is a
blanket performance boost, because in most _PC_ cases it isn't. This
is borne out time and time again, with single-CPU systems costing the same
or less outperforming the dually. If money is no object, there are
even fewer cases where it's better to have a dual CPU system instead
of a 2nd (or 3rd, etc) system. If you mean a database server or
purpose-specific workstation with the right apps it's a different
situation, but then there are also benchmarks that show this.


Dave
 

Eric Gisin

Moron. One does not benchmark applications, one evaluates working systems.

My system has half-a-dozen user applications, a dozen system services. I
multitask, and so does my computer.

This is not some intellectual game, it is about real-world observations.
 

Bill Todd

Eric Gisin said:

From someone like you, that's downright laughable.
One does not benchmark applications, one evaluates working systems.

'One' actually often does both; by contrast, *you* appear incompetent to do
either.
My system has half-a-dozen user applications, a dozen system services. I
multitask, and so does my computer.

Gee: I run, and so does my car. Just how many useful observations do you
suppose 'one' can draw from that?

The basic question is simple: in what percentage of use does a single
processor have difficulty keeping up with the processing load where multiple
(albeit slower) processors would do noticeably better (because multiple
independent threads are contributing significantly to the load rather than a
single CPU hog that wouldn't be greatly affected by occasionally
relinquishing a scheduling quantum to the other threads)? And the answer is
equally simple for typical PC use: not much of a percentage at all - so
getting the better performance of the single, faster CPU the rest of the
time is a win.
This is not some intellectual game, it is about real-world observations.

And you appear to be in need of glasses to improve your ability to make
them - unless you're running some ancient system without preemptive
scheduling.

- bill
 

kony

Moron. One does not benchmark applications, one evaluates working systems.

My system has half-a-dozen user applications, a dozen system services. I
multitask, and so does my computer.

This is not some intellectual game, it is about real-world observations.

"Real-world observations" are only as good as the data that supports
them. Your word is not enough to overcome a staggering number of
benchmarks of real applications, often running in these "half-a-dozen
user applications, a dozen system services" environments that you're
trying to differentiate. All those services and apps in the
background "usually" don't need a lot of CPU time, they'd run on a
Pentium 200 and certainly on a single CPU that's significantly faster
than either one of two in a dual CPU system.

Systems are multi-tasking, but almost always there is only one user. That user
is occupied with a single application at a time, and that
application's completion time, being faster with the single CPU (as
proven with benchmarks), allows that user to move on to the next task
sooner, even with those other tasks and services in the background.
The other tasks and services can be considered a constant when they're
quite easily left running while doing benchmarks.

It is not a fluid-like sequence of execution that gets the job done
faster, it's actual performance at each and every application, when
needed, one at a time then switching tasks faster than the user can
even perceive it. If all you want is a smooth experience regardless
of the actual performance, I suggest you take some valium. On the
other hand, if you want peak performance for the actual jobs you're
running, no subjective opinion can have more weight than actual
benchmarks of that application, in the same environment in which it's
to be running.

No amount of name-calling or other troll-like behavior is going to
make your argument seem valid without some verifiable examples.
You've done anything BUT provide a convincing argument.


Dave
 

David Schwartz

Moron. One does not benchmark applications, one evaluates working systems.

My system has half-a-dozen user applications, a dozen system services. I
multitask, and so does my computer.

This is not some intellectual game, it is about real-world observations.


One person can only usefully employ one computer at a time for normal
tasks. Single CPU machines have annoying hangs and delays that make using
the computer frustrating. If you've never used a dual-CPU machine, you
probably just consider those delays normal and expected and don't realize
how much nicer it is to use a machine that doesn't do that to you.

A dual P3-1Ghz machine is more usable as a normal desktop machine than a
single P4-2.4Ghz machine. What matters is not so much how fast the machine
can do something (throughput), but how much time I have to spend waiting for
it to respond to me (latency).

DS
 

David Schwartz

"Real-world observations" are only as good as the data that supports
them. Your word is not enough to overcome a staggering number of
benchmarks of real applications, often running in these "half-a-dozen
user applications, a dozen system services" environments that you're
trying to differentiate. All those services and apps in the
background "usually" don't need a lot of CPU time, they'd run on a
Pentium 200 and certainly on a single CPU that's significantly faster
than either one of two in a dual CPU system.


You are talking about throughput when the problem is latency. For
checking emails, word processing, and the tasks normal people do all day,
latency is the problem. Dual CPU machines, even with comparatively slower
CPUs, beat the pants off single CPU machines when it comes to latency. (For
PCs running Windows.)

This is a fact that's immediately apparent to anyone who has tried
normal tasks on both types of machines. A dual P3-1Ghz is much better than a
P4-2.4Ghz machine. The amount of time you spend waiting for the machine to
respond to your command is very much less.
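The latency argument can be sketched with a toy round-robin scheduling model. The quantum length and costs below are illustrative assumptions, not measurements: when every CPU is held by a CPU-bound task, an interactive keystroke waits out the remainder of the running task's quantum before it gets a CPU; with a spare CPU, it runs at once.

```python
QUANTUM_MS = 50   # assumed desktop scheduler timeslice
ECHO_WORK_MS = 1  # assumed CPU time to echo a keystroke

def echo_latency_ms(cpus: int, hogs: int, arrival_offset_ms: int) -> int:
    """Echo latency for a keystroke arriving arrival_offset_ms into the
    current quantum, with `hogs` CPU-bound tasks on `cpus` CPUs."""
    if hogs < cpus:
        return ECHO_WORK_MS  # a CPU is free: handled immediately
    # all CPUs busy: wait for the running task's quantum to expire
    return (QUANTUM_MS - arrival_offset_ms) + ECHO_WORK_MS

print(echo_latency_ms(cpus=1, hogs=1, arrival_offset_ms=25))  # 26
print(echo_latency_ms(cpus=2, hogs=1, arrival_offset_ms=25))  # 1
```

In this model the dual's advantage is not throughput at all: the single CPU finishes the hog's work just as fast, but the keystroke pays a worst-case quantum of waiting that the second CPU eliminates.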

Systems are multi-tasking but almost always, only one user. That user
is occupied with a single application at a time, and that
application's completion time, being faster with the single CPU (as
proven with benchmarks) allows that user to move on to the next task,
sooner, even with those other tasks and services in the background.
The other tasks and services can be considered a constant when they're
quite easily left running while doing benchmarks.


Nonsense. You only rarely wait for applications to complete. But you
very commonly wait for applications to respond to your request. How can you
type an email when the system won't echo your keystrokes because it's
launching IE at blinding speed?

It is not a fluid-like sequence of execution that gets the job done
faster, it's actual performance at each and every application, when
needed, one at a time then switching tasks faster than the user can
even perceive it. If all you want is a smooth experience regardless
of the actual performance, I suggest you take some valium. On the
other hand, if you want peak performance for the actual jobs you're
running, no subjective opinion can have more weight than actual
benchmarks of that application, in the same environment in which it's
to be running.


Using a single-CPU machine is frustrating. Even 3 seconds when I can't
work on my email because the machine is busy launching my browser drives me
nuts. I don't care how fast it can perform some task because *I don't wait
for the computer to finish*; I do something else while it's working.

No amount of name-calling or other troll-like behavior is going to
make your argument seem valid without some verifiable examples.
You've done anything BUT provide a convincing argument.


Have you actually used comparable machines? Say, a dual P3-1Ghz and a
P4-2.4Ghz for a day? Try it. The difference is noticeable instantaneously.

DS
 

kony

You are talking about throughput when the problem is latency. For
checking emails, word processing, and the tasks normal people do all day,
latency is the problem. Dual CPU machines, even with comparatively slower
CPUs, beat the pants off single CPU machines when it comes to latency. (For
PCs running Windows.)

This is a fact that's immediately apparent to anyone who has tried
normal tasks on both types of machines. A dual P3-1Ghz is much better than a
P4-2.4Ghz machine. The amount of time you spend waiting for the machine to
respond to your command is very much less.

It sounds sensible, but the lag you're seeing isn't present on many,
many systems. It is not an issue of having only one CPU that causes
that lag.
Nonsense. You only rarely wait for applications to complete. But you
very commonly wait for applications to respond to your request. How can you
type an email when the system won't echo your keystrokes because it's
launching IE at blinding speed?

Well following your example, right now I'm typing (close enough to
email) on a single CPU system. I launched IE... by the time I
switched tasks back to this post, IE was done loading, easily less
than 1 second. I could not have kept typing without switching back
since the launch of IE put it in focus, the amount of time it takes
for a human to respond is greater than that of the machine to process
the request for such simple tasks. Also I never recall having a
problem with keystrokes being echoed while any other task was
running... the machines you see that occurring on are very seriously
misconfigured.

Using a single-CPU machine is frustrating. Even 3 seconds when I can't
work on my email because the machine is busy launching my browser drives me
nuts. I don't care how fast it can perform some task because *I don't wait
for the computer to finish* I do something else while it's working.

OK, I'll repeat myself. The machine is seriously misconfigured.
There is no 3 second lag, not anywhere near that, on a single CPU
system. There is no "busy" to it, the machine will immediately
respond to far greater demand than (but including) email when at 100%
load launching programs or whatever else. Please provide a specific
scenario, down to every seemingly relevant detail, of what I need to do
to see this lag, because I never have, and certainly multitask quite a
bit including email, launching the browsers, and a lot more.


Have you actually used comparable machines? Say, a dual P3-1Ghz and a
P4-2.4Ghz for a day? Try it. The difference is noticeable instantaneously.

DS

That's just it, the difference IS noticeable... The P4 is faster. I
can notice how much faster it is... neither has this "lag" you write
about, but the P4 gets everything done faster. Perhaps the dual P3 I
used would be closer to the same speed with the same HDD in it, as its
HDD was about 18 months old, but the difference wasn't just in HDD
I/O.

I am at a loss to explain what you've got running that is trashing a
single CPU system... perhaps spyware or other trojans? There is no
perceivable lag. Even launching an app that takes a fair amount of
time to load, like Photoshop, doesn't interfere with typing (or
anything else). If there were even the slightest hesitation while
such events occurred, I'd be on the same "dual CPU" bandwagon.



Dave
 

Bill Davidsen

kony said:
There are SOME cases where the 2nd CPU is of benefit, but not for
typical PC use. Test your preferred apps.... if they run better then
by all means throw a 2nd CPU at 'em, but don't pretend a 2nd CPU is a
blanket performance boost, because in most _PC_ cases it isn't. This
is borne out time and time again, with single CPU systems costing same
or less, outperforming the dually. If money is no object, there are
even fewer cases where it's better to have a dual CPU system instead
of a 2nd (or 3rd, etc) system. If you mean a database server or
purpose-specific workstation with the right apps it's a different
situation, but then there are also benchmarks that show this.

If you can't notice the difference between uni and smp, please buy
something cheap and slow and don't try to stop others who are more
perceptive from benefitting from a more responsive system. You clearly
don't do more than one thing at a time, or never noticed that delay
between pressing a key and seeing a character echo.

I don't know if you can't tell the difference or just want to cast
aspersions on your betters (well, your computer's betters) out of envy,
but this tirade claiming that slow is beautiful is getting tiresome.
 

Bill Davidsen

kony said:
"Real-world observations" are only as good as the data that supports
them. Your word is not enough to overcome a staggering number of
benchmarks of real applications, often running in these "half-a-dozen
user applications, a dozen system services" environments that you're
trying to differentiate. All those services and apps in the
background "usually" don't need a lot of CPU time, they'd run on a
Pentium 200 and certainly on a single CPU that's significantly faster
than either one of two in a dual CPU system.

Right, so please quote your benchmarks measuring things like the delay
between keypress and matching change on the screen. I want to see your
numbers on time to display a complex web page, either lots of images or
better yet a mix of images and JAVA.

It is not a fluid-like sequence of execution that gets the job done
faster, it's actual performance at each and every application, when
needed, one at a time then switching tasks faster than the user can
even perceive it. If all you want is a smooth experience regardless
of the actual performance, I suggest you take some valium. On the
other hand, if you want peak performance for the actual jobs you're
running, no subjective opinion can have more weight than actual
benchmarks of that application, in the same environment in which it's
to be running.

It's not the climate control or ride which makes riding in a Bentley
nicer than a Yugo, either, but how fast you get to the end of the
journey is not why people buy Bentleys. It's just *nicer* to use an SMP
machine, less frustrating. Things actually do get done sooner (as
above) but that's just not the point.
No amount of name-calling or other troll-like behavior is going to
make your argument seem valid without some verifiable examples.
You've done anything BUT provide a convincing argument.

I've invited you to show any benchmarks indicating that responsiveness
is as good with uni as smp. Ball's in your court. You might see if you
can find the IBM study of productivity vs. response time, my (hard) copy
is in a box in another state, so I won't quote from memory.
 
