Dual processors - Why?

David Maynard

DaveW said:
ONLY with applications written to take advantage of dual processors, which
are very few in number.

That is a common misconception derived from a premise that 'the one app' is
the only measure of whether performance has increased, but there is never
just 'the one app' running on a system.

Even if 'the one app', which isn't SMP aware, is confined to operating on
one processor, the second one frees it from all the other apps and OS
functions (let's not forget disk access, sound, video, etc.) stealing time
on the 'one' processor running 'the one app'.

Given equivalent processors in the comparison, a dual processor system has
twice the cache and twice the number of control registers for context
switching, which is another reason why they feel more responsive.
Also, the OS must be written to use dual CPUs.

That, of course, is true.
 
Juhan Leemet

It's inherently true: 1100 can't possibly be bigger than 1200, much less
'twice' as big.


The comment was about 'speed', which I presumed meant processing power.
'Effective' is another matter, depending on what you mean by it.


Well, they shouldn't 'stop cold' unless you've got priorities set to
allocate CPU time exclusively to the database app.

Depends on the "O/S". I remember having that experience when running
big/long Access97 queries on Windows98 at a client site. Access is nice.
Windows98 blows chunks. That "cooperative multitasking" b.s. (dunno if
they've finally got rid of it? no, I guess then it wouldn't "act like
windows", would it?) was quite probably at fault. In my case, I had
written some queries using VBA, so that I could do special programmed
selections and processing. Turned out that while the VBA loop was
running, nothing else would get any CPU. I had to put "breaks" into the
code: e.g. count 100 iterations in the loop and make special system calls
to "voluntarily give up" the CPU to anyone that might want it. I'm used to
preemptive multitasking and time-slicing schedulers in "real" O/Ses, so
this was really, especially annoying. To actually have to write in kludges
to make multitasking work is an abomination! BTW, while I was kludging I
put in some "progress indicators". They also helped, since the queries
were slow/long and sometimes you wondered if it had crashed (again?).
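The "break" kludge Juhan describes can be sketched as follows. This is a minimal Python stand-in for the VBA loop (in VBA the yield call would be `DoEvents`); the function and parameter names are illustrative, not from the original code:

```python
def cooperative_query(rows, yield_every=100, yield_fn=None):
    # Process rows, voluntarily yielding the CPU every `yield_every`
    # iterations -- the manual "break" needed under cooperative
    # multitasking, where a busy loop otherwise starves everything else.
    results = []
    yields = 0
    for i, row in enumerate(rows, 1):
        results.append(row * 2)                # stand-in for the real per-row work
        if yield_fn is not None and i % yield_every == 0:
            yield_fn()                         # DoEvents in VBA; time.sleep(0) on a preemptive OS
            yields += 1
    return results, yields

# With 250 rows the loop hands control back twice (at rows 100 and 200).
out, times_yielded = cooperative_query(range(250), yield_every=100,
                                       yield_fn=lambda: None)
```

On a preemptive, time-sliced scheduler the whole dance is unnecessary: the kernel interrupts the loop on its own, which is exactly the point being made.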
That is easily explained by postulating that the database app runs on
only one processor so there's half of the system left 'idle' for your
other apps to run in. That would be true regardless of what the combined
'speed' is and doesn't say anything about it.


I don't understand why you say the comparison of otherwise equal systems
is 'difficult'.

Yeah. I generally favor multiprocessor systems for that reason. There are
more CPUs to share the load, and there's more likely one "free" to handle
any new work or event.

BTW, you cannot always generalize linearly from clock rates, etc. I had a
case where a quad-CPU system seemed to not perform much better (if at all)
than a dual-CPU on the same mobo. The quad CPUs were actually higher clock
rate (but different internal architecture, tho same instruction set), but
smaller cache. I think it was a combination of cache starvation and
perhaps also memory bus choking that limited performance.

p.s. These days I run Solaris and Linux and I'm much happier. YMMV
 
David Maynard

Juhan said:
Depends on the "O/S". I remember having that experience when running
big/long Access97 queries on Windows98 at a client site. Access is nice.
Windows98 blows chunks. That "cooperative multitasking" b.s. (dunno if
they've finally got rid of it? no, I guess then it wouldn't "act like
windows", would it?) was quite probably at fault. In my case, I had
written some queries using VBA, so that I could do special programmed
selections and processing. Turned out that while the VBA loop was
running, nothing else would get any CPU. I had to put "breaks" into the
code: e.g. count 100 iterations in the loop and make special system calls
to "voluntarily give up" the CPU to anyone that might want it. I'm used to
preemptive multitasking and time-slicing schedulers in "real" O/Ses, so
this was really, especially annoying. To actually have to write in kludges
to make multitasking work is an abomination! BTW, while I was kludging I
put in some "progress indicators". They also helped, since the queries
were slow/long and sometimes you wondered if it had crashed (again?).

Well, I had the impression he was using a Win2K/XP system. The Win9x series
is completely different.

Yeah. I generally favor multiprocessor systems for that reason. There are
more CPUs to share the load, and there's more likely one "free" to handle
any new work or event.

BTW, you cannot always generalize linearly from clock rates, etc. I had a
case where a quad-CPU system seemed to not perform much better (if at all)
than a dual-CPU on the same mobo. The quad CPUs were actually higher clock
rate (but different internal architecture, tho same instruction set), but
smaller cache. I think it was a combination of cache starvation and
perhaps also memory bus choking that limited performance.

Yes. That's why I said processors of the same class.
 
John R Weiss

Not true when multitasking!

Win NT4 (to a lesser degree), 2000, and XP Pro all have code that allows
apps/processes to be intelligently assigned to the best available CPU. For
example, a database query can hog an entire CPU in the background, and you
can use another app in the foreground on the other CPU.
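The confinement John describes can also be forced explicitly with processor affinity. A sketch using Python's `os.sched_setaffinity` (Linux-only; Windows exposes the same idea through `SetProcessAffinityMask`, and on NT/2000/XP the scheduler normally handles placement on its own):

```python
import os

def pin_to_cpu(cpu_index):
    # Confine the calling process to a single CPU so a long-running job
    # (e.g. a background database query) can't steal time from work on
    # the other processor(s). Returns the new affinity mask, or None on
    # platforms where the call isn't available.
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, {cpu_index})   # 0 means "this process"
    return os.sched_getaffinity(0)

mask = pin_to_cpu(0)   # {0} on Linux, None elsewhere
```

Explicit pinning is only for when you want to force the split; letting the OS balance the load is usually the right default.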
 
John R Weiss

David Maynard said:
It's inherently true: 1100 can't possibly be bigger than 1200, much less
'twice' as big.

If you are considering the CPU only, that's true. If you consider the
entire system, it may or may not be true.

The comment was about 'speed', which I presumed meant processing power.
'Effective' is another matter, depending on what you mean by it.

"Effective" is speed/ease of accomplishing work/tasks. If you cannot
perform a second task at all because another is hogging CPU time, the system
is not effective at all in performing the second task.
 
David Maynard

John said:
If you are considering the CPU only, that's true. If you consider the
entire system, it may or may not be true.

He wasn't making a 'system' comparison. He was making a statement about
dual 550 processors vs a 1200 processor and if the rest of the system is
dramatically different then it isn't a valid 'processor' comparison;
which was the problem with his, where the disparity between a true AGP card
and a built-in shared-memory display corrupts the results.

Just as it wouldn't be valid to compare how fast Word2 for DOS comes up on
a PIII-933 vs WordXP on an XP 1800+ running WinXP and claim the PIII 933
was 'twice as fast' as the AMD XP 1800+.

"Effective" is speed/ease of accomplishing work/tasks. If you cannot
perform a second task at all because another is hogging CPU time, the system
is not effective at all in performing the second task.

That is certainly one way of looking at it but 'effective' depends on what
one wants to do. If, for example, the purpose is to perform that one task
as fast as possible then a dual system is not as 'effective' as a single.

However, as I said, the original comment was about 'processing speed' and
that two 550s were 'twice as fast' as a single 1200 and I still maintain
that is simply not true, all else being equal.

It may 'feel' more responsive at the keyboard, or it may meet your
particular notion of 'more effective', but the dual processors in the
comparison aren't 'twice as fast' as the single one.
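The arithmetic behind that claim can be made explicit: even granting the dual system perfect, zero-overhead parallelism, its aggregate cycles fall short of the single 1200.

```python
# Best case for the dual system: both 550 MHz CPUs fully busy with
# perfectly parallel work and zero coordination overhead.
dual_aggregate = 2 * 550        # 1100 MHz of raw cycles -- an upper bound
single = 1200

assert dual_aggregate < single           # 1100 < 1200: never 'twice as fast'
shortfall = single - dual_aggregate      # 100 MHz short even in the best case
```

Real workloads have serial sections and synchronization costs, so the dual system's effective throughput sits below even that 1100 ceiling.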
 
