Wouter said:
One application usually uses just one CPU, picked by Windows.
For a single application to make full use of the dual-CPU
capacity, it has to be written with that in mind.
When creating multi-threaded Windows apps, most compilers
produce applications that let Windows manage thread
allocation. By default, Windows essentially just dispatches
each thread to whichever processor is least busy at the time
the thread is created.
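As a rough sketch of what that default looks like from the
programmer's side (standard C++, with a made-up worker
function standing in for whatever the app actually does):

    #include <thread>
    #include <vector>

    // Hypothetical worker; stands in for the app's real work.
    void do_work(int id)
    {
        // ... CPU-bound work ...
    }

    int main()
    {
        // Ask how many logical processors the machine reports.
        unsigned n = std::thread::hardware_concurrency();

        // Spawn one thread per processor. Which processor each
        // thread actually runs on is left entirely to Windows.
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back(do_work, i);

        for (auto& t : workers)
            t.join();
    }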
A multi-threaded app is not confined to a single processor
unless the programmer(s) go out of their way to make it so,
or to expose it as an option for the user, such as in this
app ...
One application I know of is MS SQL Server, which has a
setting where you can specify how to use multiple CPUs.
Some applications won't work well on multiple CPUs,
especially when they launch other processes that may end up
running on the other CPU; that situation can lead to timing
problems (e.g. a process that happens to run on the other CPU
finishes earlier than the programmer ever expected).
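A toy illustration of that kind of timing assumption, with a
thread standing in for the launched process (the names are
made up, and the lack of synchronization is the whole point):

    #include <thread>
    #include <iostream>

    int  shared_result = 0;
    bool result_ready  = false;   // deliberately unsynchronized

    void child()
    {
        // On one CPU the parent often got here first; on a second
        // CPU this may run immediately and find nothing ready.
        if (result_ready)
            std::cout << "result: " << shared_result << "\n";
        else
            std::cout << "finished before the parent expected\n";
    }

    int main()
    {
        std::thread t(child);   // "launch the other process"
        shared_result = 42;     // parent assumes this happens first
        result_ready  = true;
        t.join();
    }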
Ideally threads should be completely independent of one another.
That, of course, is the whole point of threads: to allow
independent processes to be run simultaneously instead of
sequentially. However, in reality thread interdependencies
are commonplace and it is up to the programmers to manage those
interdependencies. Some programmers, of course, are better at
that than others. Purists will argue that if two threads are
interdependent then the tasks those threads do never should
have been split into separate threads in the first place.
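When threads really do depend on each other, managing that
dependency explicitly is what it comes down to; a minimal
sketch using a mutex and condition variable (the names are
illustrative only):

    #include <thread>
    #include <mutex>
    #include <condition_variable>
    #include <iostream>

    std::mutex              m;
    std::condition_variable cv;
    int  shared_result = 0;
    bool result_ready  = false;

    void consumer()
    {
        std::unique_lock<std::mutex> lock(m);
        // Block until the producer has actually published the
        // result, no matter which processor either thread is on.
        cv.wait(lock, [] { return result_ready; });
        std::cout << "result: " << shared_result << "\n";
    }

    int main()
    {
        std::thread t(consumer);
        {
            std::lock_guard<std::mutex> lock(m);
            shared_result = 42;
            result_ready  = true;
        }
        cv.notify_one();
        t.join();
    }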
Similarly, a badly designed multi-threaded app can end up
performing no better on several processors than when just
a single processor is available. If the threads are
interdependent and those interdependencies are badly
managed, then the overall progress of the app is basically
limited to the pace of the slowest thread.
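For example, if every thread spends nearly all of its time
holding the same lock, a second processor buys essentially
nothing; a deliberately bad sketch:

    #include <thread>
    #include <mutex>
    #include <vector>

    std::mutex big_lock;
    long       total = 0;

    void worker()
    {
        for (int i = 0; i < 1000000; ++i) {
            // The lock is held around the whole unit of work, so
            // only one thread makes progress at any moment; two
            // CPUs end up no faster than one.
            std::lock_guard<std::mutex> lock(big_lock);
            total += i;
        }
    }

    int main()
    {
        std::vector<std::thread> workers;
        for (int i = 0; i < 2; ++i)
            workers.emplace_back(worker);
        for (auto& t : workers)
            t.join();
    }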