CPU usage on multi-core system below 100%

herpers

Hello,

I have a question about threads. I wrote an application which uses
threads to do some calculations. The threads are started using the
Thread class and the ParameterizedThreadStart delegate. On a single-core
system and even on a dual-core system there is no problem getting the
CPU usage to almost 100%. We also have two quad-core systems (one AMD, one
Intel) and on both systems the usage only goes up to 80% or 90%. It
doesn't matter how many parallel calculations I start. If I start the
same number of calculations but split them across two separate
instances of the main application (simply by starting the application
twice), I can get the usage to almost 100%.
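
Roughly, the threads are started like this (simplified; the CalcJob type
and the Calculate method are made-up placeholders, the real calculation
is more involved):

using System;
using System.Threading;

class CalcJob
{
    public int Iterations;
}

class Program
{
    // Stand-in for the real calculation; purely CPU-bound work.
    static void Calculate(object state)
    {
        CalcJob job = (CalcJob)state;
        double sum = 0;
        for (int i = 1; i <= job.Iterations; i++)
            sum += Math.Sqrt(i);
    }

    static void Main()
    {
        // One worker per calculation; on the quad-core boxes I start
        // at least four of these.
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            Thread t = new Thread(new ParameterizedThreadStart(Calculate));
            CalcJob job = new CalcJob();
            job.Iterations = 100000000;
            t.Start(job);
        }
    }
}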

I did a little research and understand that threads in .NET are
managed threads, which is not the same as native threads. I forgot to
bookmark it, but in the newsgroup (I think) there was a little thread by
two MVPs talking about threads and BeginThreadAffinity, which seems to
tie a managed thread to a single native thread. Since I am not very
well educated on the guts of thread programming I don't know whether
this would be a solution to my problem.

With a few words: How can I get the usage to almost 100% without
starting my application more than once? Is there a .net-only solution
or would I have to do some api-calls and mix managed and unmanaged
code (which I would like to avoid)?

(And I hope I posted to the right newsgroup) :)

Regards,
Sascha
 
Peter Duniho

> [...]
> With a few words: How can I get the usage to almost 100% without
> starting my application more than once? Is there a .net-only solution
> or would I have to do some api-calls and mix managed and unmanaged
> code (which I would like to avoid)?
>
> (And I hope I posted to the right newsgroup) :)

This newsgroup is fine. As for your question, while I haven't checked it
myself recently, _generally_ you should not have a problem having your
threads consume whatever CPU is available to them, assuming they are
entirely CPU bound.

The thread affinity is a red herring, unless you're setting it. You
shouldn't be setting it unless you have a very specific need, and of
course if you do set it, it's possible that you could have multiple
threads demanding the same CPU, preventing full use of the available CPU
time.

And yes, .NET threads are managed, but my understanding is that since the
JIT-compiled code is basically running natively, as long as your
threads are simply doing things that are CPU bound there should be no
problem with anything else preventing them from running at full speed.

Since they're not running at full speed, that suggests that maybe they are
not entirely CPU bound. It's impossible to say in what way that's the
case, however, without a good code sample. If you pare down your code to
the bare minimum required to reproduce the "not using 100% of the CPU"
issue, but provide a complete sample that someone else can easily compile
and run without any additional work, it's possible that someone can
provide an answer as to how the code's not entirely CPU bound (or you may
discover the answer yourself in simplifying the code).

Pete
 
herpers

Hi,

> case, however, without a good code sample.

Ok, that takes some time (which I am very short of), but I will try to
provide a good sample.

> entirely CPU bound

That's probably a language (English->German) problem, but what do you
mean by CPU bound?
Is a thread that uses Invoke(...) not completely CPU bound?

Regards,
Sascha
 
Peter Duniho

> Hi,
>
> Ok, that takes some time (which I am very short of), but I will try to
> provide a good sample.

Please try to, if you cannot come to an answer on your own. There's just
no way to comment on specific behavior without having specific code to
talk about.

> That's probably a language (English->German) problem, but what do you
> mean by CPU bound?

My apologies. The word "bound" here refers to what constrains the
algorithm. That is, what does the algorithm spend its time doing.
Something that is "CPU bound" is constrained by, or bound to, the CPU. It
only does things that use the CPU, and does not do anything that uses some
other component of the PC.

In reality, very few algorithms are _entirely_ CPU bound. For example,
even memory accesses can delay the execution of a thread, and of course
anything that involves i/o to an even slower device (hard drive, network,
etc.) will cause even greater delays.

But I would say that generally speaking, a thread that only does
computational things, with or without memory access, would be considered
"CPU bound" (if I recall correctly, the Windows Task Manager doesn't
distinguish memory access from other CPU operations anyway, so even if
memory access was slowing the thread down, that wouldn't show up in the
Task Manager as reduced CPU usage).

> Is a thread that uses Invoke(...) not completely CPU bound?

Which Invoke()? I would say that calling Control.Invoke() would not in
and of itself change whether a thread is CPU bound or not. However, it
does introduce a potential delay, and could in fact lead to reduced CPU
utilization. Specifically, when a thread calls Invoke(), it yields the CPU
and does not resume until the main GUI thread has had a chance to process
the invoked delegate.
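
Schematically, something like this (the form, label, and loop here are
invented for illustration, not taken from your code):

using System;
using System.Threading;
using System.Windows.Forms;

public class ProgressForm : Form
{
    Label progressLabel = new Label();

    public ProgressForm()
    {
        Controls.Add(progressLabel);
    }

    protected override void OnShown(EventArgs e)
    {
        base.OnShown(e);
        Thread t = new Thread(Worker);
        t.IsBackground = true;
        t.Start();
    }

    void Worker()
    {
        for (int i = 1; i <= 1000000; i++)
        {
            // ... one step of the CPU-bound calculation ...

            // The worker blocks on this call until the GUI thread has
            // executed the delegate; while it waits it uses no CPU.
            Invoke((MethodInvoker)delegate { progressLabel.Text = i.ToString(); });
        }
    }
}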

Ideally, if your application is the only CPU bound task on the computer,
this shouldn't really reduce the CPU utilization. As soon as the thread
that called Invoke() blocks, waiting on the GUI thread, then the GUI
thread should get to run, and once it's done processing messages it should
yield, allowing the original thread to run again. In other words, at all
points the thread that's running is from that one process and so that one
process should be the one getting the CPU time.

But I can't say for sure that it will, since ultimately it's Windows that
gets to decide if and when a given thread will run. Any time you offer
Windows a chance to delay execution of your thread, you are potentially
reducing your CPU utilization.

Also, I'm not sure how the operating system counts the time it takes to do
the thread context switch. If that time isn't included in the Task
Manager-displayed time, and you are calling Invoke() frequently from your
thread, then I think it's entirely possible that all of that context
switching could indeed account for as much as 20% overhead, preventing
your process from appearing to use all of the available CPU time as
examined in Task Manager.

Unfortunately, I don't have first-hand information about these specific
aspects of how CPU time is measured on Windows. But it's certainly a
possibility you may want to explore. For example, try removing the calls
to Invoke() and/or try calling Invoke() less (if you're currently updating
the UI every operation, maybe change that so you only update the UI every
100 operations, for example, or maybe even less frequently).

Pete
 
herpers

Hi Peter,

I guess I found the problem... at least I could increase the CPU usage
a little bit. The main problem is that my threads send progress
messages to the GUI. The GUI itself queries the threads for total
execution time every 500 ms and updates a ListViewItem. Updating the
ListViewItem every 500 ms was a bug. After I changed that to updating
the items only when there is a change in progress, the CPU usage went
up _a little_. The messages sent by the threads are appended to an RTF
control. I guess if I suppressed these messages I could further
increase the CPU usage.

Unless I'm seeing things that aren't there, I would say that the Task
Manager only displays the CPU usage caused by everything except the
updating of the GUI. I don't know, just a guess.

Anyways, thanks for the long answer. I learned a lot.

Regards,
Sascha
 
Peter Duniho

> Hi Peter,
>
> I guess I found the problem... at least I could increase the CPU usage
> a little bit. The main problem is that my threads send progress
> messages to the GUI. The GUI itself queries the threads for total
> execution time every 500 ms and updates a ListViewItem. Updating the
> ListViewItem every 500 ms was a bug. After I changed that to updating
> the items only when there is a change in progress, the CPU usage went
> up _a little_. The messages sent by the threads are appended to an RTF
> control. I guess if I suppressed these messages I could further
> increase the CPU usage.
>
> Unless I'm seeing things that aren't there, I would say that the Task
> Manager only displays the CPU usage caused by everything except the
> updating of the GUI. I don't know, just a guess. [...]

I doubt that's the case. The Task Manager should be displaying all of the
CPU time used by that process, regardless of where the time is spent.

I think it's more likely that forcing thread context switches, causing
your processing thread to have to yield at arbitrary moments, may
allow other _processes_ in the OS to get more CPU time than they otherwise
normally would. When that happens, obviously the CPU time comes from
somewhere, and that somewhere is your processing-intensive process.

You're not specific about how you interact between the two threads.
However, one obvious solution would be to, anywhere you have a call to
Control.Invoke(), replace it with a call to Control.BeginInvoke(). Then,
the GUI thread will only get to process the invoked code when it's really
its turn, and your processing thread(s) will maximize their use of the CPU.
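
For example, where the worker currently does something like the first
line, the second is the drop-in replacement (statusLabel and msg are
just placeholders for whatever control and message you're using):

// Blocks the worker until the GUI thread has processed the delegate:
statusLabel.Invoke((MethodInvoker)delegate { statusLabel.Text = msg; });

// Queues the delegate and returns immediately; the worker keeps the CPU:
statusLabel.BeginInvoke((MethodInvoker)delegate { statusLabel.Text = msg; });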

If you've got other processes on the OS that need to run, they will always
prevent you from getting to 100% CPU utilization on your own process. But
you should be able to get pretty close, and maximizing the use of your
processing thread's time by not yielding unless absolutely necessary is
the way to do that.

Note that switching to BeginInvoke() can improve your process's
utilization of the CPU. But if you are really posting an update of the
GUI for every single change to progress, then when the GUI thread does get
a chance to run, it's going to waste a lot of time performing updates for
itself that the user will never see. You'll get better utilization shown
in Task Manager, but a lot of that time will be wasted, not making forward
progress on your actual processing.

Better would be to update the GUI only every N iterations of your
processing, where you pick N so that the GUI is in fact updated only every
500 or 1000 ms or so. But even choosing N arbitrarily as, for example,
100 or 500 or 1000, is likely to show a noticeable improvement in
processing time (maybe not a huge improvement, but at least measurable).
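
In code it's just a cheap counter check in the worker loop (N,
totalIterations, and ReportProgress are placeholders here):

const int N = 1000;       // picked so updates land roughly every 500-1000 ms

for (int i = 1; i <= totalIterations; i++)
{
    // ... one iteration of the CPU-bound work ...

    // Only cross over to the GUI thread every N iterations.
    if (i % N == 0)
        ReportProgress(i);    // e.g. a BeginInvoke() as shown above
}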

Depending on how much information you need to pass to the GUI, it may not
actually be a bad idea to have the GUI poll the progress based on a timer
as you were doing before. For example, if the progress can be
encapsulated in a single 32-bit integer, you could just make that a
volatile field and the GUI would be able to examine it for the purpose of
updating progress without a need for any synchronization with the
processing thread. Then you can use a timer to control the actual
updating, saving the need to include some arbitrary reduction factor in
the processing thread. This would make the processing code simpler, and
would make the timing of updates more reliable, rather than depending on
some arbitrary, empirically-determined fixed iteration count for updates.
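
A rough sketch of that scheme, assuming the progress really does fit in a
single int (all of the names here are invented):

public class CalculationWorker
{
    // volatile: the GUI thread always reads the latest value, no locking needed.
    private volatile int completedIterations;

    public int CompletedIterations
    {
        get { return completedIterations; }
    }

    public void Run(int total)
    {
        for (int i = 1; i <= total; i++)
        {
            // ... CPU-bound work for one iteration ...
            completedIterations = i;    // cheap write, never blocks
        }
    }
}

// On the form, a System.Windows.Forms.Timer with Interval = 500:
//   void progressTimer_Tick(object sender, EventArgs e)
//   {
//       progressLabel.Text = worker.CompletedIterations.ToString();
//   }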

The key here is to make sure you're polling the processing thread
correctly. If that was a problem before in your code, then it's likely
you were doing something inefficient in the polling. Without any sample
code showing what you're doing, it's impossible to say what might have
been the problem. But done correctly, a timer-based polling scheme should
work fine and if you know you need to update the UI every N milliseconds,
then it's likely the right thing to do. Polling's only bad _most_ of the
time. :)

Pete
 
herpers

Hi Peter,
[Sorry for the late reply, but I was out of the office for a couple of
days.]

> However, one obvious solution would be to, anywhere you have a call to
> Control.Invoke(), replace it with a call to Control.BeginInvoke().

I never used it before, but I will give it a try. It shouldn't make
things worse. :)

> Better would be to update the GUI only every N iterations of your
> processing, where you pick N so that the GUI is in fact updated only every
> 500 or 1000 ms or so. But even choosing N arbitrarily as, for example,
> 100 or 500 or 1000, is likely to show a noticeable improvement in
> processing time (maybe not a huge improvement, but at least measurable).

That's what I do right now... at least for a ListView of currently
running calculations. But there is another control (that RTF control I
mentioned above) that is updated immediately. Maybe I should buffer
that into a stream (?) and flush the stream contents to the RTF
control when a GUI update takes place...
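
Something like this, maybe (using a StringBuilder rather than a stream;
the field, method, and control names are made up):

// Inside the form class; needs "using System.Text;" at the top of the file.
// Shared buffer; both the worker threads and the GUI thread touch it,
// so access is wrapped in a lock.
private readonly object messageLock = new object();
private StringBuilder pendingMessages = new StringBuilder();

// Called from the worker threads: no GUI interaction, returns immediately.
void QueueMessage(string text)
{
    lock (messageLock)
    {
        pendingMessages.AppendLine(text);
    }
}

// Called on the GUI thread during its periodic update.
void FlushMessages()
{
    string text;
    lock (messageLock)
    {
        text = pendingMessages.ToString();
        pendingMessages.Length = 0;   // clear the buffer
    }
    if (text.Length > 0)
        richTextBox1.AppendText(text);
}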

Ok, then I will try the BeginInvoke method, the message buffering, and
(also) I will try to find unnecessary GUI updates.

Thanks,
Sascha
 
