Monitor unblock priority


Willy Denoyette [MVP]

Jon,

Inline ***

Willy.

Jon Skeet said:
I don't see why - you're making all your threads wait on a single
monitor when that's not necessary. Why is that a weak argument against
your code, when your code is trying to show inherent inefficiency?

*** No, I didn't want to show inefficiency, I only wanted to illustrate the
"un-fairness" of the CLR when using the Monitor class as a synchronization
mechanism, and I only wanted to warn the OP and others about this behavior. And
note that you changed your mind only recently when you proposed the hybrid
solution (which is still somewhat inefficient, albeit less so, unless you are
using a single lock per thread).
Yes, I don't have the time to write a full example at the moment. I may
have by the end of the week - I've spent half of this evening clearing
out backlogs of news and mail (and haven't quite finished, but I'm
getting there). I think I gave enough suggestions to show that your
code wasn't the most efficient way to go.
*** No, you didn't, you only gave suggestions to the OP before I posted the
code, and after I illustrated the "un-fairness" when waiting on a common
SyncBlock (resulting in inefficiency when trying to make it behave like it
was "fair"), you said it was inefficient and you gave enough suggestions to
make it more efficient.

They're a waste if you don't want the predictability. If you want the
predictability and are willing to pay a reasonably small price, it's
not a waste at all.

*** And what do you call a reasonable price?
You said more than that though - you said that trying to synchronize
threads in a fixed/predetermined order was highly inefficient or
impossible. I reckon it's feasible to make it only slightly inefficient
- at least for reasonable scenarios (where you don't have thousands of
threads).

*** I have something similar using a single lock per thread; it works fine
on a single-CPU box, but sometimes fails on a 4-way SMP box unless I pin
each thread to a specific CPU (using a CPU mask).
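
For reference, one way to do that kind of pinning from C# is to P/Invoke the
Win32 SetThreadAffinityMask call for the current thread; the sketch below is
illustrative (the CpuPinning wrapper is not Willy's code) and uses
Thread.BeginThreadAffinity so the CLR keeps the managed thread on the pinned
OS thread.

using System;
using System.Runtime.InteropServices;
using System.Threading;

static class CpuPinning
{
    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentThread();            // pseudo-handle for the calling thread

    [DllImport("kernel32.dll")]
    static extern UIntPtr SetThreadAffinityMask(IntPtr hThread, UIntPtr mask);

    // Pin the calling thread to a single CPU (cpuIndex 0 => mask 0x1, 1 => 0x2, ...).
    public static void PinCurrentThreadTo(int cpuIndex)
    {
        // Tell the CLR the managed thread now depends on the identity of the OS thread.
        Thread.BeginThreadAffinity();
        SetThreadAffinityMask(GetCurrentThread(), (UIntPtr)(1UL << cpuIndex));
    }
}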
I think we're talking at cross-purposes here. Calling Wait involves
entering a monitor first, and at that point there's a definite ordering
involved.
*** Calling Wait requires you to own the SyncBlock, but calling Wait can
induce a GC run (the CLR is prepared for this), suspending all managed
threads and doing some funny stuff with threads that are trying to return
from unmanaged land, and a GC run will re-order the Monitor wait queue in
the CLR (unless there's only a single lock :)).
Well, you talked about a thread never waking up again - that was one of
the reasons you gave for it being impossible to write a mechanism for
releasing threads in the same order as they're suspended. If a thread
never wakes up when it should do (i.e. when it's runnable) that
suggests it's unreliable for general use - at least under the
conditions that prevent the thread from ever running.
*** I have test cases running on large SMP boxes, and I can simulate thread
starvation when not taking care with the Monitor methods used to make threads
cooperate this way.

Monitor.Wait/Pulse/PulseAll doesn't
provide it out of the box, but it can be built on top of them.

*** Agreed, but until I can make it reasonably fail-safe, I'll stick with
unmanaged code for this.
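
As a rough sketch of what "built on top of them" can look like - illustrative
code with made-up names, not either poster's - a ticket scheme layered on a
single Monitor gives FIFO ordering regardless of the order in which the CLR
happens to wake waiters:

using System.Threading;

public sealed class FifoGate
{
    private readonly object sync = new object();
    private long nextTicket;   // next ticket to hand out
    private long nowServing;   // ticket currently allowed to proceed

    public void Enter()
    {
        lock (sync)
        {
            long myTicket = nextTicket++;
            // Wake-up order is not guaranteed, so every thread re-checks its ticket.
            while (myTicket != nowServing)
                Monitor.Wait(sync);
        }
    }

    public void Exit()
    {
        lock (sync)
        {
            nowServing++;
            Monitor.PulseAll(sync);   // wake everyone; only the next ticket gets through
        }
    }
}

The ordering is exact, but each release broadcasts to all waiters, which is
where the extra context switches being discussed come from; the
one-lock-per-thread variant discussed later avoids that.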


Willy.
 

jeff

Willy Denoyette said:
*** Agreed, but until I can make it reasonably fail-safe, I'll stick with
unmanaged code for this.

Weren't there some problems with fairness in the Win32 primitives as well?

-- jeff
 

Willy Denoyette [MVP]

jeff said:
Weren't there some problems with fairness in the Win32 primitives as well?

-- jeff


Jeff,

The Win32 thread scheduler uses a round-robin scheme built around a number of
priority-based queues. The scheduler has mechanisms in place to provide
reasonably fair thread scheduling, but as you know nothing is perfect in
this business, so it's possible there are/were some glitches.
Also, you shouldn't compare the OS scheduler with the lightweight CLR
"scheduler"; they serve different purposes. The first has to manage
all threads in all processes (so it has to be reasonably fair), while the CLR
only has to manage its own process-wide threads and is far less sophisticated
(it can't/shouldn't be) than the former. Another point is that the CLR's
"unfair" thread synchronization is not necessarily a bad thing; it's generally
more performant than fair thread scheduling.
I remember having used a "fair thread scheduler" for Java a couple of
years ago, but that was on U**x using POSIX threads (Pthreads); I don't know
if we can expect something similar on the CLR, though.

Willy.
 

Willy Denoyette [MVP]

jeff said:
As far as I can tell, Richter is saying that you can't do fair scheduling
by using only the .NET synchronization primitives. He doesn't talk at all
about the possibility of using your own queue for ensuring fairness.

-- jeff

Jeff,

Not exactly, he says you can't make it "fair" using the Monitor class and
its primitives (CLR Wait/Pulse/PulseAll); the problem is not the queue.
Note that the term "fair" is somewhat overloaded when talking about thread
synchronization. Strictly speaking it means that all threads in a system
get the same chance(1) to be on a CPU for the same amount of time(2). The
latter cannot/shouldn't be guaranteed when running a pre-emptive scheduler
using different priority chains, but IMO the latest Windows OSes do a fair
job of achieving the first.

Willy.
 

Jon Skeet [C# MVP]

Willy Denoyette said:
*** No, I didn't want to show inefficiency

Then why did you talk about inefficiency with respect to it? "Compile
the code and run it, watch the time it takes to handle the few items
and watch also the number of context switches!" That sounds like an
emphasis on inefficiency to me.
I only wanted to illustrate the
"un-fairness" of the CLR when using the Monitor class as a synchronization
mechanism, and I only wanted to warn the OP and others about this behavior. And
note that you changed your mind only recently when you proposed the hybrid
solution (which is still somewhat inefficient, albeit less so, unless you are
using a single lock per thread).

Well, I proposed the hybrid solution before you presented your code, so
I don't think it was *that* recently. Even the non-hybrid solution
would be more efficient than the code originally presented by vooose.
*** No, you didn't, you only gave suggestions to the OP before I posted the
code, and after I illustrated the "un-fairness" when waiting on a common
SyncBlock (resulting in inefficiency when trying to make it behave like it
was "fair"), you said it was inefficient and you gave enough suggestions to
make it more efficient.

I suggest you have another look at the timestamps on posts. You posted
your code at 13:58 yesterday. I posted my hybrid suggestion at 06:57
yesterday. (Both times GMT.)
*** And what do you call a reasonable price?

That depends on the application, of course. The hybrid solution can be
tuned, of course - and if you end up only using one thread per lock
(which I think would be the most common case for most applications) you
don't get much inefficiency at all. Bear in mind that your code did no
actual *work* there - in a real app where work items could take
significant amounts of time, the inefficiency of a few context switches
whenever another thread was released could easily be lost in the noise.
*** I have something similar using a single lock per thread; it works fine
on a single-CPU box, but sometimes fails on a 4-way SMP box unless I pin
each thread to a specific CPU (using a CPU mask).

That sounds like either a bug in the code or a bug in .NET threading -
without seeing the code it would be impossible to say for sure. Can you
see any theoretical reason why it *should* fail?
*** Calling Wait requires you to own the SyncBlock, but calling Wait can
induce a GC run (the CLR is prepared for this), suspending all managed
threads and doing some funny stuff with threads that are trying to return
from unmanaged land, and a GC run will re-order the Monitor wait queue in
the CLR (unless there's only a single lock :)).

But the whole point is that we're not *relying* on the monitor wait
queue, because we've got our own queue. That's how we achieve the (very
limited) fairness.
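
To make the "own queue" idea concrete: a minimal sketch (illustrative names,
not either poster's actual code) where each waiter parks on its own object and
the releaser pulses only the head of a FIFO queue, so the CLR's monitor wait
queue and its ordering are never relied on, and no other waiter is woken:

using System.Collections.Generic;
using System.Threading;

public sealed class QueuedGate
{
    private sealed class Waiter { public bool Released; }

    private readonly object sync = new object();
    private readonly Queue<Waiter> waiters = new Queue<Waiter>();
    private bool held;

    public void Enter()
    {
        Waiter me;
        lock (sync)
        {
            if (!held) { held = true; return; }   // fast path: the gate is free
            me = new Waiter();
            waiters.Enqueue(me);
        }
        lock (me)
        {
            // The flag guards against the pulse arriving before we start waiting.
            while (!me.Released)
                Monitor.Wait(me);
        }
    }

    public void Exit()
    {
        Waiter next;
        lock (sync)
        {
            if (waiters.Count == 0) { held = false; return; }
            next = waiters.Dequeue();   // ownership passes directly to the oldest waiter
        }
        lock (next)
        {
            next.Released = true;
            Monitor.Pulse(next);        // wake exactly that thread, nobody else
        }
    }
}

Only one thread is woken per release, so the broadcast cost of the PulseAll
approach disappears; the per-waiter Released flag avoids a lost wake-up if the
pulse happens before the waiter has started waiting.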
*** I have test cases running on large SMP boxes, and I can simulate thread
starvation when not taking care with the Monitor methods used to make threads
cooperate this way.

That sounds interesting. Just so I can check that we're not talking at
cross-purposes, are you getting to a situation where the CPU is idle
and there are threads which *should* be runnable, but aren't getting
anywhere?

I'm not suggesting that my solution makes threads get their fair
timeslice in a basically overloaded system - that kind of fairness is
out of the remit of the OP's request, IMO.
Monitor.Wait/Pulse/PulseAll doesn't provide it out of the box, but it can be
built on top of them.

*** Agreed, but until I can make it reasonably fail-safe, I'll stick with
unmanaged code for this.

So you agree it's not impossible? Sounds like I'm getting somewhere :)
 
