Jon Skeet [C# MVP]
I guess we'll just have to disagree on a few things, for the reasons I've
already stated. I don't see much point in going back and forth saying the
same things...
I should say (and I've only just remembered) that a few years ago I
was unsure where the safety came from, and I mailed someone (Vance
Morrison? Chris Brumme?) who gave me the explanation I've been giving
you.
With regard to runtime volatile read/writes and acquire/release semantics of
Monitor.Enter and Monitor.Exit we can agree.
I don't agree that anything specified in either ECMA-334 or ECMA-335 covers all levels
of potential compile-time class member JIT/IL compiler optimizations.
It specifies how the system as a whole must behave: given a certain
piece of IL, there are constraints on how it can legitimately execute.
I don't agree that "int number; void UpdateNumber(){lock(locker){
number++;}}" is equally as safe as "volatile int number; void UpdateNumber(){
number++; }"
I agree - the latter version, the one without the lock, is *unsafe*. Two
threads could both read, then both increment, then both store, losing an
update. With the lock, everything is guaranteed to work.
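To make that concrete, here's a sketch I've put together (not from the original exchange; it's Java, whose synchronized blocks give the same acquire/release monitor semantics as Monitor.Enter/Exit, and the class/method names are mine): two threads incrementing a shared counter under a common lock never lose an update.

```java
// Sketch: the lock serializes the whole read-increment-store sequence,
// so the final count is exact. Without the synchronized block, updates
// could be lost even though each individual int write is atomic.
public class Counter {
    private final Object locker = new Object();
    private int number; // deliberately not volatile

    public void updateNumber() {
        synchronized (locker) {
            number++;
        }
    }

    public int get() {
        synchronized (locker) {
            return number;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) c.updateNumber();
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // prints 200000: no lost updates
    }
}
```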
With the following Monitor.Enter/Exit IL, for example:
...what part of that IL tells the JIT/IL compiler that Tester.number
specifically should be treated differently--where lines commented // * are
the only lines distinct to usage of Monitor.Enter/Exit?
The fact that it knows Monitor.Enter is called, so the load (in the
logical memory model) cannot occur before Monitor.Enter. Likewise it
knows that Monitor.Exit is called, so the store can't occur after
Monitor.Exit. If it calls another method which *might* call
Monitor.Enter/Exit, it likewise can't move the reads/writes as that
would violate the spec.
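A sketch of that constraint (again in Java, whose monitor semantics parallel the CLI's; the Tester shape here just mirrors the example under discussion):

```java
// Sketch: entering the monitor has acquire semantics and exiting it has
// release semantics, so the compiler/JIT may not move the field access
// outside the region the two bracket.
public class Tester {
    private static final Object locker = new Object();
    private static int number;

    static int updateNumber() {
        synchronized (locker) { // acquire: the load of number below
                                // cannot be hoisted above this point
            number++;
            return number;      // release on block exit: the store cannot
                                // be delayed past the end of this block
        }
    }
}
```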
...where an IL compiler is given ample amounts of information that
Tester.number should be treated differently.
It's being given ample information: the very presence of the calls to
Monitor.Enter and Monitor.Exit, as explained above.
I don't think it's safe, readable, or future-friendly to utilize syntax
strictly for its secondary consequences (using Monitor.Enter/Exit not for
synchronization but for acquire/release semantics; as in the above line,
modification of an int is already atomic, so "synchronization" is
irrelevant), even if it were effectively identical to another syntax. Yes,
if you've got a non-atomic invariant you still have to synchronize (with
lock, etc.)... but volatility is different and needs to be accounted for
just as much as thread-safety.
Again you're treating atomicity as almost interchangeable with
volatility, when they're certainly not. Synchronization is certainly
relevant whether or not writes are atomic. Atomicity just states that
you won't see a "half-way" state; volatility states that you will see
the "most recent" value. That's a huge difference.
The volatility is certainly not just a "secondary consequence" - it's
vital to the usefulness of locking.
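A minimal sketch of that distinction (Java again; its volatile gives the same visibility guarantee being discussed, and the names are mine): the write to data is atomic either way, but only publishing it through a volatile flag guarantees the reader sees it rather than a stale value.

```java
// Sketch: data = 42 happens-before the volatile write of ready, so once
// the reader observes ready == true it is guaranteed to see data == 42.
// Atomicity alone (a plain int write) would not give that guarantee.
public class Publish {
    static int data;               // plain field: write is atomic, visibility unspecified
    static volatile boolean ready; // volatile: supplies the visibility guarantee

    public static int readWhenReady() {
        Thread writer = new Thread(() -> {
            data = 42;    // ordered before the volatile write below
            ready = true;
        });
        writer.start();
        while (!ready) { } // spin; terminates because ready is volatile
        return data;       // guaranteed 42, not a value cached in a register
    }
}
```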
Consider a type which isn't thread-aware - in other words, nothing is
marked as volatile, but it also has no thread-affinity. That should be
the most common kind of type, IMO. You can't retrospectively mark the
fields as being volatile, but you *do* want to ensure that if you use
objects of the type carefully (i.e. always within a consistent lock)
you won't get any unexpected behaviour. Due to the guarantees of
locking, you're safe. Otherwise, you wouldn't be. Without that
guarantee, you'd be entirely at the mercy of type authors for *all*
types that *might* be used in a multi-threaded environment making all
their fields volatile.
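A sketch of that scenario (Java; class names are hypothetical): a thread-unaware type with plain fields, kept correct purely because every access goes through the same lock. The acquire/release semantics of the monitor, not anything in the type itself, provide the visibility.

```java
// Sketch: Point knows nothing about threading. LockedUse never touches
// it outside the lock, so readers are guaranteed to see writers' updates.
class Point {          // thread-unaware: plain fields, nothing volatile
    int x, y;
    void moveBy(int dx, int dy) { x += dx; y += dy; }
}

public class LockedUse {
    private final Object lock = new Object();
    private final Point p = new Point();

    public void move(int dx, int dy) {
        synchronized (lock) { p.moveBy(dx, dy); } // all mutation under the lock
    }

    public int[] snapshot() {
        synchronized (lock) { return new int[] { p.x, p.y }; } // all reads too
    }
}
```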
Further evidence that it's not just a secondary effect, but one which
certainly *can* be relied on: there's no other thread-safe way of
using doubles. They *can't* be marked as volatile - do you really
believe that MS would build .NET in such a way that wouldn't let you
write correct code to guarantee that you see the most recent value of
a double, rather than one cached in a register somewhere?
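The lock-based pattern for that case looks something like this sketch (Java; note Java itself happens to permit volatile long/double, unlike C#, but the locking pattern is the same and the names are mine):

```java
// Sketch: the double field cannot be marked volatile in C#, so the lock's
// release (on set) and acquire (on get) provide the "most recent value"
// guarantee instead - and also rule out torn reads of the 64-bit value.
public class SharedDouble {
    private final Object lock = new Object();
    private double value; // guarded by the lock

    public void set(double v) {
        synchronized (lock) { value = v; } // release: the store is published
    }

    public double get() {
        synchronized (lock) { return value; } // acquire: sees the latest store
    }
}
```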
This *is* guaranteed, it's the normal way of working in the framework
(as Willy said, look for volatile fields in the framework itself) and
it's perfectly fine to rely on it.
Jon