When is "volatile" used instead of "lock" ?


Samuel R. Neff

When is it appropriate to use "volatile" keyword? The docs simply
state:

"
The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access.
"

But when is it better to use "volatile" instead of "lock" ?

Thanks,

Sam
 

ben.biddington

Samuel R. Neff said:
When is it appropriate to use "volatile" keyword? The docs simply
state:

"
The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access.
"

But when is it better to use "volatile" instead of "lock" ?


You can also use the System.Threading.Interlocked class, which maintains
volatile semantics.

See also: http://www.albahari.com/threading/part4.html
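For illustration, a minimal sketch (names are my own) of a counter where every access goes through Interlocked; under the convention discussed later in this thread, the field then needs no volatile modifier:

```csharp
using System;
using System.Threading;

class InterlockedCounter
{
    // No volatile modifier: every access below goes through Interlocked.
    static int _counter;

    static void Main()
    {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.Length; t++)
        {
            threads[t] = new Thread(() =>
            {
                for (int i = 0; i < 100000; i++)
                    Interlocked.Increment(ref _counter);  // atomic read-modify-write
            });
            threads[t].Start();
        }
        foreach (Thread t in threads) t.Join();

        // A plain _counter++ could lose updates under contention; Interlocked never does.
        Console.WriteLine(_counter);  // 400000
    }
}
```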
 

Christof Nordiek

Samuel R. Neff said:
When is it appropriate to use "volatile" keyword? The docs simply
state:

"
The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access.
"

For a volatile field, the reordering of memory accesses by the optimizer is
restricted.
A write to a volatile field is always done after all other memory accesses
which precede it in the instruction sequence.
A read from a volatile field is always done before all other memory accesses
which occur after it in the instruction sequence.

A volatile field is a simple way to flag that the memory manipulations are over.

Following is an example from the specs:

using System;
using System.Threading;

class Test
{
    public static int result;
    public static volatile bool finished;

    static void Thread2() {
        result = 143;
        finished = true;
    }

    static void Main() {
        finished = false;
        // Run Thread2() in a new thread
        new Thread(new ThreadStart(Thread2)).Start();
        // Wait for Thread2 to signal that it has a result by setting
        // finished to true.
        for (;;) {
            if (finished) {
                Console.WriteLine("result = {0}", result);
                return;
            }
        }
    }
}

Since finished is volatile, in method Thread2 the write to result will
always occur before the write to finished, and in method Main the read from
finished will always occur before the read from result, so the read from
result in Main can't occur before the write in Thread2.

HTH

Christof
 

james.curran

When is it appropriate to use "volatile" keyword? The docs simply
state:

Often, if just one thread is writing to the object (and the other
threads are just reading it), you can get away with using just volatile.

Generally, the shared object would need to be an atomic value, so a
reader may see it suddenly change from state A to state B, but would
never see it halfway between A and B.
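A sketch of that single-writer pattern (my own example): the shared field is an atomic-sized int, so a reader observes either the old value or the new one, never a torn intermediate (which it could see with, say, a long on a 32-bit CLR):

```csharp
using System;
using System.Threading;

class SingleWriter
{
    // 0 = state A, 1 = state B. An int fits in one atomic-sized slot, so a
    // reader sees 0 or 1 and nothing in between; volatile keeps the reader
    // from caching a stale value in a register.
    static volatile int _state;

    static void Main()
    {
        Thread reader = new Thread(() =>
        {
            while (_state == 0) { }          // spin until the writer publishes B
            Console.WriteLine("state = " + _state);
        });
        reader.Start();

        Thread.Sleep(10);
        _state = 1;                          // single atomic write by the one writer
        reader.Join();
    }
}
```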
 

Brian Gideon

Christof Nordiek said:
For a volatile field the reordering of the memory access by the optimizer
is restricted. [example snipped]

Since finished is volatile, in method Thread2 the write to result will
always occur before the write to finished, and in method Main the read
from finished will always occur before the read from result, so the read
from result in Main can't occur before the write in Thread2.

One other important behavior demonstrated in your example is that it
guarantees that writes to finished are seen by other threads. That
prevents the infinite loop in Main().

Brian
 

Chris Mullins [MVP]

Samuel R. Neff said:
When is it appropriate to use "volatile" keyword? The docs simply
state:
"The volatile modifier is usually used for a field that is accessed by
multiple threads without using the lock Statement (C# Reference)
statement to serialize access. "

But when is it better to use "volatile" instead of "lock" ?

I would recommend using locks and properties rather than volatile variables
or Interlocked methods.

Locking is easier and more straightforward, and has fewer subtle issues,
than the other two approaches.
 

Ben Voigt

You can also use the System.Threading.Interlocked class which maintains
volatile semantics.

You should use volatile and Interlocked together, neither fully replaces the
other.
 

Willy Denoyette [MVP]

Ben Voigt said:
You should use volatile and Interlocked together, neither fully replaces
the other.

Not necessarily; there is no need for volatile as long as you use
Interlocked consistently across all threads in the process. This means that
once you access a shared variable using Interlocked, all threads should use
Interlocked.

Willy.
 

Ben Voigt

Willy Denoyette said:
Not necessarily; there is no need for volatile as long as you use
Interlocked consistently across all threads in the process. This means that
once you access a shared variable using Interlocked, all threads should use
Interlocked.

I don't think so, actually. Without volatile semantics, the compiler is
free to cache the value of any parameter, including in/out parameters. Say
you are calling an Interlocked method in a loop. If the variable is not
volatile, the compiler can actually call Interlocked on a local copy, and
then write the value to the real variable once, at the end of the loop (and
worse, it can do so in a non-atomic way). Anything that maintains correct
operation from the perspective of the calling thread is permissible for
non-volatile variable access. Why would a compiler do this? For optimal
use of cache. By using a local copy of a variable passed byref, locality of
reference is improved, and additionally, a thread's stack (almost) never
incurs cache coherency costs.

Note that this is not a problem for pass-by-pointer, which must use the true
address of the referenced variable in order to enable pointer arithmetic.
But pointer arithmetic isn't allowed for tracking handles; a handle is an
opaque value anyway.

For lockless data structures, always use volatile. And then stick that
volatile variable close in memory to what it is protecting, because CPU
cache has to load and flush an entire cache line at once, and volatile write
semantics require flushing all pending writes.
 

Jon Skeet [C# MVP]

Ben Voigt said:
I don't think so, actually. Without volatile semantics, the compiler is
free to cache the value of any parameter, including in/out parameters. Say
you are calling an Interlocked method in a loop. If the variable is not
volatile, the compiler can actually call Interlocked on a local copy, and
then write the value to the real variable once, at the end of the loop (and
worse, it can do so in a non-atomic way).

No - the CLI spec *particularly* mentions Interlocked operations, and
that they perform implicit acquire/release operations. In other words,
the JIT can't move stuff around in this particular case. Interlocked
would be pretty pointless without this.
 

Willy Denoyette [MVP]

Ben Voigt said:
I don't think so, actually. Without volatile semantics, the compiler is
free to cache the value of any parameter, including in/out parameters.
Say you are calling an Interlocked method in a loop. If the variable is
not volatile, the compiler can actually call Interlocked on a local copy,
and then write the value to the real variable once, at the end of the loop
(and worse, it can do so in a non-atomic way). Anything that maintains
correct operation from the perspective of the calling thread is
permissible for non-volatile variable access. Why would a compiler do
this? For optimal use of cache. By using a local copy of a variable
passed byref, locality of reference is improved, and additionally, a
thread's stack (almost) never incurs cache coherency costs.

Note that this is not a problem for pass-by-pointer, which must use the
true address of the referenced variable in order to enable pointer
arithmetic. But pointer arithmetic isn't allowed for tracking handles, a
handle is an opaque value anyway.

For lockless data structures, always use volatile. And then stick that
volatile variable close in memory to what it is protecting, because CPU
cache has to load and flush an entire cache line at once, and volatile
write semantics require flushing all pending writes.



No, not at all. Interlocked operations imply a full fence; that is, reads
have acquire and writes have release semantics. That means that the JIT may
not register these variables nor store them locally, and cannot move stuff
around them.
Think of this: what would be the use of Interlocked operations in languages
that don't support volatile (like VB.NET) or good old C/C++ (except VC7 and
up)?
I also don't agree with your statement that you should *always* use volatile
in lock-free or low-lock scenarios. IMO, you should almost never use
volatile unless you perfectly understand the semantics of the memory model
of the CLR/CLI (ECMA differs from V1.X differs from V2, for instance) and the
memory model of the CPU (IA32 vs. IA64). Last year I was involved in the
resolution of a number of nasty bugs, all of them the result of people
trying to out-smart the system by applying lock-free or low-lock techniques
using volatile. Since then, whenever I see volatile I get very suspicious,
really...


Willy.
 

Barry Kelly

Willy said:
I also don't agree with your statement that you should *always* use volatile
in lock-free or low-lock scenarios.

As far as I can see from the rest of your post, I think you've made a
mis-statement here. I think what you mean to say is that you shouldn't
use lock-free or low-locking unless there's no alternative, not that
volatile shouldn't be used - because volatile is usually very necessary
in order to get memory barriers right in those circumstances.
IMO, you should almost never use
volatile unless you perfectly understand the semantics of the memory model
of the CLR/CLI (ECMA differs from V1.X differs from V2, for instance) and the
memory model of the CPU (IA32 vs. IA64). Last year I was involved in the
resolution of a number of nasty bugs, all of them the result of people
trying to out-smart the system by applying lock-free or low-lock techniques
using volatile. Since then, whenever I see volatile I get very suspicious,
really...

I agree with you about 'volatile' raising red flags when you see it, but
the cure is proper locking where possible and careful reasoning (rather
than shotgun 'volatile' and guesswork), not simply omitting 'volatile'.

-- Barry
 

Willy Denoyette [MVP]

Barry Kelly said:
As far as I can see from the rest of your post, I think you've made a
mis-statement here. I think what you mean to say is that you shouldn't
use lock-free or low-locking unless there's no alternative, not that
volatile shouldn't be used - because volatile is usually very necessary
in order to get memory barriers right in those circumstances.


I agree with you about seeing 'volatile' and it raising red flags, but
the cure is to use proper locking if possible, and careful reasoning
(rather than shotgun 'volatile' and guesswork), rather than simply
omitting 'volatile'.

Well, I wasn't suggesting omitting 'volatile', sorry if I gave that
impression. What I meant was that you should be very careful when looking
for lock-free or low-locking alternatives, and if you do use them, that you
should not "always" use volatile.
Note that there are alternatives to volatile fields: there are
Thread.MemoryBarrier, Thread.VolatileRead, Thread.VolatileWrite and the
Interlocked APIs, and these alternatives have IMO the (slight) advantage
that they "force" developers to reason about their usage, something which
is less the case (from what I've learned when talking with other devs
across several teams) with volatile.
But here also, you need to be very careful (the red flag should be raised
whenever you see any of these too). You need to reason about their usage, and
that's the major problem when writing threaded code: even experienced
developers have a hard time reasoning about multithreading using locks, and
programming models that require one to reason about how and when to use
explicit fences or barriers are IMO too difficult, even for experts, to use
reliably in mainstream computing. And mainstream computing is what .NET is
all about, isn't it?
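For illustration, a sketch of my own mirroring the earlier finished/result example: the same publish/observe handshake written with the explicit Thread.VolatileRead/VolatileWrite APIs instead of a volatile field (note these overloads take ref int, not bool, hence the 0/1 flag):

```csharp
using System;
using System.Threading;

class ExplicitVolatile
{
    static int _result;
    static int _finished;   // 0/1 flag; note: no volatile modifier on the field

    static void Worker()
    {
        _result = 143;
        Thread.VolatileWrite(ref _finished, 1);   // release: _result is published first
    }

    static void Main()
    {
        new Thread(Worker).Start();
        while (Thread.VolatileRead(ref _finished) == 0) { }  // acquire: spin until flagged
        Console.WriteLine("result = " + _result);            // guaranteed to see 143
    }
}
```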

Willy.


 

Ben Voigt

Willy Denoyette said:
No, not at all. Interlocked operations imply a full fence, that is, reads
have acquire and writes have release semantics. That means that the JIT
may not register these variables nor store them locally and cannot move
stuff around them.

Let's look at the Win32 declaration for an Interlocked function:

LONG InterlockedExchange(
    LONG volatile* Target,
    LONG Value
);

Clearly, Target is intended to be the address of a volatile variable.
Sure, you can pass a non-volatile pointer, and there is an implicit
conversion, but if you do *the variable will be treated as volatile only
inside InterlockedExchange*. The compiler can still do anything outside
InterlockedExchange, because it is dealing with a non-volatile variable.
And, it can't possibly change behavior when InterlockedExchange is called,
because the call could be made from a different library, potentially not yet
loaded.

Consider this:

/* compilation unit one */
void DoIt(LONG *target)
{
    LONG value = /* some long calculation here */;
    if (value != InterlockedExchange(target, value))
    {
        /* some complex operation here */
    }
}

/* compilation unit two */

extern void DoIt(LONG * target);
extern LONG shared;

void outer(void)
{
    for( int i = 0; i < 1000; i++ )
    {
        DoIt(&shared);
    }
}

Now, clearly, the compiler has no way of telling that DoIt uses Interlocked
access, since DoIt didn't declare volatile semantics on the pointer passed
in. So the compiler can, if desired, transform outer thusly:

void outer(void)
{
    LONG goodLocalityOfReference = shared;
    for( int i = 0; i < 1000; i++ )
    {
        DoIt(&goodLocalityOfReference);
    }
    shared = goodLocalityOfReference;
}

Except for one thing. In native code, pointers have values that can be
compared, subtracted, etc. So the compiler has to honestly pass the address
of shared. In managed code, with tracking handles, the compiler doesn't
have to preserve the address of the variable (that would, after all, defeat
compacting garbage collection). Oh, sure, the JIT has a lot more
information about what is being called than a native compiler does; it
almost gets rid of separate compilation units... but not quite. With
dynamically loaded assemblies and reflection in the mix, it is just as
helpless as a "compile-time" compiler.

I'm fairly sure that the current .NET runtime doesn't actually do any such
optimization as I've described. But I wouldn't bet against such things
being added in the future, when NUMA architectures become so widespread that
the compiler has to optimize for them.

Be safe, use volatile on every variable you want to act volatile, which
includes every variable passed to Interlocked.
Think of this, what would be the use of Interlocked operation when used in
languages that don't support volatile (like VB.NET) or good old C/C++
(except VC7 and up).

VC++, all versions, and all other PC compilers that I'm aware of (as in, not
embedded), support volatile to the extent needed to invoke an interlocked
operation. That is, the real variable is always accessed at the time
specified by the compiler. The memory fences are provided by the
implementation of Interlocked*, independent of the compiler version.
I also don't agree with your statement that you should *always* use
volatile in lock free or low lock scenario's. IMO, you should almost never
use volatile, unless you perfectly understand the semantics of the memory
model of the CLR/CLI (ECMA differs from V1.X differs from V2 for instance)
and the memory model of the CPU (IA32 vs. IA64). The last year I was
involved in the resolution of a number of nasty bugs , all of them where
the result of people trying to out-smart the system by applying lock free
or low lock techniques using volatile, since then whenever I see volatile
I'm getting very suspicious, really.......

You are claiming that you should almost never use lock free techniques, and
thus volatile should be rare. This hardly contradicts my statement that
volatile should always be used in lock free programming.
 

Willy Denoyette [MVP]

Ben Voigt said:
Let's look at the Win32 declaration for an Interlocked function:

LONG InterlockedExchange(
    LONG volatile* Target,
    LONG Value
);

Clearly, Target is intended to be the address of a volatile variable.
Sure, you can pass a non-volatile pointer, and there is an implicit
conversion, but if you do *the variable will be treated as volatile only
inside InterlockedExchange*. The compiler can still do anything outside
InterlockedExchange, because it is dealing with a non-volatile variable.

Sure, but this was not my point. The point is that Interlocked operations
imply barriers, full or not. "volatile" implies full barriers, so they
both imply barriers, but they serve different purposes. One does not exclude
the other, but that doesn't mean they should always be used in tandem; it all
depends on what you want to achieve in your code, what guarantees you want.
Anyway, the docs do not impose it: the C# docs on Interlocked don't even
mention volatile, and the Win32 docs (Interlocked APIs) don't spend a word
on the volatile argument. (Note that volatile was added to the
signature after NT4 SP1.)
And, it can't possibly change behavior when InterlockedExchange is called,
because the call could be made from a different library, potentially not
yet loaded.

Sorry, but you are mixing native code and managed code semantics. What I
mean is that the semantics of the C (native) volatile are not the same as
the semantics of C# 'volatile'. So when I referred to C++ supporting
"volatile" I was referring to the managed dialects (VC7.x and VC8), whose
volatile semantics are obviously the same as all other languages'.
I don't want to discuss the semantics of volatile in standard C/C++ here;
they are so imprecise that IMO it would lead to an endless discussion, not
relevant to C#.
I don't want to discuss the semantics of Win32 Interlocked either: the
"Win32 interlocked APIs" accept pointers to volatile items, while .NET does
accept "volatile pointers" (in unsafe context) as arguments of a method
call, but treats the item as non-volatile. Also, C# will issue a warning
when passing a volatile field by ref (as required by Interlocked
operations); that means that the field is volatile, but the reference to it
will not be treated as volatile.

Consider this: [example snipped]

Be safe, use volatile on every variable you want to act volatile, which
includes every variable passed to Interlocked.


VC++, all versions, and all other PC compilers that I'm aware of (as in,
not embedded), support volatile to the extent needed to invoke an
interlocked operation. That is, the real variable is always accessed at
the time specified by the compiler. The memory fences are provided by the
implementation of Interlocked*, independent of the compiler version.

Where in the docs (MSDN Platform SDK, etc.) do they state that Interlocked
should always be used on volatile items?

You are claiming that you should almost never use lock free techniques,
and thus volatile should be rare. This hardly contradicts my statement
that volatile should always be used in lock free programming.

Kind of. I'm claiming that you should rarely use lock-free techniques when
using C# in mainstream applications. I've seen too many people trying to
implement lock-free code, and if you ask "why", the answer is mostly
"performance"; and if you ask whether they measured their "locked"
implementation, the answer is mostly "well, I have no 'locked'
implementation". This is what I call "premature optimization" without any
guarantees, other than probably producing unreliable code, which is (IMO)
more important than performant code.
IMO the use of volatile should be rare in the sense that you had better use
locks, and only use volatile for the most simple cases (which doesn't imply
'rare'), for instance when you need to guarantee that all possible observers
of a field (of a type accepted by volatile) see the same value when that
value has been written to by another observer.
Remember, "volatile" is something taken care of by the JIT; all it does is
eliminate some of the possible optimizations, like (but not restricted to):
- volatile items cannot be registered...
- multiple stores cannot be suppressed...
- re-ordering is restricted.
- ...
But keep in mind that 'volatile' suppresses optimizations for all possible
accesses, even when not subject to multiple observers (threads), and that
volatile field accesses can move; some people think they can't....

Willy.
 

Guest

Sorry, coming in late; but there are some poor implications with respect to
"volatile" and "lock" in this thread (other statements, like "..there is no
need for volatile [when] you Interlock consistently across all threads in the
process.", are valid).

"lock" and "volatile" are two different things. You may not always need
"lock" with a type that can be declared volatile; but you should always use
volatile with a member that is accessed by multiple threads (an optimization
would be that you wouldn't need "volatile" if Interlocked were always used
with the member in question, if applicable--as has been noted). For example,
why would anyone assume that the line commented with "// *" was thread-safe
simply because "i" was declared with "volatile":

volatile int i;
static Random random = new Random();

static int Transmogrify(int value)
{
    return value *= random.Next();
}

void Method()
{
    i = Transmogrify(i); // *
}

"volatile" doesn't make a member thread-safe, the above operation still
requires at least two instructions (likely four), which are entirely likely
to be separated by preemption to another thread that modifies i.
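A common lock-free fix for a compound read-modify-write like that one is an Interlocked.CompareExchange retry loop, which detects exactly the preemption described above and retries (a sketch with my own names):

```csharp
using System;
using System.Threading;

class CasLoop
{
    static int _i = 20;

    // Retries until no other thread changed _i between our read and our write,
    // making the whole read-transform-write appear atomic.
    static void Update(Func<int, int> transform)
    {
        int seen, computed;
        do
        {
            seen = _i;                    // snapshot the current value
            computed = transform(seen);   // preemption here is now harmless...
        }
        while (Interlocked.CompareExchange(ref _i, computed, seen) != seen); // ...the CAS detects it
    }

    static void Main()
    {
        Update(v => v * 2 + 2);
        Console.WriteLine(_i);  // 42
    }
}
```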

By the same token, the lock statement surrounding access to a member doesn't
stop the compiler from having optimized use of that member by caching it to a
register, especially if that member is declared in a different assembly that
was compiled before this code was written:

lock (lockObject)
{
    i = i + 1;
}

...yes, the compiler *could* assume that all members within the lock
statement block are likely accessible by multiple threads (implicit
volatile); but that's not its intention and it's certainly not documented as
doing that (and it would be pointless; other code knows nothing about this
block and could have optimized use of i by changing its order of access or
caching it to a register).

volatile and lock should be used in conjunction, one is not a replacement
for the other.
 

Jon Skeet [C# MVP]

By the same token, the lock statement surrounding access to a member doesn't
stop the compiler from having optimized use of that member by caching it to a
register, especially if that member is declared in a different assembly that
was compiled before this code was written:

lock (lockObject)
{
    i = i + 1;
}

Acquiring a lock has acquire semantics, and releasing a lock has
release semantics. You don't need any volatility if all access to any
particular item of shared data is always made having acquired a certain
lock.

If different locks are used, you could be in trouble, but if you always
lock on the same reference (when accessing the same shared data) you're
guaranteed to be okay.
...yes, the compiler *could* assume that all members within the lock
statement block are likely accessible by multiple threads (implicit
volatile); but that's not its intention and it's certainly not documented as
doing that (and it would be pointless; other code knows nothing about this
block and could have optimized use of i by changing its order of access or
caching it to a register).

It certainly *is* documented. ECMA 335, section 12.6.5:

<quote>
Acquiring a lock (System.Threading.Monitor.Enter or entering a synchronized
method) shall implicitly perform a volatile read operation, and releasing a
lock (System.Threading.Monitor.Exit or leaving a synchronized method) shall
implicitly perform a volatile write operation.
</quote>
volatile and lock should be used in conjunction, one is not a replacement
for the other.

If you lock appropriately, you never need to use volatile.
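A minimal sketch (names are my own) of that discipline: every access to the shared count, reads included, happens under the same lock, so the field needs no volatile modifier:

```csharp
using System;
using System.Threading;

class Counter
{
    private readonly object _sync = new object();
    private int _count;   // no volatile needed: every access below holds _sync

    public int Value
    {
        get { lock (_sync) return _count; }   // Monitor.Enter performs a volatile read
    }

    public void Increment()
    {
        lock (_sync) _count++;                // Monitor.Exit performs a volatile write
    }
}

class Program
{
    static void Main()
    {
        Counter c = new Counter();
        Thread[] threads = new Thread[2];
        for (int t = 0; t < threads.Length; t++)
        {
            threads[t] = new Thread(() =>
            {
                for (int i = 0; i < 250000; i++) c.Increment();
            });
            threads[t].Start();
        }
        foreach (Thread t in threads) t.Join();
        Console.WriteLine(c.Value);  // 500000: no lost updates, no volatile
    }
}
```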
 

Willy Denoyette [MVP]

Jon Skeet said:
Acquiring a lock has acquire semantics, and releasing a lock has
release semantics. You don't need any volatility if all access to any
particular item of shared data is always made having acquired a certain
lock.

If different locks are used, you could be in trouble, but if you always
lock on the same reference (when accessing the same shared data) you're
guaranteed to be okay.


It certainly *is* documented. ECMA 335, section 12.6.5:

<quote>
Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly
perform a volatile read operation, and releasing a lock
(System.Threading.Monitor.Exit or leaving a
synchronized method) shall implicitly perform a volatile write
operation.


If you lock appropriately, you never need to use volatile.


True; when using locks, make sure you do it consistently. And that's exactly
why I said that I'm getting suspicious when I see a "volatile" field. Most
of the time this modifier is used because the author doesn't understand the
semantics of "volatile", or he's not sure about his own locking policy, or he
has no locking policy at all. Also, some may think that volatile implies a
fence, which is not the case: it only tells the JIT to turn off some of the
optimizations, like register allocation and load/store reordering, but it
doesn't prevent possible re-ordering and write buffering done by the CPU.
Note that this is a non-issue on X86 and X64 CPUs, given the memory model
enforced by the CLR, but it is an issue on IA64.

Willy.
 

Guest

Acquiring a lock has acquire semantics, and releasing a lock has
release semantics. You don't need any volatility if all access to any
particular item of shared data is always made having acquired a certain
lock.

...which only applies to reference types. Most of this discussion has been
revolving around value types (by virtue of Interlocked.Increment), for which
"lock" cannot apply; e.g. you can't switch from using lock on a member
to using Interlocked.Increment on that member: one works with references and
the other with value types (specifically Int32 and Int64). This is what
raised my concern.
It certainly *is* documented. ECMA 335, section 12.6.5:

<quote>
Acquiring a lock (System.Threading.Monitor.Enter or entering a
synchronized method) shall implicitly
perform a volatile read operation, and releasing a lock
(System.Threading.Monitor.Exit or leaving a
synchronized method) shall implicitly perform a volatile write
operation.
</quote>

...still doesn't document anything about the members/variables within the
locked block (please read my example). That quote applies only to the
reference used as the parameter for the lock.

There can be no lock acquire semantics for value members. Suggesting
"locking appropriately" cannot apply here, and can be misconstrued by some
people into creating something like "lock(myLocker){intMember = SomeMethod();}",
which does not do the same thing as making intMember volatile, increases
overhead needlessly, and still leaves a potential bug.
If you lock appropriately, you never need to use volatile.

Even if the discussion hasn't been about value types, that's a dangerous
statement, because it could only apply to reference types (i.e. if myObject
is wrapped with lock(myObject) in every thread, yes, I don't need to declare
it with volatile--but that's probably not why I'm using lock). In the
context of reference types, volatile only applies to the pointer (reference),
not anything within the object it references. Reference assignment is
atomic; there's no need to use lock to guard that sort of thing. You use
lock to guard a non-atomic invariant; volatile has nothing to do with
that--it has to do with the optimization (ordering, caching) of pointer/value
reads and writes.

Calling Monitor.Enter/Monitor.Exit is a pretty heavyweight means of
ensuring acquire semantics; at least 5 times slower if volatile is all you
need.

-- Peter
 
