Jon Skeet [C# MVP]
Stelrad Kypski said: Not obvious to me. What is it?
The reads are done in the same order as the increments, which means
that even without any reordering, any of the four combinations can be
observed.
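The point can be illustrated with a small self-contained sketch (not code from the thread): assume one thread performs x = 1 and then y = 1 while another reads x and then y, in the same order, and enumerate every possible interleaving of the two write steps among the two read steps:

```c
#include <stdbool.h>

/* Illustration: writer does  x = 1;  then  y = 1;  reader loads x then y
 * -- the same order as the writes.  Enumerating every interleaving shows
 * that all four (x, y) outcomes are observable even with no reordering
 * at all. */
typedef struct { int x, y; } observed;

/* Arguments: how many writer steps had completed before each read.
 * x = 1 is writer step 1, y = 1 is writer step 2. */
static observed run(int writes_before_read_x, int writes_before_read_y)
{
    observed o;
    o.x = (writes_before_read_x >= 1) ? 1 : 0;
    o.y = (writes_before_read_y >= 2) ? 1 : 0;
    return o;
}

/* Returns true iff every one of the four (x, y) pairs shows up across
 * all interleavings.  The reader's loads stay in program order, so the
 * read of y can never happen before the read of x (wy >= wx). */
bool all_four_possible(void)
{
    bool seen[2][2] = { { false, false }, { false, false } };
    for (int wx = 0; wx <= 2; wx++)
        for (int wy = wx; wy <= 2; wy++) {
            observed o = run(wx, wy);
            seen[o.x][o.y] = true;
        }
    return seen[0][0] && seen[0][1] && seen[1][0] && seen[1][1];
}
```

For example, (0, 1) arises when the reader loads x before either write and loads y after both.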
Jos Scherders said: Hi,
One of the posts in this very interesting thread referred to this article:
http://msdn2.microsoft.com:80/en-us/library/ms686355.aspx
This article includes the following piece of code (to fix the race
condition):
volatile int iValue;
volatile BOOL fValueHasBeenComputed = FALSE;

extern int ComputeValue();

void CacheComputedValue()
{
    if (!fValueHasBeenComputed)
    {
        iValue = ComputeValue();
        fValueHasBeenComputed = TRUE;
    }
}

BOOL FetchComputedValue(int *piResult)
{
    if (fValueHasBeenComputed)
    {
        *piResult = iValue;
        return TRUE;
    }
    else return FALSE;
}
My (completely ignorant) question is this:
Must fValueHasBeenComputed really be declared volatile for the program to
produce correct results? Isn't it true that, because iValue is declared
volatile, it is impossible for fValueHasBeenComputed to be TRUE while the
result of ComputeValue() has not yet been stored in iValue?
Of course, because fValueHasBeenComputed is not declared volatile, it may
be that FetchComputedValue() returns FALSE when in fact iValue does
contain the computed value, but the other case can never occur, i.e.
FetchComputedValue returning TRUE with *piResult incorrect.
Are my assumptions correct?
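For comparison, the same publish/consume pattern can be sketched in C11 atomics, which make the required ordering explicit: the constraint lands on the flag (release on its store, acquire on its load), while the value itself stays a plain variable. This is a sketch, not the MSDN code, and the 42 returned by ComputeValue is just a stand-in:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* C11-atomics sketch of the publish/consume pattern above.
 * The ordering lives on the *flag*: release on the store, acquire on
 * the load.  iValue needs no atomic or volatile qualifier, because the
 * release/acquire pair on the flag orders the accesses to it. */
static int iValue;                              /* plain variable      */
static atomic_bool fValueHasBeenComputed = false;

static int ComputeValue(void) { return 42; }    /* stand-in computation */

void CacheComputedValue(void)
{
    if (!atomic_load_explicit(&fValueHasBeenComputed,
                              memory_order_acquire)) {
        iValue = ComputeValue();
        /* release: the write to iValue cannot move below this store */
        atomic_store_explicit(&fValueHasBeenComputed, true,
                              memory_order_release);
    }
}

bool FetchComputedValue(int *piResult)
{
    /* acquire: the read of iValue cannot move above this load */
    if (atomic_load_explicit(&fValueHasBeenComputed,
                             memory_order_acquire)) {
        *piResult = iValue;
        return true;
    }
    return false;
}
```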
William Stacey said:| I believe the rules you write are mistaken and not supported by the
| standards or links you post, and in any case, interlocked operations
| only affect a *single* thread, not all threads.
Thanks Barry. Is this documented somewhere? An Interlocked operation would
hardly be useful if that were the case.
They affect other threads by virtue of their
atomic nature *and* the barrier they impose. In the case of a shared
variable that has only been changed in the CPU cache, executing a normal
interlocked instruction makes that variable (and all cached memory of the
current CPU) be committed to physical memory before the interlocked
instruction itself executes. So other threads will "see" the change made
by the interlocked operation only after seeing earlier changes to other
shared variables (i.e. release semantics).
For example:
a++;
InterlockedIncrement (ref b);
c++;
Other threads will only see changes in "a" prior to changes in "b", and will
see changes in "c" after changes in "b". Critical-section lock algorithms,
such as Peterson's algorithm, rely on this behavior or would not work
on SMP hardware. Maybe we are saying the same thing and I am just not
understanding your intent.
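The a / b / c example above might be rendered in C11 atomics like this (an analogue, not the Win32 code; seq_cst here stands in for the full barrier that InterlockedIncrement imposes):

```c
#include <stdatomic.h>

/* C11 analogue of:
 *     a++;
 *     InterlockedIncrement(&b);
 *     c++;
 * A seq_cst read-modify-write approximates InterlockedIncrement's full
 * barrier: the write to a cannot sink below it, and the write to c
 * cannot rise above it, so other threads observe a's change no later
 * than b's, and c's change no earlier than b's. */
static int a, c;
static atomic_int b;

void writer(void)
{
    a++;                                               /* plain write */
    atomic_fetch_add_explicit(&b, 1, memory_order_seq_cst);
    c++;                                               /* plain write */
}
```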
William Stacey said:| Joe Duffy has replied to the blog post, by the way, confirming this.
Thanks. Good to see his reply. So he is saying Interlocked *would* prevent
the CLR reordering issue? So doing something like:
int a = x;
int b = Interlocked.Read(ref y);
in your example would fix it in terms of the CLR reordering issue? As he
said, it is easier to use volatile, but just to finish the thread: or would
you need Interlocked.Read on both lines?
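Whether one barrier suffices depends on which reordering has to be ruled out. A C11 sketch of the two-line snapshot above (a rough analogue, not .NET code; the names x and y are assumed from the earlier example):

```c
#include <stdatomic.h>

/* C11 sketch of:
 *     int a = x;
 *     int b = Interlocked.Read(ref y);
 * Interlocked.Read is commonly implemented as a compare-exchange, i.e.
 * a full-barrier read-modify-write; a seq_cst fetch_add of 0 is a
 * reasonable C11 analogue.  Its release half keeps the earlier plain
 * read of x from sinking below it, and its acquire half keeps later
 * operations from rising above it -- which is why the barrier on the
 * second read alone can already constrain the first. */
static int x;
static atomic_long y;

void snapshot(int *a, long *b)
{
    *a = x;   /* plain read, ordered by the barrier below */
    *b = atomic_fetch_add_explicit(&y, 0, memory_order_seq_cst);
}
```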