Multithreading in C#


eastcoast127

Hello,
I currently have a program which can run 1-10 threads. All of the threads are exactly the same. These threads access internet pages and therefore run at different speeds. I now need to know which thread hits a certain command in the thread 1st, which one is 2nd, which one is 3rd, and so on. Can anyone suggest how I could do this, please?
Thanks,
Jon
 

Arne Vajhøj

I currently have a program which can run 1-10 threads. All of the
threads are exactly the same. These threads access internet pages and
therefore run at different speeds. I now need to know which thread
hits a certain command in the thread 1st, which one is 2nd, which one
is 3rd, and so on. Can anyone suggest how I could do this, please?

I think the easiest approach would be to have a
SynchronizedCollection<WhateverTypeYouUseToIdentifyThread>
and have the code Add its thread to that.
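A rough sketch of that idea, assuming ManagedThreadId is used as the
identifier and ten worker threads as a placeholder (SynchronizedCollection<T>
lives in System.ServiceModel.dll):

using System;
using System.Collections.Generic;   // SynchronizedCollection<T> (System.ServiceModel.dll)
using System.Threading;

class ArrivalOrderDemo
{
    // Threads add their identifier here in the order they reach the checkpoint;
    // SynchronizedCollection serializes the Add calls internally.
    static readonly SynchronizedCollection<int> _arrivals =
        new SynchronizedCollection<int>();

    static void Worker()
    {
        // ... fetch the internet page, etc. ...

        // The "certain command" of interest:
        _arrivals.Add(Thread.CurrentThread.ManagedThreadId);
    }

    static void Main()
    {
        var threads = new List<Thread>();
        for (int i = 0; i < 10; i++)
        {
            var t = new Thread(Worker);
            threads.Add(t);
            t.Start();
        }
        threads.ForEach(t => t.Join());

        // _arrivals now holds the managed thread ids in arrival order.
        Console.WriteLine(string.Join(", ", _arrivals));
    }
}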

Arne
 

Arne Vajhøj

Hello
I currently have a program which can run 1-10 threads. All of the threads
are exactly the same. These threads access internet pages and therefore
run at different speeds. I now need to know which thread hits a certain
command in the thread 1st, which one is 2nd, which one is 3rd, and so on.
Can anyone suggest how I could do this, please?
Thanks
Jon

There are many different approaches you can take, but I think indexing an
array with a shared variable would be the easiest. For example, in the class
that implements the code you execute in your threads:

static int[] _threadRegister = new int[10];
static int _registerIndex = -1;

void RegisterCheckpoint()
{
    int localIndex = System.Threading.Interlocked.Increment(ref _registerIndex);

    _threadRegister[localIndex] =
        System.Threading.Thread.CurrentThread.ManagedThreadId;
}

Each thread would call the RegisterCheckpoint() method after it hits your
certain command. By using the Interlocked class, each thread gets a unique
index into the array, into which it copies its thread ID.

Note that the above is somewhat more efficient than locking, at the expense
of precision. There's a race condition in which a thread could hit the
checkpoint of interest first, but then get pre-empted, allowing some other
thread to register its arrival at the checkpoint earlier.

Typically, this should not be a problem...if your threads are so close in
execution that this would occur, it's debatable whether it really matters
which winds up registering itself first. But if you really really care
about it, you can use a full lock around the checkpoint to ensure that
the thread that arrives at the start of the checkpoint first is always the
one that registers itself. For example:

static readonly object _lock = new object();

void ActualWorkingMethod()
{
    // do some stuff here
    // other stuff here

    // here's our checkpoint:
    lock (_lock)
    {
        _threadRegister[++_registerIndex] =
            Thread.CurrentThread.ManagedThreadId;

        // do checkpoint work here
    }
}

The above will ensure that the same thread that gets to do the "checkpoint
work" first is always exactly the thread that is also registered first.

The above is just sample code. You may want to adjust it so that the
registration array is allocated according to the actual number of threads,
store it in a different place, use something other than the ManagedThreadId
as the thread identifier in your registration array, etc.
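Tying the pieces together, a complete test driver might look roughly like
this (the thread count and the simulated page download are placeholders of
mine):

using System;
using System.Threading;

class CheckpointDemo
{
    static int[] _threadRegister;
    static int _registerIndex = -1;

    static void RegisterCheckpoint()
    {
        int localIndex = Interlocked.Increment(ref _registerIndex);
        _threadRegister[localIndex] = Thread.CurrentThread.ManagedThreadId;
    }

    static void Worker()
    {
        // simulate the page download taking a varying amount of time
        Thread.Sleep(new Random(Thread.CurrentThread.ManagedThreadId).Next(10, 100));

        RegisterCheckpoint();   // the "certain command" of interest
    }

    static void Main()
    {
        const int threadCount = 10;                  // 1-10 in the original question
        _threadRegister = new int[threadCount];      // sized to the actual number of threads

        var threads = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++)
        {
            threads[i] = new Thread(Worker);
            threads[i].Start();
        }
        foreach (var t in threads)
            t.Join();

        Console.WriteLine("Arrival order (managed thread ids): " +
                          string.Join(", ", _threadRegister));
    }
}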

I would go for the lock method no matter whether the race condition
is important or not.

Reason: Interlocked.Increment does not guarantee visibility to
all threads per the .NET memory model. Interlocked.Increment does
guarantee visibility to all threads on x86 and x86-64. The chance
of the code ever having to run on something other than x86 or
x86-64 is most likely microscopic, but still, why not do it
right?

Arne
 

Marcel Müller

Reason: Interlocked.Increment does not guarantee visibility to
all threads per the .NET memory model. Interlocked.Increment does
guarantee visibility to all threads on x86 and x86-64. The chance
of the code ever having to run on something other than x86 or
x86-64 is most likely microscopic, but still, why not do it
right?

If you use Volatile.Read() in the other thread, the visibility problem
should be gone. Declaring the field as volatile would do the job as
well, but it produces a warning that volatile is discarded when the field
is passed as a ref parameter to Interlocked.Increment().
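A small illustration of both options (the field names are mine):

using System.Threading;

class RegisterIndexExample
{
    // Option 1: plain int field. Writers publish with Interlocked.Increment,
    // readers use Volatile.Read to be sure they see the latest value.
    static int _registerIndex = -1;

    public static int Publish()
    {
        return Interlocked.Increment(ref _registerIndex);
    }

    public static int ReadLatest()
    {
        return Volatile.Read(ref _registerIndex);
    }

    // Option 2: volatile field. Plain reads are already volatile reads, but
    // passing the field by ref to Interlocked.Increment triggers compiler
    // warning CS0420 ("a reference to a volatile field will not be treated
    // as volatile").
    static volatile int _volatileIndex = -1;

    public static int PublishVolatile()
    {
        // The warning is harmless here, since Interlocked is itself atomic.
#pragma warning disable 420
        return Interlocked.Increment(ref _volatileIndex);
#pragma warning restore 420
    }
}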


Marcel
 

Arne Vajhøj

If you use Volatile.Read() in the other thread, the visibility problem
should be gone. Declaring the field as volatile would do the job as
well, but it produces a warning that volatile is discarded when the field
is passed as a ref parameter to Interlocked.Increment().

True.

Lock is not the only way to achieve the goal.

I just like lock because it solves two problems.

Arne
 

Marcel Müller

Lock is not the only way to achieve the goal.

I just like lock because it solves two problems.

Yes, but one should not acquire locks too often. E.g. it might be
bad advice to synchronize an often-used property getter with a lock.
Lock-free solutions like the double-check idiom could be about ten times
faster than a lock. I just prepared some example applications of lock-free
patterns for training purposes.

OK, this advantage may shrink to some degree if the target platform
does not ensure acquire and release semantics the way x86/x64 does.
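To show what I mean by the double-check idiom in a getter, here is a rough
sketch (the class and field names are mine; the volatile field is what keeps
it correct beyond x86/x64):

class CachedValueProvider
{
    readonly object _sync = new object();

    // volatile so the lock-free read on the fast path is safe under the
    // .NET memory model, not just on x86/x86-64
    volatile ExpensiveObject _value;

    public ExpensiveObject Value
    {
        get
        {
            // First check without taking the lock: the common, fast path.
            ExpensiveObject value = _value;
            if (value == null)
            {
                lock (_sync)
                {
                    // Second check inside the lock: another thread may have
                    // initialized the field while we were waiting.
                    if (_value == null)
                        _value = new ExpensiveObject();
                    value = _value;
                }
            }
            return value;
        }
    }
}

class ExpensiveObject { /* something costly to construct */ }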


Marcel
 

Arne Vajhøj

Yes, but one should not acquire locks too often. E.g. it might be
bad advice to synchronize an often-used property getter with a lock.

Most good tools can be overused.
Lock-free solutions like the double-check idiom could be about ten times
faster than a lock.

Double-checked locking is also broken according to the .NET memory model,
but works on x86/x86-64.

I would never use that.
OK, this advantage may shrink to some degree if the target platform
does not ensure acquire and release semantics the way x86/x64 does.

It is not the speed advantage but the thread safety that goes away.
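To make it concrete, this is the classic double-checked locking shape being
discussed (illustrative names; note the field is *not* volatile):

class LazyHolder
{
    static readonly object _sync = new object();
    static Helper _instance;   // note: not volatile

    public static Helper Instance
    {
        get
        {
            if (_instance == null)          // unsynchronized read
            {
                lock (_sync)
                {
                    if (_instance == null)
                        _instance = new Helper();
                }
            }
            // Per the ECMA/.NET memory model this unsynchronized read is not
            // guaranteed to observe a fully constructed Helper; the stronger
            // hardware model of x86/x86-64 happens to make it work in practice.
            return _instance;
        }
    }
}

class Helper { }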

Arne
 

Arne Vajhøj

Most good tools can be overused.


Double-checked locking is also broken according to the .NET memory model,
but works on x86/x86-64.

I would never use that.


It is not the speed advantage but the thread safety that goes away.

And even though it is rather unlikely that a .NET developer will
ever get to develop for Itanium (Windows on Itanium is very rare),
having to develop for ARM is not that unlikely. And ARM does not have
the same memory model as x86/x86-64.

Arne
 

Marcel Müller

Double-checked locking is also broken according to the .NET memory model,
but works on x86/x86-64.

With volatile it should be fine.

In fact it is part of the framework. LazyInitializer.EnsureInitialized
does exactly that, as long as you use the overload with a locking object.
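For example, a lazily initialized property using that overload could look
like this (the names are my own; the overload with the ref bool and ref
object sync lock is the one I mean):

using System;
using System.Threading;

class ExpensiveResourceHolder
{
    ExpensiveResource _resource;
    bool _initialized;
    object _syncLock;   // may start as null; EnsureInitialized creates it on demand

    public ExpensiveResource Resource
    {
        get
        {
            // Lock-free fast path once _initialized is true; the lock is only
            // taken for the first initialization.
            return LazyInitializer.EnsureInitialized(
                ref _resource, ref _initialized, ref _syncLock,
                () => new ExpensiveResource());
        }
    }
}

class ExpensiveResource
{
    public ExpensiveResource()
    {
        Console.WriteLine("initialized once");
    }
}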
It is not the speed advantage but the thread safety that goes away.

It is only unsafe if it is implemented wrong.


Marcel
 

Arne Vajhøj

With volatile it should be fine.

In fact it is part of the framework. LazyInitializer.EnsureInitialized
does exactly that, as long as you use the overload with a locking object.

With volatile it is fine.

But volatile also has a cost.

Arne
 

Marcel Müller

But volatile also has a cost.

I did some performance tests. It depends.

With target "any CPU", a volatile read in the getter is about 20% slower
than without. With target x64 there is no difference.

But a lock was about 10 times slower at a high concurrency level. With
only one thread the difference is smaller.
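For anyone who wants to reproduce that kind of comparison, a very rough
harness along these lines would do (my own sketch, not the actual test;
delegate-call overhead will blur the absolute numbers):

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class GetterBenchmark
{
    static int _plain = 42;
    static int _volatileBacked = 42;
    static readonly object _sync = new object();

    static int PlainGetter()    { return _plain; }
    static int VolatileGetter() { return Volatile.Read(ref _volatileBacked); }
    static int LockedGetter()   { lock (_sync) return _plain; }

    static void Measure(string name, Func<int> getter)
    {
        const int iterations = 10000000;
        var sw = Stopwatch.StartNew();

        // several threads hammering the getter to get some concurrency
        Parallel.For(0, Environment.ProcessorCount, p =>
        {
            int sink = 0;
            for (int i = 0; i < iterations; i++)
                sink ^= getter();
        });

        sw.Stop();
        Console.WriteLine("{0}: {1} ms", name, sw.ElapsedMilliseconds);
    }

    static void Main()
    {
        Measure("plain getter", PlainGetter);
        Measure("Volatile.Read getter", VolatileGetter);
        Measure("lock getter", LockedGetter);
    }
}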


Marcel
 
