Simplest thread-safe pattern


John A Grandy

I have a class that I wish to make thread-safe.

My application is not high-intensity; it is unlikely that more than ~10 threads would ever access an instance of my class concurrently.

I have hundreds of public methods in this class.

What is a viable thread-safe pattern?

What about the following:

public class Class1
{
    private object thisLock = new object();

    public Class1()
    {
        lock ( thisLock )
        {
            // code
        }
    }

    public void Method1()
    {
        lock ( thisLock )
        {
            // code
        }
    }
}
 

Josip Medved

My application is not high-intensity; it is unlikely that more than ~10 threads would ever access an instance of my class concurrently.
What is a viable thread-safe pattern?

Using lock is the simplest thread synchronization there is.
If you need something more controllable, there is the Monitor class.
If there is a single writer and multiple readers, there is the
ReaderWriterLock class.
The Interlocked class is also useful for some operations.
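As a rough sketch of the single-writer/many-readers case mentioned above, ReaderWriterLock can guard a shared field like this (the SettingsCache class and its members are hypothetical names for illustration):

```csharp
using System;
using System.Threading;

// Hypothetical cache guarded by a ReaderWriterLock: many readers may
// enter simultaneously, but a writer gets exclusive access.
public class SettingsCache
{
    private readonly ReaderWriterLock _rwLock = new ReaderWriterLock();
    private string _value = "default";

    public string Read()
    {
        _rwLock.AcquireReaderLock(Timeout.Infinite);
        try
        {
            return _value;   // shared access; other readers are not blocked
        }
        finally
        {
            _rwLock.ReleaseReaderLock();
        }
    }

    public void Write(string newValue)
    {
        _rwLock.AcquireWriterLock(Timeout.Infinite);
        try
        {
            _value = newValue;   // exclusive access; readers wait
        }
        finally
        {
            _rwLock.ReleaseWriterLock();
        }
    }
}
```

The acquire/release pairs go in try/finally so an exception inside the protected region cannot leave the lock held forever.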

As you can guess, everything depends on what exactly your application
does with those threads and what your target performance is. There is
no general rule - that is why there are so many ways to synchronize
everything.
I would advise you to:
1. Continue using the lock statement.
2. Measure performance, measure some more, and measure again.
3. Optimize only if you have problems, and only where most of the
problems are.

Usually there are only a few hot locks, and over-optimizing everything
is usually a waste of time. Optimize only if you have an issue.

P.S. I do not know what kind of program you will create, but watch
that you don't create too many competing threads. A thread pool is
usually a better solution than managing all the threads yourself -
especially if they are short-lived.
 

John A Grandy

What do you think of [MethodImplAttribute(
MethodImplOptions.Synchronized )]?
 

Göran Andersson

John said:
What do you think of [MethodImplAttribute(
MethodImplOptions.Synchronized )] ?

From what I can gather, that only specifies that you made the method
thread-safe; it doesn't change the code in any way. You still have to
make it thread-safe yourself.
 

Josip Medved

What do you think of [MethodImplAttribute(
MethodImplOptions.Synchronized )]?
Don't use it. It's equivalent to "lock (this)", which is deadlock-prone. See
http://blogs.msdn.com/bclteam/archive/2004/01/20/60719.aspx, as well as
many other pages.

It is as deadlock-prone as any other synchronization method. The real
issue is knowing which object to synchronize on. The real problem here
was that Microsoft's own documentation for a while said to use
lock(this) - that is deadlock-prone if followed. You should always
lock on a private object.
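A minimal sketch of the private-lock-object pattern recommended here (the Account class and its members are hypothetical):

```csharp
using System;

// Synchronize on a private object that no outside code can ever lock,
// instead of lock(this). Callers CAN lock on the instance itself, which
// is exactly why lock(this) invites deadlock.
public class Account
{
    // Only this class can take _syncRoot, so no external caller can
    // hold our lock while waiting on something we need.
    private readonly object _syncRoot = new object();
    private decimal _balance;

    public void Deposit(decimal amount)
    {
        lock (_syncRoot)
        {
            _balance += amount;
        }
    }

    public decimal Balance
    {
        get
        {
            lock (_syncRoot)
            {
                return _balance;
            }
        }
    }
}
```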
 

Jeroen Mostert

Josip said:
What do you think of [MethodImplAttribute(
MethodImplOptions.Synchronized )] ?
Don't use it. It's equivalent to "lock (this)", which is deadlock-prone. See
http://blogs.msdn.com/bclteam/archive/2004/01/20/60719.aspx, as well as
many other pages.

It is as deadlock-prone as any other synchronization method.

No, it's more deadlock-prone. You are of course right that deadlock is a
danger with most synchronization methods -- I should have explicitly said
"more deadlock-prone than other methods".
 

John A Grandy

The VS compiler won't allow bracketing a method itself with lock ...

lock {
    public void Method1()
    {
    }
}

gives an "Invalid token 'lock' in class, struct, or interface member
declaration" error

... so, [MethodImplAttribute( MethodImplOptions.Synchronized )] is just
some special magic which accomplishes the same behind the scenes ... ?


Interestingly, on adding this attribute to all of my public methods that
perform data access, all of my System.Data.SqlClient concurrency-related
exceptions magically went away.

What are the practical drawbacks of using this attribute on a per-method
basis?


Josip Medved said:
What do you think of [MethodImplAttribute(
MethodImplOptions.Synchronized )] ?

Same as having a lock statement around the whole method body.
 

Josip Medved

... so, [MethodImplAttribute( MethodImplOptions.Synchronized )] is just
some special magic which accomplishes the same behind the scenes ... ?

No, it does something like this:

public void Method1() {
    lock (this) {
        // some code
    }
}
Interestingly, on adding this attribute to all of my public methods that
perform data access, all of my System.Data.SqlClient concurrency-related
exceptions magically went away.

That is because if you add this to all your methods there is no real
concurrency.
What are the practical drawbacks of using this attribute on a per-method
basis?

Once a method is entered, all other methods need to wait. Even methods
that could return a result without interfering with other parts of the
code are made to wait. No matter how many threads you have, only one is
allowed access at any point in time.

Often this is too restrictive. The most common scenario (at least for
me) is having a few writes of data and many reads. In that case
ReaderWriterLock gives you a real performance boost (here lock does
really badly).

Also, having multiple lock objects, if your class does some things that
are not closely related, gives a nice boost.

However, to give real advice on what to use, one always needs to see
the code and how it is to be used; thus my advice to just put lock
everywhere and MEASURE where you have a performance problem. There is
no sense (IMHO) in building a perfect architecture to squeeze out every
possible nanosecond. If your program is slow, just optimize the few
methods that do have a problem and you will see a huge performance
boost (the 80/20 rule applies). Time spent optimizing something your
customer will not feel is wasted time, better spent on testing and
debugging.

There are not many applications where every processor clock is so
precious that everything needs to be perfect (and I have my doubts that
anybody writes those applications in C#).
 

John A Grandy

<<<
Once a method is entered, all other methods need to wait. Even methods
that could return a result without interfering with other parts of the
code are made to wait. No matter how many threads you have, only one is
allowed access at any point in time.
>>>

My understanding is that this attribute implements a per-method lock. While
a thread is executing code inside the specific locked method, all other
threads will need to wait until the current thread exits.

Other methods in the class (even those that are locked in identical
fashion) are not affected by the lock.

Is this correct?


Here's an example that outlines my thinking:

using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Text;

namespace Namespace1
{
    public class Class1
    {
        // external threads repeatedly call Method1, Method2, Method1, etc.

        [MethodImplAttribute( MethodImplOptions.Synchronized )]
        public void Method1()
        {
            // thread #2 currently executing here,
            // when finished will wait for thread #1 to exit Method2
        }

        [MethodImplAttribute( MethodImplOptions.Synchronized )]
        public void Method2()
        {
            // thread #1 currently executing here,
            // when finished, will wait for thread #2 to exit Method1
        }
    }
}


 

Josip Medved

My understanding is that this attribute implements a per-method lock. While
a thread is executing code inside the specific locked method, all other
threads will need to wait until the current thread exits.
Other methods in the class (even those that are locked in identical
fashion) are not affected by the lock.

No. It creates the equivalent of lock(this) for all decorated methods.
That means that all methods that have that attribute are synchronized
on the same object.

It is more like this:

[MethodImplAttribute( MethodImplOptions.Synchronized )]
public void Method1() {
    // thread #1 currently executing here
}

[MethodImplAttribute( MethodImplOptions.Synchronized )]
public void Method2() {
    // nothing can be executing here until thread #1 finishes
}
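The shared-lock behavior can be demonstrated directly: since [MethodImplOptions.Synchronized] on an instance method is documented to take the lock on this, holding lock(instance) on one thread should keep a decorated method from entering on another. (SyncDemo and its members are hypothetical names for this sketch.)

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;

public class SyncDemo
{
    // Documented to be equivalent to wrapping the body in lock (this).
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Decorated()
    {
        // runs only once the instance lock is free
    }

    // Returns true if Decorated() could not enter while this thread
    // held lock(instance) -- i.e. they really share the same lock.
    public static bool IsBlockedWhileInstanceLockHeld()
    {
        var demo = new SyncDemo();
        bool entered = false;

        lock (demo)                       // hold the lock the attribute uses
        {
            var t = new Thread(() => { demo.Decorated(); entered = true; });
            t.IsBackground = true;
            t.Start();
            Thread.Sleep(200);            // give the other thread a chance
            return !entered;              // still false: it is blocked
        }
    }
}
```

Once the lock(demo) block is exited, the background thread's call to Decorated() proceeds normally.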
 

John A Grandy

Yikes. That's not what I want.

What I want is for at most one thread to be executing code in each of my
class's public methods that perform data access.

What is the best way to accomplish this?


 

Josip Medved

Yikes. That's not what I want.
What I want is for at most one thread to be executing code in each of my
class's public methods that perform data access.
What is the best way to accomplish this?

You could make something like this:

private object _syncMethod1 = new object();
private object _syncMethod2 = new object();

public void Method1() {
    lock (_syncMethod1) {
        // code
    }
}

public void Method2() {
    lock (_syncMethod2) {
        // code
    }
}


However, for this to work your methods MUST NOT depend on one another
(e.g. if you read a list in one method and write it in another, that is
a dependency).
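A sketch of that caveat (the Ledger class and its members are hypothetical): methods that touch the same state must share one lock object, while genuinely independent state can get its own.

```csharp
using System;
using System.Collections.Generic;

// Add and ReadTotal both touch _entries, so they must share ONE lock.
// Giving each method its own lock would let a reader iterate the list
// while a writer mutates it. Truly unrelated state (_requestCount)
// can safely have a separate lock for better concurrency.
public class Ledger
{
    private readonly object _entriesLock = new object(); // guards _entries
    private readonly object _counterLock = new object(); // guards _requestCount
    private readonly List<decimal> _entries = new List<decimal>();
    private long _requestCount;

    public void Add(decimal amount)
    {
        lock (_entriesLock)
        {
            _entries.Add(amount);
        }
    }

    public decimal ReadTotal()
    {
        lock (_entriesLock)      // same lock as Add: the methods depend on each other
        {
            decimal total = 0;
            foreach (var e in _entries) total += e;
            return total;
        }
    }

    public void CountRequest()
    {
        lock (_counterLock)      // unrelated state: a separate lock is safe
        {
            _requestCount++;
        }
    }
}
```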
 

Josh Einstein

I hate to be the bad guy, but since you asked for the best way to accomplish
it... You should seriously consider refactoring your code. For starters, you
mentioned you have a class with hundreds of methods. That should be a big
red flag right there.

Second of all, it sounds like you're not really sure what's safe to
parallelize and what isn't. You should re-think the problem and design your
code to be more task-oriented and self-contained. If you created several
smaller classes that had no external dependencies, you could simply new up a
bunch of them and throw them onto the threadpool.

Finally, if it starts to sound very confusing and a lot of work, you should
re-consider whether or not you need to do this on multiple threads. The
truth is, multithreaded programming is HARD and it can't be solved simply by
using broad locks, otherwise you may as well not have any threads at all.

Josh Einstein

 

John A Grandy

Hi Josh, and thanks for the response.

I suppose I should clarify the structure of these classes:

Yes, there are hundreds of methods, each of which is business-task oriented.
However, these methods don't directly initiate any calls to the db. Rather,
I have a base class which contains wrapper methods around the common
SqlClient methods ( SqlCommand.ExecuteReader() ,
SqlCommand.ExecuteNonQuery() , SqlDataAdapter.Fill() , etc. ).

Opening/closing SqlConnections immediately before/after use is the only
possible architecture for ASP.NET (web app or SOA), but I'm not at all sure
it is best for a WinForms app, where much higher performance can likely be
obtained by leaving one or more connections open.

The actual threads in my app are spawned in higher-layer code -- this
higher-layer code knows nothing of how data access is implemented. Rather,
it just makes calls to my data-access classes' methods as needed.

Given the above, I'm not sure I see any advantage in splitting my
data-access methods out into many small classes. Currently, these classes
are per-db (each class has a distinct static connection string). Some have
hundreds of methods, some dozens, some only a few. But I cannot a priori
impose any limit on the number of methods.

It would seem that my problem is how to best utilize multiple open
SqlConnection instances, where serialized access is enforced on each.
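One possible sketch of that idea, assuming a long-lived wrapper that pairs each open SqlConnection with its own lock object, so at most one thread uses a given connection at a time while different connections can serve different threads concurrently. (SerializedConnection and Execute are hypothetical names, not an established pattern.)

```csharp
using System;
using System.Data.SqlClient;

// Pairs one open SqlConnection with one private lock object. All data
// access on this connection funnels through Execute, so the connection
// is never used by two threads at once.
public class SerializedConnection
{
    private readonly object _gate = new object();
    private readonly SqlConnection _connection;

    public SerializedConnection(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();   // kept open for the lifetime of this wrapper
    }

    public T Execute<T>(Func<SqlConnection, T> work)
    {
        lock (_gate)          // serialize all use of this one connection
        {
            return work(_connection);
        }
    }
}

// usage (hypothetical connection string):
// var conn = new SerializedConnection(connectionString);
// int count = conn.Execute(c => {
//     using (var cmd = new SqlCommand("SELECT COUNT(*) FROM T", c))
//         return (int)cmd.ExecuteScalar();
// });
```

With several of these wrappers, higher-layer threads could be distributed across connections without any caller knowing how the serialization is enforced.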


 
