Concurrency issue - Best Practice wanted

Thread starter: Robert Schuldenfrei
Hi NG,

I am looking for an opinion here. I am new to C# and SQL, being an old
COBOL hand. I have started into a conversion of an old COBOL ERP system. I
have a number of functions working now and it is time to decide how best to
deal with the issues of concurrency. The old COBOL programs used record
locking to prevent concurrency errors and that logic is already "on the
shelf." Most C# books and articles I have seen use the default optimistic
concurrency as a way around these issues. I have coded a few test examples
and they work fine, except that you may have to tell a user to "try again."

I want to pause at this point before I code about 100 tables to get the
opinion of experienced experts on the best way to move forward. I have
noticed in books on SQL that you can do record locking down to (in some
cases) row level. This would seem like the way I wish to go. Further, I
have stayed away from SQL "Transactions" to this point, because there is a
lot I need to learn about SQL. Now may be the time to learn how to use
them. For ease of conversion I have tended to use SQL like the COBOL
programs use files. The less that SQL does, the more this project looks
like a language translation job. However, I really do want to use C# best
practices lest I have an unsupportable "kludge" on my hands when I am done.

Thanks for your advice,

Bob

Robert Schuldenfrei
(e-mail address removed)
 
Robert,

Never, ever, EVER use locking. If you do, you will be committing your
system to death. It will NOT be scalable, period. You will hit a glass
ceiling in terms of how many operations you can support, and it will not be
pretty.

While yes, with concurrency checking, you will have to tell the user to
"try again", that is better than having corrupt data in your database, or
updates that the user makes based on stale data. For SQL Server based
solutions, I recommend the use of timestamps (rowversion) columns, and then
modifying your SQL on updates and deletes:

update <table> set <field> = <value> where id = <id> and timestamp = <timestamp>

The delete follows the same pattern:

delete from <table> where id = <id> and timestamp = <timestamp>

The reason you do this is that from code, you can see the number of
records affected. If the number of records affected is zero, then you know
that either the id does not exist, or that the timestamp has changed, and
you can inform the user as such.
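The rows-affected check above works the same way from any data-access layer. Here is a minimal, self-contained sketch of the idea using Python's sqlite3 so it can run without SQL Server; since SQLite has no self-updating timestamp column, an integer `version` counter stands in for it, and the `item` table and its columns are invented for illustration:

```python
import sqlite3

# Sketch of optimistic concurrency via a rows-affected check.
# (SQL Server's timestamp/rowversion column updates itself; here we
# bump an integer "version" column by hand to simulate it.)
conn = sqlite3.connect(":memory:")
conn.execute("create table item (id integer primary key, qty integer, version integer)")
conn.execute("insert into item values (1, 100, 1)")

def update_qty(conn, item_id, new_qty, expected_version):
    """Update only if nobody changed the row since we read it.
    Returns True on success, False if the row is missing or stale."""
    cur = conn.execute(
        "update item set qty = ?, version = version + 1 "
        "where id = ? and version = ?",
        (new_qty, item_id, expected_version))
    return cur.rowcount == 1  # 0 rows affected -> stale or missing

# A writer who read version 1 succeeds and bumps the version to 2...
assert update_qty(conn, 1, 90, expected_version=1)
# ...so a second writer who also read version 1 loses, and must
# re-read the row and "try again".
assert not update_qty(conn, 1, 80, expected_version=1)
```

The same shape carries straight over to ADO.NET, where `ExecuteNonQuery` returns the number of rows affected.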

    Of course, you wrap this operation in a transaction (unless it is the
only operation you are performing, since each single statement is an atomic
transaction in itself) to ensure that while you touch the record, no one
else can, for the duration of your operation.
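When more than one statement has to succeed or fail together, the transaction does the real work: if any of the versioned updates comes back with zero rows affected, you roll the whole thing back. A minimal sketch, again in Python's sqlite3 with an invented `stock` table and a hand-maintained `version` counter standing in for a timestamp column:

```python
import sqlite3

# Sketch: two optimistic updates wrapped in one transaction, so either
# both rows change or neither does.
conn = sqlite3.connect(":memory:")
conn.execute("create table stock (id integer primary key, qty integer, version integer)")
conn.executemany("insert into stock values (?, ?, ?)", [(1, 50, 1), (2, 10, 1)])

def transfer(conn, src, dst, amount, src_ver, dst_ver):
    """Move qty between two rows; roll back if either row is stale."""
    try:
        with conn:  # sqlite3: commits on success, rolls back on exception
            for row_id, delta, ver in ((src, -amount, src_ver), (dst, amount, dst_ver)):
                cur = conn.execute(
                    "update stock set qty = qty + ?, version = version + 1 "
                    "where id = ? and version = ?",
                    (delta, row_id, ver))
                if cur.rowcount == 0:
                    raise RuntimeError("stale row %d - try again" % row_id)
        return True
    except RuntimeError:
        return False

assert transfer(conn, 1, 2, 5, src_ver=1, dst_ver=1)      # both rows fresh: commits
assert not transfer(conn, 1, 2, 5, src_ver=1, dst_ver=2)  # row 1 now stale: rolls back
assert conn.execute("select qty from stock where id = 2").fetchone()[0] == 15
```

In C# the equivalent would be a `SqlTransaction` shared by both commands, with `Rollback()` called when a command reports zero rows affected.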

Hope this helps.
 
Hi Nick,

OK, I promise NEVER to even CONSIDER locking. I have two new skills to
learn: 1/ Transactions and 2/ using timestamps. I had been using a
technique of old record / new record to determine if the record had been
changed on update or delete. Timestamps seem more elegant. Sadly, the book
I have been using for learning SQL is Petkovic, SQL Server 2000, A
Beginner's Guide. He mentions timestamps, but then goes forward to
explaining locking (and not too well, I might add). Can you suggest a good
book that implements your suggested approach to concurrency that would cover
both the C# and the SQL issues?

Thank you,

Bob

Robert Schuldenfrei
(e-mail address removed)
 