Read Committed vs. Update/Exclusive locks?


Nara Bala

I was trying to update a record from one transaction while simultaneously
trying to read the same record from another. Both transactions were set to
the default ReadCommitted isolation level. Now I found that the read
blocked, even after the other transaction had finished its update
operation (though, of course, not yet committed).
I was expecting the read to return the old committed data. But
instead it blocked. I found that in SQL Server the write operation
takes an UPDATE lock on the row (which is converted to EXCLUSIVE mode
after the update), and in this mode even a read access blocks.
So, if that is the case, does the ReadCommitted isolation level ever
work?
Please help. I thought the basic behaviour of ReadCommitted is
that it reads an older committed value, even if there have been some
later uncommitted changes. What am I missing?
Also, is there a way in C#/.NET by which I can control these locks?
thanks,
Nara
 
Hi Nara.

Your interpretation of Read Committed is not correct - what you're
describing is an Oracle technology (proprietary, non-standards-based) called
Multi-Version Read Consistency. In that model, things work as you say - old
versions of rows are stored in rollback segments (although this is not
guaranteed by the RDBMS - it relies on the DBA's configuration of segment
size etc). When a read operation occurs against an update, the old version
is read from the rollback segment as you've described.

Neither SQL Server nor DB2 does MVRC; only Oracle does. Some people love it,
but others consider it a potential disaster, as it requires implementing
design patterns to ensure bad things like "double bookings" don't happen.

HTH

Regards,
Greg Linwood
SQL Server MVP
 
Sorry, this feature is not supported by SQL Server. I believe that Oracle
does allow this, but imho it is not necessarily a good thing.
SQL Server does allow you to read the _uncommitted change_; again, imho this
is not necessarily a good thing.

If you are interested in this feature, take a look at the ReadUncommitted
isolation level and at the NOLOCK T-SQL table hint. I believe both of these
features should be used only for limited admin/monitoring purposes.

I think the problem is probably going to be with the programming model you
are using; you should not be seeing the blocks placed by transactions as
anything but minor performance hits. When we try to access a row that is
locked, we will continue to attempt to read it for 30 seconds (the default
command timeout), so we will return the row as soon as your transaction is
complete. Your transacted inserts or updates should be done in the most
performant way possible; if you have a transaction that lasts anywhere near
30 seconds, you should definitely be looking at your programming model for
ways to improve your transaction management - this is an outrageous amount
of time.
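The 30 seconds mentioned above is a client-side command timeout, not a server lock setting: the blocked read simply waits on the writer's exclusive lock until the command gives up. A minimal sketch of that wait-with-timeout behaviour, simulated with a plain Python lock (the timings and names are illustrative, not SQL Server internals):

```python
import threading
import time

write_lock = threading.Lock()      # stands in for a row's exclusive (X) lock

def read_with_timeout(timeout):
    """Behave like a blocked command: wait for the lock, give up after `timeout` seconds."""
    if write_lock.acquire(timeout=timeout):
        write_lock.release()
        return "row returned"
    return "timeout expired"

def writer(hold_seconds):
    with write_lock:               # the updating transaction holds the X lock...
        time.sleep(hold_seconds)   # ...for the duration of its (slow) work

t = threading.Thread(target=writer, args=(0.2,))
t.start()
time.sleep(0.05)                   # let the writer take the lock first
print(read_with_timeout(0.05))     # gives up while the writer still holds the lock
print(read_with_timeout(5.0))      # waits it out; returns once the writer finishes
t.join()
```

The reader never fails because data is locked per se; it fails only when its own patience (the timeout) runs out first.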
 
If that is the case, then what does Read Committed really imply? Does
it simply imply not(ReadUncommitted)?
Also, what can I do if I don't want my read to block and time out,
supposing I have a long write sequence in a transaction?
Nara
 
Hi Nara.

Read Committed means simply that one transaction can only see values in the
database that are actually committed by other transactions. This is the
opposite of Read Uncommitted, where the values being written into the
database by transactions can be seen by other transactions. In SQL Server,
this behaviour is implemented using "locks". Locks are also used in Oracle,
but supplemented by "Rollback Segments" which are how Multi Version Read
Consistency is implemented.

Understanding how these work is not really very complex. In both SQL Server
/ Oracle, when transactions are updating tables (inserting, updating or
deleting) the new values are written to the table & locked. In Oracle, the
old values are written into the rollback segment (SQL Server doesn't do this
bit). In both SQL Server & Oracle, if another transaction comes along using
Read Committed that wants to read the updated table, that transaction is
blocked from reading the new values, but in Oracle, that transaction can be
diverted to read the previous value from the rollback segment. In SQL
Server, it can only block. Rollback segments in Oracle are "circular" files,
similar to SQL Server log files - they wrap around when writing rollback
values & need to be configured carefully for size, otherwise values can be
lost (rollback segments can hold multiple values for any given row,
depending on how many transactions are updating the row concurrently). If
rollback segments fill up, they simply wrap around, writing over themselves
& potentially losing historical values. This means that Oracle ultimately
can't guarantee MVRC will always work & it is often a configuration problem
for Oracle DBAs.
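The two models described above can be contrasted in a toy simulation. This is not SQL Server or Oracle code - just a hypothetical `Row` class where a writer takes an exclusive lock, a locking reader must wait on it (the SQL Server behaviour), and a multi-version reader is diverted to the old committed value (the Oracle behaviour):

```python
import threading

class Row:
    """Toy row contrasting lock-based Read Committed with multi-version reads."""
    def __init__(self, value):
        self.committed = value            # last committed value
        self.pending = None               # uncommitted new value, if any
        self.xlock = threading.Lock()     # exclusive lock held by a writer

    def begin_update(self, new_value):
        self.xlock.acquire()              # writer locks the row...
        self.pending = new_value          # ...and writes the new, uncommitted value

    def commit(self):
        self.committed, self.pending = self.pending, None
        self.xlock.release()

    def read_locking(self, timeout=1.0):
        # SQL Server model: a Read Committed reader waits on the writer's lock.
        if self.xlock.acquire(timeout=timeout):
            try:
                return self.committed
            finally:
                self.xlock.release()
        raise TimeoutError("blocked behind an uncommitted update")

    def read_mvrc(self):
        # Oracle model: the reader is diverted to the old committed version
        # (in reality, fetched from the rollback segment).
        return self.committed

row = Row("old")
row.begin_update("new")        # updated but not yet committed
print(row.read_mvrc())         # prints "old" - no blocking
try:
    row.read_locking(timeout=0.1)
except TimeoutError:
    print("reader blocked")    # the locking reader waits, then times out
row.commit()
print(row.read_locking())      # prints "new" once the update is committed
```

Both readers honour Read Committed (neither ever returns "new" before the commit); they differ only in whether they wait or read the old version.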

How to avoid blocking in SQL Server? It can be mitigated, but not completely
avoided. I took the time to write the above blurb to point out that rollback
segments are not guaranteed in Oracle either, so ultimately, unless the
concurrency dynamics are completely understood in an Oracle DB (very
difficult to do on a public website) it is not possible to completely avoid
in Oracle either.

It is possible to use Read Uncommitted, but this is a very bad idea. In this
isolation mode your transaction might read uncommitted values in the DB
which might ultimately be rolled back & therefore logically never have
existed. Not only this, but your transaction does not take read locks, which,
depending on how your DB is designed, might mean that another transaction
might over-write your transaction & leave the database in an inconsistent
state. This is common with simply designed "counter tables", where a table
is stored to keep a value like an order number, which is read by multiple
transactions.
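The counter-table hazard is easy to see in miniature. The sketch below is plain Python, not SQL: two "transactions" both read the current order number without taking any read lock (the Read Uncommitted situation) before either writes back, and both hand out the same number:

```python
counter = {"next_order": 100}

# Two concurrent "transactions" both read the counter with no read lock,
# before either one writes back its increment:
n1 = counter["next_order"]        # transaction 1 reads 100
n2 = counter["next_order"]        # transaction 2 also reads 100 - nothing blocked it
counter["next_order"] = n1 + 1    # transaction 1 writes 101
counter["next_order"] = n2 + 1    # transaction 2 overwrites with 101 again
print(n1, n2)                     # both got order number 100: a "double booking"
```

Had transaction 1 held a lock on the row until it committed, transaction 2's read would have blocked and the two order numbers would have come out distinct.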

You haven't mentioned "deadlocks" but I'll just point out to you that this
is really the biggest problem you need to worry about with locking in SQL
Server. Locking is fairly natural - it's very common for transactions to
wait microseconds for each other or even a few seconds, but deadlocks can
cause your database to actually lose information. This is where two
transactions try to acquire locks (due to performing updates) on db
resources (tables, rows, indexes etc) in the reverse order of each other.
This leads the two transactions to each acquire a lock on some db resource,
but also want to lock the resource that the other transaction already has
locked. A simple example is where Transaction1 updates/locks TableA &
Transaction2 updates/locks TableB. Then Transaction1 wants to update / lock
TableB & can't because Transaction2 is already holding a lock on TableB - so
Transaction1 is blocked. Then Transaction2 tries to update TableA, which it
can't because Transaction1 is holding a lock on TableA. At this stage, both
transactions are holding locks on what the other transaction wants, &
neither will ever let go. In this case, SQL Server simply chooses one
transaction as a victim & "kills" it, allowing the other to commit its
work.

Taking the above into consideration, a good practice is to update tables in
the same order. This won't solve blocking (you can't solve blocking) but it
will help save you from the worse problem "deadlocking". It is common for
people to confuse locking (a normal, good thing) with deadlocking (very
bad), so I hope this extra explanation helps you! If you already knew all
that, sorry for the rant!
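The "update tables in the same order" rule is not SQL-specific; it is the classic way to prevent cyclic waits with any kind of lock. A small Python sketch (hypothetical table names, plain threading locks standing in for table locks):

```python
import threading

LOCKS = {"TableA": threading.Lock(), "TableB": threading.Lock()}
rows = {"TableA": 0, "TableB": 0}

def transaction(updates):
    """Acquire every needed table lock in one fixed (alphabetical) order."""
    tables = sorted(table for table, _ in updates)
    for table in tables:
        LOCKS[table].acquire()
    try:
        for table, amount in updates:
            rows[table] += amount
    finally:
        for table in reversed(tables):
            LOCKS[table].release()

# Transaction1 touches TableA then TableB; Transaction2 the reverse order.
# Because each sorts its lock acquisitions first, neither can be holding
# a lock the other needs while waiting - the deadlock cycle can never form.
t1 = threading.Thread(target=transaction, args=([("TableA", 1), ("TableB", 1)],))
t2 = threading.Thread(target=transaction, args=([("TableB", 2), ("TableA", 2)],))
t1.start(); t2.start()
t1.join(); t2.join()
print(rows)                       # {'TableA': 3, 'TableB': 3}, with no deadlock
```

Without the `sorted(...)` step, the two threads could each grab their first table and then wait on each other forever - exactly the TableA/TableB scenario described above.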

HTH

Regards,
Greg Linwood
SQL Server MVP
 
Nara,

The read shouldn't block. The isolation level of Read Committed will
return data that was committed to the database by other transactions,
even while your transaction is in process. If you need a consistent
view of the database during your transaction's lifetime then you'll
need to use the isolation level TRANSACTION_SERIALIZABLE.

It's difficult to be more specific without knowing details about what
you're trying to accomplish. Personally, when I architect systems I
always want to avoid 'dirty reads' (read uncommitted) at all costs.
This means that I disable AUTO-COMMIT and manage the transactions
myself. And I set the isolation mode to whatever is appropriate for
the given transaction. I typically use TRANSACTION_READ_COMMITTED and
TRANSACTION_SERIALIZABLE, whatever is appropriate.
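The disable-auto-commit advice translates directly to most database APIs. A sketch using Python's built-in sqlite3 module purely as a stand-in (SQL Server from .NET would use BeginTransaction/Commit/Rollback instead, and sqlite's isolation semantics differ, but the transaction-management shape is the same):

```python
import sqlite3

# isolation_level=None puts sqlite3 in manual mode: no implicit auto-commit;
# we issue BEGIN/COMMIT/ROLLBACK ourselves, as Matt suggests.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO orders (qty) VALUES (5)")
conn.execute("ROLLBACK")        # discarded - this row is never visible to anyone

conn.execute("BEGIN")
conn.execute("INSERT INTO orders (qty) VALUES (7)")
conn.execute("COMMIT")          # only now is the row durable

print(conn.execute("SELECT qty FROM orders").fetchall())   # [(7,)]
```

The point is that the code, not the driver, decides where transaction boundaries fall, so each unit of work can be given whatever isolation it actually needs.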

Read for yourself Oracle's argument for MV concurrency:
http://otn.oracle.com/deploy/performance/pdf/CWP_9IVSDB2_PERF.PDF

And IBM's argument against MV concurrency:
http://www-3.ibm.com/software/data/pubs/papers/readconsistency/readconsistency.pdf

Hope that helps,

Matt
 
Hey Greg, Matt and Angel,
Thanks a lot for your responses. It was really helpful and cleared
up some misconceptions.
thanks,
Nara
 
One other point that didn't seem to be made: in Read Committed, the shared
lock taken to read a row is released as soon as the row has been returned;
it is not held for the rest of the transaction. Serializable and Repeatable
Read do hold their locks. Repeatable Read places shared locks on every
resource that is being used, and Serializable does this too, additionally
locking any ranges of data that would have been returned.
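That difference in lock lifetime is the whole story here. A toy Python illustration (one lock standing in for a shared lock, "transactions" being just blocks of code) of why releasing the share lock after each read permits a non-repeatable read:

```python
import threading

row_lock = threading.Lock()       # stands in for a shared (S) lock on one row
row_value = 10

def read_committed_read():
    # Read Committed: take the share lock just long enough to read the row,
    # then release it immediately - it is NOT held to end of transaction.
    with row_lock:
        return row_value

# Under Read Committed, a writer can slip in between two reads in the
# same transaction, so the two reads disagree (a non-repeatable read):
first = read_committed_read()     # reads 10
with row_lock:                    # another transaction's update gets through
    row_value = 20
second = read_committed_read()    # reads 20
print(first, second)              # prints: 10 20

# Under Repeatable Read, the share lock is held for the whole transaction,
# so that same writer would block on row_lock until we committed:
row_value = 10
with row_lock:                    # held until "commit"
    first = row_value
    second = row_value            # a concurrent writer would be waiting here
print(first, second)              # prints: 10 10
```

Serializable goes one step further than this sketch can show, locking ranges as well so that even newly inserted rows cannot change what a repeated query would return.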

One other thing. If users have to be blocked for 30 seconds or 30 minutes
to keep the data in your system absolutely clean, it is possibly worth it.
Obviously you will want to try to optimize code, processes, limit the scope
of transactions as small as possible, add disks/disk channels/processors,
etc., but data consistency is most important. I don't know the value of
your data (life or death? Or users' web page settings? Slightly different!)
but it is usually better to err on the side of the data.

--
Louis Davidson ([email protected])
Compass Technology Management

Pro SQL Server 2000 Database Design
http://www.apress.com/book/bookDisplay.html?bID=266

Note: Please reply to the newsgroups only unless you are
interested in consulting services. All other replies will be ignored :)

 