C#, ADO.NET and MS-SQL

  • Thread starter: DJ van Vliet
Jon Skeet said:
If you don't allow any updates to the tables you're looking at while
your set of queries is executing, you should always be able to see
consistent data.

Just to be clear, the Oracle mechanism only ignores those updates; it doesn't
block them. So it doesn't allow the updates to affect the effective results, if
that's what you meant.
It's one thing for a delete to fail - it's another
thing to only see *one* delete in a transacted pair, due to having read
one table before another.

The MS concurrency mechanism could handle the example you give because the
post-read lock on the first record of the pair would block the write
transaction.

The issue is limited to changes to records that will/would ultimately be
part of your select results but are changed between the time your select
starts and finishes, not ones that are changed after you've read them. If
you want to block the latter you can.

I still say that, because neither mechanism can deal with an insert that
would have been part of your results but was added *after* your query
finished, you always have to deal with (or accept) the potential that you
just got invalid (yet perhaps internally consistent) data. And since the cases
where you could get stung by the ANSI/MS/IBM approach are fairly limited,
from a pragmatic standpoint it's not as big a deal as it seems at first.
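
For concreteness, a minimal ADO.NET sketch of the mechanism being described
(the table and column names are invented, and this is only one way to ask for
it): with REPEATABLE READ, the shared locks taken by the reads are held until
the transaction ends, which is what blocks the writer in the example above.

using System.Data;
using System.Data.SqlClient;

class ConsistentReadSketch
{
    static void ReadPair(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // REPEATABLE READ: shared locks on rows we've read are kept until
            // the transaction ends, so a writer touching Table1.record1 is
            // blocked until both reads are done.
            SqlTransaction tx = conn.BeginTransaction(IsolationLevel.RepeatableRead);
            try
            {
                SqlCommand cmd1 = new SqlCommand(
                    "SELECT Amount FROM Table1 WHERE Id = 1", conn, tx);
                object first = cmd1.ExecuteScalar();   // lock on the Table1 row is now held

                SqlCommand cmd2 = new SqlCommand(
                    "SELECT Amount FROM Table2 WHERE Id = 1", conn, tx);
                object second = cmd2.ExecuteScalar();  // consistent with the first read

                tx.Commit();                           // locks released here
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}

Even so, nothing here helps with an insert that arrives after the queries have
finished, which is the limitation mentioned above.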
 
Daniel Billingsley said:
Just to be clear, the Oracle mechanism only ignores those updates; it doesn't
block them. So it doesn't allow the updates to affect the effective results, if
that's what you meant.

Yes, that's fine - but I thought from what you were saying that SQL
Server (with an appropriate isolation level) would block the updates
completely until all the queries were complete.
The MS concurrency mechanism could handle the example you give because the
post-read lock on the first record of the pair would block the write
transaction.

Right. (Assuming you use the correct isolation level.)
The issue is limited to changes to records that will/would ultimately be
part of your select results but are changed between the time your select
starts and finishes, not ones that are changed after you've read them. If
you want to block the latter you can.

Ah - the former could still be a problem then?
I still say that, because neither mechanism can deal with an insert that
would have been part of your results but was added *after* your query
finished, you always have to deal with (or accept) the potential that you
just got invalid (yet perhaps internally consistent) data. And since the cases
where you could get stung by the ANSI/MS/IBM approach are fairly limited,
from a pragmatic standpoint it's not as big a deal as it seems at first.

That's fine - I have no problem with the idea that data gets *stale*
immediately. It's the idea that you have to expect data to be
inconsistent with itself that I was rebelling against.
 
huh?

read this:
http://msdn.microsoft.com/library/d...skPerformingOptimisticConcurrencyChecking.asp

and then read this
http://msdn.microsoft.com/library/d...s/cpguide/html/cpconoptimisticconcurrency.asp

Now... we are talking about optimistic locking in a scalable world, right?

The issue was about the integrity of the database where I would look up the
client record, create an order header that refers to it, and attempt to add
the order header, only to find that the client record had been deleted. In
this case, my argument is that you would simply re-add the client record
before adding the order header.

And your point is?

--- N
 
Nick Malik said:
my argument is that you would simply re-add the client record
before adding the order header.

And your point is?

What you describe isn't optimistic concurrency, it's last writer wins
concurrency, which has its own issues. You can deal with the issues, but
not by simply "not caring". At least I'd never use or recommend such an
application.

Here's a quote from the second article you posted:

"In an optimistic concurrency model, a violation is considered to have
occurred if, after a user receives a value from the database, another user
modifies the value before the first user has attempted to modify it."

The 1st article says the same thing.
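
To make that quoted definition concrete, here's a rough sketch (the table and
column names are made up) of the usual optimistic check: the UPDATE's WHERE
clause compares against the values originally read, so if another user has
modified or deleted the row in the meantime, zero rows are affected and the
violation can be surfaced instead of silently overwriting the other change.

using System.Data;
using System.Data.SqlClient;

class OptimisticUpdateSketch
{
    static void UpdateClientName(SqlConnection conn, int clientId,
                                 string originalName, string newName)
    {
        // The update only succeeds if the row still holds the value we read.
        SqlCommand update = new SqlCommand(
            "UPDATE Clients SET Name = @newName " +
            "WHERE ClientId = @id AND Name = @origName", conn);
        update.Parameters.Add("@newName", SqlDbType.NVarChar).Value = newName;
        update.Parameters.Add("@id", SqlDbType.Int).Value = clientId;
        update.Parameters.Add("@origName", SqlDbType.NVarChar).Value = originalName;

        if (update.ExecuteNonQuery() == 0)
        {
            // Another user changed or deleted the row after we read it -
            // that's the "violation" the articles describe.
            throw new DBConcurrencyException(
                "Client was modified or deleted by another user.");
        }
    }
}

A "last writer wins" update, by contrast, would leave the original values out
of the WHERE clause and simply overwrite whatever is there.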
 
Jon Skeet said:
Yes, that's fine - but I thought from what you were saying that SQL
Server (with an appropriate isolation level) would block the updates
completely until all the queries were complete.

Yes, I guess I got confused switching back and forth between talking about the
two mechanisms. SQL Server could block updates to rows that it had already
read.
Ah - the former could still be a problem then?

Don't think so. The Sql Server read isolation ignores uncommitted changes -
that's the default behavior on a read cursor without doing anything (no
transaction necessary). Guess we never brought up that detail. ;)

userA reads Table1.record1 of the pair and leaves it locked (and will
eventually read Table2.record1)
userB tries to update Table1.record1 and Table2.record1
userB's commit is blocked because of the lock
When userA gets to Table2.record1 the change is ignored because it's not
committed
userA finishes and gets his consistent data
userB commits his updates

The problem is if userB only updates Table2.record1 before userA gets to it.
But that's where I say that if the DB is normalized and redundancies are
controlled as we've discussed, then userA getting the new Table2.record1
isn't different from what he would have retrieved had he started his query a
few milliseconds later, and doesn't really represent a true inconsistency.
 
Daniel Billingsley said:
Don't think so. The Sql Server read isolation ignores uncommitted changes -
that's the default behavior on a read cursor without doing anything (no
transaction necessary). Guess we never brought up that detail. ;)

Yes, I think that works - so long as the reader's lock is valid for the
whole of their transaction of reads, and not just for any particular
query. That would still have potential problems. And that brings us
neatly back to the start of the thread, where we were discussing
whether or not there was a point in having transactions around just
reads - and I still maintain that there is, assuming that the isolation
level works that way. (If it doesn't, there are problems.)

The problem is if userB only updates Table2.record1 before userA gets to it.
But that's where I say that if the DB is normalized and redundancies are
controlled as we've discussed, then userA getting the new Table2.record1
isn't different from what he would have retrieved had he started his query a
few milliseconds later, and doesn't really represent a true inconsistency.

Yes, I'm not talking about stale data at all - only real
inconsistencies.
 
Ah, the subtleties of e-mail and newsgroups.

It is hard to convey a complete thought here...

my comments inline...

Daniel Billingsley said:
What you describe isn't optimistic concurrency, it's last writer wins
concurrency, which has its own issues.

One of the two articles discusses methods for coping with optimistic
concurrency. One of those methods is "last writer wins." I am describing
optimistic concurrency, and I'm describing one of the methods for coping
with it. Your statement is proven false by your own statement.
You can deal with the issues, but
not by simply "not caring". At least I'd never use or recommend such an
application.

The discussion, which has wandered all over the map, was about the
"invalidity" of the data in the database between reading it and writing it
back out. Specifically, the statement that I quoted was dealing with a
delete. To repeat my prior quote from one of your posts:
>>someone else has deleted the record while you were thinking about it,<<

I was discussing how you would not care about *this particular
condition* if you had used disconnected data. The answer, which I gave
and which apparently was not clear, was that a DELETE on the server side could
be ignored if you had a copy of the primary table record with you, because you
could re-assert the data.
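
As a rough illustration of that "re-assert" idea (this is not Nick's actual
code - the schema and names are invented): the client row was read earlier
into a disconnected copy, so if the server-side row has been deleted by the
time the order is saved, it can be re-inserted from that copy before the
order header is added.

using System.Data;
using System.Data.SqlClient;

class ReassertClientSketch
{
    // 'client' is the disconnected copy of the client row, read earlier;
    // 'conn' is assumed to be open.
    static void SaveOrderHeader(SqlConnection conn, DataRow client)
    {
        SqlTransaction tx = conn.BeginTransaction();
        try
        {
            // Was the client deleted while we were "thinking about it"?
            SqlCommand exists = new SqlCommand(
                "SELECT COUNT(*) FROM Clients WHERE ClientId = @id", conn, tx);
            exists.Parameters.Add("@id", SqlDbType.Int).Value = (int)client["ClientId"];

            if ((int)exists.ExecuteScalar() == 0)
            {
                // Re-assert the client from the disconnected copy.
                SqlCommand readd = new SqlCommand(
                    "INSERT INTO Clients (ClientId, Name) VALUES (@id, @name)", conn, tx);
                readd.Parameters.Add("@id", SqlDbType.Int).Value = (int)client["ClientId"];
                readd.Parameters.Add("@name", SqlDbType.NVarChar).Value = (string)client["Name"];
                readd.ExecuteNonQuery();
            }

            // ...insert the order header that refers to the client here...

            tx.Commit();
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }
}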
Here's a quote from the second article you posted:

"In an optimistic concurrency model, a violation is considered to have
occurred if, after a user receives a value from the database, another user
modifies the value before the first user has attempted to modify it."

The 1st article says the same thing.

and if you refer to the same message where I said that a disconnected model
may "not care" because it can reassert deleted data, you will also see the
following statement (I'm quoting myself here):
>>updates to my benefit plans, and on one form, I update my mailing address, I
have no idea if my mailing address will be correct in their records when it
is done. This is an exception condition,<<

You see, we are in complete agreement. It is an exception condition when
two messages assert different data.

That is why I'm confused by your words. You say that you are disagreeing
with me, but then you back it up with complete agreement.

My meaning must not have been clear. I regret my lack of clarity.

I think this conversation would have been completely different if we were
speaking face to face.

With utmost respect,
--- Nick
 
Nick Malik said:
You see, we are in complete agreement. It is an exception condition when
two messages assert different data.

That is why I'm confused by your words. You say that you are disagreeing
with me, but then you back it up with complete agreement.

One note to human resources says fire the employee, the second says change
the address. You think the employee should still get a paycheck next week?
No, I most definitely do not agree with you. A delete is a modification, and
those two messages "assert different data", to use your phrase.
 
Jon Skeet said:
Yes, I think that works - so long as the reader's lock is valid for the
whole of their transaction of reads, and not just for any particular
query. That would still have potential problems. And that brings us
neatly back to the start of the thread, where we were discussing
whether or not there was a point in having transactions around just
reads - and I still maintain that there is, assuming that the isolation
level works that way. (If it doesn't, there are problems.)

Why do I get the feeling I'm just confusing things? To clarify my statement
above, the default behavior is that the lock only remains in place while the
cursor is reading that row. While uncommitted changes to that ONE row are
indeed ignored, it's definitely not sufficient to address the problems we're
discussing. You have to bump up the isolation level to get the locks to
remain until the query is finished.

Ok? Uncommitted changes are ignored by default (and higher), but the commit
is *blocked* only by higher (non-default) isolation levels.

Also, I didn't mean to ever say read transactions are never useful - only
that, from a pragmatic standpoint, IF the database is properly designed, the
default behavior is not nearly as bad as the knee-jerk reaction we probably
all had would suggest.
Yes, I'm not talking about stale data at all - only real
inconsistencies.

I keep saying that only because in my little brain I keep going "no, that's
not inconsistency, that's just stale... no, that *would* be blocked... no,
*that* would be blocked too...". For example, I think the record pair is
the obvious case that jumps to mind as a big problem, but in reality
it doesn't introduce any unique problems.
 