C# transactions v SQL transactions


Mike P

I've been using C# transactions for a while and had no problems with
them. Using try catch blocks I can trap basically all possible errors
and roll back all the necessary data.
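
For context, this is roughly the pattern I mean - a cut-down sketch,
where the connection string, table, and order id are made up for
illustration:

using System;
using System.Data;
using System.Data.SqlClient;

class Example
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=Shop;Integrated Security=SSPI"))
        {
            conn.Open();
            SqlTransaction tran = conn.BeginTransaction();
            try
            {
                SqlCommand cmd = new SqlCommand(
                    "UPDATE Orders SET Status = 'Shipped' WHERE OrderId = @id",
                    conn, tran);
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = 42;

                // An UPDATE that matches no rows is not an error to SQL
                // Server, so check the row count and treat 0 as a failure.
                if (cmd.ExecuteNonQuery() == 0)
                    throw new InvalidOperationException("Order not found.");

                tran.Commit();
            }
            catch
            {
                tran.Rollback();   // any exception undoes all the work
                throw;
            }
        }
    }
}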

Over the last few days I've been trying to convert some of this code to
SQL Server stored procedures, but they seem to lack many of the benefits
of C# transactions - a lot of the errors don't seem to be trapped by the
SQL error handling (e.g. if I do an update on a row that doesn't exist,
no rollback occurs and execution continues; if a table I am trying to
access doesn't exist, the sproc crashes).

Can anybody tell me what reasons there are for writing your transactions
in stored procedures rather than your .NET code, as there don't seem to
be any to me.


Any assistance would be really appreciated.


Cheers,

Mike
 

zapov

Mike said:
Can anybody tell me what reasons there are for writing your transactions
in stored procedures rather than your .NET code, as there don't seem to
be any to me.

I can think of ...
- Smaller and faster client applications
- Changing procedures without modifying the client application
The list goes on... :)
 

Richard Blewett [DevelopMentor]

What do you mean by a C# transaction?

Are you talking about System.EnterpriseServices.ServicedComponent, or
SqlConnection.BeginTransaction, or something else?

Regards

Richard Blewett - DevelopMentor
http://staff.develop.com/richardb/weblog

nntp://news.microsoft.com/microsoft.public.dotnet.languages.csharp/<[email protected]>

<snip quoted message>
 

Nicholas Paldino [.NET/C# MVP]

Mike,

Personally, I think that managing transaction state in your stored
procedures is a very, very bad idea. It limits how they can be used
elsewhere, and for the most part it makes them harder to maintain (as
you have more and more sprocs calling each other, trying to maintain
transaction state is a pain).

I would recommend using EnterpriseServices for handling transaction
state. There are a number of reasons for this, the best one being easy
maintainability (you can turn transactions on and off with the flick of
a switch, or change their behavior). Another reason is that it offers
the easiest migration path to Indigo.
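
As a rough sketch of what I mean (class, method, and parameter names
invented; the attribute only seeds the initial setting, which an
administrator can change later in Component Services):

using System.EnterpriseServices;

// Declarative transaction control - no BEGIN/COMMIT/ROLLBACK anywhere.
[Transaction(TransactionOption.Required)]
public class OrderProcessor : ServicedComponent
{
    // [AutoComplete] votes to commit when the method returns normally
    // and votes to abort when it throws.
    [AutoComplete]
    public void PlaceOrder(int customerId, int productId)
    {
        // ordinary ADO.NET calls go here; SQL Server enlists in the
        // distributed transaction automatically
    }
}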

Hope this helps.
 

Scott Allen

Hi mrp:

As others have commented, error handling in stored procs is very
difficult to do - it tends to clutter up the T-SQL quite a bit and it's
terribly easy to overlook a problem. It's an approach I try to avoid.
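
To give a flavour of the clutter, here's the classic pre-2005 pattern,
sketched as the script text you'd deploy (procedure and table names are
invented), wrapped as a C# constant so it's easy to paste:

static class Scripts
{
    // Pre-2005 T-SQL has no TRY/CATCH, so every statement needs its
    // own @@ERROR check - this is the clutter I mean.
    public const string ShipOrderProc = @"
CREATE PROCEDURE dbo.ShipOrder @OrderId int AS
BEGIN TRAN

UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = @OrderId
IF @@ERROR <> 0 OR @@ROWCOUNT = 0
BEGIN
    ROLLBACK TRAN
    RETURN 1   -- the caller must remember to check this
END

INSERT dbo.ShipmentLog (OrderId) VALUES (@OrderId)
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN 1
END

COMMIT TRAN
RETURN 0";
}

Two statements in and it's already half error-plumbing; miss one check
and the failure silently slips past.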

SQL 2005 has some improvements (it adds proper TRY/CATCH to T-SQL), but
that's not here yet, and if you are concerned about portability at all
it's not an option.
 

Eric Sabine

Are you talking about database transactions or something else? Database
transactions, i.e., BEGIN TRAN, COMMIT TRAN, ROLLBACK TRAN, really should
not be used in the front-end application. You greatly increase your risk
of locking, blocking, inconsistent data, corrupt data, long-running
transactions (which leads back to locking and blocking), uncommitted
transactions, and more. Transactions should be kept as short as possible
to keep database access clean. I take the totally opposite stance and say
transactions and typed SQL statements (even though you didn't mention
these, they go hand-in-hand with someone who is creating transactions in
the FE) should never exist in the front end. I believe that access from a
front end should be limited to executing stored procedures.
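
In other words, the front end does nothing more than this kind of call
(a sketch - the procedure and parameter names are examples only):

using System.Data;
using System.Data.SqlClient;

static class DataAccess
{
    // The front end only names a procedure and supplies parameters;
    // all DML - and any transaction - lives inside the procedure.
    public static void ShipOrder(SqlConnection conn, int orderId)
    {
        using (SqlCommand cmd = new SqlCommand("dbo.ShipOrder", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@OrderId", SqlDbType.Int).Value = orderId;
            cmd.ExecuteNonQuery();
        }
    }
}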

Eric
 

Nicholas Paldino [.NET/C# MVP]

Eric,

I would agree with you, transactions should not exist in the front end
(nor should dynamic SQL statements).

I am referring to the business layer. It is at this level that the
transactions should be controlled, and not by the programmer, but rather,
through the administrative tool (which gets the initial values from
attributes declared on the class).

The reason for this is that it allows for easy extensibility in the
future. For example, say you have your transaction code in your stored
procedure now, and some piece of code that calls the stored procedure.
Later on, you have a requirement to write to a file when the procedure
completes, and to not write to the file when it fails. You would want
some sort of transaction manager to coordinate this. Of course, this
assumes that the file system is transactional (which it is in Longhorn).

The point is that when you start performing transaction management in
the resource that is being handled by the transaction, you severely limit
the other business processes that the resource can be included in. The
file is a simple case, but what about message queues? What about both
queues and emails? As you add more resources to the transaction state
(outside of the database), you need a way to abort them all.

It is because I am going to make my business process transactional,
and not just my database operations, that I use something like Enterprise
Services, or the new transaction model being introduced in .NET 2.0. I
believe that the mindset of the db being the only transactional resource
is on its way to being well behind us, and that external entities are
needed to control all of these resources, should something fail while
trying to perform operations on them.
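
Based on the System.Transactions bits in the current 2.0 previews, that
looks roughly like this (connection string and command are placeholders):

using System.Data.SqlClient;
using System.Transactions;

class BusinessProcess
{
    static void Run()
    {
        // One ambient transaction spans every resource that enlists in
        // it; if Complete() is never called, everything rolls back.
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection conn = new SqlConnection(
                "Server=.;Database=Shop;Integrated Security=SSPI"))
            {
                conn.Open();   // auto-enlists in the ambient transaction
                new SqlCommand(
                    "UPDATE Orders SET Status = 'Shipped' WHERE OrderId = 42",
                    conn).ExecuteNonQuery();
            }

            // other resource managers (queues, a transactional file
            // system, ...) would enlist in the same scope here

            scope.Complete();  // the vote to commit
        }
    }
}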
 

Eric Sabine

I do not believe we are discussing the same thing. I am strictly speaking
about SQL Server transactions, and as you have indicated, you are talking
about business layer "transactions." If the requirement that file IO
needs to take place were passed to the DBA writing the stored procedure,
he should kick it back and say that it is outside the scope of the SQL
Server transaction.

SQL Server transactions are strictly for maintaining data integrity and
consistency, not to mention they give you a named place to restore to
instead of a datetime stamp. I believe you would benefit from not one, but
both, i.e., the data integrity transaction and the process integrity
transaction.

Eric
 

Nicholas Paldino [.NET/C# MVP]

Eric,

What I'm trying to say is that for any resource (DB included), if you
have the resource itself determine its transaction state, instead of a
transaction coordinator, it then becomes difficult to integrate that
resource into larger-scale transactions.

Because of that, I would recommend against handling transactions in SP
code; instead, have an external transaction coordinator (Enterprise
Services/COM+) handle it for you. Because SQL Server has a resource
manager registered with the DTC, ES/COM+ can handle the transaction
management for it.

Ultimately, yes, ES/COM+ would call the resource manager for SQL Server,
and issue the appropriate transaction commands, based on configuration.
The idea here is not to lock those procedures into a pattern which would
make it difficult to integrate other operations.

So, for SQL Server, or any resource that supports transactions, I would
never write a stored procedure that has transaction management code in it.

--
- Nicholas Paldino [.NET/C# MVP]
- (e-mail address removed)



<snip quoted message>
 

Eric Sabine

My gut reaction is "I don't like it." That said, I am not an expert on
Enterprise Services or the .NET 2.0 transaction model. I feel that this
external transaction coordinator (ETC) would cause me performance problems
in my SQL Server because (if I am understanding it properly) the ETC would
begin a transaction and say, start some process (transaction), get data
from disparate sources, write a text file, send data to a web service,
more, more, more, then some element bombs out and the ETC says "OK, let's
remain consistent, roll everything back!". But the item that bombed out
had a timeout error at 20 seconds, so I take it that my SQL Server portion
- an update on a commonly used table that happened to update enough rows
that the lock was escalated from a row lock to a table lock - also had to
wait for these 20 seconds (plus other elemental transaction times) before
its transaction was rolled back. Meanwhile, my users trying to enter
orders got blocked.

I believe the purpose of the ETC is probably business "consistency" and
I'm all for that - don't get me wrong, I'm all about consistency :) -
but it probably comes at a performance price, which I see happening when
you take the database transaction away from the database, so that it is
no longer dependent on the DML (data modification language, i.e., the
inserts, updates, and deletes) but dependent on the business process
instead.

If you could guarantee that the ETC waits on the SQL Server transaction
as the last process I could be convinced, but I don't see how you get
scalable and properly performing applications if your database is forced
to wait on other non-SQL processes.

Eric





<snip quoted message>
 

Nicholas Paldino [.NET/C# MVP]

Eric,

I can understand your gut reaction, and to be honest, it's a different
way of thinking than most people are accustomed to. However, I would make
the argument that if you have a unit of work that takes 20 seconds, then
you have a bigger problem anyway (such as your design, and where you are
defining your transaction, process, assembly, etc. boundaries).

You could argue that the ETC (COM+) is for business consistency, but
at the same time, these larger "transactions" on resources really should,
for the most part, have a one-to-one mapping between your business
operations and the transaction. This doesn't mean that you can't nest
transactions within each other (and indeed, for the most part, you
should), but rather that one completed unit of work is one business
transaction, and that should map to your action in COM+. You can have
other units of work, but they all contribute to the success or failure
of the overall unit of work that they are included in (requires a
transaction).

Is there a performance hit? Absolutely. Depending on what you are
doing, it varies, but I would say that for every aspect of Enterprise
Services that you add (security, out-of-process calls, transactions, and
so on), performance suffers by a factor of 10.

Now people freak out when they see this, as they think that anything
that is ten times slower is a bad thing. This is not so. On a 1.6 GHz
IBM ThinkPad, I could implement a full ES application, integrating
security, out-of-process calls, saves to the database, transactioning,
object pooling, etc., and get you anywhere between 20 and 40 calls a
second. This is a single-processor machine, hosting SQL Server, with
COM+ running out of process.

For what most people do, that kind of throughput is more than
acceptable. If it is not, then that is fine, but they have to accept the
consequences of their actions. ES offers a tremendous amount of
infrastructure (as do most other environments), and having to code and
maintain that yourself would be a nightmare.

Also, the moment that you introduce another resource that is involved
in a transaction, you have to write the code that handles the commit or
abort of the transaction should anything fail. That kind of coordination
is something that most programmers should not have to do, and should
never do.

The only way you could make sure that SQL Server is the last process to
be performed is to code it that way, so that it is hit last. However, in
any correctly designed component-based system, your operations on SQL
Server are going to be encapsulated as part of a framework which can be
used anywhere in COM+.

Your argument about a 20-second lock on the table is a good one, but
that goes against the inherent guidelines for COM+ processes. Transactions
should be relatively quick. If you are performing an operation that takes
20 seconds in one transaction, it should be redesigned. Most likely, it
can be redesigned so that it is performed as several committable units of
work, with a final command that commits all of them (through a distinct
flag in the db on the records which is switched, or something of that
nature).
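
A sketch of that flag idea, with an invented schema (again as deployable
script text in C# constants):

static class StagingScripts
{
    // Invented schema. Each unit of work commits its own rows quickly,
    // marked pending, so no long-lived locks are held.
    public const string StageLine = @"
INSERT dbo.OrderLines (BatchId, OrderId, Qty, Pending)
VALUES (@BatchId, @OrderId, @Qty, 1)";

    // The final command is one short UPDATE that makes the whole batch
    // visible in a single quick transaction.
    public const string CommitBatch = @"
UPDATE dbo.OrderLines SET Pending = 0 WHERE BatchId = @BatchId";
}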

--
- Nicholas Paldino [.NET/C# MVP]
- (e-mail address removed)


<snip quoted message>
 

Eric Sabine

Your arguments "for" are good enough that this is something I would be
interested in looking at. Perhaps I don't know enough (or anything, for
that matter) about COM+ servers and how to use them. Thanks for the good
discussion.

Eric


<snip quoted message>
 
