Opening / Closing SQL Server connections


Daniel Malcolm

Just looking for further info on managing the opening / closing of SQL Server
connections in an n-tier ADO.NET application.

A business layer component method might make multiple calls to various
methods within the data access layer. Is it sensible to open / close a
database connection within each method in the data access layer, even if
there will be multiple calls to data access methods in series?
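
For concreteness, by "open / close within each method" I mean something like
this sketch (the method, table, and column names are made up):

using System.Data;
using System.Data.SqlClient;

class CustomerData
{
    // Hypothetical DAL method: grabs a pooled connection, uses it,
    // and hands it back to the pool immediately via Dispose.
    public static string GetCustomerName(string connectionString, int customerId)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Name FROM Customers WHERE Id = @Id", conn))
        {
            cmd.Parameters.Add("@Id", SqlDbType.Int).Value = customerId;
            conn.Open();                        // pulled from the pool
            return (string)cmd.ExecuteScalar(); // Dispose returns it to the pool
        }
    }
}

A business layer method calling several such DAL methods in series would then
open and close (i.e. return to the pool) a connection once per call.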

The following suggests that time required to open / close a pooled
connection is negligible:

http://msdn.microsoft.com/library/d...p?frame=true#daag_managingdatabaseconnections


Alternatively, each business layer method could call a "cleanup" method in
the data access layer to close any open connections once it has finished its
calls to the data access layer.

Any thoughts?

Thanks
 

GrantMagic

A method I like using is to open the connection once, then pass that
connection to the various methods being called for that process, avoiding
reopening and closing the database connection as much as possible.

The main process is then wrapped in a try statement to watch for failure and
close the connection if an exception occurs.
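
Roughly like this (the process and DAL method names are just for
illustration):

using System.Data;
using System.Data.SqlClient;

class OrderProcess
{
    public static void Run(string connectionString)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        try
        {
            conn.Open();              // opened once for the whole process
            UpdateCustomer(conn);     // each method reuses the open connection
            UpdateOrders(conn);
        }
        finally
        {
            conn.Close();             // closes even when an exception occurs
        }
    }

    // Hypothetical DAL methods that take the already-open connection.
    static void UpdateCustomer(SqlConnection conn)
    {
        using (SqlCommand cmd = new SqlCommand("usp_UpdateCustomer", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.ExecuteNonQuery();
        }
    }

    static void UpdateOrders(SqlConnection conn)
    {
        using (SqlCommand cmd = new SqlCommand("usp_UpdateOrders", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.ExecuteNonQuery();
        }
    }
}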
 

Patrice

This is also currently what I do (though the connection is private to the
layer).

I'm watching this thread because I did some investigation into this once.

When connection pooling is enabled, the time needed to open/close a connection
is really very short. I considered then opening/closing the connection on a
per-call basis.

The benefit I see is that you don't have to include in your debug build a
check that all connections are properly closed. It also lets you close the
connection earlier in the page (or at least as soon as possible), and you
could even imagine cases where a lengthy operation that doesn't need the DB
runs while another page uses the connection in between.

IMO definitely worth considering. I would like to hear from someone who
already opens/closes on each call...

TIA

Patrice

--
 

Guest

The Connection Pool holds the connection for 30 seconds, if I remember
correctly. Over multiple whacks at the database, you will avoid most of the
open/close overhead.

If you know you need multiple DataTables at one time, combine them in a
single stored procedure and fill them in one whack. If you cannot do this,
the command time is the majority of the secondary hit, which you will have
regardless of whether you pull an object from the pool or hold a single
connection.
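
For example, something along these lines (the proc and table names are
invented):

using System.Data;
using System.Data.SqlClient;

class CustomerQueries
{
    // One whack: a single proc returns two result sets and the adapter
    // maps them to two DataTables in one round trip.
    public static DataSet GetCustomerAndOrders(string connectionString)
    {
        DataSet ds = new DataSet();
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlDataAdapter da =
            new SqlDataAdapter("usp_GetCustomerAndOrders", conn))
        {
            da.SelectCommand.CommandType = CommandType.StoredProcedure;
            da.TableMappings.Add("Table", "Customer");  // first SELECT
            da.TableMappings.Add("Table1", "Orders");   // second SELECT
            da.Fill(ds);   // Fill opens and closes the connection itself
        }
        return ds;
    }
}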

On the other side of the coin, you could forget to dispose of a connection,
which is a much more expensive route. A small overhead versus expensive
objects potentially left open?

---

Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

***************************
Think Outside the Box!
***************************
 

Guest

This can actually kill scalability, as a connection is kept open for a single
process (session?) when it could be recycled in the pool. The few seconds you
save by not pulling from the pool can leave your app with fewer connections
to go around and require additional machines sooner than an app that allows
the connection pool to do its work.

---

Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

***************************
Think Outside the Box!
***************************
 

Guest

If you understand the ADO.NET internals, it is better to open and close for
each call. However, you can also work smart by grabbing multiple info sets
(DataTables from SELECT statements, for example) in one pass (a single stored
procedure).
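
For example, one command, one connection, and one pass over both result sets
(the proc name is invented):

using System.Data;
using System.Data.SqlClient;

class CustomerReader
{
    public static void ReadCustomerAndOrders(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd =
            new SqlCommand("usp_GetCustomerAndOrders", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    // consume the first SELECT (customers)
                }
                rdr.NextResult();       // advance to the second SELECT
                while (rdr.Read())
                {
                    // consume the second SELECT (orders)
                }
            }
        }   // connection goes back to the pool here
    }
}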

When you hold a connection, especially in a busy app, you can end up using
tons more memory, increasing licensing costs, and even slowing down the app,
as you force new connections instead of using the pool effectively. While
saving a microsecond for each process, you rob ADO.NET of its highly
efficient pooling mechanism and implement your own connection control process.

There are times where holding a connection is wiser, but I would shy away
from this methodology, as a rule.

---

Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

***************************
Think Outside the Box!
***************************
 

Angel Saenz-Badillos[MS]

I am definitely biased in this, but if you want my opinion I am a huge fan of
opening a connection per call. There are only three things you need to do
(sketch below):

1. Use the "using" statement or a try .. finally connection.Dispose()
construct to _guarantee_ that your connection is disposed.
2. Open the connection as late as possible (if the next statement is not an
Execute, why do you need an open connection?).
3. Close the connection as early as possible (don't place anything between
Open and Close that could be done after Close).
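
In code the whole pattern is just this (the query is only an example):

using System.Data.SqlClient;

class OrderCount
{
    public static int Get(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Orders", conn))
        {
            // everything that can be done without the server happens first
            conn.Open();                      // as late as possible
            return (int)cmd.ExecuteScalar();  // Dispose closes as early
        }                                     // as possible
    }
}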

This not only results in code that is very easy to read and maintain, it will
guarantee very close to the best performance while giving your application
room to become scalable. If you use multiple threads in your application this
becomes even more critical, since ADO.NET objects are not thread safe.

The only exception I can think of would be a Winforms single threaded app
where you want to control your connections manually.
--
Angel Saenz-Badillos [MS] Managed Providers
This posting is provided "AS IS", with no warranties, and confers no
rights. Please do not send email directly to this alias.
This alias is for newsgroup purposes only.
I am now blogging about ADO.NET: http://weblogs.asp.net/angelsb/
 

Michael D. Long

Cowboy (Gregory A. Beamer) - MVP said:
increase licensing costs

Has there been a change in the SQL Server license that allows middleware to
multiplex a CAL? If not, how does connection pooling decrease licensing
costs?
 

Michael D. Long

Cowboy (Gregory A. Beamer) - MVP said:
The few seconds you save pulling from pool

If you EVER observe a latency measured in SECONDS to retrieve a connection
from the pool then you have a MAJOR PROBLEM. While I hate to be rude, do
you have any practical experience with coding and deploying real-world n-tier
applications?
 

Michael D. Long

Daniel Malcolm said:
The following suggests that time required to open / close a pooled
connection is negligible:

http://msdn.microsoft.com/library/d...p?frame=true#daag_managingdatabaseconnections

Wow - this is the first time I've seen connection-based transactions
referred to as manual transactions. While there is a distinct reference to
object pooling and "manual transaction enlistment," it appears that someone
on the writing team is confused about what the term means and why the
technique is useful. Manual transaction enlistment has to do with
retrieving a Global Transaction Identifier and enlisting the open connection
in an active distributed transaction.

As to your direct question about time required, this is typically < 10 ms
when new connections do not have to be opened.

However, there are caveats to the simplistic algorithm used by connection
pooling. Because it does not anticipate peak/surge load demands, you can
experience unnecessary latency as a large number of connections are opened
concurrently - most of which will be discarded due to inactivity over the
next 4m to 7m 40s. Even though these usage patterns are fairly predictable
for many applications, avoiding this would require connection pooling based
on a heuristic algorithm that monitors behavior over time and proactively
opens connections to head off the connection thrashing that *will* occur
periodically.

A partial solution to such a problem is to utilize the min/max pool size
settings on the connection string. I know that Angel recommends against
setting the min value, but this can be a valuable technique to reduce
connection thrashing in heavily utilized environments where surge demands
for resources occur over the course of a few milliseconds.
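
For instance (server, database, and values are illustrative only):

// Pre-primes the pool with 10 connections and caps it at 200, so a
// surge does not trigger a burst of concurrent physical opens.
string connStr = "Data Source=myServer;Initial Catalog=myDb;" +
                 "Integrated Security=SSPI;Min Pool Size=10;Max Pool Size=200";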

Another solution is to build pooled objects and hang onto that bloody
connection for the life of the object (or until you have to recycle it due
to a critical connection error). But again, there is a disadvantage for any
type of pooling - you cannot dynamically adjust the settings while the pool
is active. This ability to dynamically adjust the min/max settings is
critical (though absent) when the pool isn't self-learning, at least in the
case where the goal is to maximize application responsiveness while
dispensing the absolute minimum resources necessary to service the load and
avoid thrashing the Resource Manager.
 

Kevin Spencer

Cowboy is an excellent and experienced programmer. I've known him for years.
What are your credentials?

--
HTH,
Kevin Spencer
.Net Developer
Microsoft MVP
I get paid good money to
solve puzzles for a living
 

Angel Saenz-Badillos[MS]

I have tried this before and it did not work too well; I am definitely not a
moderator and am not trying to represent myself as one. In this case I feel
compelled to jump in because I have seen enough posts from everyone involved
to respect each of you individually, and there is no question of your
knowledge of the subject. To top it off, I believe that everybody is trying
to say the same thing!

I believe that Cowboy's reference to "seconds" in the cost of opening
connections from the pool applies to the lifetime of an application, not to
the cost of retrieving an individual connection from the pool. I agree with
the point that he is trying to make regarding opening a connection in each
method versus passing the connection: in an ASP.NET application, if you keep
the connection open unnecessarily you are stealing valuable resources from
the rest of your application, and this can easily add up to much more than
the almost negligible cost of retrieving a connection from the pool, IMO.

--
Angel Saenz-Badillos [MS] Managed Providers
This posting is provided "AS IS", with no warranties, and confers no
rights. Please do not send email directly to this alias.
This alias is for newsgroup purposes only.
I am now blogging about ADO.NET: http://weblogs.asp.net/angelsb/
 

Michael D. Long

I'm certain that you are correct about Mr. Beamer, and that the number of his
recent posts that either lack clarity (as in the "seconds" issue) or could
steer a newbie down a bad path is likely due to factors other than his
knowledge and experience.

As to my credentials, they are based on the school of hard knocks - been
there, done that, and learned from it. If you ever scale up an application
to the point where you hit the pain points then you'll understand that I
have a great deal of insight and knowledge on this topic, based on real-world
practical experience that flies in the face of theory and articles written
by mid-level practitioners. Assuming you won't (and that's a pretty safe
assumption, as few developers actually walk the talk), my opinions on the
topic are of no use to you, so feel free to completely disregard them and
send all posts by me to the bit bucket.
 

Michael D. Long

I'll concede that you are correct and that Mr. Beamer was referring to the
cumulative time savings over the life of the application being measured in
seconds, and not the specific time to retrieve a connection from the pool.
I mistakenly assumed otherwise after reading a number of questionable posts
he has made recently, including the one on how failure to release connections
could result in "increased (sic SQL Server) licensing costs." I am most
likely completely in the dark about a recent change in SQL Server licensing
that allows one to use middleware to multiplex connections - that's what I
get for being an ostrich...

I should really document the problems with this behavior thoroughly and
generate a proof, but the writing required is at the doctoral thesis level
and the supporting data comprises several GB. Also, you need at least 2
multi-CPU servers handy and several man-weeks to run the repros and perform
independent data collection and analysis in order to begin to properly
understand the behaviors. In addition, you need to be well-versed in both
Oracle and SQL Server, including the locking characteristics at various
isolation levels.

Since I'm just too bloody lazy to waste my time producing a document that
would likely be read by no more than a handful of people, and that would be
understood by fewer yet, I'll just point out 2 very basic scenarios that
make releasing the connection ASAP questionable as a general practice:

1. An object running within a distributed transaction requests a
connection. At the initial point, no Global Transaction Identifier is
associated, so the base pool services the request. A statement is executed,
and the connection is released to the pool - but the question often
overlooked is, to which pool? Since the connection attributes at this point
do not match those of the base pool, a new pool is created. The next
request from an object participating in this distributed transaction may use
the connection just returned, assuming its state has been updated to
reflect its availability. Under light loading this works most of the time,
though occasionally you will have 2 or 3 connections within the pool
associated with the distributed transaction - even though the object can
*never* have more than one connection active at any given moment in time.
A timing condition can be anticipated (due to OS activity, context
switching, etc.) where a connection request occurs before connection 1 is
marked as available; but the issue of 3 active connections in a single
distributed transaction, when all connections are *always* properly released
and disposed, is the clue to the root cause of the case where connections
participating in distributed transactions may be orphaned in an "inactive"
state and never completely released (even though the pools themselves are
eventually destroyed).

Since a connection enlisted in a distributed transaction can only be
used by an object participating in the same distributed transaction, the
latency associated with returning the connection to the pool and retrieving
it again can be avoided. The act of releasing such a connection to "the
pool" does not in any way increase the number of connections generally
available to other activities outside the scope of the transaction; in fact,
the added latency only increases the likelihood that additional new
connection requests will be generated (only to be discarded in 4m to 7m 40s
due to inactivity - thus begins the vicious cycle of connection thrashing,
but that's another story, and a fairly complex one that only comes into play
for applications with high utilization rates).

There are a number of observations I could make here based on 6 months
of personal experience (at 60-80 hours/week) researching this issue
in depth, and on feedback from Oracle Support / Bug Diagnosis & Escalation
(which combined racked up over a man-year on research, development,
providing optimization/tuning guidance, running my repros in Oracle's labs,
and performing interactive debugging on my system). However, it would take
many pages and diagrams to convey the concepts needed for anyone lacking a
basic understanding of the problem to grasp when it is appropriate to hold
the connection active for the life of the public method call and when it is
safe to release it per the common recommendations.

2. Connection-based transactions that span multiple internal calls before a
commit is possible. I don't think there is a real need to go into details
about this, is there?
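
Just for completeness, the shape is the familiar one - the connection must
stay open from BeginTransaction until Commit, so it cannot be released per
call (the helper names are invented):

using System.Data.SqlClient;

class TransferFunds
{
    public static void Run(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                Debit(conn, tx);    // hypothetical internal calls that all
                Credit(conn, tx);   // share this connection and transaction
                tx.Commit();        // only now may the connection be released
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }

    static void Debit(SqlConnection conn, SqlTransaction tx)
    {
        using (SqlCommand cmd = new SqlCommand("usp_Debit", conn, tx))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.ExecuteNonQuery();
        }
    }

    static void Credit(SqlConnection conn, SqlTransaction tx)
    {
        using (SqlCommand cmd = new SqlCommand("usp_Credit", conn, tx))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.ExecuteNonQuery();
        }
    }
}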

--
Michael D. Long


 

Angel Saenz-Badillos[MS]

Michael,

When I hear you say "basic scenarios that make releasing the connection
ASAP questionable as a general practice," I get worried.

You have obviously run into situations where this is happening, and that is
definitely a bug; we should not be releasing the connection ASAP in those
cases. Let me try to explain the way connection pooling is designed to work
with distributed transactions:

The first important concept is that connections that are enlisted in a
distributed transaction are not placed back into the general pool until
after the distributed transaction is complete. They are not subject to the
4 min to 7 min 40 s cleanup while they are in the subpool (the DTC timeout
is set to 1 minute by default, so this assumes that you have changed the
default distributed transaction timeout). These connections only get cleaned
from the pool when we receive a notification from the distributed transaction
coordinator telling us that the transaction is complete.

The second concept is that while the enlisted connections are in the sub
pool they continue to be pooled (there are some exceptions in this scenario
for Whidbey). Any time that you request a connection open from inside the
same distributed transaction context, we will check to see if there are any
active enlisted connections in the sub pool that you can reuse.

The final point is that after we receive a notification saying that the
distributed transaction has completed, we will attempt to move the
connections from the transacted sub pool into the general pool after
suitable clean up.

Please let me know if in your experience any of this is not working as it
should.

One more thing: I am currently working on a stress application to test the
new Whidbey System.Transactions behavior in ADO.NET. You may or may not know
that for V2.0 we are doing a major redesign of the distributed transaction
behavior by adding a managed System.Transactions.Transaction class. The new
TransactionScope is going to make it much easier to use distributed
transactions, and I predict that in the next few years the number of people
who will be interested in your document will grow exponentially.
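
The shape will be roughly this (pre-release, so the details may change):

using System.Transactions;
using System.Data.SqlClient;

class TwoServerUpdate
{
    public static void Run(string connStr1, string connStr2)
    {
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection conn1 = new SqlConnection(connStr1))
            {
                conn1.Open();  // auto-enlists in the ambient transaction
                // ... execute commands ...
            }
            using (SqlConnection conn2 = new SqlConnection(connStr2))
            {
                conn2.Open();  // a second resource promotes it to a
                               // full distributed transaction
                // ... execute commands ...
            }
            scope.Complete();  // vote commit; Dispose without
                               // Complete rolls everything back
        }
    }
}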

I would be very interested in talking to you, preferably on the phone,
regarding some of your thoughts on what you would like to see stressed with
this functionality. If you are willing to send me your contact information I
would be happy to contact you at your convenience.

Thank you,


--
Angel Saenz-Badillos [MS] Managed Providers
This posting is provided "AS IS", with no warranties, and confers no
rights. Please do not send email directly to this alias.
This alias is for newsgroup purposes only.
I am now blogging about ADO.NET: http://weblogs.asp.net/angelsb/

 

Michael D. Long

Agreed about the subpool - while active it won't be released. Also, I will
absolutely positively kick the living bejeezus out of anyone that ever
increases the DTC timeout value on a server I manage. Anyone that does so
must be lacking key synaptic activity, because I'm sure that no one without
a fundamental understanding of the side effects would ever be turned loose on
a distributed application.

FYI - inactive connections that remain open indefinitely only occur in
scenarios where distributed transactions fail. This failure can be caused
by something as simple as a timeout while connecting to the server under
extremely heavy load.

FWIW, J2EE connection pooling algorithms appear to be having very similar
problems (and at even lower transaction rates). This comment is based on
inside information from that Other database company.

I'll send you contact info so that we can have a conversation about this.
 
