Best Practices - Connecting to Oracle from a fat client

Clint

Hello all -

I'm currently working on a program that connects to both an Oracle 9i
data warehouse as well as a SQL Server 2k server. The application will
be used mainly by 30 - 50 people internally, and will connect directly
to the respective servers (i.e., not using remote objects via remoting
or web services).

In addition, the Oracle server is simply going to have single selects
against it, retrieving only one value (a name based on an account
number); nothing overly intensive. The SQL Server, on the other hand,
will be handling the logging and auditing, storage, and retrieval of
any information for the program - items that could be intensive.

That said, my question is this: What is the best practice when it
comes to opening these data connections? Would it be best to have a
single connection for each server opened when the program starts, or
would it be best to have the connections open as they are needed? The
Oracle server would be queried once every few minutes or so per user,
and only for a specific user role (once the lookup's done, the data's
stored in SQL Server for historical purposes). The SQL Server, on the
other hand, can expect to see a number of queries per minute per user,
for all user roles.

I can see benefits for both sides, but I haven't seen a good solid
"this is how you should do it" answer.

Any advice is appreciated - thanks!
Clint
 
William (Bill) Vaughn

Each connection consumes resources. The process of opening the connection
(especially the first time or when no connection is available in the pool)
is expensive. However, SQL Server is designed to handle hundreds to
thousands of connections without breathing hard--40 or so should not be a
problem. Consider that we ran systems with 800-plus connections on 386/33
systems with 4 MB of RAM. Keeping the connection open has its benefits. You
don't have to wait for SSPI authentication to be repeated and there is no
wait while the pooling mechanism looks for an open connection. Since the
various Windows applications won't be sharing the connection pool, and you
won't be closing the connection, you'll never have to worry about pool
overflow.
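
If it helps to picture it, here is a minimal sketch of the "open once at
startup and hold" approach in ADO.NET. The class name, connection strings,
and the System.Data.OracleClient provider choice are only placeholders for
illustration.

```
// Hypothetical sketch: open both connections once at startup and hold them
// for the life of the fat client. Names and connection strings are made up.
using System.Data.SqlClient;
using System.Data.OracleClient;   // Microsoft's .NET provider for Oracle

public static class AppConnections
{
    public static readonly SqlConnection Sql = new SqlConnection(
        "Data Source=SQLBOX;Initial Catalog=AppDb;Integrated Security=SSPI;");

    public static readonly OracleConnection Oracle = new OracleConnection(
        "Data Source=DWHS;User Id=lookup;Password=secret;");

    // Call once when the application starts (e.g. from Main).
    public static void OpenAll()
    {
        Sql.Open();      // SSPI handshake paid once, up front
        Oracle.Open();   // Oracle session creation paid once, up front
    }

    // Call once when the application shuts down.
    public static void CloseAll()
    {
        Sql.Close();
        Oracle.Close();
    }
}
```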

What you do need to be concerned with is that "stuff happens". A connection
is like a cell-phone connection. From time to time you'll lose connectivity.
In this case your application needs to be smart enough to recover from the
disconnect and continue on.
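
A rough sketch of that recovery idea, assuming a helper of your own (the
name, the single-retry policy, and retrying on any SqlException are all
placeholders you would tune):

```
// If a call fails because the link dropped, reopen the connection and retry
// once. Real code would inspect the error before deciding to retry.
using System.Data.SqlClient;

public static object ExecuteScalarWithRecovery(SqlConnection cn, string sql)
{
    try
    {
        using (SqlCommand cmd = new SqlCommand(sql, cn))
            return cmd.ExecuteScalar();
    }
    catch (SqlException)
    {
        cn.Close();   // the connection may still report Open even though it is dead
        cn.Open();    // re-establish, then try the command once more
        using (SqlCommand cmd = new SqlCommand(sql, cn))
            return cmd.ExecuteScalar();
    }
}
```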

hth

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
William (Bill) Vaughn

Until ADO.NET 2.0, you won't have a suitable property to determine the
viability of a connection. Again, it's like a cell-phone connection--if no
one is talking it's tough to know if the other party is still there after
you come out of the tunnel. Some folks poll the server with a query. That
can work, but I also query the service. I have an example I demonstrate in
my workshop that shows how this can determine if the SQL Server service
(MSSQLSERVER) is running on the target system. This does not guarantee that
the connection is viable, but it can signal when the server goes down...
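
One way to do the "query the service" part is to ask the Service Control
Manager on the target machine whether SQL Server is running. This is only a
sketch of the idea, not the workshop example itself; "MSSQLSERVER" is the
default-instance service name, and you need a reference to
System.ServiceProcess.dll:

```
// Check whether the SQL Server service is running on a remote machine.
// This says nothing about whether your particular connection is still good.
using System.ServiceProcess;

public static bool IsSqlServerServiceRunning(string machineName)
{
    using (ServiceController sc = new ServiceController("MSSQLSERVER", machineName))
    {
        return sc.Status == ServiceControllerStatus.Running;
    }
}
```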

hth

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
Cowboy (Gregory A. Beamer)

Connections pool automatically, so opening a single connection is not wise. In
addition, each connection consumes memory. Do not open a single connection
and hold it. Good enough for a best practice.

Within a single page that pings a database over and over? Sure, open and run
everything without problem. Across pages, or across actual data calls? There
are some instances where this works, but it is an exception, not a rule.
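
For comparison, the open-as-needed pattern being described looks roughly
like this; the connection string, table, and method name are placeholders:

```
// Open just before the call, dispose immediately after, and let the ADO.NET
// pool absorb the cost of the repeated opens.
using System.Data;
using System.Data.SqlClient;

public static int LogEvent(string connectionString, string message)
{
    using (SqlConnection cn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "INSERT INTO AuditLog (Message) VALUES (@msg)", cn))
    {
        cmd.Parameters.Add("@msg", SqlDbType.VarChar, 4000).Value = message;
        cn.Open();                      // cheap after the first call: pooled
        return cmd.ExecuteNonQuery();   // Dispose returns the connection to the pool
    }
}
```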

--
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

*************************************************
Think outside the box!
*************************************************
 
Michael D. Long

Bad advice. Connection pools are process-based, and a fat client (a.k.a.
2-tier) is a single process. Close the ONLY active connection and the pool
goes poof!

Actually, in high-volume transaction-processing applications where users are
posting transactions at a fairly steady pace, you can often achieve the
lowest total cost of operations by using a 2-tier architecture and having
the client application maintain active connections. Oracle session
creation is a very expensive operation. In this particular case I'd have to
second Bill V's earlier comments and recommend keeping the connections open
for the life of the application, with appropriate logic to handle
reconnecting on fatal errors.
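
In ADO.NET terms, the keep-it-open-with-reconnect approach might look
something like the sketch below for the Oracle side. The class, table, and
column names are invented, and deciding which ORA- errors are truly fatal
is left out:

```
// Hold one Oracle connection for the life of the app; on a fatal error,
// rebuild the session and retry the lookup once.
using System.Data.OracleClient;

public class OracleLookup
{
    private readonly string _connectionString;
    private OracleConnection _cn;

    public OracleLookup(string connectionString)
    {
        _connectionString = connectionString;
        _cn = new OracleConnection(_connectionString);
        _cn.Open();   // expensive Oracle session creation, paid once
    }

    public string GetAccountName(string accountNumber)
    {
        try
        {
            return Lookup(accountNumber);
        }
        catch (OracleException)
        {
            Reconnect();                 // e.g. after ORA-03113 "end-of-file on channel"
            return Lookup(accountNumber);
        }
    }

    private string Lookup(string accountNumber)
    {
        using (OracleCommand cmd = _cn.CreateCommand())
        {
            cmd.CommandText = "SELECT name FROM accounts WHERE account_no = :acct";
            cmd.Parameters.Add(":acct", OracleType.VarChar).Value = accountNumber;
            return (string)cmd.ExecuteScalar();
        }
    }

    private void Reconnect()
    {
        _cn.Close();
        _cn = new OracleConnection(_connectionString);
        _cn.Open();
    }
}
```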

Note: you can take advantage of connection sharing by configuring Oracle
Server to use Multi-Threaded Server (yet another MTS). For anyone
interested, Oracle MTS doesn't provide any benefits for n-tier
applications - it just chews up resources on the database server. I leave
the understanding of why as an exercise in logic for the reader. Hint:
consider the behavior of connection pooling.
 
William (Bill) Vaughn

Nope. The State property does not change state if the server or network goes
down. It does change state if YOU or the underlying code closes the
connection, but not otherwise.
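
In other words, a liveness check has to actually round-trip to the server;
inspecting State alone isn't enough. A small illustrative helper (the probe
query is a placeholder):

```
// State only reflects what your side has done (Open/Close), so the only
// reliable test is to send something and see whether it comes back.
using System.Data;
using System.Data.SqlClient;

public static bool IsConnectionAlive(SqlConnection cn)
{
    if (cn.State != ConnectionState.Open)
        return false;                 // we (or our code) closed it

    try
    {
        using (SqlCommand cmd = new SqlCommand("SELECT 1", cn))
        {
            cmd.ExecuteScalar();      // forces a round trip to the server
            return true;
        }
    }
    catch (SqlException)
    {
        return false;                 // State may still say Open, but the link is dead
    }
}
```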

Pooling was invented for situations where connecting to the database takes
a long time or where the application has to close the connection (as in
ASP). In client/server or local database use, there is no advantage to
using the connection pool. When your c/s system starts stressing the
remote server, it starts to make sense to consider a more active
connect/disconnect strategy.

Another advantage to leaving the connection open is the ability to build
useful server state. This means #temp tables, server-side cursors, or
custom SET options that make your application work more efficiently.
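
As a small example of that server state, a #temp table created on a
held-open connection stays around for later commands on the same connection
(the table and columns here are made up):

```
// Create per-session state that lives as long as this connection does.
using System.Data.SqlClient;

public static void PrepareSessionState(SqlConnection cn)
{
    using (SqlCommand cmd = new SqlCommand(
        "CREATE TABLE #RecentLookups (AccountNo varchar(20), LookedUpAt datetime)", cn))
    {
        cmd.ExecuteNonQuery();   // visible to every later command on this connection;
                                 // dropped automatically when the connection closes
    }
}
```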

hth

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
Guest

I have experience using Oracle and occasionally having connections drop. I
have 120 users of an Access front-end program working off an Oracle back end.
When they launch the app, the connection is created. The connection is
refreshable if they wish to click a button, but most never do. It just works
for almost everyone. There are 2 users who seem to lose their connection
around once a week, randomly. Since the connection is handled by Access and
linked tables, I have never coded anything to handle it. The users simply
restart the app. It would be possible to handle the error condition by
wrapping a SELECT against some small table in a try/catch; where it fails,
re-establish the connection and resume...
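
In my case Access linked tables handle the connection, but if you were
coding that probe yourself in ADO.NET it might look something like this
sketch (the DUAL probe and the helper name are just illustrations):

```
// Probe the Oracle connection with a trivial query; if it fails, rebuild it.
using System.Data.OracleClient;

public static void EnsureOracleConnection(ref OracleConnection cn, string connectionString)
{
    try
    {
        using (OracleCommand cmd = new OracleCommand("SELECT 1 FROM dual", cn))
            cmd.ExecuteScalar();                 // cheap round trip
    }
    catch (OracleException)
    {
        cn.Close();                              // the link dropped: re-establish and resume
        cn = new OracleConnection(connectionString);
        cn.Open();
    }
}
```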

Leaving the connection open all day is certainly the fastest method for my
program, and I have no plans to change it.

Thanks,

Geoff
 
