Stored procedure vs direct statement???


Guest

Serge,

As always, there is no easy answer. From my experience, it is best to
restrict access as much as possible to SP/UDF usage. With SQL 2005, we have
the added benefit of running CLR-based (C#, etc.) code directly on the SQL Server.

Performance:
SP/UDF usage CAN greatly improve performance, if coded well. They allow you
to accept parameters and run small optimizations that are not possible with
dynamically built queries passed from the client code (in terms of speed, anyway).
The execution plan of the SP/UDF is determined on first run and tweaked by
the server thereafter for some automated speed adjustments. This means that
a dynamic query has to rely only on the server's first guess at the best path
to run. A SP/UDF has the benefit of further improvement over time.
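Randy's contrast between parameterized calls and string-built queries can be illustrated outside SQL Server too. A minimal sketch (my own, using Python's stdlib sqlite3 rather than SQL Server; the table and values are invented): a parameterized query keeps one statement text for every call, which is what lets an engine reuse a cached plan, while string concatenation produces a fresh statement per value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(10.0,), (25.5,), (99.9,)])

# Parameterized: the SQL text is identical for every call, so one cached
# plan can serve all threshold values.
def total_above(threshold):
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE amount > ?",
        (threshold,),
    ).fetchone()
    return row[0]

# String-built: every threshold yields a distinct statement text, which
# defeats plan reuse (and invites SQL injection as a bonus).
def total_above_dynamic(threshold):
    row = conn.execute(
        f"SELECT COALESCE(SUM(amount), 0) FROM orders WHERE amount > {threshold}"
    ).fetchone()
    return row[0]
```

Both functions return the same totals; the difference is only in how many distinct statement texts the engine has to compile.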

Security:
I also do a lot of security checking from within the SP/UDFs to verify that
the user has the right to run the select/update/insert/delete command.
Sometimes these commands are driven by data states or conditions which lend
themselves well to SP/UDF integration.
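The kind of in-proc security check Randy describes can be sketched roughly like this (my own hypothetical illustration in Python/sqlite3, not his code; the rights table and names are invented): the data layer verifies the caller's right before the delete touches the base table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_rights (user_name TEXT, action TEXT);
    INSERT INTO customers VALUES (10, 'Acme');
    INSERT INTO user_rights VALUES ('alice', 'delete_customer');
""")

def delete_customer(user_name, customer_id):
    """Mimics a stored procedure that verifies the caller's right
    to run the DELETE before touching the base table."""
    allowed = conn.execute(
        "SELECT 1 FROM user_rights WHERE user_name = ? AND action = 'delete_customer'",
        (user_name,),
    ).fetchone()
    if allowed is None:
        raise PermissionError(f"{user_name} may not delete customers")
    conn.execute("DELETE FROM customers WHERE id = ?", (customer_id,))

delete_customer("alice", 10)   # permitted: the row is removed
# delete_customer("bob", 10)   # would raise PermissionError
```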

I have a highly distributed data application in the development channel
right now that relies heavily on this method. With it, I am able to tweak
out great performance gains and keep tight control over security. These are
things that client-based queries just cannot deliver.

Good luck on your project.

Randy
 

Sericinus hunter

Randy said:
Performance:
SP/UDF usage CAN greatly improve performance, if coded well. They allow you
to accept parameters and run small optimizations that are not possible with
dynamically built queries passed from the client code (in terms of speed, anyway).
The execution plan of the SP/UDF is determined on first run and tweaked by
the server thereafter for some automated speed adjustments. This means that
a dynamic query has to rely only on the server's first guess at the best path
to run.

Is it really so? To my understanding, SQL Server caches the execution plan for
any query, whether it is encapsulated in a stored procedure or is just in-place.
 

Guest

It's true that procedures get cached (though they can go through a recompile
for a large number of reasons as well, which is a performance hit).

Probably the best help here is to take a look at the execution plan for
dynamic SQL versus the execution plan for a stored procedure. A stored
procedure generates an actual execution plan, but dynamic SQL doesn't (at
least it didn't last time I checked from within Query Analyzer), or at
least it is very basic.

Stored procedures are simply faster than ad-hoc queries. For example, how do
you cache a query plan that is dynamic? Which index is it going to use? Would
it be better to look at the heap instead (in other words, run a table scan)?
Remember, things like the order of the columns in a query, how the ORDER BY
clause is constructed, and what criteria you are using all affect the performance
of a query. It's usually better to have a SQL guy develop the SQL code and
have programmers develop the UI; that way the SQL code can be tuned
appropriately. This is a big advantage of the SQL Server model.

However, having said that, if you are building a small application, aren't
concerned with performance, and don't want the overhead that comes
with managing your stored procedures (permissions, performance tuning,
etc.), then just use ad-hoc queries.

At any rate, that has been my experience. You can use SQL Profiler to answer
almost all the questions that have been presented in this discussion thus
far.

Hope this helps,
Bob
 

Sericinus hunter

BobD said:
It's true that procedures get cached (though they can go through a recompile
for a large number of reasons as well, which is a performance hit).

The same is true for ad-hoc queries.
Probably the best help here is to take a look at the execution plan for
dynamic SQL versus the execution plan for a stored procedure. A stored
procedure generates an actual execution plan, but dynamic SQL doesn't (at
least it didn't last time I checked from within Query Analyzer), or at
least it is very basic.

I don't understand this. Can you elaborate? Any query needs an execution
plan in order to be executed. Why would this plan be different for an ad-hoc
query?
Stored procedures are simply faster than ad-hoc queries.

Why? Both are just chunks of SQL code.
For example, how do you cache a query plan that is dynamic?

What is a dynamic query plan? A query gets compiled, its execution plan gets
cached, and then it is executed. I see no problem with this.
Which index is it going to use? Would
it be better to look at the heap instead (in other words, run a table scan)?
Remember, things like the order of the columns in a query, how the ORDER BY
clause is constructed, and what criteria you are using all affect the performance
of a query.

This is all true, but not relevant. If you type in a similar query which differs
only by, say, the ORDER BY clause, it is a different query and it will cause a new
plan to be generated and cached. But if you only change parameter values, SQL
Server will reuse the already existing plan.
The only thing I can imagine in favor of stored procedures is that when
SQL Server is looking for a cached plan, it uses the procedure name, while
for an ad-hoc query it needs to parse the query and calculate some sort of key
to use for the plan lookup. The former might be faster, but I don't know
how the engine works in this respect.
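Sericinus hunter's point is easy to check empirically. A small sketch (mine, using Python's stdlib sqlite3, where EXPLAIN QUERY PLAN stands in for SQL Server's showplan; the schema is invented): two statements that differ only in a bound parameter value get the same plan, while changing the statement text itself (adding an ORDER BY) produces a different one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX idx_a ON t (a)")

def plan(sql, params=()):
    # Collect the 'detail' column of SQLite's EXPLAIN QUERY PLAN output.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql, params)]

# Same statement text, different parameter values: one plan serves both.
p1 = plan("SELECT * FROM t WHERE a > ?", (5,))
p2 = plan("SELECT * FROM t WHERE a > ?", (500,))
assert p1 == p2

# Change the statement text (ORDER BY on an unindexed column) and the
# optimizer produces a different plan, cached under a different key.
p3 = plan("SELECT * FROM t WHERE a > ? ORDER BY b", (5,))
assert p3 != p1
```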
It's usually better to have a SQL guy develop the SQL code and
have programmers develop the UI; that way the SQL code can be tuned
appropriately. This is a big advantage of the SQL Server model.

I am not arguing with that. In fact, I am for stored procedures with both hands.
However, the argument about ad-hoc queries not being cached has become so popular
that I just wanted to clear it up.
 

Guest

Hi there,

If you get a chance, try some of the things I suggested. I recognize your
point of view, but before you refute the point, please do actually check out
what I said. Try opening up Query Analyzer and checking out the query plan, or
use Profiler to capture it that way. There's a ton of information on how to
use these tools to answer the questions you are debating; just take the time
to look into it and I think you'll find the answers you are looking for.

Here's some help on using profiler to that end:
http://www.sql-server-performance.com/rd_query_tuning.asp

Also, here is a quote from sql-server-performance.com: "One of the biggest
advantages of using stored procedures over not using stored procedures is the
ability to significantly reduce network traffic. And the more network traffic
that can be reduced, the better the overall performance of your SQL
Server-based applications."
http://www.sql-server-performance.com/stored_procedures.asp

Another quote from the same website: "As you know, one of the biggest
reasons to use stored procedures instead of ad-hoc queries is the performance
gained by using them."

If you work with large databases, you'll begin to see the advantages of
stored procedures. Just my humble opinion, take it for what it's worth.

Cheers mate,
Bob
 

Sericinus hunter

BobD said:
Hi there,

If you get a chance, try some of the things I suggested. I recognize your
point of view, but before you refute the point, please do actually check out
what I said. Try opening up Query Analyzer and checking out the query plan, or
use Profiler to capture it that way.

I have a procedure with two selects in it, the second being from
a table-valued UDF which in turn is one select from a join of six tables.
I executed this procedure and then cut and pasted its code into
a separate window, including the code for the UDF, and executed that one too.
The execution plans for both are identical, except that the second one
is divided into individual SELECT nodes, while the first one is under
a single EXECUTE node.
Is this what you suggested I try?
Also, here is a quote from sql-server-performance.com: "One of the biggest
advantages of using stored procedures over not using stored procedures is the
ability to significantly reduce network traffic.

No doubt about this one.

[...]
Another quote from the same website: "As you know, one of the biggest
reasons to use stored procedures instead of ad-hoc queries is the performance
gained by using them."

And this one is at least debatable. There are a lot of places on the net
with long threads of holy war on the subject.
 

William (Bill) Vaughn

This debate has gone on for years. The SQL Server team at MS has done what it can to make ad-hoc queries perform better and yes, they compile to the same execution plans as SPs. Those plans (the compiled code, if you will) are cached in the same memory as SP plans. Yes, there are aspects of SPs that make them more efficient, as I outline below. However, the real difference is not always in execution performance but in developer productivity. My research has found that the vast majority (over 90%) of the shops out there use SPs over ad-hoc queries. They use them because:

a. SPs can be changed in situ without recoding, retesting, and redeploying an application.
b. They require discipline within the organization to ensure that changes made to the database and its schema are done in an orderly fashion.
c. They are more secure, as DBAs rarely permit direct access to base tables but do grant limited rights to SPs.
d. They encourage developers to use Command object Parameter lists that deal with a litany of security and formatting issues.
e. They find that OUTPUT parameters are far more efficient than rowsets (returned by a SELECT).
f. They find that now that SPs can be written in VB.NET or C#, some very tough jobs can be done on the server instead of on the client. This saves transporting data in bulk to the client for this processing, again improving performance.

I could go on (and I do in my books), but you get the idea.

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 

Frans Bouma [C# MVP]

William said:
This debate has gone on for years. The SQL Server team at MS has done
what it can to make ad-hoc queries perform better and yes, they
compile to the same execution plans as SPs. Those plans (the compiled
code if you will) are cached in the same memory as SP plans. Yes,
there are aspects of SPs that make them more efficient as I outline
below. However, the real difference is not always in execution
performance but in developer productivity. My research has found that
the vast majority (over 90%) of the shops out there use SPs over
ad-hoc queries.

Depends on how you produce your ad-hoc queries, of course. Any O/R
mapper can produce queries which often outperform CRUD procs, and also
offer big development-speed gains over hand-written procs for each
table.
They use them because:
a. SPs can be changed in situ without recoding, retesting and
redeploying an application.

This is specific to the situation they're used in. Often procs can't
be changed without altering the calling code as well, which means
changes in multiple places and, worse, made by different people not
working together (in a lot of cases), which often means delays,
politics, and disputes over who controls what (does a proc signature
change drive calling-code changes, or vice versa?).

Furthermore, altering a proc is often argued to be a good thing and an
advantage, but in reality this is a risky business: a proc can be called
from a lot of routines in your application. Altering the proc is
therefore often not possible, so a NEW proc is required. This creates
maintenance nightmares, because you then have to document which parts of the
application call which proc (if more apps use the same procs, it's even
trickier).
b. They require discipline within the organization to ensure that
changes made to the database and its schema are done in an orderly
fashion.

What does this have to do with stored procedures?
c. They are more secure, as DBAs rarely permit direct
access to base tables but do grant limited rights to SPs.

Yeah, pr_DeleteCustomer(@customerID) is really more secure... when will
people stop bringing up this myth over and over again? If Joe uses an
application X, and X logs into the DB using Joe's account (or whatever
account is built in, it doesn't matter), and X offers Joe the option to
delete a customer within the scope of the application, then Joe can delete
that customer (or any customer) from the db using the proc called by X, via
a normal query toolkit and the same user as X uses, since X simply calls a
proc. Joe can do:

EXEC pr_DeleteCustomer 10

and the customer with id 10 is removed. How does pr_DeleteCustomer prevent
Joe from doing that?
d. They encourage developers to use Command object Parameter lists
that deal with a litany of security and formatting issues.

You think?

I do:

    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        adapter.SaveEntity(myCustomer);
    }

and it saves myCustomer, its orders, each order detail row, the related
product rows and other rows in the right order, syncs FKs, etc., all with
dynamic SQL. Where are the parameters? I don't see them. And the code above
is database-agnostic: I can create a simple factory for my DataAccessAdapter
object which makes the code above save the whole graph in SQL Server, Oracle,
or another db.

And on top of that, it uses insert and update statements (whichever
applies) which can often be faster than a proc call, simply because
they only update or insert the fields which have to be updated. Say
your customer table has 10 fields and your order table has 10 fields.
Your update procs for customer and order then either accept ALL
field values and simply update all fields, OR accept nullable
parameters and do COALESCE voodoo, OR contain IF statements (a no-go,
but some people still write them).
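Frans's point about updating only the changed fields can be sketched as follows (my own illustration, not LLBLGen Pro code; the table and column names are made up). A dynamically assembled but still fully parameterized UPDATE touches only the dirty columns, where a one-size-fits-all proc would have to write all of them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT, phone TEXT)"
)
conn.execute("INSERT INTO customer VALUES (1, 'Acme', 'Utrecht', '555-0100')")

def update_customer(customer_id, changed):
    """Builds UPDATE ... SET only for the fields that actually changed.
    The SQL text is assembled dynamically (column names come from the
    known entity mapping, never from user input), but every value is
    still a bound parameter."""
    if not changed:
        return
    assignments = ", ".join(f"{col} = ?" for col in changed)
    sql = f"UPDATE customer SET {assignments} WHERE id = ?"
    conn.execute(sql, (*changed.values(), customer_id))

update_customer(1, {"city": "Amsterdam"})  # touches one column, not all four
```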

Now tell me, can you beat those four lines of code (two, actually), with your
procs and the call code behind the scenes, fully debugged and tested? Is
that more productive?

Then I didn't even throw in automatic concurrency code added to the
query, when necessary.

Besides that, how are you going to call procs without parameter lists?
e. They find that OUTPUT parameters are far more efficient than rowsets
(returned by a SELECT).

Sadly, for every context in the application where a value from the db
is required, a new scalar or output-parameter proc has to be written to
feed the app the data required.

With an application of, say, 200 tables or more, this will likely run
into 1000+ procs (the CRUD ones, plus the select-all/one/filter
ones). I'm sure you will agree with me that if you have a proc tier
with 1000+ procs maintained by group A and the actual application
maintained by group B, it will get messy over time.
f. They find that now that SPs can be written in VB.NET or C#, some
very tough jobs can be done on the server instead of on the client.
This saves transporting data in bulk to the client for this
processing, again improving performance. I could go on (and I do in
my books), but you get the idea.

And finally we come to the one point where procs are the best option:
data processing in long-running queries.

What I always find funny is that in debates between dynamic SQL and procs,
this one point isn't used by the pro-proc people (let me call the group
favoring procs in the debate the 'pro-proc people'), or at least only
later on in the discussion (if ever). And data processing is one of the
reasons why you could opt for a proc for a given piece of
functionality. (For the readers who wonder why: the roundtrips of
transferring data back and forth to the client app to do the processing
there are often slower than doing the processing in interpreted (!) SQL
code. With C#/VB.NET code on the server, it's even more clear that
long-running processing code can help.) One caveat: C#/VB.NET code
maintenance on the server isn't that friendly.

Bill, what I find weird in your plea, if I may call it that, is that
you apparently ignore the downsides of procs, which can hurt the
productivity and maintainability of an application so much that procs
are not that good a choice for CRUD operations. I deliberately make
that distinction, CRUD vs. data processing, because it makes the
discussion clearer: most of these debates are about CRUD, not
data processing.

With CRUD procs, you run into problems when your app needs these kinds
of things:
- fetch n rows of X filtered on related tables Y and Z using parameters
A, B ...
- fetch n rows of X, plus all these n rows' related Y and Z rows
- fetch rows of X, filter them and order them on field a
AND then:
- fetch rows of X, filter them and order them on field b
etc.
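The fetch variations above are exactly what a small dynamic query builder handles without writing a proc per permutation. A sketch (my own, with an invented schema) that composes the filter and the ORDER BY at call time while keeping the values parameterized:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (id INTEGER PRIMARY KEY, a INTEGER, b INTEGER);
    INSERT INTO x (a, b) VALUES (3, 1), (1, 5), (2, 9);
""")

# Whitelist of sortable columns: identifiers can't be bound parameters,
# so they must come from a known set, never from raw user input.
ALLOWED_ORDER = {"a", "b"}

def fetch_x(min_a=None, order_by=None):
    """One builder covers 'filter on a' x 'order on a or b' x 'no order'."""
    sql, params = "SELECT id, a, b FROM x", []
    if min_a is not None:
        sql += " WHERE a >= ?"
        params.append(min_a)
    if order_by is not None:
        if order_by not in ALLOWED_ORDER:
            raise ValueError(f"cannot order by {order_by!r}")
        sql += f" ORDER BY {order_by}"
    return conn.execute(sql, params).fetchall()

rows_by_a = fetch_x(min_a=2, order_by="a")  # same filter, ordered on field a
rows_by_b = fetch_x(min_a=2, order_by="b")  # same filter, ordered on field b
```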

Of course you can write special procs for these situations, but with a
database of regular enterprise size you quickly run into the problem
that you have to write a lot of custom procs. The reason is that how
the actual resultset looks, and on which set logic it is based, depends
highly on the application context in which the data is required. This
leads to DBA <-> developer discussions, which are overhead, can lead to
conflicts (and often do), and thus to delays.

Coming back to 'developer productivity', which was your main point at
the start of your post: I don't see it. On the contrary, it's not
there; it takes longer, EVEN compared to hand-written queries in code.
Anyone who has done development of n-tier apps which call procs (I have)
knows that if your app needs a change and the change requires different
data, the proc change is a roadblock: you either have to maintain the
proc AND the calling code yourself (if you may alter the procs) OR have
to request a proc change or a new proc, if the proc can't be changed.

I have to add that I also don't think hand-writing raw SQL in your
application is a good way to do data access; however, today that's no
longer necessary. With the large number of good, free (or
commercial) O/R mapping frameworks out there, for every development
team there's a solution available which fits their budget and frees
them from writing raw SQL in their apps AND gives them full development
productivity: they don't have to wait for the DBA to write the proc
they want. They don't have to debate with the DBA for hours to convince
him/her that the proc is necessary, they don't have to wait till the
proc is updated to bring the calling code into production, and
they don't have to wait till the DBA has tested the proc code through
and through: their SQL _is_ tested.

DBAs are necessary, also for teams using an O/R mapper. But instead of
working against them (in a lot of organisations the DBA is placed in
a different section of the organisation than the development team(s)),
they should work together; after all, they work for the same organisation.
The DBA should consult the team on when to use a join-based query over
relationships and when to use a subquery, for example (most O/R mappers
out there (except DLinq and a few others) offer choices about what to
create, a subquery or a join, for example in 1:n fetches with filters
on the n side).

I don't have the illusion that the 'use procs because <insert myth
list here>' arguments will go away soon, or even will go away in the
future. What I do hope, though, is that people become more realistic about
the various options they have when it comes to data access: by weighing
the real pros and cons of each option and using the techniques available
per functionality aspect. I think that debate is much healthier AND
more valuable to developers than the bickering over which side is right
and which side is wrong.

FB

--
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------
 

Sericinus hunter

Frans Bouma [C# MVP] wrote:
[...]

Thanks, Frans. Excellent post, very solid and balanced point of view.
 

Cor Ligthert [MVP]

Frans,

I am waiting in full expectation for Bill's answer to this.

(You know that I am on your side in this, so I cannot do that.)

However, I think it is time that this discussion is done well for once.

Cor
 

Frans Bouma [C# MVP]

Cor said:
Frans,

I am waiting in full expectation for Bill's answer to this.

(You know that I am on your side in this, so I cannot do that.)

However, I think it is time that this discussion is done well for
once.

The sad thing is: there shouldn't be any dispute over what's to be
used. I mean: it's as silly as 'use VB.NET!' 'NO!! Use C#' "No idiots!
Use Java!!!111"....

If it were 1996 today, and you weren't in possession of an expensive
TopLink license, then yes, Bill would have a good point, as the tooling
to do dynamic SQL without writing a single SQL statement wouldn't
have been available.

However, the points of the past don't apply anymore.

FB

 

Sericinus hunter

Frans said:
The sad thing is: there shouldn't be any dispute over what's to be
used. I mean: it's as silly as 'use VB.NET!' 'NO!! Use C#' "No idiots!
Use Java!!!111"....

It is not about that. It is about the fact that very wrong arguments
are so widely spread that people simply take them as axioms and don't
allow even the slightest doubt about them.
As for what to use, every specific situation will dictate the way to go.
One just needs to understand the implications, and discussions like this
are very useful in that respect.
 

William (Bill) Vaughn

You're right Frans, and the millions upon millions of DBAs and serious IT
developers all over the world are wrong. Right.

 

Cor Ligthert [MVP]

Bill,
You're right Frans, and the millions upon millions of DBAs and serious IT
developers all over the world are wrong. Right.
Probably the USA would not exist in its current form if Columbus had
listened to those who argued with him in the same way. Those millions
back then insisted that the world was flat. Every other opinion was
made ridiculous without any proof, or was called heretical.

I have seen reactions like this from some regulars in this newsgroup
before. However, I had truly not expected it from you.

It is Easter, so take an egg and do what Columbus did: show that an egg
can stand upright even though everybody said it was impossible.

Happy Easter.

Cor
 

Frans Bouma [C# MVP]

William said:
You're right Frans, and the millions upon millions of DBAs and
serious IT developers all over the world are wrong. Right.

Oh, so all O/R mapper users across the world aren't serious IT
developers?

If this is your argument, then I indeed think the discussion is over.

FB

 

Frans Bouma [C# MVP]

Sericinus said:
It is not about that. It is about the fact that very wrong
arguments are so widely spread that people simply take them as axioms
and don't allow even the slightest doubt about them.

True, though in a way I can understand some of those arguments,
because they weren't always false; today, however, most of them are a
little dated because of the enormous progress made in tooling over
the past decade or so. I mean, years ago procs were compiled, though
today they're not (at least not in most modern databases; if you write
your procs in C or another non-SQL language in DB2, you do get
compilation), for example.
As for what to use, every specific situation will dictate the way
to go. One just needs to understand the implications, and discussions
like this are very useful in that respect.

... up to the point when bitterness enters the conversation and
the discussion turns nasty.

But I agree with you that it depends on the situation, and based on the
aspects of the situation you should choose the way you want to work
with data; just do it based on reality-checked facts, not myths.

I'm pessimistic about whether we'll ever get there. What I wrote in my
other posting about politics is IMHO the core reason why this problem
is still a problem in a lot of organisations. The DBAs are in one part
of the organisation, the developers in another. There's often a
lot of distrust towards the other group, and of course it's
understandable that if you take the responsibility of writing procs
away from the DBAs and make them 'system administrators', it will make
them angry, or at least it would be logical if they got upset.

However, that doesn't have to be the case. If organisations make
the DBA part of the development team, make the DBA the consultant who
advises the developers on what to do best, and make them work closely
together, then what you describe so well may indeed be within reach and
these discussions can become a thing of the past.

A thing that's often forgotten, for example, is the role of the DBA in
writing functions instead of procs. This is a .NET-oriented newsgroup,
so people here work in an OO environment. If the DBA works
together with the development team (as a real team member) and writes
functions to do long-running filters, those functions can then be used by
the developers in O/R mapping scenarios (I use the term O/R mapping,
but any OO <-> db layer will do), and you'll have the best of both
worlds: data-processing logic written in the db, and also the flexible
way of working with data in an OO environment with compile-time-checked
queries, etc.

Even though this discussion ended the same way all the other
discussions on this topic have ended, with bitterness and a bad feeling,
I hope the reader now has a good set of options for making a proper
decision.

FB

 

Cor Ligthert [MVP]

Frans,
I'm pessimistic about if we'll ever get there. What I wrote in my
other posting about politics is IMHO the core reason why this problem
is still a problem in a lot of organisations. The DBAs are in one part
of the organisation, the developers in another part.

I once lost a full point in an AMBI examination (for saying that I knew
the rule but would never believe in it). It went from 8 to 7 points, so
not that bad.

However, under that kind of rule, "separation of duties", the DBA should
never be allowed to build the sprocs. The only thing he may do is
install them. An administrator should never have control over anything.

Just my thought,

Cor
 
