SELECTs vs Stored Procedures


William (Bill) Vaughn

How is this different than what applications have to do otherwise? Is
drag-and-drop or hand-coding the application any different because you have
moved the update logic to the middle tier?

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 

m.posseth

Well, I totally agree with this. However, I would like to add that when we talk about parameters that change, we mean the parameter signature of the proc, not the parameter values, as only a signature change requires a forced recompile while testing.

At least, that is how I learned it in my 70-229 MCSE course.

regards

Michel Posseth [MCP]



Ah, close.
Stored procedure query plans change--each time they're compiled. No, they're not recompiled each time they're executed, but rather when they're forced out of the cache or when the DBA or developer forces recompilation for one reason or another. That's a problem. When you test an SP with a specific set of parameters, a plan matching those parameters is created. When the parameters change, a new plan is NOT created to match--not if there is a plan in the cache. Hopefully the old plan works efficiently with the new parameters--but it might not.
SQL Server 2005 changes this a bit--it can recompile parts of the plan without affecting other parts.
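
(For anyone who wants to experiment, here's a minimal sketch of the knobs involved; the proc and table names are invented, but sp_recompile, WITH RECOMPILE and DBCC FREEPROCCACHE are the real mechanisms:

-- Mark an existing proc so its plan is rebuilt on the next execution
EXEC sp_recompile 'dbo.MyProc'

-- Or define the proc to get a fresh plan on every execution (never cached)
CREATE PROCEDURE dbo.MyProc2 @Parm int
WITH RECOMPILE
AS
SELECT * FROM dbo.MyTable WHERE MyColumn = @Parm

-- Or flush the entire plan cache--server-wide, so think twice in production
DBCC FREEPROCCACHE

WITH RECOMPILE trades away plan reuse for a plan that always matches the current parameter values.)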

hth

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________

m.posseth said:
Stored procs don't have their plans precompiled. At least not since SQL Server 7. Only on DB2 are procedures really precompiled and stored in a precompiled fashion (not always there either, but most people opt for that option, although it can hurt runtime optimization).

In more detail regarding this, for those who might be interested:

SQL Server 2000 stores only the source for stored procedures and triggers. When a stored procedure or trigger is first executed, the source is compiled into an execution plan. If the stored procedure or trigger is executed again before the execution plan is aged from memory, the relational engine detects the existing plan and reuses it; otherwise, a new plan is built.

The above process flow is similar to the process that SQL Server follows for all SQL statements.

The main performance advantage that stored procedures and triggers have in SQL 2000 is that their SQL statements are always the same; therefore, the relational engine can match them with existing execution plans.

Stored procedures had a more pronounced performance advantage over other SQL statements in earlier versions of SQL Server, which did not attempt to reuse execution plans for batches that were not stored procedures or triggers.
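
(If you want to watch this plan reuse happen in SQL Server 2000 yourself, a quick sketch; master.dbo.syscacheobjects is the real system table, the LIKE filter is just an example:

-- Shows cached plans and how often each one has been reused
SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%Orders%'

Run a proc twice and its usecounts should climb, instead of a second plan appearing.)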


regards

Michel Posseth [MCP]






Frans Bouma said:
Stored procs don't have their plans precompiled. At least not since SQL Server 7. Only on DB2 are procedures really precompiled and stored in a precompiled fashion (not always there either, but most people opt for that option, although it can hurt runtime optimization).

The reason for this is that statistics are only known at runtime, and only for a fresh query can the optimizer truly optimize the plan based on those statistics. See BOL about execution plans.

As he's talking about SELECTs, the gain of a proc is not really an issue (having processing on the server, so you avoid pumping loads of data to the client first and then back to the server).

FB

--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------
 

m.posseth

Well, even then the statement still stands. How dynamic does SQL have to be before it no longer benefits from the execution plan cache, which works on any SQL statement whether it is stored in a proc or not? That is exactly why Frans Bouma is right in his statements above.
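
For example, a proc that builds its statement at runtime is optimized like any other ad-hoc batch (a sketch; sp_executesql is the real call, the proc and table names are invented):

CREATE PROCEDURE dbo.SearchCustomers @city nvarchar(30) = NULL
AS
DECLARE @sql nvarchar(500)
SET @sql = N'SELECT * FROM dbo.Customers'
IF @city IS NOT NULL
    SET @sql = @sql + N' WHERE City = @city'
-- The statement text can differ per call, so the proc's own cached plan
-- tells you nothing about the plan used for @sql
EXEC sp_executesql @sql, N'@city nvarchar(30)', @city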

regards

Michel Posseth

Miha Markic said:
the main performance advantage that stored procedures and triggers have in SQL 2000 is that their SQL statements are always the same; therefore, the relational engine can match them with existing execution plans

Not exactly; nothing prohibits you from crafting dynamic SQL within a stored procedure.

--
Miha Markic [MVP C#] - RightHand .NET consulting & development
www.rthand.com
Blog: http://cs.rthand.com/blogs/blog_with_righthand/
SLODUG - Slovene Developer Users Group www.codezone-si.info
 

William (Bill) Vaughn

Yup, changing parameter VALUES does not affect the cached plan--it's used
regardless of how well it works given the parameter values provided. If you
change the signature, you're certainly going to get a new SP and thus a new
plan when it first executes.

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________

m.posseth said:
Well, I totally agree with this. However, I would like to add that when we talk about parameters that change, we mean the parameter signature of the proc, not the parameter values, as only a signature change requires a forced recompile while testing.

At least, that is how I learned it in my 70-229 MCSE course.

regards

Michel Posseth [MCP]



Ah, close.
Stored procedure query plans change--each time they're compiled. No, they're not recompiled each time they're executed, but rather when they're forced out of the cache or when the DBA or developer forces recompilation for one reason or another. That's a problem. When you test an SP with a specific set of parameters, a plan matching those parameters is created. When the parameters change, a new plan is NOT created to match--not if there is a plan in the cache. Hopefully the old plan works efficiently with the new parameters--but it might not.
SQL Server 2005 changes this a bit--it can recompile parts of the plan without affecting other parts.

hth

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no
rights.
__________________________________

m.posseth said:
Stored procs don't have their plans precompiled. At least not since SQL Server 7. Only on DB2 are procedures really precompiled and stored in a precompiled fashion (not always there either, but most people opt for that option, although it can hurt runtime optimization).

In more detail regarding this, for those who might be interested:

SQL Server 2000 stores only the source for stored procedures and triggers. When a stored procedure or trigger is first executed, the source is compiled into an execution plan. If the stored procedure or trigger is executed again before the execution plan is aged from memory, the relational engine detects the existing plan and reuses it; otherwise, a new plan is built.

The above process flow is similar to the process that SQL Server follows for all SQL statements.

The main performance advantage that stored procedures and triggers have in SQL 2000 is that their SQL statements are always the same; therefore, the relational engine can match them with existing execution plans.

Stored procedures had a more pronounced performance advantage over other SQL statements in earlier versions of SQL Server, which did not attempt to reuse execution plans for batches that were not stored procedures or triggers.


regards

Michel Posseth [MCP]






Frans Bouma said:
(e-mail address removed) wrote:

Hi john

They have their plans precompiled, so they are faster.

Stored procs don't have their plans precompiled. At least not since SQL Server 7. Only on DB2 are procedures really precompiled and stored in a precompiled fashion (not always there either, but most people opt for that option, although it can hurt runtime optimization).

The reason for this is that statistics are only known at runtime, and only for a fresh query can the optimizer truly optimize the plan based on those statistics. See BOL about execution plans.

As he's talking about SELECTs, the gain of a proc is not really an issue (having processing on the server, so you avoid pumping loads of data to the client first and then back to the server).

FB

--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------
 

Frans Bouma [C# MVP]

William said:
How is this different than what applications have to do otherwise? Is
drag-and-drop or hand-coding the application any different because
you have moved the update logic to the middle tier?

Well, with dynamic SQL generated at runtime, you don't have to do anything in that scenario in most cases, while in the case of a proc, you have to change more than one tier. That's not all: often there are two people or even two teams involved in such a change, which can be cumbersome, as it spans two or more tiers.

FB

--
 

Paul Clement

¤ Hi
¤
¤ When developing VB.NET WinForms apps bound to a SQL Server data source, is it
¤ preferable to use SELECTs or stored procedures to read and write data from/to
¤ SQL Server? Why?
¤

I would agree with most of the other sentiments expressing a preference for stored procedures.

The only issue I have with SPs is that development can be cumbersome if your data store is not supported in an integrated development environment: you can't see the SQL code or other database objects, so you typically have to open a separate tool to work with the SPs.

In addition, some designs may need to implement dynamic SQL, which may not translate well to an SP.

Otherwise, the benefits of using SPs far outweigh those for naked SQL.


Paul
~~~~
Microsoft MVP (Visual Basic)
 

Guest

Hi Frans. Always a pleasure to see your posts on this subject, even if we disagree on others. ;)

So, you're saying that putting "dbo." in front of your table names actually improves performance?

Fregas
 

Guest

Ok, guys, we've heard a lot of pros and cons about stored procedures versus dynamic (hopefully parameterized) SQL.

How do you handle optional parameters? For example, I want to query for orders by CustomerID, a date range, and/or ProductID. Any or all of these parameters could be included:

SELECT O.*
FROM Orders O
WHERE O.OrderID = @OrderID
AND O.ProductID = @ProductID
AND O.CustomerID = @CustomerID
AND O.DateOrder BETWEEN @startDate AND @endDate

How do I make the parameters optional? I've tried using IsNull() and Coalesce(), but these have slowed performance quite a bit for me, even in stored procs. With parameterized dynamic SQL it's not an issue, because I can change the query to not include one of the pieces of the WHERE clause. Would I just have to make a bunch of different stored procs for every combination of the parameters? Bleh...

This is very common in applications that require search functionality, BTW.

I'm not issuing this as a challenge or to be a prick...I really want to know.

Fregas
 

William (Bill) Vaughn

Ah well, we write the SP to set default values for the parameters (for
example "NULL"). The SP logic then incorporates that fact into its WHERE
clause logic.

WHERE (x = @MyXParm OR @myXParm IS NULL)
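
Applied to the Orders query from the earlier post, the pattern looks something like this (a sketch; the proc name is invented and the columns just follow that post):

CREATE PROCEDURE dbo.up_SearchOrders
    @OrderID int = NULL,
    @ProductID int = NULL,
    @CustomerID nchar(5) = NULL,
    @startDate datetime = NULL,
    @endDate datetime = NULL
AS
SELECT O.*
FROM Orders O
WHERE (O.OrderID = @OrderID OR @OrderID IS NULL)
  AND (O.ProductID = @ProductID OR @ProductID IS NULL)
  AND (O.CustomerID = @CustomerID OR @CustomerID IS NULL)
  AND ((O.DateOrder BETWEEN @startDate AND @endDate)
       OR @startDate IS NULL OR @endDate IS NULL)

Callers then pass only the parameters they care about; the rest default to NULL and drop out of the filter.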


--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 

Frans Bouma [C# MVP]

Fregas said:
Hi Frans. Always a pleasure to see your posts on this subject, even if we disagree on others. ;)
:p

So, you're saying that putting "dbo." in front of your table names actually improves performance?

Yes, as it makes it easier for the SQL optimizer to find a cached plan and re-use it.
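
For example (a sketch):

-- Unqualified: name resolution depends on the current user, so two users
-- executing this may not share a cached plan
SELECT * FROM Customers

-- Owner-qualified: resolves identically for every user, so the cached plan
-- can be matched and reused
SELECT * FROM dbo.Customers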

Frans


--
 

Frans Bouma [C# MVP]

William said:
Ah well, we write the SP to set default values for the parameters
(for example "NULL"). The SP logic then incorporates that fact into
its WHERE clause logic.

WHERE (x = @MyXParm OR @myXParm IS NULL)

... which throws out your 'procs are faster' argument ;), because this
is slower than normal, task-tailored dynamic parameterized SQL.
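
To show what I mean by task-tailored: build only the predicates you actually need, and still parameterize. Sketched here in T-SQL with sp_executesql (an O/R mapper or data layer would do the same string building client-side; the names are invented):

DECLARE @sql nvarchar(1000), @CustomerID nchar(5), @startDate datetime
SET @CustomerID = N'ALFKI'  -- pretend these two came in from the UI
SET @startDate = NULL

SET @sql = N'SELECT O.* FROM dbo.Orders O WHERE 1 = 1'
IF @CustomerID IS NOT NULL
    SET @sql = @sql + N' AND O.CustomerID = @CustomerID'
IF @startDate IS NOT NULL
    SET @sql = @sql + N' AND O.DateOrder >= @startDate'

-- Each distinct statement shape gets its own plan, tailored to the
-- predicates that are actually present
EXEC sp_executesql @sql,
     N'@CustomerID nchar(5), @startDate datetime',
     @CustomerID, @startDate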

Also, how is:

CREATE PROCEDURE dbo.pr_DeleteCustomer
@customerID int
AS
DELETE FROM dbo.Customers WHERE CustomerID = @customerID

more secure than firing this:
DELETE FROM dbo.Customers WHERE CustomerID = @customerID

as dynamic SQL? After all, say I use an app and I sometimes have to delete customers through the app. I then have access to the database, and can run the proc with QA, passing in a random ID value.


FB

--
 

Cor Ligthert [MVP]

Hi,

In my opinion, this kind of discussion always assumes that the SQL Server is addressed directly by its IP over the Internet.

That is, in my opinion, not the way things are done with .NET.

In, for instance, a normal ASP.NET application, no SQL code is transported over the public network (or, if IIS and SQL Server are separated by a router, that traffic should run in a very private part of the network).

If it is a Windows Forms application, then in my opinion a web service is the way to go when it is not on an internal network. Securing this kind of operation on an internal network is, in my view, the same as securing against anything workers might do; they probably have easier internal ways to get the information than attacking SQL parameters at some layer of the ISO/OSI stack.

Just my thought,

Cor
 

Guest

Yeah Bill, I seem to remember having performance problems with these kinds of WHERE clauses. It's about the same as using IsNull() or Coalesce(); they just didn't perform as well as dynamic SQL.

In defense of stored proc security, Frans: in your example, a hacker who gains access to the database by discovering the application's SQL Server login could then run the delete stored procedure pr_DeleteCustomer, but they'd have to know the procedure name and the IDs of the customers to delete them. However, they couldn't necessarily do DELETE FROM Customers, TRUNCATE, run a DTS package, etc., because the application's login doesn't have permissions to do those things. That's the theory, anyway.

Craig
 

Guest

I'm not sure if I'm understanding you correctly, but I'll add my 2 cents.

I would say that in a web hosting environment, for example, the SQL Server is usually accessible over the Internet so that developers can access it using Enterprise Manager. Therefore, security is a higher concern than if the SQL Server is only on a local internal network. However, in either case the bandwidth used in sending/receiving data to and from SQL Server DOES make a difference. It might be less noticeable if the SQL Server is on your local network, but it would still count.

In the case of a WinForms application (or maybe smart clients) in which the database must be accessed over the Internet because the users will be connecting from both inside and outside the company network, then I agree that some other kind of access to your data, such as web services, is a better choice than each client accessing the database directly. Although realistically, you would probably want to use some kind of remoting system to send/receive business objects, rather than just using web services for everything.

CSLA, for example (Rocky Lhotka's framework), has something called the DataPortal, which sends and receives your business objects to and from your web server or Windows client application. It is designed so that the Windows clients (or even multiple web services, if you have a web farm) can ask the DataPortal for all business objects. The DataPortal can reside on its own server and be the only thing that talks to the database, as opposed to having potentially hundreds of Windows clients all opening connections to the DB. This makes the application more scalable, because there are fewer connections and more connection pooling.

Even though there are a lot of things I don't like about CSLA, I would probably take this same approach if I were using Windows clients, or if I had to cluster my web server. I think CSLA handles this part for you pretty nicely. I might still use web services for some things, or just to avoid the hassle of remoting if I were building a relatively small, low-user-count application, but my understanding is that web services would not perform as well in most cases. Also, I prefer to work with objects directly, rather than have some web service wrapping them up for me or talking to the DB directly.

Craig
 

Frans Bouma [C# MVP]

Fregas said:
In defense of stored proc security, Frans: in your example, a hacker who gains access to the database by discovering the application's SQL Server login could then run the delete stored procedure pr_DeleteCustomer, but they'd have to know the procedure name and the IDs of the customers to delete them. However, they couldn't necessarily do DELETE FROM Customers, TRUNCATE, run a DTS package, etc., because the application's login doesn't have permissions to do those things. That's the theory, anyway.

Though, when an app uses dynamic SQL (parameterized) and uses a login specifically for that app (or a couple, for example one for admin usage and one for fetches of generic data), how can I truncate the table then? There's no difference, as the security defined on the table is the same as that defined on the proc: if I have the credentials defined on the object, I can execute an action on it, be it execute it or delete something.

Having a proc like the one I gave doesn't prevent a user from deleting a customer, hence it's not more secure.

Frans


--
 

Guest

Well, I think you might be mistaken, Frans. The security for the login to run dynamic SQL is a BIT broader than for the login to only run procs.

With dynamic SQL, the application has to have direct access to the table (at the very least read/write permissions). Therefore, a hacker who manages to get the login for the application using dynamic SQL can then run a query like:

DELETE FROM Customers

which deletes all the customers, basically the equivalent of a truncate.

However, if you use only stored procs, you can give the application's login access ONLY to execute procs, and NOT to the underlying table:

up_DELETE_CUSTOMER @CustomerID

in which case the hacker cannot simply blow away the whole table at once, but has to know the ID of every customer in order to do the same damage. It must be said that the hacker could potentially find another "SELECT" stored proc in order to find the customer IDs if he didn't know them already somehow.
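
(The permission setup behind this is something like the following sketch; the login name is invented, GRANT EXECUTE and DENY are the real statements:

-- The app's login may execute the proc...
GRANT EXECUTE ON up_DELETE_CUSTOMER TO AppLogin
-- ...but gets no rights on the table itself
DENY SELECT, INSERT, UPDATE, DELETE ON Customers TO AppLogin
)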

I agree that stored procs don't necessarily prevent the hacker from doing the same damage, but they make it more difficult, which is all security can hope to do anyway.

If the app's login has DBO access, then the hacker can also TRUNCATE and so forth, but not many applications actually need this level of access. However, many developers (myself included) are guilty of the following: we develop the application using a DBO account for convenience, but don't remove those extra permissions in production, because we're, well... lazy. I know at my company, few of the web applications we're developing are available outside our VPN, so security is already pretty tight.

I'm not saying procs are better by any means, just that I can understand the security benefits of their use even though I choose not to use them. In the performance realm, I have found dynamic SQL to be about as fast as procedures, and procs are actually SLOWER in many cases, like some of the optional-parameter queries I've mentioned elsewhere on here. Also, it is much more difficult to reuse code within/between stored procs, thus adding development time and maintenance headaches to your application.

Craig
 

Frans Bouma [C# MVP]

Fregas said:
Well, I think you might be mistaken, Frans. The security for the login to run dynamic SQL is a BIT broader than for the login to only run procs.

No, that's not the case; read on.
With dynamic SQL, the application has to have direct access to the table (at the very least read/write permissions). Therefore, a hacker who manages to get the login for the application using dynamic SQL can then run a query like:

DELETE FROM Customers

which deletes all the customers, basically the equivalent of a truncate.

However, if you use only stored procs, you can give the application's login access ONLY to execute procs, and NOT to the underlying table:

up_DELETE_CUSTOMER @CustomerID

in which case the hacker cannot simply blow away the whole table at once, but has to know the ID of every customer in order to do the same damage. It must be said that the hacker could potentially find another "SELECT" stored proc in order to find the customer IDs if he didn't know them already somehow.

Exactly. And not only that: if a hacker is able to delete one customer, it's already not safe. If the customer ID is numeric, like an int, he might have to write a loop to delete all of them, but that's just a matter of time.
I agree that stored procs don't necessarily prevent the hacker from doing the same damage, but they make it more difficult, which is all security can hope to do anyway.

It's either SAFE or not, but not 'a bit safe'. 'A bit safe' isn't safe; it's unsafe. So if someone argues that stored procedures are secure and dynamic SQL isn't, that person then has to prove how that can be achieved, because that's much harder than 99% of the people using stored procedures think (for starters: moving your complete app into the db, passing secret keys to procs which only act if the key is valid, etc.).

People tend to believe that as long as it is a bit safe, they're at least a bit more secure, but that's not true. If a hacker is after your data, he'll get it as soon as there's one tiny hole he can peek through.

Though, the vast majority of 'security'-related concerns aren't about hackers from the outside, but about the 'manager' with MS Access who wants to run ad-hoc reports on your 3TB machine. In that scenario, you can perfectly well limit access with normal db-role-based security on tables and a couple of logins, instead of one general login which can do everything.
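
(A sketch of what I mean, with invented names; sp_addrole and sp_addrolemember are the real SQL 2000 calls:

-- A read-only role for the report-running manager, limited to what he needs
EXEC sp_addrole 'ReportReaders'
GRANT SELECT ON dbo.Orders TO ReportReaders
EXEC sp_addrolemember 'ReportReaders', 'ManagerLogin'
)
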
If the app's login has DBO access, then the hacker can also TRUNCATE and so forth, but not many applications actually need this level of access. However, many developers (myself included) are guilty of the following: we develop the application using a DBO account for convenience, but don't remove those extra permissions in production, because we're, well... lazy. I know at my company, few of the web applications we're developing are available outside our VPN, so security is already pretty tight.

It's often laziness, especially in the area of 'I heard this, so it must be true; I don't have time to investigate it further'. Databases can be locked down pretty well, though often it's a matter of doing it at a higher level rather than a lower level. The problem is that when a DBA says, "I decide what's used, and I decide it will be stored procedures, and I write them, and that's final," the developers who write the code of the actual application are then faced with an API which might not be optimal for their application, or which consists of 90% CRUD procs and is therefore no more secure than any other approach, as such procs don't bring 100% security to the table.

Instead of working together, they work against each other, which is weird, as they all work for the same company. It's not new though. A lot of developers have to argue with system administrators a lot, because the system administrator doesn't want to take a new app into production if it hasn't been tested for a month on some obscure test server.
I'm not saying procs are better by any means, just that I can understand the security benefits of their use even though I choose not to use them. In the performance realm, I have found dynamic SQL to be about as fast as procedures, and procs are actually SLOWER in many cases, like some of the optional-parameter queries I've mentioned elsewhere on here. Also, it is much more difficult to reuse code within/between stored procs, thus adding development time and maintenance headaches to your application.

It's a bit of a weird thing: a team is asked to develop an n-tier application, and they are free to decide what they use in the BL and PL tiers, but the data layer is written by a person who is not part of the team and who, at the same time, has a very big say in the rest of the design of the application.

Because if you use procedures, it's harder to use objects in the BL and PL tiers. It's not hard to fill a set of customer entities with the result of a select proc call. It's MUCH harder to fetch a graph of entities efficiently with a set of procs (as in: impossible without writing a new proc for each occasion). By opting for procedures, you decide to go the dataset route, which is perhaps not what the team wants to use in their app, and THEY write the application.

Instead of using the DBA's knowledge (add the DBA as a db consultant to the team, so the DBA knows which tables are targeted often and, because the DBA knows which tables are large, can advise on using subqueries instead of joins or vice versa, through the usage of the abstract DAL), the DBA and the developers aren't working together, which is counterproductive.

"But it is more secuuuuuureee!!!"
No. An n-tier application has numerous areas where a hacker can inject code, alter data during the dataflow, etc. And most hackers who look for databases aren't interested in deleting data; they're interested in fetching data. So the obvious place to stick a listener component is outside the db, right below the PL.

Frans.




--
 

Cor Ligthert [MVP]

Frans,

You would be surprised: I agree with the overall trend of your message. I want to add something.
It's often laziness,

That is no problem for me if it is for investigation; however, a problem for me is that people often don't review their programs and leave things as they are once they have tried something and it seems to work. That makes, in my opinion, very big holes.

Just my thought,

Cor
 

Guest

Hi Frans,

I respectfully disagree. Most things in this world, including technology, are a matter of degrees. I think most security analysts understand that they can't make any system "unhackable", but it's a matter of balancing how much effort and inconvenience they put in versus the payoff of being more secure. I don't think it's a simple matter of "stored procedures are secure, dynamic SQL is not"; it depends on a lot of factors. On the whole, I think you have more security options with stored procedures, but that certainly doesn't make your database "completely secure", whatever that is. You're looking at it as a black-and-white question, and few things fit into neat categories like that.
Exactly. And not only that: if a hacker is able to delete one customer, it's already not safe. If the customer ID is numeric, like an int, he might have to write a loop to delete all of them, but that's just a matter of time.

And that extra time might cause the hacker to get noticed by intrusion detection, or get logged off because the network is only open for a specific window of time, etc. If the key is a GUID or something not easily "loopable", then it's going to be even harder. Many of the recommendations Microsoft gives for locking down IIS, for example, simply force the hacker to put in a lot more work to do the same damage, in the hope that the hacker will get caught or reach the extent of their knowledge and give up.


People tend to believe that as long as it is a bit safe, they're at least a bit more secure, but that's not true. If a hacker is after your data, he'll get it as soon as there's one tiny hole he can peek through.

Again, I disagree. For example, I have a Belkin router. That router has firewall protection. It has basic security. Am I completely safe? No, I'm not. Am I as safe as if I had a professional come in and install a Checkpoint firewall? Not nearly. Am I safer than if I had no firewall protection at all? Definitely.
Though, the vast majority of 'security'-related concerns aren't about hackers from the outside, but about the 'manager' with MS Access who wants to run ad-hoc reports on your 3TB machine. In that scenario, you can perfectly well limit access with normal db-role-based security on tables and a couple of logins, instead of one general login which can do everything.

I do agree with this, although I would say that managers, end users, etc. shouldn't have any access to the database, but should be working through the application so that both security and business rules can be applied. Obviously, if they're developing something like Crystal Reports, then the security will need to be at the database level, in which case you're right: stored procs aren't really helping security much at all, since the developer needs access to certain tables or views.
Instead of working together, they work against each other, which is weird, as they all work for the same company. It's not new though. A lot of developers have to argue with system administrators a lot, because the system administrator doesn't want to take a new app into production if it hasn't been tested for a month on some obscure test server.

It's a bit of a weird thing: a team is asked to develop an n-tier application, and they are free to decide what they use in the BL and PL tiers, but the data layer is written by a person who is not part of the team and who, at the same time, has a very big say in the rest of the design of the application.

Because if you use procedures, it's harder to use objects in the BL and PL tiers. It's not hard to fill a set of customer entities with the result of a select proc call. It's MUCH harder to fetch a graph of entities efficiently with a set of procs (as in: impossible without writing a new proc for each occasion). By opting for procedures, you decide to go the dataset route, which is perhaps not what the team wants to use in their app, and THEY write the application.

Instead of using the DBA's knowledge (add the DBA as a db consultant to the team, so the DBA knows which tables are targeted often and, because the DBA knows which tables are large, can advise on using subqueries instead of joins or vice versa, through the usage of the abstract DAL), the DBA and the developers aren't working together, which is counterproductive.

I'm fortunate enough not to have experienced that, although I've heard it from others. I've only worked in two places that had a DBA, and in both of those places it was a pleasant relationship. We created our own queries, procedures, views, etc., and the DBA helped maintain indexes, gave us tips, did backups, and so on. They stayed out of our way, we stayed out of theirs, and for the most part we helped each other when needed. My current DBA is great, and my managers understand that the developers need to create data structures rather than have them dictated to them.

This is a bit off topic, but I have to say I dislike the whole idea of a team project where certain members work on one tier, like the database or the business objects, and other people work on other tiers, like the presentation. I'd rather break people up into developing certain areas of the app and have them work from top to bottom. Although it's good for a designer to create images and chop up the HTML, you really need developers to consume objects and create the UI, and that's easier if the developer has also created his business objects and database structures (whether just tables, views, or stored procs), because he's working with his own API(s).

I've seen the "build one tier at a time" approach to development fail many times, even with only one or two developers. We had a guy at my current job get fired because he fell so far behind. He created so many objects and test scripts and so forth, but never built his UI. He was saving that for last. The problem was that the "test scripts" didn't accurately test his API very well, so he had a lot of code but no screens to show for it and no real way of knowing if his API worked. I much prefer to build one piece of the app, usually one or two screens at a time: design an object model, build my tables, build my data access code, build my objects, integrate them into the UI. This way I have something to show, and I know that this piece of functionality really works.

Also, I knew another good developer who wrote all his tables, then all his CRUD stuff in stored procedures, then all his business objects, then all his UI. This led to a fair amount of wasted effort, because things often change during development. Many of his stored procs and business objects never got used, or had to be rewritten because priorities changed, new requirements were discovered, and better ways of writing those layers were found while developing the app.

So I build one "module" at a time, from top to bottom.

"But it is more secuuuuuureee!!!"
no. An n-tier application has numerous areas where a hacker can inject
code, alter data during the dataflow etc. And most hackers who look for
databases aren't interested in deleting data, they're interested in
fetching data. So the obvious place to stick a listener component is
outside the db, right below the PL.

I agree with this too. Very astute observation Frans.

Fregas
 
