Writing updates from a wide DataTable

Nigel Norris

I have a DataTable with a large number of columns - 50 or more. I'm following
the recommendations to use stored procedures to apply updates to the
database, and using the Enterprise Library. So I end up with a stored
procedure with 50+ parameters, and an Update statement that updates the same
number of columns. In a typical use case only a few columns will actually be
updated, but my DAL generally doesn't know which - it just gets a DataSet
with some changes.

Everything works fine, but I feel uncomfortable with this solution. I don't
like rewriting all those columns I haven't changed. I don't like shipping
all this unchanged data around.

Should I feel uncomfortable? Are there better ways of handling this kind of
situation?

Thanks...
 
Marina

I would personally not use stored procedures in this case.

You can build your own update statement. Use parameters, and only add the
ones for the columns you know changed. That way you aren't worrying about
columns that haven't changed, or sending that unchanged data over.
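
The thread is about ADO.NET, but the idea is language-neutral. Here is a minimal sketch of Marina's suggestion in Python with SQLite, assuming a plain dict of changed values stands in for the DataRow's changed columns (the table and column names are made up):

```python
import sqlite3

def build_update(table, key_col, key_val, changed):
    """Build a parameterized UPDATE touching only the changed columns."""
    assignments = ", ".join(f"{col} = ?" for col in changed)
    sql = f"UPDATE {table} SET {assignments} WHERE {key_col} = ?"
    params = list(changed.values()) + [key_val]
    return sql, params

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY,"
             " name TEXT, dept TEXT, salary REAL)")
conn.execute("INSERT INTO emp VALUES (1, 'Ann', 'Sales', 50000)")

# Only two of the three non-key columns actually changed.
sql, params = build_update("emp", "id", 1, {"dept": "Support", "salary": 52000})
conn.execute(sql, params)
print(conn.execute("SELECT name, dept, salary FROM emp WHERE id = 1").fetchone())
# → ('Ann', 'Support', 52000.0)
```

In ADO.NET the same loop would walk the DataRow's columns, comparing `DataRowVersion.Original` against `DataRowVersion.Current` to decide which columns to include.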
 
William \(Bill\) Vaughn

I would feel uncomfortable too--at one point in time you might want to
restrict IO rights to specific columns or trip over a column that can't be
updated at all.

If you roll your own UpdateCommand, you don't have to set columns you don't
want to set and you can still use a SP. The UpdateCommand can define all 50
Parameters but pass NULL or your own "flag" value (you might want to SET some
columns to NULL) that the SP logic can use to decide not to SET that column.
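
One way to express the "NULL means leave this column alone" idea inside the statement itself is COALESCE. A sketch in Python/SQLite follows (hypothetical table; in T-SQL the equivalent logic would live in the stored procedure body). Note Bill's caveat still applies: with plain COALESCE you cannot SET a column to NULL, so a real procedure would need a separate flag for that case.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
conn.execute("INSERT INTO emp VALUES (1, 'Ann', 'Sales')")

# All columns are always listed; passing NULL keeps the old value.
conn.execute(
    "UPDATE emp SET name = COALESCE(?, name), dept = COALESCE(?, dept)"
    " WHERE id = ?",
    (None, "Support", 1),
)
print(conn.execute("SELECT name, dept FROM emp WHERE id = 1").fetchone())
# → ('Ann', 'Support')
```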

Creating your own UPDATE statement on the fly is what ADO classic did--based
on changed columns (one of the Update Criteria property settings). It's not
a bad idea, but moves more schema-dependent logic to the client than I would
typically like to do.

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
Sahil Malik [MVP]

Are query plans cached with that?

- Sahil Malik [MVP]
ADO.NET 2.0 book -
http://codebetter.com/blogs/sahil.malik/archive/2005/05/13/63199.aspx
-------------------------------------------------------------------------------------------

William (Bill) Vaughn said:
I would feel uncomfortable too--at one point in time you might want to
restrict IO rights to specific columns or trip over a column that can't be
updated at all.

If you roll your own UpdateCommand, you don't have to set columns you
don't want to set and you can still use a SP. The UpdateCommand can define
all 50 Parameters but pass NULL or your own "flag" (you might want to SET
some columns to NULL) that could be used in the SP logic that says not to
SET this column.

Creating your own UPDATE statement on the fly is what ADO classic
did--based on changed columns (one of the Update Criteria property
settings). It's not a bad idea, but moves more schema-dependent logic to
the client than I would typically like to do.

 
Sahil Malik [MVP]

Here is my take on it. Don't do stored procs just for the heck of writing a
stored proc.

Now, if the query only ever takes a small number of distinct shapes, i.e. it
may have 2, 4, 36, or 50 parameters, but never 37, 23, 24, etc. (or at least
only very infrequently), then go for dynamic SQL.

Otherwise, SQL Server can work with up to 2100 parameters - don't sweat 50.

And are you planning on doing column-level access restrictions anytime soon?

- Sahil Malik [MVP]
ADO.NET 2.0 book -
http://codebetter.com/blogs/sahil.malik/archive/2005/05/13/63199.aspx
 
Nigel Norris

Marina, Bill, Sahil,

Thanks for the great comments - very helpful.

In past projects I've used a mix of stored procedures and direct SQL (or ADO
generated updates). Reading through the various debates on using an SP layer
as part of the DAL, I've been almost convinced by the 'purist' view that all
access to the database should be mediated by an SP layer. And that's what
the Patterns and Practices folks seem to be advocating. But it does get a
bit messy in cases like this.

Maybe the 'right' answer depends on the architecture. In my case I'm looking
at 3-tier, and the web service sitting in front of my database is an
integral part of the access to the database. I think I'm prepared to view
the DAL in the middle tier as something that will probably need to be
updated for schema changes, and only use stored procedures if there seems to
be a particular need for flexibility - typically on complex queries.

If I were doing 2-tier, I think I'd still look at the pure SP layer, and find
a way of tackling my wide table. One alternative to using dynamic SQL in the
SP to build a custom update was to group the columns in the table and
provide an update procedure for each group. The client would have to
know enough to invoke the right procedure(s), but the columns do tend to
naturally fall into groups that are updated together.
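
The grouping idea can be sketched as a simple dispatch: given the set of changed columns, pick which per-group update procedure(s) the client would invoke. The group and procedure names below are hypothetical.

```python
# Map each (hypothetical) group procedure to the columns it covers.
GROUPS = {
    "usp_UpdateEmpName":    {"first_name", "last_name", "title"},
    "usp_UpdateEmpAddress": {"street", "city", "postcode"},
    "usp_UpdateEmpPay":     {"salary", "grade"},
}

def procs_for(changed_cols):
    """Return the group procedures covering the changed columns."""
    return sorted(proc for proc, cols in GROUPS.items()
                  if cols & changed_cols)

print(procs_for({"city", "salary"}))
# → ['usp_UpdateEmpAddress', 'usp_UpdateEmpPay']
```

If a change spans two groups, two procedure calls result, so columns that really are updated together should land in the same group.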

I guess I was surprised that this type of problem was not seen as more of an
issue - all the discussion and samples in the P&P documents seem to pick
examples where the table only has two or three columns to update.

More comments inline below.

Nigel

William (Bill) Vaughn said:
If you roll your own UpdateCommand, you don't have to set columns you
don't want to set and you can still use a SP. The UpdateCommand can define
all 50 Parameters but pass NULL or your own "flag" (you might want to SET
some columns to NULL) that could be used in the SP logic that says not to
SET this column.

Most of my columns allow nulls and I can't be sure I won't want to set them
to null, so I'd really need a flag parameter for each one to say whether to
use it or not. Now my proc has 100+ parameters :-(

I don't know much T-SQL. For Oracle, I think it would be quite hard work in
PL/SQL building a completely dynamic update command.

Creating your own UPDATE statement on the fly is what ADO classic
did--based on changed columns (one of the Update Criteria property
settings). It's not a bad idea, but moves more schema-dependent logic to
the client than I would typically like to do.

Yes - that's one of the things that bothered me. Presumably the classic ADO
folks feel that dropping it is a backward step. I guess Microsoft wouldn't
have made that change without feeling the classic ADO way was a bad idea.

And are you planning on doing column-level access restrictions anytime
soon?

Nope - but who can plan for what a DBA might do in future :)

Are query plans cached with that?

Good point. I can't speak for SQL Server - I know Oracle will cache query
plans provided you've used parameterized commands. If you've built the
entire command using literals, then the query plan caching doesn't do you
much good. Probably not an issue for updates, but definitely an issue for
dynamically built queries.
 
Sahil Malik [MVP]

Nigel,

SQL Server will also cache query plans for parameterized queries. The only
issue is, if you keep changing the number of parameters, it ends up caching
too many plans, i.e. the cache isn't being used very effectively. Oracle
would be the same. Also, you don't have to go to 100 parameters just because
you have nullable columns - use a where clause that looks like this:
Where (emp_name = ? OR ((emp_name IS NULL) AND (? IS NULL))).
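
Sahil's null-safe pattern can be checked quickly in Python/SQLite (hypothetical table): binding the same value to both placeholders makes the predicate match NULL rows too, without an extra flag parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, emp_name TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, "Ann"), (2, None)])

# Plain "emp_name = ?" never matches a NULL row; the second clause does.
sql = ("SELECT id FROM emp WHERE (emp_name = ? "
       "OR ((emp_name IS NULL) AND (? IS NULL)))")
print(conn.execute(sql, ("Ann", "Ann")).fetchall())   # → [(1,)]
print(conn.execute(sql, (None, None)).fetchall())     # → [(2,)]
```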

- Sahil Malik [MVP]
ADO.NET 2.0 book -
http://codebetter.com/blogs/sahil.malik/archive/2005/05/13/63199.aspx
----------------------------------------------------------------------------
 
