Want to write your SQL statements and even stored procedures in pure C#?


Chad Z. Hower aka Kudzu

New article and free source code I've posted:

How would you like to write your SQL statements and even stored procedures in pure C#? For
example:

xQuery.Where = CustomerTbl.Col.NameFirst == "Chad";

The full statement is resolved at compile time, and is type safe. But let's not limit it to something
simple. How about this:

xQuery.Where =
(CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
& CustomerTbl.Col.CustomerID > 100 & CustomerTbl.Col.Tag != View.Null;

Look too good to be true? Read on. Not only is this possible, but much more. And did I mention, it's
also database independent and works with any SQL-based ADO.NET provider?

http://www.codeproject.com/useritems/CSharpSQL.asp
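
For the curious: the trick that makes an expression like the one above compile at all is C#
operator overloading. Here is a minimal, hypothetical sketch - these are NOT the actual classes
from the article, just the idea of comparison operators that return condition objects instead of
bool:

// Hypothetical sketch - invented types, for illustration only.
public class Condition {
    public readonly string Sql;
    public Condition(string sql) { Sql = sql; }

    // & and | combine two conditions into a larger SQL fragment.
    public static Condition operator &(Condition a, Condition b) {
        return new Condition("(" + a.Sql + " AND " + b.Sql + ")");
    }
    public static Condition operator |(Condition a, Condition b) {
        return new Condition("(" + a.Sql + " OR " + b.Sql + ")");
    }
}

public class StringColumn {
    public readonly string Name;
    public StringColumn(string name) { Name = name; }

    // == yields a Condition rather than a bool: the comparison is checked
    // by the compiler, while the SQL text is only built at run time.
    public static Condition operator ==(StringColumn c, string value) {
        return new Condition("\"" + c.Name + "\" = '" + value.Replace("'", "''") + "'");
    }
    public static Condition operator !=(StringColumn c, string value) {
        return new Condition("\"" + c.Name + "\" <> '" + value.Replace("'", "''") + "'");
    }

    // Required companions when == and != are overloaded.
    public override bool Equals(object obj) { return ReferenceEquals(this, obj); }
    public override int GetHashCode() { return Name.GetHashCode(); }
}

With those two types, (xCol == "Chad") | (xCol == "Hadi") produces the fragment
("NameFirst" = 'Chad' OR "NameFirst" = 'Hadi'); and since '==' binds tighter than '|' and '&'
in C#, even the un-parenthesized form parses the way the SQL reads.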
 

Chad Z. Hower aka Kudzu

Rogas69 said:
Frankly speaking I wouldn't say it is inline SQL. Just another wrapper.

Actually it can do inline SQL too, but I think you've missed the point.

It's a lot more than another wrapper; we have tons of those already. See the syntax below? That's
C#, compiled code which later evaluates to SQL for you. It's early bound and type safe.
 

Rogas69

No, I reviewed the code and still can't see anything special. The syntax
below is maybe nice, but how many lines of code do you have to produce to
enable it? And if the database schema changes, you have to rewrite classes like
CustomerTblBase in your example.
By inline SQL I mean something similar to the PowerBuilder feature.

Peter
 

Chad Z. Hower aka Kudzu

Rogas69 said:
No, I reviewed the code and still can't see anything special. The
syntax below is maybe nice, but how many lines of code do you have to

I had to write zero. It's generated, but if I want to do it by hand, it's easy:

[TableName("Customer")]
public class CustomerTblRow : View.Row {
    public DbInt32 CustomerID;
    public DbString NameFirst;
    public DbString NameLast;
    public DbInt32 Tag;
    public DbInt32 CountryID;
}

public class CustomerTblCol : View.Columns {
    public ColumnInt32 CustomerID;
    public ColumnString NameFirst;
    public ColumnString NameLast;
    public ColumnInt32 Tag;
    public ColumnInt32 CountryID;
}

The second class is optional, and only needed if I build the where clauses.

A full example is like this:

[Select("select `CustomerID`, `NameLast` from `Customer`")]
public class CustomerRow : View.Row {
    public DbInt32 CustomerID;
    public DbString NameLast;
}

[Test]
public void InlineSQL() {
    using (Transaction xTx = new Transaction(_DB)) {
        using (Query xCustomers = new Query(_DB, typeof(CustomerRow))) {
            xCustomers.SelectAll();
            foreach (CustomerRow xCustomer in xCustomers) {
                int i = xCustomer.CustomerID;
                string s = xCustomer.NameLast;
            }
        }
        xTx.Commit();
    }
}
Rogas69 said:
...produce to enable it? And if the database schema changes, you have to
rewrite classes like CustomerTblBase in your example.

No, no, no... you truly miss the point. The classes remain in sync with the database, and since
it's all early bound and type safe, any changes in the database are found and resolved at COMPILE
TIME.

This is based on the newer code since the article, but even in the article it's not that different.
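
To make that concrete with a hypothetical example: suppose the Customer table's NameFirst
column is renamed to FirstName and the wrappers are regenerated. A line that still uses the old
name, such as

xQuery.Where = CustomerTbl.Col.NameFirst == "Chad";

no longer compiles, because the regenerated CustomerTblCol class no longer declares a NameFirst
field. The schema change shows up as a build error at the exact offending line, instead of as a
runtime failure in a code path that testing may never reach.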
 

Chad Z. Hower aka Kudzu

Rogas69 said:
No, I reviewed the code and still can't see anything special. The

Without offending - then you either truly missed the where clause part, or you don't understand what
is going on.
 

Frans Bouma [C# MVP]

Chad said:
Rogas69 said:
No, I reviewed the code and still can't see anything special. The
syntax below is maybe nice, but how many lines of code do you have to

I had to write zero. It's generated, but if I want to do it by hand,
it's easy:

[wrapper class and full example code snipped - quoted in full in Chad's post above]
Rogas69 said:
...produce to enable it? And if the database schema changes, you have to
rewrite classes like CustomerTblBase in your example.

No, no, no... you truly miss the point. The classes remain in sync with
the database, and since it's all early bound and type safe, any changes
in the database are found and resolved at COMPILE TIME.

In sync with the database? Just like that? I don't think so. :) Either
you have to update class mappings based on a changed schema, or you
have to migrate the schema based on class changes. I can tell you,
that's far from 'automatic'.

Although I support dyn. SQL of course, the problem I have with your
reasoning is that you claim your solution is truly database
independent, while at the same time requiring things in the schema
(like a horrible sequence table in SQL Server...)

FB

--
 

Frans Bouma [C# MVP]

Chad said:
Without offending - then you either truly missed the where clause
part, or you dont understand what is going on.

I think he means that what you present as an invention is not that
'new'; every O/R mapper for the past 10-15 years has used the technique
you presented (albeit with different syntax).

FB

--
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
I think he means that what you present as an invention is not that
'new'; every O/R mapper for the past 10-15 years has used the technique
you presented (albeit with different syntax).

Well, yes and no. From that viewpoint, yes. The idea is certainly nothing new, but the
implementation is quite unique. IMO normal approaches fall far short of the goal. The whole
capability, especially the mapping of conditionals, is quite unique.

Other DALs either resort to an awkward syntax or, more commonly, allow free-text strings and
literals to handle the filters etc., which defeats much of the usefulness.
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
In sync with the database? Just like that? I don't think so. :)

Yes it is. I think you are not understanding exactly what's going on. I'll be writing more articles
soon. But this "system" is in use in several Fortune 500s, some of which are processing millions of
transactions a day. But what I've exposed in the article is just a peek at the DAL layer.
Frans Bouma said:
Either you have to update class mappings based on a changed schema, or you...

They aren't class mappings per se - it's an object WRAPPER of the DB tables. When you embed
SQL in your app and you change the DB, you cannot find mismatches until run time. With this
system, it regenerates the interfaces and all conflicts are found at compile time and then easily
resolved from there.
Frans Bouma said:
...have to migrate the schema based on class changes. I can tell you,
that's far from 'automatic'.

Because you are not seeing the whole picture. Until I finish the other articles, remember that what
you have seen so far is just a peek at the DAL, nothing more. So your assumptions of how it's used
in a bigger-picture scenario are invalid, because you are assuming that it's a 1:1 replacement for
whatever you use now.
Frans Bouma said:
Although I support dyn. SQL of course, the problem I have with your
reasoning is that you claim your solution is truly database...

It's fully DB independent. We have systems running on the older version which run on Oracle,
SQL Server, DB2, and Firebird. In fact the demo runs on Firebird and SQL Server, and those are
fairly different databases; Firebird is closer to Oracle.
Frans Bouma said:
...independent, while at the same time requiring things in the schema
(like a horrible sequence table in SQL Server...)

First of all, if you read the article you would note that this is optional and not a requirement.
Second of all - sequences are not horrible; identity fields are the poor option. Identity fields
cause bottlenecks in transactions and multi-table inserts by forcing record inserts prematurely.
Identity/autoinc fields are widely regarded as poor practice.

Given the choice of emulating autoinc/identity fields in the databases that use sequences, or
emulating sequences in the others, it's obvious which choice to make.
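
To illustrate the difference in plain ADO.NET (hypothetical table and sequence names; this is not
code from the framework):

using System;
using System.Data;

class KeyStrategies {
    // Sequence style (Oracle syntax shown): the key can be fetched before
    // any row exists, so master and detail rows can be prepared together
    // and inserted at the end of the transaction.
    static int NextKeyFromSequence(IDbCommand xCmd) {
        xCmd.CommandText = "SELECT CustomerSeq.NEXTVAL FROM DUAL";
        return Convert.ToInt32(xCmd.ExecuteScalar());
    }

    // Identity style (SQL Server syntax shown): the key does not exist
    // until the master row has been inserted, which forces that insert
    // (and the locks it takes) to happen before the detail rows are ready.
    static int KeyFromIdentity(IDbCommand xCmd) {
        xCmd.CommandText = "INSERT INTO Customer (NameLast) VALUES ('Hower');"
            + " SELECT SCOPE_IDENTITY()";
        return Convert.ToInt32(xCmd.ExecuteScalar());
    }
}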
 

Frans Bouma [C# MVP]

Chad said:
Well, yes and no. From that viewpoint, yes. The idea is certainly
nothing new, but the implementation is quite unique. IMO normal
approaches fall far short of the goal. The whole capability, especially
the mapping of conditionals, is quite unique.

Mapping of conditionals?
Chad said:
Other DALs either resort to an awkward syntax or, more commonly, allow
free-text strings and literals to handle the filters etc., which
defeats much of the usefulness.

heh, trust me, I've seen a lot of O/R mapper query languages/systems
and yours is far from unique. Neither is mine, though, although it too
is type safe.

FB

--
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
Mapping of conditionals?

Where =
    (CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
    & CustomerTbl.Col.CustomerID > aMinID & CustomerTbl.Col.Tag != View.Null;
Frans Bouma said:
heh, trust me, I've seen a lot of O/R mapper query languages/systems
and yours is far from unique. Neither is mine, though, although it too
is type safe.

The basic part isn't unique - but the syntax and level of integration into C# is. Try writing the
above in any other system, and you'll find either an odd syntax that doesn't really represent the
way we think about it, or it will be a free string, which is bad.

The above is C# code, completely resolved at compile time. It's not evaluated until run time,
though, and only when it's needed. You can even do things like this:

Where = CustomerTbl.Col.CustomerID > aMinID;
if (aNullTagsOnly) {
    Where = Where & CustomerTbl.Col.Tag == View.Null;
}

And break it into pieces. If this is not unique as you claim, then please point me to another such
system that offers such a level of integration in C#, aside from C Omega.
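
For what it's worth, the operator choice is deliberate. Assuming a condition type along the lines
of the hypothetical sketch earlier in the thread:

// C# does not let user types overload the short-circuit '&&' and '||'
// directly, while '&' and '|' are overloadable and bind less tightly
// than '==', '!=' and '>'. So this line:
Where = CustomerTbl.Col.CustomerID > aMinID & CustomerTbl.Col.Tag != View.Null;
// parses as:
Where = (CustomerTbl.Col.CustomerID > aMinID) & (CustomerTbl.Col.Tag != View.Null);
// which is exactly the grouping the equivalent SQL WHERE clause implies.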
 

Frans Bouma [C# MVP]

Chad said:
Yes it is. I think you are not understanding exactly what's going on.
I'll be writing more articles soon. But this "system" is in use in
several Fortune 500s, some of which are processing millions of
transactions a day. But what I've exposed in the article is just a
peek at the DAL layer.

Chad, I know what I'm talking about, so you can drop the marketing BS.
If you define a class in code with n fields, mapped on a given table T,
and the table changes (gets a new PK, a field type changes, new fields
are added, or two fields are removed), your class in code is out-of-sync
with the table it's mapped on. So you have to update the class in code.
That's not going to be automatic: either the system has to migrate the
meta-data and regenerate the class (as I do) or the developer has to
adjust the class manually.

You can't get away with "but my code will take care of the
differences", as that's not going to work with, for example, fields
which are removed from the table.
Chad said:
They aren't class mappings per se - it's an object WRAPPER of the DB
tables. When you embed SQL in your app and you change the DB, you
cannot find mismatches until run time. With this system, it
regenerates the interfaces and all conflicts are found at compile time
and then easily resolved from there.

Like I said, every O/R mapper takes care of that at compile time...
to some extent. Even your code does. For example, if you remove a field
and you have queries defined which use that field, it won't compile,
but that's natural. Type-safe query languages make sure errors like
that are found at compile time, but that's nothing new.
Chad said:
Because you are not seeing the whole picture.

haha, well I think I do, Chad.
Chad said:
Until I finish the other articles, remember that what you have seen so
far is just a peek at the DAL, nothing more. So your assumptions of how
it's used in a bigger-picture scenario are invalid, because you are
assuming that it's a 1:1 replacement for whatever you use now.

Let me say this: for the past 3 years I've worked full time on one of
the market-leading O/R mappers for .NET. You can be sure I've seen every
problem you try to solve as well. I've seen your CodeProject article
and in there I haven't seen anything new or special. That's OK of
course, though don't try to sell known stuff as new things, because
that's not what people here need.

You made a solution for the data-access problem, good. There are
others on the planet who have thought of that solution as well, and
perhaps way before you did.
Chad said:
It's fully DB independent. We have systems running on the older version
which run on Oracle, SQL Server, DB2, and Firebird. In fact the demo
runs on Firebird and SQL Server, and those are fairly different
databases; Firebird is closer to Oracle.

'DB independent' is only true within the scope of the abstraction level
the DAL engine offers. For example, if you have a like-filter on
SQL Server in your application and you port your app to Oracle, you'd
better have a case-insensitivity flag in your DAL engine or it won't work.

DB independent also suggests that it doesn't matter how the database
schema is constructed, but that's not true. There are always sacrifices
to be made: no bit/unique_identifier types on SQL Server if you want to
use Oracle as well, as Oracle doesn't have equivalent types.
Chad said:
First of all, if you read the article you would note that this is
optional and not a requirement. Second of all - sequences are not
horrible; identity fields are the poor option. Identity fields cause
bottlenecks in transactions and multi-table inserts by forcing record
inserts prematurely. Identity/autoinc fields are widely regarded as
poor practice.

I was referring to the fact that a sequence table was required on
SQL Server to get autonumber field functionality.

By definition, sequences and identity columns are the same: identity
fields use an internal sequence (the table sequence), but in general
it's the same, with one exception: you can have multiple sequenced
fields per table. Not that you want to use that often, but you can.

I don't see how identity fields cause bottlenecks in transactions: for
sequences you too have to grab the new sequence value. And about
premature inserts: I guess you're referring to gaps in the sequence
if a transaction rolls back? With sequences you have the same thing, as
that's a logical outcome of the fact that once a value is generated, it
can't be rolled back, as that would be unsafe.
Chad said:
Given the choice of emulating autoinc/identity fields in the databases
that use sequences, or emulating sequences in the others, it's obvious
which choice to make.

Sequence objects require more maintenance (the source of a sequenced
value is not defined with the destination), and the #2 database in the
world, DB2, offers both, for example; so I don't think identity is all
that bad compared to sequences.

(I'm not referring to the semantic issue of sequenced values as PKs.)

FB

--
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
out-of-sync with the table it's mapped on. So you have to update the
class in code. That's not going to be automatic: either the system has
to migrate the meta-data and regenerate the class (as I do) or the
developer has to adjust the class manually.

But it is automatic. And it's not something unique to this system - you merely regenerate the
wrappers. The wrappers are isolated from any user code.
Frans Bouma said:
You can't get away with "but my code will take care of the
differences", as that's not going to work with, for example, fields
which are removed from the table.

No, you miss the point. Nothing magic fixes up the differences. What it does, by early binding,
is let you detect all problems at compile time rather than *hoping* to find them at run time.
What's unique about this? Well, nothing really compared to most other similar solutions - except
that nearly every other solution is only early bound on some aspects, while filters etc. fail
because they use strings.
Frans Bouma said:
Like I said, every O/R mapper takes care of that at compile time...

In fact very few do. They do for columns, but most don't for entity names, filters, etc. So they
solve some of the problem, but still leave conflicts to be found at runtime.

Frans Bouma said:
Let me say this: for the past 3 years I've worked full time on one of
the market-leading O/R mappers for .NET. You can be sure I've seen
every problem you try to solve as well. I've seen your CodeProject
article and in there I haven't seen anything new or special. That's
OK of course, though don't try to sell known stuff as new things,
because that's not what people here need.

If you say it's not new - please show me another O/R mapper that is fully early bound, filters
and all. I've seen very few, and those that I've seen have quite an awkward syntax.

Second, this is not really an O/R mapper, although it may appear so. Future articles will clear this up.
Frans Bouma said:
You made a solution for the data-access problem, good. There are
others on the planet who have thought of that solution as well, and
perhaps way before you did.

Please point me to one that is 100% early bound, and has a "native" type syntax.
Frans Bouma said:
DB independent also suggests that it doesn't matter how the database
schema is constructed, but that's not true. There are always
sacrifices to be made: no bit/unique_identifier types on SQL Server
if you want to use Oracle as well, as Oracle doesn't have equivalent
types.

Yes, of course. You have to stick to a certain amount of commonality. If you use DB-specific
column types etc., of course it won't port.
Frans Bouma said:
I was referring to the fact that a sequence table was required on
SQL Server to get autonumber field functionality.

No, again. It's not required anywhere by the framework, and the article states that. It's an
optional function that users can use. The framework does not use it anywhere.
Frans Bouma said:
By definition, sequences and identity columns are the same: identity
fields use an internal sequence (the table sequence), but in general
it's the same, with one exception: you can have multiple sequenced
fields per table. Not that you want to use that often, but you can.

They are similar, yes, but identity values cannot be obtained until after an insert.
Frans Bouma said:
I don't see how identity fields cause bottlenecks in transactions: for
sequences you too have to grab the new sequence value. And about
premature inserts: I guess you're referring to gaps in the sequence if
a transaction rolls back? With sequences you have the same thing, as
that's a logical outcome of the fact that once a value is generated,
it can't be rolled back, as that would be unsafe.

Read any of the many articles out there regarding the issues with identity fields - they describe
the complexities they add much better than I can. As for bottlenecks - they can create bottlenecks
by forcing inserts of master rows prematurely, and thus forcing premature locks or even premature
transaction begins.
Frans Bouma said:
Sequence objects require more maintenance (the source of a sequenced
value is not defined with the destination)...

Trivial compared to the complexity identity fields inject into code.
Frans Bouma said:
...the #2 database in the world, DB2, offers both, for example; so I
don't think identity is all that bad compared to sequences.

I don't say they are evil. In fact, if I were able to design it, I'd build it so that fields are
identity, but you can prefetch values. Maybe that's what DB2 does; it's been a while since I
worked with DB2. I primarily work with SQL Server, Oracle, and Firebird.
Frans Bouma said:
(I'm not referring to the semantic issue of sequenced values as PKs.)

I'm not referring to that either. I don't care what the values are; my issue with identity fields
is that they are not available until after an insert, and looking across DB platforms, sequences
are much easier to simulate than identity fields.
 

Rogas69

Chad said:
Without offending - then you either truly missed the where clause part,
or you don't understand what is going on.

Without offending, I have the impression that you think others do not
understand you at all. I say: the syntax of your wrapper, although it may
resemble SQL, is not the inline SQL you were advertising in the first
post. On the other hand, your code down there concatenates strings, so
the difference is only the level of invocations until you get to the database.

This is an example from your TableView.cs
StringBuilder xSQL = new StringBuilder("INSERT INTO `" + TableName + "` (");
StringBuilder xValues = new StringBuilder(") VALUES (");

There is another thread discussing pros and cons of string concatenation vs
using parameters. No comments.
Peter
 

Chad Z. Hower aka Kudzu

Rogas69 said:
...first post. On the other hand, your code down there concatenates
strings, so the difference is only the level of invocations until you
get to the database.

There are many more differences - it's not a matter of how it gets to the database, it's a matter
of what the interface to the developer looks like.
Rogas69 said:
This is an example from your TableView.cs
StringBuilder xSQL = new StringBuilder("INSERT INTO `" + TableName + "` (");
StringBuilder xValues = new StringBuilder(") VALUES (");

There is another thread discussing pros and cons of string
concatenation vs using parameters. No comments.

String concatenation is late bound - I'm not speaking of performance issues, but of the fact that
if there are database differences, free-text strings in developer code are bad, as errors will not
be caught until run time. But again - there are many DALs and typed data readers that can do
something similar, especially regarding the reading of data. Moving to inserts and updates is not
a big leap. The major "new" item is the use of where clauses such as I posted.
 

Chad Z. Hower aka Kudzu

Rogas69 said:
Without offending, I have the impression that you think others do not
understand you at all. I say: the syntax of your wrapper, although it may

Actually - others don't fully understand it, as evidenced by many messages here. Pieces are
understood - but not all aspects. I'm taking this input now and rewriting significant portions of
the article.
Rogas69 said:
...first post. On the other hand, your code down there concatenates
strings, so the difference is only the level of invocations until you get...

The library does - not the developer. And the developer works only with objects that are typed and
completely resolved at compile time.
 

Chad Z. Hower aka Kudzu

As for the rewrite, here are two such items that I hope will convey certain things more clearly.

Ever wished you could truly embed SQL functionality in your C# code without using strings or late
binding? Imagine being able to write complex where clauses purely in C#:

xQuery.Where =
(CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
& CustomerTbl.Col.CustomerID > 100 & CustomerTbl.Col.Tag != View.Null;

Look closely. This is C# code, not SQL. It's resolved and bound at compile time, but evaluated at
run time. In this article I will provide an introduction to this method and full source code for
your use.

---------------------

Indy.Data is neither an O/R mapper nor a code generator in strict terms. Instead, Indy.Data is
something different, and something similar. In this article this will not be completely apparent
yet, so for now consider Indy.Data to be a Data Access Library (DAL) only.

Indy.Data does support code generation to keep the DAL objects in sync with your database;
however, it does not create mappings to business logic, nor does it generate its own SQL during
generation.

Indy.Data is not truly an O/R mapper either. Indy.Data is more flexible in that it is not restricted
to parameterized queries or stored procedures. Indy.Data also does not provide query construction
and mapping, but instead relies on existing objects in the database, or on provided SQL.

Indy.Data provides object wrappers on a 1:1 basis for database tables, views, stored procedures,
and SQL statements. These objects can be regenerated at any time to keep them in sync with
database changes. In future articles I will detail how Indy.Data can be used to perform the same
functions as a code generator or O/R mapper, but in a slightly different way. However, for the
scope of this article, consider Indy.Data an advanced implementation of an ADO.NET command
object: where you would use an ADO.NET command object, Indy.Data is suitable.
 

Frans Bouma [C# MVP]

Chad said:
Where =
    (CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
    & CustomerTbl.Col.CustomerID > aMinID & CustomerTbl.Col.Tag != View.Null;

oooh, a list of predicates using badly overloaded operators ('|' and
'%', for example, are already C# operators). Also, this doesn't work
in VB.NET, which doesn't support operator overloading (.NET 1.x).
Chad said:
The basic part isn't unique - but the syntax and level of integration
into C# is. Try writing the above in any other system, and you'll find
either an odd syntax that doesn't really represent the way we think
about it, or it will be a free string, which is bad.

Heh, no :)
WHERE clauses are predicate lists. So if I can produce a list of
predicates, I represent the thinking of a WHERE clause.

Without using operator overloading, so it also works in VB.NET, and it
uses objects which are extensible with your own predicate objects (so
you can add a specific SOUNDEX predicate object, for example, if you
want to). Let's use the most verbose way:

PredicateExpression orFilter = new PredicateExpression();
orFilter.Add(PredicateFactory.CompareValue(
    CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, "Chad"));
orFilter.AddWithOr(PredicateFactory.CompareValue(
    CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, "Hadi"));
PredicateExpression filter = new PredicateExpression();
filter.Add(orFilter);
filter.AddWithAnd(PredicateFactory.CompareValue(
    CustomerFieldIndex.CustomerID, ComparisonOperator.GreaterThan, aMinID));
filter.AddWithAnd(PredicateFactory.CompareNull(
    CustomerFieldIndex.Tag, true));

The fun thing is, I can create factories for these in my app, as
they're just objects. No need for fancy operators.
Chad said:
The above is C# code, completely resolved at compile time. It's not
evaluated until run time, though, and only when it's needed. You can
even do things like this:

Where = CustomerTbl.Col.CustomerID > aMinID;
if (aNullTagsOnly) {
    Where = Where & CustomerTbl.Col.Tag == View.Null;
}

And break it into pieces. If this is not unique as you claim, then
please point me to another such system that offers such a level of
integration in C#, aside from C Omega.

This is truly not unique. My predicate objects are checked at compile
time, and evaluated per predicate at runtime. It's not integrated in
the language because it has to work with VB.NET as well.

There are a couple of O/R mappers which use operator overloading for
filters, though they have a completely different syntax in VB.NET,
which is IMHO unacceptable. Also, you re-used existing operators, which
is not what I'd do.

FB

--
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
This is truly not unique. My predicate objects are checked at
compile

I'll post line by line - but this is what I was working on just now:

ADO.NET is a good library for connecting to databases. But ADO.NET still follows the standard
methodology in which data connectivity is not type safe and the binding to the database is loose.
Fields are bound using string literals or numeric indexes. All values are typecast to the desired
types. Changes to the database will introduce bugs into the application; these bugs, however, will
not be found until run time because of the loose binding. Unless every execution point and logic
combination can be exercised in a test, bugs often will not appear until a customer finds them.

Because of this, as developers we have been conditioned to never ever change a database. This
causes databases to be inefficient, contain old data, contain duplicate data, and contain many
hacks to add new functionality. In fact this is an incorrect approach, but we've all grown
accustomed to accepting this as a fact of development.

However, if we use a tightly bound approach, as we do with our other code, we can upgrade and
update our database to grow with our system. Simply change the database and recompile. Your build
will find all newly created conflicts at compile time, and the functionality can then be easily
altered to meet the new demand. I call this an "Extreme Database" or XDB, in line with Extreme
Programming or XP.

Using the built-in ADO.NET commands, reading from a query looks as follows:

IDbCommand xCmd = _DB.DbConnection.CreateCommand();
xCmd.CommandText = "select \"CustomerID\", \"NameLast\", \"CountryName\""
    + " from \"Customer\" C"
    + " join \"Country\" Y on Y.\"CountryID\" = C.\"CountryID\""
    + " where \"NameLast\" = @PNameLast1 or \"NameLast\" = @PNameLast2";
xCmd.Connection = _DB.DbConnection;
xCmd.Transaction = xTx.DbTransaction;

IDbDataParameter xParam1 = xCmd.CreateParameter();
xParam1.ParameterName = "@NameLast1";
xParam1.Value = "Hower";
xCmd.Parameters.Add(xParam1);

IDbDataParameter xParam2 = xCmd.CreateParameter();
xParam2.ParameterName = "@PNameLast2";
xParam2.Value = "Hauer";
xCmd.Parameters.Add(xParam2);

using (IDataReader xReader = xCmd.ExecuteReader()) {
    while (xReader.Read()) {
        Console.WriteLine(xReader["CustomerID"] + ": " + xReader["CountryName"]);
    }
}

The same code when written using Indy.Data is as follows:

using (CustomerQry xCustomers = new CustomerQry(_DB)) {
    xCustomers.Where = CustomerQry.Col.NameLast == "Hower"
        | CustomerQry.Col.NameLast == "Hauer";
    foreach (CustomerQryRow xCustomer in xCustomers) {
        Console.WriteLine(xCustomer.CustomerID + ": " + xCustomer.CountryName);
    }
}

First, let's ignore the fact that it is shorter; the standard ADO.NET code could be wrapped in an
object or method as well. What I want to point out is that standard ADO.NET requires the use of
text strings. In fact, the code listed above has a mistake in it: "@NameLast1" should be
"@PNameLast1". The error in the standard ADO.NET code will not be detected until the code is
executed, and tracing it down will be difficult, while any mistake in the Indy.Data code will be
caught immediately by the compiler, with the exact location pinpointed. This allows databases to
be easily evolved during development and greatly reduces bugs, as mistakes are found and
pinpointed at compile time.

Could this be done with a codegen or O/R mapper? Yes, but generally they offer one of the
following:

1) Method calls that create an awkward syntax not representative of the where clause,
especially when complex clauses are encountered.

2) String arguments, which leave us again with a late-bound interface, causing bugs to be
found only at runtime.

3) Database parameters - parameters can give type safety and early binding if interpreted
properly, which many tools not only do but rely on. The problem is that parameters severely
limit the flexibility of the where clause and often cause many versions of a single object to
be created with many variations of parameters.

With Indy.Data's approach, you retain full freedom to form the where clause as needed while
keeping all the benefits of type safety, early binding, and clean syntax.
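
As an aside on points 2 and 3 above, and on the string-concatenation concern raised earlier in the
thread: a compile-time-checked condition tree does not have to give up parameterized execution.
Here is a hypothetical sketch (invented types, not Indy.Data's actual implementation) of
conditions that render to parameterized SQL only when the query runs:

using System;
using System.Collections.Generic;
using System.Text;

// Hypothetical condition tree that renders to parameterized SQL.
abstract class Cond {
    public abstract void Render(StringBuilder sql, List<object> args);
    public static Cond operator &(Cond a, Cond b) { return new BinCond(a, "AND", b); }
    public static Cond operator |(Cond a, Cond b) { return new BinCond(a, "OR", b); }
}

sealed class BinCond : Cond {
    private readonly Cond _left, _right;
    private readonly string _op;
    public BinCond(Cond left, string op, Cond right) {
        _left = left; _op = op; _right = right;
    }
    public override void Render(StringBuilder sql, List<object> args) {
        sql.Append("(");
        _left.Render(sql, args);
        sql.Append(" ").Append(_op).Append(" ");
        _right.Render(sql, args);
        sql.Append(")");
    }
}

sealed class Compare : Cond {
    private readonly string _column, _op;
    private readonly object _value;
    public Compare(string column, string op, object value) {
        _column = column; _op = op; _value = value;
    }
    public override void Render(StringBuilder sql, List<object> args) {
        args.Add(_value); // the value is bound as @p0, @p1, ... - never inlined
        sql.Append("\"").Append(_column).Append("\" ")
           .Append(_op).Append(" @p").Append(args.Count - 1);
    }
}

class Demo {
    static void Main() {
        Cond xWhere = new Compare("NameLast", "=", "Hower")
            | new Compare("NameLast", "=", "Hauer");
        StringBuilder xSql = new StringBuilder();
        List<object> xArgs = new List<object>();
        xWhere.Render(xSql, xArgs);
        // Prints: ("NameLast" = @p0 OR "NameLast" = @p1)
        Console.WriteLine(xSql.ToString());
    }
}

The operator syntax stays early bound, while the literal values still travel to the server as
ordinary ADO.NET parameters.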
 
