Want to write your SQL statements and even stored procedures in pure C#?

  • Thread starter Chad Z. Hower aka Kudzu

Frans Bouma [C# MVP]

Chad said:
But it is automatic. And its not something unique to this system -
you merely regenerate the wrappers. The wrappers are isolated from
any user code.

that's not automatic, you have to re-run a tool. What if I rename a
table, or a field? Then my code breaks as the wrapper class gets a new
name. That's not what a developer wants.
No, you miss the point. Nothing magic fixes up the differences. What
it does is by early binding - you detect all problems at compile time
rather than hoping to find them at run time. Whats unique about this?
nothing.

Well nothing really from most other similar solutions - except that
nearly every other solution is only early bound on some aspects, but
filters etc fail because they use strings.

no they don't. A lot of the O/R mappers out there don't use strings or
offer an object based query system as well.
If you say its not new - please show me another O/R that is fully
early bound, filters and all. Ive seen very few, and those that
I've seen have quite an awkward syntax.

LLBLGen Pro does, and so do a lot of others. For example Genome uses a
similar approach as you do (with overloaded C# operators).
Please point me to one that is 100% early bound, and has a "native"
type syntax.

#define 'native'. SQL is set-based, C# is imperative, non-set based.
So what's native? SQL-like, C# like or a language/system which
represents what the SQL elements mean semantically?
They are similar yes, but identity fields cannot be obtained until
after an insert.

so? You've to insert the PK side of an FK first anyway, or do you
require that there aren't any FK's?
Read any of the many articles here regarding the issues with identity
fields - they describe the complexities they add much better than I
can. As far as bottlenecks - they can create bottlenecks by forcing
inserts of master rows prematurely and thus forcing premature locks
or even transaction begins.

Best practices say that you should use FK constraints on live databases.
FK constraints force you to insert the PK side of the FK first anyway,
then sync the value with the FK side and insert the FK side. That's not
tic-tac-toe indeed: you have to sort the objects which have to be
inserted first, and you have to implement synchronization logic so when
I do:

CustomerEntity newCustomer = new CustomerEntity();
newCustomer.CompanyName = "Solutions Design";
//...

OrderEntity newOrder = new OrderEntity();
newOrder.OrderDate = DateTime.Now;
//...
newCustomer.Orders.Add(newOrder);
adapter.SaveEntity(newOrder);

I get the graph saved in the right order: first Customer, then order,
pk of customer, an identity, properly synced with newOrder.CustomerID,
and all in a single transaction.

Where are the bottlenecks? What's inserted prematurely?
Trivial compared to the complexity identity fields inject into code.

I don't see how. Unless you don't use FK constraints.
Im not referencing that either. I dont care what the values are, my
issue with identity fields is that they are not available until after
an insert, and looking at cross DB platforms, sequences are much
easier to simulate than identity fields.

The insert problem isn't a problem, as when FK constraints are used,
you have to insert the PK side of the FK constraint first anyway, so
even with a sequence you first have to insert the PK side, then the FK
side.

FB

--
 

Frans Bouma [C# MVP]

Chad said:
Actually - others dont fully understand it as evidenced by many
messages here. Pieces are understood - but not all aspects. Im taking
this input now and rewriting significant portions of the article.

Please, come down from your high horse. You only humiliate yourself in
public by your statements.

FB

--
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
oooh, a list of predicates using badly overridden operators. ('|' and
'%' for example are operators of C# already). Also, this doesn't work

Of course they already exist in C#. You can only overload operators that already exist, you
cannot simply make up new ones.

As for choice - its logical to use == for equals evaluation, after all thats what it does.
Choosing << for equals makes little sense IMO.
in VB.NET, which doesn't support operator overloading (.NET 1.x)

I noted in the article in fact that it does not work in VB.NET.
PredicateExpression orFilter = new PredicateExpression();
orFilter.Add(PredicateFactory.CompareValue(
    CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, "Chad"));
orFilter.AddWithOr(PredicateFactory.CompareValue(
    CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, "Hadi"));
PredicateExpression filter = new PredicateExpression();
filter.Add(orFilter);
filter.AddWithAnd(PredicateFactory.CompareValue(
    CustomerFieldIndex.CustomerID, ComparisonOperator.GreaterThan, aMinID));
filter.AddWithAnd(PredicateFactory.CompareNull(
    CustomerFieldIndex.Tag, true));

You make my point very clearly in fact. If you think that the code you posted is "clearer" than

xQuery.Where =
(CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
& CustomerTbl.Col.CustomerID > 100 & CustomerTbl.Col.Tag != View.Null;

Then we will never agree. If SQL were to follow your advice, I doubt SQL would have many
fans. Or lets just remove operators altogether from C# and VB and use your syntax for
everyday boolean expressions. :)
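The mechanism behind this kind of syntax is C# operator overloading that returns predicate objects instead of booleans. Below is a minimal sketch of the technique only; the type names `Col` and `Predicate` are invented for illustration and are not Indy.Data's (or LLBLGen's) actual types:

```csharp
using System;

// Sketch: overloading '==', '|' and '&' so a filter is an ordinary
// C# expression, compile-checked, that builds a SQL fragment.
public class Predicate {
    public readonly string Sql;
    public Predicate(string sql) { Sql = sql; }

    // '&' and '|' combine predicates into larger predicates.
    public static Predicate operator &(Predicate a, Predicate b)
        => new Predicate("(" + a.Sql + " AND " + b.Sql + ")");
    public static Predicate operator |(Predicate a, Predicate b)
        => new Predicate("(" + a.Sql + " OR " + b.Sql + ")");
}

public class Col {
    public readonly string Name;
    public Col(string name) { Name = name; }

    // '==' yields a Predicate instead of a bool. C# requires the
    // ==/!= and </> operators to be overloaded in pairs, which is
    // why the unused counterparts appear here too.
    public static Predicate operator ==(Col c, string v)
        => new Predicate(c.Name + " = '" + v + "'");
    public static Predicate operator !=(Col c, string v)
        => new Predicate(c.Name + " <> '" + v + "'");
    public static Predicate operator >(Col c, int v)
        => new Predicate(c.Name + " > " + v);
    public static Predicate operator <(Col c, int v)
        => new Predicate(c.Name + " < " + v);

    public override bool Equals(object o) => base.Equals(o);
    public override int GetHashCode() => base.GetHashCode();
}

public static class Demo {
    public static void Main() {
        Col nameFirst = new Col("NameFirst");
        Col customerId = new Col("CustomerID");
        Predicate where =
            (nameFirst == "Chad" | nameFirst == "Hadi")
            & customerId > 100;
        Console.WriteLine(where.Sql);
        // ((NameFirst = 'Chad' OR NameFirst = 'Hadi') AND CustomerID > 100)
    }
}
```

Note the parentheses around the OR group: `==` binds tighter than `&`, and `&` tighter than `|`, so the same precedence rules as everyday boolean expressions apply.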
The fun thing is, I can create factories for these in my app, as
they're just objects. No need for fancy operators.

No need for succinct easy to read syntax either. :)
This is truly not unique. My predicate objects are checked at
compile
time, and evaluated per predicate at runtime. It's not integrated in
the language because it has to work with VB.NET as well.

Sorry, whats unique is the syntax, not the method. Many O/R / codegens do the same of course.
There are a couple of O/Rmappers which use operator overloading
for filters, though have a complete different syntax on VB.NET, which is
IMHO unacceptable. Also, you re-used existing operators, not what I'd
do.

Please point me to some. Everyone keeps saying they exist, but Ive yet to see one that
operates in the same way as I've demonstrated.
 

Frans Bouma [C# MVP]

Chad said:
As per the rewrite, here are two such items that I hope will convey
certain things in a clearer manner.

Ever wished you could truly embed SQL functionality in your C# code
without using strings or late binding? Imagine being able to write
complex where clauses purely in C#:

xQuery.Where =
    (CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
    & CustomerTbl.Col.CustomerID > 100 & CustomerTbl.Col.Tag != View.Null;

Look closely. This is C# code, not SQL. Its resolved and bound at
compile time, but evaluated at run time. In this article I shall
provide an introduction to this method and full source code for your
use.

As said, nothing new, nor inventive nor special.
---------------------
Indy.Data provides object wrappers on a 1:1 basis for database
tables, views, stored procedures, or SQL statements. These objects
can be regenerated at any time to be kept in sync with database
changes. In future articles I will be detailing how Indy.Data can be
used to perform the same functions as a code gen or O/R mapper, but
in a slightly different way. However for the scope of this article
assume Indy.Data to be an advanced implementation of an ADO.NET
command object. Where you would use an ADO.NET command object,
Indy.Data is suitable.

Oh? Tell me, how to fetch a graph of:
10 newest customers from 'France' with their 10 latest orders and their
order lines, in the most efficient way? (so no 1+10+100 queries, but 3)

The core element why O/R mappers are flexible and nice is because they
allow you to work with the database with the entity elements and their
meta-data. So if you want to filter on orders to get the customers, you
can. No extra elements needed like a view in the db or hard-coded sql.
Just 'customer', 'order' and a filter. It's the flexibility to define
that filter right there in code without having to go back to the db to
define a view or a proc which makes O/R mappers worth using. Fetching
from 1 db element using a filter on that same object is simple, the
things which make it interesting are the join logic (left/right/inner),
sorts on fields in a related entity etc, recursive saves of a graph,
atomically, prefetching related data in a graph, so 1 query (with filters
and sort clauses) per node etc.
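The "no 1+10+100 queries, but 3" fetch described here can be sketched as follows. This only builds the SQL text to show the shape of the three queries; the table names and the hard-coded "result keys" are illustrative stand-ins, since a real mapper derives the IN-filters from the parent rows it just fetched:

```csharp
using System;
using System.Collections.Generic;

// Sketch of prefetching a 3-level graph in 3 queries:
// one query per node of the graph, each filtered on the
// parent keys, instead of one query per parent row.
public static class PrefetchDemo {
    public static string InList(IEnumerable<int> keys)
        => "(" + string.Join(", ", keys) + ")";

    public static void Main() {
        // Query 1: the root node (returns the 10 customer keys).
        string q1 = "SELECT TOP 10 * FROM Customer WHERE Country = 'France' "
                  + "ORDER BY CreatedOn DESC";
        int[] customerIds = { 3, 7, 12 };      // pretend result keys

        // Query 2: ALL their orders in one go, filtered on parent keys.
        string q2 = "SELECT * FROM Orders WHERE CustomerID IN "
                  + InList(customerIds);
        int[] orderIds = { 101, 102, 103 };    // pretend result keys

        // Query 3: ALL the order lines for those orders.
        string q3 = "SELECT * FROM OrderLine WHERE OrderID IN "
                  + InList(orderIds);

        // The client then stitches child rows to parents in memory by key.
        Console.WriteLine(q2);
    }
}
```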

FB
--
 

Chad Z. Hower aka Kudzu

"Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com"

Aah, it would seem much of your "background" is of course influenced by this. :)

Im not saying that O/R mapping or CodeGen is evil, Im simply presenting an alternative.
Furthermore you should read the post I posted just before last, IMO Indy.Data is not an
O/R mapper, but in fact just a DAL, nothing more.

The code that you posted, would you mind if I include that actual code as it seems to be real code?
I've compared output to ADO.NET and would really like to provide more comparison for users as I
do mention the syntax, but didnt have any to show. I'll even mention LLBLGen as an O/R tool for
users to consider, Indy.Data is Open Source so I have nothing to lose.
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
Please, come down from your high horse. You only humiliate
yourself in
public by your statements.

There is no high horse at all. I suspect you simply misunderstood what I intended here. Its not that
others dont understand the code, surely many here do. But many are not understanding what Im saying
in the article, which I believe much to be my fault in not properly communicating certain items. This
feedback Im using to refine and better communicate certain aspects.
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
that's not automatic, you have to re-run a tool. What if I rename a
table, or a field? Then my code breaks as the wrapper class gets a new
name. That's not what a developer wants.

Actually thats exactly what I want. One of the key issues here is that you have a different
methodology than I do regarding this. I dont want any magic thing to fix up DB changes. Its rare in our
systems that something is so simple as a name change and whenever something changes I want to
know and manually fix it. When I get the rest of the articles online, this will make more sense. I
doubt you will agree with the methodology either but you should be able to see the point of view.
LLBLGen Pro does, and so do a lot of others. For example Genome

Via procedures yes. Which is fine if you like that syntax, but personally given the choice as posed
in the other message, I know which one I prefer.
uses a similar approach as you do (with overloaded C# operators).

URL on Genome please. Google finds DNA stuff. I think I've seen Genome in the past at some
point.
#define 'native'. SQL is set-based, C# is imperative, non-set
based.

Sorry, in that context it would be C#. Developers are currently working in C# + SQL. To move
towards C# only is a very good goal.
so? You've to insert the PK side of an FK first anyway, or do you
require that there aren't any FK's?

Yes, but its a matter of timing. Not all inserts are simple and often child rows must be "worked"
on to prepare or calculate information. With identity fields you must create the rows, then re-pass
and fix up the local references afterwards (which appears to be the common solution here), or put
in a delay between inserts while you work live.

With sequences you lose nothing aside from the fact that you have to populate the field and you
get the value before instead of after. What are the downsides that concern you about
sequences?
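The timing argument can be illustrated with a sketch. All names here are invented and the sequence is simulated by a counter; the point is only that sequence-style keys let the whole graph be wired up in memory before any insert runs, whereas an identity key exists only after the parent INSERT:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the timing difference being argued. With a sequence,
// final key values are assigned while the object graph is still
// being built; the INSERTs can then run later in one short
// transaction, with no key fix-up pass afterwards.
public static class SequenceDemo {
    static int _next = 1;                    // stands in for a db
    public static int NextVal() => _next++;  // "NEXT VALUE FOR seq"

    public static void Main() {
        // Keys are known up front, before any insert:
        int customerId = NextVal();
        var orderIds = new List<int>();
        for (int i = 0; i < 3; i++) {
            int orderId = NextVal();
            orderIds.Add(orderId);           // FK to customerId is
        }                                    // already final in memory
        // Only now would a transaction begin: insert the customer,
        // then the orders, all values already final.
        Console.WriteLine(customerId + ": " + orderIds.Count + " orders");
    }
}
```

With an identity column, the `customerId` line above would instead have to come after the parent INSERT round-trip, which is the "timing, not order" point.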
FK constraints force you to insert the PK side of the FK first anyway,

Its not a matter of order its a matter of timing.
I don't see how. Unless you don't use FK constraints.

Read any of the blogs here concerning how to update a dataset against an identity column table.
They describe the basic issues pretty well, and thats just for basic simple master detail inserts.
The insert problem isn't a problem, as when FK constraints are
used,
you have to insert the PK side of the FK constraint first anyway, so
even with a sequence you first have to insert the PK side, then the FK
side.

Yes, that does not change.
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
Oh? Tell me, how to fetch a graph of:
10 newest customers from 'France' with their 10 latest orders and
their order lines, in the most efficient way? (so no 1+10+100 queries,
but 3)

I think a major disconnect here is that you are treating it as an O/R, its not. Its just a datareader.
Nothing less, nothing more. So your question could easily be "Show me how to do that with
SqlDataReader" and the answer would be the same.

Indy.Data is for those working with DataReader inputs, not those looking for an O/R. Indy.Data
certainly could be used in an O/R type environment and I'll detail things on this later, but it itself
is not an O/R system any more than SqlDataReader is.
The core element why O/R mappers are flexible and nice is because

You're trying to sell something that doesnt apply. Sure O/R's are great, but Im not addressing O/R's at
all or trying to replace them. Thats the key disconnect, that many read it and assumed it was some
sort of O/R. Maybe the title threw people off, and Im renaming it as the title covers something I
didnt get into in this article as it was out of scope of the library itself.
allow you to work with the database with the entity elements and their
meta-data. So if you want to filter on orders to get the customers,
you can. No extra elements needed like a view in the db or hard-coded

Now you are comparing O/R against SQL... Indy.Data is just a C# wrapper for SQL statements and
DB tables. It is NOT an O/R, so you might as well be talking about why ADO.NET is bad and why
O/R is better.. which is fine, but it doesnt apply to what we were discussing.
 

Frans Bouma [C# MVP]

Chad said:
"Get LLBLGen Pro, productive O/R mapping for .NET:
http://www.llblgen.com"

Aah, it would seem much of your "background" is of course influenced
by this. :)

well, 'background' doesn't have to be between quotes, I've now worked
for 6/7 days a week for the past 3 years on that system, full time, so
I'm IMHO entitled to say a few things on the subject.
The code that you posted, would you mind if I include that actual
code as it seems to be real code? I've compared output to ADO.NET
and would really like to provide more comparison for users as I do
mention the syntax, but didnt have any to show. I'll even mention
LLBLGen as an O/R tool for users to consider, Indy.Data is Open
Source so I have nothing to lose.

If you're going to use it to make my work look bad compared to your
application, of course you may not use it.

FB

--
 

Frans Bouma [C# MVP]

Chad said:
Actually thats exactly what I want. One of the key issues here is
that you have a different methodology than I do regarding this. I dont
want any magic thing to fix up DB changes. Its rare in our systems
that something is so simple as a name change and whenever something
changes I want to know and manually fix it. When I get the rest of
the articles online, this will make more sense. I doubt you will
agree with the methodology either but you should be able to see the
point of view.

and what happens with the user code added to the also generated
classes? How are these kept in sync with the newly generated classes if
I rename a table?
Via procedures yes. Which is fine if you like that syntax, but
personally given the choice as posed in the other message, I know
which one I prefer.

No not via procedures, I don't use stored procedures and neither does
for example Genome. All SQL is generated on the fly at runtime by
evaluating the query object graph.
URL on Genome please. Google finds DNA stuff. I think I've see Genome
in the past at some point.
http://www.genom-e.com/


Sorry, in that context it would be C#. Developers are currently
working in C# + SQL. To move towards C# only is a very good goal.

no, SQL is set-oriented. Merging two different paradigms in the same
language won't work.
Yes, but its a matter of timing. Not all inserts are simple and often
child rows must be "worked" on to prepare or calculate information.
With identity fields you must create the rows, then re-pass and fix up
the local references afterwards (which appears to be the common
solution here), or put in a delay between inserts while you work live.

no, first process, then send the graph for persistence.
With sequences you lose nothing aside from the fact that you have to
populate the field and you get the value before instead of after.

nothing changes. You have to make a roundtrip to the db to get the new
value, so you'll do that also at the last minute, i.e.: after
processing.
What are the downside that you are concerned with sequences?

not much, other than that they're separate objects, which can
accidentally be deleted while the table is still there, or vice versa.
I.e. a small maintenance thing.
Its not a matter of order its a matter of timing.

heh sure...
Read any of the blogs here concerning how to update a dataset against
an identity column table. They describe the basic issues pretty
well, and thats just for basic simple master detail inserts.

Why would I be interested in the problems of datasets?

FB

--
 

Robbe Morris [C# MVP]

My OOP ADO.NET code generator does all this for me.
Creates a two layered approach to interfacing with the database.
Optionally returning auto-populated class arrays or DataTables/DataSets.

If I change the database, just rerun the generator. When I try to
compile, it instantly shows me where changes in my app need to be
made.

In my opinion, this type of approach is far more useful in large
scale development teams where a huge portion of the database
work is done by DBAs who don't know .NET.

http://www.eggheadcafe.com/articles/adonet_source_code_generator.asp

--
2004 and 2005 Microsoft MVP C#
Robbe Morris
http://www.masterado.net

Earn $$$ money answering .NET Framework
messageboard posts at EggHeadCafe.com.
http://www.eggheadcafe.com/forums/merit.asp



Chad Z. Hower aka Kudzu said:
Frans Bouma said:
This is truly not unique. My predicate objects are checked at
compile

Ill post line by line - but this is what I was working on now:

ADO.NET is a good library for connecting to databases. But using ADO.NET still uses the
standard methodology that data connectivity is not type safe, and that the binding to the
database is loose. Fields are bound using string literals, or numeric indexes. All types are
typecast to the desired types. Changes to the database will introduce bugs into the
application. These bugs however will not be found until run time because of the loose
binding. Unless every execution point and logic combination can be executed in a test,
often bugs will not appear until a customer finds them.

Because of this, as developers we have been conditioned to never ever change a database.
This causes databases to be inefficient, contain old data, contain duplicate data, and
contain many hacks to add new functionality. In fact this is an incorrect approach, but
we've all grown accustomed to accepting this as a fact of development.

However if we use a tight bound approach as we do with our other code, we can upgrade and
update our database to grow with our system. Simply change the database, and recompile.
Your system will find all newly created conflicts at compile time and the functionality
can then be easily altered to meet the new demand. I call this an "Extreme Database" or
XDB, in line with Extreme Programming or XP.

Using the built in ADO.NET commands reading from a query is as follows:

IDbCommand xCmd = _DB.DbConnection.CreateCommand();
xCmd.CommandText = "select \"CustomerID\", \"NameLast\", \"CountryName\""
    + " from \"Customer\" C"
    + " join \"Country\" Y on Y.\"CountryID\" = C.\"CountryID\""
    + " where \"NameLast\" = @PNameLast1 or \"NameLast\" = @PNameLast2";
xCmd.Connection = _DB.DbConnection;
xCmd.Transaction = xTx.DbTransaction;

IDbDataParameter xParam1 = xCmd.CreateParameter();
xParam1.ParameterName = "@NameLast1";
xParam1.Value = "Hower";
xCmd.Parameters.Add(xParam1);

IDbDataParameter xParam2 = xCmd.CreateParameter();
xParam2.ParameterName = "@PNameLast2";
xParam2.Value = "Hauer";
xCmd.Parameters.Add(xParam2);

using (IDataReader xReader = xCmd.ExecuteReader()) {
    while (xReader.Read()) {
        Console.WriteLine(xReader["CustomerID"] + ": " + xReader["CountryName"]);
    }
}

The same code when written using Indy.Data is as follows:

using (CustomerQry xCustomers = new CustomerQry(_DB)) {
    xCustomers.Where = CustomerQry.Col.NameLast == "Hower"
        | CustomerQry.Col.NameLast == "Hauer";
    foreach (CustomerQryRow xCustomer in xCustomers) {
        Console.WriteLine(xCustomer.CustomerID + ": " + xCustomer.CountryName);
    }
}

First lets ignore the fact that it is shorter. The standard ADO.NET could be wrapped in an
object or method as well. What I want to point out is the fact that the standard ADO.NET
requires the use of text strings. In fact, the code listed above has a mistake in it:
"@NameLast1" should be "@PNameLast1". The error in the standard ADO.NET code will not be
detected until the code is executed, and tracing it down will be more difficult, while any
mistakes in the Indy.Data code will be caught immediately by the compiler, and the exact
location pinpointed. This allows databases to be easily evolved during development, and
greatly reduces bugs as mistakes are found and pinpointed at compile time.

Could this be done with a codegen or O/R mapper? Yes, but generally they offer one of the
following:

1) Method calls that create an awkward syntax not representative of the where clause,
especially when complex ones are encountered.

2) String arguments, which leave us again with a late bound interface, causing bugs to
only be found at runtime.

3) Database parameters - parameters can give type safety and early binding if interpreted
properly, which many tools not only do, but rely on. The problem is that parameters
severely limit the flexibility of the where clause and often cause many versions of a
single object to be created with many variations of parameters.

With Indy.Data's approach, full freedom is retained to form the where clause as needed
while still retaining all the benefits of type safety, early binding, and clean syntax.


--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"

Blog: http://blogs.atozed.com/kudzu
 

Chad Z. Hower aka Kudzu

Robbe Morris said:
My OOP ADO.NET code generator does all this for me.
Creates a two layered approach to interfacing with the database.
Optionally returning auto-populated class arrays or DataTables/DataSets.

Exactly. Our approaches are very similar - in fact they share the same methodology with slightly
different implementations. There is another typed data reader generator out there too, but the name
slips my mind right now.
If I change the database, just rerun the generator. When I try to
compile, it instantly shows me where changes in my app need to be
made.
Bingo.

In my opinion, this type of approach is far more useful in large
scale development teams where a huge portion of the database
work is done by DBAs who don't know .NET.

Or in any case that the database is shared, which covers nearly every enterprise application.
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
well, 'background' doesn't have to be between quotes, I've now
worked
for 6/7 days a week for the past 3 years on that system, full time, so
I'm IMHO entitled to say a few things on the subject.

Of course. My point here is that anyone who is so fully engulfed in anything, often can see only
that. When you only have a hammer, everything looks like a nail. Ive never proposed that nails do
not exist, only that there are bolts too.
If you're going to use it to make my work look bad compared to
your
application, of course you may not use it.

I only asked to post the same code you posted and even provide a link to LLBLGen. If you think the
code you yourself posted will make LLBLGen look bad, Im not sure there is anything I can do.

I would insert it similarly to how I did ADO.NET data reader:
"Using the built in ADO.NET commands reading from a query is as follows:"

Something like "Using a popular O/R tool like LLBLGen (link) the code appears as follows:

That failing, newsgroup postings are public, I can just link to your messages in Google.
 

Chad Z. Hower aka Kudzu

Frans Bouma said:
and what happens with the user code added to the also generated
classes? How are these kept in sync with the newly generated classes
if I rename a table?

What happens when you change a WinForm? That regenerates code and certainly does not trash
user code. What happens when you change an ADO.NET dataset and the code regenerates?

1) Code can USE the generated code. That wont get trashed.

2) Code can inherit from the base classes and add functionality.

If you rename a table, in fact I *want* it to break my code so I can see and manually manage the
impact.
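The regeneration-safe pattern being described - the tool owns a generated base class, hand-written code lives in a subclass the generator never touches - can be sketched like this (class names are illustrative, not Indy.Data's actual output):

```csharp
using System;

// --- generated file: overwritten on every regeneration ---
// Renaming a table/column changes this class, so any user code
// referencing the old names breaks at compile time, by design.
public class CustomerBase {
    public int CustomerID;
    public string NameLast;
}

// --- user file: never touched by the generator ---
public class Customer : CustomerBase {
    public string DisplayName() => NameLast + " (" + CustomerID + ")";
}

public static class RegenDemo {
    public static void Main() {
        var c = new Customer { CustomerID = 42, NameLast = "Hower" };
        Console.WriteLine(c.DisplayName()); // Hower (42)
    }
}
```

This is the same split WinForms and typed DataSets use: regenerating the base layer leaves the derived user layer intact.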
No not via procedures, I don't use stored procedures and neither
does

Sorry, wrong terminology. Procedure as in method, not as in stored procedure. Procedure is an old
Pascal term thats stuck in my head. Your example LLBLGen code demonstrated my point perfectly
of how its quite different from a SQL predicate. If thats what you want, then its great.

Well I looked at Genome. Interesting stuff - but its yet a different beast. Its closer to ECO etc. It
goes from object, to the database and wants to control the structure itself. Good for systems that
only are used by that subsystem, but not so good for existing databases or for databases that will
be used by other clients as well. Genome does look to be implemented quite well though, and
certainly has some provisions for this even.

But here again we disconnect.

1) Either Im missing it - which I'll show code below...
2) Or you didnt look too closely at Genome.

http://www.genom-e.com/Default.aspx?tabid=72
dd.Extent(typeof(Person))["Name.IsLike({0})", Name];

Loosely typed string, only evaluated at run time. Same problem as embedding SQL strings in
code. At least its an attribute, but unless Genome scans these all at compile time and evaluates
them, then they are still left to cause undetected bugs at runtime. Does it walk the assembly and
verify all of these on each and every run at initialization or at build time? If so, does it pinpoint
the error for you to the line?

Since LLBLGen focuses on using methods for predicates, I assume we agree on the problems
introduced by this.

But back to the point. I found no reference whatsoever to what you have claimed, that Genome
implements predicates like what I have shown. The closest I can find at all is its set
management, which is in fact a very neat idea, but something totally different.

In fact I find:
result.Add(new NumberManagedPersonFilter(peopleManagedMin,true))

etc. There is the *= to merge sets, but again, something completely different.

In fact:
"Genome introduces type-safe dynamic query building using set algebra. Queries with their
arguments can be declared as methods and properties of the domain model, which can be
combined and reused in a strongly typed manner."

Methods and properties - in fact closer to LLBLGen.
no, SQL is set-oriented. Merging two different paradigms in the
same language won't work.

The results are set oriented, and C# can manage sets as well. In fact, look at C Omega's SQL
examples. And read Anders H's latest discussions about SQL and C#. Your statement is at direct
odds with Anders' statements.

Anders H:
mms://wm.microsoft.com/ms/msnse/0406/22899/Vision_of_Future.wmv

C Omega's use of SQL:
http://research.microsoft.com/Comega/doc/comega_tutorial_select_sql_database.htm

In fact - I did this well before COmega, but it looks very very familiar... Of course they have the
ability to directly extend the language, I dont.
nothing changes. You have to make a roundtrip to teh db to get
teh new
value, so you'll do that also at the last minute, i.e.: after
processing.

Yes you still need a round trip. But with identities you very often end up recycling through existing
sets to update the keys.

There is not a big difference between identities and sequences. Identities are not evil - sequences
are just more flexible, in some cases can eliminate bottlenecks, and are certainly easier to port
across DB's than identities are.
not much, other than that they're separate objects, which can
accidentally be deleted while the table is still there, or vice versa.
I.e. a small maintenance thing.

Yes, this point I agree on.
Why would I be interested in the problems of datasets?

Because its a common scenario that is not unique to datasets. If you are on the DB side 100% and
dont care about your users, only what makes a DBA's job easier, then identities could be considered
better.
 

Chad Z. Hower aka Kudzu

Chad Z. Hower aka Kudzu said:
Anders H:
mms://wm.microsoft.com/ms/msnse/0406/22899/Vision_of_Future.wmv

C Omega's use of SQL:
http://research.microsoft.com/Comega/doc/comega_tutorial_select_sql_database.htm

In fact, if we look at some example C Omega code:

struct {
    SqlString CustomerID;
    SqlString ContactName;
}* res = select CustomerID, ContactName from DB.Customers;

foreach (row in res) {
    Console.WriteLine("{0,-12} {1}", row.CustomerID, row.ContactName);
}

It looks very much like Indy.Data:

[Select("select `CustomerID`, `ContactName` from `Customers`")]
public class CustomerRow : View.Row {
    public DbInt32 CustomerID;
    public DbString ContactName;
}

public void InlineSQLTest() {
    using (Query xCustomers = new Query(_DB, typeof(CustomerRow))) {
        foreach (CustomerRow xCustomer in xCustomers) {
            Console.WriteLine("{0,-12} {1}", xCustomer.CustomerID, xCustomer.ContactName);
        }
    }
}
 

Chad Z. Hower aka Kudzu

Christian Hassa said:
Thanks for involving Genome in your heated discussion. I don't want to
interfere, but the statements about Genome in this context are wrong:

Please, corrections are welcome. In fact I have some upcoming articles about solutions to some
of the other ideas I discussed:
http://www.codeproject.com/useritems/DudeWheresMyBusinessLogic.asp

Id like to mention Genome and others, and even provide some sample code if possible. I had
originally mentioned it in the draft of this article but decided to move it to later. Id like to mention
LLBLGen too, but the author seems uncomfortable with even a slight comparison of his
product, even when using his own code or code he could submit to me.
database-generated identities or FK-constraints. I wonder whether Indy
can tolerate FK-constraints when updating object graphs (if it can do
that at all) or database-generated identities.

It can - but in a different way. Indy.Data is a much lower level tool than Genome. Genome is a
much bigger tool designed to cover several concepts while Indy.Data is designed to only cover one
specific portion.

Indy.Data is designed to replace the ADO.NET datareaders (and a few other objects). Its a type
safe early bound version of ADO.NET data access if you will.
You got it all wrong: in your quote you are referring to Genome
client-side OQL, which is still strongly typed, but late-bound - so
you are right, an error in the string would be detected only during
run-time. Still we are type-safe, Genome would tell you about a wrong
OQL string as soon as you are applying it to the Set, no matter
whether you actually fire the query against the database or not. Also
parameters, passed in to the query (like the Name in your example) are
strongly typed and will be checked when executing the query during
runtime.

Its the late binding I dont like and an area that Indy.Data really works to eliminate.
However, it seems like you haven't read the whole story and how Genome
supports decomposability of queries: usually queries in Genome are
written in the mapping file and linked to methods of your classes.
These queries are then compiled by the Genome data domain schema
compiler and you will get errors already during *compile time* of your
project. 95% of the queries in a Genome application should be written
in the mapping file for this and other reasons. Mapped Genome
OQL-queries can be even compiled into user-defined functions on SQL
server for expressing server side recursive queries.

Aah, thats good then. Thats similar to how Indy.Data handles its SQL inputs.
On the other hand, Genome Sets are much more versatile then the
query-objects (which is not really a new pattern, see e.g.
http://www.martinfowler.com/eaaCatalog/queryObject.html) you provide -

Does he have an actual implementation? Or does he just discuss the possibility? This page is a bit bare.
at least from what I have seen in the examples. Sets can be used in
other expressions for sub-queries (translating into EXIST) and for
counting the elements of a Set (translating into SELECT COUNT). You
can fully project sets from one element type to another and still
intersect projected sets (which have been transformed in the same
way).

I did find how the sets are put to use in Genome rather interesting, but it doesnt apply to Indy.Data.
Genome is OQL based, while Indy.Data again just strives to be a typed early bound version of
ADO.NET datareaders.
I didn't explore the boundaries and limitations of your query model,
all I can say is that we are able to express the most complicated SQL
queries you can imagine in OQL - but with all the benefits of a

I dont doubt that. Im sorry if I implied that.

I appreciate your input and feedback on corrections regarding Genome and look forward to further
discussions.
 

Chad Z. Hower aka Kudzu

Additional info:

"Unlike other object-relational mapping tools available on the market, Genome does not focus
solely on CRUD operations and pre-defined relationship models, which only cover the entity-
relational mapping aspect of O/R mapping"

Indy.Data in fact solely focuses on the CRUD if you will. :) Because its not an O/R tool, which I
tried to explain to the other chap here earlier.

Ive got some demos I'll be posting soon that will show things a bit better. Genome maps Objects
to the DB (O/R mapper) while Indy.Data puts an object interface DIRECTLY on DB objects. Indy.Data
does not itself provide any provision for O/R mapping. I've used it for such, but Indy.Data itself
does not do it.
 
