O/R Mapper

Islamegy®

I was spending time learning to use strongly typed collections instead
of DataSet/DataTable and using the Enterprise Library application blocks.

Recently I discovered there are a lot of O/R mapper projects: CSLA.NET,
NHibernate and many others.
Actually NHibernate looks unique compared to the others to me, and
easier too.
But I don't know if it's the most efficient O/R mapper out there or if I
should look for something better.

Please, if anyone has experience with these mappers, help me find the
most efficient solution.
Thanks...
 
Guest

I have a love-hate relationship with ORMs.

On the one hand, I'm intrigued by the idea of being able to work with
OOP-oriented classes vs DAL code constructs.

On the other hand, they all have some sort of learning curve. There are a
number of new ones out; DataObjects.net looks promising:
http://www.x-tensive.com/Products/DataObjects.NET/

EasyObjects.NET is another:
http://www.easyobjects.net/

Then you have Gentle.Net (sourceforge.net) and the ones you mentioned. I
think it really boils down to this: Do I want to make the commitment of time
needed to really master this and use it on a continued basis in my
applications?
Peter
 
Rob Trainer

I also have a love/hate relationship. I have been using XPO by DevExpress
for the last few months. On one hand it is very easy to implement a lot of
things. I have some reservations about complex queries, etc. and the lack of
performance-type control, but so far for this project it has been going
smoothly. It still remains to be seen how things will go when things become
more complex.

Rob Trainer
 
Frans Bouma [C# MVP]

Peter said:
I have a love-hate with ORMs.

On the one hand, I'm intrigued by the idea of being able to work with
OOP-oriented classes vs DAL code constructs.

On the other hand, they all have some sort of learning curve. There
are a number of new ones out; DataObjects.net looks promising:

Then you have Gentle.Net (sourceforge.net) and the ones you
mentioned. I think it really boils down to this: Do I want to make
the commitment of time needed to really master this and use it on a
continued basis in my applications?

so you want to write the same dreaded plumbing code over and over
again instead? :) It's your time, your project and your deadline of
course ;)

FB

--
 
Joanna Carter [TeamB]

"Islamegy®" <[email protected]> a écrit dans le message de (e-mail address removed)...

| Please, if anyone has experience with these mappers, help me find the
| most efficient solution.

Beware of mappers that map database tables to classes; this is not the best
way to do things as it tries to impose a relational model into your object
model.

The best mappers are the ones that allow you to design your object model and
then let the mapper take care of creating the database to fit the object
model.

If you can write your application, using a mapper, and you don't need to
know anything about databases, then you have a good mapper.

Joanna
 
Frans Bouma [C# MVP]

Joanna said:
Beware of mappers that map database tables to classes; this is not
the best way to do things as it tries to impose a relational model
into your object model.

Ohh yeah, THE doom-scenario. :-/

Perhaps you should elaborate a bit why this is 'bad', as it really
doesn't make a difference. If you have a hierarchy of entities
(supertype/subtype) and each entity is mapped onto its own table/view,
you still have 1:1 mappings between entity and table/view but also have
inheritance. If you start from the object model, you'll end up with the
same tables and with the same mappings.
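
To make that concrete, here is a rough C# sketch of such a supertype/subtype
mapping (the class and table names are invented for the example):

// Hypothetical hierarchy: each type's own fields map 1:1 onto that type's
// own table/view; a subtype additionally inherits its supertype's mapping.
public class Employee             // maps to table "Employee" (ID, Name)
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public class Manager : Employee   // maps to table "Manager" (ID, Department)
{
    // The Manager table holds only the subtype's own fields; the shared ID
    // links the Manager row to its Employee row.
    public string Department { get; set; }
}

Whether you start from these two classes or from those two tables, you end
up with the same mapping.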

Furthermore, a lot of people have to write software which has to work
with legacy databases. Your proposal implies that these people can't
use o/r mapping, because they have to work with an 'inferior' setup...
| The best mappers are the ones that allow you to design your object
| model and then let the mapper take care of creating the database to
| fit the object model.

and why is that 'better' ? Ever tried to add additional fields to
entities in a system with, say 500 entities, and which is already in
production ? happy migrating with your automatic schema updater :)

Another downside is that often those mappers add meta-data into the
relational model which then makes it impossible to re-use the model +
data in other applications which aren't able to use the o/r mapper core.
| If you can write your application, using a mapper, and you don't need
| to know anything about databases, then you have a good mapper.

... till your app is in production and a functionality change has to
be implemented. Oops, now suddenly someone has to write migration
scripts for the 1TB of data to the new schema...

software development doesn't end when v1 is done, Joanna.

FB

--
 
O.S. de Zwart

Islamegy® said:
I was spending time learning to use strongly typed collections instead
of DataSet/DataTable and using the Enterprise Library application blocks.

Recently I discovered there are a lot of O/R mapper projects: CSLA.NET,
NHibernate and many others.
Actually NHibernate looks unique compared to the others to me, and
easier too.
But I don't know if it's the most efficient O/R mapper out there or if I
should look for something better.

Please, if anyone has experience with these mappers, help me find the
most efficient solution.
Thanks...

I have used NHibernate, DeKlarit and LLBLGen; the last definitely has my
vote. Good support on the forums, a very nice price for what you get, easy
to grasp, frequent useful updates and a passionate author! What more can
you ask for?

Regards,

Olle
 
Daniel Billingsley

Frans Bouma said:
Perhaps you should elaborate a bit why this is 'bad', as it really
doesn't make a difference. If you have a hierarchy of entities
(supertype/subtype) and each entity is mapped onto its own table/view,
you still have 1:1 mappings between entity and table/view but also have
inheritance. If you start from the object model, you'll end up with the
same tables and with the same mappings.

I find it is very liberating to be free from that shackled thinking of such
a 1:1 relationship.

Not that I'm arguing that ORM products like yours aren't valuable - they are
incredibly powerful for certain scenarios. But they are too limiting in
others, in my experience. Much like products that were called 4th GL's back
in the day - the most powerful RAD tools on the planet but at the same time
limiting when you really needed to push the limits.

I find it particularly odd that design methodologies don't start with the
data, but we jump straight there when we start writing. I've done it myself
many times. But if you were starting a new consulting engagement, you don't
walk into a company and ask "what data do you store?" but rather you ask
"what is it that you DO here?" There are several UML diagrams meant to
document that information, and they are usually done before any design of
the application or its data begins. Doesn't it make sense to flow from use
case documentation to functional design of the business layer that must
perform those things, and then finally to the required data - rather than
from use case to data to business layer? As I understand it, Kathleen
Dollard's book argues pretty strongly for that approach.
 
Frans Bouma [C# MVP]

Daniel said:
I find it is very liberating to be free from that shackled thinking
of such a 1:1 relationship.

In an inheritance hierarchy, where each subtype's own fields are
mapped onto the subtype's own table/view, every subtype is actually
mapped on multiple tables/views.

Though the subtype's own fields are mapped on 1 table/view.
If you want to map one entity on multiple tables, why would you want
to do that? The only reason I can think of is a very very bad
relational model.
| Not that I'm arguing that ORM products like yours aren't valuable -
| they are incredibly powerful for certain scenarios. But they are too
| limiting in others, in my experience. Much like products that were
| called 4th GL's back in the day - the most powerful RAD tools on the
| planet but at the same time limiting when you really needed to push
| the limits.

what have O/R mappers to do with 4GL tools? I can't project the flaws
of 4GLs onto O/R mappers to see why o/r mappers are limiting in
several scenarios, so it would help if you could elaborate WHY they
are limiting in some scenarios and also: which scenarios exactly?

For example: our o/r mapper supports lists based on fields from
related entities, so you can create views from entities you define.
This can greatly benefit a developer if s/he has to fetch data for
reporting or other read-only purposes with aggregates, groupby clauses
etc. etc.

Solely focussing on objects is then pretty limiting, agreed, but most
data-access solutions which are worth using offer a way to fetch data
in other flexible ways.
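
As a rough, framework-agnostic illustration of such a field-based list
(this is plain LINQ over in-memory objects, not any particular mapper's
API):

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: a read-only "list" built from fields of two related entities,
// with a group-by and aggregates, as you would for a report.
class Customer { public int ID; public string Name; }
class Order    { public int CustomerID; public decimal Amount; }

class ReportSketch
{
    static void Main()
    {
        var customers = new List<Customer> { new Customer { ID = 1, Name = "Acme" } };
        var orders = new List<Order>
        {
            new Order { CustomerID = 1, Amount = 100m },
            new Order { CustomerID = 1, Amount = 250m }
        };

        var salesPerCustomer =
            from o in orders
            join c in customers on o.CustomerID equals c.ID
            group o by c.Name into g
            select new { Customer = g.Key, Orders = g.Count(), Total = g.Sum(x => x.Amount) };

        foreach (var row in salesPerCustomer)
            Console.WriteLine("{0}: {1} orders, {2} total", row.Customer, row.Orders, row.Total);
    }
}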
| I find it particularly odd that design methodologies don't start with
| the data, but we jump straight there when we start writing. I've
| done it myself many times. But if you were starting a new consulting
| engagement, you don't walk into a company and ask "what data do you
| store?" but rather you ask "what is it that you DO here?" There are
| several UML diagrams meant to document that information, and they are
| usually done before any design of the application or its data begins.

ever heard of NIAM/ORM (http://www.orm.net ) ? Abstract relational
modelling done in the functional research phase. No-one talks about
data, as data are just bits and bytes and have no meaning without
context, and it's the context that's important.

After the functional research phase has ended and the technical
research phase has ended and software has to be written, you start
thinking which code you can replace with code pulled off the shelf so
you gain time. No-one considers writing his own grid control a valuable
way to spend time, but apparently a lot of people find it completely
reasonable to spend a lot of time writing data-access plumbing code.

I always say: first see which code you have to write, then check which
code can be replaced by standard components. it's very simple, and you
keep a clear overview because you know what the standard component
replaces.
| Doesn't it make sense to flow from use case documentation to
| functional design of the business layer that must perform those
| things, and then finally to the required data - rather than from use
| case to data to business layer? As I understand it, Kathleen
| Dollard's book argues pretty strongly for that approach.

again, no-one talks about data. Customer Has Order, Order BelongsTo
Customer. Customer IsIdentifiedBy CustomerID etc. etc. Simple fact-based
sentences which formulate the analysis you've been doing and
result in an abstract relational model. You can then use it to produce
a physical datamodel in your RDBMS, or generate classes from it or both.

The idea of using an existing relational model implemented in a
datamodel is that you can easily reverse-engineer it to an abstract
level, arriving at the same abstract model that was used to create the
datamodel in the first place!

Or do you think people with 500+ tables in their database write those
tables by hand directly in an SQL editor? No, of course not; they use
modelling tools which give them a higher-level overview.

FB

--
 
Joanna Carter [TeamB]

"Frans Bouma [C# MVP]" <[email protected]> a écrit dans le
message de news: (e-mail address removed)...

| > Beware of mappers that map database tables to classes; this is not
| > the best way to do things as it tries to impose a relational model
| > into your object model.
|
| Ohh yeah, THE doom-scenario. :-/

No, not doom, just not the best.

| Perhaps you should elaborate a bit why this is 'bad', as it really
| doesn't make a difference. If you have a hierarchy of entities
| (supertype/subtype) and each entity is mapped onto its own table/view,
| you still have 1:1 mappings between entity and table/view but also have
| inheritance. If you start from the object model, you'll end up with the
| same tables and with the same mappings.

Not true. If you just look at the typical example of Composition like an
Invoice :

database version
=============
Invoice
ID
Reference
CustomerID
Date

InvoiceLine
ID
InvoiceID
Qty
ProductID
UnitPrice

object version
==========
InvoiceLine
ID
Qty
Product
UnitPrice

Invoice
ID
Reference
CustomerID
Date
Lines

Note the shift from each Line having to refer to its "owner" Invoice to the
Lines being a part of the Invoice with no back reference necessary.
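
In C#, the object version might look roughly like this (a sketch only; the
Product type and the property types are assumptions for the example):

using System;
using System.Collections.Generic;

public class Product
{
    public int ID { get; set; }
    public string Name { get; set; }
}

// A line knows nothing about the invoice that owns it...
public class InvoiceLine
{
    public int ID { get; set; }
    public int Qty { get; set; }
    public Product Product { get; set; }   // object reference instead of a ProductID column
    public decimal UnitPrice { get; set; }
}

// ...because the invoice owns its lines as part of the composition.
public class Invoice
{
    public int ID { get; set; }
    public string Reference { get; set; }
    public int CustomerID { get; set; }
    public DateTime Date { get; set; }
    public List<InvoiceLine> Lines { get; } = new List<InvoiceLine>();
}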

| Furthermore, a lot of people have to write software which has to work
| with legacy databases. Your proposal implies that these people can't
| use o/r mapping, because they have to work with an 'inferior' setup...

Not true. A good OPF allows you to map objects to legacy databases and
customise storage/retrieval beyond the simplistic 1:1 mapping.

| > The best mappers are the ones that allow you to design your object
| > model and then let the mapper take care of creating the database to
| > fit the object model.
|
| and why is that 'better' ? Ever tried to add addional fields to
| entities in a system with, say 500 entities, and which is already in
| production ? happy migrating with your automatic schema updater :)

Yes, it works very nicely thank you. Don't judge all OPFs by those you
already know; some of us took the time and effort to ensure a comprehensive
design.

| Another downside is that often those mappers add meta-data into the
| relational model which then makes it impossible to re-use the model +
| data in other applications which aren't able to use the o/r mapper core.

Again, only in inferior mappers.

| > If you can write your application, using a mapper, and you don't need
| > to know anything about databases, then you have a good mapper.
|
| ... till your app is in production and a functionality change has to
| be implemented. Ooops, now suddenly someone has to write migration
| scripts for the 1TB of data to the new schema...

Not true. You really haven't seen a good OPF yet, have you ?

| software development doesn't end when v1 is done, Joanna.

Which is why my OPF is several versions old now :)

Joanna
 
Frans Bouma [C# MVP]

Joanna said:
"Frans Bouma [C# MVP]" <[email protected]> a écrit dans
le message de news: (e-mail address removed)...
| > Beware of mappers that map database tables to classes; this is not
| > the best way to do things as it tries to impose a relational model
| > into your object model.
Ohh yeah, THE doom-scenario. :-/

No, not doom, just not the best.

what the definition of 'best' is in your statement is still vague.
| Not true. If you just look at the typical example of Composition like
| an Invoice :
|
| database version
| =============
| Invoice
| ID
| Reference
| CustomerID
| Date
|
| InvoiceLine
| ID
| InvoiceID
| Qty
| ProductID
| UnitPrice
|
| object version
| ==========
| InvoiceLine
| ID
| Qty
| Product
| UnitPrice
|
| Invoice
| ID
| Reference
| CustomerID
| Date
| Lines
|
| Note the shift from each Line having to refer to its "owner" Invoice
| to the Lines being a part of the Invoice with no back reference
| necessary.

Ok, though that would make fast data manipulation of a single entity
pretty difficult, as you always have to have the Invoice object around.

But, tell me: why is this 'better' ? Of course wacky setups are
thinkable where there are clear differences; the question is whether this
will bring you any advantage.

Btw, you still have one entity mapped onto one db object, which I was
arguing. Of course I wasn't arguing that every field in the table/view
has to be present in the entity and vice versa. I was arguing that if
you pick an entity type E, it has one target db object, and if you use
inheritance, it inherits the fields and target objects of its supertypes,
but that's it.
| > The best mappers are the ones that allow you to design your object
| > model and then let the mapper take care of creating the database to
| > fit the object model.

| Yes, it works very nicely thank you. Don't judge all OPFs by those
| you already know; some of us took the time and effort to ensure a
| comprehensive design.

Yes, I know. 3 years full time now and counting.

Btw I was referring to the model you proposed: add fields / hierarchy
elements to classes and let a tool propagate the changes to the db.
(which you seem to declare 'best'). The propagation of changes to the
db can be cumbersome in a huge schema with lots of data.
| > If you can write your application, using a mapper, and you don't need
| > to know anything about databases, then you have a good mapper.

| Not true. You really haven't seen a good OPF yet, have you ?

So, your 'superior' mapper always cranks out the proper upgrade
scripts, and takes into account the full load of the db ? Remember: you
claimed that starting with classes and letting the o/r mapper update the
db according to the changes made to the class model is 'best'. So, 'not
true', what does that really mean?
| Which is why my OPF is several versions old now :)

mine too, so that's not an argument. Besides, I was referring to the
business application built with the o/r mapper, which is, for example, a
year in production and needs an update.

It has a lot of tables and data. You suggest the developer should just
update the class model and let the o/r mapper alter the db schema; I
argued that that will be difficult in a schema in production with a lot
of data.

FB

--
 
Joanna Carter [TeamB]

"Frans Bouma [C# MVP]" <[email protected]> a écrit dans le
message de news: (e-mail address removed)...

| what the definition of 'best' is in your statement is still vague.

I did not state that my OPF was the best, just that deriving classes from
tables was not the best. Unfortunately, until we get a "perfect" OODB, we
will all be working with compromise due to the impedance mismatch between
the two paradigms.

| But, tell me: why is this 'better' ? Of course wacky setups are
| thinkable where there are clear differences, the question is if this
| will bring you any advantage.

Because this is an accurate modelling of the real compositional nature of
such classes; it is not abstracted to "make it fit" a relational database.

| Yes, I know. 3 years full time now and counting.

Heheh, then it should start getting easier now :))

| Btw I was referring to the model you proposed: add fields / hierarchy
| elements to classes and let a tool propagate the changes to the db.
| (which you seem to declare 'best'). The propagation of changes to the
| db can be cumbersome in a huge schema with lots of data.

As I have already said, I did not say that this method was "the best", just
that it avoids having an object model coerced into a relational one.

| So, your 'superior' mapper always cranks out the proper upgrade
| scripts, and takes into account the full load of the db ? Remember: you
| claimed starting with classes and let the o/r mapper update the db
| according to the changes made to the class model be 'best'. So, 'not
| true', what does that really mean?

We don't write migration scripts, the OPF looks after that for us; since we
wrote the scripting engine.

Our OPF evaluates execution times and attempts to optimise the generated
SQL; if it still can't achieve desired speeds, then we have a facility for
injecting a custom generator for any given type.

Most complex data retrieval that would require joins, etc tends to be for
what I would call reporting classes. IOW, they are complex queries for
something like, how many of a certain product were sold to Mr Bloggs in
2004. For these scenarios, we write a class like ProductSalesSummary and
ensure that the OPF contains a custom query, handcrafted if necessary.
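
For illustration, such a reporting class might be as simple as this (the
class, the query and the registration call are hypothetical, just to show
the shape of the idea):

// A read-only "reporting class"; instances are filled from a handcrafted
// query rather than from the OPF's generated per-table SQL.
public class ProductSalesSummary
{
    public string CustomerName { get; set; }
    public string ProductName { get; set; }
    public int Year { get; set; }
    public int QuantitySold { get; set; }
}

// Purely illustrative registration of the handcrafted SQL with the OPF:
// opf.RegisterCustomQuery(typeof(ProductSalesSummary),
//     @"SELECT c.Name, p.Name, YEAR(i.Date), SUM(l.Qty)
//       FROM   InvoiceLine l
//       JOIN   Invoice  i ON i.ID = l.InvoiceID
//       JOIN   Customer c ON c.ID = i.CustomerID
//       JOIN   Product  p ON p.ID = l.ProductID
//       WHERE  c.Name = @customer AND YEAR(i.Date) = @year
//       GROUP  BY c.Name, p.Name, YEAR(i.Date)");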

I am not saying that your design doesn't have value; it certainly makes a
quick migration for legacy databases but for new-build projects, we have
found the object-led approach works very well.

Joanna
 
Frans Bouma [C# MVP]

Joanna said:
I did not state that my OPF was the best, just that deriving classes
from tables was not the best. Unfortunately, until we get a "perfect"
OODB, we will all be working with compromise due to the impedance
mismatch between the two paradigms.

Well, if you follow this analogy:
- create NIAM model from reality
- create E/R model from niam model
- create tables from E/R model

- create entity types back from tables,

you get back your e/r model. With the proper tooling you get back the
niam model from the reverse engineered e/r model.

The advantage is that you can use NIAM/ORM for functional analysis and
also use the results of that for the db schema, which results in a
relational model built on entity definitions, which are also the base
of what you will work with when you reverse engineer it. At the same
time you have the freedom to adjust the entity definitions in your
classes to the application, without jeopardizing the route from
functional research + abstract model -> e/r model -> tables.

In a lot of organisations, this is a proper way of designing the
database, often done by a person whose job it is to design the
schemas/maintain the schemas because that person is the specialist for
relational models. Although I'd love to say that programs can do that
job for that person, it's often a lot of 'interpretation' of the
reality to come to the proper result.
| Because this is an accurate modelling of the real compositional
| nature of such classes; it is not abstracted to "make it fit" a
| relational database.

though neither would the other way around.
| As I have already said, I did not say that this method was "the
| best", just that it avoids having an object model coerced into a
| relational one.

yeah well, if you start with the classes and create the relational
model from that, you of course cut corners to make it as easy as
possible for the classes to get persisted and let the o/r mapper do its
job, which will compromise what otherwise would have been a solid
relational model.

It's hard, but doable, to reproduce an abstract model of entities
which aren't 1:1 copies of table definitions, and which are also usable
in hierarchies, for example supertype/subtype, which already move away
from the relational approach, as inheritance isn't modelable in a
relational model without semantic interpretation of the relational
model and/or the data it contains.
| We don't write migration scripts, the OPF looks after that for us;
| since we wrote the scripting engine.

Though how a production database is migrated often depends on the
amount of data in the tables, not the structure of the tables themselves,
and because a production database with a lot of data is hard to back up
/ restore easily, migrating these databases can be a task which is
often done by hand, tested in labs first, etc.
| Our OPF evaluates execution times and attempts to optimise the
| generated SQL; if it still can't achieve desired speeds, then we have
| a facility for injecting a custom generator for any given type.

Clever.

| Most complex data retrieval that would require joins, etc tends to be
| for what I would call reporting classes. IOW, they are complex
| queries for something like, how many of a certain product were sold
| to Mr Bloggs in 2004. For these scenarios, we write a class like
| ProductSalesSummary and ensure that the OPF contains a custom query,
| handcrafted if necessary.

I do similar things: you can define a list based on fields in the
related entities in the project (which support inheritance), and you
can also do that in code, dynamically. You then use the compile-time
checked query language to build the query with aggregates, group-by
etc. for reporting data.

Not a lot of the o/r mappers out there offer that kind of feature, as
most of them use the entity as their smallest block to work with, while
if they used the attribute (entity field), dynamic lists and the
like would be possible.
| I am not saying that your design doesn't have value; it certainly
| makes a quick migration for legacy databases but for new-build
| projects, we have found the object-led approach works very well.

I also think what the core information provider for your design is, is
important. If you, as I stated above, use NIAM/ORM or a similar highly
abstract modelling technique to design the reality with the customer
during the functional research phase, you have other sources of
information than when you start with a UML model. It's not that the
other way of doing things doesn't work or works less well; it's just
that a person should realize that starting with the DB requires a
properly designed relational model, and starting with classes requires
a properly designed class model. If people don't realize that, no
matter what technique they choose, it will be cumbersome.

FB

--
 
Joanna Carter [TeamB]

"Frans Bouma [C# MVP]" <[email protected]> a écrit dans le
message de news: (e-mail address removed)...

| Well, if you follow this analogy:
| - create NIAM model from reality
| - create E/R model from niam model
| - create tables from E/R model
|
| - create entity types back from tables,
|
| you get back your e/r model. With the proper tooling you get back the
| niam model from the reverse engineered e/r model.

Having never found a use for things like NIAM (I had to look it up to see
what it was), I can't see what you mean. I start my modelling by defining
classes that contain not only data but also functionality, as OO design is
not just about data in isolation.

Then, if instances of that class require persisting, I go on to decide
which attributes (data) require persisting and then map those attributes to
fields of tables in the OPF. Many classes in my systems do not actually
need persisting at all as they are "worker" classes, used to manipulate
other objects.

| In a lot of organisations, this is a proper way of designing the
| database, often done by a person who's job it is to design the
| schemas/maintain the schemas because that person is the specialist for
| relational models. Although I'd love to say that programs can do that
| job for that person, it's often a lot of 'interpretation' of the
| reality to come to the proper result.

We find that, using our approach, employing a DBA would be serious overkill
as all the tables are created automatically, adjusted automatically and the
only SQL involved is simple Select, Update, Insert and Delete statements.

| though neither would the other way around.

Having to add foreign key fields and referential integrity constraints into
a database where they do not exist in the object model is corrupting the
original model. Relational theory was invented to accommodate the need to
store data in a relational database. If you were able to use a well-designed
OODB, then you would no longer need relational databases. Why should I have
to store the invoice lines separately from the invoice if they are an
integral part of that invoice ? Only because a relational database requires
me to do it that way.

| yeah well, if you start with the classes and create the relational
| model from that, you of course cut corners to make it as easy as
| possible for the classes to get persisted and let the o/r mapper do its
| job, which will compromise what otherwise would have been a solid
| relational model.

I don't cut corners in class design; I use the OPF to translate the object
model into a relational one. However, due to the minimal requirements of
implementing an object storage mechanism, the majority of what you call a
"solid" relational model simply never gets done; it is the job of the
business classes themselves to maintain inter-object integrity.

| It's hard, but doable to reproduce an abstract model of entities
| which aren't 1:1 copies of table definitions, and which are also usable
| in hierarchies for example (supertype/subtype) which already move away
| from the relational approach as inheritance isn't modelable in a
| relational model without semantic interpretation of the relational
| model and / or containing data.

It looks like we must agree to differ. You seem to think that everything has
to be solved using relational modelling, whereas I only use RDBs as a
temporary store for objects.

| Though how a production database is migrated is often depending on the
| amount of data in the tables, not the structure of the tables itself,
| and because a production database with a lot of data is hard to backup
| / restore easily, migrating these databases can be a task which is
| often done by hand, tested in labs first etc.

If I were to migrate from one database manufacturer to another, I would
simply use one type of connection to read in the objects and then write them
out using a different type of connection. Both connections are written to
accommodate any peculiarities of the particular DB to which they are talking.

| I do similar things: you can define a list based on fields in the
| entities in the project (which support inheritance) which are related,
| and you can also do that in code, dynamically. Using the compile-time
| checked query language then to build the query with aggregates, groupby
| etc. for reporting data.
|
| Not a lot of the o/r mappers out there offer that kind of features, as
| most of them use the entity as their smallest block to work with, while
| if they would use the attribute (entity field), dynamic lists and the
| like would be possible.

Certainly, we use an attribute (property/field) as the smallest granularity.

We have been known to create database views that manage inherited classes,
so that the base/derived classes are stored in "partial" tables linked on
the instance ID. But we tend to keep most classes to a full class per table,
even if that means the same columns in multiple tables. This is an
optimisation that pays off in retrieval times at the expense of being able
to easily retrieve heterogeneous lists of all instances of a hierarchy.

| I also think what is the core information provider for your design is
| important. If you, as I stated above, use NIAM/ORM or similar highly
| abstract modelling technique to design the reality with the customer
| during the functional research phase, you have other sources of
| information than when you start with a UML model. It's not that the
| other way of doing things doesn't work or works less good, it's just
| that a person should realize that by starting with the DB requires a
| properly designed relational model, and starting with classes requires
| a properly designed class model. If people don't realize that, no
| matter what technique they choose, it will be cumbersome.

We have never needed NIAM/ORM; we usually start with UML class diagrams and
use case diagrams. When we have a tested, working object model, then we look
at persistence. Don't forget, in theory we could just keep all the objects
in memory as in something like Prevayler, only writing the whole object
graph to disk when the application quits.

We really are approaching application design from two different directions;
you from the relational-driven background where the world can be ruled by
RDBs :), I from the OO background where RDBs are mere (and inefficient)
slaves to the object world :)

Joanna
 
Manfred

Many factors will probably play into your decision, including:
- Can you start with the domain model, and are you free to design your
schema?
- Or do you have an existing database schema which you cannot modify?

Also, the amount of data you want to process, the deployment scenarios,
the number of concurrent users, the kind of data access (OLAP vs. Data
Processing/Number Crunching) might all have an impact on your decision.
You may have to support multiple different SQL dialects, e.g. MS SQL
vs. MySQL vs. Oracle vs. DB/2. If that is the case you need to figure
out how to handle the differences yourself, or you can use an OR framework
that does it for you.

You might also consider performance issues, e.g. what happens if you
process 100,000 objects in a list, each of them referencing another set
of objects. When you access the list then do you want to load all the
100,000 objects immediately? When you access one of the 100,000
objects, do you want the OR framework to load the other referenced
objects as well? Think of a network of objects: Where do you want to
"cut-off" loading? Do you want to use lazy loading? How does this
related to caching? Do you want to be able to support a distributed
cache? Read/write? Read-caching only?
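
To illustrate just one of those choices, here is a minimal lazy-loading
sketch (the repository and all names are invented; a real framework would
generate or intercept something like this for you):

public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
}

// Stand-in for whatever actually hits the database.
public static class CustomerRepository
{
    public static Customer Load(int id)
    {
        return new Customer { ID = id, Name = "(loaded on demand)" };
    }
}

public class Order
{
    private readonly int _customerId;
    private Customer _customer;              // stays null until first access

    public Order(int customerId) { _customerId = customerId; }

    // Lazy loading: the referenced Customer is fetched only when someone
    // asks for it, so materialising 100,000 Order objects does not also
    // pull in 100,000 Customer rows up front.
    public Customer Customer
    {
        get
        {
            if (_customer == null)
                _customer = CustomerRepository.Load(_customerId);
            return _customer;
        }
    }
}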

The point I want to make is that selecting the "best" OR framework does
not reduce the number of questions you need to answer. At best you get
a different set of questions.

Also, assuming that introducing an OR framework completely hides the
database access/product/etc. is naive at best. You still need to know
what is going on under the hood. For instance, you probably want to
know what kind of SQL statement is being sent to the server.
From a project I can remember a case where a simple method retrieved
two strings from the database. As the team didn't have a lot of
experience with the OR tool, they just did what they thought was the
straightforward approach. In the end the SQL statement was a join over
a dozen tables and extremely slow. In the particular case the issue was
solved with a fairly simple stored procedure wrapped by a simple method
call.
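
A sketch of that kind of workaround, bypassing the O/R layer for a single
hot spot (the connection string, procedure and parameter names are invented
for the example):

using System.Data;
using System.Data.SqlClient;

public static class LookupQueries
{
    // Wraps a stored procedure that returns the two strings in a single row,
    // instead of letting the O/R tool generate a join over a dozen tables.
    public static string[] GetTwoStrings(string connectionString, int key)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetTwoStrings", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Key", key);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                reader.Read();
                return new[] { reader.GetString(0), reader.GetString(1) };
            }
        }
    }
}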

As a side note: beware of people who think that mapping tables to
classes is identical to mapping a good object-oriented design to
tables, and that you end up with the same set of tables in both cases.
It usually raises a red flag if someone does not know the difference
between a class diagram and an ERD.

Bottom line, my suggestions would be:
- Depending on your project size, run a little experiment with different
approaches, determine selection criteria, and finally settle the
make-or-buy decision.
- If you roll your own mapping, you should be extremely confident that
you can do better than those who implemented the available frameworks,
or you should have an excellent justification.

Best regards,
Manfred.
 
Manfred

"Ever tried to add addional fields to
entities in a system with, say 500 entities, and which is already in
production ? happy migrating with your automatic schema updater :) "

This doesn't need to be an issue in all cases. We have designed our
product in a way that the customer can add his own fields/properties to
any domain object. These custom fields/properties even 'survive'
updates of the software.

If we wanted to add a specific field/property to all domain objects,
then we would simply extend the common base class for all domain
objects.
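
A minimal sketch of that idea follows (the names are invented, and how the
bag of custom values is actually persisted is product-specific and not
shown here):

using System.Collections.Generic;

// Common base class for all domain objects. Customer-defined fields live in
// a name/value bag, so they survive updates to the built-in, versioned set
// of properties.
public abstract class DomainObject
{
    public int ID { get; set; }

    public IDictionary<string, object> CustomFields { get; } =
        new Dictionary<string, object>();
}

public class Contact : DomainObject
{
    public string Name { get; set; }
}

// Usage: contact.CustomFields["CostCentre"] = "R&D";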

I think it depends on the requirements and in particular on the
object-oriented design of your system, and how the domain model is
mapped onto the schema.

The story certainly changes if other factors play a role, e.g. the
database schema is given and cannot be changed.

Best regards,
Manfred.
 
Joanna Carter [TeamB]

"Manfred" <[email protected]> a écrit dans le message de (e-mail address removed)...

| This doesn't need to be an issue in all cases. We have designed our
| product in a way that the customer can add his own fields/properties to
| any domain object. These custom fields/properties even 'survive'
| updates of the software.

Have you been reading my source code ? :))

| The story certainly changes if other factors play a role, e.g. the
| database schema is given and cannot be changed.

Heheh, especially if you have a DBA who insists that you only use their
stored procs, etc :-(

Joanna
 
Joanna Carter [TeamB]

| You might also consider performance issues, e.g. what happens if you
| process 100,000 objects in a list, each of them referencing another set
| of objects. When you access the list then do you want to load all the
| 100,000 objects immediately? When you access one of the 100,000
| objects, do you want the OR framework to load the other referenced
| objects as well? Think of a network of objects: Where do you want to
| "cut-off" loading? Do you want to use lazy loading? How does this
| relate to caching? Do you want to be able to support a distributed
| cache? Read/write? Read-caching only?

I have found useful the idea of "proxy" instances, where objects in lists
are only partially loaded with the properties essential to browsing, e.g.
ID, Name, Description; the rest of the properties are only loaded when you
select an object for editing.
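
Roughly, the shape of that idea in C# (the names and the loader interface
are hypothetical):

using System.Collections.Generic;

// Lightweight object used for browsing lists: only ID, Name and Description
// are loaded.
public class CustomerProxy
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

// The full object, fetched only when an item is actually selected for editing.
public class Customer : CustomerProxy
{
    public string Address { get; set; }
    public string Notes { get; set; }
}

public interface ICustomerLoader
{
    IList<CustomerProxy> ListForBrowsing();   // SELECT ID, Name, Description FROM ...
    Customer LoadForEditing(int id);          // SELECT <all columns> ... WHERE ID = @id
}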
| The point I want to make is that selecting the "best" OR framework does
| not reduce the number of questions you need to answer. At best you get
| a different set of questions.

Agreed
| From a project I can remember a case where a simple method retrieved
| two strings from the database. As the team didn't have a lot of
| experience with the OR tool, they just did what they thought was the
| straightforward approach. In the end the SQL statement was a join over
| a dozen tables and extremely slow. In the particular case the issue was
| solved with a fairly simple stored procedure wrapped by a simple method
| call.

We have also used a method whereby we issue a series of single-table
selects in sequence, basing the where clause and such on the results of the
first query.
| As a side note: beware of people who think that mapping tables to
| classes is identical to mapping a good object-oriented design to
| tables, and that you end up with the same set of tables in both cases.
| It usually raises a red flag if someone does not know the difference
| between a class diagram and an ERD.

Amen to that !

Joanna
 
sdurity

Islamegy® said:
| I was spending time learning to use strongly typed collections instead
| of DataSet/DataTable and using the Enterprise Library application blocks.

I tried this route, too. It takes too long and you won't really be able
to build in all the features that a framework can offer.
| Recently I discovered there are a lot of O/R mapper projects: CSLA.NET,
| NHibernate and many others.
| Actually NHibernate looks unique compared to the others to me, and
| easier too.
| But I don't know if it's the most efficient O/R mapper out there or if I
| should look for something better.

It may depend on whether your database is supported by the framework. Most
C# folks are sucked into the MS universe and settle for SQL Server.
Most of the .Net frameworks will support that.
| Please, if anyone has experience with these mappers, help me find the
| most efficient solution.

It depends, of course, on your application. If you are looking at
SmartClient applications and not web apps, take a look at DevForce from
IdeaBlade (www.ideablade.com). It has an ORM component. However, it
also has data-binding classes for the UI part of the application. IMO,
they understand the needs and realities of most business application
development. And they solve them with some great tools. For some of the
best documentation around, check out their Concepts guide. As a
customer, I have also found their support to be extremely responsive,
perceptive, and smart.

Conceptually, you can picture DevForce as a usable implementation of
CSLA, which seems to be a highly over-engineered approach. R. Lhotka,
the CSLA author, is a member of their advisory board.
 
