N-Tier architecture with .NET typed Datasets

Natehop

I've been attempting to design an n-tier framework leveraging .NET's
strongly typed DataSets. My framework will serve as the foundation for
several client apps, from Windows applications to web sites and web
services.

The architecture consists of a business rules tier, a data tier, and a
common tier. The common tier contains my typed Datasets while the
business rules and data tiers contain functions to populate and save the
common Datasets. Pretty standard n-tier design; however, I'm having
quite the time determining what database tables my typed Datasets should
contain.
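
For a concrete picture, my data tier ends up looking roughly like this
(just a sketch; CustomersDataSet is a hypothetical typed DataSet generated
from an XSD, and the SQL and connection handling are simplified):

using System.Data.SqlClient;

// Data tier: knows how to populate and save the typed DataSet that
// lives in the common tier.
public class CustomerData
{
    private string _connectionString;

    public CustomerData(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Fill the typed DataSet defined in the common tier.
    public CustomersDataSet GetCustomers()
    {
        CustomersDataSet ds = new CustomersDataSet();
        using (SqlConnection cn = new SqlConnection(_connectionString))
        {
            SqlDataAdapter da = new SqlDataAdapter(
                "SELECT CustomerID, Name FROM Customers", cn);
            da.Fill(ds.Customers);   // typed DataTable generated from the schema
        }
        return ds;
    }

    // Push any changes the client made back to the database.
    public void SaveCustomers(CustomersDataSet ds)
    {
        using (SqlConnection cn = new SqlConnection(_connectionString))
        {
            SqlDataAdapter da = new SqlDataAdapter(
                "SELECT CustomerID, Name FROM Customers", cn);
            SqlCommandBuilder cb = new SqlCommandBuilder(da);  // generates insert/update/delete commands
            da.Update(ds.Customers);
        }
    }
}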

I've read several articles, books, and newsgroups in an attempt to find
some best practices regarding this issue; however, my inquiries thus far
illustrate that no best practices have yet evolved.

My research and experience have led me to two possible solutions.

--PROBLEM DOMAINS -----------------------------------
The first solution is to use "problem domains" when designing the Dataset
schema. The key to problem domains is that they encapsulate all needed
tables within the Dataset. The issue I find with this approach is that
it requires one to make a judgment early in the development lifecycle as
to what problem domains exist and what tables fall within them, leaving a
high probability for mistakes and either missing tables that should have
been included or including tables that really weren't needed. The
probability of getting it wrong increases with the size and complexity
of your database, not to mention any unknown requirements that will be
placed on the framework six months from now. The other issue I find with
problem domains is that they are driven by features, which means it's
highly likely that a problem domain could exist for every feature in the
application. That means no code reuse, and the framework simply becomes
unneeded complexity.

--BUILDING BLOCKS------------------------------------
The second solution is to use a very granular approach when defining the
Dataset schema; meaning, we only include a parent table and all of its
children 1 level deep in the Dataset. I have spent the most time
experimenting with this solution, and while it addresses the issues posed
by problem domains, this solution has its shortcomings as well.

The first issue is the sheer number of objects required. Basically any
table that is a parent table becomes an object (typed Dataset). The
second issue, which has met with resistance on my team, is that management
of these objects is now pushed to the client. The client code may need
to create 2 or 3 of these granular objects for a given feature or
"problem domain". Which illustrates another issue... transactions. If
client code is working with 3 of these granular objects and needs to
update the database, how do I wrap the update of all 3 objects within a
single transaction if the data layer treats these objects as separate and
distinct (each has its own save method within the data layer)?
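
The best I've come up with so far is to have every granular save method
accept an open connection and transaction from the caller, roughly like
this (a sketch only; the class and method names are hypothetical):

using System.Data.SqlClient;

// _orderData and _orderDetailData are two of the granular data-layer
// objects; each Save overload uses the caller's connection/transaction
// instead of opening its own.
public void SaveAll(OrdersDataSet orders, OrderDetailsDataSet details)
{
    using (SqlConnection cn = new SqlConnection(_connectionString))
    {
        cn.Open();
        SqlTransaction tx = cn.BeginTransaction();
        try
        {
            _orderData.Save(orders, cn, tx);
            _orderDetailData.Save(details, cn, tx);
            tx.Commit();   // both granular saves succeed or neither does
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }
}

But that forces the transaction plumbing onto whoever owns this method,
which is exactly the management burden my team is pushing back on.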

One thought I had was to make the data objects very granular and then wrap
them within a problem domain object at the business rules layer. That
way we end up with a very flexible and reusable data layer and common
layer. If we happen to be short-sighted when designing the business rules
objects or "problem domains", the impact on framework development will be
much smaller.
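
In code, the business rules layer would look something like this (again
just a sketch; OrderDomain, OrderData, and OrderDetailData are hypothetical
names, and the save is the same transaction pattern as above):

using System.Data.SqlClient;

// Business rules tier: a "problem domain" object that composes the
// granular data-layer objects so client code never juggles them itself.
public class OrderDomain
{
    private string _connectionString;
    private OrderData _orderData;
    private OrderDetailData _orderDetailData;

    public OrderDomain(string connectionString)
    {
        _connectionString = connectionString;
        _orderData = new OrderData(connectionString);
        _orderDetailData = new OrderDetailData(connectionString);
    }

    // One call loads everything the problem domain needs...
    public void Load(int orderId, OrdersDataSet orders, OrderDetailsDataSet details)
    {
        _orderData.Fill(orders, orderId);
        _orderDetailData.Fill(details, orderId);
    }

    // ...and one call saves it inside a single transaction.
    public void Save(OrdersDataSet orders, OrderDetailsDataSet details)
    {
        using (SqlConnection cn = new SqlConnection(_connectionString))
        {
            cn.Open();
            SqlTransaction tx = cn.BeginTransaction();
            try
            {
                _orderData.Save(orders, cn, tx);
                _orderDetailData.Save(details, cn, tx);
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}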

To oversimplify, I'll use the example of a car.

If we are making Volkswagen Beetles, the problem domain solution is to
define the domain around that specific vehicle. When a new vehicle is
required, we create an entirely new vehicle. No reuse, but we get
everything we need in one object which is easily managed in the business
rule layer or data layer.

The granular approach is to first define the wheels, engine, seats,
etc., and then let either the client code or the business rules layer
build the needed car from those components.

Obviously you can tell that I am leaning toward a more granular approach;
however, I was hoping to hear from someone who may be using either
technique or even something else with success.

Thanks in advance for any feedback and/or advice. Keep in mind that what
I'm really after is best practices.


-- Nate

-----------------------------------------------------------
"...touch a solemn truth in collision with a dogma...
though capable of the clearest proof, and you will find
you have disturbed a nest, and the hornets will swarm
about your eyes and hand, and fly into your face and eyes."
------------------------------------------------ John Adams
 
Miha Markic

Hi Natehop,

I have an n-tier project which has both web and WinForms front ends.
I decided to go with the first approach - using a dataset per business
process - and it works just fine for me.
However, the project was pretty well defined.
Even if it weren't well defined I would prefer my approach over the
granular one because it is more strict.
Just my opinion.
 
Eric Newton

I tend to favor an approach that doesn't deal with DataSets at all, i.e.,
building classes (basically structures) that can ferry the data around in a
type-safe way. I've run into some of the problems you have by trying to
design a framework around something that is still too ambiguous in the first
couple of runs, and you end up shifting around a lot of baggage that you
don't need.

For multiple items, I just use arrays, whether through a parent/child
relationship or as a return value from a method.

For example, an AccountInfo class:

public class AccountInfo   // could just as easily be a struct
{
    public int AccountID;
    public int CustomerID;
    public AccountDetailInfo[] AccountDetailInfos;
}

public class AccountDetailInfo
{
    public int AccountDetailID;
    public int AccountID;
    public string SomethingHere;
    public decimal Amount;
    public bool SomeFieldIsValid;
}
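
The data layer then just materializes these objects from a reader,
something along these lines (a rough sketch with made-up table and column
names):

using System.Collections;
using System.Data;
using System.Data.SqlClient;

public AccountInfo[] GetAccountsForCustomer(int customerId, string connectionString)
{
    ArrayList accounts = new ArrayList();
    using (SqlConnection cn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT AccountID, CustomerID FROM Accounts WHERE CustomerID = @CustomerID", cn))
    {
        cmd.Parameters.Add("@CustomerID", SqlDbType.Int).Value = customerId;
        cn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                AccountInfo info = new AccountInfo();
                info.AccountID = (int)reader["AccountID"];
                info.CustomerID = (int)reader["CustomerID"];
                accounts.Add(info);
            }
        }
    }
    return (AccountInfo[])accounts.ToArray(typeof(AccountInfo));
}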
 
David Browne

Miha Markic said:
Hi Natehop,

I have an n-tier project which has both web and WinForms front ends.
I decided to go with the first approach - using a dataset per business
process - and it works just fine for me.
However, the project was pretty well defined.
Even if it weren't well defined I would prefer my approach over the
granular one because it is more strict.
Just my opinion.

I agree. The first approach is better. The only real drawback you
mentioned is that you have to define your application features before you
know how to bundle your tables together into DataSets.

I think that's a good thing.

You said:
The issue I find with this approach is that
it requires one to make a judgment early in the development lifecycle as
to what problem domains exist and what tables fall within them.

You are correct, but that's a best practice anyway. Basically: one use
case, one set of screens, one transaction, one dataset.

Plus some global datasets, like one full of all your mostly read-only
tables, used for client-side caching.
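
For example, something like this (just a sketch; the lookup tables are
made-up names):

using System.Data;
using System.Data.SqlClient;

// The "global dataset" idea: fill the read-only lookup tables once
// and hand every screen the same cached copy.
public class LookupCache
{
    private static DataSet _lookups;

    public static DataSet GetLookups(string connectionString)
    {
        if (_lookups == null)
        {
            DataSet ds = new DataSet("Lookups");
            using (SqlConnection cn = new SqlConnection(connectionString))
            {
                new SqlDataAdapter("SELECT * FROM Countries", cn).Fill(ds, "Countries");
                new SqlDataAdapter("SELECT * FROM Currencies", cn).Fill(ds, "Currencies");
            }
            _lookups = ds;
        }
        return _lookups;
    }
}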

David
 
Nate Hopkins

Thanks for the input. We decided to move forward with the problem domain
approach.

-- Nate

-----------------------------------------------------------
"...touch a solemn truth in collision with a dogma...
though capable of the clearest proof, and you will find
you have disturbed a nest, and the hornets will swarm
about your eyes and hand, and fly into your face and eyes."
------------------------------------------------ John Adams
 
Mike Bridge

I am looking at Typed DataSets for the first time, and I'm also
struggling with this issue. I haven't been able to find any useful
information on how to design a large system with them, and I'm
starting to wonder whether Typed DataSets are of much use in anything
beyond a small application.

My general impression is that Typed Datasets are a nice addition to a
generic DataSet, but there isn't enough there to build a decent Data
Access layer. If you're using the "Problem Domains" model, it seems
to me that you are essentially defining "Views" on the data---one view
per TDS---and each view requires a separate mapping to the underlying
data. This means that you have a large amount of duplication, and the
end result is a pretty poor architecture.

But if you're using the Building Blocks architecture, you have the
problem of not having a really decent set of tools in between your
application and the data store. Navigating the relationships between
these building blocks becomes more difficult (I'm not even sure how
you'd do that at all), and it doesn't seem like transactions are
supported.
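
As far as I can tell, the best you can do is merge the granular DataSets
on the client and wire the relation up by hand, something like this (a
sketch; the Orders/OrderDetails table and column names are placeholders):

using System.Data;

// Combine two granular typed DataSets into one untyped DataSet so the
// parent/child relationship can actually be navigated.
public DataSet CombineOrderData(DataSet orders, DataSet orderDetails)
{
    DataSet combined = new DataSet("OrderView");
    combined.Merge(orders);
    combined.Merge(orderDetails);

    combined.Relations.Add("Order_Details",
        combined.Tables["Orders"].Columns["OrderID"],
        combined.Tables["OrderDetails"].Columns["OrderID"]);

    return combined;
}

And that still doesn't give you a transaction across the two saves.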

The alternative seems to be to go out and buy a decent O/R mapping
tool. With a good one, you can worry a lot less about the ugly
details of persisting data.

However, I'd love to hear from someone who's successfully designed a
decent set of Business Objects and Data Objects with Typed DataSets.

-Mike
 
Miha Markic

Hi Mike,
However, I'd love to hear from someone who's successfully designed a
decent set of Business Objects and Data Objects with Typed DataSets.

I have a fairly big solution (15 projects) - mutual fund management
software for a large bank.
And IMO it works just fine.
If I had to do it again, I would take the same path (though I would change
details here and there :) )

BTW, for some portions I use code generated by a template generator - it
helps me a lot.
 
