n-Tier development


Steve Barnett

Ok, I've never done n-Tier development and I'm getting confused...

Assuming I have a business layer and a data access layer that may be running
on different machines, how does my business layer know where my data access
layer is running? What is it I have to do in the business layer that
"creates" the data access layer on the other server?

Second, what do I pass between my business layer and my data access layer?
Should I be returning a dataset from the data access layer to the business
layer and letting the business layer convert it into business objects, or
should my data access layer return business objects?

For example, assume my business layer deals with "Client" objects. Should my
business layer ask my data access layer to return it a "client" object,
leaving the data access layer to determine what that involves, or does my
business layer ask the data access layer for some information from tables
and then the business layer builds the client object from the returned
dataset(s)? If so, who builds the SQL? I thought that would be a DAL
function.

As you can probably tell, I could do with a simple (but practical) tutorial
on n-Tier development... does anyone know of a good one as it relates to
DESKTOP-based applications? Unfortunately, the app I'm working on is not a
web-based app.

Thanks
Steve
 

Guest

I have gone through your query. If you want the data access on a different
machine, then you have to reference (somehow) the data access layer DLL from
the business layer. You can return either a dataset or objects; it depends on you.
 

john smith

Steve said:
<snip>

I don't see why you'd want to have DAL and BLL on 2 separate computers.
You'd have to use something like a web service or remoting between the
two... That would very much suck IMHO, and do nothing good but add more
latency (ok, that ain't good either ;)). If one machine doesn't have
enough power, you can load balance between 2+ machines that have both the
DAL+BLL on them (all the middleware).

Between DAL & BLL, you could pass several things... Some people pass
DataSets, others pass DataReaders, or you can use some translation
layer/class between them to translate it to business objects at that
point... It seems to be a matter of preferences really.
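
For illustration, such a translation class might look roughly like this
sketch (the Client class, the DAL's table name, and the column names are
made-up assumptions):

Imports System.Collections.Generic
Imports System.Data

' Hypothetical Client business object used only for illustration.
Public Class Client
    Public Id As Integer
    Public Name As String
    Public Email As String
End Class

' Translation class between the DAL and the BLL: it turns the DataSet
' coming back from the DAL into business objects.
Public Class ClientTranslator
    Public Shared Function ToClients(ByVal ds As DataSet) As List(Of Client)
        Dim result As New List(Of Client)
        ' Table and column names ("Clients", "ClientId", ...) are assumptions.
        For Each row As DataRow In ds.Tables("Clients").Rows
            Dim c As New Client
            c.Id = CInt(row("ClientId"))
            c.Name = CStr(row("Name"))
            c.Email = CStr(row("Email"))
            result.Add(c)
        Next
        Return result
    End Function
End Class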

As for building SQL, that would be the DAL's function, but it shouldn't
have to build very much... Most queries should be hardcoded really...
The only queries I usually generate are the complex ones - those for
complex searches or custom paging and such. Most queries (plain CRUD
operations on the DB) are static (hand coded or generated by some
automation tool).
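
For example, a typical hand-coded (or code-generated) DAL read method with a
static, parameterized query could be sketched like this (connection handling,
table, and column names are assumptions):

Imports System.Data
Imports System.Data.SqlClient

Public Class ClientDal
    Private ReadOnly _connectionString As String

    Public Sub New(ByVal connectionString As String)
        _connectionString = connectionString
    End Sub

    ' Plain CRUD read: the SQL is static and parameterized, nothing is
    ' generated on the fly.
    Public Function GetClientById(ByVal clientId As Integer) As DataSet
        Dim ds As New DataSet
        Using cn As New SqlConnection(_connectionString)
            Dim cmd As New SqlCommand( _
                "SELECT ClientId, Name, Email FROM Clients WHERE ClientId = @ClientId", cn)
            cmd.Parameters.AddWithValue("@ClientId", clientId)
            Dim da As New SqlDataAdapter(cmd)
            da.Fill(ds, "Clients")
        End Using
        Return ds
    End Function
End Class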

Whether it's a web app or a desktop app, the n-tier principles are
mainly the same, you just tend to use/connect to the middleware
differently. If you search on codeproject.com I'm sure you'll find
several n-tier dev articles.
 

CMM

Don't listen to that other poster. I'll give you a really simple example of
why programmers in the last five years have lost their minds.... and have
forgotten all the lessons a lot of us learned in the '90s.

What developers call "tiers" nowadays, I call simple "encapsulation".... NOT
TIERS. Somewhere a mass hypnosis has taken root across the development world
concerning this.

Here's an example:

1) Suppose your DAL is on the client machine like the other user recommends
(making it a 2-tiered App). You have a dataset where you've changed 10 rows.
Calling Update on a DataAdapter with this dataset will result in your update
stored proc (or update sql) being called 10 times! That's 10 network round
trips.... across a busy 100-megabit network!
2) Even more likely, you've changed one record in one table and one record in
3 other related child tables (Employee--EmployeeAddresses--etc.). That's 4 round
trips. Your app is now a VERY CHATTY application. A network's nightmare.
3) In a true N-Tier structure (a 3-tiered App where the DAL is on another
machine) there's only ONE round trip with your DAL.
4) Now, your DAL and DB servers are connected via a 1-gigabit(!)
connection....(this is common)...and connected to each other via a dedicated
switch (so their conversations don't swamp the network the way your old
2-tiered rinky dink app did).
5) ...Or even better are on the SAME server. The round trips don't matter AT
ALL.
6) Now imagine that your client app needs to run on a simple VPN with an
office overseas. Your old 2-tiered rinky dink app is TOTALLY UNUSABLE and
not much better than your employees using an Access MDB on a shared workgroup
server a la Windows 3.1.

Developers think that because they put their data access in separate files
that they're code warriors and doing something special. In fact, they're not
doing much at all except achieving nice encapsulation (always a good
practice!). They're creating simple 2-tiered apps that people were coding in
1996 and suffering later for it.

P.S.
ADO.NET 2.0 with SQL Server 2005 introduces "batch updating" which addresses
some of these concerns. But there are HUGE caveats with batch updating that
render it almost useless (like returning IDs from INSERTs, etc.) and almost
always require a "rescan" of the affected tables... rendering moot any
performance gains from batch updating. Again, TRUE N-TIER
development is the answer here.

P.P.S.
You can use remoted objects quite easily.... (and building your own barebones
remoting host is super-duper easy... but, if you need object pooling and
stuff like that you'll need to run them in IIS or Windows Server Component
Services).

Dim o As IContactsService = DirectCast( _
    Activator.GetObject(GetType(IContactsService), serverUri), IContactsService)

where IContactsService is an Interface in a lightweight DLL that exists on
the server and client. Optionally you can put the server components directly
on the client.
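
For reference, a barebones host along those lines might be sketched like this
(the ContactsService class implementing IContactsService, the port, and the
URI are assumptions):

Imports System
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp

Module RemotingHost
    Sub Main()
        ' Listen for remoting calls on TCP port 8085 (port is arbitrary;
        ' the Boolean is the .NET 2.0 ensureSecurity flag).
        ChannelServices.RegisterChannel(New TcpChannel(8085), False)

        ' Expose the hypothetical ContactsService class (which implements
        ' IContactsService) as a stateless SingleCall object: a fresh
        ' instance handles every client call.
        RemotingConfiguration.RegisterWellKnownServiceType( _
            GetType(ContactsService), "ContactsService.rem", _
            WellKnownObjectMode.SingleCall)

        Console.WriteLine("Remoting host running. Press Enter to stop.")
        Console.ReadLine()
    End Sub
End Module

With this sketch, the client's serverUri would be something like
tcp://yourserver:8085/ContactsService.rem.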

Check out these links
-http://www.thinktecture.com/Resources/RemotingFAQ/default.html
-http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/introremoting.asp
 

CMM

I disagree with almost everything you state.
See below for my inline comments:

john smith said:
I don't see why you'd want to have DAL and BLL on 2 separate computers.
You'd have to use something like a web service or remoting between the
two... That would very much suck IMHO, and do nothing good but add more
latency (ok, that ain't good either ;)) If one machine doesn't have enough
power, you can load balance between 2+ machines that has both the DAL+BLL
on them (all the middleware).

See my other post. It's all about round trips... and not even about
load-balancing and "high-end stuff." Even a minor app with about 20 users
benefits from a true 3-Tiered design (3-Tiered meaning the DAL is on another
machine).

See my reply to the original poster for more.
Between DAL & BLL, you could pass several things... Some people pass
DataSets, others pass DataReaders,

Passing DataReaders is horrible (pre .NET 2.0, anyway). Do I need to explain
why? Do people actually still do this?
or you can use some translation layer/class between them to translate it to
business objects at that point... It seems to be a matter of preferences
really.

Yes. Datasets can behave like business entities if you want them to.
Spending time with ORM is such a waste of time when Typed Datasets do
anything and everything you could hope to accomplish. I like the "Document"
paradigm... where service classes act ON the dataset (validators, data
access, etc) and the dataset is basically dumb. It's clean and modular. I
can change a "validation service" without ever touching the dataset class or
the client application.
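
A rough sketch of that paradigm, with hypothetical names (IContactValidator,
the Contacts table, and its columns are assumptions):

Imports System.Collections.Generic
Imports System.Data

Public Interface IContactValidator
    Function Validate(ByVal contacts As DataSet) As List(Of String)
End Interface

' A "validation service" that acts ON the dataset; the dataset stays dumb.
Public Class BasicContactValidator
    Implements IContactValidator

    Public Function Validate(ByVal contacts As DataSet) As List(Of String) _
        Implements IContactValidator.Validate
        Dim errors As New List(Of String)
        For Each row As DataRow In contacts.Tables("Contacts").Rows
            If CStr(row("LastName")) = "" Then
                errors.Add("Contact " & row("ContactId").ToString() & ": last name is required.")
            End If
        Next
        Return errors
    End Function
End Class

' Swapping in a stricter validator later just means a new IContactValidator
' implementation - neither the dataset class nor the client app changes.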
As for building SQL, that would be the DAL's function, but it shouldn't
have to build very much... Most queries should be hardcoded really..

Why?
Will someone please explain (and defend!) this fairly common supposition?

The only queries I usually generate are the complex ones - those for
complex searches or custom paging and such.

Seems to me that these exact types of queries are the ONLY queries that
actually benefit from stored procedures / functions (because of compilation
and query plans and optimization and stuff like that).

But this....
Most queries (plain CRUD operations on the DB) are static (hand coded or
generated by some automation tool).

is retarded. Wrapping up *every* little INSERT/UPDATE/simple SELECT in a
stored proc is retarded. It's not that it's "wrong"... I'm not against it and
in fact I even like it (it seems intuitively "orderly")....it's just that 1)
it doesn't accomplish much and 2) it's totally expensive and unmanageable in
the long run unless you have a whole dedicated DBA team. It's not worth the
effort for the little payoff that it provides.

Whether it's a web app or a desktop app, the n-tier principles are mainly
the same, you just tend to use/connect to the middleware differently.

Not true. In n-Tier the middleware is stateless. This is a big design
difference. However, invocation is not all that different and it sounds
scarier than it really is (ooohh remoting host exe-- what is that? I'm
scared.).
 

Cor Ligthert [MVP]

CMM,

Although we completely agree about what you write about encapsulation and
mixing it up with tiers, in my opinion your calculation is not correct, and
neither is your claim that a real physical tier is needed.

To communicate with the middle tier on that separate computer, you still have
to talk to that middle tier. But now the data has to travel twice: once over
your so-called 1 Gb line and, just as before, over your 100 Mb line in your
example. The client computer (which is not a terminal) needs the same amount
of information and has the same amount of updates.

Therefore the only thing introduced is an extra piece of data communication:
the transport of the data over your 1 Gb line.

For me, multi-tier is right in the terminal-application situation (not a
Windows client emulating one). In the Windows Forms situation I see no
advantages, while this terminal-style 3-tier is already done by a classic ASP
or ASP.NET application, where the browser functions more or less as a terminal
and the application as the middle tier.

By the way, before 1996 multi-tier was probably more common, because it is a
very good practice on real terminal-based computers such as Unix and other
mainframes.

Just my thought,

Cor
 

CMM

I know what you're saying... but I think you're missing my point...

Talking about desktop WindowsForms apps here...

You're off in your description of the "double-communication" (though I see
what you mean). You have to think about how Ethernet works. In a 2-tiered
app, four updates sent to the database BROADCAST eight (!) trips across YOUR
ENTIRE LAN. This affects EVERY SINGLE user's workstation in your LAN. Isn't
it better for you to just send *one* message and let DAL and DB talk to each
other isolated and do their little 4-command conversation?

You have to take a step back and think about this... The client workstation
and the LAN-in-general are used for other things not just *your app.*
There's web browsing going on, e-mailing being done, and Joe in the next
cubicle downloading the VS2005 4 GB ISO file from msdn.microsoft.com. I'd
rather have a bunch of INSERT/UPDATE statements happen between the DAL Server
and the DB Server (with the DAL on the same machine as the DB or through a
dedicated switch, so the LAN at large is not affected).

I think it is. In almost every single situation.

[Workstation]
|
<dataset with changes>
|
[App Server (DAL/Validation)]
| | | |
<update statements>
| | | |
[Database Server]


P.S. I don't advocate 3-tiered remoting between ASP.NET and the DB. In this
case the Web Server and the DB Server should be hooked (and switched)
together and communication between them is isolated and super-fast (if
they're not on the same machine to begin with, which a lot of people do too!).
The LAN caveats that I describe do not apply in this set-up as they're
already Server-to-Server.
 

CMM

To clarify further.... I'm talking about "round trips" here. Their effect on
your LAN. Your LAN's effect on your app's round trips. And, the fact that a
3-tiered design solves this neatly. I agree that the same amount of data has
to pass between an "extra" layer and that seems inefficient. The savings are
realized because your DAL and DB communicate separate from your LAN and the
trouble of "round trips" is vaporized.

If you have a small office with 10 people each connected to their own
dedicated port on a switch... then my argument does not apply (though I
could argue that it still does because of web browsing, e-mail, etc.). But, if
your company grows (it could happen!... if you're lucky!... it's happened to
me!) you're soon stuck with a badly performing 2-tiered app that needs to be
rewritten. It should have been written properly from the beginning
(3-tiered)! ;-)


CMM said:
<snip>
 

Cor Ligthert [MVP]

CMM,

I see that you have thought about this well. However, I am trying to get you
out of a loop that, in my opinion, you are in (or maybe I am the one in it,
and you can show me that).

What does it change whether I communicate, over whatever line, with a
middle-tier data server or directly with the end database server?

I already wrote that the information needed and the updates sent stay the same
for the client whichever of those two it talks to, and that they go over the
same line, however small that line is and whatever protocol is used to detect
whether the data is for the right client.

You use your 1 Gb line for probably the smallest amount of data, and it is
even a clean line, so probably no more than 2 Mb of it is used, assuming you
don't use it for things such as backup, replication, etc.

In your picture

[Workstations] <dataset> Or whatever information transportcontainer
||||||||||||||||||||||
[App Server (DAL/Validation)]
| |
<retrieve/update statements>
| |
[Database Server]

Or tell me what I am missing?

Cor

CMM said:
<snip>
 

john smith

CMM said:
I disagree with almost everything you state.
See below for my inline comments:

I'll agree to disagree :) Not everybody can agree on everything, that's
perfectly fine with me. However, I think you mostly misunderstood what I
said.
See my other post. It's all about round trips... and not even about
load-balancing and "high-end stuff." Even a minor app with about 20 users
benefits from a true 3-Tiered design (3-Tiered meaning the DAL is another
machine).

I do use true 3-tiered designs. Yes, obviously there are benefits. You
fail to mention something: the DAL is on another machine than ??? And
where do you assume that ??? is? It sounds to me like you have your BLL
in the client itself (there are no mentions of 2 app servers anywhere in
your other posts - one for DAL and one for BLL). And BTW 3-tiered can
mean a LOT of different things to a lot of people... I've seen MVPs
argue over this just last week...
See my reply to the original poster for more.

Passing DataReaders is horrible (pre .NET 2.0, anyway). Do I need to explain
why? Do people actually still do this?

I didn't say it was my preferred method or anything (nor recommended it)
- just that some people use it. The odd times where I use it, it doesn't
go straight to the BLL either; there's a "helper/translation class" that
turns it into Business Objects in between (an extra layer). 99.5% of
what I do nowadays is v2.0 :)
Yes. Datasets can behave like busines entities if you wanted them to.
Spending time with ORM is such a waste of time when Typed Datasets do
anything and everything you could hope to accomplish. I like the "Document"
paradigm... where service classes act ON the dataset (validators, data
access, etc) and the dataset is basically dumb. It's clean and modular. I
can change a "validation service" without ever touching the dataset class or
the client application.

That's still a matter of preference. There are TONS of people who are
REALLY into various ORM tools and would very much disagree with you (be
it NHibernate or others).
Why?
Will someone please explain (and defend!) this fairly common supposition?

Why not use hardcoded queries for most simple stuff? You'd rather build
some kind of complex on-the-fly query generator instead? Code generation
tools with some good templates generate 95% of this stuff very well
(I never find bugs in it either). I don't see why you'd want to have to
even bother generating them dynamically. I'm not sure what your point is
really.

Seems to me that these exact type of queries are the ONLY queries that
actually benefit from stored procedures / functions (because of compliation
and query plans and optimization and stuff like that).

Actually, after much toying around with various sprocs (and reading
other people's in-depth articles and similar tests on this, and a lot of
chat with our DBA), most such queries (in our apps at least) DON'T
benefit from that, mainly because they can't get cached (some things
can't take params or are too complex and must be generated by a sproc
then use sp_executeSql... long story, not even going there) - exactly
the inverse of what you said...
But this....


is retarded. Wrapping up *every* little INSERT/UPDATE simple SELECT in a
stored proc is retarded. It's not that its "wrong"... I'm not against it and
in fact I even like it (it seems untuitively "orderly")....it's just that 1)
it doesn't accomplish much and 2) it's totally expensive and unmanageable in
the long run unless you have a whole dedicated DBA team. It's not worth the
effort for the little payoff that it provides.

Who's talking about sprocs here? (that'd be you...) I'm talking about
static/hardcoded parameterized queries in the DAL (again, all generated
by some codegen - no work at all)
Not true. In n-Tier the middleware is stateless. This is a big design
difference. However, invocation is not all that different and it sounds
scarier than it really is (ooohh remoting host exe-- what is that? I'm
scared.).

Who said anything about the middleware NOT being stateless? I don't see
what you're trying to say here... You're making assumptions or something
again...
 

john smith

CMM said:
Don't listen to that other poster. I'll give you a really simple example on
why programmers in the last five years have lost their minds.... and have
forgotten all the lessons a lot of us learned in the 90's.

Yeah, ignore me based on the fact that you've misunderstood ~100% of what I
said? Okie... If you say so...
What developers call "tiers" nowadays, I call simple "encapsulation".... NOT
TIERS. Somewhere a mass hypnosis has taken root across the development world
concerning this.

Here's an example:

1) Supposse your DAL is on the client machine like the other user recommends
(making it a 2-tiered App). You have a dataset where you've changed 10 rows.

When did I EVER recommend putting it on the client (hint: NEVER)? This
would be a 2-layer app, but it's not what I recommended whatsoever...
 

CMM

Cor Ligthert said:
CMM,

I see that you have thought about this well. However, I am trying to get you
out of a loop that, in my opinion, you are in (or maybe I am the one in it,
and you can show me that).

What does it change whether I communicate, over whatever line, with a
middle-tier data server or directly with the end database server?

When you communicate with the middle tier you're sending a
dataset-with-changes in one network call over the LAN. This translates into
exactly two broadcasts (1 there, and 1 back.... a round trip) no matter how
many records are in the dataset.

vs.

When you communicate with the data server directly you're sending sql
insert/update commands over several network calls. For instance, submitting
10 changed rows causes 20 broadcasts across your LAN.

It's as simple as that. Coupled with the fact that your LAN is used for
many other things (web, e-mail, etc.) - not just your app - I think the first
solution (3-tiered) is always the more efficient one. It doesn't make much
difference with just a few users... but it also doesn't hurt much either!...
and it makes your app a lot more robust and easier to maintain in the
long-run.
 

CMM

Also, to correct your diagram...

In a 3-tiered design....

[Workstation(s)] <dataset> Or whatever information transportcontainer
|||||||||||||||||||||| (LAN)
[App Server (DAL/Validation)] <retrieve/update statements>
|||||||||||||||||||||||||||||||||||||||||||||||| (NOT-LAN)
[Database Server]

Also, if the DAL was on the same machine as the workstations (2-tiered
design) the diagram changes to this:

[Workstation(s) /w DAL] <retrieve/update statements>
|||||||||||||||||||||||||||||||||||||||||||||||| (LAN)
[Database Server]

Communication with the DAL doesn't get diagrammed because you're just passing
a simple reference to the Dataset to the DAL (there's no serialization or
marshalling involved). In fact when you see authors recommend calling
GetChanges() before passing data to the DAL.... this is ONLY for TRUE
n-tiered design... not when your DAL is on the client machine.
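
A sketch of what that looks like from the workstation side, under the
assumption of a shared IContactsService contract with an UpdateContacts
method (both names are made up here):

Imports System
Imports System.Data

' Assumed service contract; in practice this would live in a lightweight DLL
' shared by client and server, as described earlier in the thread.
Public Interface IContactsService
    Function UpdateContacts(ByVal changes As DataSet) As DataSet
End Interface

Public Class ContactsClient
    ' One LAN round trip: only the changed rows are serialized to the
    ' remote DAL/app server.
    Public Sub SubmitChanges(ByVal contactsDataSet As DataSet, ByVal serverUri As String)
        Dim service As IContactsService = DirectCast( _
            Activator.GetObject(GetType(IContactsService), serverUri), IContactsService)

        Dim changes As DataSet = contactsDataSet.GetChanges()
        If changes IsNot Nothing Then
            Dim result As DataSet = service.UpdateContacts(changes)
            contactsDataSet.Merge(result)       ' pick up server-assigned keys, etc.
            contactsDataSet.AcceptChanges()
        End If
    End Sub
End Class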


Cor Ligthert said:
<snip>
 

CMM

Steve Barnett said:
Ok, I've never done n-Tier development and I'm getting confused...

Assuming I have a business layer and a data access layer that may be
running on different machines, how does my business layer know where my
data access layer is running? What is it I have to do in the business
layer that "creates" the data access layer on the other server?

Second, what do I pass between my business layer and my data access layer?
Should I be returning a dataset from the data access layer to the business
layer and letting the business layer convert it in to business objects, or
should my data access layer return business objects?

Your DAL should return datasets to the BL. Your BL should return your
complex entity objects using whatever ORM technique you implement
yourself.... this is A LOT of work.... I personally like to use Typed
Datasets across ALL layers... the BL uses a set of validator classes to
validate and enforce business rules on the dataset. And the DAL uses
DataAdapters to submit dataset changes to the database.

This sort of Document / Validation / Rules / Transform paradigm "acts on" the
dataset.... the dataset is dumb.... it is similar to (XML, XSD enforcement,
XSLT if you know what I mean). Datasets don't even have to look like the
tables in the database at all. Your Contact table has a field labeled
Company_Name_En and another one labeled Company_Name_zh-CN for
globalization.... your Dataset can simply have a CompanyName field. You do
this via the DataAdapter's TableMappings feature, which is exactly what you'd
be doing using your own (super-tedious and messy) ORM code.
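
A rough sketch of that TableMappings idea (connection string handling and the
exact table/column names are assumptions):

Imports System.Data
Imports System.Data.Common
Imports System.Data.SqlClient

Public Class ContactDal
    ' The dataset gets a "Contacts" table with a CompanyName column even
    ' though the physical table and column are named differently.
    Public Function GetContacts(ByVal connectionString As String) As DataSet
        Dim ds As New DataSet
        Using cn As New SqlConnection(connectionString)
            Dim da As New SqlDataAdapter( _
                "SELECT ContactId, Company_Name_En FROM Contact", cn)

            ' Map the physical names to the dataset's names.
            Dim map As DataTableMapping = da.TableMappings.Add("Table", "Contacts")
            map.ColumnMappings.Add("Company_Name_En", "CompanyName")

            da.Fill(ds)
        End Using
        Return ds
    End Function
End Class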

I would also argue that Datasets (Typed Datasets to be precise) function
very well as "entities"... once you realize that Typed Datasets have NOTHING
TO DO with "databases" (they're agnostic... simply a strongly typed array
on steroids) and are just a "collection of entities" and there's no shame,
for instance, in a Dataset containing a single Contact object (row). In fact
Typed Datasets solve a lot of limitations of ORM that take a lot of manual
coding (like implementing Undo, sorting, raising events when values change,
etc etc).
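
A small sketch of a few of those built-in features, using a plain DataTable
standing in for a typed dataset (the LastName column is an assumption):

Imports System
Imports System.Data

Public Class ContactsDocument
    Private ReadOnly _contacts As DataTable

    Public Sub New(ByVal contacts As DataTable)
        _contacts = contacts
        ' Change notification comes for free with datasets.
        AddHandler _contacts.ColumnChanged, AddressOf OnColumnChanged
    End Sub

    Private Sub OnColumnChanged(ByVal sender As Object, ByVal e As DataColumnChangeEventArgs)
        Console.WriteLine("Changed " & e.Column.ColumnName & " to " & Convert.ToString(e.ProposedValue))
    End Sub

    ' "Undo": throw away all edits made since the last AcceptChanges.
    Public Sub Undo()
        _contacts.RejectChanges()
    End Sub

    ' Sorting without touching the underlying rows.
    Public Function SortedByName() As DataView
        Return New DataView(_contacts, "", "LastName ASC", DataViewRowState.CurrentRows)
    End Function
End Class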

However, if you want your entities to have built-in encapsulated rules
(something which I again argue belongs OUTSIDE of the entities) then
full-blown ORM is the way you ought to go (almost never, in my opinion).
For example, assume my business layer deals with "Client" objects. Should
my business layer ask my data access layer to return it a "client" object,
leaving the data access layer to determine what that involves, or does my
business layer ask the data access layer for some information from tables
and then the business layer builds the client object from the returned
dataset(s). If so, who builds the SQL, which I thought would be a DAL
function.

The DAL builds the SQL or it calls an SP with the SQL already in it. Your BL
is responsible for validating the data and enforcing business rules.... and
optionally transforming the data to and from a complex object for the client
to use (see my comments above).
 

CMM

.... and in short, I agree with you that there is little benefit in having
the DAL and BL on separate servers. I thought you were referring to the
client workstations (which should not contain the DAL or BL as is the common
(and lazy) practice).
 

Cor Ligthert [MVP]

CMM,

We are talking about physically separated tiers; why do you switch to Data
Access Layers? With a DAL you are only physically relocating references, and
that was not what we were talking about. It goes a very, very little bit
slower, but the encapsulated design has its advantages. That slowdown will
normally be won back in speed by the fact that the design is better, so there
are no strange things in your programs with data access points scattered
everywhere.

However, we were talking about physically separated tiers; I don't see why
you now bring up the DAL. Your point was that a DAL is not by definition a
physical tier, even though in recent years the two have been discussed as if
they were the same; with that I agreed. Using "DAL" in the discussion now
does not, in my opinion, make things any clearer.

In my idea the process goes like this:

The client sends (using different layers) the validated data rows that have
row-state changes to the database server. There the data rows are processed
and information is sent back to the client. Whether the information was
processed or not is told by a returned value. If there is an error, the only
one who can resolve it is the client.

I don't see where your separate physical middle tier speeds up this process,
seen from the point of view of the LAN.

I know what a LAN is and the behaviour of the different types. I have
implemented almost every type of LAN, from physical thin-coax Ethernet up to
today's wireless Ethernet, and all types such as ARCnet, Token Ring,
collision detect/collision avoid, and broadband. This before you think that I
don't know what that is. To speed things up, I have used software as well as
hardware routers on the collision-based ones.

I am still curious what can be done by your so-called application server (to
be clear, and repeating myself: in a Windows Forms intelligent-client
situation. In a Unix, mainframe, or any other kind of situation with
non-intelligent clients, middle tiers are a must in my opinion).

Cor
 

CMM

Cor Ligthert said:
CMM,

We are talking about physically separated tiers; why do you switch to Data
Access Layers? With a DAL you are only physically relocating references, and
that was not what we were talking about. It goes a very, very little bit
<snip>

Because I'm trying to make the point that the DAL belongs in a physical tier
separated from the client. That's the only point I'm trying to make. I think
I made it.

I'm not talking about anything else (BL, validation, rules, whatever...
which can be anywhere depending on requirements).

I don't really understand the rest of your post. Sorry.
 

Cor Ligthert [MVP]

CMM,

Now I see how you create a lot of LAN traffic:
I'm not talking about anything else (BL, validation, rules, whatever...
which can be anywhere depending on requirements).
In your situation this has to be communicated over the LAN with the client
every time, while when the DAL and business layer are implemented on the
client side there is no extra traffic. You cannot fix errors automatically;
otherwise they could already have been fixed in the client before sending.

Cor
 

CMM

Not sure I understand your post...

Validation and business rules... this is tricky:
In true n-tier you would also divide the business rules and validation
across all three tiers (client, middle, db)... like this:

Simple Validation Layer (on the client)
- Simple validation can happen on the client machine (IsDate,
IsStringTooLong, IsNumberGreaterThanZero, etc etc.). Preferably you would
put this in a layer of its own (layer=classes not tier) and not hardcode it
in the UI. It depends on the requirements and your preferences. I like
encapsulation.

You can also implement some "atomic" business rules that concern the entity
and only the entity... like (Contact.DeliveryMethod=Email And
Contact.Email="") = ERROR.
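
A minimal sketch of such client-side checks (the column names are assumptions):

Imports System.Collections.Generic
Imports System.Data

Public Class ContactRowValidator
    ' Client-side checks that need nothing but the row itself.
    Public Function GetErrors(ByVal contact As DataRow) As List(Of String)
        Dim errors As New List(Of String)

        If Not IsDate(contact("BirthDate")) Then
            errors.Add("Birth date is not a valid date.")
        End If

        ' (Contact.DeliveryMethod = Email And Contact.Email = "") = ERROR
        If CStr(contact("DeliveryMethod")) = "Email" AndAlso CStr(contact("Email")) = "" Then
            errors.Add("Delivery method is e-mail but no e-mail address was entered.")
        End If

        Return errors
    End Function
End Class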

Complex Rules Layer
- Business Rules are enforced in the physical middle tier. Why? Well,
because sometimes (often) they call for extra communication with the DB and
validating against data that the client machine does not have and doesn't
care about! For instance, "Did some *other* change in the database
invalidate this proposed update?" This extra DB communication (round trips!)
should happen between the MiddleTier and the DB and not through the LAN!

Don't you agree?

You have App-A that allows a secretary to edit a contact's information. This
app has no knowledge of a Contact's Magazine Subscriptions (that's handled
by some other app somewhere else). Say the secretary submits a record change
that makes a Contact's Email="". You have a business rule in the MiddleTier
that checks the database's ContactSubscriptions table to make sure the user
hasn't subscribed to any "E-mail subscriptions." Why should the App-A client
have to do this? How can it validate it? Should it query the database for
subscriptions it otherwise doesn't care about??? That's a LAN broadcast!!!!
Why not just put that in the MiddleTier where it belongs! If the contact has
an e-mail subscription, the MiddleTier BL rejects the change (doesn't pass
it along to the DAL).
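
A sketch of that middle-tier rule (table, column, and method names are
assumptions); if it returns False, the BL rejects the change and never calls
the DAL:

Imports System.Data
Imports System.Data.SqlClient

Public Class ContactRules
    Private ReadOnly _connectionString As String

    Public Sub New(ByVal connectionString As String)
        _connectionString = connectionString
    End Sub

    ' This check talks to the DB, so it runs on the app server: the extra
    ' round trip stays between the middle tier and the DB, off the LAN.
    Public Function CanClearEmail(ByVal contactId As Integer) As Boolean
        Using cn As New SqlConnection(_connectionString)
            Dim cmd As New SqlCommand( _
                "SELECT COUNT(*) FROM ContactSubscriptions " & _
                "WHERE ContactId = @ContactId AND DeliveryMethod = 'Email'", cn)
            cmd.Parameters.AddWithValue("@ContactId", contactId)
            cn.Open()
            Return CInt(cmd.ExecuteScalar()) = 0
        End Using
    End Function
End Class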

DB Constraints
- Sometimes these duplicate business rules. There's nothing wrong
with that. Generally data integrity and data quality merit it. Often
though... flexibility dictates that constraints NOT be used to enforce
"business rules" per se.... but rather data-integrity. Of course there's a
grey area here.. and it takes thoughtfulness to think some of this stuff
through. Like, "will there be times when this constraint should not be
enforced?"... if so, it shouldn't be a constraint... it should be a rule in
the middle tier.
 
