middle tier recommendations

Param R.

Hi all, we are in the process of architecting a new application that will
have an ASP.NET front end & SQL back end. In the past we have used web
services as a middle-tier solution, but in terms of performance it has not
been up to the mark. Besides, with the latest .NET version there are some
known issues with calling web services (keepalives etc.). What other
middle-tier solutions does .NET have to offer? What is .NET Remoting and how
does it work? Is it similar to DCOM? Does it have to run under IIS? I would
prefer something that is not dependent on IIS. In our solution we will be
passing custom objects back and forth, with the middle tier interacting
with the database.

Any help and guidance here is much appreciated!

thanks!
 
Guest

Remoting is certainly faster than an ASMX web service. You completely lose
the reliance on HTTP, which speeds things up quite a bit. If you want an
SOA-type distributed architecture (web services style), Remoting is a good
option. It is a bit more time-consuming to architect than ASMX, but worth it
for the perf gains.

---

Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

***************************
Think Outside the Box!
***************************
 
Param R.

So Remoting runs independently of IIS? Does it communicate over TCP? What
port? Firewall restrictions? Also, I am a VB programmer. Could you point me
in the direction of some good articles on Remoting for VB?

thanks!
 
Guest

This article is an introduction to .NET Remoting:

http://www.msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/introremoting.asp

You will also find there is a .NET Remoting newsgroup, which is very useful
(thanks to Ken Kolda and Sam Santiago). I would also recommend googling for
Ingo Rammer, who is an expert on .NET Remoting and has written a book on it.

To answer your questions, .NET remoted objects can communicate over HTTP
(hosted in IIS) or TCP (hosted in a Windows service, console app, etc.). You
can even define your own channels. It also provides two built-in formatters,
SOAP and binary, and it is actually the formatter that makes the bigger
difference to performance (again, there is an article on MSDN comparing the
various options).

The TCP channel communicates over the port you specify. Obviously, being
TCP, you will need to configure your firewalls appropriately. One thing to
note: Remoting has no built-in security. When hosted in IIS, you will be
using IIS's security features. Hosting in IIS using the binary formatter
will perform better than using web services. TCP/binary is the ultimate
solution in terms of performance.
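
For illustration, a minimal TCP/binary host in a console app might look
something like this (class name, port, and URI are placeholders, and you
need a reference to System.Runtime.Remoting.dll):

Imports System
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp

' Remoted types must inherit MarshalByRefObject.
Public Class CustomerService
    Inherits MarshalByRefObject

    Public Function GetCustomerName(ByVal id As Integer) As String
        ' Real code would query the database here.
        Return "Customer " & id.ToString()
    End Function
End Class

Module ServerHost
    Sub Main()
        ' The TcpChannel uses the binary formatter by default.
        ChannelServices.RegisterChannel(New TcpChannel(8085))

        ' SingleCall = new object per call; Singleton = one shared object.
        RemotingConfiguration.RegisterWellKnownServiceType( _
            GetType(CustomerService), "CustomerService.rem", _
            WellKnownObjectMode.SingleCall)

        Console.WriteLine("Listening on port 8085. Press Enter to stop.")
        Console.ReadLine()
    End Sub
End Module

A client would then obtain a proxy with something like
CType(Activator.GetObject(GetType(CustomerService),
"tcp://yourserver:8085/CustomerService.rem"), CustomerService). In practice
you would share an interface or base-class assembly with the client rather
than shipping the implementation itself.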

Hope this helps
Dan
 
Nick Malik

What is driving the architecture? What are the key constraints?

Honestly, most applications are fine with an ASP.NET layer that calls a
simple middle layer, written as DLLs, that calls SQL. That said, most
applications have fewer than 100 concurrent users. I'm going to venture a
guess that this doesn't apply to you.

How many servers have you set aside for this application?
How many users do you plan to serve with this application?
What is the nature of their use (continuous use for the business day,
occasional light use, occasional heavy use, receiving a stream of
information)?

In addition, you didn't provide the key constraints that drive the
architecture.
Do you have high uptime requirements (99.9% or better)?
Do you have variable scalability issues (sudden spikes that increase traffic
by an order of magnitude or more for a sustained period)?
Do you need to be able to modify the behavior of the system while it is
running due to the nature of competition in your business?
Do you have existing systems that you need to communicate with? If so, are
these systems designed for real-time communication or do you need to batch
things up?

Without at least a little of this information, my answer would be too vague
to be useful.

As for .NET remoting, it is a useful mechanism for designs that need to
partition the execution of the application onto multiple servers. The
marshalling is far more efficient than with web services, but it is still
marshalling... and if you are sending data sets across a marshalling
boundary, you are probably not designing your interfaces correctly.

An excellent book: Advanced .NET Remoting by Ingo Rammer.

HTH,
---- Nick
 
Param R.

But I guess to set up TCP/binary there would be extra work, in terms of
writing my own listener etc.? Time is also of the essence in this project;
I basically have 3 weeks. Could I start off with running it in IIS and then
move to TCP later if needed?


thanks!
 
Param R.

Nick, to answer your questions:

1. 2 Web Servers & 1 Database
2. Starting out 2000+ users.
3. Continuous business use M-S 8-7.
4. 99.99% (8 a.m. - 10 p.m.)
5. During peak times we could see a sudden spike in usage.
6. YES. Realtime modification is a necessity.
7. Limited communication with existing systems - in realtime.

It is all about speed for this project. If the web pages are slow and the
apps take time to execute, then the business starts losing $$...

thanks!
 
Ben Strackany

Nick -

Along the lines of what Param said -- remoting & web services would be
useful if your middle tier were on its own machine, but IMO the MT should
only be its own box if you are doing some serious CPU-intensive tasks in
your middle tier (which I am assuming you are not) or if you have some other
reason.

So IMO your middle tier should be .NET assemblies/DLLs & be on the web
server (i.e. the same machine as the ASP.NET pages), referenced & called
from ASP.NET just like other assemblies. No web services, no soap, no
remoting. That keeps it simpler, faster, more reliable.
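
As a sketch of what I mean (OrderLogic and GetOpenOrders are stand-ins for
whatever is in your business DLL):

Imports System
Imports System.Web.UI
Imports System.Web.UI.WebControls

Public Class OrdersPage
    Inherits Page

    ' Declared in the .aspx as <asp:DataGrid id="OrderGrid" runat="server" />.
    Protected WithEvents OrderGrid As DataGrid

    Private Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) _
            Handles MyBase.Load
        ' Plain in-process call into the referenced business assembly:
        ' no SOAP, no remoting, no marshalling.
        Dim logic As New OrderLogic
        OrderGrid.DataSource = logic.GetOpenOrders()
        OrderGrid.DataBind()
    End Sub
End Class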

We did the whole 3 physical tier thing years ago & it was not worth it. :)
 
Nick Malik

Hi Param,

1. Do not separate your application into physical tiers. Use logical
tiers. You do not need physical tiers. The cost of marshalling will far
exceed the benefit of multi-processing. SOAP won't help; Remoting won't
help. They are both forms of marshalling. Besides, you don't have an
application server on which to put the remoted component!

2. You are about three machines short in your configuration. If the usage
and uptime are what you say they are, you need one more database server as
failover, one more web server (at least) to support the load, especially to
cope with sudden spikes, and you need a load balancer in front of your web
servers. I hope that your web servers are dual-proc, with 2GB of RAM,
running Windows Server 2003. Your DB servers should be quad-proc running
Windows Server 2003 Datacenter Edition with SQL Server 2000 EE (at least
until Yukon comes out). This version of SQL provides some interesting
possibilities when it comes to failover, which is like insurance... you
complain about the cost until you need it, and then you consider yourself
smart for having bought it.

3. Real-time modification can be done, but it requires some forethought.
Most modifications will require you to stop and restart your system, which
ruins that 99.99% uptime record (four nines over your 8 a.m.-10 p.m. window
works out to about 31 minutes of downtime per year). If you can wait until
after 10pm to make a code change, your app is easier to create and update
because you can simply replace a component, restart IIS, and you are done.
On the other hand, if you can code some rules in the database, you can make
immediate changes by making sure that your app reads the rules periodically
(see the sketch below). (Downside: you won't know which transactions took
place under the old rules and which under the new rules... This little
tidbit is chewing on one of the apps that my team is currently supporting.
Not fun.) The third possibility, and the hardest to do well, is a pluggable
architecture. You have to have some form of reliable communication
mechanism between components if you want to do this (either using
LoadLibrary to bring components into play, or MSMQ to allow messages to
pile up while you replace a component, or using a web service that you can
bring down, replace the component, and bring back up while your application
copes). This is hard stuff. Don't do it if you can avoid it... the bugs
alone can eat your budget.
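
For the rules-in-the-database option, a rough sketch (table, column, and
class names here are hypothetical):

Imports System
Imports System.Data
Imports System.Data.SqlClient

Public Class RuleCache
    Private Shared _rules As DataTable
    Private Shared _loadedAt As DateTime = DateTime.MinValue
    Private Shared _padlock As New Object

    ' Re-read the rules when the cached copy is more than five minutes old,
    ' so a change made in the table takes effect without a restart.
    Public Shared Function GetRules(ByVal connString As String) As DataTable
        SyncLock _padlock
            If _rules Is Nothing OrElse _
               DateTime.Now.Subtract(_loadedAt).TotalMinutes >= 5 Then
                Dim fresh As New DataTable
                Dim da As New SqlDataAdapter( _
                    "SELECT RuleName, RuleValue FROM BusinessRules", _
                    connString)
                da.Fill(fresh)
                _rules = fresh
                _loadedAt = DateTime.Now
            End If
            Return _rules
        End SyncLock
    End Function
End Class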

4. If speed is of the essence, take as much processing out of the critical
path as possible. If there is something that can be done by a service, on
the side, or even in a separate thread, that's better than making the user
wait. Burden the user as little as possible with a complicated user
interface. For speed, you need simple, even if it means that you have
different screens that appear for different people to do similar things.
Tailor the interface to the user's typing styles, mouse usage styles, and
normal points where they would wait for something, and you will gain more
speed than worrying about multi-processing designs.

If your app is a web app, minimize the size of the pages. Reduce the amount
of JavaScript. Don't download huge lists of items for a drop-down control.
Even on a LAN, a big page takes a second to download, and those seconds add
up.

I hope this is helpful.

--- Nick
 
Param R.

Hi Nick, thanks a bunch for the useful info.

1. I forgot to mention I do have a standby database server for failover. The
only thing is that it is a cold standby. We are running Standard Edition.
Currently we cannot afford EE.
2. We use Windows Load Balancing between the 2 web servers. They are single
CPU with 1GB RAM. I am planning on adding 1GB more of RAM. Unfortunately I
do not have a budget for a 3rd box and a load balancer.
3. I agree with you on the pages being "skinny". With the customer base we
are targeting, if the pages are slow, they will go elsewhere. A lot of them
are on dial-up.
4. I can get away with realtime modifications after 10, so I can keep it
simple for now.
5. I know the cost of marshalling is high. I was coming from a "code
re-usability" / plug-n-play standpoint. If tomorrow I need to expose an
interface to an external partner, I can simply write a web service layer
over the business logic/data layer to exchange data? Or, if the growth is
tremendous, then I can go get servers that will just run the middle tier? I
come from the good old days of COM/COM+ where MS was pushing the whole
n-tier philosophy. Does the same apply to .NET 1.1 and the upcoming 2.0?
6. Talking about Server 2003: we just recently upgraded a few of our apps
from 2000 to 2003 and have begun to see significantly slower page loads &
response times. Most of our sites run over SSL and we can't seem to find the
problem. Same hardware. Our network guys are working on it but haven't
struck gold as yet. From reading various posts on these NGs it seems 2003
has some inherent performance issues with IIS.

thanks!
 
Param R.

Another thought I was having is to write a middle-tier database layer that
basically caches records in DataSets and periodically updates the back-end
DB. Basically, write some sort of DB emulator residing entirely in memory.
The question is: is this worth it or not? Am I trying to re-invent the
wheel, and does SQL already do caching for me?

For example, the app requests record with id 566 from table1. My middle
tier checks to see if it is already in the cache and, if not, retrieves it
from the database and stores it. Now the app updates the record, and my
middle tier saves the changes in memory and only updates the database after
"x" mins... The reason this would be useful is that my application
basically has the user go through several data capture screens which modify
several records in tables. The user may go back and forth between screens
until they click "finish & submit"...
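
Roughly what I am picturing (names are made up; the SqlCommandBuilder
generates the update commands, which requires the table to have a primary
key):

Imports System.Data
Imports System.Data.SqlClient

Public Class SessionDataCache
    Private _data As DataSet
    Private _adapter As SqlDataAdapter
    Private _builder As SqlCommandBuilder

    Public Sub New(ByVal connString As String, ByVal customerId As Integer)
        _adapter = New SqlDataAdapter( _
            "SELECT * FROM Table1 WHERE CustomerId = @id", connString)
        _adapter.SelectCommand.Parameters.Add("@id", customerId)
        ' Generates the INSERT/UPDATE/DELETE commands used by Flush().
        _builder = New SqlCommandBuilder(_adapter)
        _data = New DataSet
        _adapter.Fill(_data, "Table1")
    End Sub

    ' The capture screens edit these rows in memory; nothing touches the
    ' database until Flush() is called.
    Public ReadOnly Property Records() As DataTable
        Get
            Return _data.Tables("Table1")
        End Get
    End Property

    ' Called on "finish & submit" (or from a timer every x minutes).
    Public Sub Flush()
        _adapter.Update(_data, "Table1")
    End Sub
End Class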

thanks!
 
Nick Malik

Hi Param,

> 1. I forgot to mention I do have a standby database server for failover.
> The only thing is that it is a cold standby. We are running Standard
> Edition. Currently we cannot afford EE.

Hidden secret: if you can get your hands on the SQL Server SDK from SQL
Server 6.5, there is a little utility in there for "log shipping." It
continues to work under Windows 2000, and it will work just fine for
Standard Edition. This gives you the ability to have a "warm" standby
(manual effort to restart, but restart within 5 minutes). Log Shipping is
built in to SQL Server 2000 EE... basically the same stuff but with much
better U/I... but if you are on the cheap, this will do.

> 2. We use Windows Load Balancing between the 2 web servers. They are
> single CPU with 1GB RAM. I am planning on adding 1GB more of RAM.
> Unfortunately I do not have a budget for a 3rd box and a load balancer.

Depending on how political your organization is, it is often better to ask
for hardware and get the request shot down... that way when it turns out
that your hardware is not sufficient, you can say "not my fault."
Regardless, the cost of one more web server and a load balancer is not that
high. (under 30K total). You could lose customers if the site isn't fast
enough, right? What's the value of those customers? Anyway: you probably
already know the arguments.

> 4. I can get away with realtime modifications after 10, so I can keep it
> simple for now.

Then you can simply stage a change on your test machine and when it is time
to go live, make the change in the middle of the night. No need for
pluggable bits (good for you... time is of the essence).

> 5. I know the cost of marshalling is high. I was coming from a "code
> re-usability" / plug-n-play standpoint. If tomorrow I need to expose an
> interface to an external partner, I can simply write a web service layer
> over the business logic/data layer to exchange data? Or, if the growth is
> tremendous, then I can go get servers that will just run the middle tier?

That depends on the amount of use that interface will get. If your design
is scalable, you can put your app, with DLLs, on any web server without
regard to other web servers. If so, then there will be no problem with
putting the same DLLs on another machine with a different interface (web
services as opposed to ASP.NET). If the interface is not being widely used,
put it on BOTH web servers and balance it, just like a regular web site.
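
As an illustration, exposing the same DLL through an ASMX wrapper could be
as small as this in an .asmx code-behind (PartnerService, OrderLogic, and
the namespace URI are placeholders):

Imports System.Web.Services

<WebService(Namespace:="http://example.com/partner/")> _
Public Class PartnerService
    Inherits System.Web.Services.WebService

    <WebMethod()> _
    Public Function GetOrderStatus(ByVal orderId As Integer) As String
        ' Delegates straight to the same middle-tier DLL the web app uses.
        Dim logic As New OrderLogic
        Return logic.GetStatus(orderId)
    End Function
End Class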

> I come from the good old days of COM/COM+ where MS was pushing the whole
> n-tier philosophy. Does the same apply to .NET 1.1 and the upcoming 2.0?

Still applies, but there will be much more work done in Longhorn to bring
this to a much tighter reality. For now, you don't gain anything from that
approach, so simply distribute your DLLs to each machine. You can still run
them in Enterprise Services if you want to manage security more tightly, or
if you want better transactional support for SQL.

> 6. Talking about Server 2003: we just recently upgraded a few of our apps
> from 2000 to 2003 and have begun to see significantly slower page loads &
> response times. Most of our sites run over SSL and we can't seem to find
> the problem. Same hardware. Our network guys are working on it but
> haven't struck gold as yet. From reading various posts on these NGs it
> seems 2003 has some inherent performance issues with IIS.

The most common reason is running IIS 6 in IIS 5.0 isolation (compatibility)
mode. Some of your apps may stop working when you switch this off, so be
prepared for the possibility that you may need to rewrite some bits, but
when you switch to full IIS 6 worker process isolation mode, it should run
much faster.

Hope this helps,
--- Nick
 
Param R.

All my apps are ASP.NET. How can I switch from IIS 5 compatibility mode to
full IIS 6 mode? Is it a registry setting?

thanks!
 
Alvin Bruney [MVP]

It is an IIS setting, not a registry setting; run inetmgr at the command
prompt to access the snap-in. In the properties of the Web Sites node, on
the Service tab, clear "Run WWW service in IIS 5.0 isolation mode" and
restart IIS.
 