cost of object instantiation in .NET 2.0

John A Grandy

I'm trying to get a decent idea of the relative performance of three types
of implementations of data-access classes in ASP.NET 2.0.

I believe this boils down to a more basic question regarding the cost of
object instantiation in .NET 2.0. It becomes especially relevant to web-app
data-access classes due to the wide variety of database queries that page
requests may require. Caching is not appropriate for most of these datasets,
so many queries must be performed, implying many data-access object
instantiations.

Types of implementations (each sketched in C# after the list):

1. non-singleton classes with non-static methods where all methods involved
in processing a page request must instantiate various data-access class
objects (possibly multiple times)

2. singleton classes that implement a getInstance() method that provides a
pre-existing instance of the class if one exists; if an instance does not
exist, getInstance() instantiates a class object and assigns it to an
internal static reference

3. classes where all data-provision methods in the interface are implemented
as static methods
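
For concreteness, here is a rough sketch of the three shapes in C# 2.0.
The ProductData* names and the DataSet-returning GetProducts() method are
purely illustrative placeholders:

using System.Data;

// 1. Plain instance class: callers new one up wherever needed.
public class ProductDataInstance
{
    public DataSet GetProducts()
    {
        // ... ADO.NET query goes here ...
        return new DataSet();
    }
}

// 2. Singleton: one shared instance behind GetInstance().
public class ProductDataSingleton
{
    private static ProductDataSingleton instance;
    private static readonly object syncRoot = new object();

    private ProductDataSingleton() { }

    public static ProductDataSingleton GetInstance()
    {
        // Lazy creation; the lock matters because ASP.NET serves
        // requests on multiple threads.
        lock (syncRoot)
        {
            if (instance == null)
                instance = new ProductDataSingleton();
            return instance;
        }
    }

    public DataSet GetProducts() { return new DataSet(); }
}

// 3. All-static: no instantiation at the call site at all.
// ('static class' is new in C# 2.0.)
public static class ProductDataStatic
{
    public static DataSet GetProducts() { return new DataSet(); }
}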


In the course of writing a scalable ASP.NET 2.0 web-app, has anyone done any
benchmarking (either formal or informal) ... or has a general sense of the
relative performance of these three implementations?

Does anyone know of any whitepapers or other studies that might be available
(either MS or external)?
 
Michael C

John A Grandy said:
I'm trying to get a decent idea of the relative performance of three types
of implementations of data-access classes in ASP.NET 2.0.

Wouldn't it be irrelevant compared to retrieving data from a database and
populating a dataset?

Michael
 
John A Grandy

I'm trying to examine all avenues for performance optimization. I'm
examining sprocs and ADO.NET code as well as usage of data-access classes.

Cost of instantiation of data-access classes might be small compared with
other costs ... but there is always potential for optimization to provide
value at any app layer.
 
Barry Kelly

John A Grandy said:
I'm trying to get a decent idea of the relative performance of three types
of implementations of data-access classes in ASP.NET 2.0.

If your code is accessing a database, then database access code
(including the network round-trip I assume it's going to involve) is
going to dominate your time, and eliminating as many roundtrips to the
database as possible will be your first, most fruitful, source of
performance increases. That would probably entail bulking together
requests if you can't do any caching.
John A Grandy said:
... so many queries must be performed, implying many data-access object
instantiations.

Database calls (implying a network round-trip) are so many times more
expensive than object instantiations that the instantiation cost is in
the realm of a rounding error. I think you need to measure the two costs
before trying too hard to optimize object instantiation.
John A Grandy said:
In the course of writing a scalable ASP.NET 2.0 web-app, has anyone done any
benchmarking (either formal or informal) ... or has a general sense of the
relative performance of these three implementations?

With respect to memory management on .NET, probably one of the most
important things is to try to achieve a low percentage of time spent on
GC, which in turn means reducing the rate of generation 2 garbage
collections. Keeping gen-2 GCs low implies being aware of how the GC
works, of how large your objects are, and how long you are keeping them
in memory - but you have to measure GC% before you know this is where
your problem is.
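
One rough way to watch the gen-2 rate is GC.CollectionCount, which is
available from .NET 2.0 on. A sketch; the allocation loop is just a
stand-in for a representative batch of page-request work:

using System;

class GcCheck
{
    static void Main()
    {
        int gen2Before = GC.CollectionCount(2);

        // Stand-in workload. Short-lived buffers like these should
        // die in gen-0, so the gen-2 delta ought to stay near zero.
        for (int i = 0; i < 1000000; i++)
        {
            byte[] buffer = new byte[1024];
        }

        int gen2After = GC.CollectionCount(2);
        Console.WriteLine("Gen-2 collections during run: {0}",
                          gen2After - gen2Before);

        // For GC% itself, watch the ".NET CLR Memory" / "% Time in GC"
        // performance counter against the worker process in perfmon.
    }
}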

The most important thing is to measure: benchmark a 'typical' request
scenario and find out if your time is really going into object
instantiations. When you start talking about the database, I seriously
doubt that micro-optimizing allocations is going to be of much benefit.
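
A minimal way to put numbers on that, using System.Diagnostics.Stopwatch
(new in .NET 2.0). The connection string and the empty ProductData class
are placeholders to be adapted to your environment:

using System;
using System.Data.SqlClient;
using System.Diagnostics;

class ProductData { }  // stand-in for a data-access class

class MeasureCosts
{
    // Placeholder; point this at a real database to compare.
    const string ConnStr =
        "Server=.;Database=MyDb;Integrated Security=SSPI";

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++)
        {
            ProductData pd = new ProductData();
        }
        sw.Stop();
        Console.WriteLine("1,000,000 instantiations: {0} ms",
                          sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        using (SqlConnection conn = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand("SELECT 1", conn))
        {
            conn.Open();
            cmd.ExecuteScalar();  // a single round-trip
        }
        sw.Stop();
        Console.WriteLine("1 database round-trip: {0} ms",
                          sw.ElapsedMilliseconds);
    }
}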

Allocating an object in .NET is the cost of incrementing a pointer and
running whatever code is in the constructor - i.e. not much cost at all
and most of it is under your control through the constructor. Objects
get more expensive if you attach them to structures which are going to
live for a while, because that gives them a chance to:

1) Be promoted out of gen-0 into gen-1 (and thus usually out of the CPU
cache)
2) or (worst-case scenario) have a mid-life crisis (live until gen-2 and
then become irrelevant)

The shorter your objects' lives with respect to the overall allocation
rate, the better. Sometimes it's cheaper to allocate and initialize a
new object than it is to keep a cached one around, depending on how big
it is and how costly it is to initialize - but measure.

If it's easy to restructure your code not to use allocations, then it
might be worthwhile (but measure first!), but if it means bending over
backwards not to allocate, then it almost certainly isn't worth it. You
should look at your algorithms and actual code running on the hottest
paths first, IMHO.

-- Barry
 
Michael C

John A Grandy said:
I'm trying to examine all avenues for performance optimization. I'm
examining sprocs and ADO.NET code as well as usage of data-access classes.

Cost of instantiation of data-access classes might be small compared with
other costs ... but there is always potential for optimization to provide
value at any app layer.

The difference would have to be 1000 to 1 at an absolute minimum. Do
whatever makes more sense in your code.

Michael
 
Nicholas Paldino [.NET/C# MVP]

John,

You are going to have to instantiate the objects no matter what, it
would seem. The model, as it exists now, does not allow for multithreaded
access to the same objects. For example, you are not going to share the
same connection between threads, nor are you going to share the same data
adapter or command between threads.
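
A sketch of what that means in practice. The OrderData class, the
connection-string parameter, and the Orders table are hypothetical:

using System.Data;
using System.Data.SqlClient;

public class OrderData
{
    // DON'T: a static shared SqlConnection would be hit by many
    // request threads at once, and it is not thread safe.
    // private static SqlConnection sharedConn = new SqlConnection(...);

    // DO: create the connection and adapter per call, so each
    // request thread works with its own objects.
    public DataTable GetOrders(string connStr)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlDataAdapter da = new SqlDataAdapter(
                   "SELECT OrderId, Total FROM Orders", conn))
        {
            DataTable table = new DataTable();
            da.Fill(table);  // Fill opens and closes the connection itself
            return table;
        }
    }
}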
 
Göran Andersson

The difference in performance would really be minimal between the types
of implementation. I would avoid the second alternative, though. As
ASP.NET is a multi-threaded environment, a singleton class would
introduce more problems than it would solve, as you would have to add
locking to make it thread safe.
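
If the singleton route were taken anyway, one common way to sidestep
call-site locking is to let the CLR's type initializer do the work. A
sketch; any instance state would still need its own locking under
concurrent requests:

public sealed class ProductDataSingleton
{
    // The CLR runs the type initializer exactly once, so no explicit
    // lock is needed just to create the instance.
    private static readonly ProductDataSingleton instance =
        new ProductDataSingleton();

    private ProductDataSingleton() { }

    public static ProductDataSingleton GetInstance()
    {
        return instance;
    }

    // Note: any mutable fields on this shared instance would still
    // have to be locked when touched from concurrent request threads.
}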

Keep it simple, and concentrate on stability. Make sure that the
implementation always lets you handle the database connection gracefully.
Leaking connection objects will cause far worse performance problems than
creating a few extra objects ever will.
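
The usual way to guarantee that in ADO.NET is a 'using' block. A sketch
with a hypothetical CountCustomers helper and Customers table:

using System.Data.SqlClient;

public static class CustomerData
{
    public static int CountCustomers(string connStr)
    {
        // 'using' guarantees the connection is closed (returned to
        // the pool) even if the query throws, so nothing leaks.
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
                   "SELECT COUNT(*) FROM Customers", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}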

It doesn't matter how highly optimized the code is, if it doesn't work. :)
 
