IIS Web Application Deteriorates

beparker

We have a .NET Framework 3.5 application that deteriorates quickly once the
w3wp.exe App Pool process reaches a certain memory usage. The users'
browsers start clocking (the loading indicator just spins), and the page
they are trying to load either never loads or loads partially and never
finishes.

The application is in its own App Pool. The App Pool is set to recycle at
2:00am and when used memory reaches 650 MB. The request queue limit is set
to 4000. Everything else is left at the defaults.
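
For reference, here is a rough sketch of how those settings can be read back
out of the IIS 6 metabase from a small utility (the app pool name is a
placeholder, and it needs a reference to System.DirectoryServices):

    // Rough sketch: read the app pool settings from the IIS 6 metabase via
    // the ADSI provider. "MyAppPool" is a placeholder for our pool's name.
    using System;
    using System.DirectoryServices;

    class ShowAppPoolSettings
    {
        static void Main()
        {
            using (var pool = new DirectoryEntry("IIS://localhost/W3SVC/AppPools/MyAppPool"))
            {
                // Private-memory recycle threshold, stored in KB (650 MB = 665600 KB)
                Console.WriteLine("PeriodicRestartPrivateMemory (KB): {0}",
                    pool.Properties["PeriodicRestartPrivateMemory"].Value);

                // Scheduled recycle times, e.g. "02:00"
                foreach (var time in pool.Properties["PeriodicRestartSchedule"])
                    Console.WriteLine("PeriodicRestartSchedule: {0}", time);

                // Request queue limit (ours is 4000)
                Console.WriteLine("AppPoolQueueLength: {0}",
                    pool.Properties["AppPoolQueueLength"].Value);
            }
        }
    }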

This is a low-use application. We typically have about 5-15 people
logged in at once with maybe 2-4 of them actively working and
requesting pages at one time.

The database server is MSSQL 2000. We're using connection pooling and
are not setting a maximum number of connections.
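
A stripped-down sketch of the data access pattern we use (server, database,
and query names here are just placeholders): connections live inside using
blocks, so they should be going back to the pool right away.

    // Stripped-down sketch of our data-access pattern (names are placeholders).
    // Pooling is on by default in SqlClient, and since we do not set
    // Max Pool Size it stays at the default of 100.
    using System.Data;
    using System.Data.SqlClient;

    public static class OrderData
    {
        const string ConnStr =
            "Data Source=DBSERVER;Initial Catalog=AppDb;Integrated Security=SSPI;Pooling=true";

        public static DataTable GetOrders()
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand("SELECT * FROM Orders", conn))
            using (var adapter = new SqlDataAdapter(cmd))
            {
                var table = new DataTable();
                adapter.Fill(table);   // Fill opens and closes the connection itself
                return table;
            }   // Dispose returns the connection to the pool, even on exceptions
        }
    }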

The web server is a newly installed Windows Server 2003 machine.

Short history:
The web application was on our old production server (also Windows Server
2003) and exhibited the same behavior. It would work fine until the
w3wp.exe process was somewhere between 650 and 800 MB, and then users would
experience page clocking and partial loads. Unfortunately, the app pool
process was hitting that mark anywhere from once every two days to twice a
day and had to be recycled.

There had been issues with that server in the past, so we stood up a new
web server, set up IIS and the App Pool, and moved the site over to it.
Unfortunately, the web site behaved the same way on the new server.

Since it was obvious that the web site had some sort of memory leak, we
looked through the code and identified possible leak situations. That code
was changed and published, and we saw an immediate improvement in how much
memory was being leaked. But the issue didn't go away - the app pool
process now climbs to only 250-300 MB before it starts having the same
problems as before. Although the memory ceiling is lower, it isn't reached
any faster, because some of the leaks were plugged. So the code change
helped with the memory leak, but it didn't touch whatever is really causing
the clocking.
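
To give an idea of the kind of leak we were hunting for, here is an
illustrative, made-up example (class and event names are not from our code):
a page that hooks a static event and never unhooks it is kept alive by that
static root, so every request leaks a page instance.

    // Illustrative only - the kind of leak pattern we were hunting for.
    using System;
    using System.Web.UI;

    public static class CacheNotifier
    {
        public static event EventHandler CacheCleared;
    }

    public partial class ReportPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Leak: the static event keeps every ReportPage instance alive
            CacheNotifier.CacheCleared += OnCacheCleared;
        }

        private void OnCacheCleared(object sender, EventArgs e) { /* ... */ }

        // The fix: unhook in Unload so the page can be collected.
        protected void Page_Unload(object sender, EventArgs e)
        {
            CacheNotifier.CacheCleared -= OnCacheCleared;
        }
    }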

If I look at the database connections while the application is running,
there are typically only 1 or 2 coming from this application, and not many
overall on the database server - fewer than 25. There are no artificially
low limits set on the server for the maximum number of connections. During
the "issue" periods the database connections look the same: just 1 or 2
from the application and not many from anywhere else.
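
For reference, this is roughly how I check the connection counts against
the SQL 2000 box (the connection string is a placeholder; the data comes
from master..sysprocesses):

    // Rough sketch of how I check connection counts on the SQL 2000 server
    // (the connection string is a placeholder).
    using System;
    using System.Data.SqlClient;

    class ConnectionCount
    {
        static void Main()
        {
            const string connStr =
                "Data Source=DBSERVER;Initial Catalog=master;Integrated Security=SSPI;";

            const string sql =
                "SELECT program_name, COUNT(*) AS cnt " +
                "FROM master..sysprocesses " +
                "WHERE spid > 50 " +                 // skip system SPIDs
                "GROUP BY program_name";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader[0], reader[1]);
            }
        }
    }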

As much as I was hoping it was a database connectivity issue, I really
don't think it is. I'm focusing back on the .NET code, but I don't see
anything that would cause this problem. The pages people are requesting
when the application has problems are different each time; there doesn't
seem to be a pattern to the requests.
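
One thing I plan to watch the next time it happens is whether requests are
piling up inside ASP.NET rather than in the database. A rough monitoring
sketch using the standard ASP.NET performance counters, run on the web
server itself:

    // Monitoring sketch: watch whether requests are stacking up inside
    // ASP.NET while the clocking is happening.
    using System;
    using System.Diagnostics;
    using System.Threading;

    class WatchAspNetQueues
    {
        static void Main()
        {
            using (var queued  = new PerformanceCounter("ASP.NET", "Requests Queued"))
            using (var current = new PerformanceCounter("ASP.NET", "Requests Current"))
            {
                while (true)
                {
                    Console.WriteLine("{0:T}  queued={1}  current={2}",
                        DateTime.Now, queued.NextValue(), current.NextValue());
                    Thread.Sleep(5000);
                }
            }
        }
    }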

I'd appreciate some ideas on what I should be looking for as the cause
of this issue.

Thanks,
-BEP
 
beparker

Thank you for your suggestion.

Our request queue limit is set to 4000 and traffic on the site is very low.
I'd be surprised if the server was handling requests from more than 5
people at a time even at peak, and each of them only requests a few pages
during a session. We shouldn't be getting anywhere near the 4000 limit.

Aside:
We do still have a memory leak of some sort, but it's not nearly as bad as
it was before. Before the code change, the app pool would grow to over
600 MB before the app became unstable/unresponsive. After some of the leaks
were plugged, the app pool doesn't grow as large or as quickly, but the
application still falls over in about the same amount of time as it did
before. My impression is that the memory leaks aren't what's causing the
instability, although they definitely still need to be fixed.

I'm still looking for anything that would cause the current issue.

-BEP
 
