MS SQL Server grabbing 5 GB. Is this normal?


Raphael Gomes

So I have a .NET site with a few rather large tables (the three main
ones at about 1 million records each). The hosting setup consists of
some Web Servers running IIS 6, being served by another box running MS
SQL Server 2005.

The web servers run smoothly enough even without much use (the site has
such a multi-server setup because it gets spikes of use near the
holidays), but the SQL Server is coughing and dying about once a day.

The guys at the hosting company showed me logs indicating that the SQL
Server is under heavy load (processor at almost 90%, 5 GB of RAM used).

I'm running SQL Profiler trying to find out if there is a rogue query
causing trouble (I used LINQ for this project), but so far I haven't
seen good benchmarks for SQL Server performance. Are these numbers
"normal" for a setup like this? (A single SQL Server serving multiple
web servers, tables of about 1 million records each, so far used only
for simple selects, inserts and updates.)

Any help would be very much appreciated,
 

Jeroen Mostert

Raphael said:
> So I have a .NET site with a few rather large tables (the three main
> ones at about 1 million records each). The hosting setup consists of
> some Web Servers running IIS 6, being served by another box running MS
> SQL Server 2005.
>
> The web servers run smoothly enough even without much use (the site has
> such a multi-server setup because it gets spikes of use near the
> holidays), but the SQL Server is coughing and dying about once a day.
>
> The guys at the hosting company showed me logs indicating that the SQL
> Server is under heavy load (processor at almost 90%, 5 GB of RAM used).

I'm willing to bet good money that you're either missing critical indexes or
issuing inherently inefficient queries ("SELECT * FROM hugetable") or both.
Heavy processor and memory use are typical of table scans, and scanning a
table of a million records is... inadvisable.
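
For illustration only -- the table and column names below are invented, since
I don't know your schema -- the difference is roughly this:

    -- Reads every column of every row it touches; with no index on
    -- CustomerId it scans the whole million-row table:
    SELECT * FROM dbo.Orders WHERE CustomerId = 42;

    -- Returns only what the page needs, and gives the optimizer a chance
    -- to answer the query from a narrow index instead of the base table:
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = 42;

Note that the column list alone doesn't save you; without an index on
CustomerId both versions still scan.
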
> I'm running SQL Profiler trying to find out if there is a rogue query
> causing trouble (I used LINQ for this project), but so far I haven't
> seen good benchmarks for SQL Server performance.

Check for queries which take an inordinate amount of time (anything over 10
seconds that's issued more than once almost certainly needs tuning) and
queries which take far greater amounts of logical reads than others. Focus
on optimizing these first.
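
If Profiler gets noisy, the plan-cache DMVs that shipped with SQL Server 2005
give roughly the same picture; a quick sketch (adjust the ordering to taste):

    -- Top cached statements by total logical reads.
    -- total_elapsed_time is reported in microseconds.
    SELECT TOP 10
        qs.execution_count,
        qs.total_logical_reads,
        qs.total_elapsed_time / 1000 AS total_elapsed_ms,
        st.text AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;
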
> Are these numbers "normal" for a setup like this? (A single SQL Server
> serving multiple web servers, tables of about 1 million records each,
> so far used only for simple selects, inserts and updates.)

What's "normal" also depends on your hardware and your actual data needs --
what is "simple" to you is not necessarily simple for the server. However,
if your SQL Server consistently eats 90% CPU you're almost certainly doing
*something* wrong -- either the server is way underpowered or your database
design is wrong, with the latter being a lot more likely.

Now, all this said, this has nothing to do with C# so far, .NET and LINQ or
no. A far better newsgroup for questions like this would be
microsoft.public.sqlserver.server. But before you ask questions there,
searching the web for tips on optimizing SQL Server performance is likely to
be even more fruitful.
 

Raphael Gomes

> I'm willing to bet good money that you're either missing critical indexes or
> issuing inherently inefficient queries ("SELECT * FROM hugetable") or both.
> Heavy processor and memory use are typical of table scans, and scanning a
> table of a million records is... inadvisable.

I see... So, if I understand this correctly, I should trim the selects
('SELECT the fields I actually use' instead of 'SELECT *') and, if I
need to search on a field (which I do) that is not the PK, that field
should have an index.
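
For what it's worth, the index I have in mind would look something like this
(the table and column names are placeholders, not my real schema):

    -- Nonclustered index on the non-PK column being searched; INCLUDE
    -- (new in SQL Server 2005) covers the columns the SELECT returns so
    -- the query never has to touch the base table.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId)
        INCLUDE (OrderDate, Total);
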
> Check for queries which take an inordinate amount of time (anything over 10
> seconds that's issued more than once almost certainly needs tuning) and
> queries which take far greater amounts of logical reads than others. Focus
> on optimizing these first.

Good advice! So far all the queries run in a couple of seconds, but
some have about 200x more reads (as seen in SQL Server Profiler) than
others. I'm talking about 7780 reads for some selects versus 3 for
selects on the same table.
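
To double-check those numbers outside Profiler, something like this works
(again, the table and column names are just placeholders):

    -- Reports logical reads per table for each statement that follows.
    SET STATISTICS IO ON;

    SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = 42;   -- the cheap one
    SELECT OrderId, OrderDate FROM dbo.Orders WHERE SomeOtherCol = 7;  -- the expensive one

    SET STATISTICS IO OFF;
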
What's "normal" also depends on your hardware and your actual data needs --
what is "simple" to you is not necessarily simple for the server. However,
if your SQL Server consistently eats 90% CPU you're almost certainly doing
*something* wrong -- either the server is way underpowered or your database
design is wrong, with the latter being a lot more likely.

Probably. I made the mistake of using a lot of out-of-the-box settings,
and then scalability struck back, badly. :p
> Now, all this said, this has nothing to do with C# so far, .NET and LINQ or
> no. A far better newsgroup for questions like this would be
> microsoft.public.sqlserver.server. But before you ask questions there,
> searching the web for tips on optimizing SQL Server performance is likely to
> be even more fruitful.

Indeed! Group found and added. I'll ask there now.

Thanks for your answers. They really helped.
 

Paul

You obviously have a number of options.

1] Scale your DB hardware

2] Identify issues in the software and fix them.

Personally, in your situation I would not be looking at just one or the other.
Business-wise, it is coming up to Easter and you said your servers go mad
during the holidays. Obviously you have scaled in the past, but at the web
server level. My suggestion would be that maybe, just maybe, you are pushing
your SQL Server close to its limits in any case. I would solve the impending
problem by scaling your DB hardware, adding, say, a clustered SQL Server
environment rather than one box. This does two things.

1] Improved performance (well, you would hope)

2] If one box dies, your business can still run, although with SQL crawling,
until the other box comes back up.


Anyway, here is a stab at what is happening with your queries. You did not
mention how the records were getting returned, but here is a guess at why a
SQL query may or may not use an index even though one exists: SQL Server uses
statistics to decide whether to use an index. Say we have a table with
150,000 rows and we select all rows with a status of 'open'; perhaps 5,000
rows have that status, which is about 3%. At anything below roughly 5% the
query optimizer will probably use the index.

However, suppose we select where the status is 'closed'. When we first
released the app the selectivity would be high, so fine, but as we close
orders over time we may end up with more and more; say 15,000 of our records
reach status = 'closed', so 10% of the records are closed. In this scenario
the selectivity is lower, so the optimizer is much more likely to do a table
scan than use the index.

Once you have identified those queries, check the execution plan, and then
run a test to see the selectivity of the values being selected on.
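
As a rough sketch of that test -- the table, column and index names below are
only examples:

    -- How selective is the predicate? (fraction of rows it would return)
    SELECT CAST(SUM(CASE WHEN Status = 'closed' THEN 1 ELSE 0 END) AS float)
               / COUNT(*) AS closed_fraction
    FROM dbo.Orders;

    -- The statistics the optimizer bases its choice on:
    DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_Status');

    -- And the plan itself, to see whether it seeks the index or scans:
    SET SHOWPLAN_TEXT ON;
    GO
    SELECT OrderId FROM dbo.Orders WHERE Status = 'closed';
    GO
    SET SHOWPLAN_TEXT OFF;
    GO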

Hope this helps. DB performance issues can be a nightmare to solve but
generally are rewarding once done.
 
