Need ideas for managing large table...

OLOT

Our office runs a direct marketing campaign in which approximately 200,000 prospects
are mailed a postcard each month. We see about a 5% response rate, and a campaign
can take 120 days to wind down, so we're looking at roughly 800,000 records in
the table at all times, with about 10,000 prospects per month calling in for
information. The table actually performs fairly well. We have a web server
app accessing the table for about 12 users. The problem is that if you try to
import records or run too many queries and reports, performance suffers.

I was thinking that since the majority of the work is being done on 10,000
records, I could benefit from moving the data from the "prospects"
table to a newly created "customer" table as people call in. Ten of the 12
users only access these 10,000 records. (2 operators input their code,
check the caller's address, and then route the calls.)

I can provide much more info if needed. I could really use a fresh set of
eyes to see through this mess and offer the simpler solution that currently
eludes me...

thanks in advance.
 

Jeff Boyce

A few thoughts...

If you are only using 10K records (for the most part) at any one time, do
you have an index on the field(s) used to identify those?
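
If not, an index along these lines might help (I'm guessing at your column
names; use whatever flag or date actually identifies the active records):

    -- hypothetical: index the column that identifies the active 10K
    CREATE INDEX IX_Prospects_CalledIn
        ON Prospects (CalledIn)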

Are you using a SQL-Server back-end for your web db? If so, create a view
on the server that only returns those 10K for "standard" use, and use other
views for other purposes.
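
For instance, a rough sketch (table and column names are guesses on my part):

    -- hypothetical view limiting "standard" use to the active callers
    CREATE VIEW dbo.vwActiveCallers
    AS
    SELECT *
    FROM Prospects
    WHERE CalledIn = 1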

I'd be leery of synchronization issues from copying subsets back/forth.

You didn't mention front-end/back-end, OS, version, etc. For more detailed
responses, try more details...
 

OLOT

If you are only using 10K records (for the most part) at any one time, do
you have an index on the field(s) used to identify those?
Yes.

Are you using a SQL-Server back-end for your web db? If so, create a view
on the server that only returns those 10K for "standard" use, and use other
views for other purposes.

Yes, SQL Server 2000 running on SBS 2003 Premium.
I'd be leery of synchronization issues from copying subsets back/forth.

That's what our web-server programmer said. However, it wouldn't be a back-
and-forth thing. I was thinking of moving them over and never coming back.
The "prospect" table would have entries documenting the previous call-in so
the operator would know they're a repeat caller, but our agents would look
the person up in the new "customer" table. None of the information from
the "customer" table would ever need to be synchronized back to the
"prospects" table. When the campaign is dead, I just delete the record from
"prospects".
You didn't mention front-end/back-end, OS, version, etc. For more detailed
responses, try more details...

I have a tendency to be long-winded... We're running "Resin" on the
front-end and SQL Server 2000 on the back-end. Most of the importing and
reporting is done through an Access 2002 Project.

Daily importing and reporting cannot be done during normal operating hours
because it hinders performance. I just hate maintaining a table with close
to a million records when only 10,000 of them are used on a regular basis.
Obviously I don't want to create something that's going to create a bigger
mess.

Current tables in use:

Lists - ListcodeId is the primary key
Contacts - IdString is the primary key (this is the big one)
OrderInfo - 1-to-1 with Contacts; holds further info on the 10,000 records
mentioned above
Employee - RepId is the primary key; 1-to-many with Contacts

Importing daily records into Contacts is a killer, as is running
queries/reports on Lists stats (multiple queries to generate a report...).
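
One thing I've considered is loading the daily file into a staging table and
moving it across in small batches so the main table stays responsive. Roughly
(all names made up):

    -- rough batching idea for the daily import; names are made up
    SET ROWCOUNT 1000   -- cap each pass at 1,000 rows

    WHILE 1 = 1
    BEGIN
        INSERT INTO Contacts (IdString, FirstName, LastName, ListcodeId)
        SELECT s.IdString, s.FirstName, s.LastName, s.ListcodeId
        FROM ImportStaging s
        WHERE NOT EXISTS (SELECT 1 FROM Contacts c
                          WHERE c.IdString = s.IdString)

        IF @@ROWCOUNT = 0 BREAK
    END

    SET ROWCOUNT 0      -- reset to unlimited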

Any direction would be appreciated...
 

Jeff Boyce

I'm confused...

If you have a SQL-Server back-end, and it is reasonably well-indexed, and
uses views, I'm not sure why an Access front-end for reporting purposes
would put ANY load (OK, so, hardly any) on the server.

Are you saying that the SQL-gurus are telling you to stop, or are you
noticing that the Access reporting front-ends are slow? If the latter, it
may be how you are querying the back-end data, not the speed of the back-end
itself...
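
For example, there can be a world of difference between these two (the
CalledIn column is hypothetical):

    -- drags the whole table across the wire and filters on the client:
    SELECT * FROM Contacts

    -- lets the server use its indexes and return only the active set:
    SELECT IdString, FirstName, LastName
    FROM Contacts
    WHERE CalledIn = 1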

Jeff Boyce
<Access MVP>
 

OLOT

Jeff Boyce said:
I'm confused...

If you have a SQL-Server back-end, and it is reasonably well-indexed, and
uses views, I'm not sure why an Access front-end for reporting purposes
would put ANY load (OK, so, hardly any) on the server.

So am I. I'll research the above and get back to you.
Are you saying that the SQL-gurus are telling you to stop, or are you
noticing that the Access reporting front-ends are slow? If the latter, it
may be how you are querying the back-end data, not the speed of the back-end
itself...

The reporting on my side is fine. But when I import names or run too many
queries on the "contacts" table, SQL Server jumps up in allocated memory
and slows down the response times for all the other users accessing the
back-end through "Resin". (SQL Server sits at, say, 300 MB of memory and
jumps to 480 or 500 MB when I import 15,000 names.) Some of my reports run 5
queries to get the results I need. Running the report multiple times (say,
hourly) can also slow down the response time on "Resin".

The reason I got off on the "new" table tangent is that the 10,000 names
are accessed and used for several months from the moment they call in, and
since they represent orders and such, they're archived for safekeeping.
The remaining 190,000 names will never call in and are never used or
accessed. They just sit there for 120 days in case they call in...
 

Jeff Boyce

I can imagine why downloading 15,000 names/records might slow things down,
but enough to be noticeable?

What kind of server is running your SQL-Server back-end? What kind of
resources (i.e., RAM, HD) does it have? How many other SQL-Server databases
are on it?
 
