Slow query performance after Repair and Compact


MattC

I have an Access 2K db split into FE and BE, but both files reside in the
same directory on the client PC (don't ask me, I didn't write it!!). The
client runs XP, but I'm not sure what SP level it (and Access) are at.

My customer reported that after a regular repair and compact operation on
the BE (which was successful), one particular report went from <1sec to
1min to be generated. No other problems exist (other reports are fine), so
I immediately suspected a problem with the index/es in one or more of the
associated tables.

By drilling down and running all the queries individually, I was able to
find the actual query causing the problem.

By then removing each of the 3 tables in turn, I was able to find the actual
table causing the problem.

I then did a manual compare of the indexes in the current version and the
backup, expecting to see differences - there were none!

I then created a new DB and imported all the tables from the original, but
the problem remains.

I'm sure there's an index problem, but I just can't seem to nail
it (or even just fix it without realising why!) - any suggestions?
 

david epsom dot com dot au

When you do a compact, all of the database statistics are reset,
and all of the query plans are discarded.

When you run a query next, a new query plan is constructed, using
the new database statistics.

Sometimes the new query plan is much worse than the old query
plan, and the query runs much slower.


You should of course construct a new table, copy only the data
from the old table to the new one, move any relationships from
the old table to the new table, delete the old table, and then
rename the new table.

But don't expect it to make any difference.

(david)
 

Matt

Thanks Arvin,

The SUBDATASHEET NAME property was set to [Auto], but setting it to
[None] didn't seem to help either.

I'll check out the other response to my post and then try Tony's site
again - have already dropped him a mail in desperation!

Thanks again,

Matt

================================================================
 

Vitalijus J. Karalius

Open the query in Design view, switch to SQL view, then add and delete a
space (make and undo a trivial change), then save the query and run it. This
will force the query to recompile and re-optimize. Do NOT run it before saving
it after the design change. Hope it helps.
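If the query needs this nudge regularly, the same save can be scripted. A minimal DAO sketch, assuming a saved query named qryProblemReport (a placeholder name, not one from this thread):

```vba
' Re-assigning a QueryDef's SQL (even unchanged) makes Jet discard the
' saved query plan and rebuild it the next time the query runs.
' "qryProblemReport" is a placeholder name.
Dim db As DAO.Database
Dim qd As DAO.QueryDef

Set db = CurrentDb()
Set qd = db.QueryDefs("qryProblemReport")
qd.SQL = qd.SQL     ' save the SQL untouched to force a recompile
Set qd = Nothing
Set db = Nothing
```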


Matt said:
Thanks Arvin,

The SUBDATASHEET NAME property was set to [Auto], but setting it to
[None] didn't seem to help either.

I'll check out the other response to my post and then try Tony's site
again - have already dropped him a mail in desperation!

Thanks again,

Matt

================================================================
My first action would be to open the table in Design View, choose properties
by right-clicking on the titlebar, and make sure that the subdatasheet is
set to [None]. If that doesn't help try some tips at Tony Toews performance
FAQ:

http://www.granite.ab.ca/access/performancefaq.htm
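For what it's worth, the subdatasheet setting can be applied to every table in one pass. A hedged VBA sketch: the SubdatasheetName property doesn't exist on a table until it has been set once, so it may need to be created (error 3270, "Property not found"):

```vba
' Set SubdatasheetName to "[None]" on every non-system table so Access
' doesn't probe related tables whenever a table is opened.
Dim db As DAO.Database
Dim tdf As DAO.TableDef

Set db = CurrentDb()
For Each tdf In db.TableDefs
    If Not tdf.Name Like "MSys*" Then
        On Error Resume Next
        tdf.Properties("SubdatasheetName") = "[None]"
        If Err.Number = 3270 Then   ' property not created yet
            tdf.Properties.Append _
                tdf.CreateProperty("SubdatasheetName", dbText, "[None]")
        End If
        On Error GoTo 0
    End If
Next tdf
```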
 

Matt

Thanks Vitalijus,

I came to exactly that solution today!

I was demonstrating the problem and my troubleshooting to the boss,
which required me to "hardcode" one of the criteria into the queries
(I hadn't been doing this in my previous tests, which clouded the
results and made me think it was a table issue).

After doing all this I removed my changes, saving the queries again,
and ran the report from the form as normal - it appeared in about 2-3
seconds!

Unfortunately though I wasn't documenting the procedure, and it didn't
quite work on the "live" copy - I'm now repeating the procedure at
home to work it out, and will have to repeat it tomorrow...

Out of interest, and if you have time, do you know HOW a repair &
compact could manage to trash the query?

Thanks again,

Matt

Open the query in design view, then SQL then add and then subtract a space
(change and undo the query change) then save the query, then run it. This
will force the query to recompile and optimize. Do NOT run it before saving
it after the design change. Hope it helps.


Matt said:
Thanks Arvin,

The SUBDATASHEET NAME property was set to [Auto], but setting it to
[None] didn't seem to help either.

I'll check out the other response to my post and then try Tony's site
again - have already dropped him a mail in desperation!

Thanks again,

Matt

================================================================
My first action would be to open the table in Design View, choose properties
by right-clicking on the titlebar, and make sure that the subdatasheet is
set to [None]. If that doesn't help try some tips at Tony Toews performance
FAQ:

http://www.granite.ab.ca/access/performancefaq.htm
 

david epsom dot com dot au

Matt said:
Out of interest, and if you have time, do you know HOW a repair &
compact could manage to trash the query?
When you do a compact, all of the database statistics are reset,
and all of the query plans are discarded.

When you run a query next, a new query plan is constructed, using
the new database statistics.

Sometimes the new query plan is much worse than the old query
plan, and the query runs much slower.

(david)

Matt said:
Thanks Vitalijus,

I came to exactly that solution today!

I was demonstrating the problem and my troubleshooting to the boss,
which required me to "hardcode" one of the criteria into the queries
(I hadn't been doing this in my previous tests, which clouded the
results and made me think it was a table issue).

After doing all this I removed my changes, saving the queries again,
and ran the report from the form as normal - it appeared in about 2-3
seconds!

Unfortunately though I wasn't documenting the procedure, and it didn't
quite work on the "live" copy - I'm now repeating the procedure at
home to work it out, and will have to repeat it tomorrow...

Out of interest, and if you have time, do you know HOW a repair &
compact could manage to trash the query?

Thanks again,

Matt

Open the query in design view, then SQL then add and then subtract a space
(change and undo the query change) then save the query, then run it. This
will force the query to recompile and optimize. Do NOT run it before saving
it after the design change. Hope it helps.


Matt said:
Thanks Arvin,

The SUBDATASHEET NAME property was set to [Auto], but setting it to
[None] didn't seem to help either.

I'll check out the other response to my post and then try Tony's site
again - have already dropped him a mail in desperation!

Thanks again,

Matt

================================================================

My first action would be to open the table in Design View, choose properties
by right-clicking on the titlebar, and make sure that the subdatasheet is
set to [None]. If that doesn't help try some tips at Tony Toews performance
FAQ:

http://www.granite.ab.ca/access/performancefaq.htm
 

Matt

So after all that screwing around with queries, I remembered the
results of an earlier test: by replacing the "bad" data file with a
known "good" one, I could toggle the problem... Obviously it's nothing
to do with the front end at all, and must be either the data, or (as I
first thought) a messed up index.

This time I created a make-table query after renaming the suspect
table, thereby copying every row into a new table with a new
structure.

Even without re-creating any indexes or relationships, or even setting
some of the finer field properties such as input masks etc, the report
now ran in around 5 seconds - a huge improvement, and finally I'm
definitely on the right track...
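The make-table step amounts to a single SELECT ... INTO. A sketch with placeholder table names (the thread doesn't give the real ones):

```vba
' Copy every row of the renamed suspect table into a brand-new table.
' tblSuspect_Old and tblSuspect are placeholder names.
Dim db As DAO.Database
Set db = CurrentDb()
db.Execute "SELECT * INTO tblSuspect FROM tblSuspect_Old;", dbFailOnError
```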

Then followed a long period of recreating relationships and indexes,
and setting all the field properties - a test was performed after
EVERY change to see which one might be the problem. I eventually found
that one particular index was causing the whole problem, and simply
removing this index from the original "bad" table also fixed the
problem.
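Removing the index can also be done with one line of DDL rather than through table Design view; again the names are placeholders:

```vba
' Remove the offending index from the suspect table.
' idxLegacyField and tblSuspect are placeholder names.
CurrentDb.Execute "DROP INDEX idxLegacyField ON tblSuspect;", dbFailOnError
```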

I've scanned the table data in that field and every single row is set
to '0' (the default value and the correct type etc - no problems). I'd
have thought that if you can't remove and recreate an index on a
particular field without the problem returning, even in a new copy of
the table, it must be the data?

The customer informed me that this particular field was a legacy item
and had never been used by this company, so leaving the index out was
a perfectly acceptable solution. What took me quite a number of hours
to find eventually took about 30 seconds to fix!

A series of data entry tests were then performed to simulate a few
days' work, and a repair and compact was repeated several times - the
problem didn't return, so I must now assume that it was caused by a
transient condition in the first place and was not necessarily
procedural or otherwise user-related.

Nevertheless, I've edited their maintenance procedure to include a
backup of the data file BEFORE anything is done to it (!)

Thanks for all the advice all, I would have taken a lot longer to nail
this one by myself!

Matt
 
 
