After splitting, terrible performance!

David W. Fenton

I don't have any control over the antivirus software. I will email
someone about scanning for MDB files, but I doubt if my desires
will have much influence.

I don't believe that's the issue it once was. There was a time when
Norton AV was very bad at handling (and even corrupting) MDB files,
but they fixed their code.

These days most antivirus software scans *all* files rather than
working from a list of file types, so to exclude MDBs you would
usually have to turn off scan-everything mode, switch to a file-type
list, and remove MD? from it. But that leaves a lot of files
unscanned, which can be dangerous.

I don't think this is a major issue any longer; set it aside until
the other likely causes have been ruled out.

In my opinion, your performance issues have little to do with the
environment you're operating in and have everything to do with the
non-optimized design of your app. Two things you need to do:

1. make sure that each user is not constantly attempting to create
and delete the LDB file on the server. Do this by maintaining a
connection to the back end database at all times. Two methods for
this:

a. keep a hidden form open at all times bound to a table in the
back end.

b. open a database variable pointing to the back end when your app
opens and close it when the app exits.

2. in all of your application's forms, retrieve the smallest number
of records at a time necessary to serve the needs of the user. This
means that you will have no forms bound to the full recordset. Yes,
this goes against all the Access tutorials and examples, but it's
the reality for a properly-performing multi-user networked
application.
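
A minimal sketch of method (b), assuming DAO; the module layout, procedure names, and back-end path are placeholders, not anything from the original posts:

```vba
' Method (b): hold an open Database object on the back end for the
' life of the app. The UNC path below is a placeholder.
Option Explicit

Private dbBackEnd As DAO.Database

Public Sub OpenPersistentConnection()
    ' Call once at startup (e.g. from the Switchboard's Open event)
    If dbBackEnd Is Nothing Then
        Set dbBackEnd = DBEngine.OpenDatabase("\\Server\Share\BackEnd.mdb")
    End If
End Sub

Public Sub ClosePersistentConnection()
    ' Call at shutdown (e.g. from the Switchboard's Close event)
    If Not dbBackEnd Is Nothing Then
        dbBackEnd.Close
        Set dbBackEnd = Nothing
    End If
End Sub
```

With the connection held open, the LDB file on the server is created once per session instead of being created and deleted on every table access.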
 
David W. Fenton

I must deploy on Tuesday, so unless by noon
tomorrow I can get the performance up I must go back to the
unsplit database and I will curse Access for any further problems.
It just shouldn't be this difficult.

The fault is your own for designing your application in a
configuration different from the one in which you intended to deploy
it.

I have never created an unsplit application. Ever. From the
beginning, all my apps, once they are past the basic prototyping
stage, are split.

My bet is that this is the practice of nearly all professional
Access developers.

It's simply foolish to blame on Access an error in your development
process.
 
David W. Fenton

Yes. The Switchboard allows the user to open modal forms in
sequence while the Switchboard form itself stays always open.

Then the Switchboard form is a good candidate for initializing and
cleaning up (on exit) your persistent connection.
 
'69 Camaro

Hi, David.
Er, that would slow down performance, not speed it up.

I'm counting the time it takes a user to rekey the record because it was
locked on the same data page as another user's edited record -- and perhaps
I should include the overhead of making the phone call to me to complain
that he got a record lock message as well ;-) -- as being slower than
general database operations for row level locking.
Also, if the MDB was converted from pre-Jet 4, it can never use
row-level locking.

You know, this thought has never occurred to me. What has your experience
been with converted Access 97 databases? Is the lack of the row level
locking due to the database being enabled in Jet 4.0 first? (I know that
ends up being a security risk if security is implemented.) Or are the data
pages configured so that row level locking is impossible? Perhaps we should
have imported everything into a new Jet 4.0 database file instead of
converting it from Access 97, which is how most of us did it way back when.

HTH.
Gunny

See http://www.QBuilt.com for all your database needs.
See http://www.Access.QBuilt.com for Microsoft Access tips and tutorials.
Blogs: www.DataDevilDog.BlogSpot.com, www.DatabaseTips.BlogSpot.com
http://www.Access.QBuilt.com/html/expert_contributors2.html for contact
info.
 
David W. Fenton

I'm counting the time it takes a user to rekey the record because
it was locked on the same data page as another user's edited
record -- and perhaps I should include the overhead of making the
phone call to me to complain that he got a record lock message as
well ;-) -- as being slower than general database operations for
row level locking.

I was counting the lost CPU cycles that come from having to track at
the row-level instead of the page level. I don't like throwing away
all that performance all the time just to save time occasionally.

Of course, I don't have any apps with concurrency issues using just
page-level optimistic locking. Maybe I'm lucky?
You know, this thought has never occurred to me. What has your
experience been with converted Access 97 databases? Is the lack
of the row level locking due to the database being enabled in Jet
4.0 first? (I know that ends up being a security risk if security
is implemented.) Or are the data pages configured so that row
level locking is impossible? Perhaps we should have imported
everything into a new Jet 4.0 database file instead of converting
it from Access 97, which is how most of us did it way back when.

If you want row-level locking, you have to create a new database
from scratch. I've never wanted that, so always just converted. On
the other hand, once
an app has been through several development cycles I almost always
recreate it to clean out the crud anyway, so most of my A2K and
higher apps are now native, rather than converted.
 
David Portwood

I unchecked the Name AutoCorrect option ("Track name AutoCorrect
info") and ran Tom's code to change the SubdatasheetName property to
"[None]" for all tables, and discovered a dramatic performance
improvement in the split database. I don't know to what extent either or
both of these contributed. And for what it's worth, I also tried running the
split database with and without a persistent connection to the backend and
didn't see any difference either way.
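
For reference, a loop along these lines is the kind of thing Tom's code does (this is a reconstruction, not his actual code): set SubdatasheetName to "[None]" on every non-system table.

```vba
' Reconstruction: force the SubdatasheetName property to "[None]" on
' every non-system table, so Jet stops probing for subdatasheet data.
Public Sub TurnOffSubdatasheets()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim prp As DAO.Property

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If (tdf.Attributes And dbSystemObject) = 0 Then
            On Error Resume Next
            tdf.Properties("SubdatasheetName").Value = "[None]"
            If Err.Number <> 0 Then   ' property doesn't exist yet; create it
                Err.Clear
                Set prp = tdf.CreateProperty("SubdatasheetName", dbText, "[None]")
                tdf.Properties.Append prp
            End If
            On Error GoTo 0
        End If
    Next tdf
End Sub
```

Run it against the back end so the setting travels with the tables rather than the front end's links.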

The forms are still taking perhaps a bit longer to open than I'd like, but
this is not really a concern. The big problem was that it was taking two
seconds or more to save a record during the data entry process. Now,
however, a new record is being saved immediately, far as I can tell. I'm
satisfied now with the performance of the split database and that is what
I'll be deploying.

Wickerath mentioned some other things to check into, and I will, since today
I received a stay of execution regarding my rollout deadline. One of the
sups decided to go on vacation for two weeks and we're going to wait for his
return.

I'm using Access 2000. Don't know how my performance-related adventures
would relate to other versions.

Thank you all for your help.
 
Tony Toews [MVP]

David Portwood said:
And for what it's worth, I also tried running the
split database with and without a persistent connection to the backend and
didn't see any difference either way.

You will notice the big difference here when you have a second user in
the database.
I'm using Access 2000. Don't know how my performance-related adventures
would relate to other versions.

Same for A2000, A2002 and A2003. I assume same for A2007.

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Tony's Microsoft Access Blog - http://msmvps.com/blogs/access/
 
'69 Camaro

Hi, David.
I don't have any control over the antivirus software. I will email someone
about scanning for MDB files, but I doubt if my desires will have much
influence.

It probably isn't a factor in your environment, but when it is a factor,
it's a significant performance drain. We just had someone a couple of weeks
ago complain about rotten performance, and it turned out that the problem
was caused by the workstation's antivirus software scanning for MDB files.
This was much more common years ago, but it continues to be a performance
issue, although it's fairly rare these days. I wouldn't worry about it
unless the other significant performance factors have already been
eliminated, and you're still having performance problems. In your case, I
can see a later post of yours listing the steps you took from Tom
Wickerath's article that improved the performance to an acceptable degree.
Isn't this the same as record level locking?

Yes. In relational database terminology, a row is a record is a tuple. But
don't call it a record in the client/server database newsgroups, or someone
will surely "correct" you. ;-)
Where can I set this property?

Select the Tools -> Options... menu and select the "Advanced" tab on the
Options dialog window, then mark the "Open databases using record level
locking" check box.
Also, I was just reading that using DAO to open recordsets doesn't allow
for "row level" locking (at least in A2000). I am currently using DAO
methods, of course.

Use ADO to open the first connection to the database with row level locking,
and keep the connection open. All other subsequent connections from other
users will use row level locking, too. When the last user is out, close the
original ADO connection.
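
A sketch of that first-connection approach; the back-end path is a placeholder, and the "Jet OLEDB:Database Locking Mode" property follows Microsoft's documented provider-specific settings for the Jet 4.0 OLE DB provider:

```vba
' Open the FIRST connection to the back end with row-level locking.
' Jet connections opened while this one stays alive inherit the mode.
Private cnFirst As ADODB.Connection

Public Sub OpenRowLevelConnection()
    Set cnFirst = New ADODB.Connection
    cnFirst.Provider = "Microsoft.Jet.OLEDB.4.0"
    ' 1 = row-level locking; must be set before the connection opens
    cnFirst.Properties("Jet OLEDB:Database Locking Mode") = 1
    cnFirst.Open "Data Source=\\Server\Share\BackEnd.mdb"   ' placeholder path
End Sub

Public Sub CloseRowLevelConnection()
    ' Close only after the last user is out, per the advice above
    If Not cnFirst Is Nothing Then
        If cnFirst.State = adStateOpen Then cnFirst.Close
        Set cnFirst = Nothing
    End If
End Sub
```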
I must deploy on Tuesday, so unless by noon tomorrow I can get the
performance up I must go back to the unsplit database and I will curse
Access for any further problems.

I see that you were successful from your later post. Excellent!
It just shouldn't be this difficult.

If one takes the easy road and accepts all of the defaults, then it will be
difficult later to find what's slowing the database down. Fortunately, you
now have a list of items (mostly these bad-choice defaults) to avoid and
steps to take for your future database development successes.
Thanks for your help, Gunny.

You're welcome.

HTH.
Gunny

 
'69 Camaro

Hi, David.
I don't think you're right about resolving the permissions on each
folder above, as there's a setting for "traverse directory
permissions" (or something like that) that allows you to give access
to a file in a subdirectory to user groups that are prevented from
seeing files in the parent folder of that subdirectory.

One has to hope that the Windows network administrator set that setting, and
generally, unless they're told to or they're fairly experienced, they don't
do it.
I also don't think there's any problem with using *local* long
pathnames with spaces. It's on the server shares that it matters.

It's not that much of a performance drain on the local file system, but
you're right that it makes a difference in speed when traversing the network
server's paths and file name. But why make needless calls to Windows API
functions? It takes extra time. That's why it's on my list of speed
enhancers, although some are much more effective than others.
I also wonder, based your comments above, where exactly you put your
front end files on workstations.

In C:\UserData or D:\UserData.
You have to choose a place on your
local workstations where users have full permissions, and the only
place that this is the case with default Windows permissions (since
Win2K) is in the documents and settings folder.

When the UserData directory is created, the appropriate permissions are
granted.
I really think the concern over the My Documents location is
unwarranted.

When we tested it, we found performance increases by not using such a long
path to the database directory and ensuring that the 8.3 DOS naming
convention was used. It was small on the local workstation compared to the
performance increases on the network server, but every bit helps when one
wants to increase speed. And it benefited those who used shortcuts
to open secure databases, because the 255-character limit was being
hit due to long paths for both the database file and the MDW file.

HTH.
Gunny

 
'69 Camaro

Hi, David.
I was counting the lost CPU cycles that come from having to track at
the row-level instead of the page level. I don't like throwing away
all that performance all the time just to save time occasionally.

Excellent point. But the users don't seem to notice the slowdown from the
row level locking. They do, however, notice when they get blocked from
saving the record they just keyed in and have the uncomfortable option of
either overwriting the other record or rekeying their own. They would
rather take the latter option, but this is what they complain about that
slows them down: retyping that data input.
Of course, I don't have any apps that concurrency issues using just
page-level optimistic locking. Maybe I'm lucky?

I don't think it's luck. It's skill -- and being able to talk the customer
into only showing the necessary records on a form, instead of all of them --
which I haven't always been successful doing.
If you want row-level locking, you have to create a new one.

I wish I'd known that. We didn't do that until (or if) we ran into
problems, and then we created a new database file and imported all objects
into it. Or when they needed a new design and we cut out certain features,
we'd create a new database file and only import the objects with the needed
functionality and we'd go from there. Eventually, they all became Jet 4.0
databases through creation, not conversion, but it took a good while.
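
A hypothetical sketch of that import-into-a-new-file step, run from the freshly created (native Jet 4.0) database. The old file's path is a placeholder, and this moves only tables; queries, forms, and the rest would need the same loop with the matching acQuery/acForm object types.

```vba
' Pull every non-system table from the old converted MDB into the
' current, natively created Jet 4.0 database. OLD_PATH is a placeholder.
Public Sub ImportTablesFromOldFile()
    Dim dbOld As DAO.Database
    Dim tdf As DAO.TableDef
    Const OLD_PATH As String = "C:\Data\OldConverted.mdb"

    Set dbOld = DBEngine.OpenDatabase(OLD_PATH)
    For Each tdf In dbOld.TableDefs
        If (tdf.Attributes And dbSystemObject) = 0 Then
            DoCmd.TransferDatabase acImport, "Microsoft Access", _
                OLD_PATH, acTable, tdf.Name, tdf.Name
        End If
    Next tdf
    dbOld.Close
End Sub
```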

HTH.
Gunny

 
'69 Camaro

Hi, David.
The big problem was that it was taking two seconds or more to save a
record during the data entry process. Now, however, a new record is being
saved immediately, far as I can tell.

More than likely, it was the elimination of the subdatasheets that improved
this operation's performance.
Thank you all for your help.

You're welcome. Glad you sorted out the major problem with performance.

HTH.
Gunny

 
David W. Fenton

One has to hope that the Windows network administrator set that
setting, and generally, unless they're told to or they're fairly
experienced, they don't do it.

It's the DEFAULT, and has been since at least Windows 2000 (which
was the first version of NT to tighten up permissions on system and
application space), so I think it's *very* unlikely to be turned
off.
It's not that much of a performance drain on the local file
system, but you're right that it makes a difference in speed when
traversing the network server's paths and file name. But why make
needless calls to Windows API functions? It takes extra time.
That's why it's on my list of speed enhancers, although some are
much more effective than others.


In C:\UserData or D:\UserData.

Well, on C: that would mean setting up special permissions, and the
point of the profile is that the user has full control over it by
default. There's no telling what the permissions are on your D:
drive, but most people have only one volume in their workstation, so
the C: drive is the only option. Storing the front end in the
profile is the correct approach, in my opinion. In fact, I think
it's the only logical approach if you're running LUA, as everyone
should be (and practically no one ever does).
When the UserData directory is created, the appropriate
permissions are granted.

By whom? By hand? By a script running as an admin? There is no
reason that you should have to run as an admin just to install an
Access application (assuming that Access itself is already
installed).
When we tested it, we found performance increases by not using
such a long path to the database directory and ensuring that the
8.3 DOS naming convention was used.

What about having your shortcut to the database use the short
filename version of the long path? Does that still exhibit the
performance hit?
It was small on the local workstation compared to the
performance increases on the network server, but every bit helps
when one wants to increase speed. And it benefitted those who
used shortcuts to open secure databases, because the 255 character
limit was being hit due to long paths for both the database file
and the MDW file.

Well, surely you could use JUNCTION to set up a symbolic link to
take care of that.
 
David W. Fenton

Excellent point. But the users don't seem to notice the slowdown
from the row level locking. They do, however, notice when they
get blocked from saving the record they just keyed in and have the
uncomfortable option of either overwriting the other record or
rekeying their own. They would rather take the latter option, but
this is what they complain about that slows them down: retyping
that data input.

As I said, this simply doesn't happen in my apps. I haven't done
anything special in engineering them.

But I did experiment way back when with trying to write my own write
conflict resolution routines (using the Form's Error event), and it
was not a success. So I just went with optimistic locking from then
on and have not had any users complain about encountering write
conflicts. If they'd been happening I'd have heard about it (my
clients aren't shy about complaining!).
I don't think it's luck. It's skill -- and being able to talk the
customer into only showing the necessary records on a form,
instead of all of them -- which I haven't always been successful
doing.

Yes, but page-level locking really does increase the concurrency
problem, as it's a larger block of data that's being locked.

I do have several replicated apps, and that surely improves
concurrency, as the random Autonumber will mean that
consecutively-added records will end up on nonconsecutive data pages
in most cases. I've often wondered whether random Autonumbers might
be a solution to some page-level concurrency problems.
I wish I'd known that. We didn't do that until (or if) we ran
into problems, and then we created a new database file and
imported all objects into it. Or when they needed a new design
and we cut out certain features, we'd create a new database file
and only import the objects with the needed functionality and we'd
go from there. Eventually, they all became Jet 4.0 databases
through creation, not conversion, but it took a good while.

That's the way my apps have been, too, but I wasn't wanting to have
row-level locking.
 
