Back End Bloat: Only Happens With Multi Users


PeteCresswell

The app's back end has been bloating badly - seemingly as the result
of an iterative process that creates records and attempts to save
them, trapping the 3022 if there is already a record out there and
merrily moving on to the next record.

I've run it a half-dozen times solo and nada: no bloat.

But the two times I have run it with two people connected to the back
end concurrently, the bloat happens.

We're talking 84 megs ===> 170+ megs.

Could multiple users connected really be a factor?

Intuitively, it doesn't seem so.... but my tests *seem* to say
otherwise.

Here are a few code frags:
--------------------------------------------------
Public Function PaymentAccruals_Update_Security() As Long
4000 DebugStackPush mModuleName & ": PaymentAccruals_Update_Security"
4001 On Error GoTo PaymentAccruals_Update_Security_err
....

4820 With thePaymentAccrualRS
4821 .AddNew
4822 !SecurityID = theSecurityID
4823 !TradingAccountID = tradingAccountRS!TradingAccountID
4824 !PaymentAccrualDate = curAccrualDate
4825 !PaymentAccrualAmount = curAccrual
4826 !CreatedAt = theTimeStamp
4827 !CreatedBy = theUserID
4828 .Update
4829 End With
....

PaymentAccruals_Update_Security_err:
Select Case Err
Case gError_DuplicateValue '3022
Resume Next 'We've already computed this guy and just want to bypass that date
Case Else
BugAlert True, "SecurityID='" & theSecurityID & "', SingleSecuritySwitch='" & theSingleSecuritySwitch & "', arraySize='" & arraySize & "'."
End Select
Resume PaymentAccruals_Update_Security_xit
--------------------------------------------------
 

PeteCresswell

     I have found this to be a really inefficient method.  It invokes the
error handling which is slow.  Also, it may involve adding the record and
then rolling back the addition...

     Try making one of those changes and see if that helps with your bloat issue.

One thing I found was that the recordset being passed to the
routine was opened dbAppendOnly.

Gotta wonder if there's some issue there with adding duplicate records
and the subsequent rollbacks.... multi-users only... but that's way
beyond my level of expertise.


Since the RS was already open, I passed it as a straight-up Dynaset
(no dbAppendOnly) and prefaced the .AddNew with a .FindFirst and a
check of .NoMatch.

This made the bloat go away - and coding-wise it's clearly more of a
Good-Right-And-Holy-Path than just doing a .AddNew/.Update and letting
MS Access sort it out.

All's well that ends well.... I guess......

But it's still bugging me that the problem does not occur with only
one user connected - but occurs with two or more connected.

viz:
------------------------------------------------
4820 With thePaymentAccrualRS
4829 .FindFirst "SecurityID=" & theSecurityID & " AND TradingAccountID=" & tradingAccountRS!TradingAccountID & " AND PaymentAccrualDate=#" & curAccrualDate & "#"

4830 If .NoMatch = True Then
4831 .AddNew
4832 !SecurityID = theSecurityID
4833 !TradingAccountID = tradingAccountRS!TradingAccountID
4834 !PaymentAccrualDate = curAccrualDate
4835 !PaymentAccrualAmount = curAccrual
4836 !CreatedAt = theTimeStamp
4837 !CreatedBy = theUserID
4838 .Update

4840 DoEvents
4841 k = k + 1
4842 'LogTime gLogType_WithinTask, "Record written to table"
4843 'LogTime gLogType_EndTask
4844 End If
4849 End With
-----------------------------------------------
 

Clifford Bass

Hi Pete,

This may work for the most part, but you may have the issue of it not
finding some row that was just added by another user, because that row was
not in the recordset at the time the recordset was opened. That is why I
suggested the open/close method. It will be as up-to-date as you can get and
will find the rows that other users enter. Also, it uses the database's
ability to retrieve exactly the necessary row, using the index(es) and no
more. So if your table has 50K rows and you only need to check against a
few, it does not waste time doing a find across the whole recordset.
Although if the find uses the index(es), that may not be an issue.
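For what it's worth, here is a sketch of that open/close check: a fresh,
filtered recordset per candidate row, so it sees rows other users have
committed since the main RS was opened. The db object and the table name
tblPaymentAccruals are assumptions on my part; the field and variable names
are lifted from Pete's frags:
--------------------------------------------------
Dim rs As DAO.Recordset
'Open a snapshot limited to the one candidate row; Jet can satisfy
'this from the index(es) instead of walking a big dynaset.
Set rs = db.OpenRecordset( _
    "SELECT SecurityID FROM tblPaymentAccruals" & _
    " WHERE SecurityID=" & theSecurityID & _
    " AND TradingAccountID=" & tradingAccountRS!TradingAccountID & _
    " AND PaymentAccrualDate=#" & curAccrualDate & "#", _
    dbOpenSnapshot)
If rs.EOF Then
    'No existing row - safe to .AddNew/.Update on the main recordset
End If
rs.Close
Set rs = Nothing
--------------------------------------------------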

I suspect the reason for the bloat only when there are multiple users
is that it is much more complicated to deal with locking, uncommitted inserts
and updates, rollbacks and probably other things. With multiple users it
probably has to record all types of additional information about the
transaction and then when it fails, it has to "undo" it all, probably by
writing even more information into the database. My guess.

Clifford Bass
 

Tony Toews [MVP]

PeteCresswell said:
The app's back end has been bloating badly - seemingly as the result
of an iterative process that creates records and attempts to save
them, trapping the 3022 if there is already a record out there and
merrily moving on to the next record.

I've run it a half-dozen times solo and nada: no bloat.

But the two times I have run it with two people connected to the back
end concurrently, the bloat happens.

We're talking 84 megs ===> 170+ megs.

Could multiple users connected really be a factor?

Yes, I've seen this behavior in comparing A97/Jet 3.5 with A2000/Jet
4.0. I suspect MS decided, when there are multiple users on the BE
MDB, to insert new records in their own individual 4 kb pages rather
than stuffing as many as possible into each page. I suspect this
makes the indexes and locking much easier to deal with.

Tony
 

david

And my theory? As we now know, Windows 2000 shipped with
SMB multi-user locking bugs that corrupted Access databases.
My pet theory is that pre-release, the Access 2000 people were
desperately stripping back any multi-user locking feature that they
could get rid of, in what was ultimately a misdirected effort to avoid
the corruption problems we sometimes saw when it was released.

(david)
 

David W. Fenton

PeteCresswell said:
The app's back end has been bloating badly - seemingly as the
result of an iterative process that creates records and attempts
to save them, trapping the 3022 if there is already a record out
there and merrily moving on to the next record.

The solution is quite easy:

Don't attempt to append records that already exist.

That is, test before you append. This could be done in the source
SQL, with an outer join so you only append records where the other
side of the join to the destination table is empty, or by some
other method (which would likely be less efficient).

But the point is that there's no question that error 3022 creates
bloat and it always has done so.

So, avoid error 3022.
 
P

PeteCresswell

David W. Fenton said:
The solution is quite easy:

Don't attempt to append records that already exist.

That is, test before you append. This could be done in the source
SQL, with an outer join so you only append records where the other
side of the join to the destination table is empty, or by some
other method (which would likely be less efficient).

But the point is that there's no question that error 3022 creates
bloat and it always has done so.

So, avoid error 3022.

Well, between you, Tony, David, and Clifford I guess that pretty much
nails it.

I'll just (Ha!.... "Just"...Riiiiights) rewrite the routine so that
it creates a work table instead of attempting to insert directly into
the prod table.

Then I'll run a query that appends from work to prod - but only
records where there is no prod rec existing.

(I tried checking each record before inserting, but it ate up too much
time.)
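
A sketch of that append-from-work-to-prod query in Access SQL: only rows
with no match on the prod side survive the Is Null test, so error 3022
never fires. The table names tblWorkAccruals and tblPaymentAccruals are
assumptions; the field names come from the frags above:
--------------------------------------------------
INSERT INTO tblPaymentAccruals
    (SecurityID, TradingAccountID, PaymentAccrualDate,
     PaymentAccrualAmount, CreatedAt, CreatedBy)
SELECT w.SecurityID, w.TradingAccountID, w.PaymentAccrualDate,
       w.PaymentAccrualAmount, w.CreatedAt, w.CreatedBy
FROM tblWorkAccruals AS w
LEFT JOIN tblPaymentAccruals AS p
    ON (w.SecurityID = p.SecurityID)
   AND (w.TradingAccountID = p.TradingAccountID)
   AND (w.PaymentAccrualDate = p.PaymentAccrualDate)
WHERE p.SecurityID Is Null;
--------------------------------------------------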
 