Do Transactions guard against corruption?


Michel Walsh

Also, the 'assumption' you made about the way Jet works will end in a
nightmare: any transaction will have to deal with multiple replication
problems, with concurrent replications, if more than one transaction occurs
at the same time! The developers may not have done exactly what MS SQL Server
6 was doing, and that is fine, but they are aware of 'good' strategies, and
very poor ones. The mechanism you 'presented', while it seems 'easy' at
first glance, drives you toward a worse problem to solve than the initial
problem which was to be solved.



Vanderghast, Access MVP
 

Michel Walsh

Your 'hypothetical' mechanic would have to deal with replication when the
transaction is committed, and that, even on a database not having implemented
the replication mechanic, ***but that is not its only problem***. You are
well aware that a transaction can be embedded into another transaction, so
that second transaction will be … a second file. Take a look at the
complexity of reading data inside that second transaction: is it in the second
file? Too bad if it is not, since you have to check if it is in the file of
the first transaction. Again, too bad if it is not, because then you have
to read the main database file. And that, for each and every record! Imagine
the beauty of the indexing over all that. Add that you have to remember to
delete records, eventually, since you just cannot delete them, after all, in
your transaction 'files'… the commit would have no way to propagate those
deletions to the main db.



A log does not need to imply any form of replication, is much more robust
and direct to the point of the promises of transaction atomicity, does not
increase the complexity of simple reads, and does not require much coding,
at least in comparison with the alternative of the very hypothetical mechanic
you presented.







Vanderghast, Access MVP
 

JString

Because, in spite of my best efforts, my back end is still suffering from
corruption issues. If I can't find anything that fixes the problem, then I'll
have to replace the back end entirely with a different engine.
 

David W. Fenton

in spite of my best efforts, my back end is still suffering from
corruption issues.

Corruption is an environmental issue in all cases. That is,
something in the software or hardware operating environment is
substandard and leads to corruption.

I assume you've read Tony Toews's corruption FAQ? If not, Google and
study it carefully.

Let me also backtrack slightly on the assertion that you can't
programmatically avoid corruption. It is certainly possible to use
inefficient methods that increase your app's exposure to the
possibility of corruption. For instance, if you loop through a DAO
recordset and make changes to individual records instead of using a
SQL statement to do the same changes, you are greatly increasing the
window of time during which a dropped connection could corrupt the database.

But this assumes that you are doing things *wrong*. A properly coded
app will lock the back end records for as short a period of time as
possible for efficiency purposes, but this will also make it safer,
since it means that the vulnerability periods when the back-end
table is in an edit lock will be as short as possible.
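
To make that concrete, here is a minimal sketch of the two approaches; the
table and field names (tblOrders, Status) are hypothetical, not anything from
the poster's app:

Sub SlowRowByRowUpdate()
    ' Row-by-row edits: the back end stays in an editing state for the
    ' whole traversal of the recordset.
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Set db = CurrentDb()
    Set rs = db.OpenRecordset("tblOrders", dbOpenDynaset)
    Do Until rs.EOF
        If rs!Status = "Open" Then
            rs.Edit
            rs!Status = "Closed"
            rs.Update
        End If
        rs.MoveNext
    Loop
    rs.Close
End Sub

Sub FastSetBasedUpdate()
    ' The same change as one SQL statement: locks are held only for as
    ' long as the single statement takes to run.
    CurrentDb().Execute _
        "UPDATE tblOrders SET Status = 'Closed' WHERE Status = 'Open'", _
        dbFailOnError
End Sub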

Some people insist that the only way to do this is to go with an
entirely unbound app, but I think that's ridiculous and have never
built a single unbound app in my nearly 15 years of professional
Access development.
 

David W. Fenton

Using transactions will take longer than not using transactions.
Thus transactions would be slightly more prone to corruption.

Only if the transaction keeps an edit lock on the back end. I don't
believe it does -- I think it keeps a read-only lock, so it wouldn't
actually increase the time the back end is in a vulnerable state.
It's like the difference between looping through a DAO recordset
doing edits record-by-record and using a SQL statement to do the
updates all in one go. Using a transaction would mean that all the
updates happen in one go, and all occur at the data page level. This
means it's going to put the back end in a vulnerable state for a
much shorter period of time than if you did *not* use transactions.

Or, at least, so it seems to me.
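
As a rough sketch of what that looks like in DAO code (the table and column
names are hypothetical), a batch of updates wrapped in a workspace
transaction is applied in one go on commit, or abandoned as a whole on
rollback:

Sub BatchUpdateInTransaction()
    Dim wrk As DAO.Workspace
    Dim db As DAO.Database
    Set wrk = DBEngine.Workspaces(0)
    Set db = wrk.Databases(0)    ' the current database in the default workspace

    On Error GoTo FailOut
    wrk.BeginTrans
    db.Execute "UPDATE tblAccounts SET Balance = Balance - 100 " & _
               "WHERE AccountID = 1", dbFailOnError
    db.Execute "UPDATE tblAccounts SET Balance = Balance + 100 " & _
               "WHERE AccountID = 2", dbFailOnError
    wrk.CommitTrans              ' both updates become visible at once
    Exit Sub

FailOut:
    wrk.Rollback                 ' neither update is applied
    MsgBox "Update failed and was rolled back: " & Err.Description
End Sub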
 

David W. Fenton

Also, the 'assumption' you made about the way Jet works will end
in a nightmare: any transaction will have to deal with multiple
replication

Eh? What does replication have to do with it? Do you mean multiple
users?
problems, with concurrent replications, if more than one
transaction occurs at the same time!

That's managed by data stored in the LDB file, which tells each Jet
user what is available for writing at which time. If it's locked by
another user, then it won't be updatable. So, yes, it's possible
that user 1 starts a transaction that would place a read-only lock on
the tables involved, and a second user's transaction would fail
to commit if that lock was still in place.

All of this happens not through any kind of log, but simply by each
user's installation of Jet reading the LDB file.
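
As a rough, hedged illustration of what the LDB actually holds: as commonly
documented, it is a series of 64-byte slots, each a 32-byte computer name
followed by a 32-byte security (user) name, null-padded. The sketch below
just dumps those pairs; the path is hypothetical, and since slots are not
always cleared promptly, the list can include users who have already
disconnected:

Sub ListLdbEntries()
    Dim ldbPath As String
    Dim buf As String * 64
    Dim fnum As Integer

    ldbPath = "\\server\share\backend.ldb"   ' hypothetical path
    fnum = FreeFile
    Open ldbPath For Binary Access Read Shared As #fnum
    Do While Loc(fnum) < LOF(fnum)
        Get #fnum, , buf
        Debug.Print "Machine: " & TrimNulls(Left$(buf, 32)), _
                    "User: " & TrimNulls(Mid$(buf, 33, 32))
    Loop
    Close #fnum
End Sub

Private Function TrimNulls(ByVal s As String) As String
    Dim p As Long
    p = InStr(s, vbNullChar)
    If p > 0 Then s = Left$(s, p - 1)
    TrimNulls = Trim$(s)
End Function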
The developers may not have done exactly what MS SQL Server 6
was doing, and that is fine, but they are aware of 'good'
strategies, and very poor ones. The mechanism you 'presented',
while it seems 'easy' at first glance, drives you toward a
worse problem to solve than the initial problem which was to be
solved.

I don't know what fantasy world you live in, but my comments are
based on extensive reading in the documentation for Jet.

There is *no* transaction log anywhere in Jet.

There simply cannot be.

Ever.

(and replication doesn't use any kind of log, either, BTW)
 

David W. Fenton

Your 'hypothetical' mechanic would have to deal with replication
when the transaction is committed, and that, even on a database not
having implemented the replication mechanic, ***but that is not
its only problem***. You are well aware that a transaction can be
embedded into another transaction, so that second transaction will
be … a second file. Take a look at the complexity of reading data
inside that second transaction: is it in the second file?

Yes, of course it is. Within the transaction, likely the data is
written to a temp file and loaded into a data structure in memory.
Nothing is written to the real MDB until the transaction has been
successfully completed.
Too bad if it is not, since you have to check if it is in the file
of the first transaction. Again, too bad if it is not, because
then you have to read the main database file. And that, for each
and every record! Imagine the beauty of the indexing over all
that. Add that you have to remember to delete records, eventually,
since you just cannot delete them, after all, in your transaction
'files'… the commit would have no way to propagate those deletions
to the main db.

I think you are basing your comments on how you *wish* Jet worked,
instead of how the documentation for Jet says it works.
A log does not need to imply any form of replication,

I was not the one who introduced the confusing term "replication." I
have no idea what you mean by it, actually.
is much more robust
and direct to the point of the promises of transaction atomicity,
does not increase the complexity of simple reads, and does not
require much coding, at least in comparison with the alternative
of the very hypothetical mechanic you presented.

All of your statements about the desirability of transaction logs are
correct, of course.

They just don't apply in any way to Jet, which does not use them,
either for rollback/commit or for standard operation logging (there
are two kinds of transactions involved in your discussion, only one
of which is supported by Jet).
 

Michel Walsh

Well, you choose to not understand what fits your need. When you modify
two somehow disconnected databases and need to merge them into one (on
commit), that is called replication. Or is that news to you?

How does your 'mechanic' propagate the deleted records from a transaction
with the mechanic you describe? How does it handle the autonumbers of a table
on commit (other users CAN append records to the same table, can't they; they
are not locked out as soon as the transaction also appends a record to the
table)? How does it handle simple reads (and indexes) when embedded
transactions occur, from *within* the embedded (not yet committed)
transactions? You see, it is not so important that you come up with an answer
as it is that your answer does not involve another extra mechanic for case
one, and another special dedicated mechanic for case two, etc. All those
mechanics would have to be coded, while the log mechanic... implies no extra
specific database mechanic beyond what the database itself already delivers:
lock, set, append, delete. The SELECT * FROM table does not need any special
consideration whether you are within a transaction or not, with a log file,
but needs very special ones with the mechanic you described, when executed
within a transaction.


Your basic claim is that since YOU never read anything about Jet having a log,
then NECESSARILY Jet has no log. That is a very weak argument, and it depends
on our predisposition to believe you read the appropriate stuff. Have you even
read anything about CHECK(), or BAND, in the Jet db engine programmer's
handbook? No? So they are just illusions whenever I could eventually use them?


Have you ever read in the same book that the mechanic you described is what
happens with transactions? So your logic is still applicable: YOU have not
read it, so SURELY it does not exist?


I have not read it either, in any official documentation, and while any
piece of knowledge always has a 'degree of certitude' tag associated with
it, it just happens that John L. Viescas has a much better credibility tag
than your claim has, and, as an extra, a (transaction) log makes much more
common sense, strategically and economically to develop, in the business of
writing a database engine, in which Microsoft also has a tag of expertise.





Vanderghast, Access MVP
 

David W. Fenton

Well, you choose to not understand what fits your need. When you
modify two somehow disconnected databases and need to merge them
into one (on commit), that is called replication. Or is that news
to you?

No, it's not news to me at all. But Jet replication does not use
logs either (and this is an area in which I have a great deal of
expertise, in fact).
How does your 'mechanic' propagate the deleted records from a
transaction with the mechanic you describe?

In replication or a multi-user database? In Jet Replication, the
MSysTombstones table is used.
How does it handle the autonumbers of a table on
commit (other users CAN append records to the same table, can't
they; they are not locked out as soon as the transaction also
appends a record to the table)?

In replicated databases, all Autonumbers are converted to random. In
over 10 years of creating replicated applications for my clients,
I've never once encountered a duplicate Autonumber between replicas.
If you have enough inserts and enough replicas for that to be a
problem, there are a number of solutions to ensure
uniqueness of PKs in each replica.

But none of them involves a transaction log, which doesn't exist in
Jet.
How does it handle simple reads (and indexes) when embedded
transactions occur, from *within* the embedded (not yet committed)
transactions?

I already explained that in another post -- it uses the LDB file to
keep track of who has what locked.
You see,
it is not so important that you come up with an answer as it is that
your answer does not involve another extra mechanic for case one, and
another special dedicated mechanic for case two, etc. All those
mechanics would have to be coded, while the log mechanic...

I don't have a clue what you're talking about. I've explained how
Jet works in all cases. You may not like the explanation, but it is,
in fact, the way the documentation for Jet says it works (in regard
to replication, I know it from experience as much as from reading about
it).
implies no extra specific database mechanic beyond what
the database itself already delivers: lock, set, append,
delete. The SELECT * FROM table does not need any special
consideration whether you are within a transaction or not, with a log
file, but needs very special ones with the mechanic you described,
when executed within a transaction.

No, it needs nothing more than a temp work file and locking on the
real data file that is honored by all the users operating on that
data file. This is, of course, accomplished via the LDB file.
Your basic claim is that since YOU never read anything about Jet
having a log, then NECESSARILY Jet has no log.

All the documentation I've ever read about Jet that compares it to a
server database makes a point of the lack of logging in Jet being
one of the crucial differences.
That is a very weak argument, and it depends on our
predisposition to believe you read the appropriate stuff. Have you
even read anything about CHECK(), or BAND, in the Jet db engine
programmer's handbook? No? So they are just illusions whenever I
could eventually use them?

There is one reference to CHECK() in the Jet 3.5 programmer's guide
(MS never published a guide for Jet 4 or later), and it is on p. 75
and says:

Microsoft Jet SQL doesn't support ANSI SQL constructs that set
validation rules or default values (for example, field-level
CHECK or DEFAULT clauses).

It may be that Jet 4 changes that (as a lot of things were
introduced into Jet 4 that make it somewhat more consistent with
server database engines).

There is no reference for BAND in the index, so I would assume it's
not supported in Jet 3.5.

Now, you could argue that everything changed in Jet 4, but that's
ludicrous -- certain things were added to an already-robust database
engine that had been under development for nearly 10 years, but the
basic operation of the engine was not changed.

Since I have the book out, here are some relevant quotations:

USING TRANSACTIONS IN THE MULTIUSER ENVIRONMENT

You can use Microsoft Jet transactions to group updates in units
that can be committed or rolled back as a whole. Because
transactions save updates in a temporary file instead of in real
tables, they also have useful effects in multiuser environments.
(p. 258)
Have you ever read in the same book that the mechanic you
described is what happens with transactions? So your logic is still
applicable: YOU have not read it, so SURELY it does not exist?

I don't have a clue what you're talking about when you impute to me
the invention of "a mechanic." I have explained how Jet works, and
it does not involve a transaction log.
I have not read it either, in any official documentation, and
while any piece of knowledge always has a 'degree of
certitude' tag associated with it, it just happens
that John L. Viescas has a much better credibility tag than
your claim has, and, as an extra, a (transaction) log makes much
more common sense, strategically and economically to develop, in
the business of writing a database engine, in which Microsoft also
has a tag of expertise.

But they didn't do it. There is no transaction log at any level in
Jet. Period. If there *were* such a thing, the Jet documentation
would give a clue as to how to use it (what purpose would it serve
if it couldn't be used for the same purposes as it is used in server
databases?). But there isn't any such documentation, because IT
DOESN'T EXIST.

You are the one who has the obligation here to affirmatively
demonstrate that this phantom Jet transaction log exists. Find some
MS documentation that shows that it exists. Otherwise, you're just
blowing smoke.

I cannot prove it doesn't exist (it's logically impossible to prove
a negative), but you can logically prove that it *does*. Yet, you
won't be able to, because it does *not* exist in Jet.

It may seem to *you* like the only way Jet should have been
designed, but it is not the way Jet *was* designed (way back in the
early 90s).
 

david

Used to be like this. May have changed.

Jet writes transactions to a local temp file, which is a kind of mdb,
which you can call a log if you want. When the transaction is committed,
the temp file/log/temp mdb is written to the data mdb, and the temp
file is deleted.

Jet 2 used a temp file because of memory limitations in Windows 3.1.
It would make more sense to use memory now.

When Windows 95 was introduced, its disk cache did not honour
Windows 3.1 OSflushToDisk commands. This meant that Access
2.0 transactions were not flushed to disk. I THINK that Access
2.0 transactions were automatically flushed to disk, and that the
dbFlushOSCacheWrites option was added to restore this behaviour,
but I could be wrong, it could be that dbFlushOSCacheWrites no
longer works.

I THINK that there is a new Flush To Disk API in XP or VISTA,
(required because there is so much BIT stuff around now) but I
don't know how it works, and I would bet that Jet has not been
updated to use it.

If you are seeing frequent corruption, you need to check that your
server has all its service packs applied. Then check that all your
hardware works correctly. I've gone for years without seeing
(structural) corruption in a BE database.

If you don't have the authority to find and fix the hardware/OS
problem you are experiencing, consider switching to a SQL Server
/MSDE back end. SQL Server is not compatible with Jet transactions
any more, so do NOT implement Jet transactions if you are considering
SQL Server.

(david)


Michel Walsh said:
I am not sure about how Jet does it, but I assume it does real writing in
the db (with locks) and uses its internal log for a rollback, as MS SQL
Server does: "Write-ahead logging and automatic rollback and rollforward of
transactions during the recovery phase of SQL Server startup ensure
durability" (typos, if any, are mine; taken from: Inside Microsoft SQL
Server 2000, page 43) and so, even "If the power fails immediately after
the acknowledge (...) the transaction is guaranteed to exist in the
database". With a 'merge' from an external db, as you expose, that won't be
possible, unless you have a mechanism to handle the case of failure 'while
merging', and thus be able to undo the resulting 'mess' ... without
having transactions at your disposal.



Vanderghast, Access MVP
 

Michel Walsh

It seems you really try to not understand what everyone else does, I imagine.

First, I never spoke about the transaction mechanic except to mention that it
is somehow involved in merging two disconnected databases. Your system will
need that; locks won't do that kind of job. Autonumber values in a TRANSACTION
are not generated as random numbers, all of a sudden, so how will you merge
the SEQUENTIAL numbers from your disconnected file with the main database
file when you commit the transaction? How will you propagate
the DELETED records, from the TRANSACTION, on commit? (I don't say
replication; I say, and I repeat, it is a delete in a transaction.) Using
"LOCK" is NOT an answer at all; it won't DELETE a record. A LOCK does not
explain how, from WITHIN an embedded transaction, you will execute a read:
an embedded transaction is NOT locked out of reading THROUGH its embedding
transaction(s). Your explanation is just NOT working.


Oh, by the way, a log is generally maintained in a "different file", indeed,
so the quote you supplied does not invalidate the fact that there is a
transaction log.


And since you claim to not understand what I wrote, I find it useless, for me,
to continue this ''conversation". In fact, it is not YOU that I am trying to
convince of anything, just in case you did not understand that too.





Vanderghast, Access MVP
 

Michel Walsh

First, I never said the log was in the same database, and indeed, it is
generally another 'file'. But that is not the MAIN FACT that
differentiates a LOG from what you describe.


The main point, with a log, is that the modifications are done IN THE real
DATABASE file, so indexes, DRI, and SQL statements do not have to be modified
at all, while they would have to be if the 'data' involved in the transaction
were temporarily written into ANOTHER 'database'. Furthermore, with a log,
when the transaction is committed, the related elements IN THE LOG are deleted
after the relevant locks are removed, and that is all that needs to be done!
If the transaction is rolled back, the log is read (backward) to undo
the modifications, then the locks are removed.


So, the differences, with a log, where COMMIT does almost nothing and
ROLL-BACK is the undo-er, are:

- the modifications occur in the same database as the data, AFTER the
means to undo them is stored in the log (rather than storing modified data
in another database... and how will you 'store' a deleted record?), so
COMMITTING is quite atomic, and rolling back is safe as long as the log is
not destroyed.

- no need to add special code to perform SELECT, from OUTSIDE the
transaction or from WITHIN the transaction: with 2 files where the data
would reside, you will need special code to reach the appropriate file
from WITHIN the transaction (locks don't lock YOURSELF out, so invoking
the locks to do that part of the job does NOT hold),

- no need to add code to maintain DRI. In fact, if such code
existed to maintain DRI between two files, it would be possible to enforce
DRI among linked tables... and it is NOT! (another hint: how does an illegal
APPEND inside a transaction react? does it fail only on COMMIT, or *as soon
as* you try the illegal APPEND, without having to commit the transaction?)


- no need to add special code to build and consult indexes on
'partitions' of data. In fact, MS SQL Server 2005 can do that, in some
circumstances, but not Jet.


- no need to modify the query execution plans depending on whether
you are inside a transaction or not (related in part to the index problem
with two data files). Indeed, being inside a transaction, reading your data
is quite complex, and the optimal query plan would probably be quite
different, and again, locks do nothing since these locks are YOURS (within
the transaction).


- no 'merging' of data and no need to reconcile eventual conflicts (such
as autonumbers, which do NOT turn into random values while you are inside a
transaction, or unique constraints, DRI, or whatever), since the data is
already acceptable in the database, or has been rejected while the
transaction was adding/modifying/deleting the data, long before you
ever commit the transaction.

- transactions embedded inside transactions do not need special code. With
a different data file per transaction, will you spawn one data file per
embedded transaction? And again, imagine the SELECT statements, the DRI, ...


And with a LOG, COMMIT is easy. ROLLBACK does not need special conflict
resolution, since the restored data WAS validated in the previous stage.
See how easy it is in terms of writing code to supply that functionality.
Microsoft project managers know that; why on Earth would they have chosen a
more expensive solution, in time, in possibilities of bugs, in stability, and
in money?




Vanderghast, Access MVP


david said:
Used to be like this. May have changed.

Jet writes transactions to a local temp file, which is a kind of mdb,
which you can call a log if you want. When the transaction is committed,
the temp file/log/temp mdb is written to the data mdb, and the temp
file is deleted.
(...)
 

Michel Walsh

"Indeed, being inside a transaction, reading your data is quite complex,"

is missing the context I implied: " ... is quite complex under the scenario
of multiple files holding the data".


Vanderghast, Access MVP
 

David W. Fenton

It seems you really try to not understand what everyone else does,
I imagine.

First, I never spoke about the transaction mechanic except to mention
that it is somehow involved in merging two disconnected databases.

If you're speaking of replication, then, again, you're simply wrong.
There is no transaction tracking in replication (though there are
lineage fields that do keep track of the source of changes to
individual fields).
Your system will need
that; locks won't do that kind of job.

But they *do* work with just locks. That's the basis of Jet's
interaction with the MDB file. It always has been.
Autonumber values in a TRANSACTION
are not generated as random numbers, all of a sudden, so how will
you merge the SEQUENTIAL numbers from your disconnected file with
the main database file when you commit the
transaction? How will you propagate the DELETED records, from the
TRANSACTION, on commit? (I don't say replication; I say, and I
repeat, it is a delete in a transaction.)

You are the one who confusingly used the term "replication." You are
the one bringing up the question of merging "two disconnected
databases." That sounds like replication to me.

Now, as to your point, I didn't say that a transaction randomizes
Autonumbers. I said that *replication* randomizes the autonumbers,
since I thought you were talking about replication (since you used
that term).

As to how the transaction manages it, while the transaction is in
process, the relevant tables are locked, which means no records can
be added. Second, the temp file is *not* "disconnected" -- it is
under the control of the local Jet process, which knows the state of
the real MDB and its locking file (as well as its Autonumber seed
values and so forth).

So, there's no merging required. Or, at least, no merging in the
sense that you are worrying about.

Deletes are just the same -- the relevant tables are under a
read-only lock while the transaction affecting them is still
pending. Thus, deletes don't matter until the transaction is
committed.
Using
"LOCK" is NOT an answer at all; it won't DELETE a record. A LOCK
does not explain how, from WITHIN an embedded transaction, you will
execute a read: an embedded transaction is NOT locked out of reading
THROUGH its embedding transaction(s). Your
explanation is just NOT working.

Here's a further quote from the programmer's handbook:

The most common use of transactions in multiuser environments is
to make sure users don't see an incomplete view of shared data as
it's being changed.

For example, assume your application is running code that is
updating data, and another user is simultaneously running a
report on that data. If you don't wrap your updates in a
transaction, the user running the report could receive
inconsistent data if some records have been updated by your code,
and some have not been. If you wrap your updates in a
transaction, however, the other user can't receive inconsistent
data because all records are updated at once.

Try not to keep a transaction open too long. All locks that
result from your edits with the transaction are kept in place
until the transaction is committed or rolled back. This can
diminish the concurrency of your multiuser application. (p. 258)

That describes a way in which locks are used to ensure that
transactions don't produce inconsistent results for other users.
Oh, by the way, a log is generally maintained in a "different
file", indeed, so the quote you supplied does not invalidate the
fact that there is a transaction log.

You're really reaching here.
And since you claim to not understand what I wrote, I find it
useless, for me, to continue this ''conversation". In fact, it is
not YOU that I am trying to convince of anything, just in case you did
not understand that too.

You really don't have a clue about the subject you are pontificating
about. I only know something about it because I've studied MS's
documentation of the Jet database engine. You have only your
imagination about how you wish Jet worked.

Readers can judge who has credibility here.
 

David W. Fenton

Used to be like this. May have changed.

Jet writes transactions to a local temp file, which is a kind of
mdb,

No, it's not a log file (i.e., a sequential record of transactions
against the MDB file). It's a Jet data file. Ever noticed that some
of the Jet temp files are MDBs?
which you can call a log if you want. When the transaction is
committed, the temp file/log/temp mdb is written to the data mdb,
and the temp file is deleted.

What do you mean by that? It makes no sense, as it stretches the
definition of the term "log" beyond what anyone has ever claimed for
it. Any temp file, e.g., MS Word's, would by your definition be a
log file. That's nonsensical from the standpoint of the claims being
made here, which is that, behind the scenes, Jet works just like a
server database.
Jet 2 used a temp file because of memory limitations in Windows
3.1. It would make more sense to use memory now.

Yet, it doesn't. Run a large operation in a transaction in your
latest version of Access, and it will create temp files in the local
temp folder.
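
A quick, hedged way to see that for yourself (the table and field names are
hypothetical, and whether Jet spills to disk can depend on the size of the
operation): pause inside a pending transaction and look at the %TEMP% folder.

Sub ShowJetTempFolder()
    Dim wrk As DAO.Workspace
    Set wrk = DBEngine.Workspaces(0)

    Debug.Print "Watch this folder while the code is paused: " & Environ("TEMP")

    wrk.BeginTrans
    ' A deliberately large (hypothetical) update, to give Jet something
    ' to hold as pending work.
    wrk.Databases(0).Execute _
        "UPDATE tblBigTable SET SomeField = SomeField", dbFailOnError
    Stop    ' while paused here, look for new Jet work files in %TEMP%
    wrk.CommitTrans
End Sub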
 

David W. Fenton

First, I never said the log was in the same database, and indeed,
it is generally another 'file'. But that is not the MAIN
FACT that differentiates a LOG from what you describe.

You're now claiming that the temp files that Jet maintains are log
files? That claim makes nonsense of the definition of a log file
you've already provided.
The main point, with a log, is that the modifications are done IN
THE real DATABASE file,

Yet, the Jet programmer's guide says explicitly that they are *not*
done in the real MDB.
so indexes, DRI, and SQL statements do not have to be modified
at all, while they would have to be if the 'data' involved in the
transaction were temporarily written into ANOTHER 'database'.
Furthermore, with a log, when the transaction is committed, the
related elements IN THE LOG are deleted after the relevant locks
are removed, and that is all that needs to be done! If the
transaction is rolled back, the log is read (backward) to undo the
modifications, then the locks are removed.

This is how server dbs work. It is not how Jet works, no matter how
much you might wish it to be so.
So, the differences, with a log, where COMMIT does almost nothing
and ROLL-BACK is the undo-er, are:

- the modifications occur in the same database as the data,
AFTER the means to undo them is stored in the log (rather than
storing modified data in another database... and how will you
'store' a deleted record?), so COMMITTING is quite atomic, and
rolling back is safe as long as the log is not destroyed.

Explained in another posting.
- no need to add special code to perform SELECT, from OUTSIDE
the transaction or from WITHIN the transaction: with 2 files where
the data would reside, you will need special code to reach the
appropriate file from WITHIN the transaction (locks don't lock
YOURSELF out, so invoking the locks to do that part of the job
does NOT hold),

Jet clearly uses locks to accomplish this, just as server dbs do
during the actual updating of the database.
- no need to add code to maintain DRI. In fact, if such code
existed to maintain DRI between two files, it would be
possible to enforce DRI among linked tables... and it is NOT!
(another hint: how does an illegal APPEND inside a transaction react?
does it fail only on COMMIT, or *as soon as* you try the illegal
APPEND, without having to commit the transaction?)

You have invented an imaginary requirement that you then conclude
cannot be met. There is no necessity for separate DRI mechanisms, as
the temp file is being controlled by the same Jet process that is
reading the actual MDB data file. So Jet knows the rules under which
valid data can be inserted/deleted/updated.
- no need to add special code to build and consult indexes on
'partitions' of data. In fact, MS SQL Server 2005 can do that, in
some circumstances, but not Jet.

You're just making it up again.
- no need to modify the query execution plans depending on
whether you are inside a transaction or not (related in
part to the index problem with two data files). Indeed, being
inside a transaction, reading your data is quite complex, and the
optimal query plan would probably be quite different, and again,
locks do nothing since these locks are YOURS (within the
transaction).

If I had a million dollars, I'd probably be living in a nicer
apartment. But I don't, so I'm not. Likewise, what you would like to
be the case is only what you would like, not what *is*.
- no 'merging' of data and no need to reconcile eventual
conflicts (such as autonumbers, which do NOT turn into random values
while you are inside a transaction, or unique constraints, DRI, or
whatever), since the data is already acceptable in the database,
or has been rejected while the transaction was
adding/modifying/deleting the data, long before you ever
commit the transaction.

I've explained all of this in another post.
- transactions embedded inside transactions do not need special
code. With a different data file per transaction, will you spawn
one data file per embedded transaction? And again, imagine the
SELECT statements, the DRI, ...

This is a problem of your own imagining.
And with a LOG, COMMIT is easy.

Yes, it is.

But Jet does not work that way.
ROLLBACK does not need special conflict
resolution, since the restored data WAS validated in the previous
stage.

There is no need for any special handling for a rollback with a Jet
transaction, since at the time a rollback can be done the changes
have not actually been applied to the main MDB -- at that point
they are still only in the temp file.
See how easy it is in terms of writing code to supply that
functionality. Microsoft project managers know that; why on Earth
would they have chosen a more expensive solution, in time, in
possibilities of bugs, in stability, and in money?

Yes, it *is* easy with a server database.

With a file system database like Jet, it is impossible, because
there is no single process that could manage the log for all users.
 

David W. Fenton

"Indeed, being inside a transaction, reading your data is quite
complex,"

is missing the context I implied: " ... is quite complex under
the scenario
of multiple files holding the data".

You are imagining how it works, and claiming that it must use a new
temp file for each level of the transaction. I don't know how Jet
actually does it, but I see no problem with this.

Again, you have no basis for your claims about what Jet does or does
not do except your gut feeling that it *has* to be that way.
 

Michel Walsh

If you still have data in a separate file for a transaction, how will you
handle a transaction inside a transaction:

Begin transactionA
... do something
Begin transactionB
... do something
SELECT * FROM table1


as example?


Will you have 3 files? One being the mdb, one for transactionA, and one
for transactionB. Note too that transactionB is NOT locked out of what
transactionA modified/deleted/appended.
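
For reference, here is how that nesting is actually written with DAO
(table1 is from the example above; the field f1 and the workspace usage are
illustrative assumptions). The argument is about what Jet must do underneath
to answer the SELECT issued inside transactionB:

Sub NestedTransactionSketch()
    Dim wrk As DAO.Workspace
    Dim db As DAO.Database
    Dim rs As DAO.Recordset

    Set wrk = DBEngine.Workspaces(0)
    Set db = wrk.Databases(0)

    wrk.BeginTrans                   ' transactionA
    db.Execute "INSERT INTO table1 (f1) VALUES ('added in A')", dbFailOnError

    wrk.BeginTrans                   ' transactionB, nested inside A
    db.Execute "DELETE FROM table1 WHERE f1 = 'obsolete'", dbFailOnError

    ' The read under discussion: issued from inside transactionB, it must
    ' see A's uncommitted insert and B's uncommitted delete.
    Set rs = db.OpenRecordset("SELECT * FROM table1", dbOpenSnapshot)
    If Not rs.EOF Then rs.MoveLast
    Debug.Print "Rows visible inside transactionB: " & rs.RecordCount
    rs.Close

    wrk.CommitTrans                  ' commit B (still provisional until A commits)
    wrk.CommitTrans                  ' commit A
End Sub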


So, how will reading a record be done? (REMEMBER, we are INSIDE the
transaction, NOT OUTSIDE it, so OUR transaction locks do not lock ... OUR
transaction, nor our embedded transaction.)

Look in the mdb. If a record is found, still look in transactionA's file
'tombstone' to see if transactionA has deleted it: happy if it has, but if it
has not, look in the table1 records maintained in transactionA's file to see
if transactionA has modified it. But it does not end there: we then have to
see that it is NOT IN transactionB's 'tombstone' (or whatever tracking the
TRANSACTION mechanic uses to remember what is deleted), and if it is not,
look in the table1 records maintained in transactionB to see if it has been
updated. But that is not all. You still need to scan table1 maintained by
transactionA to see if a record there is NOT IN table1 in the mdb, for the
case of a record appended to table1 inside transactionA... but not so fast,
buddy: you still have to check that it is NOT IN transactionB's tombstone
and NOT IN table1 maintained by transactionB. And no, that is not the end of
it: you still have to scan table1 maintained in transactionB for records
that are NOT IN table1 from the mdb (you have already considered records
from the mdb), nor IN table1 maintained in transactionA, while NOT IN the
tombstone of transactionB (the case of a record initially appended from
transactionA, deleted from transactionB, and next appended again from
transactionB). And I probably missed a couple of cases.

And that is for a simple:

SELECT * FROM table1


I still have not COMMITted anything. Imagine how 'simple' the "and the data
is merged on commit" can REALLY be.

Sure, we can simplify that hypothetical process of yours a little bit, but it
is still a process that seems to take at least MANY times longer than it
would without a transaction, even for the most elemental READ: not only would
it be much more expensive to build, debug and maintain, but it would also be
totally inefficient.


And indexes, and data referential integrity, and ... well, that has already
been exposed, and you didn't really supply any concrete counter-argument
other than the very weak claim:

"there is no log because I have never read that".

John L. Viescas said there is an internal (private, not exposed) log when a
transaction is involved. (Obviously, YOU have never read John's writing.)

Your claim leads to inefficiencies, while the presence of a log makes a lot
of sense, in terms of performance and in terms of development of the
database engine.

Yes, indeed, in the absence of 'seeing the code' with their own eyes, people
are reduced to having an idea about how it PROBABLY works. For me, it is a
log, and the transaction data is written IN THE mdb, where the log's main
functionality is to be able to UNDO on ROLLBACK.




Vanderghast, Access MVP
 

RD

And if the LDBs are getting orphaned regularly, then you have major
problems with your operating environment that could lead to
corruption/data loss.

I support a dozen MDB's created by a consultant who retired a couple
of years ago. This was the kind of Access "developer" that gives
Access developers a bad name. Multiple users on a single, non-split
database on a network drive. Look-up fields in tables. Basing forms
on a query and then using DLookup to populate fields on the form that
already exist in the query. Most of these applications are utter
garbage. I've been trying to fix them as I get time, but I wear other
hats around here, so they must wait.

By contrast, the two Access applications I've built here have been
stable and running fine for 18 months.

The orphaned ldb's are almost always caused by users. Many of them
are barely computer literate and think nothing of manually turning off
a computer while apps are still running.

Eh ... job security.

RD
 
