Access Client to SQL Server Back-End (Conversion Project)

Jordan S.

A potential client has a non-trivial Access 97 application that has an
"application database" (.mdb file with forms, reports and modules) linked to
a "data database" (.mdb file with tables and indexes only) on a file server.
There are about 100 users. The "data database" is about 70MB in size and
contains 120 tables. The "application database" contains 75+ forms
(including subforms), 650+ queries (yes - over six hundred fifty queries!)
and 300+ reports.

The client application has practically all UI controls (including forms and
controls on the forms, like combo boxes etc) bound to tables or queries.
Filters are commonly used in the various UI widgets. Many forms have 15+
different controls bound to tables or queries.

Performance is slow - abysmally slow.

The client has invited me to come on board and assist in swapping the "data
database" out for SQL Server - and simply hook up the "application database"
to the SQL Server database. All this in an attempt to significantly improve
performance.

I'm hesitant to take this project on until I am very clear on what it will
take to make this project successful. I'd hate to do the conversion and have
poor performance remain.

I understand that MS Access can do passthrough queries. But AFAIK (and
please correct me if I'm wrong), using passthrough queries would preempt the
binding of UI controls. Yes? No? If this is true, then all those forms would
have to be reworked so as to not be bound to the tables - but instead we'd
have to go with recordsets and do all updates in code? Yes?

Any suggestions or guidance on specifically what it would take to do this
[back end database conversion to SQL Server - while keeping the client as an
Access 97 database] would be greatly appreciated. If upgrading to a newer
version of Access (newer than 97) would make a difference, then I'd like to
know specifically what the new feature(s) are and why they'd make a
difference (so I can talk intelligently to the client about why they should
or must upgrade their client installations of MS Access).

Thanks!
 
Guest

Hi Jordan,

<< I'm hesitant to take this project on until I am very clear on what it will
take to make this project successful. I'd hate to do the conversion and have
poor performance remain. >>

If you decide that you really need to migrate, you should obtain a copy of
the book titled "Microsoft Access Developer's Guide to SQL Server", written
by Mary Chipman and Andy Baron (SAMS Publishing). Here is a link to the book
at Amazon.com:

http://www.amazon.com/exec/obidos/t...103-5399559-8631817?v=glance&s=books&n=507846

Early in the book (page 6) the authors write:

<Begin Quote>
"Many people think that upsizing from the Jet database engine to SQL Server
is a universal panacea for whatever is ailing their Access databases. It's
not. In fact, just the opposite is usually true. If your Access application
is a dog, then most likely it will still be a dog after you upsize it to SQL
Server--perhaps an even bigger, uglier, shaggier dog! Even a well-designed
Jet database often won't run any faster after the tables are upsized to SQL
Server if you are using the same data access methods you used for your Access
database. In order to successfully convert your Access database, you have to
be clear about why it needs to be upsized, and you need to understand how to
take advantage of the strengths of SQL Server by reworking the data access
elements of your Access application."

</End Quote>


650 queries and 300 reports certainly sounds a bit excessive. How many of
these objects are actually used? A good investment would be Total Access
Analyzer, marketed by FMS, Inc.:
http://www.fmsinc.com/products/analyzer/index.html

Many times, reports seem to breed like rabbits in databases, with several
being slight variations of a given report. These can often be re-written
using techniques that build the WHERE portion of a SQL statement on-the-fly.
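As a sketch of that technique (all object and control names below are hypothetical), a single generic report can serve many variations if the calling code assembles the filter at run time:

```vba
' One generic report, filtered at run time. The report, form and
' control names here are made-up examples.
Private Sub cmdPrintInvoices_Click()
    Dim strWhere As String

    ' Build the WHERE portion from whatever criteria the user entered.
    If Not IsNull(Me.txtCustomerID) Then
        strWhere = "CustomerID = " & Me.txtCustomerID
    End If
    If Not IsNull(Me.txtFromDate) Then
        If Len(strWhere) > 0 Then strWhere = strWhere & " AND "
        strWhere = strWhere & "InvoiceDate >= #" & Me.txtFromDate & "#"
    End If

    ' The same report handles every variation of the filter.
    DoCmd.OpenReport "rptInvoices", acViewPreview, , strWhere
End Sub
```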

One of the most important things to ensure is that the FE database
establishes a connection to the BE database when it is opened. When you open
the FE database, and you are the only person using the application (or you
are the sole person working on a dev copy), you should see a locking
database file created for the BE database. Use Windows Explorer to verify
that you see the .ldb file being created for the BE database as soon as you
open the FE database. This idea, among others, is discussed in this article:

Implementing a Successful Multiuser Access/JET Application
http://www.access.qbuilt.com/html/multiuser_applications.html
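One way to hold that connection open for the life of the FE is sketched below; the UNC path and procedure names are examples, not from the application in question:

```vba
' Keep one handle on the BE database open for the whole session, so the
' locking file is created once instead of on every table access.
Option Explicit

Private dbBackEnd As DAO.Database   ' module-level: survives between calls

Public Sub OpenBackEndConnection()
    ' Call once, e.g. from the startup form's Open event.
    Set dbBackEnd = DBEngine.Workspaces(0).OpenDatabase( _
        "\\Server\Share\DataBE.mdb")
End Sub

Public Sub CloseBackEndConnection()
    ' Call from the startup form's Close event.
    If Not dbBackEnd Is Nothing Then dbBackEnd.Close
    Set dbBackEnd = Nothing
End Sub
```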

Note that the discussion of NameAutocorrect and SubDatasheets will not apply
in your case, since you are using Access 97.

I'll let others answer the rest of your questions.


Tom Wickerath, Microsoft Access MVP

http://www.access.qbuilt.com/html/expert_contributors.html
http://www.access.qbuilt.com/html/search.html
__________________________________________

 
Jordan S.

Thanks Tom.

RE:
<< 650 queries and 300 reports certainly sounds a bit excessive. How many of
these objects are actually used?>>

Great question. I asked their current developer and he said that *all* of
them are actually being used. The copy I was evaluating was his production
version - from which he had already removed the unused objects. Astounding
(to me!) but true according to the current developer.

-Jordan.



 
Guest

The ratio of reports to queries seems wrong - you would think
that 60 different report layouts could handle all the data variations
that 650 queries could generate - but this is a coding style issue.

In Access, the easiest and most flexible way to generate a report
variation is to copy and create a new report. And it rarely causes
any kind of support issues. Truthfully, the only reason I re-use
existing reports instead of creating new ones is that it is so d-d
difficult to find the report you need to modify when you have 300
of them. Even as it is, we always have to ask the user for the
report ID number - and it is embarrassing to have to force the
user to use a meaningless number, just to communicate with
support programmers.

I think there are some conversion notes somewhere, but I can't
find the URL tonight.

In A97, pass-through queries were normally not updateable. There
is no restriction on using them as data sources, bound to forms
or queries.

In A97/SQL Server, you cannot update or append to an identity
field in a linked table/view unless you use pass-through commands.
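A sketch of such a pass-through command in DAO follows; the connect string, server and table names are placeholders:

```vba
' Temporary pass-through QueryDef that runs an action statement
' directly on the server. Names and connect string are examples only.
Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.CreateQueryDef("")   ' "" = temporary QueryDef
qdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
              "DATABASE=MyDb;Trusted_Connection=Yes;"
qdf.ReturnsRecords = False               ' action query, no result set
qdf.SQL = "INSERT INTO dbo.Orders (CustomerID) VALUES (42)"
qdf.Execute dbFailOnError
```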

Upgrading from A97/Jet 3/SQL Server to A2K/Jet 4/SQL Server
has killed all of our DAO transactions.

SQL Server will try to keep all of its indexes in memory. Data
does not have to be sent over the network to maintain or check
indexes. If the number of users you have is limited by the network
bandwidth, then moving to SQL Server may allow you to increase
the number of users. If network latency causes synchronisation
problems between the users, then moving to SQL Server may allow
you to do more transactions per second.

Moving to SQL Server may allow you to handle more data,
users, or transactions, if you are at one of those limits.

It doesn't make your forms faster.

(david)


Albert D.Kallal

<< Performance is slow - abysmally slow. >>

After how many users is the performance slow? With 2 users is it OK, and
then with 15 or 20 it is slow?

The above is a rather important question, since if more users = slow system,
then it is the network being overloaded by the increase in users, and SQL
Server could likely help this situation a lot.

If the application is slow with 2 users, then we have a problem. A 70 meg
file is not that large. Assuming VERY SMALL tables in the 50,000 to 150,000
record range, response times for 2-10 users should be near instant, and few,
if any, delays should be noticed when working with forms.

You don't mention how large some of these tables are, but 70 megs for the
data is not large at all...

If the application is slow with 2 or 3 users, then application DESIGN is at
issue now. Remember, just converting and moving the data from the back end
to SQL Server WILL NOT improve performance. In fact, often performance will
drop. It should also be noted that a Jet file share (mdb) is actually
FASTER than SQL Server when no network is involved.

Further, was there a particular point in time when the application started
running slow, or did it always run slow?

So, ms-access is just as fast as C++ or any other tool of choice when
grabbing data from SQL Server. And even bound forms to SQL Server can
perform VERY well; for MOST cases it is not worth it to use un-bound forms.
However, the issue remains that it performs slow now, and I don't think it
should.

If the designs are responsible for the slow performance now, then moving to
SQL Server will not fix this problem. It is by DESIGN that you will obtain
good performance here, NOT by moving to SQL Server. I am going to repeat
this again:

moving to SQL Server will NOT FIX YOUR PERFORMANCE.

It is also not clear what type of network is involved here. I talk about
the network here:

http://www.members.shaw.ca/AlbertKallal//Wan/Wans.html

<< using passthrough queries would preempt the binding of UI controls.
Yes? >>

Yes, for the most part. However, for reports and some displays of data,
pass-through is a good idea. However, if we have a select on a table with
500,000 records, and we ask the form to load one record, ms-access will only
grab the one record from SQL Server. So, in these kinds of cases, a
pass-through will do absolutely nothing to change performance. What WILL
change performance is how many records you fetch. So, pass-through queries
are OFTEN not the problem, nor are they some quick fix to performance.

Remember, even in ms-access, if you request one record to a form out of
150,000 records, only that one record is transferred. Contrary to popular
belief, the whole table is NOT transferred (unless a poor design is at
fault).
<< If this is true, then all those forms would have to be reworked so as to
not be bound to the tables - but instead we'd have to go with recordsets
and do all updates in code? Yes? >>

Doing the above and not changing the existing design again will not fix
performance problems at all.

What needs to be determined right now is why the performance of the current
system is slow. It is network load, is it network speed, or poor designs?
(or, all 3!!).

For example, it makes NO sense at all to load up a form...and THEN have the
user start searching through records. That would be like having a bank
machine download all accounts...and THEN ask you what account to work on....

It should be well noted that a good design in ms-access will perform well
with a mdb file..and MOST of the time, converting to sql server will not
help performance, but will improve security, and certainly allow more users
than a standard file share (as mentioned, knowing after how many users it
slows down would shed light on this issue).

So, for example, to search for a name in a small table of 150,000 records,
the search for the name should be instant. Here is an example of a design
that works well with both a Jet file share and SQL Server in ms-access:

http://www.members.shaw.ca/AlbertKallal/Search/index.html

The key in the above is that very few records are retrieved, and thus how
can this search be slow? As mentioned, ms-access does not pull the whole
table; so, in the above, with 5 or 125,000 records, it should not present
any kind of noticeable delay.
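The pattern behind that kind of search screen can be sketched like this (control and table names are illustrative, not from the actual sample):

```vba
' "Ask first, fetch later": the form's RecordSource is assigned only
' after the user supplies criteria, so only matching rows are pulled.
Private Sub cmdSearch_Click()
    Dim strSql As String
    strSql = "SELECT * FROM tblCustomers " & _
             "WHERE LastName LIKE '" & Me.txtLastName & "*'"
    Me.RecordSource = strSql   ' loads just the records that match
End Sub
```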
<< Many forms have 15+ different controls bound to tables or queries. >>

Well, 15 fields or 40 fields will not change the performance. However,
controls such as combo boxes, list boxes, or sub-forms will need to be
addressed. An often simple solution is to create views on the SQL side, and
bind the combo box directly to that view without ANY sql on the control.

eg, put JUST the view name in the combo:

myvCboCompany

By the way, while we are at this, we can assume that combo boxes or list
boxes never have more than say 100, perhaps 200 entries. (Golly, how can
anyone work with a combo box with more than 200 entries anyway? It is
torture to the users, and also bad from a performance point of view.) So,
this is what I mean by good designs will perform well.
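A related trick is to defer loading the combo at all until it is needed. A sketch (the control name is made up; the view name follows the earlier example):

```vba
' Leave the combo's RowSource empty at design time and fill it the
' first time the user reaches the control.
Private Sub cboCompany_GotFocus()
    If Len(Me.cboCompany.RowSource) = 0 Then
        ' myvCboCompany would be the server-side view mentioned above.
        Me.cboCompany.RowSource = "SELECT CompanyID, CompanyName " & _
                                  "FROM myvCboCompany"
    End If
End Sub
```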
 
Rick Brandt


There are undoubtedly many reasons for and benefits from migration to a SQL
Server back end. Also, there are certainly many things that could be done to
the application to solve the performance problems. The problem is that moving
to SQL Server is not going to be one of these things.

A server back end provides greater security, scalability, data integrity, etc.,
but a poorly performing (data in an MDB) application that simply has the data
moved to a SQL Server will likely not be any faster and could in fact see many
operations get even slower.

Poor performance in a database application is a problem of *Design* and those
design shortcomings have to be addressed. It might very well be better to
implement those fixes *while* doing a migration to a server back end, but do NOT
make promises to management that the simple act of migrating to SQL Server is
going to make things faster or you are setting yourself up for a big backlash
when that doesn't happen.
 
RobFMS

Some thoughts/ideas on what can make an application slow:

1. If you use your production version to do your development, opening and
closing objects can cause the application to become decompiled. See:
http://www.fmsinc.com/free/tips.html#accessdecompile

2. If loading a form is slow:
- Are you using a lot of tabs? See:
http://www.fmsinc.com/free/tips.html#accesslatebind

- Are you using a lot of combo/list boxes? Consider loading them on demand

3. Is there a lot of record-by-record processing occurring? Possibly a
well-written query could improve performance.
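As an illustration of point 3 (table and field names are invented), the same change expressed both ways:

```vba
' Slow: record-by-record processing through a recordset.
' Do Until rst.EOF
'     rst.Edit
'     rst!Price = rst!Price * 1.1
'     rst.Update
'     rst.MoveNext
' Loop

' Faster: one set-based query does the same work in a single pass.
CurrentDb.Execute _
    "UPDATE tblProducts SET Price = Price * 1.1 " & _
    "WHERE Discontinued = False", dbFailOnError
```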

HTH


--
Rob Mastrostefano

FMS Professional Solutions Group
http://www.fmsinc.com/consulting

Software Tools for .NET, SQL Server, Visual Basic & Access
http://www.fmsinc.com


 
Jordan S.

Thanks Albert...

Performance is abysmally slow with just ONE SINGLE USER! Additionally, when
the whole thing is run on a laptop computer (i.e., NO network involved at
all), performance is abysmally slow. By "abysmally slow" I mean it takes 30
seconds to 2 minutes (sometimes 4 minutes) to open a single form.

So yes, it's perfectly clear that we have fundamental design issues (which
one can easily conclude just by browsing - "eyeballing" - the various
tables).

So I have already communicated to the potential client that a complete
rewrite is *necessary* (for both the relational table design and the client
application). Now they still want the client to be done in MS Access 97
(upgrading to a newer version of Access only if absolutely necessary). So my
question is how to make sure that we're squeezing the best possible
performance out of an Access client - in the context of fundamental design
changes. Thus my questions about passthrough queries.

It appears from responses so far that much of the data access logic behind
the forms might need to become much more complex - retrieving data via
passthrough queries, and then doing updates via separate action queries.
There would be no more binding (except for read-only data), and updates
would have to be accomplished via action queries. So given that, we're now
proposing a "mostly disconnected" client application and would have to roll
our own transactions and concurrent update conflict handling logic. Yes?


-Jordan
 


Jordan S.

Thanks Rick...

RE:
<< It might very well be better to implement those fixes *while* doing a
migration to a server back end, but do NOT make promises to management that
the simple act of migrating to SQL Server >>

Agreed. In fact I've already convinced the client that a complete rewrite of
the relational database design is in order, and that simply throwing the
existing bad design into SQL Server won't help anything. Additionally, the
client application part is quite kludgy - so a complete redesign/rewrite of
that is also in order.

So my question is how to get the best performance out of a nontrivial
Access client that is using SQL Server as the back-end database. It seems to
me that Access really shines for RAD purposes when one can simply bind
everything to tables or queries. But now that's apparently out for the most
part (at least for all updatable data), because passthrough queries
apparently give the best performance (and are not updatable).

So now we're losing much of the RAD appeal of MS Access (a la "no more
binding everywhere"), so I'm wondering why we'd keep this client app in MS
Access at all. That's a legitimate question. Please help me justify keeping
this thing in MS Access. I need the strongest possible arguments in favor of
an MS Access client in this scenario where we lose much of the binding and
then have to write a bunch of code to do updates.

Thanks!

-Jordan.
 

Rick Brandt

Jordan said:
Thanks Rick...

RE:
<< It might very well be better to implement those fixes *while*
doing a migration to a server back end, but do NOT make promises to
management that the simple act of migrating to SQL Server >>

Agreed. In fact I've already convinced the client that a complete
rewrite of the relational database design is in order, and that
simply throwing the existing bad design into SQL Server won't help
anything. Additionally, the client application part is quite kludgy -
so a complete redesign/rewrite of that is also in order.

So my question is how to get the best performance out of a non
trivial Access client that is using SQL Server as the back-end
database. It seems to me that Access really shines for RAD purposes
when one can simply bind everything to tables or queries. But now
that's apparently out for the most part (at least all updatable data)
because apparently passthrough queries will give us the best
performance (and are also not updatable).
So now we're losing much of the RAD appeal of MS Access (ala "no more
binding everywhere") so I'm wondering why we'd keep this client app
in MS Access at all. That's a legitimate question. Please help me
justify keeping this thing in MS Access. I need the strongest
possible arguments in favor of an MS Access client in this scenario
where we lose much of the binding and then have to write a bunch of
code to do updates.
Thanks!

-Jordan.

Nearly all of my apps use Access front ends to either SQL Server or UDB400
server back ends, and nearly every data entry form is a bound form. You
certainly do NOT have to give that up. You just have to make sure that
forms do not open huge tables by default. You limit the number of records
to the bare minimum, with exactly one record being the best.
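A minimal sketch of one way to do that (the form, table, and field names here are made up): leave the form's RecordSource blank at design time, pass the key in OpenArgs, and load exactly one row when the form opens.

```vba
' Calling code (hypothetical names):
'   DoCmd.OpenForm "frmCustomer", , , , , , txtCustID

' In frmCustomer's module:
Private Sub Form_Open(Cancel As Integer)
    If Len(Me.OpenArgs & "") = 0 Then
        Cancel = True   ' no key supplied - don't open an unfiltered form
    Else
        Me.RecordSource = "SELECT * FROM Customers " & _
                          "WHERE CustID = " & Me.OpenArgs
    End If
End Sub
```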

For the most part, going to a server back end you want to always be thinking
about what you will be dragging over the wire, and trying to make that as
few bytes as possible for any given task. Coincidentally, the exact same
thing is true for a split "ALL MDB" application, so this is a networking
"best practice" in general, not just a best practice for server back ends.
 

Jordan S.

Thanks Rick...

RE:
<< Nearly all of my apps use Access front ends to either SQL Server or
UDB400 server back ends and nearly every data entry form is a bound form>>

So when going to SQL Server, what do you bind TO in order to have updatable
forms? Or are your forms read-only? One cannot bind to a stored procedure to
have updatable forms. So how do you do it? Do you bind to updateable SQL
Server Views?

-Jordan.
 

Rick Brandt

Jordan said:
Thanks Rick...

RE:
<< Nearly all of my apps use Access front ends to either SQL Server or
UDB400 server back ends and nearly every data entry form is a bound
form>>
So when going to SQL Server, what do you bind TO in order to have
updatable forms? Or are your forms read-only? One cannot bind to a
stored procedure to have updatable forms. So how do you do it? Do you
bind to updateable SQL Server Views?

-Jordan.

ODBC linked tables. An ODBC linked view could also be used if the view were
updateable.
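For what it's worth, a view can be linked from code much like a table. The sketch below uses made-up server/database/view names; note the Jet trick at the end of creating a local unique "index" on the link, which is what lets Access treat the view as updateable:

```vba
' Sketch (hypothetical names): link a SQL Server view via ODBC.
Const cstrConnect As String = _
    "ODBC;DRIVER=SQL Server;SERVER=MyServer;" & _
    "DATABASE=MyDb;Trusted_Connection=Yes"

DoCmd.TransferDatabase acLink, "ODBC Database", cstrConnect, _
    acTable, "dbo.vwActiveCustomers", "vwActiveCustomers"

' Jet needs to know a unique key before it will allow updates;
' this DDL creates the pseudo-index locally (nothing changes on the server):
CurrentDb.Execute _
    "CREATE UNIQUE INDEX pk ON vwActiveCustomers (CustID)"
```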

Stored Procedures and Passthrough queries will definitely play a role in
your converted app, but it need not be a very large one. The idea that
nearly everything in such a setup is accomplished with SPs and PT queries is
simply incorrect. Standard Access queries, forms, and reports built against
ODBC linked tables will be perfectly fine for the majority of activities.
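And when a pass-through query is warranted (say, to run a stored procedure or a server-side aggregate), it is only a few lines of DAO - again a sketch with hypothetical names:

```vba
' Sketch: define a pass-through query in DAO (hypothetical names).
Dim qdf As DAO.QueryDef

Set qdf = CurrentDb.CreateQueryDef("qryPT_OrderTotals")
qdf.Connect = "ODBC;DRIVER=SQL Server;SERVER=MyServer;" & _
              "DATABASE=MyDb;Trusted_Connection=Yes"
qdf.SQL = "SELECT CustID, SUM(Amount) AS Total " & _
          "FROM Orders GROUP BY CustID"
qdf.ReturnsRecords = True   ' False for action/exec-only statements
qdf.Close
```

A report or read-only form can then use qryPT_OrderTotals as its record source, while bound editing stays on the linked tables.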
 

Albert D.Kallal

Jordan S. said:
Thanks Albert...

Performance is abysmally slow with just ONE SINGLE USER! Additionally,
when the whole thing is run on a laptop computer (i.e., NO network
involved at all), performance is abysmally slow. By "abysmally slow" I
mean it takes 30 seconds to 2 minutes (sometimes 4 minutes) to open a
single form.

Since the single stand-alone system runs slowly, we need to check the
setup. It would be hard to believe that the application was deployed and
ran this slow - OFTEN THE SETUP of the computer is at fault.

However, a single user with no network, on one computer in Access 97,
usually did run well. So, it sounds like either the task at hand has to
work with lots of data, or something else is wrong (as mentioned, JET is
much faster than SQL Server when working stand-alone with no network)..

The list of things to check is here:

http://www.granite.ab.ca/access/performancefaq.htm

I would go through that list one by one (a good many features and
suggestions are for later versions..since 97 is now 4 versions old).
It appears from responses so far that much of the data access logic behind
the forms might need to become much more complex - with retrieving data
via passthrough queries - then doing updates via separate action queries.

The above will do absolutely nothing to improve performance in ms-access.
Say I have a form that lets me edit a customer, and the customer file has
250,000 records.

If I prompt you for the customer number and then load the form, only the
ONE record will be loaded:

Dim strCust As String

strCust = InputBox("What customer num to edit")

If strCust = "" Then Exit Sub

DoCmd.OpenForm "frmCustomers", , , "CustID = " & strCust


If you look at the above code, we prompt the user for a customer..and then
load the form. Only the ONE record will be loaded to the form in the above
example. Writing a whole bunch of pass-through queries and a whole bunch of
update code will do ABSOLUTELY NOTHING for performance. It will certainly
allow you to run up a big consulting bill, and create more billable
hours...but it will not improve performance over ms-access and a bound
table.
There would be no more binding (except for read-only data), and updates
would have to be accomplished via action queries.

Why would the above improve performance? It will not. If I have 100
connections open, and those connections are not transferring data..then
reducing the number of connections does nothing here. Of course, if all
those connections are shuffling data..then yes, we gain....
So given that, we're now proposing a "mostly disconnected" client
application and would have to then roll our own transactions and
concurrent update conflict handling logic. Yes?

The above approach will give you, the developer, more control as to how
updates to data are going to be done, but in and of itself it will not be a
performance boost over editing a record in ms-access with the current bound
system. Writing all that extra code to accomplish the 3 simple lines of code
I gave above will not improve performance by ONE bit. Of course, my example
is simple..but to pull a record...edit it...and write it back, using
recordsets and pass-through queries will do NOTHING to change my above
example of editing one record...

The enemy here is not bound forms. However, what writing code to load up a
recordset does is force the developer to THINK about what will be loaded
into the recordset. So, often, the code design will change so that you ask
the user what to edit, and then load up the form. This exact sequence can
occur in ms-access with bound forms..and with a HUGE savings in code (my
example above did this in 3 lines of code).

BOTTOM LINE:
To fix performance, you need to reduce the number of records loaded into a
form. You don't need pass-through queries, or disconnected recordsets, to
reduce this transfer (they don't result in less data being transferred). In
fact, creating a recordset....loading the
recordset...disconnecting...editing..and then re-connecting, and then
writing the data back often will result in MORE network traffic, since the
closing and opening of the recordset can create network traffic. This thus
does not save anything over using a bound form. To be fair, using un-bound
forms gives you MORE control over the data..but at the cost of writing much
code. If you are going to use un-bound forms, then dump ms-access, as you
are using the wrong tool. (You will also need about 3-6 times the budget
you have now!!)

I have some interesting notes on a project I converted from an old mainframe
system to ms-access. Take GOOD NOTICE of the part where I talk about
developer skill. Make no mistake, to enter into a project of any large
size without ms-access experience is going to be a disaster. What most
developers fail to understand is that ms-access has a STEEPER and LARGER
learning curve than, say, VB6....in fact...MUCH larger. Don't let the fact
that ms-access is often used as an "end user" tool fool you into thinking
the programming and development side is also easier...it is not....

Here is that article...and in there I give out some tidbits as to the types
of questions I ask ms-access developers when hiring them. Also, pay
attention to the skill level(s) I mention that you need for a successful
project.

http://www.members.shaw.ca/AlbertKallal/Articles/fog0000000003.html
 

Guest

<< NO network involved at all ... it takes 30 seconds to 2 minutes
(sometimes 4 minutes) to open a single form >>

Cool! I can imagine 3 situations where that would happen:

1) Brain dead anti-virus on a 500MB data file.

2) Form bound to a 500MB data table using FindFirst to move to
the correct record.

3) Datasheet form using combo boxes, all bound to local SQL Server.

Have you seen a new way? I'm always on the lookout for arcane
Access knowledge :~)

(david)
 

Jordan S.

Thanks for taking the time for your comprehensive response. Now, a couple of
follow up items, if you don't mind:

RE:
<< Since the single stand-alone system runs slowly, we need to check the
setup >>
Not necessarily (although a smart direction to take). Remember, we're
dealing with a fundamentally flawed table design (it violates 2nd and 3rd
normal forms as a matter of course), and frequently has its forms bound 20+
different ways to various underlying tables or queries (e.g., bound combo
boxes with FILTERs, etc.). Plus, as stated elsewhere in this thread, the db
has over 650 queries in use. I've seen them... most queries I "eyeballed"
are based on joins between tables and subqueries - which themselves are
based on other subqueries based on even further subqueries. It's almost
laughable except that it's "for real." This is how we can get such bad
performance on a stand-alone system.

RE:
<< I would go through that list one by one (a good many features and
suggestions are for later versions..since 97 is now 4 versions old). >>
Good advice there for me IF I were looking to fix the existing system. As
it is, it's so far off track that it will be quicker - much quicker - to do
a complete redesign (relational model) and rewrite of the client.

Regarding your promotion of the idea that [bringing LESS data across the
wire] is the way to improve performance - I TOTALLY agree with you. I have
never been confused about that, and I understand that doing passthrough
queries won't help one bit IF they bring back too much data. But my
questions about passthrough queries and disconnected communications with MS
Access still stand.

FWIW: I have done over 5 years of nontrivial MS Access programming... just
never needed passthrough queries and never had SQL Server on the back end
(we bound forms to small recordsets - apparently along the lines of what
you suggest)... thus my inquiries about the passthrough scenario.

RE:
<< If you are going to use un-bound forms, then dump ms-access, as you are
using the wrong tool. >>
Can you elaborate a bit? Why is MS Access the wrong tool for dealing with
un-bound forms? What would instead be a better or "right" tool for using
un-bound forms? and Why?

Thanks again for taking the time - much appreciated!

-Jordan
 

Albert D.Kallal

RE:
<< Since the single stand-alone system runs slowly, we need to check
the setup >>
Not necessarily (although a smart direction to take). Remember we're
dealing with a fundamentally flawed tables design

Sure, the above makes total sense. The only suggestion here was that we do
need to ensure a proper setup - one that is *outside* the issue(s) of poor
design. That was the only reason for the above suggestion.

RE:
<< If you are going to use un-bound forms, then dump ms-access, as you are
using the wrong tool. >>
Can you elaborate a bit? Why is MS Access the wrong tool for dealing with
un-bound forms? What would instead be a better or "right" tool for using
un-bound forms? and Why?

There are several reasons, but the main reason is that the whole philosophy
and design of ms-access is built around bound forms. If you don't use bound
forms, then all of the built-in features that save TONS OF coding can't be
used. For example, in ms-access (as compared to VB), we OFTEN have 2, 3 or
even MORE events for the same thing (we have On Delete, Before Del
Confirm, After Del Confirm).

Another simple and great example is that we have both an on-load event and
an on-open event. On-open has a cancel argument, and thus can cancel a form
load (in VB, this is a royal pain).

So, if you dump bound forms, then you have a very complex forms object, but
for the MOST part you will NOT be able to use the events and features of
this form (before update, after update...even the dirty flag of the form is
not set nor used when un-bound). So, you have this HUGE HUGE object model
that is FULL OF stuff to edit data and verify data (the before update event
has a cancel argument, and you still even have an after update event).

Some of these events and features can be used..but a LOT of them cannot be
used with un-bound forms. Take a look at these continuous forms - you can't
have un-bound continuous forms:

http://www.members.shaw.ca/AlbertKallal/Articles/Grid.htm

I shudder to think of the work to duplicate the above screens. You will
have to resort to ActiveX controls for many of your "grids" of data, since
you can't use the built-in features.

In other development tools such as VB, or VB.net, the system is OPTIMIZED
for building forms that are not data bound. So, you have wizards, and things
like data repeater controls, and even a data connection object that you have
to place on the form to navigate the recordset. Once you set up the data
connection and put the data controls on the screen, the forms designers for
these products will HELP YOU build and place fields on the form. With
ms-access, you will NOT HAVE ANY of these helpful features..because you are
not using bound forms. You can actually wind up in a situation where you do
MORE WORK than in those other systems!!

ms-access is a good 3-8 times better in terms of development compared to
those other systems. By using un-bound forms everywhere...you might
actually wind up doing MORE work than what those other systems require..and
that multiplier of time taken will go up even more!!!

There are many times when you want to use an un-bound form to edit some
data in ms-access, but there is a cost, and the general design of the
product is not built around this concept like VB and other products are.

So, my point is that un-bound forms are a lot of work, and if you do such,
then you miss out on much of why ms-access is so productive.

I mean, it takes one line of code to open an ms-access form to a particular
customer invoice..

DoCmd.OpenForm "frmCustomer", , , "id = 5324"

The above just took a second to write....how much work will it be to open
up a form to a particular record when you have an un-bound system?

And, if the above customer invoice is bound to a sql server table with
500,000 records, only the one record will be loaded anyway. Writing a whole
bunch of code to load up a recordset to one record..and then populate the
controls..gains you nothing in terms of performance here..but it sure is a
lot of work.

Last but not least

Without question, quality applications built around un-bound forms are the
NORM in our industry outside of ms-access. So, at the end of the day, it
goes without saying that higher quality and better applications can be
written by using tools that work in an un-bound environment..but they cost
more...
 

Jordan S.

Albert,

Thank you SO MUCH for taking the time to provide such a complete response.
Truly MVP assistance going on here!

-Jordan
 
