How can I get Access 2003 to use more memory?

I have a large database in Access, and 1 GB of RAM. I would like to set up
Access to use 512 or 768 MB of RAM to speed things up. Is it possible to
persuade Access to use more RAM?
 
On-Tech said:
I have a large database in Access, and 1 GB of RAM. I would like to
set up Access to use 512 or 768 MB of RAM to speed things up. Is it
possible to persuade Access to use more RAM?

It will use as much RAM as it needs. Slow database performance is ALMOST always
caused by disk I/O, not CPU/RAM issues.
 
Joseph said:
Or design issues. :-)

Well yes, but poor design usually manifests itself as more disk activity
than would otherwise be necessary, and it is the additional disk activity
that makes it slow (and LAN traffic, if networked).
 
What happens is that Windows itself becomes a large disk cache, and thus the
data stays in RAM anyway.

I don't think that MS Access benefits much from having a HUGE amount of RAM.
It can certainly use a big chunk, but beyond that, it is the disk cache that
keeps the data in memory, not MS Access. (The time to transfer from the
disk cache to the MS Access database engine is very small anyway.)

You don't mention what size of files you are working with (i.e., how many
records, etc.).

Access works very well with small files of 100,000 records.

An example of amazing performance follows.

Let's assume a typical products database where we want to keep
inventory.

Let's assume 500 products in the Products table = 500 records.

Let's assume that we had 5 stock-taking days where we added stock to EACH
product 5 times this year.
(So each stock item was restocked 5 times during the year. Note that the
sample design allows for price changes as new stock arrives.)


That now means that our Inventory table has 2,500 records.

Let's also assume that each Inventory item has 50 orders in the invoices
(order details) table on average.


That now means our Order Details table has 50 * 2,500 = 125,000 records.


So, what we want to do is calculate quantity on hand.

So, we've got 125,000 detail records, 2,500 inventory items (and 500
products).

Remember, in a modern system we do NOT store the quantity on hand, but must
calculate it on the fly.

The beauty of this approach is that I can then simply delete, add, or modify
records, and the totals for inventory are always 100% correct.

Further, I can just paint away with the
forms designer and build subforms etc. for invoice details where users
can enter the quantity ordered. Again, no special code is needed to
update the inventory stocks, since I *CALCULATE* it on the fly
when needed.

That means the sql has to join up all records that belong to each
product..and sum them, and then subtract the quantities in the
invoice details.

How long does it take to total up all of the in-stock records (that is,
running SQL statements to total all Inventory additions less all those
125,000 order details to come up with a total for EACH product)?

On an average PC today, MS Access will total up and generate the quantity on
hand for those 125,000 detail records in LESS than 1 second. (This time
includes the matching, adding, and subtracting of all orders in the system.)

JET as a standalone system with no network is faster than SQL Server in this
regard.
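The on-the-fly calculation described above is just a grouped sum of inventory additions minus a grouped sum of order-detail quantities. The table and column names below are hypothetical, and this sketch uses SQLite through Python rather than Jet (Jet would typically use Nz() where SQLite uses IFNULL), but the shape of the query is the same.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical schema mirroring the example: stock additions in Inventory,
# quantities sold in OrderDetails.
cur.executescript("""
CREATE TABLE Inventory (ProductID INTEGER, QtyAdded INTEGER);
CREATE TABLE OrderDetails (ProductID INTEGER, QtyOrdered INTEGER);
INSERT INTO Inventory VALUES (1, 100), (1, 50), (2, 200);
INSERT INTO OrderDetails VALUES (1, 30), (1, 20), (2, 75);
""")

# Quantity on hand = total stocked minus total ordered, per product.
rows = cur.execute("""
SELECT i.ProductID,
       i.TotalIn - IFNULL(o.TotalOut, 0) AS QtyOnHand
FROM (SELECT ProductID, SUM(QtyAdded) AS TotalIn
      FROM Inventory GROUP BY ProductID) AS i
LEFT JOIN (SELECT ProductID, SUM(QtyOrdered) AS TotalOut
           FROM OrderDetails GROUP BY ProductID) AS o
  ON i.ProductID = o.ProductID
ORDER BY i.ProductID
""").fetchall()
print(rows)  # [(1, 100), (2, 125)]
```

Because nothing is stored, deleting or editing any Inventory or OrderDetails row leaves the computed total correct by construction.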
 
The database that is causing me a problem is about 1.4 GB. Most of that is
one large table, with 1.1 million records, each with 54 fields. It takes
about 80 seconds to open that table. Running a query can take a couple of
minutes. During that time, Access never exceeds 36 MB of RAM, and generally
takes about 15% of the CPU cycles. Meanwhile, the disk is churning. I've
got to think that if Access could use 800 MB of RAM, less disk usage would be
necessary.
 
On-Tech said:
The database that is causing me a problem is about 1.4 GB. Most of
that is one large table, with 1.1 million records, each with 54
fields. It takes about 80 seconds to open that table. Running a
query can take a couple of minutes. During that time, Access never
exceeds 36 MB of RAM, and generally takes about 15% of the CPU
cycles. Meanwhile, the disk is churning. I've got to think that if
Access could use 800 MB of RAM, less disk usage would be necessary.

That data is stored on the disk, and what is taking the time is reading it off
the disk. RAM is not relevant to that task.

Have you looked at your indexing? That is where most query performance issues
are resolved. That and the design of the queries themselves.
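As a sketch of why indexing is the first place to look: an index lets the engine seek directly to the matching rows instead of scanning every record. The table and index names below are made up, and this uses SQLite via Python rather than Jet, but the principle carries over to Access.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Big (ID INTEGER, Category TEXT, Value REAL)")
cur.executemany(
    "INSERT INTO Big VALUES (?, ?, ?)",
    ((i, f"cat{i % 100}", i * 0.5) for i in range(100_000)),
)

query = "SELECT * FROM Big WHERE Category = 'cat7'"

# Without an index, the WHERE criterion forces a full table scan.
plan_before = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Index the column used in the criterion...
cur.execute("CREATE INDEX idx_category ON Big (Category)")

# ...and the engine now seeks through the index instead of scanning.
plan_after = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before)  # plan describes a scan of Big
print(plan_after)   # plan mentions idx_category
```

In Access, the equivalent is setting Indexed on the fields used in joins and criteria (or running a CREATE INDEX DDL query).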
 
Do you regularly compact your database? Are you running a split, front-end
(queries, forms, reports, macros, modules) and back-end (tables with data,
relationships)?

The best collection of information and links about multiuser performance and
avoiding corruption (but many of the suggestions apply to single-user
databases, too) that I know about is at MVP Tony Toews' site,
http://www.granite.ab.ca/accsmstr.htm. It would be worth a look.

And, I concur with Rick Brandt's suggestion about indexing.

Larry Linson
Microsoft Access MVP
 
On-Tech said:
The database that is causing me a problem is about 1.4 GB. Most of
that is one large table, with 1.1 million records, each with 54
fields. It takes about 80 seconds to open that table. Running a
query can take a couple of minutes. During that time, Access never
exceeds 36 MB of RAM, and generally takes about 15% of the CPU
cycles. Meanwhile, the disk is churning. I've got to think that if
Access could use 800 MB of RAM, less disk usage would be necessary.

I wonder. 54 fields is a lot. It may be fine, but it may also be an
indication of lack of normalization. Have you examined the table for
possible improvements in normalization?
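One common normalization improvement for a wide table like that: columns that repeat the same descriptive values on many rows can be moved into a lookup table and replaced by a key, which shrinks the table and the disk I/O needed to read it. The schema below is entirely hypothetical, sketched in SQLite via Python rather than Jet.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical wide import table: supplier details repeated on every row.
cur.executescript("""
CREATE TABLE ImportWide (Part TEXT, SupplierName TEXT, SupplierCity TEXT, Qty INTEGER);
INSERT INTO ImportWide VALUES
  ('bolt', 'Acme',   'Springfield', 10),
  ('nut',  'Acme',   'Springfield', 20),
  ('gear', 'Globex', 'Shelbyville', 5);
""")

# Normalized: one row per supplier, referenced by ID from the detail table.
cur.executescript("""
CREATE TABLE Suppliers (SupplierID INTEGER PRIMARY KEY, Name TEXT UNIQUE, City TEXT);
INSERT INTO Suppliers (Name, City)
  SELECT DISTINCT SupplierName, SupplierCity FROM ImportWide;
CREATE TABLE Parts (Part TEXT, SupplierID INTEGER, Qty INTEGER);
INSERT INTO Parts
  SELECT w.Part, s.SupplierID, w.Qty
  FROM ImportWide AS w JOIN Suppliers AS s ON s.Name = w.SupplierName;
""")

# A join reconstructs the original wide view on demand.
rows = cur.execute("""
  SELECT p.Part, s.Name, p.Qty FROM Parts AS p
  JOIN Suppliers AS s ON s.SupplierID = p.SupplierID
  ORDER BY p.Part
""").fetchall()
print(rows)
```

The import-from-text-file workflow can keep loading into the wide staging table; append queries can then distribute the data into the narrow tables.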
 
I wonder. 54 fields is a lot. It may be fine, but it may also be an
indication of lack of normalization. Have you examined the table for
possible improvements in normalization?

Yep, it is definitely not a well-designed database. I'm importing
information from text files frequently, and that was easier to set up by
having one table with all the fields. And now, of course, I've got all sorts
of queries, forms and reports built on this creaky structure. One of these
days....

What I was hoping was that I could persuade Access to cache more in RAM, to
reduce the amount of disk I/O.
 
On-Tech said:
Yep, it is definitely not a well-designed database. I'm importing
information from text files frequently, and that was easier to set up
by having one table with all the fields. And now, of course, I've
got all sorts of queries, forms and reports built on this creaky
structure. One of these days....

What I was hoping was that I could persuade Access to cache more in
RAM, to reduce the amount of disk I/O.

Would you expect to be able to copy 1GB of data from one hard drive to another
faster with more RAM? You have a lot of data stored on a disk. If you run a
query that needs to pull lots of that data then that data has to be read from
the disk. Disk i/o is slow, probably the slowest part of a computer.
Additional RAM does not address this fundamental problem.

Having proper indexing DOES address the problem, because reading an index takes
significantly less disk I/O than doing a table scan. That is always the primary
place to look for improving query speed. Even that, though, only helps with joins
and when using criteria to pull a subset. If you have a very large database and
you need to run queries against ALL of it, then that is going to be slow, and
there is not much you can do about it except to upgrade the system in ways that
improve disk performance.
 
Would you expect to be able to copy 1GB of data from one hard drive to another
faster with more RAM? You have a lot of data stored on a disk. If you run a
query that needs to pull lots of that data then that data has to be read from
the disk. Disk i/o is slow, probably the slowest part of a computer.
Additional RAM does not address this fundamental problem.

Additional RAM would address this problem if Access would load the data into
RAM once, then work on it there, rather than going to disk over and over to
get the same data. Copying data from disk to disk isn't affected much by the
amount of RAM, but being able to load a file into RAM before manipulating it
certainly speeds things up.
 
On-Tech said:
Additional RAM would address this problem if Access would load the
data into RAM once, then work on it there, rather than going to disk
over and over to get the same data. Copying data from disk to disk
isn't affected much by the amount of RAM, but being able to load a
file into RAM before manipulating it certainly speeds things up.

More than likely it IS doing exactly that. Access queries are almost always
faster after being run once because the data is cached in memory after the first
run.

Is your problem that you are running the same or similar queries over and over
again or are you running different queries that need to pull lots of new data
each time?

Windows programs (and Windows itself) will generally grab as much RAM as they
need to do a task. There should be no need to "force" it to use more.
 
Is your problem that you are running the same or similar queries over and over
again or are you running different queries that need to pull lots of new data
each time?

I do run a variety of different queries, or the same query with a few
criteria changed, but no matter what I do, Access doesn't use more than 36 MB.
 
On-Tech said:
I do run a variety of different queries, or the same query with a few
criteria changed, but no matter what I do, Access doesn't use more
than 36 MB.

Then that is all it needs.
 