Limits of Access 2007


Erwin Bormans

Hi everyone,

I made an Access application with VBA in MS Access 2007.

It started as a side program just to help out a little, but it grew into a
full-size program that is used daily and is of great importance.

It keeps track of the metal constructions that glass (windows, etc.) gets
delivered on. As those constructions are delivered on a daily basis, the
database is growing quite fast.

How many records can Access keep track of? Do I need to worry that resources
will run out shortly?

At the moment the delivery table contains about 6,000 records and the
total file size is approximately 30 MB.

Thanks in advance!

Kind regards,
Erwin Bormans
 

Daniel Pineault

For some general specs on the limits of Access 2007, see:
http://blogs.msdn.com/access/archive/2006/06/05/access-2007-limits.aspx
http://www.databasedev.co.uk/access_specifications.html

That said, I have dbs with hundreds of thousands of records and they run
without any issue. I have seen MVPs mention dbs with millions of records.
6,000 should not present a problem!

As always, perform regular backups and a compact and repair from time to
time (just some house cleaning) to ensure optimal performance.
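
If you want to automate that housekeeping, here is a rough VBA sketch. It
assumes a split design with the data in a separate back-end file; the paths
are just examples, and nobody can have the back end open while it runs:

Public Sub BackupAndCompactBackEnd()
    ' Example paths only - adjust to your own folders and file names.
    Const BE_PATH As String = "C:\Data\Deliveries_be.accdb"
    Const TMP_PATH As String = "C:\Data\Deliveries_be_compact.accdb"
    Dim backupPath As String

    ' 1. Take a dated backup copy before touching anything.
    backupPath = "C:\Data\Backups\Deliveries_be_" & _
                 Format(Now, "yyyymmdd_hhnnss") & ".accdb"
    FileCopy BE_PATH, backupPath

    ' 2. Compact into a new file, then swap it in for the original.
    DBEngine.CompactDatabase BE_PATH, TMP_PATH
    Kill BE_PATH
    Name TMP_PATH As BE_PATH
End Sub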
--
Hope this helps,

Daniel Pineault
http://www.cardaconsultants.com/
For Access Tips and Examples: http://www.devhut.net
Please rate this post using the vote buttons if it was helpful.
 

Daniel Pineault

Just found a direct answer to your question.

"As has been the case with every version of Access, there is no limit in
terms of maximum number of rows. The only limit is that a table must be in a
single MDB (or ACCDB) file, and that the size of the file cannot exceed 2
Gb.

--
Doug Steele, Microsoft Access MVP"

The entire post can be found at:
http://www.thecodecage.com/forumz/a...maximum-number-rows-access-2007-can-take.html
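
If you want to keep an eye on how close you are getting to that 2 GB
ceiling, a quick check like the following will do. The back-end path is
just an assumed example; FileLen returns the file size in bytes:

Public Sub CheckBackEndSize()
    Const BE_PATH As String = "C:\Data\Deliveries_be.accdb"   ' example path
    Dim maxBytes As Double, pct As Double

    maxBytes = 2# * 1024 ^ 3                  ' the 2 GB file size limit
    pct = FileLen(BE_PATH) / maxBytes * 100   ' FileLen returns bytes

    Debug.Print Format(pct, "0.0") & "% of the 2 GB limit used"
    If pct > 75 Then
        MsgBox "The back end is over 75% of the 2 GB limit.", vbExclamation
    End If
End Sub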
--
Hope this helps,

Daniel Pineault
http://www.cardaconsultants.com/
For Access Tips and Examples: http://www.devhut.net
Please rate this post using the vote buttons if it was helpful.
 

Albert D. Kallal

At the moment the delivery table contains about 6,000 records and the
total file size is approximately 30 MB.

Have you done a compact and repair? What is the size after that?

Anyway, I have many applications that have 5 or even 10 tables, all with
50,000 or more records...
 

David W. Fenton

"Daniel Pineault" wrote:

Just found a direct answer to your question.

"As has been the case with every version of Access, there is no
limit in terms of maximum number of rows. The only limit is that a
table must be in a single MDB (or ACCDB) file, and that the size
of the file cannot exceed 2 Gb.

The entire post can be found at:
http://www.thecodecage.com/forumz

There's this thread on StackOverflow.com:

http://stackoverflow.com/questions/1221435/

I tried to make the smallest possible records and see how many I
could insert into a single table in order to see how big the MDB
file got. I finally gave up when I got to 21.7 million records and a
file of 600 MB.
 

Tom van Stiphout

On 19 Nov 2009 01:53:22 GMT, "David W. Fenton" wrote:

Hi David,
If I understand your stackoverflow post correctly you used a very slow
algorithm (one row at a time) to add rows to a table. Why not use an
insert statement on the table, selecting from the entire set? Thus
your inserts would get exponentially larger. Then perhaps cap it at
1E6 records. My guess is you would hit the 2 GB limit in a few
minutes.

-Tom.
Microsoft Access MVP
 

david

Then perhaps cap it at 1E6 records. My guess is you would hit the 2 GB
limit in a few minutes.

It still took me a couple of hours (leaving it to work) to get to 2 GB.

Although the inserts got exponentially larger, they also got exponentially
slower.

That was probably on Windows 2000 with 1GB of memory.

Also, when you compact an MDB, Windows tries (tried?) to bring the
whole file into memory, so compacting that file was slow too.

132 million unindexed Boolean values, so it wasn't just the indexing that
slowed down the append query:

http://groups.google.com/group/micr...a9d34f73c0/841f6f703e386e8e?#841f6f703e386e8e

(david)
 

Tony Toews [MVP]

Daniel Pineault said:
That said, I have dbs with hundreds of thousands of records and they run
without any issue. I have seen MVPs mention dbs with millions of records.
6,000 should not present a problem!

A client just upsized their 300 MB back end with 900K records in the
transactions table. In the words of one user "Things are scary
fast." What used to take a minute now takes 5 or 6 seconds. What
used to take 4 or 5 seconds is now "blink of an eye".

This is a time sheet transaction system which is used to pay the 500
to 1,000 employees as well as to generate daily time sheets, which are
signed by the client each day. IOW, a crucial system.

The upsizing has now taken the stress off the ladies who
enter/update the data. They can easily do the daily tasks.

So the point is that there comes a time when you should upsize, and
this system probably should have been upsized two years ago, when they
only had 400K transactions in the tables.

Tony
--
Tony Toews, Microsoft Access MVP
Tony's Main MS Access pages - http://www.granite.ab.ca/accsmstr.htm
Tony's Microsoft Access Blog - http://msmvps.com/blogs/access/
For a free, convenient utility to keep your users FEs and other files
updated see http://www.autofeupdater.com/
Granite Fleet Manager http://www.granitefleet.com/
 

Erwin Bormans

To all,

Thanks for the many replies!

I will sleep soundly now :)

I did do a compact and repair before I quoted the size, but there are a
couple more tables with some thousands of records as well...

Anyway thank you very much!

Kind regards,
Erwin
 

David W. Fenton

If I understand your stackoverflow post correctly you used a very
slow algorithm (one row at a time) to add rows to a table. Why not
use an insert statement on the table, selecting from the entire
set? Thus your inserts would get exponentially larger. Then
perhaps cap it at 1E6 records. My guess is you would run out of
the 2GB limit in a few minutes.

No, I didn't. I was using a 4-column table where all the fields were
Byte fields. I generated values based on the records already in the
table, but with the inserted values generated by random numbers. So,
each batch would be larger by the number of records inserted. For
instance, if you start with 1 record, and insert as many records as
already exist, you have two. Next time, you start with 2 and insert
2. Next time you start with 4 and end up with 8. Before long, you've got
thousands of records.

This is not the original SQL, but very similar to it:

INSERT INTO tblGBTest ( PK1, PK2, PK3, PK4 )
SELECT 255*Rnd([PK1]), 255*Rnd([PK2]), 255*Rnd([PK3]), 255*Rnd([PK4])
FROM tblGBTest;

The target table had a composite unique key on the four columns, so
once the number of records became large enough, there started being
collisions on existing values.

That was a requirement of the test scenario, actually, so taking off
the index and running the test would not have met the requirements
set out in the original question.
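
For what it's worth, a simple driver loop along the following lines can
re-run such an append query until the file approaches the 2 GB ceiling.
This is not the code I actually used; the saved query name (qryDoubleRows)
and the stop threshold are just placeholders:

Public Sub GrowTestTable()
    Dim stopBytes As Double
    stopBytes = 1.9 * 1024 ^ 3   ' stop a little short of the 2 GB limit

    Do While FileLen(CurrentProject.FullName) < stopBytes
        ' qryDoubleRows is assumed to hold the append query shown above.
        ' Without dbFailOnError, rows that collide with the composite
        ' unique key are dropped and the remaining rows are appended.
        CurrentDb.Execute "qryDoubleRows"
        DoEvents
    Loop
End Sub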
 

David W. Fenton

A client just upsized their 300 MB back end with 900K records in
the transactions table. In the words of one user "Things are
scary fast." What used to take a minute now takes 5 or 6
seconds. What used to take 4 or 5 seconds is now "blink of an
eye".

The largest app I've upsized was a 300 MB back end with 350K records
in the main table and 600K in the main child table. After upsizing,
the main operations were not noticeably faster than before, because
all the editing was via forms that loaded filtered recordsets to begin
with.

A few data-intensive operations did speed up, though, particularly
calculating totals on the child data. But not nearly as much sped
up as I expected.
 

John W. Vinson

I did do a compact and repair before I quoted the size, but there are a
couple more tables with some thousands of records as well...

If you have hundreds of thousands of records you need to work carefully on
optimization. If you have millions, you need to optimize VERY carefully, or
(better) move to SQL Server or another client/server solution.


If you have "some thousands" you're very comfortably within Access' comfort
zone.
 
