Access...multiple tables vs many null fields

I am trying to decide whether to break up a table of 160 data elements + 2 indices into 4 tables of 40+2.

Around 80% of the time the last 40 elements of the long record will be null, meaning there would be no record at all in Table 4; 50% of the time the last 80 elements would be null, so no records at all in Tables 4 and 3; and 20% of the time the last 120 would be null, so no records at all in Tables 4, 3, and 2.

Is the extra overhead of repeating the indices when records do exist in Tables 2, 3, or 4 more or less costly than carrying so many null fields in one record, but only needing the indices in that one record?
 
TravisR said:
I am trying to decide whether to break up a table of 160 data elements + 2 indices into 4 tables of 40+2.

Around 80% of the time the last 40 elements of the long record will be
null, meaning there would be no record at all in Table 4; 50% of the time
the last 80 elements would be null, so no records at all in Tables 4 and 3;
and 20% of the time the last 120 would be null, so no records at all in
Tables 4, 3, and 2.
Is the extra overhead of repeating the indices when records do exist in
Tables 2, 3, or 4 more or less costly than carrying so many null fields in
one record, but only needing the indices in that one record?

It *might* be if you had a slow computer and well over a million records.
The exception would be cases where it is absolutely impossible to avoid
looping through huge numbers of records in VBA instead of using SQL
queries, and you need rapid response.
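
To make that concrete, here is a hypothetical example (the Orders table
and its fields are mine, not from this thread) of the kind of set-based
SQL that replaces a VBA loop over a recordset; one UPDATE lets JET touch
every matching record with no per-record code at all:

    UPDATE Orders
    SET Orders.Status = "Archived"
    WHERE Orders.OrderDate < #1/1/2004#;

Doing the same job by opening a recordset and editing each row in a VBA
loop is where the record count (and a slow machine) starts to matter.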
 
lol... it might be which, more or less costly? I assume you mean that breaking it up into 4 tables is OK unless there are millions of records that have to be looped through?
 
I am trying to decide whether to break up a table of 160 data elements + 2 indices into 4 tables of 40+2.

I'd look very closely at whether it might be possible to break it into
two tables of (perhaps) 3 and 2 fields each. It really sounds like you
have a one-to-many (40, 80, 120, or 160) relationship embedded within
each of these exceedingly wide records!
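
A minimal sketch of that idea, with hypothetical names (none of these
identifiers come from the thread): instead of up to 160 repeating
columns, each element becomes its own row in a narrow child table keyed
back to the parent.

    CREATE TABLE Parent (
        ParentID COUNTER CONSTRAINT pkParent PRIMARY KEY,
        BatchRef LONG
    );

    CREATE TABLE Element (
        ParentID LONG NOT NULL REFERENCES Parent (ParentID),
        ElementNo INTEGER NOT NULL,
        ElementValue TEXT(50),
        CONSTRAINT pkElement PRIMARY KEY (ParentID, ElementNo)
    );

With this shape, a record that only has 40 elements simply has 40 rows in
Element; the trailing nulls disappear as data rather than having to be
designed around.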
 
The overhead is NOT the issue here.

If you have 20 fields, 40 fields, 80 fields, then really, it doesn't matter.

Empty string fields do NOT take up disk space. On the other hand, disk space
is SO UNBELIEVABLY CHEAP that counting this issue is like counting the
amount of air your data entry people breathe. As a general rule, this issue
will not hurt performance either.

Also, 160 fields sounds like a VERY HIGH number of fields here. You need to
use as many tables as the solution requires; if that is 4 tables, or 40
tables, then that is what you must use. I must say, however, that I have
NEVER seen a well designed application with anything remotely approaching
150 fields (that number sounds high).
Is the extra overhead of repeating the indices when records do exist in
Tables 2, 3, or 4 more or less costly than carrying so many null fields in
one record, but only needing the indices in that one record?

The above is a non-issue, and a waste of your time. What you need to do here
is get a good data model designed and normalize your data. If your data
is normalized and results in a 160-field single-record table, then that is
what you'd better use (splitting into multiple tables will hurt performance
in this case). However, splitting and normalizing data most of the time
results in better performance (but only if you NEED to!). Don't split up
data because you might increase performance, as that usually doesn't help.
Worrying about a few indexes here and there rarely becomes a make-or-break
issue in terms of performance. If you have a good design, then good
performance usually follows. The rule here is that you move data into
other tables if normalizing your data tells you to.
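
That said, if the 4-way one-to-one split were done anyway, reassembling the
full record is a single query: each optional table carries the parent's key,
and LEFT JOINs bring back parents even when Tables 2-4 hold no row for them.
A hypothetical sketch (the table names are mine; note that JET wants the
nested parentheses):

    SELECT T1.*, T2.*, T3.*, T4.*
    FROM ((Table1 AS T1
        LEFT JOIN Table2 AS T2 ON T1.ID = T2.ID)
        LEFT JOIN Table3 AS T3 ON T1.ID = T3.ID)
        LEFT JOIN Table4 AS T4 ON T1.ID = T4.ID;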

It is not clear if you are dealing with very small tables (say, only 75,000
records) or large tables of 750,000 records. I mean, 5-15 users and 40-50
tables in the 50,000-record range is absolutely nothing for MS Access and a
JET file share.

The ONLY THING you need to worry about is normalizing your data.

Here are some links on this issue:

ACC2002: Database Normalization Basics
http://support.microsoft.com/?id=283878

Database Normalization Tips by Luke Chung
http://www.fmsinc.com/tpapers/genaccess/databasenorm.html

Data Normalization Fundamentals by Luke Chung
http://www.fmsinc.com/tpapers/datanorm/index.html

324613 - Support WebCast: Database Normalization Basics
http://support.microsoft.com/?id=324613
 
It might be more costly if you had to loop through a lot of records.

My only experience with this was on a 300 MHz machine with not much RAM.



TravisR said:
lol... it might be which, more or less costly? I assume you mean that
breaking it up into 4 tables is OK unless there are millions of records
that have to be looped through?
 