Ray S.
Let me see if I can simplify this. I am running MS Access 2003 SP3, connecting to a mainframe server that contains my company's financial data. Twice a day the data on the mainframe is updated for the entire enterprise's general ledger. I only work with three subsidiary companies, doing financial analysis. As you can imagine, the entire data file is quite large, so I need to use every means possible to reduce the amount of data I acquire for my analyses. There are tables on the mainframe that let me filter out the companies I don't need to look at, so my query to the mainframe includes joins to those tables. That lets me exclude closed cost centers and other elements that would just bloat my local table.

Even doing that, the query still takes about an hour to return my data. The problem is that the result includes all data for all dates posted since the enterprise began keeping these records. I can filter for just the current year's postings, but even that is about half a million records, and every day when I run my query I am reloading all of that data.

What I want to do is limit the daily pull by using the maximum posting date in my prior day's file as a filter for the current selection (the idea being that I select only data posted since my last update). I posted an earlier question asking for help on this and tried exactly what was suggested, but the result was not helpful at all. The last suggestion was to break the subquery into steps that essentially had me bring the entire data set onto my local machine before filtering for the maximum post date. Now, that's ridiculous: it is EXACTLY what I'm trying to avoid. The subquery does not speed up the process in any way. In fact, it hangs until my session gets killed when the next daily post runs on the mainframe.

I'm looking for help in making this process work reasonably fast. I understand the suggested subquery, but it does not work efficiently at all. Any other ideas?
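For context, the shape of the approach I'm after would be something like the following sketch. It assumes a saved pass-through query (here called qryPassThrough) whose ODBC connect string already points at the mainframe, and hypothetical local table/column names (tblLocalGL, PostingDate, GL_POSTINGS, POSTING_DATE); the exact date-literal format in the server-side WHERE clause would depend on the mainframe's DBMS. The point is that the max-date lookup runs locally against yesterday's data, and the date filter is baked into the SQL string sent to the server, so only the new rows ever cross the wire:

```vba
Sub RefreshDeltaFromMainframe()
    Dim db As DAO.Database
    Dim qdf As DAO.QueryDef
    Dim maxDate As Date

    ' Latest posting date already held in the local table
    ' (falls back to an arbitrary early date on first load).
    maxDate = Nz(DMax("PostingDate", "tblLocalGL"), #1/1/2000#)

    Set db = CurrentDb
    ' qryPassThrough is a saved pass-through query; its Connect
    ' property already holds the ODBC string for the mainframe.
    Set qdf = db.QueryDefs("qryPassThrough")

    ' Hard-code the cutoff date as a literal in the server-side SQL,
    ' so the mainframe does the filtering -- no Jet subquery involved.
    ' Table/column names and the date-literal format are placeholders.
    qdf.SQL = "SELECT * FROM GL_POSTINGS " & _
              "WHERE POSTING_DATE > '" & Format(maxDate, "yyyy-mm-dd") & "'"

    ' Append only the newly posted rows to the local table.
    db.Execute "INSERT INTO tblLocalGL SELECT * FROM qryPassThrough", _
               dbFailOnError
End Sub
```

In other words: no correlated subquery at all, just a one-row local lookup followed by a server-side filtered SELECT.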