Not sure there is a simple solution for your question, especially if other
users are potentially inserting/deleting/editing between your 50k query
runs, and especially with the way Access handles TOP.
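For background, the closest thing Access has to OFFSET/LIMIT is a nested TOP,
along the lines of the sketch below. This is an outline only: [ClaimID] is a
made-up stand-in for your table's primary key. Ending the ORDER BY in a unique
key avoids the way JET's TOP returns every tied row at the cut-off, but the
NOT IN subquery is slow on a table this size, and the pages shift if anyone
edits between passes - which is why I would not lean on it here.

Dim strSQL As String
' Sketch: skip the first 100,000 rows and take the next 50,000.
' [ClaimID] is a placeholder for the table's real primary key.
strSQL = "SELECT TOP 50000 * FROM [Quarterly Data - Passed] " & _
         "WHERE [ClaimID] NOT IN " & _
         "(SELECT TOP 100000 [ClaimID] FROM [Quarterly Data - Passed] " & _
         "ORDER BY [Procedure Code], [ClaimID]) " & _
         "ORDER BY [Procedure Code], [ClaimID]"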
Crucial factors might be:
1. Make sure you allow Windows as much swapfile space as it needs.
2. Make sure you maintain lots of free hard disk space on the drive that
hosts your temp folder and swapfile.
3. Compact the database periodically.
4. Make sure all tables in the query have a primary key.
5. If necessary, create a temp table to hold just the primary key values of
the records you want to select, and then inner join it to the main query, so
Access has a simple and efficient selection group to work with (a rough sketch
follows below).
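Point 5 might look something like this in code. It is only a sketch: [tmpKeys]
and [ClaimID] are made-up names (use your table's real primary key), and it
assumes the ADO reference your recSet code already implies.

Sub SelectViaKeyTable(ByVal yearCounter As Long)
    Dim strSQL As String
    Dim recSet As New ADODB.Recordset

    ' Stage just the primary keys of the rows you want
    ' (drop [tmpKeys] first if it already exists).
    CurrentDb.Execute "SELECT [ClaimID] INTO [tmpKeys] " & _
        "FROM [Quarterly Data - Passed] WHERE [Year] = " & yearCounter, _
        dbFailOnError

    ' Join the keys back, so the main query only has to fetch and
    ' sort that small selection group.
    strSQL = "SELECT t.* FROM [Quarterly Data - Passed] AS t " & _
        "INNER JOIN [tmpKeys] AS k ON t.[ClaimID] = k.[ClaimID] " & _
        "ORDER BY t.[Procedure Code], t.[Major Program], t.[Recipient ID Num]"
    recSet.Open strSQL, CurrentProject.Connection, adOpenForwardOnly, adLockReadOnly
End Sub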
--
Allen Browne - Microsoft MVP. Perth, Western Australia.
Reply to group, rather than allenbrowne at mvps dot org.
Dennis said:
Oh, I forgot - back to my original issue:
I have a LARGE table. Big. Huge. ;^)
My goal is to limit the number of records read and processed at a time (to
say, 50,000 or so), while maintaining the integrity of the sort. I have been
getting "Out of Memory" errors. So is there some way I can use something like
OFFSET, LIMIT that will allow the sort to be maintained properly across
multiple passes?
Thanks!
Allen Browne said:
Dennis, you don't need to do anything.
The Rushmore technology in JET (the query engine in Access) will automatically
use the index if the ORDER BY clause matches the same fields in the same order.
Rushmore is actually very good at using the indexes automatically.
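To make that concrete, "matches the same fields in the same order" just means
the index lists the ORDER BY fields, field for field. You have presumably
already built Pass2Index through the table designer, so this is only a sketch
of the shape it needs (names taken from your post below), if you ever wanted
to create it in code instead:

' Sketch only: the index fields mirror the ORDER BY, in the same order.
CurrentDb.Execute _
    "CREATE INDEX Pass2Index ON [Quarterly Data - Passed] " & _
    "([Procedure Code], [Major Program], [Recipient ID Num], " & _
    "[Service Date From], [Health Plan Pmt Date], [Mod1], " & _
    "[Health Plan Claim Identifier], [Table Insert Date] DESC)", _
    dbFailOnError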
I created a multi-field index (named "Pass2Index") for a dataset. The fields
are all the ones in the ORDER BY below:
recSet.Open "SELECT * FROM [Quarterly Data - Passed] WHERE [Year] = " & _
    yearCounter & " ORDER BY [Procedure Code], [Major Program], " & _
    "[Recipient ID Num], [Service Date From], [Health Plan Pmt Date], [Mod1], " & _
    "[Health Plan Claim Identifier], [Table Insert Date] DESC"
Now, how do I actually USE that index in a SELECT statement such as the one
above? What do I need to change, and to what?
Thanks!!