Smartin
Not so much a question but an observation.
I have been analyzing fairly large chunks of data (on the order of
25,000 rows) using SELECT TOP xx PERCENT. I noticed that the number of
rows returned often differs from the expected number by tens, or even a
couple of thousand.
The explanation seems to be that SELECT TOP xx PERCENT returns every
row that falls within the cutoff on the ORDER BY column, not a fixed
percentage of the source rows.
For example, if I query
SELECT TOP 95 PERCENT MYTABLE.* FROM MYTABLE ORDER BY SOMEVALUE ASC;
Access seems to calculate 95% of the maximum of SOMEVALUE, then returns
something like the result of
SELECT MYTABLE.* FROM MYTABLE
WHERE SOMEVALUE <= [0.95 * Max(SOMEVALUE)]
ORDER BY SOMEVALUE ASC;
Since many rows might fall at or below that cutoff, the result set can
differ considerably from what you would expect. I suspect rounding
factors into this as well, as I have seen in other posts.
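For what it's worth, the documented behavior of the Jet TOP predicate is that it does not choose between equal values at the cutoff row, which on its own can reproduce the inflated counts. Here is a minimal Python sketch of that rule (top_percent_with_ties is a made-up helper for illustration, not anything in Access):

```python
import math

def top_percent_with_ties(values, pct):
    """Rough sketch of TOP pct PERCENT ... ORDER BY value ASC.

    Takes at least pct% of the rows (rounded up), then keeps every
    additional row that ties with the last row on the sort column,
    since TOP does not split rows with equal sort values.
    """
    rows = sorted(values)
    n = math.ceil(len(rows) * pct / 100)
    # extend past the cutoff while the next row ties with the last one
    while n < len(rows) and rows[n] == rows[n - 1]:
        n += 1
    return rows[:n]

# 25,000 rows where half the values tie: the 95% cutoff lands inside
# the run of duplicates, so every duplicate comes back with it.
data = [1] * 12500 + [2] * 12500
print(len(top_percent_with_ties(data, 95)))                 # 25000, not 23750
print(len(top_percent_with_ties(list(range(25000)), 95)))   # 23750 as expected
```

With distinct sort values the count comes out exact; with heavy duplication near the cutoff it can overshoot by thousands, which matches the discrepancies described above.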