MS-Access 2003 and SQL Server 2000 speed comparison

Grasshopper

Hi,

I have a table with 8 million records. Some of the fields need to be
parsed and modified. For example, "ABC DEF" will be parsed into two
fields, "ABC" and "DEF". Also, if "123" appears in a field, it will be
replaced by "456", so "XYZ 123 AAA" will become "XYZ 456 AAA". Update
queries and append queries will also be used in the project.
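
For illustration, the replace step and the split step might look
roughly like this in Access SQL (Addresses, RawValue, Part1 and Part2
are just placeholder names, and this assumes the queries are run from
inside Access so that VBA functions such as Replace() are available):

    UPDATE Addresses
    SET RawValue = Replace(RawValue, "123", "456")
    WHERE RawValue LIKE "*123*";

    UPDATE Addresses
    SET Part1 = Left(RawValue, InStr(RawValue, " ") - 1),
        Part2 = Mid(RawValue, InStr(RawValue, " ") + 1)
    WHERE InStr(RawValue, " ") > 0;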


Given the above information, which would be the faster database for
this job - MS-Access 2003 or SQL Server 2000?

Thanks.

- Grasshopper -
 
Arvin Meyer [MVP]

I certainly hope that your parsing is driven by the structure of the
data and not by specific values. That is: you might want to parse first
name and last name out of a full-name field, but you wouldn't parse all
the Earls and Barons, or the Mr. and Ms., into separate fields. You'd
add a Title field in the first case and a Gender or Salutation field in
the second.

That said, Access is a bit more nimble and easier to set up, but the
speed will depend more on the processing power put to the task.
However, they didn't name it Jet because it was slow. Since I work in
both, I'd probably do the manipulation in Access, but I'd soon be
looking at SQL Server for storage unless the 8 million records won't
grow much and there aren't many users.
 
Grasshopper

Thanks Arvin. So you are saying that Access is good enough in this
case if the table size remains at 8 million records?

In your opinion, at roughly how many records would the speed
difference start to become significant?

I will soon need some help writing a faster SQL statement in Access.
I ran a query in Access against one million records and it took 3
hours to complete. It is not a simple Select query but an Update query
based on a subquery. Do you think 3 hours is reasonable, or is there
still room for improvement by modifying the SQL?
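
To give a sense of its shape, the query is roughly of this form (the
real table and field names are different):

    UPDATE Addresses
    SET Status = "CHECKED"
    WHERE StreetName IN
      (SELECT StreetName FROM Addresses
       GROUP BY StreetName
       HAVING Count(*) > 1);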

(The project I am involved with is a street related project.)


- Grasshopper -
 
Larry Linson

Grasshopper said:
Thanks Arvin. So you are saying that Access is
good enough in this case if the table size remains
at 8 million records?

Some Access database applications handle millions of records and do so
efficiently, rapidly, and well. Others handle far fewer records
inefficiently, slowly, and poorly. There are far too many factors
entering into performance to make a judgment on data size alone. Some
include: the hardware, software, and network environments, and the
application's requirements, design, and implementation.
In your opinion, at roughly how many records would
the speed difference start to become significant?

Even with an in-depth study of the application, that would be difficult or
impossible to predict.
I will soon need some help writing a faster SQL
statement in Access. I ran a query in Access against
one million records and it took 3 hours to complete.
It is not a simple Select query but an Update query
based on a subquery. Do you think 3 hours is
reasonable, or is there still room for improvement
by modifying the SQL?

(Former MVP Tom Ellison is a specialist in complex queries and SQL
performance. Because even the free MSDE allows multithreading, which
Jet does not, he used an Access front-end and MSDE back-end even in a
single-user environment.) But, if I understand your case, you are
accessing a million or so records, executing the subquery a million or
so times, and then, in at least some of those cases, updating the
record. That's quite a lot of processing.

Larry Linson
Microsoft Access MVP
 
Grasshopper

Hi Larry,

Thank you for sharing. You provided a very important piece of
information here - that the subquery runs once for each record. This
may explain why the query is running so slowly. I will need to find a
way to avoid using the subquery.

My "database" is just a flat table full of addresses that need to be
processed through queries. I am wondering if it is worth spending the
effort (time and money) to move the database to SQL Server for the
speed gain.

Also, assuming I use SQL Server as the back end and Access as the
front end (for query purposes), would the performance be significantly
slower than if I ran the queries directly in SQL Server?

- Grasshopper -
 
John W. Vinson

Thank you for sharing. You provided a very important piece of
information here - that the subquery runs once for each record. This
may explain why the query is running so slowly. I will need to find a
way to avoid using the subquery.

If you can do it using a Join rather than a Subquery, and if you have
the appropriate indexes, your performance should improve - perhaps
dramatically.
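
As a rough sketch, a subquery-driven update can often be rewritten as
a joined update. Assuming a lookup table named Corrections with fields
OldValue and NewValue (names invented for the example), it might look
like this:

    UPDATE Addresses INNER JOIN Corrections
      ON Addresses.StreetName = Corrections.OldValue
    SET Addresses.StreetName = Corrections.NewValue;

For the joined update to stay updateable in Jet, OldValue should carry
a unique index, which also happens to be exactly the index that makes
the join fast.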
My "database" is just a flat table full of addresses that need to be
processed through queries. I am wondering if it is worth spending the
effort (time and money) to move the database to SQL Server for the
speed gain.

A badly designed query will run slowly in Jet - and will run slowly in
SQL. You'll get a great deal more benefit out of optimizing your
indexes and (especially) your query design than you will out of moving
to SQL.
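
For example, putting an index on the field used in the join or in the
WHERE clause is a one-line change (the names are again placeholders):

    CREATE INDEX idxStreetName ON Addresses (StreetName);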
Also, assuming I use SQL Server as the back end and Access as the
front end (for query purposes), would the performance be significantly
slower than if I ran the queries directly in SQL Server?

Not particularly. Since SQL Server - by itself - doesn't have the
forms, reports, and other data presentation facilities that Access
provides, "doing everything in SQL Server" isn't really an option; and
Access can be very good at formulating queries and having the back end
- Jet or SQL Server - run them.

John W. Vinson [MVP]
 
dbahooker

John

you CAN 'do everything in SQL' by using Access Data Projects

ADP has an 'index tuning wizard' that comes with SQL Server Developer
Edition.. a whole $49

and SQL Server supports multiple processors

I would recommend using full text search rather than splitting
everything like that; FTS is a great tool
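
A minimal sketch of what an FTS lookup looks like, assuming a
full-text catalog and index have already been built on a hypothetical
Addresses table:

    -- find every row whose RawValue column contains the token "123"
    SELECT AddressID, RawValue
    FROM Addresses
    WHERE CONTAINS(RawValue, '"123"');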
 
dbahooker

Re:
A badly designed query will run slowly in Jet - and will run slowly in
SQL. You'll get a great deal more benefit out of optimizing your
indexes and (especially) your query design than you will out of moving
to SQL.

what he means to say is this:

a) lose the training wheels
b) of course you should use SQL Server
c) SQL Server has a QUERY OPTIMIZER
d) you should split your query into temp tables, etc., because you've
obviously got locking problems. There isn't anything in the world I
couldn't do with 8m records in an hour using SQL Server.
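
A rough sketch of the temp-table split, in T-SQL, with made-up table
and column names:

    -- materialize the subquery result once instead of re-running it per row
    SELECT StreetName
    INTO #Dups
    FROM Addresses
    GROUP BY StreetName
    HAVING COUNT(*) > 1;

    -- then join the update to the temp table
    UPDATE a
    SET a.Status = 'CHECKED'
    FROM Addresses AS a
    INNER JOIN #Dups AS d ON a.StreetName = d.StreetName;

    DROP TABLE #Dups;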
 
Arvin Meyer [MVP]

As Larry and John mentioned, your speed problem is most likely due to
the subquery and to indexes (or the lack thereof). I rarely use
subqueries, preferring to run the SQL statement, save the result, and
then join the result to the tables in a second query. That removes the
need to rerun the subquery for every row. If it's an Update query, it
is crucial to use well-designed indexes. SQL Server won't speed things
up much unless the machine is faster and has much more RAM, and the
log is truncated (something you probably shouldn't do with an Update
query).
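
In Access terms that is the same idea as the temp-table sketch above:
first materialize the subquery's result, then join to it. Assuming the
same made-up names:

    SELECT StreetName INTO tmpStreets
    FROM Addresses
    GROUP BY StreetName
    HAVING Count(*) > 1;

    CREATE UNIQUE INDEX idxStreet ON tmpStreets (StreetName);

    UPDATE Addresses INNER JOIN tmpStreets
      ON Addresses.StreetName = tmpStreets.StreetName
    SET Addresses.Status = "CHECKED";

The unique index on tmpStreets keeps the joined update updateable in
Jet and gives the join something to seek on.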
 
dbahooker

Hey Arvin;

STFU and stop trying to data warehouse in MDB.

of _COURSE_ SQL Server is bigger, better, faster and the same price--
don't be a retard; move to SQL Server.

FIRE and then SPIT ON anyone that still uses MDB for anything

ADP won the war back in 1999

-Aaron
 
dbahooker

for the record, correlated subqueries ROCK

instead of scanning the same table 10 times of COURSE you should put
it in SQL Server

relying on hearsay from senile MDB ****tards is a laughable strategy

-Aaron
 
