
Why is complex SQL slow when used with an Access front-end and a SQL Server back-end?

I understand that for complex queries with multiple joins it is preferable to use pass-through queries. My confusion is about what happens when we do not use a pass-through query for complex SQL statements.

Is it slow because the ODBC driver cannot parse the query, SQL Server cannot understand it, and so all the data is sent through the network pipe to be queried by Access itself?

OR

Is it slow because, even though the ODBC driver can parse the SQL statement, it takes a lot of time to do so?

First of all, if a pass-through query is used with an ODBC driver, Access itself makes no attempt to parse the query and instead just sends it to the database via the ODBC driver. In this way, one can submit queries in the native SQL dialect understood by the server. This allows specialized and/or highly optimized queries to be submitted that Access could not execute itself. Further, such queries can also refer to server tables (and other objects) that are not linked in Access.
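
As an illustration, here is a minimal VBA/DAO sketch of how such a pass-through query might be created in code. The query name, connection string, and T-SQL statement are placeholders of my own, not anything taken from the question:

    Dim db As DAO.Database
    Dim qdf As DAO.QueryDef

    Set db = CurrentDb
    Set qdf = db.CreateQueryDef("qryServerSales")

    ' An ODBC connect string marks the QueryDef as pass-through:
    ' Access hands the SQL to the server untouched.
    qdf.Connect = "ODBC;DRIVER={ODBC Driver 17 for SQL Server};" & _
                  "SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=Yes;"

    ' Written in T-SQL, the server's native dialect; Access does not parse it.
    qdf.SQL = "SELECT c.CustomerName, SUM(o.Amount) AS Total " & _
              "FROM dbo.Customers AS c " & _
              "JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID " & _
              "GROUP BY c.CustomerName;"

    qdf.ReturnsRecords = True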

In the case of a "normal" Access query (not a pass-through), the Access database engine will attempt to parse, interpret and optimize the query according to its own capabilities. In doing so, Access will construct one or multiple queries that it will send to the server also via the ODBC driver . Upon receiving the data from the server, it will then apply all residual joins and criteria to the data in order to satisfy the overall SQL statement-- this is done locally by Access, independent of the remote server .

Just as others commented, Access is sometimes smart enough to instruct the remote server to perform some joins or to apply some criteria (e.g., WHERE conditions), but the Access engine is not smart enough to always choose the best optimization. That is especially true when the query draws on both local Access tables and remote tables. These limitations are precisely why the pass-through query exists as an option: the programmer can intervene by sending an optimized query to the server and then performing the remaining joins and criteria using additional Access queries.
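
A sketch of that two-step pattern, reusing the hypothetical pass-through query from above (qryServerSales) and an invented local table tblLocalNotes:

    Dim db As DAO.Database
    Set db = CurrentDb

    ' Step 1 is the pass-through query (qryServerSales), which the server
    ' optimizes and executes. Step 2 is a normal Access query that joins its
    ' already-reduced result set to a local table.
    db.CreateQueryDef "qryReport", _
        "SELECT s.CustomerName, s.Total, n.Comment " & _
        "FROM qryServerSales AS s " & _
        "INNER JOIN tblLocalNotes AS n ON n.CustomerName = s.CustomerName;"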

Thus, any query sent to the server must go through the ODBC driver. It is correct that the ODBC driver does some initial parsing of the statement so that it can handle parameters, etc., but statement parsing is only the beginning of an efficient database operation. A database engine stores detailed information about indexes, constraints, relationships, etc. It uses such metadata to retrieve, combine and sort data efficiently, and it stores that metadata alongside the tables. A remote server therefore holds all of that metadata on the remote server, where Access cannot use it. Access and SQL Server (and any other RDBMS) have different database engines and are not designed to exchange the underlying metadata (e.g., indexes and constraints, as mentioned before). It is worth noting that some minimal information, such as a primary key, can sometimes be specified to help Access use remote tables more efficiently, but this really is minimal and does not guarantee efficient data operations.
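
One common example of that minimal assistance (the names here are illustrative): when a SQL Server view is linked, Access receives no key metadata for it, but a local pseudo-index can be declared so that Access at least knows which column uniquely identifies a row:

    ' Access DDL executed against the local database; dbo_vwCustomerTotals is
    ' an assumed linked view and CustomerID its assumed unique column.
    CurrentDb.Execute _
        "CREATE UNIQUE INDEX uidxCustomer ON dbo_vwCustomerTotals (CustomerID) WITH PRIMARY;", _
        dbFailOnError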


In response to the comment about query efficiency "after all these years"...

The fact is that Access is largely an older technology with fairly basic relational database capabilities, built primarily for local databases. It was never designed to be optimized for remote data operations. Furthermore, the underlying database engines of Access and SQL Server (or any other RDBMS) are not compatible; they each have their own way of storing data and metadata. The only interoperability comes via the SQL statements you are already aware of, which were discussed in the previous paragraphs. There are no standard SQL constructs for exchanging the complex metadata needed to fully optimize queries between Access and remote servers, at least not beyond standard JOINs and WHERE conditions.

However, it is completely reasonable to expect progress over the years. Advanced database servers, SQL Server included, do indeed support means of replicating data tables and other objects across remote servers. In that case, highly efficient queries can be crafted that request data from tables distributed over multiple servers. So the ultimate answer to the expectation of progress would be to replace old technology like Access with newer, more capable data servers. This is not meant as a slight against Access, only a recognition that Access will not change much even after many more years; it is the newer data storage and retrieval technologies that keep being updated, not Access.
