
MySQL ODBC Update Query VERY Slow

Our Access 2010 database recently hit the 2GB file size limit, so I ported the database to MySQL.

I installed MySQL Server 5.6.1 x64 on Windows Server 2008 x64. All OS updates and patches are loaded.

I am using the MySQL ODBC 5.2w x64 Driver, as it seems to be the fastest.

My box has an i7-3960X with 64GB RAM and a 480GB SSD.

I use the Access Query Designer because I prefer its interface, and I regularly need to append missing records from one table to the other.

As a test, I have a simple Access database with two linked tables:

tblData is linked to another Access database, and

tblOnline is linked through a SYSTEM DSN to an ODBC (MySQL) table.

Both tables contain over 10 million records. Some of my ported working tables already have over 30 million records.

To select the records to append, I use a Yes/No field called InDBYN.

First I run an Update query on tblData:

UPDATE tblData SET tblData.InDBYN = False;

Then I update all matching records:

UPDATE tblData INNER JOIN tblOnline ON tblData.IDMaster = tblOnline.IDMaster SET tblData.InDBYN = True;

This works reasonably fast, even to the linked ODBC table.

Lastly, I append all records where InDBYN is False to tblOnline. This also runs at acceptable speed, although slower than appends to a linked Access table.
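For reference, the append query is essentially this (a sketch, assuming both tables share an identical column list; an explicit column list would be the safer form):

INSERT INTO tblOnline
SELECT tblData.*
FROM tblData
WHERE tblData.InDBYN = False;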

Within Access everything works 100% and is incredibly fast, except the DB is getting too big.

On the Linked Access Table, it takes 2m15s to update 11,500,000 records.

However, I now need to move the SOURCE table to MySQL, as it is reaching the 2GB limit.

So in future I will need to run the UPDATE statement on a linked ODBC table.

So far, when I run the same simple UPDATE query on the linked ODBC table it runs for more than 20 minutes, and then bombs out saying the query has exceeded the 2GB memory limit.

Both tables are identical in structure.

I do not know how to resolve this and need advice please.

I prefer to use Access as the front-end as I have hundreds of queries already designed for the app, and there is no time to re-develop the app.

I use the InnoDB engine and have tried various tweaks without success. Since my database relies on relational tables, InnoDB looked like a better choice than MyISAM.

I have turned doublewrite on and off and tried various buffer pool and query cache sizes. None of it makes a difference on this particular query.

My current my.ini file looks like this:

#-----------------------------------------------------------------------
# MySQL Server Instance Configuration File
#-----------------------------------------------------------------------

[client]
no-beep
port=3306

[mysql]
default-character-set=utf8
server_type=3

[mysqld]
port=3306
basedir="C:\Program Files\MySQL\MySQL Server 5.6\"
datadir="E:\MySQLData\data\"
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"

log-output=FILE
general-log=0
general_log_file="SQLSERVER.log"
slow-query-log=1
slow_query_log_file="SQLSERVER-slow.log"
long_query_time=10
log-error="SQLSERVER.err"

max_connections=100
query_cache_size = 20M
table_open_cache=2000
tmp_table_size=502M
thread_cache_size=9

myisam_max_sort_file_size=100G
myisam_sort_buffer_size=1002M
key_buffer_size=8M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size=256K

innodb_additional_mem_pool_size=32M
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size=16M
innodb_buffer_pool_size = 48G
innodb_log_file_size=48M
innodb_thread_concurrency = 0
innodb_autoextend_increment=64M
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_open_files=2000
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0

back_log=70
flush_time=0
join_buffer_size=256K
max_allowed_packet=4M
max_connect_errors=100
open_files_limit=4110
query_cache_type = 1
sort_buffer_size=256K
table_definition_cache=1400
binlog_row_event_max_size=8K
sync_relay_log=10000
sync_relay_log_info=10000

tmpdir = "G:/MySQLTemp"
innodb_write_io_threads = 16
innodb_doublewrite
innodb = ON
innodb_fast_shutdown = 1
query_cache_min_res_unit = 4096
query_cache_limit = 1048576
innodb_data_home_dir = "E:/MySQLData/data"
bulk_insert_buffer_size = 8388608

Any advice will be greatly appreciated. Thank you in advance.

Communication between MS Access and MySQL through a linked table is slow. Terribly slow. That is a fact that can't be changed. Why does it happen? Access first loads the data from MySQL, then it processes the command, and finally it puts the data back. On top of that, it does this row by row! However, you can avoid all of this if your "update" query doesn't need parameters or data from local tables (in other words, if the query is always the same and uses only MySQL data).

The trick is to force the MySQL server to process the query instead of Access. This can be achieved by creating a "pass-through" query in Access, in which you write your SQL directly in MySQL syntax. Access then sends the command to the MySQL server, and it is processed entirely on that server. Your query will then be almost as fast as it was against a local Access table.
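For example, your flag update, written in MySQL syntax for a pass-through query, would look roughly like this (a sketch, assuming both tables have been moved into the same MySQL schema; MySQL stores the Yes/No flag as 1/0):

UPDATE tblData
INNER JOIN tblOnline ON tblData.IDMaster = tblOnline.IDMaster
SET tblData.InDBYN = 1;

In the query's property sheet, set ODBC Connect Str to your MySQL DSN and Returns Records to No, so Access hands the statement straight to the server.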

Access is a single-user system. MySQL with InnoDB is a transaction-protected multi-user system.

When you issue an UPDATE command that hits ten million or so rows, MySQL has to construct rollback information in case the operation fails before it reaches all the rows. This takes a lot of time and memory.
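One common way to keep that rollback information small is to update in batches rather than in one huge transaction. A sketch in MySQL syntax (the batch size is a guess to tune, and it would have to run server-side, e.g. as a pass-through query, repeated until zero rows are affected):

UPDATE tblData SET InDBYN = 0 WHERE InDBYN <> 0 LIMIT 100000;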

Try switching your table's storage engine to MyISAM if you're going to run these truly massive UPDATE and INSERT commands. MyISAM isn't transaction-protected, so those operations may run faster.
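If you want to test that, the engine switch is a single statement per table, though note that it rewrites the whole table, which will take a while at 10M+ rows:

ALTER TABLE tblOnline ENGINE=MyISAM;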

You may find it helpful to do your data migration with some tool other than ODBC. ODBC is severely limited in its ability to handle lots of data, as you have discovered. For example, you could export your Access tables to flat files and then import them with a MySQL client program. See here... https://stackoverflow.com/questions/9185/what-is-the-best-mysql-client-application-for-windows
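As a sketch of that route (the file path and CSV layout here are assumptions): export the Access table to CSV, then load it on the server, which bypasses ODBC entirely:

LOAD DATA INFILE 'E:/export/tbldata.csv'
INTO TABLE tblOnline
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;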

Once you've imported your data into MySQL, you can then run Access-based queries. But avoid UPDATE requests that hit every row in the database.

Ollie, I get your point on avoiding UPDATEs that hit all rows. I use that to flag rows which are missing from the destination database, and it has been a quick and easy way to append only the missing rows. I see SQLyog has an import tool to append new records only, but it still runs through every row in the import table and takes hours. I will see if I can export only the data I want to CSV, but it would still be nice to get the ODBC connector working faster than it does at present, if at all possible.
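For example, once both tables live in MySQL, I imagine the whole flag-and-append routine could collapse into a single server-side statement run as a pass-through query (a sketch, assuming IDMaster is indexed on both tables):

INSERT INTO tblOnline
SELECT d.*
FROM tblData d
LEFT JOIN tblOnline o ON o.IDMaster = d.IDMaster
WHERE o.IDMaster IS NULL;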
