
MySQL ODBC Update Query VERY Slow

Our Access 2010 database recently hit the 2GB file size limit, so I ported the database to MySQL.

I installed MySQL Server 5.6.1 x64 on Windows Server 2008 x64. All OS updates and patches are loaded.

I am using the MySQL ODBC 5.2w x64 Driver, as it seems to be the fastest.

My box has an i7-3960X with 64GB RAM and a 480GB SSD.

I use the Access Query Designer as I prefer the interface, and I regularly need to append missing records from one table to the other.

As a test, I have a simple Access database with two linked tables:

tblData links to another Access database, and

tblOnline uses a SYSTEM DSN to a linked ODBC table.

Both tables contain over 10 million records. Some of my ported working tables already have over 30 million records.

To select records to append, I use a field called INDBYN which is either true or false.

First I run an Update query on tblData:

UPDATE tblData SET tblData.InDBYN = False;

Then I update all matching records:

UPDATE tblData INNER JOIN tblOnline ON tblData.IDMaster = tblOnline.IDMaster SET tblData.InDBYN = True;

This works reasonably fast, even to the linked ODBC table.

Lastly I append all records where INDBYN is False to tblOnline. This also runs at an acceptable speed, although slower than appends to a linked Access table.
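For reference, the append step is roughly the following Access append query (a sketch; it relies on the two tables being identical in structure, as noted further down):

INSERT INTO tblOnline
SELECT tblData.*
FROM tblData
WHERE tblData.InDBYN = False;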

Within Access everything works 100% and is incredibly fast, except the DB is getting too big.

On the linked Access table, it takes 2m15s to update 11,500,000 records.

However, I now need to move the SOURCE table to MySQL, as it is reaching the 2GB limit.

So in future I will need to run the UPDATE statement on a linked ODBC table.

So far, when I run the same simple UPDATE query on the linked ODBC table it runs for more than 20 minutes, and then bombs out saying the query has exceeded the 2GB memory limit.

Both tables are identical in structure.

I do not know how to resolve this and need advice please.

I prefer to use Access as the front-end as I have hundreds of queries already designed for the app, and there is no time to re-develop the app.

I use the InnoDB engine and have tried various tweaks without success. Since my database uses relational tables, InnoDB looked like a better option than MyISAM.

I have turned doublewrite on and off and tried various buffer pool sizes, as well as the query cache. It makes no difference on this particular query.

My current my.ini file looks like this:

#-----------------------------------------------------------------------
# MySQL Server Instance Configuration File
# ----------------------------------------------------------------------

[client]
no-beep
port=3306

[mysql]
default-character-set=utf8
server_type=3

[mysqld]
port=3306
basedir="C:\Program Files\MySQL\MySQL Server 5.6\"
datadir="E:\MySQLData\data\"
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"

log-output=FILE
general-log=0
general_log_file="SQLSERVER.log"
slow-query-log=1
slow_query_log_file="SQLSERVER-slow.log"
long_query_time=10
log-error="SQLSERVER.err"

max_connections=100
query_cache_size = 20M
table_open_cache=2000
tmp_table_size=502M
thread_cache_size=9

myisam_max_sort_file_size=100G
myisam_sort_buffer_size=1002M
key_buffer_size=8M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size=256K

innodb_additional_mem_pool_size=32M
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size=16M
innodb_buffer_pool_size = 48G
innodb_log_file_size=48M
innodb_thread_concurrency = 0
innodb_autoextend_increment=64M
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_open_files=2000
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0

back_log=70
flush_time=0
join_buffer_size=256K
max_allowed_packet=4M
max_connect_errors=100
open_files_limit=4110
query_cache_type = 1
sort_buffer_size=256K
table_definition_cache=1400
binlog_row_event_max_size=8K
sync_relay_log=10000
sync_relay_log_info=10000

tmpdir = "G:/MySQLTemp"
innodb_write_io_threads = 16
innodb_doublewrite
innodb = ON
innodb_fast_shutdown = 1
query_cache_min_res_unit = 4096
query_cache_limit = 1048576
innodb_data_home_dir = "E:/MySQLData/data"
bulk_insert_buffer_size = 8388608

Any advice will be greatly appreciated. Thank you in advance.

Communication between MS Access and MySQL through a linked table is slow. Terribly slow. That is a fact which can't be changed. Why does it happen? Access first loads the data from MySQL, then it processes the command, and finally it puts the data back. In addition, it does this row by row! However, you can avoid this if you don't need to use parameters or data from local tables in your "update" query. (In other words, if your query is always the same and it uses only MySQL data.)

The trick is to force the MySQL server to process the query instead of Access! This can be achieved by creating a "pass-through" query in Access, where you write your SQL code directly (in MySQL syntax). Access then sends this command to the MySQL server and it is processed directly within that server. So your query will be almost as fast as running it against a local Access table.
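For example, assuming the MySQL copies of the tables keep the same names, the flag-reset and match-update steps from the question could be issued as pass-through queries in MySQL syntax, roughly like this (a sketch, not tested against the actual schema; 0/1 are used for the boolean flag):

UPDATE tblData SET InDBYN = 0;

UPDATE tblData d
INNER JOIN tblOnline o ON d.IDMaster = o.IDMaster
SET d.InDBYN = 1;

Both statements then run entirely on the server, so no rows have to travel across the ODBC link.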

Access is a single-user system. MySQL with InnoDB is a transaction-protected multi-user system.

When you issue an UPDATE command that hits ten or so megarows, MySQL has to construct rollback information in case the operation fails before it hits all the rows. This takes a lot of time and memory.
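One common way to keep that rollback information small, offered here only as a hedged suggestion, is to split the big UPDATE into smaller transactions and repeat the statement until it affects zero rows, for example:

UPDATE tblData SET InDBYN = 0 WHERE InDBYN <> 0 LIMIT 100000;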

Try switching your table access method to MyISAM if you're going to do these truly massive UPDATE and INSERT commands. MyISAM isn't transaction-protected so these operations may run faster.
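If you want to try that, the engine can be changed per table; a sketch, assuming the MySQL table is also called tblData (take a backup first, since MyISAM gives up crash safety and transactions):

ALTER TABLE tblData ENGINE=MyISAM;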

You may find it helpful to do your data migration with some tool other than ODBC. ODBC is severely limited in its ability to handle lots of data, as you have discovered. For example, you could export your Access tables to flat files and then import them with a MySQL client program. See here... https://stackoverflow.com/questions/9185/what-is-the-best-mysql-client-application-for-windows
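As an illustration of the flat-file route, a CSV export from Access could be bulk-loaded on the MySQL server with something like the following; the file path, line endings, and header row are assumptions:

LOAD DATA INFILE 'E:/export/tblData.csv'
INTO TABLE tblData
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;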

Once you've imported your data to MySQL, you can then run Access-based queries. But avoid UPDATE requests that hit everything in the database.

Ollie, I get your point on avoiding UPDATEs that hit all rows. I use that to flag rows which are missing from the destination database, and it has been a quick and easy way to append only the missing rows. I see SQLyog has an import tool to append new records only, but this still runs through all rows in the import table, and runs for hours. I will see if I can export only the data I want to CSV, but it would still be nice to get the ODBC connector to work faster than at present, if at all possible.
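If both tables end up living on the MySQL server, the whole flag-and-append workflow could also be collapsed into a single pass-through statement that inserts only the missing rows; a sketch that assumes the two tables really are identical in structure:

INSERT INTO tblOnline
SELECT d.*
FROM tblData d
LEFT JOIN tblOnline o ON o.IDMaster = d.IDMaster
WHERE o.IDMaster IS NULL;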
