
InnoDB transactions: Lock wait timeout

I have a table in my database (actually a few related tables) that can be manipulated manually from various points through our interface, but is also updated automatically, on a continuous basis, from two sources. The periodic updates can contain huge amounts of data and can result in thousands of inserts/updates. In order to improve the performance of these inserts/updates I have used "SET autocommit = 0" around the updates from the automated sources. This has resulted in the desired performance improvement, maybe even more than expected. However, the problem now is that if the automated sources overlap, or if a manual update is performed, the database very often locks up and after a while throws an error:
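For context, the pattern around the automated imports is roughly the following (table and column names here are hypothetical placeholders, only illustrating the idea):

    SET autocommit = 0;
    INSERT INTO orders (id, qty) VALUES (1, 10)
      ON DUPLICATE KEY UPDATE qty = VALUES(qty);
    -- ... thousands more inserts/updates ...
    COMMIT;            -- all row locks acquired above are held until this point
    SET autocommit = 1;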

Lock wait timeout exceeded; try restarting transaction
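When this error appears, a couple of standard MySQL commands can show how long the server waits for a row lock and which transactions are involved (information_schema.INNODB_TRX assumes MySQL 5.5 or later):

    -- how long InnoDB waits for a row lock before giving up (default 50 seconds)
    SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

    -- currently open InnoDB transactions, including ones waiting on locks
    SELECT trx_id, trx_state, trx_started, trx_query
    FROM information_schema.INNODB_TRX;

    -- detailed transaction/lock section in the engine status output
    SHOW ENGINE INNODB STATUS;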

This may be thrown even for a single statement with autocommit on and no explicit transaction, but I guess that is reasonable as well if it conflicts with a transaction. I have read various suggestions; unfortunately there is no ideal solution. I guess my options are:

  1. Try to order the updates/inserts on the tables so that locks are requested in the same order on all threads and there is no deadlock. Unfortunately this is not possible; updates need to be applied in the order they are received.

  2. Use LOCK TABLES to serialize transactions. This is theoretically possible, but a) apart from the two automated sources, the tables are updated from many points in the system, including triggers, scheduled jobs, and manual changes from various interfaces. It would be a nightmare to identify and maintain LOCK TABLES around all these places, and there is no easy way to know that all of them have been found; and b) LOCK TABLES has to lock all tables involved, and the updates/inserts, though not often, may sometimes need to touch many tables as a result, so again I would need to identify and maintain every table that might be updated so that it is included in the LOCK TABLES statement.

  3. Use a semaphore table before each update in order to achieve the same serialization of updates as with LOCK TABLES above, but without actually having to use LOCK TABLES. This is an improvement, but it still has problem a) of LOCK TABLES above. (A rough sketch of both approaches follows this list.)
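For reference, a minimal sketch of options 2 and 3, assuming a hypothetical table orders being written to and a hypothetical one-row semaphore table update_lock; this only illustrates the serialization idea, it is not a drop-in implementation:

    -- Option 2: serialize writers with LOCK TABLES (locks whole tables)
    LOCK TABLES orders WRITE;
    -- ... the batch of inserts/updates on orders ...
    UNLOCK TABLES;

    -- Option 3: serialize writers on a semaphore row instead of LOCK TABLES
    -- (update_lock is a hypothetical one-row InnoDB table, e.g.
    --  CREATE TABLE update_lock (id INT PRIMARY KEY); INSERT INTO update_lock VALUES (1);)
    SET autocommit = 0;
    SELECT id FROM update_lock WHERE id = 1 FOR UPDATE;  -- blocks until the previous writer commits
    -- ... the batch of inserts/updates ...
    COMMIT;  -- releases the semaphore row lock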

Any other suggestions? Could the performance benefits of autocommit = 0 (transactions) be achieved in some other way that does not involve locks? Could InnoDB be configured to not lock, or to lock much less, on updates/inserts?

A last-resort option may be to move to MyISAM tables. Would that actually achieve a performance improvement with heavy insert/update operations?

Thank you

You can achieve the benefits of autocommit = 0 while still not using long transactions.

a) You can commit the transaction every X statements, assuming that you don't need to roll back the entire transaction (see the sketch below, after point b).

b) Instead of using autocommit = 0 you can use ALTER TABLE x DISABLE KEYS / ALTER TABLE x ENABLE KEYS before/after the import. This is the reason for the performance improvement of the operation: the non-unique indexes are not updated until the transaction finishes, and are then updated in bulk.
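A minimal sketch of both suggestions, again with a hypothetical table orders and an arbitrary batch size of about 1,000 rows; note that ALTER TABLE ... DISABLE KEYS affects only non-unique indexes and may have no effect on InnoDB tables, so it is worth measuring which variant actually helps in your setup:

    -- a) keep autocommit off, but commit in batches instead of one huge transaction
    SET autocommit = 0;
    -- ... roughly 1,000 inserts/updates ...
    COMMIT;   -- releases the row locks taken so far
    -- ... next batch of roughly 1,000 inserts/updates ...
    COMMIT;

    -- b) defer non-unique index maintenance around a bulk import
    ALTER TABLE orders DISABLE KEYS;   -- stop updating non-unique indexes
    -- ... bulk inserts ...
    ALTER TABLE orders ENABLE KEYS;    -- rebuild non-unique indexes in one pass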
