
Why do MySQL InnoDB inserts / updates on large tables get very slow when there are a few indexes?

We have a series of tables that have grown organically to several million rows. In production, doing an insert or update can take up to two seconds. However, if I dump the table and recreate it from the dump, queries are lightning fast.

We have rebuilt one of the tables by creating a copy, rebuilding the indexes, then doing a rename switch and copying over any new rows. This worked because that table is only ever appended to. Doing this made the inserts and updates lightning quick.
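For reference, a rough sketch of that copy-and-swap approach (assuming an append-only table foo with a monotonically increasing id column; the names are illustrative, not from the original post):

-- 1. Create an empty copy with the same structure (and therefore fresh indexes)
CREATE TABLE foo_new LIKE foo;

-- 2. Bulk-copy the existing rows
INSERT INTO foo_new SELECT * FROM foo;

-- 3. Atomically swap the tables
RENAME TABLE foo TO foo_old, foo_new TO foo;

-- 4. Copy over any rows appended to the old table while step 2 was running
SELECT MAX(id) INTO @last_copied FROM foo;
INSERT INTO foo SELECT * FROM foo_old WHERE id > @last_copied;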

My questions:

  • Why do inserts get slow over time?
  • Why does recreating the table and doing an import fix this?
  • Is there any way that I can rebuild indexes without locking a table for updates?

It sounds like it's one of the following:

  • Indexes becoming unbalanced over time
  • Disk fragmentation
  • Internal InnoDB datafile fragmentation

You could try ANALYZE TABLE foo, which doesn't take locks, just a few index dives, and takes a few seconds.
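In full (this just refreshes the index statistics the query optimizer uses):

mysql> ANALYZE TABLE foo;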

If this doesn't fix it, you can use

mysql> SET PROFILING=1;
mysql> INSERT INTO foo ($testdata);
mysql> SHOW PROFILE FOR QUERY 1;

and you should see where most of the time is spent.

Apparently InnoDB performs better when inserts are done in primary key (PK) order. Is this your case?
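As an illustration (not from the original answer, and using a hypothetical table name), an AUTO_INCREMENT primary key gives you PK-ordered inserts for free, since new ids are always assigned in ascending order:

CREATE TABLE events (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(255)
) ENGINE=InnoDB;

-- ids are assigned in ascending order, so every insert lands at the end of the clustered index
INSERT INTO events (payload) VALUES ('a'), ('b'), ('c');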

InnoDB performance is heavily dependent on RAM. If the indexes don't fit in RAM, performance can drop considerably and quickly. Rebuilding the whole table improves performance because the data and indexes are now optimized.
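One way to check whether that's plausible (a sketch; 'your_db' and 'foo' are placeholders for your schema and table names) is to compare the InnoDB buffer pool size with the data and index sizes reported in information_schema:

-- How much RAM InnoDB has for caching data and indexes (in bytes)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Data and index size of the table, in MB
SELECT table_name,
       ROUND(data_length  / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb
FROM information_schema.tables
WHERE table_schema = 'your_db' AND table_name = 'foo';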

If you are only ever inserting into the table, MyISAM is better suited for that. You won't have locking issues if only appending, since the record is added to the end of the file. MyISAM will also allow you to use MERGE tables, which are really nice for taking parts of the data offline or archiving without having to do exports and/or deletes.
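A minimal sketch of a MERGE table over two identically defined MyISAM tables (table and column names are hypothetical):

CREATE TABLE log_2023 (
    id  INT NOT NULL,
    msg VARCHAR(255),
    INDEX (id)
) ENGINE=MyISAM;

CREATE TABLE log_2024 LIKE log_2023;

-- The MERGE table exposes both underlying tables as one; removing log_2023 from the
-- UNION list later takes that data "offline" without exports or deletes.
CREATE TABLE log_all (
    id  INT NOT NULL,
    msg VARCHAR(255),
    INDEX (id)
) ENGINE=MERGE UNION=(log_2023, log_2024) INSERT_METHOD=LAST;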

Track down the my.ini that is actually in use and increase key_buffer_size. I had a 1.5 GB table with a large key where the queries per second (all writes) were down to 17. I found it strange that, in the administration panel (while the table was locked for writing to speed up the process), it was doing 200 InnoDB reads per second against 24 writes per second.

It was forced to read the index table off disk. I changed key_buffer_size from 8M to 128M and, after a restart, performance jumped to 150 queries per second completed, and it only had to perform 61 reads to get 240 writes.
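For reference, the kind of change involved (key_buffer_size goes in the [mysqld] section of the my.ini / my.cnf the server actually reads; 128M matches the value mentioned above):

[mysqld]
key_buffer_size = 128M

After the restart, the value can be verified with:

mysql> SHOW VARIABLES LIKE 'key_buffer_size';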

Every update to a table also has to maintain its indexes. If you are doing bulk inserts, try to do them in one transaction (as the dump and restore does). If the table is write-biased, I would think about dropping the indexes anyway, or letting a background job do read-processing of the table (e.g. by copying it to an indexed one).
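A minimal sketch of batching inserts into a single transaction (table, columns, and values are just placeholders):

START TRANSACTION;
INSERT INTO foo (id, payload) VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO foo (id, payload) VALUES (4, 'd'), (5, 'e'), (6, 'f');
COMMIT;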

Could it be due to fragmentation of XFS?

Copy/pasted from http://stevesubuntutweaks.blogspot.com/2010/07/should-you-use-xfs-file-system.html :

To check the fragmentation level of a drive, for example located at /dev/sda6:

sudo xfs_db -c frag -r /dev/sda6

The result will look something like so:

actual 51270, ideal 174, fragmentation factor 99.66%

That is an actual result I got the first time I installed these utilities, having previously had no knowledge of XFS maintenance. Pretty nasty. Basically, the 174 files on the partition were spread over 51270 separate pieces. To defragment, run the following command:

sudo xfs_fsr -v /dev/sda6

Let it run for a while; the -v option makes it show its progress. After it finishes, try checking the fragmentation level again:

sudo xfs_db -c frag -r /dev/sda6

actual 176, ideal 174, fragmentation factor 1.14%

Much better!
