
MySQL very slow for alter table query

Why is it taking more than an hour simply to add a column to this table? The table has 15M rows. It has two indexes and a single-column primary key. The ALTER TABLE query has been in the "copy to tmp table" state for 1 hour 15 minutes now.

ALTER TABLE `frugg`.`item_catalog_map` 
ADD COLUMN `conversion_url` TEXT NULL DEFAULT NULL

Table:

mysql> describe item_catalog_map;
+------------------------+---------------+------+-----+---------+-------+
| Field                  | Type          | Null | Key | Default | Extra |
+------------------------+---------------+------+-----+---------+-------+
| catalog_unique_item_id | varchar(255)  | NO   | PRI | NULL    |       |
| catalog_id             | int(11)       | YES  | MUL | NULL    |       |
| item_id                | int(11)       | YES  | MUL | NULL    |       |
| price                  | decimal(10,2) | YES  |     | 0.00    |       |
+------------------------+---------------+------+-----+---------+-------+

mysql> show index from item_catalog_map;
+------------------+------------+----------------------+--------------+------------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table            | Non_unique | Key_name             | Seq_in_index | Column_name            | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------------+------------+----------------------+--------------+------------------------+-----------+-------------+----------+--------+------+------------+---------+
| item_catalog_map |          0 | PRIMARY              |            1 | catalog_unique_item_id | A         |    15485115 |     NULL | NULL   |      | BTREE      |         |
| item_catalog_map |          1 | IDX_ACD6184FCC3C66FC |            1 | catalog_id             | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
| item_catalog_map |          1 | IDX_ACD6184F126F525E |            1 | item_id                | A         |    15485115 |     NULL | NULL   | YES  | BTREE      |         |
+------------------+------------+----------------------+--------------+------------------------+-----------+-------------+----------+--------+------+------------+---------+

MySQL's ALTER TABLE performance can become a problem with very large tables. MySQL performs most alterations by making an empty table with the desired new structure, inserting all the data from the old table into the new one, and deleting the old table. This can take a very long time, especially if you're short on memory and the table is large and has lots of indexes. Many people have experience with ALTER TABLE operations that have taken hours or days to complete.
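For intuition, that copy-and-swap is roughly what you would write by hand as a "shadow table" rebuild. A minimal sketch against the table in the question (the `_new`/`_old` names are illustrative, and concurrent writes during the copy are not handled):

    -- Build an empty copy with the new structure.
    CREATE TABLE frugg.item_catalog_map_new LIKE frugg.item_catalog_map;
    ALTER TABLE frugg.item_catalog_map_new
        ADD COLUMN conversion_url TEXT NULL DEFAULT NULL;

    -- Copy every row; the new column falls back to its NULL default.
    INSERT INTO frugg.item_catalog_map_new
        (catalog_unique_item_id, catalog_id, item_id, price)
    SELECT catalog_unique_item_id, catalog_id, item_id, price
    FROM frugg.item_catalog_map;

    -- Atomically swap the tables.
    RENAME TABLE frugg.item_catalog_map     TO frugg.item_catalog_map_old,
                 frugg.item_catalog_map_new TO frugg.item_catalog_map;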

Anyway, if you need to proceed with the ALTER TABLE, maybe the following approaches can help you:

If you don't care about downtime, my suggestion is to use three separate ALTER TABLE statements. The first statement removes all existing secondary indexes. The second statement applies all column-related changes. The last statement adds the dropped secondary indexes back and applies any other index changes.
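Applied to the table in the question, that sequence would look roughly like this (index names and columns taken from the SHOW INDEX output above):

    -- 1. Drop the secondary indexes; the primary key stays in place.
    ALTER TABLE `frugg`.`item_catalog_map`
        DROP INDEX `IDX_ACD6184FCC3C66FC`,
        DROP INDEX `IDX_ACD6184F126F525E`;

    -- 2. Apply the column change on its own.
    ALTER TABLE `frugg`.`item_catalog_map`
        ADD COLUMN `conversion_url` TEXT NULL DEFAULT NULL;

    -- 3. Add the dropped secondary indexes back.
    ALTER TABLE `frugg`.`item_catalog_map`
        ADD INDEX `IDX_ACD6184FCC3C66FC` (`catalog_id`),
        ADD INDEX `IDX_ACD6184F126F525E` (`item_id`);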

Two more tips:

  1. Before applying index changes, execute the two following statements, and change the values back to 1 after the index changes finish.

     SET unique_checks=0; SET foreign_key_checks=0; 
  2. When creating multiple secondary indexes, put them in one ALTER TABLE statement rather than in multiple separate ALTER TABLE statements (see the combined sketch after this list).
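Putting the two tips together, the index-rebuild step becomes a sketch like this (reusing the index definitions from the question's SHOW INDEX output):

    SET unique_checks=0;
    SET foreign_key_checks=0;

    -- One ALTER TABLE builds both secondary indexes in a single pass.
    ALTER TABLE `frugg`.`item_catalog_map`
        ADD INDEX `IDX_ACD6184FCC3C66FC` (`catalog_id`),
        ADD INDEX `IDX_ACD6184F126F525E` (`item_id`);

    SET unique_checks=1;
    SET foreign_key_checks=1;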

The following picture shows the performance difference. Approach 1 is your approach and approach 2 is my way. Approach 2 takes about 3.47% of the time of approach 1 for a 50M-row table. The solution only works for the MySQL (>= 5.5) InnoDB engine.

[image: benchmark comparing approach 1 and approach 2]

The Percona tools are a lifesaver for this stuff with big tables.

http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html

pt-online-schema-change basically does the following:

  1. create a duplicate table
  2. create triggers to sync the tables
  3. bulk copy the data
  4. verify
  5. swap the tables

It takes forever, but who cares, because it means you can change columns without downtime.
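For the table in the question, the invocation would look roughly like this (connection options omitted; the tool refuses to run unless you pass either --dry-run or --execute):

    $ pt-online-schema-change \
        --alter "ADD COLUMN conversion_url TEXT NULL DEFAULT NULL" \
        D=frugg,t=item_catalog_map --dry-run

    $ pt-online-schema-change \
        --alter "ADD COLUMN conversion_url TEXT NULL DEFAULT NULL" \
        D=frugg,t=item_catalog_map --execute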

Your table has 15 million rows, which is quite a lot. The ALTER TABLE involves copying all the data over and recreating the indexes. As a first measurement, try copying the data file (item_catalog_map.MYD if it's MyISAM) in your filesystem and see how long that takes. The ALTER TABLE will take at least that long.
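A quick way to take that measurement, assuming the default data directory of /var/lib/mysql (the path is an assumption; check your datadir):

    $ # Time a raw copy of the data file to get a lower bound for the ALTER TABLE.
    $ time cp /var/lib/mysql/frugg/item_catalog_map.MYD /tmp/item_catalog_map.MYD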

To minimize locking of the large table I want to alter, I do the following:

  • Create a new empty table based on the existing table, and alter this new empty table to add the column.
  • Do a mysqldump of the large table such that it has one complete insert statement per record in the large table (switches -c and --skip-extended-insert).
  • Import this mysqldump into a different (empty) database, and rename the imported table there to LARGE_TABLE_NEW.
  • Take a mysqldump of this renamed table from the other database and import it into the original database, which populates the altered LARGE_TABLE_NEW.
  • Rename LARGE_TABLE to LARGE_TABLE_OLD and LARGE_TABLE_NEW to LARGE_TABLE in the original database.

     mysql> create table DATABASE_NAME.LARGE_TABLE_NEW like DATABASE_NAME.LARGE_TABLE;
     mysql> alter table DATABASE_NAME.LARGE_TABLE_NEW add column NEW_COLUMN_NAME COL_DATA_TYPE(SIZE) default null;
     $ mysqldump -c --no-create-info --skip-extended-insert --no-create-db -u root -p DATABASE_NAME LARGE_TABLE > LARGE_TABLE.sql
     mysql> create table test.LARGE_TABLE like DATABASE_NAME.LARGE_TABLE;
     $ mysql -u root -p -D test < LARGE_TABLE.sql
     mysql> rename table test.LARGE_TABLE to test.LARGE_TABLE_NEW;
     $ mysqldump -c --no-create-info --skip-extended-insert --no-create-db -u root -p test LARGE_TABLE_NEW > LARGE_TABLE_NEW.sql
     $ mysql -u root -p -D DATABASE_NAME < LARGE_TABLE_NEW.sql
     mysql> rename table DATABASE_NAME.LARGE_TABLE to DATABASE_NAME.LARGE_TABLE_OLD, DATABASE_NAME.LARGE_TABLE_NEW to DATABASE_NAME.LARGE_TABLE;

I ran into the same problem. I simply restarted the MySQL service with "sudo service mysql restart" [Ubuntu], and right after that the alter table command executed immediately.

I had a similar problem where an ALTER TABLE was incredibly slow for a table with 11 million rows running inside a MariaDB Docker container. The same ALTER TABLE was lightning fast on an RDS instance also with 11 million rows.

I'm running macOS 12.4, and the data for the Docker container lives on a filesystem volume. The bottleneck was the creation of the temporary table, as other answers have noted.

The fix was to enable VirtioFS in Docker's experimental settings. The ALTER TABLE then took around 10 minutes instead of the roughly 8 hours it was on track to take.
