
Creating a large MySQL index (1B rows) fails with “Lost connection to MySQL server during query”

I'm trying to create a composite index on a rather large MySQL table (over 1 billion rows, 144 GB).

ALTER TABLE table_name ADD INDEX id_date ( id, `date` );

I let it run overnight several times, but it keeps failing with the message below (nothing else appears in the error logs). I can't say for sure how long the query ran, but it was possibly about eight hours.
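One variant worth trying (a sketch, assuming the online DDL support that MySQL/Percona 5.6 documents for ADD INDEX) is to request the in-place, non-locking build explicitly. That way the server errors out immediately if it cannot build the index online, rather than silently falling back to a full table copy:

```sql
-- Request an in-place, online index build explicitly (MySQL/Percona 5.6+).
-- If the server cannot satisfy ALGORITHM=INPLACE or LOCK=NONE, the
-- statement fails at once instead of attempting a table-copy build.
ALTER TABLE table_name
  ADD INDEX id_date (id, `date`),
  ALGORITHM=INPLACE, LOCK=NONE;
```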

ERROR 2013 (HY000) at line 3: Lost connection to MySQL server during query

I tried it with SET expand_fast_index_creation=ON; but that just seems to make it fail faster (after an hour, perhaps).

The server runs on a dedicated Ubuntu box from Hetzner with 32 GB RAM, 4 GB swap, and 8 cores. There is plenty of free disk space (1 TB disk).
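A hedged diagnostic worth running here (not from the original post): a 25 GB buffer pool on a 32 GB box leaves little headroom for the extra sort buffers an index build allocates, and if the kernel OOM killer terminates mysqld, the client sees exactly this "Lost connection to MySQL server during query" error with nothing in the MySQL error log. The kill is recorded in the kernel log instead:

```shell
# Check the kernel log for signs that mysqld was killed by the OOM killer.
# An OOM kill is logged here, not in the MySQL error log. The trailing
# "|| true" keeps the exit status clean when no matching events exist.
dmesg 2>/dev/null | grep -iE 'out of memory|killed process' || true
```

If this turns up a kill of mysqld around the time the ALTER died, reducing innodb_buffer_pool_size for the duration of the build would be the thing to try.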

Server version: 5.6.13-rc61.0-log Percona Server (GPL), Release 61.0

Here's the my.cnf file, mostly the result of trial-and-error:

[mysqld]
# General
binlog_cache_size = 8M
binlog_format = row
character-set-server = utf8
connect_timeout = 10
datadir = /var/lib/mysql/data
delay_key_write = OFF
expire_logs_days = 10
join_buffer_size = 8M
log-bin=/var/lib/mysql/logs/mysql-bin
log_warnings = 2
max_allowed_packet = 100M
max_binlog_size = 1024M
max_connect_errors = 20
max_connections = 512
max_heap_table_size = 64M
net_read_timeout = 600
net_write_timeout = 600
query_cache_limit = 8M
query_cache_size = 128M
server-id = 1
skip_name_resolve
slave_net_timeout = 60
thread_cache_size = 8
thread_concurrency = 24
tmpdir = /var/tmp
tmp_table_size = 64M
transaction_isolation = READ-COMMITTED
wait_timeout = 57600
net_buffer_length = 1M

# MyISAM
bulk_insert_buffer_size = 64M
key_buffer_size = 384M
myisam_recover_options = BACKUP,FORCE
myisam_sort_buffer_size = 128M

# InnoDB
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 25G
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
#innodb_lazy_drop_table = 1
innodb_log_buffer_size = 16M
innodb_log_files_in_group = 3
innodb_log_file_size = 1024M
innodb_max_dirty_pages_pct = 90
innodb_locks_unsafe_for_binlog = 1

[client]
default-character-set = utf8

[mysqldump]
max_allowed_packet = 16M

Any clues would be greatly appreciated!

As a workaround, I would suggest creating a new table like the old one, adding the index, inserting the data from the old table (perhaps in reasonable chunks), and then switching to the new table. In your case it also sounds like a good idea to reconsider which storage engine you want to use for your data: if you have raw data that you want to process anyway, maybe ARCHIVE could be an option for you. Or, if any kind of structural / "relational" information is kept in your data, try to normalize your data model and downsize the table in question.
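A sketch of that workaround (the table name and indexed columns come from the question's ALTER statement; the primary-key column `pk` and the chunk boundaries are assumptions for illustration):

```sql
-- 1. Create an empty copy of the table, with the index already in place,
--    so rows are indexed as they arrive instead of in one giant build.
CREATE TABLE table_name_new LIKE table_name;
ALTER TABLE table_name_new ADD INDEX id_date (id, `date`);

-- 2. Copy the rows over in primary-key ranges (assumed auto-increment
--    column `pk`), so each INSERT commits quickly. Repeat, advancing
--    the range, until all rows are copied.
INSERT INTO table_name_new
SELECT * FROM table_name
WHERE pk >= 1 AND pk < 10000000;

-- 3. Swap the tables atomically, then drop the old one.
RENAME TABLE table_name TO table_name_old,
             table_name_new TO table_name;
DROP TABLE table_name_old;
```

Note that writes arriving between the last chunk and the RENAME would be lost, so the final chunk and the swap should happen during a write pause.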
