
Creating a large MySQL index (1B rows) fails with "Lost connection to MySQL server during query"

I'm trying to create a composite index on a rather large MySQL table (over 1 billion rows, 144GB):

ALTER TABLE table_name ADD INDEX id_date ( id, `date` );

I let it run overnight several times but it keeps failing with the message below (nothing else in the error logs). I can't say for sure how long the query ran, but possibly for about eight hours.

ERROR 2013 (HY000) at line 3: Lost connection to MySQL server during query

I tried it with SET expand_fast_index_creation=ON; but that seems to just make it fail faster (an hour perhaps).

The server runs on a dedicated Ubuntu box from Hetzner with 32G RAM, 4GB swap and 8 cores. Plenty of free disk space (1TB disk).

Server version: 5.6.13-rc61.0-log Percona Server (GPL), Release 61.0

Here's the my.cnf file, mostly the result of trial-and-error:

[mysqld]
# General
binlog_cache_size = 8M
binlog_format = row
character-set-server = utf8
connect_timeout = 10
datadir = /var/lib/mysql/data
delay_key_write = OFF
expire_logs_days = 10
join_buffer_size = 8M
log-bin=/var/lib/mysql/logs/mysql-bin
log_warnings = 2
max_allowed_packet = 100M
max_binlog_size = 1024M
max_connect_errors = 20
max_connections = 512
max_heap_table_size = 64M
net_read_timeout = 600
net_write_timeout = 600
query_cache_limit = 8M
query_cache_size = 128M
server-id = 1
skip_name_resolve
slave_net_timeout = 60
thread_cache_size = 8
thread_concurrency = 24
tmpdir = /var/tmp
tmp_table_size = 64M
transaction_isolation = READ-COMMITTED
wait_timeout = 57600
net_buffer_length = 1M

# MyISAM
bulk_insert_buffer_size = 64M
key_buffer_size = 384M
myisam_recover_options = BACKUP,FORCE
myisam_sort_buffer_size = 128M

# InnoDB
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 25G
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
#innodb_lazy_drop_table = 1
innodb_log_buffer_size = 16M
innodb_log_files_in_group = 3
innodb_log_file_size = 1024M
innodb_max_dirty_pages_pct = 90
innodb_locks_unsafe_for_binlog = 1

[client]
default-character-set = utf8

[mysqldump]
max_allowed_packet = 16M

Any clues would be greatly appreciated!

As a workaround, I would suggest creating a new table like the old one, adding the index while it is still empty, inserting the data from the old table (in reasonably sized chunks), and then switching over to the new table; see the sketch below. In your case it also sounds like a good idea to check which storage engine you actually want for this data: if it is raw data that you only need to process anyway, "ARCHIVE" could be an option. Or, if there is any kind of structural/"relational" information kept in the data, try to normalize your data model and downsize the table in question.
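A minimal sketch of that copy-and-swap approach, assuming the table is called table_name, has an integer primary key id, and that the illustrative chunk boundaries below are adjusted to your actual id range (all of these names and sizes are assumptions; writes arriving during the copy would also need to be handled separately):

-- Create an empty copy of the table and add the index while it is cheap.
CREATE TABLE table_name_new LIKE table_name;
ALTER TABLE table_name_new ADD INDEX id_date (id, `date`);

-- Copy the data over in primary-key ranges to keep each transaction small.
INSERT INTO table_name_new SELECT * FROM table_name WHERE id >= 1        AND id < 10000001;
INSERT INTO table_name_new SELECT * FROM table_name WHERE id >= 10000001 AND id < 20000001;
-- ... repeat until the whole id range is covered ...

-- Atomically swap the tables once the copy is complete, keeping the old one as a backup.
RENAME TABLE table_name TO table_name_old, table_name_new TO table_name;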
