
Copying huge InnoDB table as a file?

We have a huge InnoDB table (153 GB) that we need to move to another server. We use innodb_file_per_table. Is there a way to somehow copy just the .ibd file? How would the target database recognize the file and learn its metadata? We can stop the database for the copy if needed.

Edit: here is one of the large tables:

CREATE TABLE `140M_table` (
  `field1` bigint(20) NOT NULL AUTO_INCREMENT,
  `field2` datetime DEFAULT NULL,
  `field3` varchar(255) DEFAULT NULL,
  `field4` int(11) DEFAULT NULL,
  `field5` varchar(255) DEFAULT NULL,
  `field6` varchar(255) DEFAULT NULL,
  `field7` int(11) DEFAULT NULL,
  `field8` bigint(20) DEFAULT NULL,
  `field9` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`field1`),
  KEY `captureDateIdx` (`field2`,`field6`),
  KEY `personaSubIdx` (`field3`,`field6`,`field5`),
  KEY `FKE6B5072DF27AF1FE` (`field8`),
  KEY `FKE6B5072D54F7256A` (`field7`),
  KEY `temp_key_idx` (`field9`),
  CONSTRAINT `FKE6B5072D54F7256A` FOREIGN KEY (`field7`) REFERENCES `small_table`     (`field7`),
  CONSTRAINT `FKE6B5072DF27AF1FE` FOREIGN KEY (`field8`) REFERENCES `2M_table` (`field9`)
) ENGINE=InnoDB AUTO_INCREMENT=136181731 DEFAULT CHARSET=latin1

You can't just copy an InnoDB table's files. Even with innodb_file_per_table, all InnoDB tables share the central system tablespace and log files (ibdata#, ib_logfile#), which live at the root of the MySQL data directory.

You either need to copy all the InnoDB tables plus those shared files to the new server, or dump the tables. Dumping will probably take longer, because you're "moving" the data three times (dump, copy, restore). Make sure the InnoDB settings in my.cnf are identical on both servers, or mysql will fail to start on the new server.
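Concretely, the two options look something like this. This is only a sketch, not a vetted script: the hostname `newserver`, the `/var/lib/mysql` path, the `somedb` schema name, and the shutdown command are all placeholders, and it assumes both servers run the same MySQL version.

```shell
# Option 1: cold-copy the entire data directory. The server must be
# stopped first, because the shared ibdata#/ib_logfile# files have to be
# consistent with every .ibd file at the moment of the copy.
mysqladmin -u root -p shutdown
rsync -av /var/lib/mysql/ newserver:/var/lib/mysql/

# Option 2: logical dump, streamed straight into the new server so the
# data is written only once on each side instead of dump+copy+restore.
# (--single-transaction gives a consistent InnoDB snapshot without
# locking; keep the remote password in ~/.my.cnf, since a -p prompt on
# the receiving side would swallow the piped dump.)
mysqldump -u root -p --single-transaction somedb \
    | mysql -h newserver -u root somedb
```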

The problem can also be addressed with the help of partitioning, as shown below:

CREATE TABLE test_part_innodb (c1 int default NULL, c2 varchar(30) default NULL, c3 date default NULL) engine=InnoDB
-> PARTITION BY RANGE (year(c3)) (PARTITION p0 VALUES LESS THAN (2000),
-> PARTITION p1 VALUES LESS THAN (2001) , PARTITION p2 VALUES LESS THAN (2002) ,
-> PARTITION p3 VALUES LESS THAN (2003) , PARTITION p4 VALUES LESS THAN (2004) ,
-> PARTITION p5 VALUES LESS THAN (2005) , PARTITION p6 VALUES LESS THAN (2006) ,
-> PARTITION p7 VALUES LESS THAN (2007) , PARTITION p8 VALUES LESS THAN (2008) ,
-> PARTITION p9 VALUES LESS THAN (2009) , PARTITION p10 VALUES LESS THAN (2010),
-> PARTITION p99 VALUES LESS THAN MAXVALUE );
Query OK, 0 rows affected (0.13 sec)



show create table test_part_innodb\G
*************************** 1. row ***************************
Table: test_part_innodb
Create Table: CREATE TABLE `test_part_innodb` (
`c1` int(11) DEFAULT NULL,
`c2` varchar(30) DEFAULT NULL,
`c3` date DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (year(c3))
(PARTITION p0 VALUES LESS THAN (2000) ENGINE = InnoDB,
PARTITION p1 VALUES LESS THAN (2001) ENGINE = InnoDB,
PARTITION p2 VALUES LESS THAN (2002) ENGINE = InnoDB,
PARTITION p3 VALUES LESS THAN (2003) ENGINE = InnoDB,
PARTITION p4 VALUES LESS THAN (2004) ENGINE = InnoDB,
PARTITION p5 VALUES LESS THAN (2005) ENGINE = InnoDB,
PARTITION p6 VALUES LESS THAN (2006) ENGINE = InnoDB,
PARTITION p7 VALUES LESS THAN (2007) ENGINE = InnoDB,
PARTITION p8 VALUES LESS THAN (2008) ENGINE = InnoDB,
PARTITION p9 VALUES LESS THAN (2009) ENGINE = InnoDB,
PARTITION p10 VALUES LESS THAN (2010) ENGINE = InnoDB,
PARTITION p99 VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.00 sec)


delimiter //
CREATE PROCEDURE load_test_part_innodb()
begin
  declare v int default 0;
  -- Loop bound restored from context (it was lost in the transcript):
  -- ~1,000,000 rows spread over ten years matches the ~99,681 rows per
  -- year counted below.
  while v < 1000000 do
    insert into test_part_innodb
    values (v, 'testing partitions', adddate('2000-01-01', (rand(v)*36520) mod 3652));
    set v = v + 1;
  end while;
end
//
Query OK, 0 rows affected (0.00 sec)

delimiter ;
call load_test_part_innodb();
Query OK, 1 row affected (1 hour 11 min 25.19 sec)

select count(*) from test_part_innodb where c3 > date '2000-01-01' and c3 < date '2000-12-31';
+----------+
| count(*) |
+----------+
|    99681 |
+----------+
1 row in set (0.22 sec)
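A side benefit of the range scheme: queries constrained to one year only have to read the matching partition. A quick way to verify pruning (a sketch, assuming MySQL 5.1-5.6 where the PARTITIONS keyword is required; 5.7+ shows the partitions column in plain EXPLAIN):

```sql
-- Ask the optimizer which partitions it will actually touch.
EXPLAIN PARTITIONS
SELECT count(*)
FROM test_part_innodb
WHERE c3 > date '2000-01-01' AND c3 < date '2000-12-31';
-- The partitions column should name only the partition covering year
-- 2000 (p1 here), not all twelve.
```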


show create view part_backup\G
    *************************** 1. row ***************************
    View: part_backup
    Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `part_backup` AS select
    concat('mysqldump',' -u root',' -p ',' --no-create-info',' --where="',(case `p2`.`PARTITION_METHOD` when 'hash' then
    concat(`p2`.`PARTITION_EXPRESSION`,' % ',(select count(0) from `information_schema`.`partitions` `p3` where ((`p3`.`TABLE_SCHEMA` =
    `p2`.`TABLE_SCHEMA`) and (`p3`.`TABLE_NAME` = `p2`.`TABLE_NAME`))),' = ',(`p2`.`PARTITION_ORDINAL_POSITION` - 1)) when 'list' then
    concat(`p2`.`PARTITION_EXPRESSION`,' in (',`p2`.`PARTITION_DESCRIPTION`,')') when 'range' then
    concat(if((`p2`.`PARTITION_ORDINAL_POSITION` = 1),'',concat(`p2`.`PARTITION_EXPRESSION`,' >=
    ',replace(`p1`.`PARTITION_DESCRIPTION`,'MAXVALUE',~(0)),' and ')),concat(`p2`.`PARTITION_EXPRESSION`,' <
    ',replace(`p2`.`PARTITION_DESCRIPTION`,'MAXVALUE',~(0)))) end),'" ',
    `p2`.`TABLE_SCHEMA`,'.',`p2`.`TABLE_NAME`) AS `Name_exp_1` from (`information_schema`.`partitions` `p2` left join
    `information_schema`.`partitions` `p1` on(((`p1`.`TABLE_SCHEMA` = `p2`.`TABLE_SCHEMA`) and (`p1`.`TABLE_NAME` = `p2`.`TABLE_NAME`) and
    ((`p1`.`PARTITION_ORDINAL_POSITION` + 1) = `p2`.`PARTITION_ORDINAL_POSITION`)))) where ((`p2`.`TABLE_SCHEMA` = database()) and
    (`p2`.`PARTITION_METHOD` in ('hash','list','range')))
    character_set_client: utf8
    collation_connection: utf8_general_ci
    1 row in set (0.01 sec)

desc part_backup;
+------------+------------+------+-----+---------+-------+
| Field      | Type       | Null | Key | Default | Extra |
+------------+------------+------+-----+---------+-------+
| Name_exp_1 | mediumtext | YES  |     | NULL    |       |
+------------+------------+------+-----+---------+-------+
1 row in set (0.02 sec)

select * from part_backup;
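The output of that last query was not captured above. Against the test_part_innodb table built earlier, each row would be a per-partition dump command roughly of this shape (a sketch; the schema name `test` is a placeholder, and the large constant is the view's ~0 substitution for MAXVALUE):

```
mysqldump -u root -p  --no-create-info --where="year(c3) < 2000" test.test_part_innodb
mysqldump -u root -p  --no-create-info --where="year(c3) >= 2000 and year(c3) < 2001" test.test_part_innodb
...one row per partition...
mysqldump -u root -p  --no-create-info --where="year(c3) >= 2010 and year(c3) < 18446744073709551615" test.test_part_innodb
```

Note that mysqldump actually expects the database and table as separate arguments (`test test_part_innodb`), so the `schema.table` string the view emits needs its dot replaced with a space before the command will run.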

