
How to minimize the RAM requirements for copying data from one (large) partitioned table to another partitioned table?

For two large tables, table1 and table2, each with thousands of partitions and with 150 million rows in table1, MySQL/MariaDB performs the following query inefficiently:

insert into table2 select * from table1

In fact, using 8192 partitions on both tables, RAM was exhausted before the query ended. I had to terminate it when it had allocated 6.1 GB of RAM, since this particular box had only 8 GB of RAM. How can this task be performed with a lower RAM footprint?

By forcing MySQL/MariaDB to handle the data one partition at a time, the task could be completed using less than 500 MB of RAM at any point.

The structure of the solution is like this:

insert into table2 select * from table1 partition (p<X>)

where X ranges over the integers that index the partitions, in my case from 0 to 8191. This can be implemented using a stored procedure like the following:

drop procedure if exists my_partitioning_data_copy_procedure;

delimiter #
create procedure my_partitioning_data_copy_procedure()
begin

declare v_max int unsigned default 8191;   -- index of the last partition (p0 .. p8191)
declare v_counter int unsigned default 0;

  start transaction;
  while v_counter <= v_max do
    -- build and run "insert into table2 select * from table1 partition (pN)"
    -- (the original loop used "<", which skipped the last partition p8191)
    set @expression = concat("insert into table2 select * from table1 partition (p", v_counter, ");");
    prepare myquery from @expression;
    execute myquery;
    deallocate prepare myquery;
    set v_counter = v_counter + 1;
  end while;
  commit;
end #

delimiter ;

call my_partitioning_data_copy_procedure();
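
If you prefer not to hardcode the partition range (0 to 8191 above), the partition names can be read from information_schema. The sketch below is an illustrative addition, not part of the original solution: 'my_schema' is a placeholder for your database name, and the last query is just a simple row-count check after the copy.

-- List the partitions of table1, in order, so the loop bounds need not be hardcoded.
-- 'my_schema' is a placeholder; substitute the actual database name.
select partition_name, table_rows
from information_schema.partitions
where table_schema = 'my_schema'
  and table_name = 'table1'
order by partition_ordinal_position;

-- After the copy, a quick sanity check that both tables hold the same number of rows.
-- (An exact count can itself take a while on 150 million rows.)
select
  (select count(*) from table1) as rows_in_table1,
  (select count(*) from table2) as rows_in_table2;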
