How to split a big SQL dump file into small chunks and keep each record in its original file despite later deletions of other records
Here's what I want to do (MySQL example):

I have a problem with step #4.
For instance, I split table1.sql into 3 files: table1_a.sql, table1_b.sql and table1_c.sql. If a new dump contains new records, that is fine - they are just added to table1_b.sql.

But if records that were in table1_a.sql are deleted, all following records shift, and git will treat table1_b.sql and table1_c.sql as changed, and that's not OK.
Basically it destroys the whole idea of keeping SQL backups in SCM.

My question: how can I split a big SQL dump file into small chunks and keep each record in its original file despite later deletions of other records?
Don't split them at all. Or split them by ranges of PK values. Or split them right down to one DB row per file (and name the file after the table name + the content of the primary key).

(That's apart from the even more obvious XY answer, which was my instinctive reaction.)
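In case it's useful, here is a minimal sketch of the "split by ranges of PK values" idea, assuming a MySQL database mydb with a table table1 whose primary key is an integer column id (all three names are placeholders, not from the question). Each chunk covers a fixed id range, so deleting a row only changes the file for its own range:

#!/bin/bash
# Dump table1 in fixed primary-key ranges so that deletions never
# shift rows between chunk files.
CHUNK=10000
MAX_ID=$(mysql -N -e 'SELECT MAX(id) FROM mydb.table1')
for ((lo = 0; lo <= MAX_ID; lo += CHUNK)); do
  hi=$((lo + CHUNK - 1))
  # --skip-extended-insert writes one INSERT per row (cleaner git diffs);
  # --where limits the dump to the current id range.
  mysqldump --no-create-info --skip-extended-insert \
    --where="id BETWEEN $lo AND $hi" \
    mydb table1 > "table1_${lo}_${hi}.sql"
done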
To split the SQL dump into files of 5000 lines each, execute in your terminal:
$ split -l 5000 hit_2017-09-28_20-07-25.sql dbpart-
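split names the chunks with alphabetical suffixes (dbpart-aa, dbpart-ab, ...), so the original dump can be reassembled with:

$ cat dbpart-* > hit_2017-09-28_20-07-25.sql

Note that a fixed line-count split has exactly the drawback described in the question: a deletion early in the dump shifts every later row into a different chunk.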