How can I dump a MySQL table in parts?

I have a Linux server with a huge MySQL table that I need to dump. The problem is that the server is in production, and I don't want to crash it by dumping everything at once. I also intend to pipe the dump over ssh to another server, because I don't want to fill up the local disk. I know about the mysqldump --where clause, but I don't want to script those ID ranges. Is there any native functionality in MySQL that allows dumping in parts? It doesn't have to be mysqldump, but it needs to work in parts so that I don't crash the server, and I need to be able to pipe it over ssh.

Additional info: records are never updated in this table; they are only added.

MySQL documentation: as outlined in their docs, mysqldump is not suited for large databases. They suggest backing up the raw data files instead.
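A minimal sketch of that raw-file approach for a single InnoDB table, assuming MySQL 5.6+ with innodb_file_per_table enabled; mydb, mytable, the datadir path, and otherhost are all placeholders:

    # terminal 1: open a mysql session and quiesce the table
    # (keep this session open while the files are copied)
    mysql mydb
    mysql> FLUSH TABLES mytable FOR EXPORT;

    # terminal 2: copy the raw InnoDB files over ssh to the other server
    rsync -av /var/lib/mysql/mydb/mytable.ibd \
              /var/lib/mysql/mydb/mytable.cfg \
              user@otherhost:/backup/

    # terminal 1 again: release the lock once the copy finishes
    mysql> UNLOCK TABLES;

While the export lock is held, the table stays readable but blocks writes, so on a busy production table this trades dump load for write availability.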

If your concern really is the load, and not crashing production, then maybe you should take a look at this post: How can I slow down a MySQL dump as to not affect current load on the server? It covers how to back up large production databases using the right mysqldump args.
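For example, a hedged one-liner combining the usual low-impact flags with a rate limit and an ssh pipe; --single-transaction and --quick are standard mysqldump options, while the 5 MB/s cap via pv -L and the names mydb, mytable, and otherhost are assumptions:

    # consistent InnoDB snapshot, rows streamed instead of buffered in memory,
    # throttled to ~5 MB/s, compressed, and written straight to the other server
    mysqldump --single-transaction --quick mydb mytable \
      | pv -L 5m \
      | gzip \
      | ssh user@otherhost 'cat > /backup/mytable.sql.gz'

Because the output never touches the local disk, this also addresses the disk-space concern from the question.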

Slicing a production database may end up being more dangerous in the end. I also don't know how often entries get updated in your db, but slicing the export could give you an inconsistent dump, with slices of the same table taken at different times.
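That said, since the question notes the table is append-only, the slices can be made mutually consistent by fixing an upper ID bound before the first chunk is dumped. A rough sketch, assuming an auto-increment primary key named id and the same placeholder names as above; it does require scripting the IDs, which the question hoped to avoid:

    # freeze an upper bound so rows inserted during the dump are ignored
    MAX_ID=$(mysql -N -e 'SELECT MAX(id) FROM mydb.mytable')

    # dump in chunks of one million rows, each piped over ssh to its own file
    STEP=1000000
    for START in $(seq 0 "$STEP" "$MAX_ID"); do
      mysqldump --single-transaction --quick \
        --where="id >= $START AND id < $((START + STEP))" mydb mytable \
        | gzip \
        | ssh user@otherhost "cat > /backup/mytable.$START.sql.gz"
    done

When restoring, every chunk after the first would need to have been dumped with --no-create-info (or have its DROP/CREATE statements stripped), otherwise each file would wipe the rows loaded from the previous one.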
