
mysqldump compression

I am trying to understand how mysqldump works:

if I execute mysqldump on my pc and connect to a remote server:

mysqldump -u mark -h 34.32.23.23 -pxxx  --quick | gzip > dump.sql.gz

will the server compress it and send it over to me as gzip, or will my computer receive all the data first and then compress it?

Because I have a very large remote db to export, I would like to know the fastest way to do it over the network!

You should make use of ssh + scp, because running the dump on localhost is faster, and you only need to scp the gzipped file over (less network overhead).

Likely you can do this:

ssh $username@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip > /tmp/dump.sql.gz"

scp $username@34.32.23.23:/tmp/dump.sql.gz .

(the /tmp directory is just an example; change it to whatever directory you are comfortable with)
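As a variation on the above, you can also stream the compressed dump in one step and skip the intermediate file in /tmp (a minimal sketch, assuming the same host and credentials as above):

ssh $username@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip" > dump.sql.gz

Here both mysqldump and gzip run on the remote server, and only the compressed stream travels over the ssh connection.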

This is how I do it:

Do a partial export using SELECT INTO OUTFILE and create the files on the same server.

If your table contains 10 million rows, do a partial export of 1 million rows at a time, each in a separate file.
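A rough sketch of one such chunk, assuming a hypothetical table mytable with an auto-increment id column in a database called mydb (adjust names, ranges and paths to your schema); note that INTO OUTFILE writes the file on the MySQL server host itself:

# export rows 1 .. 1,000,000 into a CSV file on the server
mysql -u mark -pxxx mydb -e "
  SELECT * FROM mytable
  WHERE id BETWEEN 1 AND 1000000
  INTO OUTFILE '/tmp/mytable_part1.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
  LINES TERMINATED BY '\n'"

Repeat with id BETWEEN 1000001 AND 2000000 into /tmp/mytable_part2.csv, and so on.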

Once the 1st file is ready you can compress and transfer it. In the meantime MySQL can continue exporting data to the next file.

On the other server you can start loading the file into the new database.
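A minimal sketch of loading one transferred chunk on the other server, assuming the same hypothetical table and delimiters as above (LOAD DATA INFILE reads the file on that server's host, so it must be somewhere the MySQL server is allowed to read, subject to secure_file_priv):

# decompress the transferred chunk and load it into the new database
gunzip /tmp/mytable_part1.csv.gz
mysql -u mark -pxxx mydb -e "
  LOAD DATA INFILE '/tmp/mytable_part1.csv'
  INTO TABLE mytable
  FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
  LINES TERMINATED BY '\n'"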

BTW, a lot of this can be scripted.
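For example, the compress-and-transfer step could look roughly like this, assuming the chunk files from above and a hypothetical target host newserver:

#!/bin/bash
# compress each exported chunk and ship it to the new server,
# while MySQL keeps exporting the next chunk in parallel
for part in /tmp/mytable_part*.csv; do
    gzip "$part"                              # produces /tmp/mytable_partN.csv.gz
    scp "$part.gz" $username@newserver:/tmp/  # transfer only the compressed file
done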
