How do I dump a Postgres database QUICKLY
I have a massive Postgres 9.1 database (~450 GB) that I need to copy to a new machine, where I want to upgrade to Postgres 12. Using pg_dump stalls out after a few hours, writing only 60 GB. How can I speed up the process dramatically?
Nothing is going to do what pg_dump does faster than pg_dump itself. That said:

- Shut down access to the database while dumping, if at all possible.
- Check the system for other sources of load (particularly I/O load) and remove them if possible.
- Check if the drive is failing.
- Don't write the dump to the same disk that the database is being read from; if another disk isn't available on the source machine, consider something like pg_dump whatever | ssh anothermachine 'cat > db.sql'.
- Or, just have patience.
If you are dumping the database over ssh, add the -Z flag, followed by the compression level, to compress the output of pg_dump before it crosses the network. e.g., pg_dump -Z 9

See the documentation: https://www.postgresql.org/docs/9.3/app-pgdump.html
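Combining the two suggestions, a compressed dump streamed to the new machine might look like the sketch below; mydb, newmachine, and the output path are placeholder names, and level 9 trades source-machine CPU time for the smallest transfer:

```shell
# -Z 9: maximum compression; costs CPU on the source machine
# but minimizes the bytes sent over the network.
# With the default plain-text format, -Z produces gzip output,
# so restore on the other end with: gunzip -c db.sql.gz | psql -d mydb
# "mydb", "newmachine", and the path are placeholders.
pg_dump -Z 9 mydb | ssh newmachine 'cat > /var/tmp/db.sql.gz'
```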