
What to consider to optimize the throughput for gsutil rsync/cp command

I'm currently transferring 32 files of 4 GB each from a Google Compute Engine instance to Google Cloud Storage. I'm trying to maximize throughput during this process with "-m" and "-o GSUtil:parallel_composite_upload_threshold=150M,GSUtil:parallel_thread_count=32". But I was wondering if there is anything else I should consider and take advantage of (especially in the boto configuration) to boost the throughput.
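For reference, a minimal sketch of the command described above; the source path and bucket name are hypothetical placeholders:

    # Parallel upload of the 4 GB files; large files are split into
    # composite parts above the 150M threshold
    gsutil -m \
        -o "GSUtil:parallel_composite_upload_threshold=150M" \
        -o "GSUtil:parallel_thread_count=32" \
        cp /data/*.bin gs://my-bucket/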

The default options are fine.

Increasing the buffer size beyond 512 KB has little impact on network performance. Increasing the number of threads beyond 4 has little impact as well.
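If you prefer not to pass -o on every invocation, the same knobs can be set persistently in the boto configuration file (typically ~/.boto). The values below are an illustrative sketch, not a recommendation:

    [GSUtil]
    # Files larger than this are uploaded as parallel composite parts
    parallel_composite_upload_threshold = 150M
    # Per the answer above, more than 4 threads yields little extra benefit
    parallel_thread_count = 4
    parallel_process_count = 8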

The size of the Compute Engine instance and the distance between Compute Engine and Cloud Storage will have the most impact on performance.
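One way to gauge what a given instance/bucket pairing can actually achieve (a suggestion beyond the answer itself, using gsutil's built-in diagnostic) is to run perfdiag against the target bucket:

    # Runs read/write throughput tests between this machine and the bucket
    gsutil perfdiag gs://my-bucket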
