How to handle running out of direct memory when downloading large files with Vert.x & Netty?

I have a Vert.x web service that will occasionally download large ZIP files from AWS S3. After downloading, the archive is unzipped and individual files are re-uploaded to AWS S3. The web service is hosted as a t2.large (8GB memory) instance in AWS Elastic Beanstalk. The Java application is currently configured with between 2-4GB of heap space, and the ZIP files will be at most 10GB in size (but most will be closer to 2-4GB at most).

When the application tries to download ZIP files >2GB in size, either the initial download of the ZIP file or the re-upload of individual files always fails with a stack trace similar to the following:

Caused by: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1895825439, max: 1908932608)

After doing some research, it appears that Vert.x uses Netty to speed up network I/O, which in turn utilizes direct memory to improve download performance. It appears that the direct memory isn't being freed sufficiently quickly, which leads to out-of-memory exceptions like the above.
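For diagnosing this, a few JVM flags make Netty's direct-memory behavior observable and tunable. This is a hedged sketch (exact defaults vary by Netty version, and the jar name is a placeholder):

```shell
# -XX:MaxDirectMemorySize caps the JVM's direct buffers explicitly;
#   Netty derives its own internal limit from this value.
# -Dio.netty.maxDirectMemory=0 makes Netty fall back to plain JDK direct
#   ByteBuffers, which are counted against -XX:MaxDirectMemorySize and
#   reclaimed by the garbage collector's Cleaner.
# -Dio.netty.leakDetection.level=paranoid reports ByteBufs that were
#   never release()d -- the usual cause of "direct memory isn't freed".
java -XX:MaxDirectMemorySize=2g \
     -Dio.netty.maxDirectMemory=0 \
     -Dio.netty.leakDetection.level=paranoid \
     -jar my-service.jar
```

Leak detection at `paranoid` level is expensive; it is a debugging setting, not something to leave on in production.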

The simplest solution would just be to increase the instance size to a 16GB t2.xlarge and allocate more direct memory at runtime (e.g. -XX:MaxDirectMemorySize), but I'd like to explore other solutions first. Is there a way to programmatically force Netty to free direct memory after it's no longer in use? Is there additional Vert.x configuration I can add that might alleviate this problem?
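One pattern that sidesteps this class of problem entirely is to never aggregate the response body in memory: pipe the HTTP response straight to disk and unzip from the file, so memory use stays bounded regardless of object size. In Vert.x the equivalent is `response.pipeTo(asyncFile)` (or a `Pump` on older versions). The sketch below shows the same idea using only the JDK's built-in HTTP client so it is self-contained; the local server and file names are illustrative stand-ins for S3, not part of the original setup:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamingDownload {
    public static void main(String[] args) throws Exception {
        // Tiny local server standing in for S3: serves 32 MiB of zeros
        // in 64 KiB chunks, so nothing is ever fully buffered.
        int size = 32 * 1024 * 1024;
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/object", exchange -> {
            exchange.sendResponseHeaders(200, size);
            byte[] chunk = new byte[64 * 1024];
            try (OutputStream out = exchange.getResponseBody()) {
                for (int sent = 0; sent < size; sent += chunk.length) {
                    out.write(chunk, 0, Math.min(chunk.length, size - sent));
                }
            }
        });
        server.start();

        // Stream the body directly to a file instead of accumulating it
        // in (direct) memory; this is the JDK analogue of Vert.x's
        // response.pipeTo(asyncFile).
        Path target = Files.createTempFile("download", ".zip");
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:"
                        + server.getAddress().getPort() + "/object"))
                .build();
        HttpResponse<Path> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofFile(target));

        long written = Files.size(response.body());
        if (written != size) {
            throw new AssertionError("size mismatch: " + written);
        }
        System.out.println("status=" + response.statusCode()
                + " bytes=" + written);
        server.stop(0);
    }
}
```

The same streaming principle applies on the re-upload side: read the unzipped files from disk and upload them as streams rather than loading each entry into a buffer first.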

Please check this:

github.com/aws/aws-sdk-java-v2/issues/1301

"We have identified an issue within the SDK where it could cause excessive buffer usage and eventually OOM when using the S3 async client to download a large object to a file. The fix #1335 is available in 2.7.4. Could you try with the latest version? Feel free to re-open if you continue to see the issue." – AWS PS
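Given that issue, the practical first step is to check which AWS SDK v2 version the build actually resolves. One way to do that, assuming a Maven build (the question doesn't say which build tool is in use), is:

```shell
# List the resolved AWS SDK v2 S3 artifact; the buffer fix referenced
# above (#1335) shipped in 2.7.4, so any resolved version >= 2.7.4
# includes it.
mvn dependency:tree -Dincludes=software.amazon.awssdk:s3
```

If the resolved version is older, bumping the `software.amazon.awssdk` dependency (or its BOM) past 2.7.4 is the fix the SDK team recommends.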
