
StreamingResponseBody heap usage

I have a simple method in a controller that streams content from a database. The streaming works as intended: the download starts right after calling the endpoint. The problem is heap usage: streaming a 256 MB file takes 1 GB of heap space. If I replace service.writeContentToStream(param1, param2, out) with a method that reads data from a local file into an input stream and copies it to the passed output stream, the result is the same. The biggest file I can stream is 256 MB. Is there a possible solution to overcome the heap size limit?

    @GetMapping("/{param1}/download-stream")
    public ResponseEntity<StreamingResponseBody> downloadAsStream(
            @PathVariable("param1") String param1,
            @RequestParam(value = "param2") String param2
    ) {
        Metadata metadata = service.getMetadata(param1);
        StreamingResponseBody stream = out -> service.writeContentToStream(param1, param2, out);
        return ResponseEntity.ok()
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment;" + getFileNamePart() + metadata.getFileName())
                .header(HttpHeaders.CONTENT_LENGTH, Long.toString(metadata.getFileSize()))
                .body(stream);
    }

The service.writeContentToStream method:

 try (FileInputStream fis = new FileInputStream(fileName)) {
     StreamUtils.copy(fis, dataOutputStream);
 } catch (IOException e) {
     log.error("Error writing file to stream",e);
 }

The Metadata class contains only the file size and file name; no content is stored there.

EDIT: implementation of the StreamUtils.copy() method; it comes from the Spring library. The buffer size is set to 4096. Setting the buffer to a smaller size does not allow me to download bigger files.

    /**
     * Copy the contents of the given InputStream to the given OutputStream.
     * Leaves both streams open when done.
     * @param in the InputStream to copy from
     * @param out the OutputStream to copy to
     * @return the number of bytes copied
     * @throws IOException in case of I/O errors
     */
    public static int copy(InputStream in, OutputStream out) throws IOException {
        Assert.notNull(in, "No InputStream specified");
        Assert.notNull(out, "No OutputStream specified");

        int byteCount = 0;
        byte[] buffer = new byte[BUFFER_SIZE];
        int bytesRead = -1;
        while ((bytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead);
            byteCount += bytesRead;
        }
        out.flush();
        return byteCount;
    }

Some ideas:

  1. Run the server inside a Java profiler, for example JProfiler (it costs money).

  2. Try ServletResponse.setBufferSize(...) (see the sketch after this list).

  3. Check whether you have some filters configured in the application.

  4. Check the output buffer of the application server. In the case of Tomcat it can be quite tricky; it has a long list of possible buffers:

https://tomcat.apache.org/tomcat-8.5-doc/config/http.html
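
To make idea 2 concrete, below is a minimal sketch of a servlet filter that caps the response buffer size before anything is written; the filter class name and the 8 KB value are illustrative assumptions, not code from the original answer:

    import java.io.IOException;

    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.springframework.stereotype.Component;
    import org.springframework.web.filter.OncePerRequestFilter;

    // Hypothetical filter: cap the servlet response buffer so the container
    // flushes to the client in small chunks instead of accumulating the body.
    @Component
    public class ResponseBufferSizeFilter extends OncePerRequestFilter {

        private static final int BUFFER_SIZE = 8 * 1024; // 8 KB, adjust as needed

        @Override
        protected void doFilterInternal(HttpServletRequest request,
                                        HttpServletResponse response,
                                        FilterChain filterChain) throws ServletException, IOException {
            // setBufferSize must be called before any content is written to the response
            response.setBufferSize(BUFFER_SIZE);
            filterChain.doFilter(request, response);
        }
    }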

I wrote an article back in 2016 about StreamingResponseBody when it was first released. You can read that to get more of an idea. But even without that, what you are trying to do with the following code is not scalable at all (imagine 100 users concurrently trying to download).

 try (FileInputStream fis = new FileInputStream(fileName)) {
     StreamUtils.copy(fis, dataOutputStream);
 } catch (IOException e) {
     log.error("Error writing file to stream",e);
 }

The above code is very memory intensive: only nodes with a lot of memory can run it, and you will always have an upper bound on the file size (can it download a 1 TB file in 5 years?).

What you should do is the following:

try (FileInputStream fis = new FileInputStream(fileName)) {
    byte[] data = new byte[2048];
    int read = 0;
    while ((read = fis.read(data)) > 0) {
        dataOutputStream.write(data, 0, read);
    }
    dataOutputStream.flush();
} catch (IOException e) {
    log.error("Error writing file to stream",e);
}

This way your code can download files of any size, given that the user is able to wait, and it will not require a lot of memory.
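
As a side note (an addition for illustration, not part of the original answer), the same constant-memory copy can also be expressed with the JDK's Files.copy, which uses its own small internal buffer; the helper class below is hypothetical:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public final class FileStreamer {

        // Streams the file to the given output stream in small chunks,
        // so heap usage stays flat regardless of the file size.
        public static void writeTo(String fileName, OutputStream dataOutputStream) throws IOException {
            Files.copy(Paths.get(fileName), dataOutputStream);
            dataOutputStream.flush();
        }
    }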

For me it was a logging dependency, so if you are having problems identifying the cause of the heap usage, take a look at your logging configuration:

  <dependency>
        <groupId>org.zalando</groupId>
        <artifactId>logbook-spring-boot-starter</artifactId>
        <version>1.4.1</version>
        <scope>compile</scope>
  </dependency>
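
If Logbook turns out to be the cause (it buffers request and response bodies in memory so it can log them), one option is to exclude the streaming endpoint from logging. The sketch below assumes the Logbook 1.x Conditions helpers and that the Spring Boot starter honours a user-defined Logbook bean; the path pattern is illustrative:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.zalando.logbook.Logbook;

    import static org.zalando.logbook.Conditions.exclude;
    import static org.zalando.logbook.Conditions.requestTo;

    // Hypothetical configuration: skip Logbook for the download endpoint so
    // the streamed response body is not buffered for logging.
    @Configuration
    public class LogbookConfig {

        @Bean
        public Logbook logbook() {
            return Logbook.builder()
                    .condition(exclude(requestTo("/**/download-stream")))
                    .build();
        }
    }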
