
Browser crashes while downloading large size files

I have a Web API that reads a file from Azure and downloads it into a byte array. The client receives this byte array and downloads it as a PDF. This does not work well with large files. I am not able to figure out how I can send the bytes in chunks from the Web API to the client.

Below is the Web API code, which just returns the byte array to the client:

        CloudBlockBlob blockBlob = container.GetBlockBlobReference(fileName);
        blockBlob.FetchAttributes();

        //buffer the whole blob into memory before returning it
        byte[] data = new byte[blockBlob.Properties.Length];
        blockBlob.DownloadToByteArray(data, 0);
        return data;

The client-side code gets the data when the ajax request completes, creates a hyperlink, and sets its download attribute, which downloads the file:

var a = document.createElement("a");
a.href = 'data:application/pdf;base64,' + data.$value;
a.setAttribute("download", filename);

The error occurred for a file of 1.86 MB.

The browser displays the message: "Something went wrong while displaying the web page. To continue, reload the webpage."

The issue is most likely your server running out of memory on these large files. Don't load the entire file into a variable only to then send it out as the response. This causes a double download: your server has to download the file from Azure Storage and keep it in memory, and then your client has to download it from the server. You can do a stream-to-stream copy instead so memory is not chewed up. Here is an example from your Web API controller:

public async Task<HttpResponseMessage> GetPdf()
{
    //Normally you would use a using statement for streams, but if you use one here,
    //the stream will be closed before your client finishes downloading it.
    try
    {
        //container setup earlier in code

        var blockBlob = container.GetBlockBlobReference(fileName);

        //Open a read stream over the blob instead of buffering it into a byte array
        Stream stream = await blockBlob.OpenReadAsync();

        //Set your response content as the stream content from Azure Storage
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new StreamContent(stream);
        response.Content.Headers.ContentLength = stream.Length;

        //This could change based on your file type
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");

        return response;
    }
    catch (HttpException ex)
    {
        //A network error between your server and Azure storage
        return this.Request.CreateErrorResponse((HttpStatusCode)ex.GetHttpCode(), ex.Message);
    }
    catch (StorageException ex)
    {
        //An Azure storage exception
        return this.Request.CreateErrorResponse((HttpStatusCode)ex.RequestInformation.HttpStatusCode, "Error getting the requested file.");
    }
    catch (Exception ex)
    {
        //Catch-all: log this, but don't bleed the exception details to the client
        return this.Request.CreateErrorResponse(HttpStatusCode.BadRequest, "Bad Request");
    }
}
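
If you also want the browser to save the file under its original name instead of relying on the client-side download attribute, one option is to set a Content-Disposition header on the same response before returning it. A minimal sketch, reusing the fileName variable from above:

//Optional: suggest a file name to the browser so it saves the stream as an attachment
response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
    FileName = fileName
};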

I have used (almost exactly) this controller code and have been able to download files well over 1GB in size.
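
On the client side you can then drop the base64 data URI entirely and let the browser stream the response itself, instead of pulling the whole file into memory through an AJAX call. A minimal sketch, assuming the controller above is exposed at a hypothetical /api/pdf route that accepts the file name as a query parameter:

//Hypothetical route; adjust to wherever GetPdf is actually mapped
var url = '/api/pdf?fileName=' + encodeURIComponent(filename);

//Point a temporary link at the streaming endpoint and click it
var a = document.createElement("a");
a.href = url;
a.setAttribute("download", filename);
document.body.appendChild(a);
a.click();
document.body.removeChild(a);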
