MVC ASP.NET is using a lot of memory

If I just browse some pages on the app, it sits at around 500MB. Many of these pages access the database, but at this point I only have roughly a couple of rows in each of 10 tables, mostly storing strings and some small icons that are less than 50KB.

The real problem occurs when I download a file. The file is roughly 140MB and is stored as a varbinary(MAX) in the database. Memory usage suddenly rises to 1.3GB for a split second and then falls back to 1GB. The code for that action is here:

public ActionResult DownloadIpa(int buildId)
{
    var build = _unitOfWork.Repository<Build>().GetById(buildId);
    var buildFiles = _unitOfWork.Repository<BuildFiles>().GetById(buildId);
    if (buildFiles == null)
    {
        throw new HttpException(404, "Item not found");
    }

    var app = _unitOfWork.Repository<App>().GetById(build.AppId);
    var fileName = app.Name + ".ipa";

    app.Downloads++;
    _unitOfWork.Repository<App>().Update(app);
    _unitOfWork.Save();

    return DownloadFile(buildFiles.Ipa, fileName);
}

private ActionResult DownloadFile(byte[] file, string fileName, string type = "application/octet-stream")
{
    if (file == null)
    {
        throw new HttpException(500, "Empty file");
    }

    if (string.IsNullOrEmpty(fileName)) // also guards against a null name
    {
        throw new HttpException(500, "No name");
    }

    return File(file, type, fileName);            
}

On my local computer, if I don't do anything, memory usage stays at 1GB. If I then go back and navigate to some pages, it falls back down to 500MB.

On the deployment server, it stays at 1.6GB after the first download no matter what I do. I can force memory usage to increase by continually downloading files until it reaches 3GB, at which point it drops back down to 1.6GB.

In every controller, I have overridden the Dispose() method like so:

protected override void Dispose(bool disposing)
{
    _unitOfWork.Dispose();
    base.Dispose(disposing);
}

This refers to:

public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

public void Dispose(bool disposing)
{
    if (!_disposed)
    {
        if (disposing)
        {
            _context.Dispose();
        }
    }

    _disposed = true;
}

So my unit of work should be disposed every time the controller is disposed. I am using Unity, and I register the unit of work with a HierarchicalLifetimeManager.
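For reference, a registration of that sort might look like the sketch below. The type names are illustrative (not taken from the actual project); the point is that with a HierarchicalLifetimeManager, each child container (one per web request when using the usual Unity MVC bootstrapper) resolves its own instance, which is disposed with that child container at the end of the request.

```csharp
using Microsoft.Practices.Unity;

// Illustrative wiring -- type names are assumptions, not the project's own.
var container = new UnityContainer();
container.RegisterType<IDbContext, AppDbContext>(new HierarchicalLifetimeManager());
container.RegisterType<IUnitOfWork, UnitOfWork>(new HierarchicalLifetimeManager());
```

If the unit of work were instead registered with ContainerControlledLifetimeManager (a singleton), the context would accumulate tracked entities across requests, which is worth ruling out when chasing memory growth.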

Here are a few screenshots from the Profiler:

[Profiler screenshots]

I believe this could be the problem, or I may be going down the wrong track. Why would Find() use 300MB?

EDIT:

Repository:

public class Repository<TEntity> : IRepository<TEntity> where TEntity : class
{
    internal IDbContext Context;
    internal IDbSet<TEntity> DbSet;

    public Repository(IDbContext context)
    {
        Context = context;
        DbSet = Context.Set<TEntity>();
    }

    public virtual IEnumerable<TEntity> GetAll()
    {            
        return DbSet.ToList();
    }

    public virtual TEntity GetById(object id)
    {
        return DbSet.Find(id);
    }

    public TEntity GetSingle(Expression<Func<TEntity, bool>> predicate)
    {
        return DbSet.Where(predicate).SingleOrDefault();
    }

    public virtual RepositoryQuery<TEntity> Query()
    {
        return new RepositoryQuery<TEntity>(this);
    }

    internal IEnumerable<TEntity> Get(
        Expression<Func<TEntity, bool>> filter = null,
        Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> orderBy = null,
        List<Expression<Func<TEntity, object>>> includeProperties = null)
    {
        IQueryable<TEntity> query = DbSet;

        if (includeProperties != null)
        {
            // Include returns a new query, so the result must be assigned
            // back -- otherwise the includes are silently dropped.
            includeProperties.ForEach(i => query = query.Include(i));
        }

        if (filter != null)
        {
            query = query.Where(filter);
        }

        if (orderBy != null)
        {
            query = orderBy(query);
        }

        return query.ToList();
    }

    public virtual void Insert(TEntity entity)
    {
        DbSet.Add(entity);
    }

    public virtual void Update(TEntity entity)
    {
        DbSet.Attach(entity);
        Context.Entry(entity).State = EntityState.Modified;
    }

    public virtual void Delete(object id)
    {
        var entity = DbSet.Find(id);

        Delete(entity);
    }

    public virtual void Delete(TEntity entity)
    {
        if (Context.Entry(entity).State == EntityState.Detached)
        {
            DbSet.Attach(entity);
        }

        DbSet.Remove(entity);
    }
}

EDIT 2:

I ran dotMemory for a variety of scenarios, and this is what I got.

[dotMemory screenshot]

The red circles indicate that sometimes there are multiple rises and drops on a single page visit. The blue circle indicates the download of a 40MB file; the green circle, the download of a 140MB file. Furthermore, much of the time memory usage keeps increasing for a few more seconds even after the page has loaded.

Because the file is large (well over the 85,000-byte threshold), it is allocated on the Large Object Heap, which is only collected with a gen2 collection. You can see this in your profile: the purple blocks are the large object heap, and you can see it collected after 10 seconds.

On your production server, you most likely have much more memory than on your local machine. Because there is less memory pressure, collections won't occur as frequently, which explains why usage adds up to a higher number: several files sit on the LOH before it gets collected.

I wouldn't be surprised at all if, across the various buffers in MVC and EF, some data also gets copied around in unsafe blocks, which would explain the unmanaged memory growth (the thin spike for EF, the wide plateau for MVC).

Finally, a 500MB baseline for a large project is not completely surprising (madness! but true!).

So the most probable answer to your question of why it uses so much memory is "because it can": there is no memory pressure to force a gen2 collection, so the downloaded files sit unused in your large object heap until a collection evicts them, because memory is abundant on your production server.

This is probably not even a real problem: if there were more memory pressure, there would be more collections, and you'd see lower memory usage.

As for what to do about it, I'm afraid you're out of luck with Entity Framework: as far as I know, it has no streaming API. Web API does allow streaming the response, by the way, but that won't help you much if the whole large object is sitting in memory anyway (though it might possibly help with the unmanaged memory in the parts of MVC I haven't explored).

Add a GC.Collect() to the Dispose method for testing purposes. If the leak stays, it is a real leak. If it vanishes, it was just delayed GC.
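A minimal sketch of that diagnostic, assuming the controller override shown earlier in the question, could be:

```csharp
// Diagnostic only -- not for production. Forcing a full collection on
// controller disposal distinguishes a real leak from deferred GC.
protected override void Dispose(bool disposing)
{
    _unitOfWork.Dispose();
    base.Dispose(disposing);

    GC.Collect();                   // full, blocking gen2 collection (includes the LOH)
    GC.WaitForPendingFinalizers();  // let finalizers release their resources
    GC.Collect();                   // reclaim objects the finalizers just freed
}
```

If memory drops back to the baseline after this, the "leak" was simply the LOH waiting for a gen2 collection.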

You did that and said:

@usr Memory usage now hardly reaches 600MB. So it really was just delayed?

Clearly, there is no memory leak if GC.Collect removes the memory you were worried about. If you want to be really sure, run your test 10 times; memory usage should be stable.

Processing such big files as a single chunk can multiply memory usage as the file travels through the different components and frameworks. It can be a good idea to switch to a streaming approach.

Apparently, that consists of System.Web and all its children taking up around 200MB. This is quoted as the absolute minimum for your application pool.

Our web application, using EF 6 with a model of 220+ entities on .NET 4.0, starts up at around 480MB idle. We perform some AutoMapper operations at startup. Memory consumption peaks and then returns to around 500MB in daily use. We've just accepted this as the norm.

Now, for your file download spikes. The equivalent issue under Web Forms, when using an ashx handler or the like, was explored in this question: ASP.net memory usage during download

I don't know exactly how that relates to the FileResult in MVC, but you can see that the buffer size needed to be controlled manually to minimise the memory spike. Try to apply the principles behind that question's answer by:

Response.BufferOutput = false;
var stream = new MemoryStream(file);
stream.Position = 0;
return new FileStreamResult(stream, type); // Or just pass the "file" parameter as a stream

After applying this change, what does the memory behaviour look like?

See 'Debugging memory problems (MSDN)' for more details.

You may need to read the data in chunks and write to the output stream. Take a look at SqlDataReader.GetBytes: http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldatareader.getbytes(v=vs.110).aspx
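A chunked read with SqlDataReader.GetBytes might be sketched as follows. The table name, column name, and connection string are assumptions for illustration; the key idea is CommandBehavior.SequentialAccess, which lets the reader stream the column instead of buffering the whole row:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;

public class IpaStreamer
{
    private readonly string _connectionString;

    public IpaStreamer(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Streams the varbinary(MAX) column to `output` in 64 KB chunks
    // instead of materializing the whole file as one byte[].
    public void StreamIpa(int buildId, Stream output)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Ipa FROM BuildFiles WHERE BuildId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", buildId);
            conn.Open();
            // SequentialAccess: read the BLOB forward-only, without buffering it.
            using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                if (!reader.Read()) return;

                var buffer = new byte[64 * 1024];
                long offset = 0;
                long read;
                while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, (int)read);
                    offset += read;
                }
            }
        }
    }
}
```

In MVC this could be wired to Response.OutputStream (with Response.BufferOutput = false), so only one 64KB buffer is live at a time rather than the full 140MB.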

This could be one of a few things:

As your file is rather large, is stored in your database, and you are getting it via Entity Framework, you are caching this data in a few places. Each EF request caches the data until your context is disposed. When you return the file from the action, the data is then loaded again and streamed to the client. All of this happens in ASP.NET, as explained already.

The solution to this issue is not to stream large files directly from the database with EF and ASP.NET. A better solution is to use a background process to cache large files locally on the website, and then have the client download them via a direct URL. This allows IIS to manage the streaming, saves your website a request, and saves a lot of memory.
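A rough sketch of that approach is below. The ~/Downloads folder and file naming are illustrative, and the byte[] is still pulled through EF once to populate the cache, but only on the first request; every later download is a static file served by IIS.

```csharp
// Sketch of the cache-and-redirect approach (assumed folder layout).
public ActionResult DownloadIpa(int buildId)
{
    var cachePath = Server.MapPath("~/Downloads/" + buildId + ".ipa");
    if (!System.IO.File.Exists(cachePath))
    {
        // First request: materialize the file from the database once.
        var buildFiles = _unitOfWork.Repository<BuildFiles>().GetById(buildId);
        if (buildFiles == null)
        {
            throw new HttpException(404, "Item not found");
        }
        System.IO.File.WriteAllBytes(cachePath, buildFiles.Ipa);
    }

    // IIS streams the static file directly, with ranged requests for free.
    return Redirect(Url.Content("~/Downloads/" + buildId + ".ipa"));
}
```

In production you would also want an eviction policy for the cache folder and authorization on the download URL, which this sketch omits.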

OR (less likely)

Seeing that you are using Visual Studio 2013, this sounds awfully like a Page Inspector issue.

What happens is that when you run your website with IIS Express from Visual Studio, Page Inspector caches all of the response data, including that of your file, causing a lot of memory to be used. Try adding:

<appSettings>
    <add key="PageInspector:ServerCodeMappingSupport" value="Disabled" />
</appSettings>

to your web.config to disable Page Inspector, and see if that helps.

TL;DR

Cache the large file locally and let the client download the file directly. Let IIS handle the hard work for you.

I suggest trying the Ionic.Zip library. I use it on one of our sites with a requirement to download multiple files as one unit.

I recently tested it with a group of files, one of which was as large as 600MB:

  • Total size of zipped/compressed folder: 260MB
  • Total size of unzipped folder: 630MB
  • Memory usage spiked from 350MB to 650MB during the download
  • Total time: 1m 10s to download, no VPN
