
Free memory of byte[]

I'm currently struggling with memory usage in C#.

The tool I'm currently working on can upload and download files. It uses byte arrays as buffers for the file contents. After an upload or download operation I dispose the WebResponse and the Stream (plus Reader/Writer) objects, but the byte array stays alive in memory forever. It goes out of scope and I even set it to null, so I guess garbage collection never runs.

While searching I have found lots of articles that suggest never running the GC manually, but a minimalistic background app that constantly takes up 100 or even 1000 MB of RAM (and keeps growing the longer you use it) is anything but decent.

So, what else can be done in such a case if running the GC manually isn't recommended?

Edit 3 / Solution: I ended up using a 16 KB byte buffer that gets filled with data from file I/O. After that the buffer contents are written to the RequestStream and further actions (updating the progress bar etc.) are taken.
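
A minimal sketch of that buffered copy (assuming the same req1, worker and localFilePath as in the code below; the progress reporting is illustrative):

    // Sketch: stream the file to the FTP request in 16 KB chunks instead of
    // loading it all into one byte[] (names follow the code further down).
    byte[] buffer = new byte[16 * 1024];
    using (FileStream fs = File.OpenRead(localFilePath))
    using (Stream reqStream = req1.GetRequestStream())
    {
        long totalWritten = 0;
        int read;
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            reqStream.Write(buffer, 0, read);
            totalWritten += read;
            // update the progress bar etc. based on totalWritten / fs.Length
            worker.ReportProgress((int)(totalWritten * 100 / fs.Length));
        }
    }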

Edit 2: It seems to be related to the LOH. I will run tests on Friday and note the results here.

Edit: this is the code; maybe I'm overlooking a reference that keeps the array alive?

    internal void ThreadRun(object sender, DoWorkEventArgs e)
    {
        BackgroundWorker worker = sender as BackgroundWorker;

        UploadItem current = Upload.GetCurrent();

        if (current != null)
        {
            string localFilePath = current.src;
            string fileName = Path.GetFileName(localFilePath);
            elapsed = 0;
            progress = 0;

            try
            {
                string keyString = Util.GetRandomString(8);

                worker.ReportProgress(0, new UploadState(0, 0, 0));

                FtpWebRequest req0 = Util.CreateFtpsRequest("ftp://" + m.textBox1.Text + "/" + keyString, m.textBox2.Text, m.textBox3.Text, WebRequestMethods.Ftp.MakeDirectory);

                req0.GetResponse();

                FtpWebRequest req1 = Util.CreateFtpsRequest("ftp://" + m.textBox1.Text + "/" + keyString + "/" + fileName, m.textBox2.Text, m.textBox3.Text, WebRequestMethods.Ftp.UploadFile);

                worker.ReportProgress(0, new UploadState(1, 0, 0));

                byte[] contents = File.ReadAllBytes(localFilePath);

                worker.ReportProgress(0, new UploadState(2, 0, 0));

                req1.ContentLength = contents.Length;

                Stream reqStream = req1.GetRequestStream();

                Stopwatch timer = new Stopwatch();
                timer.Start();

                if (contents.Length > 100000)
                {
                    int hundredth = contents.Length / 100;

                    for (int i = 0; i < 100; i++)
                    {
                        worker.ReportProgress(i, new UploadState(3, i * hundredth, timer.ElapsedMilliseconds));
                        reqStream.Write(contents, i * hundredth, i < 99 ? hundredth : contents.Length - (99 * hundredth));
                    }
                }
                else
                {
                    reqStream.Write(contents, 0, contents.Length);
                    worker.ReportProgress(99, new UploadState(3, contents.Length, timer.ElapsedMilliseconds));
                }

                int contSize = contents.Length;
                contents = null;

                reqStream.Close();

                FtpWebResponse resp = (FtpWebResponse)req1.GetResponse();

                reqStream.Dispose();

                if (resp.StatusCode == FtpStatusCode.ClosingData)
                {
                    FtpWebRequest req2 = Util.CreateFtpsRequest("ftp://" + m.textBox1.Text + "/storedfiles.sfl", m.textBox2.Text, m.textBox3.Text, WebRequestMethods.Ftp.AppendFile);

                    DateTime now = DateTime.Now;

                    byte[] data = Encoding.Unicode.GetBytes(keyString + "/" + fileName + "/" + Util.BytesToText(contSize) + "/" + now.Day + "-" + now.Month + "-" + now.Year + " " + now.Hour + ":" + (now.Minute < 10 ? "0" : "") + now.Minute + "\n");

                    req2.ContentLength = data.Length;

                    Stream stream2 = req2.GetRequestStream();

                    stream2.Write(data, 0, data.Length);
                    stream2.Close();

                    data = null;

                    req2.GetResponse().Dispose();
                    stream2.Dispose();

                    worker.ReportProgress(100, new UploadState(4, 0, 0));
                    e.Result = new UploadResult("Upload successful!", "A link to your file has been copied to the clipboard.", 5000, ("http://" + m.textBox1.Text + "/u/" + m.textBox2.Text + "/" + keyString + "/" + fileName).Replace(" ", "%20"));
                }
                else
                {
                    e.Result = new UploadResult("Error", "An unknown error occurred: " + resp.StatusCode, 5000, "");
                }
            }
            catch (Exception ex)
            {
                e.Result = new UploadResult("Connection failed", "Cannot connect. Maybe your credentials are wrong, your account has been suspended or the server is offline.", 5000, "");
                Console.WriteLine(ex.StackTrace);
            }
        }
    }

At the core, the problem is that you read your file in one big chunk. If the file is large enough (85,000 bytes or more, to be precise), the resulting byte array is stored on the LOH (large object heap).

If you read up on the large object heap (there is plenty of information on the topic), you will find that it is collected much less frequently by the GC than the other heap areas, and that by default it is never compacted, which leads to fragmentation and eventually 'out of memory' exceptions.
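
A quick way to see that threshold in action (a sketch; GC.GetGeneration reports generation 2 for objects that were allocated on the LOH):

    // Sketch: byte arrays of 85,000 bytes or more land on the large object heap,
    // which the runtime treats as part of generation 2.
    byte[] small = new byte[80 * 1024];   // 81,920 bytes -> regular (small object) heap
    byte[] large = new byte[100 * 1024];  // 102,400 bytes -> large object heap

    Console.WriteLine(GC.GetGeneration(small)); // typically 0 right after allocation
    Console.WriteLine(GC.GetGeneration(large)); // 2 - LOH objects belong to gen 2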

In your case, all you need to do is read and write the bytes in smaller chunks, using a fixed-size byte array buffer (e.g. 4096 bytes), instead of reading the file all at once. In other words, you read a few bytes into your buffer, then you write them out. Then you read a few more into that same buffer and write them out again, and you keep doing that in a loop until you've read the whole file.

See the documentation on FileStream.Read for how to read your file in smaller chunks, instead of using:

    File.ReadAllBytes(localFilePath);

By doing this, you will only ever be dealing with a reasonable number of bytes at any given time, which the GC will have no problem collecting in a timely fashion when you are done.

Just write smarter code. No need at all to load the entire file into a byte[] to upload it to an FTP server. All you need is a FileStream. Use its CopyTo() method to copy from the FileStream to the NetworkStream you got from GetRequestStream().
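
A minimal sketch of that approach (assuming req1 is the upload FtpWebRequest from the question):

    // Sketch: stream the file straight into the request without a large byte[].
    // Stream.CopyTo copies via a small internal buffer (81,920 bytes by default).
    using (FileStream fs = File.OpenRead(localFilePath))
    using (Stream reqStream = req1.GetRequestStream())
    {
        fs.CopyTo(reqStream);
    }

The default CopyTo buffer of 81,920 bytes stays just below the 85,000-byte LOH threshold, so the buffer itself never ends up on the large object heap.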

If you want to show progress then you'll have to do the copying yourself; a 4096-byte buffer gets the job done. Roughly:

    using (var fs = File.OpenRead(localFilePath)) {
        byte[] buffer = new byte[4096];
        long total = 0;
        worker.ReportProgress(0);
        for (;;) {
            int read = fs.Read(buffer, 0, buffer.Length);
            if (read == 0) break;              // end of file
            reqStream.Write(buffer, 0, read);  // push this chunk to the request stream
            total += read;
            worker.ReportProgress((int)(total * 100 / fs.Length));
        }
    }

Untested, ought to be in the ball-park.

Garbage collection works differently for large objects, and compacting the large object heap is only possible with .NET Framework 4.5.1 and newer.

This code makes the GC compact the large object heap on the next full collection:

    // GCSettings lives in the System.Runtime namespace.
    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
    GC.Collect();

See also: https://msdn.microsoft.com/en-us/library/system.runtime.gcsettings.largeobjectheapcompactionmode(v=vs.110).aspx?cs-save-lang=1&cs-lang=csharp#code-snippet-2
