How to optimize memory usage in this algorithm?

I'm developing a log parser, and I'm reading text files of more than 150MB. This is my approach; is there any way to optimize what is inside the while statement? The problem is that it consumes a lot of memory. I also tried a StringBuilder, with the same memory consumption.

private void ReadLogInThread()
{
    string lineOfLog = string.Empty;

    try
    {
        StreamReader logFile = new StreamReader(myLog.logFileLocation);
        InformationUnit infoUnit = new InformationUnit();

        infoUnit.LogCompleteSize = myLog.logFileSize;

        while ((lineOfLog = logFile.ReadLine()) != null)
        {
            myLog.transformedLog.Add(lineOfLog); // List<string>
            myLog.logNumberLines++;

            infoUnit.CurrentNumberOfLine = myLog.logNumberLines;
            infoUnit.CurrentLine = lineOfLog;
            infoUnit.CurrentSizeRead += lineOfLog.Length;

            if (onLineRead != null)
                onLineRead(infoUnit);
        }
    }
    catch { throw; }
}

Thanks in advance!

EXTRA: I'm saving each line because after reading the log I will need to check for some information on every stored line. The language is C#.

Memory economy can be achieved if your log lines can actually be parsed into a data-row representation.

Here is a typical log line I can think of:

Event at: 2019/01/05:0:24:32.435, Reason: Operation, Kind: DataStoreOperation, Operation Status: Success

This line takes about 200 bytes in memory (roughly 100 characters at 2 bytes each). At the same time, the following representation takes only about 16 bytes:

enum LogReason { Operation, Error, Warning }
enum EventKind : short { DataStoreOperation, DataReadOperation }
enum OperationStatus : short { Success, Failed }

struct LogRow
{
    public DateTime EventTime;       // 8 bytes
    public LogReason Reason;         // 4 bytes
    public EventKind Kind;           // 2 bytes
    public OperationStatus Status;   // 2 bytes
}
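For illustration, here is a minimal parsing sketch for the example line above (the prefixes, date format, and method name are assumptions tied to that one sample format; adjust them to your real log layout):

// Hypothetical parser for the sample line shown earlier.
// Requires: using System; using System.Globalization;
static LogRow ParseLine(string line)
{
    // "Event at: <timestamp>, Reason: <r>, Kind: <k>, Operation Status: <s>"
    string[] parts = line.Split(',');

    return new LogRow
    {
        EventTime = DateTime.ParseExact(
            parts[0].Substring("Event at:".Length).Trim(),
            "yyyy/MM/dd:H:mm:ss.fff", CultureInfo.InvariantCulture),
        Reason = (LogReason)Enum.Parse(typeof(LogReason), parts[1].Split(':')[1].Trim()),
        Kind = (EventKind)Enum.Parse(typeof(EventKind), parts[2].Split(':')[1].Trim()),
        Status = (OperationStatus)Enum.Parse(typeof(OperationStatus), parts[3].Split(':')[1].Trim())
    };
}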

Another optimization possibility is parsing each line into an array of string tokens; this way you can make use of string interning. For example, if the word "DataStoreOperation" takes 36 bytes (18 characters at 2 bytes each) and has 1,000,000 entries in the file, the saving is (18*2 - 4) * 1,000,000 = 32,000,000 bytes, since every duplicate collapses to a single 4-byte reference (on a 32-bit process) to the one interned instance.
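A minimal sketch of the interning idea (the separator characters here are an assumption; adjust them to your log format):

// Sketch: intern each token so repeated words such as "DataStoreOperation"
// are stored once per process instead of once per line.
// Requires: using System;
static string[] TokenizeAndIntern(string line)
{
    string[] tokens = line.Split(new[] { ' ', ',', ':' }, StringSplitOptions.RemoveEmptyEntries);
    for (int i = 0; i < tokens.Length; i++)
    {
        tokens[i] = string.Intern(tokens[i]);
    }
    return tokens;
}

Keep in mind that interned strings live for the lifetime of the process, so intern only tokens you expect to repeat heavily.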

Try to make your algorithm sequential.

Using an IEnumerable instead of a List plays nicer with memory while keeping the same semantics as working with a list, provided you don't need random access to lines by index.

IEnumerable<string> ReadLines()
{
  using (var logFile = new StreamReader(myLog.logFileLocation))
  {
    string lineOfLog;
    while ((lineOfLog = logFile.ReadLine()) != null)
    {
      yield return lineOfLog;
    }
  }
}
//...
foreach (var line in ReadLines())
{
  ProcessLine(line);
}

I am not sure if it will fit your project, but you can store the result in a StringBuilder instead of a list of strings.

For example, this program takes 250MB of memory on my machine after loading a 50MB file:

static void Main(string[] args)
{
    using (StreamReader streamReader = File.OpenText("file.txt"))
    {
        var list = new List<string>();
        string line;
        while ((line = streamReader.ReadLine()) != null)
        {
            list.Add(line);
        }
    }
}

On the other hand, this version takes only 100MB, because a StringBuilder stores the text in a few large internal buffers instead of one heap object per line:

static void Main(string[] args)
{
    var stringBuilder = new StringBuilder();
    using (StreamReader streamReader = File.OpenText("file.txt"))
    {
        string line;
        while ((line = streamReader.ReadLine()) != null)
        {
            stringBuilder.AppendLine(line);
        }
    }
}

Memory usage keeps going up because you're adding every line to a List<string> that grows without bound. If you want to use less memory, one thing you can do is write the data to disk rather than keeping it in scope. Of course, this will greatly degrade speed.

Another option is to compress the string data as you store it in your list and decompress it on the way out, but I don't think this is a good method.
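If you want to try it anyway, here is a rough sketch of the round trip with GZipStream (whether this actually saves memory is an assumption to verify: GZip has per-stream overhead, so it usually only pays off for long lines or for batches of lines compressed together):

// Requires: using System.IO; using System.IO.Compression; using System.Text;
static byte[] CompressLine(string line)
{
    using (var output = new MemoryStream())
    {
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            byte[] bytes = Encoding.UTF8.GetBytes(line);
            gzip.Write(bytes, 0, bytes.Length);
        }
        // ToArray still works after the GZipStream has closed the MemoryStream.
        return output.ToArray();
    }
}

static string DecompressLine(byte[] data)
{
    using (var input = new MemoryStream(data))
    using (var gzip = new GZipStream(input, CompressionMode.Decompress))
    using (var reader = new StreamReader(gzip, Encoding.UTF8))
    {
        return reader.ReadToEnd();
    }
}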

Side Note:

You need to add a using block around your StreamReader:

using (StreamReader logFile = new StreamReader(myLog.logFileLocation))

Consider this implementation (I'm speaking C/C++; substitute C# as needed; a C# sketch of the same idea follows the steps below):

1) Use fseek/ftell to find the size of the file.

2) Use malloc to allocate a chunk of memory the size of the file + 1, and set that last byte to '\0' to terminate the string.

3) Use fread to read the entire file into the memory buffer. You now have a char * which holds the contents of the file as one string.

4) Create a vector of const char * to hold pointers to the positions in memory where each line can be found. Initialize the first element of the vector to the first byte of the memory buffer.

5) Find the line-ending characters (probably \r\n). Replace the \r with \0 to make the line a string, increment past the \n, and push the new pointer location onto the vector.

6) Repeat the above until all of the lines in the file have been NUL-terminated and are pointed to by elements in the vector.

7) Iterate through the vector as needed to investigate the contents of each line, in your business-specific way.

8) When you are done, close the file, free the memory, and continue happily along your way.
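In C#, the closest equivalent is a sketch like the following, assuming the whole file fits comfortably in memory as one string: load the file once, remember only the offset of each line, and materialize individual lines on demand (the class and member names here are illustrative, not from the original post):

using System;
using System.Collections.Generic;
using System.IO;

class BufferedLog
{
    private readonly string buffer;                       // entire file as one string
    private readonly List<int> lineStarts = new List<int>();

    public BufferedLog(string path)
    {
        buffer = File.ReadAllText(path);
        lineStarts.Add(0);
        for (int i = 0; i < buffer.Length; i++)
        {
            if (buffer[i] == '\n' && i + 1 < buffer.Length)
                lineStarts.Add(i + 1);                    // next line begins after '\n'
        }
    }

    public int LineCount { get { return lineStarts.Count; } }

    // Materialize a line only when it is actually inspected.
    public string GetLine(int index)
    {
        int start = lineStarts[index];
        int end = index + 1 < lineStarts.Count ? lineStarts[index + 1] : buffer.Length;
        return buffer.Substring(start, end - start).TrimEnd('\r', '\n');
    }
}

Note that GetLine allocates a new string on each call via Substring, so this trades persistent per-line storage for on-demand allocation.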

1) Compress the strings before you store them (see System.IO.Compression and GZipStream). This would probably kill the performance of your program, though, since you'd have to decompress each line to read it.

2) Remove any extra whitespace characters or common words you can do without. I.e., if you can understand what the log is saying without the words "the", "a", "of"..., remove them. Also, shorten any common words (e.g. change "error" to "err" and "warning" to "wrn"). This would slow down this step in the process but shouldn't affect the performance of the rest.

What encoding is your original file? If it is ASCII, then just the strings alone are going to take over 2x the size of the file to load up into your array: a C# character is 2 bytes, and a C# string adds an extra 20 bytes per string on top of the characters.

In your case, since it is a log file, you can probably exploit the fact that there is a lot of repetition in the messages. You most likely can parse the incoming line into a data structure, which reduces the memory overhead. For example, if you have a timestamp in the log file you can convert that to a DateTime value, which is 8 bytes. Even a short timestamp like 1/1/10 would add 12 bytes to the size of a string, and a timestamp with time information would be even longer. Other tokens in your log stream might be turned into a code or an enum in a similar manner.

Even if you have to leave the value as a string, if you can break it down into pieces that are used a lot, or remove boilerplate that is not needed at all, you can probably cut down on your memory usage. If there are a lot of common strings you can intern them and pay for only one string no matter how many occurrences you have.

If you must store the raw data, and assuming that your logs are mostly ASCII, then you can save some memory by storing UTF-8 bytes internally. Strings are UTF-16 internally, so you're storing an extra byte for each character. By switching to UTF-8 you cut memory use roughly in half (not counting per-object overhead, which is still significant). Then you can convert back to normal strings as needed:

static void Main(string[] args)
{
    List<byte[]> strings = new List<byte[]>();

    using (TextReader tr = new StreamReader(@"C:\test.log"))
    {
        string s = tr.ReadLine();
        while (s != null)
        {
            strings.Add(Encoding.UTF8.GetBytes(s)); // store UTF-8 bytes instead of a string
            s = tr.ReadLine();
        }
    }

    // Get strings back
    foreach (var str in strings)
    {
        Console.WriteLine(Encoding.UTF8.GetString(str));
    }
}
