
Exception of type 'System.OutOfMemoryException' was thrown

Basically, I use Entity Framework to query a huge database. I want to return a string list and then log it to a text file.

List<string> logFilePathFileName = new List<string>();
var query = from c in DBContext.MyTable where condition == something select c;
foreach (var result in query)
{
    string filePath = result.FilePath;
    string fileName = result.FileName;
    string temp = filePath + "." + fileName;
    logFilePathFileName.Add(temp);
    if (logFilePathFileName.Count % 1000 == 0)
        Console.WriteLine(temp + "." + logFilePathFileName.Count);
}

However, I got an exception when logFilePathFileName.Count reached 397000. The exception is:

Exception of type 'System.OutOfMemoryException' was thrown.

A first chance exception of type 'System.OutOfMemoryException' occurred in System.Data.Entity.dll

UPDATE:

What I want is to use a different query, say select top 1000, and then add the results to the list, but I don't know what to do after the first 1000.

Most probably it's not about RAM as such, so increasing your RAM, or even compiling and running your code on a 64-bit machine, will not have a positive effect in this case.

I think it's related to the fact that the .NET runtime limits any single object, and hence a collection's backing array, to a maximum of 2 GB by default (no difference between 32-bit and 64-bit; on 64-bit .NET 4.5+ the limit can be raised for arrays with the gcAllowVeryLargeObjects setting, but chunking is the better fix here).

To resolve this, split your list into much smaller chunks, and most probably your problem will be gone.

Just one possible solution:

foreach (var result in query)
{
    ....
    if (logFilePathFileName.Count % 1000 == 0)
    {
        Console.WriteLine(temp + "." + logFilePathFileName.Count);
        // WRITE THE CHUNK WHEREVER YOU NEED IT
        logFilePathFileName = new List<string>(); // RESET THE LIST
    }
}
// Remember to write out whatever is still in the list once the loop ends.

EDIT

If you want to fragment a query, you can use Skip(...) and Take(...).

Just an explanatory example:

var first1000 = query.Skip(0).Take(1000);
var second1000 = query.Skip(1000).Take(1000);

... and so on.

Naturally, put it in your iteration and parameterize it based on the bounds of the data you know or need.
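
A minimal sketch of that paging loop, assuming MyTable has a key column named Id to order by (EF requires a sorted input for Skip; the Id column is an assumption, and FilePath/FileName are the property names from the question):

using System.IO;
using System.Linq;

const int pageSize = 1000;
int pageIndex = 0;

using (var writer = new StreamWriter("log.txt", append: true))
{
    while (true)
    {
        // Skip/Take needs a stable ordering to page deterministically;
        // Id is an assumed key column here.
        var page = DBContext.MyTable
                            .OrderBy(c => c.Id)
                            .Skip(pageIndex * pageSize)
                            .Take(pageSize)
                            .ToList(); // materialize only this chunk

        if (page.Count == 0)
            break; // no more rows

        foreach (var result in page)
            writer.WriteLine(result.FilePath + "." + result.FileName);

        pageIndex++;
    }
}

Each pass through the loop keeps at most one 1000-row chunk in memory.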

Why are you collecting the data in a List<string> if all you need to do is write it to a text file?

You might as well just:

  • Open the text file;
  • Iterate over the records, appending each string to the text file (without storing the strings in memory);
  • Flush and close the text file.

You will need far less memory than now, because you won't be keeping all those strings unnecessarily in memory.
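
A minimal sketch of that streaming approach, reusing the query and property names from the question (the where clause is still the asker's placeholder):

using System.IO;

var query = from c in DBContext.MyTable where condition == something select c;

using (var writer = new StreamWriter("log.txt"))
{
    foreach (var result in query)
    {
        // Write each line straight to disk instead of buffering it in a list.
        writer.WriteLine(result.FilePath + "." + result.FileName);
    }
} // Disposing the writer flushes and closes the file.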

You probably need to set some VM args for memory! Also, look into writing it straight to your file rather than holding it in a List.

What Roy Dictus says sounds like the best way. Also, you can try adding a limit to your query so your database result won't be so large.

For more info, see: Limiting query size with Entity Framework

You shouldn't read all the records from the database into a list; that requires a lot of memory. Instead, combine reading records with writing them to the file: for example, read 1000 records from the database into a list, save (append) them to the text file, clear the used memory (list.Clear()) and continue with the next records.
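
A minimal sketch of that batching pattern, reusing the loop from the question (File.AppendAllLines is one simple way to append a chunk):

using System.Collections.Generic;
using System.IO;

var buffer = new List<string>();
foreach (var result in query)
{
    buffer.Add(result.FilePath + "." + result.FileName);
    if (buffer.Count == 1000)
    {
        File.AppendAllLines("log.txt", buffer); // flush this chunk to disk
        buffer.Clear();                         // release the strings for collection
    }
}
if (buffer.Count > 0)
    File.AppendAllLines("log.txt", buffer);     // write the final partial chunk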

From several other topics on Stack Overflow I read that Entity Framework is not designed to handle bulk data like that: EF caches/tracks all data in the context, and that is what causes the exception with huge volumes of data. The options are to use SQL directly or to split your records into smaller sets.
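
If you stay with EF, one way to stop the context from tracking every materialized entity is AsNoTracking() (available since EF 4.1; the where clause below is the question's placeholder):

using System.Data.Entity; // brings the AsNoTracking() extension method into scope

// Entities returned by this query are not added to the context's change
// tracker, so iterating a large result set no longer accumulates memory
// inside the DbContext.
var query = from c in DBContext.MyTable.AsNoTracking()
            where condition == something
            select c;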

I used to use the gc ArrayList in Visual C++, similar to the gc List that you used; it works fine with small and intermediate data sets, but with big data the same 'System.OutOfMemoryException' was thrown. Since the size of these gc collections cannot exceed 2 GB, they become inefficient with big data, so I built my own linked list, which gives the same functionality: dynamic growth and access by index. Basically, it is a normal linked-list class with a dynamic array inside to provide access to the data by index. It duplicates the space, but if you do not need the linked list after updating the array, you may delete it and keep only the dynamic array, which would solve the problem. See the code:

#include <cstddef>

struct LinkedNode
{
    long data;
    LinkedNode* next;
};

class LinkedList
{
public:
    LinkedList();
    ~LinkedList();
    LinkedNode* head;
    long Count;
    long* Data;   // dynamic array filled by update() for access by index
    void add(long data);
    void update();
    //long get(long index);
};

LinkedList::LinkedList()
{
    this->Count = 0;
    this->head = NULL;
    this->Data = NULL; // must be initialized: the destructor tests it
}

LinkedList::~LinkedList()
{
    LinkedNode* temp;
    while (head) {
        temp = this->head;
        head = head->next;
        delete temp;
    }
    if (Data) {
        delete[] Data;
        Data = NULL;
    }
}

// Insert at the head; the list holds elements in reverse insertion order.
void LinkedList::add(long data)
{
    LinkedNode* node = new LinkedNode();
    node->data = data;
    node->next = this->head;
    this->head = node;
    this->Count++;
}

// Copy the list contents into a plain dynamic array for indexed access.
void LinkedList::update()
{
    delete[] this->Data; // safe on NULL; avoids a leak if update() runs twice
    this->Data = new long[this->Count];
    long i = 0;
    LinkedNode* node = this->head;
    while (node) {
        this->Data[i] = node->data;
        node = node->next;
        i++;
    }
}

If you use this, please cite my work: https://www.liebertpub.com/doi/10.1089/big.2018.0064
