
Prevent my RAM from reaching 100%

I have a very simple Python script that reads a CSV file and sorts the rows by timestamp. However, the file is large (16 GB), and reading it consumes all of my RAM. When usage reaches 100% (i.e. 64 GB of RAM), my system freezes completely and I am forced to restart my computer.

Here is the code:

import pandas as pd
from time import time

filename = 'AKER_OB.csv'

start_ = time()
file_ = pd.read_csv(filename)
end_ = time()
duration = end_ - start_
print("The duration to load that file : {}".format(duration))

file_['TimeStamp'] = pd.to_datetime(file_['TimeStamp'], format="%Y-%m-%d %H:%M:%S")
file_ = file_.sort_values('TimeStamp')

Head of AKER_OB.csv:

TimeStamp,Bid1,BidSize1,Bid2,BidSize2,Bid3,BidSize3,Bid4,BidSize4,Bid5,BidSize5,Bid6,BidSize6,Bid7,BidSize7,Bid8,BidSize8,Bid9,BidSize9,Bid10,BidSize10,Bid11,BidSize11,Bid12,BidSize12,Bid13,BidSize13,Bid14,BidSize14,Bid15,BidSize15,Bid16,BidSize16,Bid17,BidSize17,Bid18,BidSize18,Bid19,BidSize19,Bid20,BidSize20,Ask1,AskSize1,Ask2,AskSize2,Ask3,AskSize3,Ask4,AskSize4,Ask5,AskSize5,Ask6,AskSize6,Ask7,AskSize7,Ask8,AskSize8,Ask9,AskSize9,Ask10,AskSize10,Ask11,AskSize11,Ask12,AskSize12,Ask13,AskSize13,Ask14,AskSize14,Ask15,AskSize15,Ask16,AskSize16,Ask17,AskSize17,Ask18,AskSize18,Ask19,AskSize19,Ask20,AskSize20
2016-10-08 00:00:00,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:01,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:02,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:03,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:04,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:05,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:06,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:07,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:08,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0

What is the correct way to fix this problem? A full answer with a code snippet would be appreciated.

Essentially, you have to implement your own out-of-memory sorting.

  1. Split your file into two or more pieces with the pandas CSV chunker (the chunksize argument of read_csv), sort each piece one at a time, save it into a separate CSV file, and free the memory with del.

  2. Merge the sorted files: open all of the saved pre-sorted files with CSV readers, combine the rows from the chunks as needed, and append the merged, sorted rows to the output file. A sketch of both steps is shown below.
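Here is a minimal sketch of that approach. It assumes the file and column names from your question (AKER_OB.csv, TimeStamp), that there is enough free disk space for a temporary sorted copy of the data, and that sorting the timestamp as a string is acceptable (ISO-format timestamps sort correctly lexicographically). The chunk size is a guess; tune it to your available RAM.

import csv
import heapq
import os

import pandas as pd

filename = 'AKER_OB.csv'
chunk_size = 2_000_000          # rows per chunk; adjust to fit your RAM
tmp_files = []

# Step 1: sort each chunk separately and write it to its own temporary CSV.
for i, chunk in enumerate(pd.read_csv(filename, chunksize=chunk_size)):
    chunk = chunk.sort_values('TimeStamp')
    tmp_name = 'sorted_chunk_{}.csv'.format(i)
    chunk.to_csv(tmp_name, index=False)
    tmp_files.append(tmp_name)
    del chunk                   # free the memory before reading the next chunk

# Step 2: k-way merge of the pre-sorted files, row by row.
# heapq.merge keeps only one row per file in memory at any time.
handles = [open(name, newline='') for name in tmp_files]
readers = [csv.reader(f) for f in handles]

header = None
for r in readers:
    header = next(r)            # skip (and remember) each file's header row

with open('AKER_OB_sorted.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(header)
    for row in heapq.merge(*readers, key=lambda row: row[0]):
        writer.writerow(row)

for f in handles:
    f.close()
for name in tmp_files:
    os.remove(name)             # clean up the temporary chunk files

With this approach, memory usage is bounded by the chunk size in step 1 and is negligible in step 2, at the cost of writing the data to disk twice.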

Just split the reading of the file into chunks. A similar case has been discussed before.
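For illustration, the chunked-read pattern looks like this (process is a placeholder for whatever you do with each piece, not a real function):

import pandas as pd

# Process the file piece by piece instead of loading all 16 GB at once.
for chunk in pd.read_csv('AKER_OB.csv', chunksize=1_000_000):
    process(chunk)   # replace with your own per-chunk logic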

Also consider adding a swap partition or swap file to your OS; it will help in other situations where you run out of RAM.
