
How do I make this code not consume so much RAM?

I have these two functions, and when I run them my kernel dies very quickly. What can I do to prevent that? It happens after appending roughly 10 files to the dataframe. Unfortunately the JSON files are quite large (about 150 MB each, and there are several dozen of them), and I don't know how to join them together.

import os
import json
import pandas as pd

def filtering_nodes(df):
    id_list = df.index.tolist()
    print("Dropping rows without 4 nodes and 3 members...")
    for x in id_list:
        if len(df['Nodes'][x]) != 4 and len(df['Members'][x]) != 3:
            df = df.drop(x)
    print("Converting to csv...")
    df.to_csv("whole_df.csv", sep='\t')
    return df

def merge_JsonFiles(filename):
    cnt = 0

    df_all = None
    data_all = None

    for f1 in filename:
        print("Appending file: ", f1)
        with open('../../data/' + f1, 'r') as infile:
            data_all = json.loads(infile.read())
        if cnt == 0:
            df_all = pd.json_normalize(data_all, record_path=['List2D'], max_level=2, sep="-")
        else:
            df_all = df_all.append(pd.json_normalize(data_all, record_path=['List2D'], max_level=2, sep="-"), ignore_index=True)
        cnt += 1

    return df_all

files = os.listdir('../../data')
df_all_test = merge_JsonFiles(files)
df_all_test_drop = filtering_nodes(df_all_test)

Edit: thanks to @jlandercy's answer, I came up with this:

def merging_to_csv():
    for path in pathlib.Path("../../data/loads_data/Dane/hilti/").glob("*.json"):
        # Open source file one by one:
        with path.open() as handler:
            df = pd.json_normalize(json.load(handler), record_path=['List2D'])
        # Identify rows to drop (boolean indexing):
        q = (df["Nodes"] != 4) & (df["Members"] != 3)
        # Inplace drop (no extra copy in RAM):
        df.drop(q, inplace=True)
        # Append data to disk instead of RAM:
        df.to_csv("output.csv", mode="a", header=False)

merging_to_csv()

I get this kind of error:

KeyError                                  Traceback (most recent call last)
<ipython-input-55-cf18265ca50e> in <module>
----> 1 merging_to_csv()

<ipython-input-54-698c67461b34> in merging_to_csv()
     51         q = (df["Nodes"] != 4) & (df["Members"] != 3)
     52         # Inplace drop (no extra copy in RAM):
---> 53         df.drop(q, inplace=True)
     54         # Append data to disk instead of RAM:
     55         df.to_csv("output.csv", mode="a", header=False)

/opt/conda/lib/python3.7/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs)
    309                     stacklevel=stacklevel,
    310                 )
--> 311             return func(*args, **kwargs)
    312 
    313         return wrapper

/opt/conda/lib/python3.7/site-packages/pandas/core/frame.py in drop(self, labels, axis, index, columns, level, inplace, errors)
   4906             level=level,
   4907             inplace=inplace,
-> 4908             errors=errors,
   4909         )
   4910 

/opt/conda/lib/python3.7/site-packages/pandas/core/generic.py in drop(self, labels, axis, index, columns, level, inplace, errors)
   4148         for axis, labels in axes.items():
   4149             if labels is not None:
-> 4150                 obj = obj._drop_axis(labels, axis, level=level, errors=errors)
   4151 
   4152         if inplace:

/opt/conda/lib/python3.7/site-packages/pandas/core/generic.py in _drop_axis(self, labels, axis, level, errors)
   4183                 new_axis = axis.drop(labels, level=level, errors=errors)
   4184             else:
-> 4185                 new_axis = axis.drop(labels, errors=errors)
   4186             result = self.reindex(**{axis_name: new_axis})
   4187 

/opt/conda/lib/python3.7/site-packages/pandas/core/indexes/base.py in drop(self, labels, errors)
   6016         if mask.any():
   6017             if errors != "ignore":
-> 6018                 raise KeyError(f"{labels[mask]} not found in axis")
   6019             indexer = indexer[~mask]
   6020         return self.delete(indexer)

KeyError: '[ True  True  True  True  True  True  True  True  True  True  True  True\n  True  True  True  True  True  True  True  True  True  True  True  True\n  True  True  True  True  True  True  True  True  True  True  True  True\n  True  True  True  True  True  True  True  True  True  True  True  True\n  True  True  True  True  True  True  True  True  True  True  True  True\n  True  True  True  True  True  True  True  True  True  True  True  True\n  True  True  True  True  True  True  True  True  True  True  True  True\n  True  True  True  True  True  True  True  True  True  True  True  True\n  True] not found in axis'

What's wrong? I've uploaded two minimal JSON files here: https://drive.google.com/drive/folders/1xlC-kK6NLGr0isdy1Ln2tzGmel45GtPC?usp=sharing

You are facing several issues with your original approach:

  • multiple copies of the dataframe: df = df.drop(...) ;
  • the whole dataset is held in RAM because of append ;
  • the for loop is unnecessary for filtering rows; use boolean indexing instead (see the sketch after this list).
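This also explains the KeyError in your edit: DataFrame.drop expects index labels, not a boolean Series, so pandas tried to look up the labels True/False in the index and failed. A minimal sketch of the difference, using a hypothetical toy frame:

import pandas as pd

# Toy frame standing in for the normalized JSON data (hypothetical values):
df = pd.DataFrame({"Nodes": [[1, 2, 3, 4], [1, 2]], "Members": [[1, 2, 3], [1]]})

# Boolean mask marking the rows to discard:
q = (df["Nodes"].apply(len) != 4) & (df["Members"].apply(len) != 3)

# df.drop(q, inplace=True)     # KeyError: drop() wants labels, not a mask
kept = df.drop(df.index[q])    # works: translate the mask into index labels
kept = df.loc[~q, :]           # equivalent and simpler: keep the complement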

Here is a baseline snippet that solves the problem, based on the data samples you provided:

import json
import pathlib
import pandas as pd

# Iterate over source files:
for path in pathlib.Path(".").glob("result*.json"):
    # Open source files one by one:
    with path.open() as handler:
        # Normalize the JSON model:
        df = pd.json_normalize(json.load(handler), record_path=['List2D'], max_level=2, sep="-")
    # Apply len to the list fields to identify rows to drop (boolean indexing):
    q = (df["Nodes"].apply(len) != 4) & (df["Members"].apply(len) != 3)
    # Filter and append data to disk instead of RAM:
    df.loc[~q, :].to_csv("output.csv", mode="a", header=False)

It loads the files into RAM one at a time and appends the filtered rows to disk instead of RAM. These fixes drastically reduce RAM usage, which should stay bounded by roughly twice the size of the largest JSON file.
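If you later need to work with the combined result, you can stream it back from disk in chunks instead of loading it whole. A sketch, assuming the output.csv produced above (written with header=False, hence header=None when reading):

import pandas as pd

total_rows = 0
# Stream the combined CSV back in fixed-size chunks to keep RAM usage flat:
for chunk in pd.read_csv("output.csv", header=None, chunksize=100_000):
    total_rows += len(chunk)  # replace with the real per-chunk processing
print(f"Rows on disk: {total_rows}")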


