
Using joblib makes python consume increasing amounts of RAM as the script runs

I have a large number of files to load; I do some processing on them and then store the processed data. To do this, I have the following code:

from os import listdir
from os.path import dirname, abspath, isfile, join
import pandas as pd
import sys
import time
# Parallel processing (joblib runs workers in separate processes)
from joblib import Parallel, delayed
import multiprocessing

# Number of cores
TOTAL_NUM_CORES = multiprocessing.cpu_count()
# Path of this script's file
FILES_PATH = dirname(abspath(__file__))

def read_and_convert(f,num_files):
    # Read the file
    dataframe = pd.read_csv(FILES_PATH + '\\Tick\\' + f, low_memory=False, header=None, names=['Symbol', 'Date_Time', 'Bid', 'Ask'], index_col=1, parse_dates=True)
    # Resample the data to have minute-to-minute data, Open-High-Low-Close format.
    data_bid = dataframe['Bid'].resample('60S').ohlc()
    data_ask = dataframe['Ask'].resample('60S').ohlc()
    # Concatenate the OHLC data
    data_ask_bid = pd.concat([data_bid, data_ask], axis=1, keys=['Bid', 'Ask'])
    # Keep only non-weekend data (from Monday 00:00 until Friday 22:00)
    data_ask_bid = data_ask_bid[(((data_ask_bid.index.weekday >= 0) & (data_ask_bid.index.weekday <= 3)) | ((data_ask_bid.index.weekday == 4) & (data_ask_bid.index.hour < 22)))]
    # Save the processed and concatenated data of each month in a different folder
    data_ask_bid.to_csv(FILES_PATH + '\\OHLC\\' + f)
    print(f)

def main():
    start_time = time.time()
    # Get the paths for all the tick data files
    files_names = [f for f in listdir(FILES_PATH + '\\Tick\\') if isfile(join(FILES_PATH + '\\Tick\\', f))]

    num_cores = int(TOTAL_NUM_CORES/2)
    print('Converting Tick data to OHLC...')
    print('Using ' + str(num_cores) + ' cores.')
    # Open and convert files in parallel
    Parallel(n_jobs=num_cores)(delayed(read_and_convert)(f,len(files_names)) for f in files_names)
    # for f in files_names: read_and_convert(f,len(files_names)) # non-parallel
    print("\nTook %s seconds." % (time.time() - start_time))

if __name__ == "__main__":
    main()

The first few files get processed really fast this way, but the script becomes sluggish as it works through more and more files. As more files are processed, RAM usage grows steadily, as shown below. Is joblib failing to flush data that is no longer needed as it loops through the files?

(Screenshot: RAM usage climbing steadily as more files are processed)

Adding gc.collect() as the last line of the function that runs in parallel prevents the RAM from saturating. gc.collect() invokes Python's garbage collector.
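
As a minimal sketch, this is how the fix could look applied to the question's own function (only the gc import and the final gc.collect() call are new; everything else is unchanged):

import gc

def read_and_convert(f, num_files):
    # Read the file
    dataframe = pd.read_csv(FILES_PATH + '\\Tick\\' + f, low_memory=False, header=None, names=['Symbol', 'Date_Time', 'Bid', 'Ask'], index_col=1, parse_dates=True)
    # Resample to minute bars in Open-High-Low-Close format
    data_bid = dataframe['Bid'].resample('60S').ohlc()
    data_ask = dataframe['Ask'].resample('60S').ohlc()
    data_ask_bid = pd.concat([data_bid, data_ask], axis=1, keys=['Bid', 'Ask'])
    # Keep only non-weekend data (Monday 00:00 until Friday 22:00)
    data_ask_bid = data_ask_bid[(((data_ask_bid.index.weekday >= 0) & (data_ask_bid.index.weekday <= 3)) | ((data_ask_bid.index.weekday == 4) & (data_ask_bid.index.hour < 22)))]
    data_ask_bid.to_csv(FILES_PATH + '\\OHLC\\' + f)
    print(f)
    # Explicitly trigger a garbage-collection pass before the worker picks up
    # the next file, so large intermediate DataFrames are freed promptly and
    # memory does not keep accumulating inside long-lived worker processes.
    gc.collect()

Because joblib reuses the same worker processes for many files, objects caught in reference cycles are only reclaimed when the cyclic garbage collector happens to run; forcing a collection at the end of each call keeps each worker's memory footprint roughly constant.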

