
How to efficiently join/merge/concatenate large data frame in pandas?

The goal is to create one large data frame on which I can perform operations, such as averaging each column's rows, etc.

The problem is that as the data frame grows, each iteration gets slower, so I cannot finish the computation.

Note: my df has only two columns, of which col1 is unnecessary, hence why I join on it. col1 is a string and col2 is a float. The number of rows is 3k. Here is an example:

folder_paths    float
folder/Path     1.12630137
folder/Path2    1.067517426
folder/Path3    1.06443264
folder/Path4    1.049119625
folder/Path5    1.039635769

Question: any ideas on how to make the code more efficient and where the bottleneck is? Also, I'm not sure whether merge is the way to go.

Current idea: one solution I am contemplating is to pre-allocate the memory and specify the column types: col1 is a string and col2 is a float (a sketch of this idea follows the current code below).

import pandas as pd

df = pd.DataFrame()  # create an empty data frame

for i in range(1000):
    if i == 0:
        df = generate_new_df(arg1, arg2)
    else:
        df = pd.merge(df, generate_new_df(arg1, arg2), on='col1', how='outer')
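
A minimal sketch of that pre-allocation idea (an assumption on my part: it only helps if every generate_new_df(arg1, arg2) call returns the same ~3k folder paths, so the frame can be sized once up front and filled column by column instead of being re-merged on every iteration):

import numpy as np
import pandas as pd

n_runs = 1000

first = generate_new_df(arg1, arg2)      # col1: folder paths (str), col2: float
index = first['col1'].to_numpy()         # shared row labels, ~3k paths

# Reserve the whole float block up front instead of growing it merge by merge.
wide = pd.DataFrame(np.empty((len(index), n_runs), dtype='float64'),
                    index=index,
                    columns=['run_%d' % i for i in range(n_runs)])

wide.iloc[:, 0] = first['col2'].to_numpy()
for i in range(1, n_runs):
    chunk = generate_new_df(arg1, arg2)
    # Align on col1 in case the path order differs between runs.
    wide.iloc[:, i] = chunk.set_index('col1')['col2'].reindex(index).to_numpy()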

I also tried using pd.concat, but the results were very similar: the time increases after each iteration.

df = pd.concat([df, get_os_is_from_folder(pnlList, sampleSize, randomState)], axis=1)

Results with pd.concat:

run 1     time 0.34s
run 2     time 0.34s
run 3     time 0.32s
run 4     time 0.33s
run 5     time 0.42s
run 6     time 0.41s
run 7     time 0.45s
run 8     time 0.46s
run 9     time 0.54s
run 10    time 0.58s
run 11    time 0.73s
run 12    time 0.72s
run 13    time 0.79s
run 14    time 0.87s
run 15    time 0.95s
run 16    time 1.06s
run 17    time 1.19s
run 18    time 1.24s
run 19    time 1.37s
run 20    time 1.57s
run 21    time 1.68s
run 22    time 1.93s
run 23    time 1.86s
run 24    time 1.96s
run 25    time 2.11s
run 26    time 2.32s
run 27    time 2.42s
run 28    time 2.57s

Using pd.concat with a list of data frames dfList produced similar results. Here are the code and the results.

dfList=[]
for i in range(1000):
    dfList.append(generate_new_df(arg1, arg2))

df = pd.concat(dfList, axis=1)

Results:

run 1 took 0.35 sec.
run 2 took 0.26 sec.
run 3 took 0.3 sec.
run 4 took 0.33 sec.
run 5 took 0.45 sec.
run 6 took 0.49 sec.
run 7 took 0.54 sec.
run 8 took 0.51 sec.
run 9 took 0.51 sec.
run 10 took 1.06 sec.
run 11 took 1.74 sec.
run 12 took 1.47 sec.
run 13 took 1.25 sec.
run 14 took 1.04 sec.
run 15 took 1.26 sec.
run 16 took 1.35 sec.
run 17 took 1.7 sec.
run 18 took 1.73 sec.
run 19 took 6.03 sec.
run 20 took 1.63 sec.
run 21 took 1.93 sec.
run 22 took 1.84 sec.
run 23 took 2.25 sec.
run 24 took 2.65 sec.
run 25 took 6.84 sec.
run 26 took 2.88 sec.
run 27 took 2.58 sec.
run 28 took 2.81 sec.
run 29 took 2.84 sec.
run 30 took 2.99 sec.
run 31 took 3.12 sec.
run 32 took 3.48 sec.
run 33 took 3.35 sec.
run 34 took 3.6 sec.
run 35 took 4.0 sec.
run 36 took 4.41 sec.
run 37 took 4.88 sec.
run 38 took 4.92 sec.
run 39 took 4.78 sec.
run 40 took 5.02 sec.
run 41 took 5.32 sec.
run 42 took 5.31 sec.
run 43 took 5.78 sec.
run 44 took 5.77 sec.
run 45 took 6.15 sec.
run 46 took 6.4 sec.
run 47 took 6.84 sec.
run 48 took 7.08 sec.
run 49 took 7.48 sec.
run 50 took 7.91 sec.

It is still a little unclear exactly what your problem is, but I'm going to assume that the main bottleneck is that you are trying to load a huge number of data frames into a list all at once and are running into memory/paging issues. With that in mind, here is an approach that might help, but you will have to test it yourself since I don't have access to your generate_new_df function or your data.

The approach is to use a variation of the merge_with_concat function from this answer, merging smaller batches of your data frames together first and then merging those results all together at the end.

For example, if you have 1000 data frames, you could merge 100 at a time to give you 10 big data frames, then merge those final 10 as the last step. This should ensure that you never have a list of data frames that is too big at any one point.

You can use the two functions below (I'm assuming your generate_new_df function takes a file name as one of its arguments) and do something like this:

import pandas as pd


def chunk_dfs(file_names, chunk_size):
    """Yield lists of data frames, chunk_size at a time."""
    dfs = []
    for f in file_names:
        dfs.append(generate_new_df(f))
        if len(dfs) == chunk_size:
            yield dfs
            dfs = []
    if dfs:
        yield dfs


def merge_with_concat(dfs, col):
    # Index every frame on the merge column, then let a single concat do the outer join.
    dfs = (df.set_index(col, drop=True) for df in dfs)
    merged = pd.concat(dfs, axis=1, join='outer', copy=False)
    return merged.reset_index(drop=False)

col_name = "name_of_column_to_merge_on"
file_names = ['list/of', 'file/names', ...]
chunk_size = 100

merged = merge_with_concat((merge_with_concat(dfs, col_name) for dfs in chunk_dfs(file_names, chunk_size)), col_name)
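
For example, with a hypothetical generate_new_df that simply reads one two-column CSV per file (an assumption for illustration only; the real function may look quite different), the pieces fit together like this:

import glob
import pandas as pd

def generate_new_df(file_name):
    # Hypothetical stand-in: each file holds the folder paths plus one float
    # column, renamed after the file so the merged columns stay distinct.
    df = pd.read_csv(file_name)
    df.columns = ['folder_paths', file_name]
    return df

col_name = 'folder_paths'
file_names = sorted(glob.glob('data/*.csv'))   # assumed layout: one CSV per run
chunk_size = 100

merged = merge_with_concat(
    (merge_with_concat(dfs, col_name) for dfs in chunk_dfs(file_names, chunk_size)),
    col_name,
)
print(merged.shape)   # roughly (number of folder paths, number of files)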

