Pandas dataframe split by memory usage
Is there a way to split a pandas DataFrame into multiple DataFrames, each constrained by a memory-usage limit?
def split_dataframe(df, size):
    # average number of bytes per row (including the index)
    row_size = df.memory_usage().sum() / len(df)
    # maximum number of rows in each segment; floor to an int so it can be used for slicing
    row_limit = int(size // row_size)
    # number of segments (ceiling division)
    seg_num = (len(df) + row_limit - 1) // row_limit
    # split df
    segments = [df.iloc[i*row_limit : (i+1)*row_limit] for i in range(seg_num)]
    return segments
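For concreteness, here is a self-contained sketch of this row-based approach with a quick sanity check (the `max(1, ...)` guard and the 4 KiB budget are my additions, not from the question):

```python
import pandas as pd

def split_dataframe(df, size):
    # average bytes per row, including the index
    row_size = df.memory_usage().sum() / len(df)
    # rows per segment; floor to an int, and keep at least one row per segment
    row_limit = max(1, int(size // row_size))
    # ceiling division to get the number of segments
    seg_num = -(-len(df) // row_limit)
    return [df.iloc[i * row_limit:(i + 1) * row_limit] for i in range(seg_num)]

df = pd.DataFrame({'a': range(1000), 'b': [1.5] * 1000})
segments = split_dataframe(df, 4096)  # aim for ~4 KiB per segment
```

Note that each slice still carries its own index overhead, so the per-segment total can land slightly above the target; the limit here is a guideline, not a hard cap.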
This is simplest if the dataframe's columns all have fixed-width dtypes (i.e., not object). Here is an example of how you might go about it.
from __future__ import division  # must come before other imports; gives true division on Python 2
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1]*100, 'b': [1.1, 2] * 50, 'c': range(100)})
# calculate the number of bytes a row occupies (fixed-width dtypes only)
row_bytes = df.dtypes.apply(lambda x: x.itemsize).sum()
mem_limit = 1024
# get the maximum number of rows in a segment
max_rows = mem_limit / row_bytes
# get the number of dataframes after splitting (array_split needs an integer)
n_dfs = int(np.ceil(df.shape[0] / max_rows))
# get the indices of the dataframe segments
df_segments = np.array_split(df.index, n_dfs)
# create a list of dataframes that are below mem_limit
split_dfs = [df.loc[seg, :] for seg in df_segments]
split_dfs
Alternatively, if you can split by columns rather than by rows, pandas has a handy memory_usage method.
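As a minimal sketch of that column-wise idea (the greedy grouping below is my own illustration, not part of the original answer): `memory_usage` reports the bytes each column occupies, which can then be packed into groups under a budget.

```python
import pandas as pd

df = pd.DataFrame({'a': [1] * 100, 'b': [1.1, 2] * 50, 'c': range(100)})

# bytes used by each column; index=False excludes the index itself
col_bytes = df.memory_usage(index=False)

# greedily group columns so each group stays under the budget
mem_limit = 1024
groups, current, used = [], [], 0
for col, nbytes in col_bytes.items():
    if current and used + nbytes > mem_limit:
        groups.append(current)
        current, used = [], 0
    current.append(col)
    used += nbytes
if current:
    groups.append(current)

# one dataframe per column group, each below mem_limit
col_dfs = [df[cols] for cols in groups]
```

This only works when downstream code can tolerate seeing a subset of columns at a time; a single column wider than the budget would still need row-wise splitting.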