Group by several columns then aggregate a set of columns in Pandas (it crashes badly compared to R's data.table)
I am relatively new to the world of Python and trying to use it as a back-up platform for data analysis. I generally use data.table for my data analysis needs.
The issue is that when I run a group-aggregate operation on a big CSV file (randomized, zipped, uploaded at http://www.filedropper.com/ddataredact_1 ), Python throws:
return getattr(obj, method)(*args, **kwds)
ValueError: negative dimensions are not allowed
OR (I have even encountered...)
File "C:\Anaconda3\lib\site-packages\pandas\core\reshape\util.py", line 65, in cartesian_product
    for i, x in enumerate(X)]
File "C:\Anaconda3\lib\site-packages\pandas\core\reshape\util.py", line 65, in <listcomp>
    for i, x in enumerate(X)]
File "C:\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py", line 445, in repeat
    return _wrapfunc(a, 'repeat', repeats, axis=axis)
File "C:\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py", line 51, in _wrapfunc
    return getattr(obj, method)(*args, **kwds)
MemoryError
I have spent three days trying to reduce the file size (I was able to reduce it by 89%), adding breakpoints, and debugging, but I was not able to make any progress.
Surprisingly, I thought of running the same group/aggregate operation in data.table in R, and it hardly took 1 second. Moreover, I didn't have to do any of the data type conversions suggested at https://www.dataquest.io/blog/pandas-big-data/ .
I also researched other threads: Avoiding Memory Issues For GroupBy on Large Pandas DataFrame, Pandas: df.groupby() is too slow for big data set. Any alternative methods?, and pandas groupby with sum() on large csv file?. It seems these threads are more about matrix multiplication. I'd appreciate it if you wouldn't tag this as a duplicate.
Here's my Python code:
import pandas as pd

finaldatapath = "..\Data_R"
ddata = pd.read_csv(finaldatapath + "\\" + "ddata_redact.csv",
                    low_memory=False, encoding="ISO-8859-1")

#before optimization: 353MB
ddata.info(memory_usage="deep")

#optimize file: Object-types are the biggest culprit.
ddata_obj = ddata.select_dtypes(include=['object']).copy()

#Now convert this to category type:
#Float type didn't help much, so I am excluding it here.
for col in ddata_obj:
    del ddata[col]
    ddata.loc[:, col] = ddata_obj[col].astype('category')

#release memory
del ddata_obj

#after optimization: 39MB
ddata.info(memory_usage="deep")
#Create a list of grouping variables:
group_column_list = [
    "Business",
    "Device_Family",
    "Geo",
    "Segment",
    "Cust_Name",
    "GID",
    "Device ID",
    "Seller",
    "C9Phone_Margins_Flag",
    "C9Phone_Cust_Y_N",
    "ANDroid_Lic_Type",
    "Type",
    "Term",
    'Cust_ANDroid_Margin_Bucket',
    'Cust_Mobile_Margin_Bucket',
    # 'Cust_Android_App_Bucket',
    'ANDroind_App_Cust_Y_N'
]
print("Analyzing data now...")
def ddata_agg(x):
    names = {
        'ANDroid_Margin': x['ANDroid_Margin'].sum(),
        'Margins': x['Margins'].sum(),
        'ANDroid_App_Qty': x['ANDroid_App_Qty'].sum(),
        'Apple_Margin': x['Apple_Margin'].sum(),
        'P_Lic': x['P_Lic'].sum(),
        'Cust_ANDroid_Margins': x['Cust_ANDroid_Margins'].mean(),
        'Cust_Mobile_Margins': x['Cust_Mobile_Margins'].mean(),
        'Cust_ANDroid_App_Qty': x['Cust_ANDroid_App_Qty'].mean()
    }
    return pd.Series(names)
ddata=ddata.reset_index(drop=True)
ddata = ddata.groupby(group_column_list).apply(ddata_agg)
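The cartesian_product frames in the traceback suggest a likely culprit: when the grouping keys are categorical (as they are after the astype('category') optimization above), pandas by default materializes every combination of category levels across all grouping columns, not just the combinations that actually occur, and with 16 keys that product can dwarf memory. A minimal sketch of the effect, using hypothetical toy columns a, b, and v and groupby's observed flag (this is an illustration of the mechanism, not the poster's data):

```python
import pandas as pd

# Toy frame: two categorical keys whose observed combinations
# are far fewer than the full Cartesian product of their levels.
df = pd.DataFrame({
    "a": pd.Categorical(["x", "x", "y"], categories=["x", "y", "z"]),
    "b": pd.Categorical(["p", "q", "p"], categories=["p", "q", "r"]),
    "v": [1, 2, 3],
})

# observed=False materializes every level combination: 3 x 3 = 9 groups.
full = df.groupby(["a", "b"], observed=False).agg({"v": "sum"})

# observed=True keeps only combinations present in the data: 3 groups.
observed = df.groupby(["a", "b"], observed=True).agg({"v": "sum"})

print(len(full), len(observed))  # 9 3
```

If this is indeed the cause, passing observed=True (available since pandas 0.23) to the groupby call on the real data should avoid the blow-up.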
The code crashes in the above .groupby operation.
Can someone please help me? Compared to my other posts, I have probably spent the most time on this StackOverflow post, trying to fix it and learn new things about Python. However, I have reached saturation; it frustrates me even more because R's data.table package processes this file in <2 seconds. This post is not about the pros and cons of R and Python, but about using Python more productively.
I am completely lost, and I'd appreciate any help.
Here's my data.table R code:
path_r = "../ddata_redact.csv"
ddata<-data.table::fread(path_r,stringsAsFactors=FALSE,data.table = TRUE, header = TRUE)
group_column_list <- c(
    "Business",
    "Device_Family",
    "Geo",
    "Segment",
    "Cust_Name",
    "GID",
    "Device ID",
    "Seller",
    "C9Phone_Margins_Flag",
    "C9Phone_Cust_Y_N",
    "ANDroid_Lic_Type",
    "Type",
    "Term",
    'Cust_ANDroid_Margin_Bucket',
    'Cust_Mobile_Margin_Bucket',
    # 'Cust_Android_App_Bucket',
    'ANDroind_App_Cust_Y_N'
)
ddata <- ddata[, .(ANDroid_Margin = sum(ANDroid_Margin, na.rm = TRUE),
                   Margins = sum(Margins, na.rm = TRUE),
                   Apple_Margin = sum(Apple_Margin, na.rm = TRUE),
                   Cust_ANDroid_Margins = mean(Cust_ANDroid_Margins, na.rm = TRUE),
                   Cust_Mobile_Margins = mean(Cust_Mobile_Margins, na.rm = TRUE),
                   Cust_ANDroid_App_Qty = mean(Cust_ANDroid_App_Qty, na.rm = TRUE),
                   ANDroid_App_Qty = sum(ANDroid_App_Qty, na.rm = TRUE)),
               by = group_column_list]
Adding to Josemz's comment, here are two threads on agg vs. apply: What is the difference between pandas agg and apply function? and Pandas difference between apply() and aggregate() functions.
I think what you're looking for is agg instead of apply. You can pass a dict mapping columns to the functions you want to apply, so I think this would work for you:
ddata = ddata.groupby(group_column_list).agg({
'ANDroid_Margin' : sum,
'Margins' : sum,
'ANDroid_App_Qty' : sum,
'Apple_Margin' : sum,
'P_Lic' : sum,
'Cust_ANDroid_Margins': 'mean',
'Cust_Mobile_Margins' : 'mean',
'Cust_ANDroid_App_Qty': 'mean'})
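One small point on the dict above: passing the string 'sum' rather than Python's builtin sum lets pandas dispatch to its optimized internal aggregation, which generally matters on large frames. A minimal sketch with a hypothetical toy frame (column names g, x, y are made up for illustration):

```python
import pandas as pd

# Hypothetical stand-in for the real data.
df = pd.DataFrame({
    "g": ["a", "a", "b"],
    "x": [1.0, 2.0, 3.0],
    "y": [10.0, 20.0, 30.0],
})

# String names ('sum', 'mean') route to pandas' fast built-in
# aggregations instead of calling a Python function per group.
out = df.groupby("g").agg({"x": "sum", "y": "mean"})
print(out)
```

Either spelling gives the same numbers; the string form is simply the idiomatic and faster choice.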