
How can I improve the speed of pandas row operations?

I have a large .csv file with 11,000,000 rows and 3 columns: id, magh, mixid2. What I want to do is select the rows that share the same id and check whether those rows all have the same mixid2; if they do, I drop the rows, and if they don't, I initialize a class with the information from the selected rows. This is my code:

import numpy as np
from tqdm import tqdm

obs = obs.set_index('id')
obs = obs.sort_index()
# dropping elements with only one mixid2 and filling S
ID = obs.index.unique()
S = []
good_bye_list = []
for i in tqdm(ID):
    app = obs.loc[i]
    if len(np.unique([app['mixid2'], ])) != 1:
        # fill the class list
        S.append(star(app['magh'].values, app['mixid2'].values, z_in))
    else:
        # drop
        good_bye_list.append(i)

obs = obs.drop(good_bye_list)

The .csv file is very large, so computing everything takes 40 minutes. How can I make it faster?

Thanks for your help.

This is the .csv file:

id,mixid2,magh
3447001203296326,557,14.25
3447001203296326,573,14.25
3447001203296326,525,14.25
3447001203296326,541,14.25
3447001203296330,540,15.33199977874756
3447001203296330,573,15.33199977874756
3447001203296333,172,17.476999282836914
3447001203296333,140,17.476999282836914
3447001203296333,188,17.476999282836914
3447001203296333,156,17.476999282836914
3447001203296334,566,15.626999855041506
3447001203296334,534,15.626999855041506
3447001203296334,550,15.626999855041506
3447001203296338,623,14.800999641418455
3447001203296338,639,14.800999641418455
3447001203296338,607,14.800999641418455
3447001203296344,521,12.8149995803833
3447001203296344,537,12.8149995803833
3447001203296344,553,12.8149995803833
3447001203296345,620,12.809000015258787
3447001203296345,543,12.809000015258787
3447001203296345,636,12.809000015258787
3447001203296347,558,12.315999984741213
3447001203296347,542,12.315999984741213
3447001203296347,526,12.315999984741213
3447001203296352,615,12.11299991607666
3447001203296352,631,12.11299991607666
3447001203296352,599,12.11299991607666
3447001203296360,540,16.926000595092773
3447001203296360,556,16.926000595092773
3447001203296360,572,16.926000595092773
3447001203296360,524,16.926000595092773
3447001203296367,490,15.80799961090088
3447001203296367,474,15.80799961090088
3447001203296367,458,15.80799961090088
3447001203296369,639,15.175000190734865
3447001203296369,591,15.175000190734865
3447001203296369,623,15.175000190734865
3447001203296369,607,15.175000190734865
3447001203296371,460,14.975000381469727
3447001203296373,582,14.532999992370605
3447001203296373,614,14.532999992370605
3447001203296373,598,14.532999992370605
3447001203296374,184,14.659000396728516
3447001203296374,203,14.659000396728516
3447001203296374,152,14.659000396728516
3447001203296374,136,14.659000396728516
3447001203296374,168,14.659000396728516
3447001203296375,592,14.723999977111815
3447001203296375,608,14.723999977111815
3447001203296375,624,14.723999977111815
3447001203296375,92,14.723999977111815
3447001203296375,76,14.723999977111815
3447001203296375,108,14.723999977111815
3447001203296375,576,14.723999977111815
3447001203296376,132,14.0649995803833
3447001203296376,164,14.0649995803833
3447001203296376,180,14.0649995803833
3447001203296376,148,14.0649995803833
3447001203296377,168,13.810999870300293
3447001203296377,152,13.810999870300293
3447001203296377,136,13.810999870300293
3447001203296377,184,13.810999870300293
3447001203296378,171,13.161999702453613
3447001203296378,187,13.161999702453613
3447001203296378,155,13.161999702453613
3447001203296378,139,13.161999702453613
3447001203296380,565,13.017999649047852
3447001203296380,517,13.017999649047852
3447001203296380,549,13.017999649047852
3447001203296380,533,13.017999649047852
3447001203296383,621,13.079999923706055
3447001203296383,589,13.079999923706055
3447001203296383,605,13.079999923706055
3447001203296384,541,12.732000350952148
3447001203296384,557,12.732000350952148
3447001203296384,525,12.732000350952148
3447001203296385,462,12.784000396728516
3447001203296386,626,12.663999557495115
3447001203296386,610,12.663999557495115
3447001203296386,577,12.663999557495115
3447001203296389,207,12.416000366210938
3447001203296389,255,12.416000366210938
3447001203296389,223,12.416000366210938
3447001203296389,239,12.416000366210938
3447001203296390,607,12.20199966430664
3447001203296390,591,12.20199966430664
3447001203296397,582,16.635000228881836
3447001203296397,598,16.635000228881836
3447001203296397,614,16.635000228881836
3447001203296399,630,17.229999542236328
3447001203296404,598,15.970000267028807
3447001203296404,631,15.970000267028807
3447001203296404,582,15.970000267028807
3447001203296408,540,16.08799934387207
3447001203296408,556,16.08799934387207
3447001203296408,524,16.08799934387207
3447001203296408,572,16.08799934387207
3447001203296409,632,15.84000015258789
3447001203296409,616,15.84000015258789

Hi, and welcome to StackOverflow.

In pandas, the rule of thumb is that raw loops are always slower than dedicated functions. To apply a function to the sub-DataFrames of rows that satisfy some condition, you can use groupby.

In your case, the function is a bit... unpythonic: filling S is a side effect, and dropping the rows you are currently iterating over is dangerous. Plain Python forbids the equivalent on a dict, as the quick illustration below shows.
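A minimal sketch of why mutating while iterating is unsafe, independent of pandas:

# Deleting from a dict while iterating over it fails immediately:
d = {'a': 1, 'b': 2}
for k in d:
    if d[k] == 1:
        del d[k]  # RuntimeError: dictionary changed size during iteration

That said, you can create the function like this: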

In [37]: def my_func(df): 
    ...:     if df['mixid2'].nunique() == 1: 
    ...:         return None 
    ...:     else: 
    ...:         S.append(df['mixid2']) 
    ...:         return df 

and apply it to your DataFrame via

S = []
obs.groupby('id').apply(my_func)  

This iterates over all sub-DataFrames that share the same id and discards a group if there is only one unique value in mixid2. Otherwise, it appends the values to the list S.

The resulting DataFrame is 3 rows shorter:

Out[38]: 
                                   id  mixid2       magh
id                                                      
3447001203296326 0   3447001203296326     557  14.250000
                 1   3447001203296326     573  14.250000
...                               ...     ...        ...
3447001203296409 98  3447001203296409     632  15.840000
                 99  3447001203296409     616  15.840000

[97 rows x 3 columns]

S contains 28 elements. You can pass them into the star constructor just as before.
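If you prefer to avoid the side effect on S altogether, a minimal side-effect-free sketch (reusing the star class and z_in parameter from the question) is a comprehension over the groups:

# Build S directly from the groups, keeping only the ids with more
# than one distinct mixid2; star and z_in come from the question.
S = [
    star(grp['magh'].values, grp['mixid2'].values, z_in)
    for _, grp in obs.groupby('id')
    if grp['mixid2'].nunique() > 1
]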

My guess is that you want to drop the elements whose mixid2 occurs only once. We set_index on mixid2, filter on the groupby counts, and use reset_index afterwards to get back the original shape:

df = obs.set_index('mixid2').loc[~obs.groupby('mixid2').count().id.eq(1)].reset_index()
df.shape
(44, 3)
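For what it's worth, the filtering the question actually asks for (keep the ids whose rows contain more than one distinct mixid2) can also be done in one vectorized step with transform, with no Python-level loop at all (a sketch under the question's column names):

# Broadcast the per-id count of distinct mixid2 back onto every row,
# then keep only the ids with more than one distinct mixid2.
mask = obs.groupby('id')['mixid2'].transform('nunique') > 1
obs = obs[mask]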

I'm not entirely sure I understood you correctly, but what you could do is first drop the duplicates in the dataframe and then use the groupby function to get all remaining data points with the same id:

# drop all duplicates based on id and mixid2
df.drop_duplicates(["id", "mixid2"], inplace=True)

# then iterate over all groups:
for index, grp in df.groupby("id"):
    pass  # do stuff here with the grp

It is usually a good idea to rely on pandas' built-in functions, since most of them are well optimized.

new_df = obs.groupby(['id', 'mixid2'], as_index=False).agg('count')
new_df = new_df[new_df['magh'] > 1]

Then pass new_df to your function.
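As a usage sketch (my assumption, not part of the original answer; it assumes obs still has id as its index, as in the question), you could then restrict obs to the ids that survived the count filter:

# Hypothetical follow-up: keep only the rows whose id survived the
# count filter, then build the star objects from what is left.
obs = obs[obs.index.isin(new_df['id'])]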


