Speed up pandas groupby apply

I have a dataframe and I want to group it by one column and, at the same time, apply many functions to it. Unfortunately, it simply takes too long; I need something like a ten-fold improvement. I have read about vectorization, but then I lose many of pandas' capabilities.

This is my approach. First, I define all the functions I need:

import pandas as pd

def f(x):
    # x is the group of rows belonging to one cust_id
    d = {}
    d['min_min_approved'] = x['scoring_dol_amount'][x['payment_status']=='approved'].min()
    d['max_max_approved'] = x['scoring_dol_amount'][x['payment_status']=='approved'].max()
    d['sum_approved'] = x['scoring_dol_amount'][x['payment_status']=='approved'].sum()
    d['avg_approved'] = x['scoring_dol_amount'][x['payment_status']=='approved'].mean()
    d['std_approved'] = x['scoring_dol_amount'][x['payment_status']=='approved'].std()
    d['sum_approved_tpn'] = x['scoring_dol_amount'][x['payment_status']=='approved'].count()
    d['sum_rejected_tpn'] = x['scoring_dol_amount'][x['payment_status']=='rejected'].count()
    d['sum_rejected_tpn_hr'] = x['scoring_dol_amount'][x['payment_status_detail']=='cc_rejected_high_risk'].count()
    d['sum_rejected'] = x['scoring_dol_amount'][x['payment_status']=='rejected'].sum()
    d['sum_rejected_hr'] = x['scoring_dol_amount'][x['payment_status_detail']=='cc_rejected_high_risk'].sum()
    d['avg_rejected'] = x['scoring_dol_amount'][x['payment_status']=='rejected'].mean()
    d['std_rejected'] = x['scoring_dol_amount'][x['payment_status']=='rejected'].std()
    d['sum_late_hours'] = x['scoring_dol_amount'][(x['payment_date_created'].dt.hour >= 23) | (x['payment_date_created'].dt.hour <= 6)].count()
    #d['ratio_receive'] = (x['scoring_dol_amount'][x['payment_status']=='approved'].sum())/(x['scoring_dol_amount'][x['payment_status']=='rejected'].sum()+x['scoring_dol_amount'][x['payment_status']=='approved'].sum())
    #d['ratio_receive_tpn'] = (x['scoring_dol_amount'][x['payment_status']=='approved'].count())/(x['scoring_dol_amount'][x['payment_status']=='rejected'].count()+x['scoring_dol_amount'][x['payment_status']=='approved'].count())
    #d['distinct_tc']= x['tc'].nunique()
    #d['distinct_doc']= x['payer_identification_number'].nunique()
    #d['ratio_tc']= (x['tc'].nunique())/(x['scoring_dol_amount'][x['payment_status']=='approved'].count())
    #d['ratio_doc']= (x['payer_identification_number'].nunique())/(x['scoring_dol_amount'][x['payment_status']=='approved'].count())

    return pd.Series(d, index=['min_min_approved', 'max_max_approved', 'sum_approved',
                               'avg_approved', 'std_approved', 'sum_approved_tpn',
                               'sum_rejected_tpn', 'sum_rejected_tpn_hr', 'sum_rejected',
                               'sum_rejected_hr', 'avg_rejected', 'std_rejected',
                               'sum_late_hours'])#,'ratio_receive','ratio_receive_tpn','distinct_tc','distinct_doc','ratio_tc','ratio_doc'])

And I'm applying it this way:

dataset_recibido=dataset_recibido.set_index('cust_id')
dataset_recibido.groupby(dataset_recibido.index).apply(f)

How can I speed this up?

It seems like you have built something that is already included in pandas. Just groupby() the cust_id and payment_status columns you are currently filtering on and use agg():

dataset_recibido.groupby(['cust_id', 'payment_status'])['scoring_dol_amount']\
                .agg(['count', 'mean', 'std', 'sum', 'min', 'max'])
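This returns one row per (cust_id, payment_status) pair with a plain column per aggregate. If you need one row per customer, as f produces, you can unstack and flatten. A minimal sketch; the flattened names (e.g. sum_approved) are illustrative and won't match f's names exactly:

# one row per (cust_id, payment_status), one column per aggregate
stats = dataset_recibido.groupby(['cust_id', 'payment_status'])['scoring_dol_amount']\
                        .agg(['count', 'mean', 'std', 'sum', 'min', 'max'])

# pivot payment_status into the columns: one row per cust_id,
# MultiIndex columns like ('sum', 'approved')
wide = stats.unstack('payment_status')

# flatten ('sum', 'approved') -> 'sum_approved'
wide.columns = [f'{agg}_{status}' for agg, status in wide.columns]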

Built-in aggregation functions are faster than a custom apply. In your case, you can use three individual groupby calls, keyed on payment_status, payment_status_detail, and the late-hours flag derived from payment_date_created:

group1 = dataset_recibido.groupby(["cust_id", "payment_status"])
stats1 = group1['scoring_dol_amount'].agg(["mean", "std", "sum", "min", "max", "count"])

group2 = dataset_recibido.groupby(["cust_id", "payment_status_detail"])
stats2 = group2['scoring_dol_amount'].agg(["sum", "count"])

# name the boolean late-hours key so the result is easy to reshape later
late_hours = ((dataset_recibido['payment_date_created'].dt.hour >= 23)
              | (dataset_recibido['payment_date_created'].dt.hour <= 6)).rename('late_hours')
group3 = dataset_recibido.groupby(["cust_id", late_hours])
stats3 = group3['scoring_dol_amount'].count()
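If you then want everything back in a single frame with one row per cust_id, the same unstack-and-flatten reshaping shown in the first answer applies. A minimal sketch, assuming the stats1/stats2/stats3 frames just built (the flattened column names are illustrative):

# pivot the second grouping key into the columns, then flatten the names
wide1 = stats1.unstack('payment_status')
wide1.columns = [f'{agg}_{status}' for agg, status in wide1.columns]

wide2 = stats2.unstack('payment_status_detail')
wide2.columns = [f'{agg}_{detail}' for agg, detail in wide2.columns]

# keep only the late-hours count, matching sum_late_hours in f
wide3 = stats3.unstack('late_hours')[True].rename('sum_late_hours')

# align all three pieces on cust_id
result = pd.concat([wide1, wide2, wide3], axis=1)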
