
Mean of a grouped-by pandas dataframe with flexible aggregation period

Here, I need to compute the mean of the columns duration and km for the rows where value == 1 and for the rows where value == 0. This time, I want the aggregation period to be flexible.

df
Out[20]: 
                         Date  duration   km  value
0  2015-03-28 09:07:00.800001         0    0      0
1  2015-03-28 09:36:01.819998         1    2      1
2  2015-03-30 09:36:06.839997         1    3      1
3  2015-03-30 09:37:27.659997       nan    5      0
4  2015-04-22 09:51:40.440003         3    7      0
5  2015-04-23 10:15:25.080002         0  nan      1

For a 1-day aggregation period, I can use the previously suggested solution:

ndf = df.pivot_table(values=['duration','km'], columns=['value'],
                     index=df['Date'].dt.date, aggfunc='mean')

ndf.columns = [i[0]+str(i[1]) for i in ndf.columns]

            duration0  duration1  km0  km1
Date                                      
2015-03-28        0.0        1.0  0.0  2.0
2015-03-30        NaN        1.0  5.0  3.0
2015-04-22        3.0        NaN  7.0  NaN
2015-04-23        NaN        0.0  NaN  NaN

However, I don't know how to change the aggregation period in case, for example, I want to pass it as a parameter of a function... Therefore, using pd.Grouper(freq=freq_aggregation), with freq_aggregation being 'd' or '60s', would be preferred...
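For reference, a minimal sketch of such a parameterized helper, combining the approaches in the answers below (the function name agg_means is hypothetical):

import pandas as pd

def agg_means(df, freq_aggregation):
    # Bucket the timestamps at the requested frequency, split by 'value',
    # and average duration and km within each bucket.
    ndf = df.pivot_table(values=['duration', 'km'], columns=['value'],
                         index=pd.Grouper(key='Date', freq=freq_aggregation),
                         aggfunc='mean')
    # Flatten the (column, value) MultiIndex, e.g. ('duration', 0) -> 'duration0'.
    ndf.columns = [col + str(val) for col, val in ndf.columns]
    return ndf

# agg_means(df, 'D') or agg_means(df, '60s')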

Let's use pd.Grouper, unstack, and a column map:

freq_str = '60s'
df_out = df.groupby([pd.Grouper(freq=freq_str, key='Date'), 'value'])[['duration', 'km']].agg('mean').unstack()

df_out.columns = df_out.columns.map('{0[0]}{0[1]}'.format)

df_out

Output:

                     duration0  duration1  km0  km1
Date                                               
2015-03-28 09:07:00        0.0        NaN  0.0  NaN
2015-03-28 09:36:00        NaN        1.0  NaN  2.0
2015-03-30 09:36:00        NaN        1.0  NaN  3.0
2015-03-30 09:37:00        NaN        NaN  5.0  NaN
2015-04-22 09:51:00        3.0        NaN  7.0  NaN
2015-04-23 10:15:00        NaN        0.0  NaN  NaN

Now, let's change freq_str to 'D':

freq_str = 'D'
df_out = df.groupby([pd.Grouper(freq=freq_str, key='Date'), 'value'])[['duration', 'km']].agg('mean').unstack()

df_out.columns = df_out.columns.map('{0[0]}{0[1]}'.format)

print(df_out)

Output:

            duration0  duration1  km0  km1
Date                                      
2015-03-28        0.0        1.0  0.0  2.0
2015-03-30        NaN        1.0  5.0  3.0
2015-04-22        3.0        NaN  7.0  NaN
2015-04-23        NaN        0.0  NaN  NaN

You can pass the grouper to the pivot table's index. Hopefully this is what you are looking for, i.e.

ndf = df.pivot_table(values=['duration','km'],columns=['value'],index=pd.Grouper(key='Date', freq='60s'),aggfunc='mean')
ndf.columns = [i[0]+str(i[1]) for i in ndf.columns]

Output:

                     duration0  duration1  km0  km1
Date                                               
2015-03-28 09:07:00        0.0        NaN  0.0  NaN
2015-03-28 09:36:00        NaN        1.0  NaN  2.0
2015-03-30 09:36:00        NaN        1.0  NaN  3.0
2015-03-30 09:37:00        NaN        NaN  5.0  NaN
2015-04-22 09:51:00        3.0        NaN  7.0  NaN
2015-04-23 10:15:00        NaN        0.0  NaN  NaN

If the frequency is 'D', then:
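Presumably this is the same pivot_table call, just with the daily frequency (a sketch, not shown verbatim in the original answer):

ndf = df.pivot_table(values=['duration','km'], columns=['value'],
                     index=pd.Grouper(key='Date', freq='D'), aggfunc='mean')
ndf.columns = [i[0]+str(i[1]) for i in ndf.columns]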

            duration0  duration1  km0  km1
Date                                      
2015-03-28        0.0        1.0  0.0  2.0
2015-03-30        NaN        1.0  5.0  3.0
2015-04-22        3.0        NaN  7.0  NaN
2015-04-23        NaN        0.0  NaN  NaN

Using groupby:

df = df.set_index('Date')
df.groupby([pd.Grouper(freq='D'), 'value']).mean()

                 duration   km
Date       value               
2017-10-11 0      1.500000  4.0
           1      0.666667  2.5


df.groupby([pd.Grouper(freq='60s'), 'value']).mean()

                           duration   km
Date                value               
2017-10-11 09:07:00 0           0.0  0.0
2017-10-11 09:36:00 1           1.0  2.5
2017-10-11 09:37:00 0           NaN  5.0
2017-10-11 09:51:00 0           3.0  7.0
2017-10-11 10:15:00 1           0.0  NaN

If you want it unstacked, then unstack it:

df.groupby([pd.Grouper(freq='D'), 'value']).mean().unstack()

            duration          km      
value              0     1     0     1
Date                                  
2017-10-11      1.50  0.67  4.00  2.50
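If you also want the flat column names used in the other answers, presumably the same column map can be applied after the unstack (a sketch; 'out' is just an illustrative name):

out = df.groupby([pd.Grouper(freq='D'), 'value']).mean().unstack()
# Flatten ('duration', 0) -> 'duration0', matching the other answers.
out.columns = out.columns.map('{0[0]}{0[1]}'.format)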
