
Rolling Window for different Groups

I have a data frame with a datetime index and an additional grouping variable, status. TUFNWGTP is a weight used for comparison across groups:

            status      shopping        TUFNWGTP
TUDIARYDATE                                     
2003-01-03     emp  0.000000e+00  8155462.672158
2003-01-04     emp  0.000000e+00  1735322.527819
2003-01-04     emp  7.124781e+09  3830527.482672
2003-01-02   unemp  0.000000e+00  6622022.995205
2003-01-09     emp  0.000000e+00  3068387.344956

To aggregate per quarter and per status, I did:

test = dfNew.groupby([pd.TimeGrouper("QS", label='left'), 'status']).sum()
result = pd.DataFrame(test['shopping']/test['TUFNWGTP'], columns=['shopping_weighted'])
result.unstack().plot()

[Plot: quarterly aggregation]
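
As an aside, pd.TimeGrouper has since been removed from pandas; a minimal sketch of the same quarterly aggregation using pd.Grouper, assuming the same dfNew as above:

# Quarterly aggregation with the newer pd.Grouper API (pd.TimeGrouper was removed)
test = dfNew.groupby([pd.Grouper(freq="QS", label='left'), 'status']).sum()
result = pd.DataFrame(test['shopping'] / test['TUFNWGTP'], columns=['shopping_weighted'])
result.unstack().plot()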

These fluctuated too much for a real time-series comparison, so I did the same exercise grouping by year:

test2 = dfNew.groupby([pd.TimeGrouper("AS", label='left'), 'status']).sum()
result2 = pd.DataFrame(test2['shopping']/test2['TUFNWGTP'], columns=['shopping_weighted'])
result2.unstack().plot()
plt.show()

[Plot: annual aggregation]
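
Similarly, on newer pandas the annual aggregation would use pd.Grouper; a sketch assuming the same dfNew (recent pandas spells the year-start alias "YS" rather than "AS"):

test2 = dfNew.groupby([pd.Grouper(freq="AS", label='left'), 'status']).sum()  # freq="YS" on recent pandas
result2 = pd.DataFrame(test2['shopping'] / test2['TUFNWGTP'], columns=['shopping_weighted'])
result2.unstack().plot()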

Still spiky. Now I would like to compute a rolling window for each of the groups in status. I tried taking the quarterly aggregate and then computing a rolling mean over 12 periods:

pd.stats.moments.rolling_mean(test['shopping']/test['TUFNWGTP'], 12).unstack().plot()
plt.show()

This shows the downward trend more clearly. However, it gives me two time series that look extremely similar for the two status groups; I think pandas is somehow averaging across the groups. How should I proceed?

[Plot: rolling window]

Here's some data for your own reproduction - it's the quarterly aggregated data used for the first graph (test):

                        shopping      TUFNWGTP
TUDIARYDATE status                            
2003-01-01  emp     8.292987e+12  1.265939e+10
            unemp   8.920840e+11  1.175799e+09
2003-04-01  emp     9.253035e+12  1.338543e+10
            unemp   7.551139e+11  1.131358e+09
2003-07-01  emp     9.237080e+12  1.375033e+10
            unemp   7.440140e+11  1.004834e+09
2003-10-01  emp     1.064579e+13  1.339203e+10
            unemp   1.061342e+12  1.080896e+09
2004-01-01  emp     8.562482e+12  1.284793e+10
            unemp   8.235667e+11  1.169355e+09
2004-04-01  emp     8.773047e+12  1.326451e+10
            unemp   5.907015e+11  1.093678e+09
2004-07-01  emp     9.479579e+12  1.350767e+10
            unemp   1.115300e+12  1.162550e+09
2004-10-01  emp     1.136157e+13  1.375178e+10
            unemp   8.104915e+11  8.251867e+08
2005-01-01  emp     8.105330e+12  1.351932e+10
            unemp   6.082188e+11  1.064661e+09
2005-04-01  emp     9.176033e+12  1.358672e+10
            unemp   8.631214e+11  9.917538e+08
2005-07-01  emp     9.937520e+12  1.414141e+10
            unemp   6.275015e+11  8.850640e+08
2005-10-01  emp     1.044345e+13  1.378072e+10
            unemp   9.742346e+11  9.248803e+08
2006-01-01  emp     9.533602e+12  1.349918e+10
            unemp   5.105317e+11  9.877952e+08
2006-04-01  emp     8.446490e+12  1.349727e+10
            unemp   8.582609e+11  1.007284e+09
2006-07-01  emp     9.167158e+12  1.404490e+10
            unemp   8.219319e+11  9.176818e+08
2006-10-01  emp     1.188230e+13  1.413748e+10
            unemp   1.641259e+12  1.058742e+09
2007-01-01  emp     9.410542e+12  1.408026e+10
            unemp   5.747821e+11  8.084116e+08
2007-04-01  emp     9.492969e+12  1.401190e+10
            unemp   4.231717e+11  9.895104e+08
2007-07-01  emp     9.602594e+12  1.417303e+10
            unemp   7.458046e+11  9.295575e+08
2007-10-01  emp     1.106523e+13  1.449304e+10
            unemp   1.204043e+12  1.112283e+09

You are quite right that

pd.stats.moments.rolling_mean(test['shopping']/test['TUFNWGTP'], 12).unstack().plot()

is mixing values from the two groups. You can see that the first 11 rows are NaN regardless of status:

In [82]: pd.stats.moments.rolling_mean(test['shopping']/test['TUFNWGTP'], 12)
Out[82]: 
            status
2003-01-01  emp            NaN
            unemp          NaN
2003-04-01  emp            NaN
            unemp          NaN
2003-07-01  emp            NaN
            unemp          NaN
2003-10-01  emp            NaN
            unemp          NaN
2004-01-01  emp            NaN
            unemp          NaN
2004-04-01  emp            NaN
            unemp     1.078546
2004-07-01  emp       1.077651
            unemp     1.086730
2004-10-01  emp       1.050206
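
On newer pandas, where pd.stats.moments has been removed, the equivalent call is the .rolling() method; it behaves the same way here, because the 12-row window runs over the interleaved emp and unemp rows of the single MultiIndexed series:

# Newer rolling API; the window still mixes emp and unemp rows because the
# ratio is treated as one long series, not as two separate groups.
(test['shopping'] / test['TUFNWGTP']).rolling(12).mean()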

So instead of using test, unstack test first so you get two columns -- one for emp and one for unemp:

result = pd.DataFrame(
    test['shopping']/test['TUFNWGTP'], columns=['shopping_weighted'])
result = result.unstack()
print(result.head())

yields

           shopping_weighted          
status                   emp     unemp
2003-01-01          1.100091  0.871605
2003-04-01          1.188454  1.369590
2003-07-01          0.987842  1.103778
2003-10-01          0.888269  1.133720
2004-01-01          0.950096  1.239608

Then apply the rolling_mean to result, so you get two columns of rolling means:

In [94]: pd.stats.moments.rolling_mean(result, 12).head(20)
Out[94]: 
           shopping_weighted          
status                   emp     unemp
...
2005-07-01               NaN       NaN
2005-10-01          0.994440  1.109355
2006-01-01          0.978686  1.128826
2006-04-01          0.964123  1.104678
2006-07-01          0.961347  1.104975
2006-10-01          0.971852  1.111623
2007-01-01          0.973510  1.085946
2007-04-01          0.986782  1.080206
2007-07-01          0.990422  1.095752
2007-10-01          1.006258  1.077732
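
On newer pandas the same per-column smoothing is written with the DataFrame rolling method; since result has one column per status, each group gets its own rolling mean:

# Column-wise rolling mean on the unstacked frame: one smoothed series per status
result.rolling(12).mean().plot()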

For example, here is a self-contained script with synthetic data:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
np.random.seed(1)


dates = pd.date_range('2003-01-03', '2015-03-01', freq='D')
N = len(dates)
index = sorted(np.random.choice(dates, N, replace=True))
status = np.random.choice(['emp', 'unemp'], N, replace=True)
shopping = np.random.random(N)
TUFNWGTP = np.random.random(N)
dfNew = pd.DataFrame({'status': status, 'shopping': shopping, 'TUFNWGTP': TUFNWGTP},
                     index=index)  # index by the randomly sampled dates built above
mask = dfNew['status'] == 'unemp'
dfNew.loc[mask, 'shopping'] *= 1.1
test = dfNew.groupby([pd.TimeGrouper("QS", label='left'), 'status']).sum()
result = pd.DataFrame(
    test['shopping']/test['TUFNWGTP'], columns=['shopping_weighted'])
result = result.unstack()
pd.stats.moments.rolling_mean(result, 12).plot()
plt.show()

yields a plot with one rolling-mean line per status group.
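
As a side note, on recent pandas you can also compute the rolling mean per group directly on the MultiIndexed series, without unstacking first. A sketch assuming test from the script above:

# Group-wise rolling mean: transform applies the window inside each status
# group and keeps the original (date, status) MultiIndex.
ratio = test['shopping'] / test['TUFNWGTP']
rolled = ratio.groupby(level='status').transform(lambda s: s.rolling(12).mean())
rolled.unstack().plot()
plt.show()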
