I am trying to do a rolling sum across partitioned data based on a moving 2 business day window. It feels like it should be both easy and widely used, but the solution is beyond me.
#generate sample data
import pandas as pd
import numpy as np
import datetime
vals = [-4,17,-4,-16,2,20,3,10,-17,-8,-21,2,0,-11,16,-24,-10,-21,5,12,14,9,-15,-15]
grp = ['X']*6 + ['Y'] * 6 + ['X']*6 + ['Y'] * 6
typ = ['foo']*12+['bar']*12
dat = ['19/01/18','19/01/18','22/01/18','22/01/18','23/01/18','24/01/18'] * 4
#create dataframe with sample data
df = pd.DataFrame({'group': grp,'type':typ,'value':vals,'date':dat})
df.date = pd.to_datetime(df.date)
df.head(12)
gives the following (only the first 12 rows shown):
date group type value
0 19/01/2018 X foo -4
1 19/01/2018 X foo 17
2 22/01/2018 X foo -4
3 22/01/2018 X foo -16
4 23/01/2018 X foo 2
5 24/01/2018 X foo 20
6 19/01/2018 Y foo 3
7 19/01/2018 Y foo 10
8 22/01/2018 Y foo -17
9 22/01/2018 Y foo -8
10 23/01/2018 Y foo -21
11 24/01/2018 Y foo 2
The desired results are (all rows shown here):
date group type 2BD Sum
1 19/01/2018 X foo 13
2 22/01/2018 X foo -7
3 23/01/2018 X foo -18
4 24/01/2018 X foo 22
5 19/01/2018 Y foo 13
6 22/01/2018 Y foo -12
7 23/01/2018 Y foo -46
8 24/01/2018 Y foo -19
9 19/01/2018 X bar -11
10 22/01/2018 X bar -19
11 23/01/2018 X bar -18
12 24/01/2018 X bar -31
13 19/01/2018 Y bar 17
14 22/01/2018 Y bar 40
15 23/01/2018 Y bar 8
16 24/01/2018 Y bar -30
I have viewed this question and tried
df.groupby(['group', 'type']).rolling('2d', on='date').agg({'value': 'sum'}) \
    .reset_index().groupby(['group', 'type', 'date']).agg({'value': 'sum'}).reset_index()
which would work fine if 'value' were always positive, but that is not the case here. I have tried many other approaches that raised errors, which I can list if helpful. Can anyone help?
IIUC, starting from your code:
import pandas as pd
import numpy as np
import datetime
vals = [-4,17,-4,-16,2,20,3,10,-17,-8,-21,2,0,-11,16,-24,-10,-21,5,12,14,9,-15,-15]
grp = ['X']*6 + ['Y'] * 6 + ['X']*6 + ['Y'] * 6
typ = ['foo']*12+['bar']*12
dat = ['19/01/18','19/01/18','22/01/18','22/01/18','23/01/18','24/01/18'] * 4
df = pd.DataFrame({'group': grp,'type':typ,'value':vals,'date':dat})
df.date = pd.to_datetime(df.date)
We start off by grouping by group, type and date, and summing within each day:
df2 = df.groupby(["group", "type", "date"]).sum().reset_index().sort_values("date")
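As a quick sanity check on that collapsing step, the per-day totals can be verified directly against the sample data (a minimal sketch reusing the question's data; `x_foo` is just a name for illustration):

```python
import pandas as pd

# rebuild the sample data from the question
vals = [-4,17,-4,-16,2,20,3,10,-17,-8,-21,2,0,-11,16,-24,-10,-21,5,12,14,9,-15,-15]
grp = ['X']*6 + ['Y']*6 + ['X']*6 + ['Y']*6
typ = ['foo']*12 + ['bar']*12
dat = ['19/01/18','19/01/18','22/01/18','22/01/18','23/01/18','24/01/18'] * 4
df = pd.DataFrame({'group': grp, 'type': typ, 'value': vals,
                   'date': pd.to_datetime(dat, dayfirst=True)})

# collapse to one row per (group, type, date)
df2 = df.groupby(['group', 'type', 'date']).sum().reset_index().sort_values('date')

# e.g. X/foo on 19 Jan is -4 + 17 = 13, on 22 Jan is -4 + -16 = -20
x_foo = df2[(df2.group == 'X') & (df2.type == 'foo')].set_index('date').value
print(x_foo.tolist())  # [13, -20, 2, 20]
```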
Now you can just perform a rolling sum() with min_periods=1, so that your first value is not NaN:
k = df2.groupby(["group", "type"]).value.rolling(window=2, min_periods=1).sum()
This yields
group type
X bar 0 -11.0
1 -19.0
2 -18.0
3 -31.0
foo 4 13.0
5 -7.0
6 -18.0
7 22.0
Y bar 8 17.0
9 40.0
10 8.0
11 -30.0
foo 12 13.0
13 -12.0
14 -46.0
15 -19.0
which is already what you want, but without the date values. To get the dates back, we can use a trick: replace the third level of this MultiIndex with the date values taken from a similar frame grouped the same way. Hence, we can do
aux = df2.groupby(["group", "type", "date"]).date.rolling(2).count().index.get_level_values(2)
and substitute the index:
k.index = pd.MultiIndex.from_tuples([(k.index[x][0], k.index[x][1], aux[x]) for x in range(len(k.index))])
Finally, you have your expected output:
k.to_frame()
group type date value
0 X bar 2018-01-19 -11.0
1 X bar 2018-01-22 -19.0
2 X bar 2018-01-23 -18.0
3 X bar 2018-01-24 -31.0
4 X foo 2018-01-19 13.0
5 X foo 2018-01-22 -7.0
6 X foo 2018-01-23 -18.0
7 X foo 2018-01-24 22.0
8 Y bar 2018-01-19 17.0
9 Y bar 2018-01-22 40.0
10 Y bar 2018-01-23 8.0
11 Y bar 2018-01-24 -30.0
12 Y foo 2018-01-19 13.0
13 Y foo 2018-01-22 -12.0
14 Y foo 2018-01-23 -46.0
15 Y foo 2018-01-24 -19.0
I expected the following to work:
g = lambda ts: ts.rolling('2B', on='date')['value'].sum()
df.groupby(['group', 'type']).apply(g)
However, I get an error, as a business day is not a fixed frequency.
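That failure can be seen directly: pandas rejects non-fixed offsets such as BusinessDay as time-based rolling windows (a small sketch; `df_demo` is just an illustrative name and the exact message may vary by pandas version):

```python
import pandas as pd

df_demo = pd.DataFrame({'date': pd.to_datetime(['2018-01-19', '2018-01-22']),
                        'value': [1, 2]})

caught = None
try:
    # '2B' is a business-day offset, which has no fixed length in time
    df_demo.rolling('2B', on='date')['value'].sum()
except ValueError as exc:
    caught = exc

print(caught)  # e.g. "<2 * BusinessDays> is a non-fixed frequency"
```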
This brings me to suggest the following solution, which is a lot uglier:
value_per_bday = lambda df: df.resample('B', on='date')['value'].sum()
df = df.groupby(['group', 'type']).apply(value_per_bday).stack()
value_2_bdays = lambda x: x.rolling(2, min_periods=1).sum()
df = df.groupby(level=['group', 'type']).apply(value_2_bdays)
Maybe it reads better as a function; your pick.
def resample_and_sum(x):
    x = x.resample('B', on='date')['value'].sum()
    x = x.rolling(2, min_periods=1).sum()
    return x
df = df.groupby(['group', 'type']).apply(resample_and_sum).stack()
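An end-to-end run of that approach on the sample data can be sketched as follows. Note that 19 Jan 2018 is a Friday and 22 Jan a Monday, so resampling at 'B' frequency produces no empty weekend bins here:

```python
import pandas as pd

# sample data from the question
vals = [-4,17,-4,-16,2,20,3,10,-17,-8,-21,2,0,-11,16,-24,-10,-21,5,12,14,9,-15,-15]
grp = ['X']*6 + ['Y']*6 + ['X']*6 + ['Y']*6
typ = ['foo']*12 + ['bar']*12
dat = ['19/01/18','19/01/18','22/01/18','22/01/18','23/01/18','24/01/18'] * 4
df = pd.DataFrame({'group': grp, 'type': typ, 'value': vals,
                   'date': pd.to_datetime(dat, dayfirst=True)})

def resample_and_sum(x):
    # one business-day bucket per group, then a 2-bucket rolling sum
    x = x.resample('B', on='date')['value'].sum()
    return x.rolling(2, min_periods=1).sum()

out = df.groupby(['group', 'type']).apply(resample_and_sum).stack()
print(out.loc[('X', 'foo')].tolist())  # [13.0, -7.0, -18.0, 22.0]
```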