
Applying rolling window over non-consecutive values in pandas

I need to calculate a new column for a dataframe with a given structure by applying a rolling window to values that are not positioned next to each other in the dataframe.

My dataframe is defined by something like this:

import pandas as pd
from datetime import date

df = pd.DataFrame([
    {'date': date(2019,1,1), 'id': 1, 'value': 1},
    {'date': date(2019,1,1), 'id': 2, 'value': 10},
    {'date': date(2019,1,1), 'id': 3, 'value': 100},
    {'date': date(2019,1,2), 'id': 1, 'value': 2},
    {'date': date(2019,1,2), 'id': 2, 'value': 20},
    {'date': date(2019,1,2), 'id': 3, 'value': 200},
    {'date': date(2019,1,3), 'id': 1, 'value': 3},
    {'date': date(2019,1,3), 'id': 2, 'value': 30},
    {'date': date(2019,1,3), 'id': 3, 'value': 300},
    {'date': date(2019,1,6), 'id': 1, 'value': 4},
    {'date': date(2019,1,6), 'id': 2, 'value': 40},
    {'date': date(2019,1,6), 'id': 3, 'value': 400},
])
df = df.set_index(['date', 'id'], drop=False).sort_index()

which gives a df looking like this:

                   date     id  value
date        id      
--------------+--------------------------   
2019-01-01  1 | 2019-01-01  1   1
            2 | 2019-01-01  2   10
            3 | 2019-01-01  3   100
2019-01-02  1 | 2019-01-02  1   2
            2 | 2019-01-02  2   20
            3 | 2019-01-02  3   200
2019-01-03  1 | 2019-01-03  1   3
            2 | 2019-01-03  2   30
            3 | 2019-01-03  3   300
2019-01-06  1 | 2019-01-06  1   4
            2 | 2019-01-06  2   40
            3 | 2019-01-06  3   400

I want to measure the change in column value from one (given) day to the next for each id. So for id==1, the change from 2019-01-01 to 2019-01-02 is (2-1) / 1 = 1, and from 2019-01-03 to 2019-01-06 it is (4-3) / 3 = 0.333.
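As a quick sanity check (my addition, not part of the question), slicing one id out of the MultiIndexed df reproduces these numbers:

s = df.xs(1, level='id')['value']  # value series for id == 1 across all dates: 1, 2, 3, 4
print(s.pct_change())              # NaN, 1.0, 0.5, 0.333... -- matches the changes above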

I can calculate the desired column if I restructure the df like this so that all values are next to each other:

# drop the MultiIndex and re-index by date only, so each id's rows line up consecutively
restructured = df.reset_index(drop=True).set_index(['date']).sort_index()
df1 = restructured.groupby('id').rolling(2).apply(lambda x: (x.max()-x.min())/x.min(), raw=False)

resulting in the desired values in column value:

                 id     value
id  date 
---------------+--------------------        
1   2019-01-01 | NaN    NaN
    2019-01-02 | 0.0    1.000000
    2019-01-03 | 0.0    0.500000
    2019-01-06 | 0.0    0.333333
2   2019-01-01 | NaN    NaN
    2019-01-02 | 0.0    1.000000
    2019-01-03 | 0.0    0.500000
    2019-01-06 | 0.0    0.333333
3   2019-01-01 | NaN    NaN
    2019-01-02 | 0.0    1.000000
    2019-01-03 | 0.0    0.500000
    2019-01-06 | 0.0    0.333333
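One thing worth noting (my observation, not from the original post): for a window of 2 over values that only ever increase, the (x.max()-x.min())/x.min() lambda is just the ordinary percentage change, which is why the pct_change-based answer below reproduces these numbers:

import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3, 4])
a = s.rolling(2).apply(lambda x: (x.max() - x.min()) / x.min(), raw=False)
b = s.pct_change()
# With monotonically increasing values, max is the current element and min the
# previous one, so (max - min) / min reduces to (curr - prev) / prev.
print(np.allclose(a, b, equal_nan=True))  # True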

How can I join/merge this column back to df in the original structure, or calculate the values in another way, so that the resulting dataframe looks like this (the first df with an added change_pct column):

                   date     id  value   change_pct
date        id      
--------------+---------------------------------    
2019-01-01  1 | 2019-01-01  1   1       NaN
            2 | 2019-01-01  2   10      NaN
            3 | 2019-01-01  3   100     NaN
2019-01-02  1 | 2019-01-02  1   2       1.000000
            2 | 2019-01-02  2   20      1.000000
            3 | 2019-01-02  3   200     1.000000
2019-01-03  1 | 2019-01-03  1   3       0.500000
            2 | 2019-01-03  2   30      0.500000
            3 | 2019-01-03  3   300     0.500000
2019-01-06  1 | 2019-01-06  1   4       0.333333
            2 | 2019-01-06  2   40      0.333333
            3 | 2019-01-06  3   400     0.333333

IIUC, this might be simpler.

df['change_pct']=df.groupby('id')['value'].pct_change()

To do this, DO NOT run df = df.set_index(['date', 'id'], drop=False).sort_index() first; just run the above line directly on your df.

Output

        date    id  value   change_pct
0   2019-01-01  1   1       NaN
1   2019-01-01  2   10      NaN
2   2019-01-01  3   100     NaN
3   2019-01-02  1   2       1.000000
4   2019-01-02  2   20      1.000000
5   2019-01-02  3   200     1.000000
6   2019-01-03  1   3       0.500000
7   2019-01-03  2   30      0.500000
8   2019-01-03  3   300     0.500000
9   2019-01-06  1   4       0.333333
10  2019-01-06  2   40      0.333333
11  2019-01-06  3   400     0.333333
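One caveat (my addition, not part of the original answer): pct_change compares each row with the previous row of its group, so the rows of each id must already be in date order. Sorting defensively first makes that explicit:

# pct_change relies on row order within each id group; sort first to be safe
df = df.sort_values(['id', 'date'])
df['change_pct'] = df.groupby('id')['value'].pct_change()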

You can groupby a part of the index with the level kwarg:

df['value'].groupby(level='id').rolling(2).apply(lambda x: (x.max()-x.min())/x.min(), raw=False)
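Note that the rolling result carries the group key as an extra leading index level, so it does not align with df directly. A sketch (my addition, assuming the MultiIndexed df from the question) of writing it back:

out = df['value'].groupby(level='id').rolling(2).apply(
    lambda x: (x.max() - x.min()) / x.min(), raw=False)
# The result is indexed by (id, date, id); drop the leading group level so it
# aligns with df's (date, id) index before assigning.
df['change_pct'] = out.droplevel(0)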

The answer by SH-SF guided me to solve the problem:

The problem becomes easy if I just work on the non-indexed df:

import pandas as pd
from datetime import date

df = pd.DataFrame([
    {'date': date(2019,1,1), 'id': 1, 'value': 1},
    {'date': date(2019,1,1), 'id': 2, 'value': 10},
    {'date': date(2019,1,1), 'id': 3, 'value': 100},
    {'date': date(2019,1,2), 'id': 1, 'value': 2},
    {'date': date(2019,1,2), 'id': 2, 'value': 20},
    {'date': date(2019,1,2), 'id': 3, 'value': 200},
    {'date': date(2019,1,3), 'id': 1, 'value': 3},
    {'date': date(2019,1,3), 'id': 2, 'value': 30},
    {'date': date(2019,1,3), 'id': 3, 'value': 300},
    {'date': date(2019,1,6), 'id': 1, 'value': 4},
    {'date': date(2019,1,6), 'id': 2, 'value': 40},
    {'date': date(2019,1,6), 'id': 3, 'value': 400},
])

df = df.sort_values(['id', 'date'])  # make sure everything is in the correct order

window_size = 2  # the window size is adjustable

# calculate values
c = df.groupby('id')['value'].rolling(window_size).apply(lambda x: (x.max()-x.min())/x.min(), raw=False)

df['change_pct'] = c.values  # create new column in df

# now I can recreate the structure I need
df = df.set_index(['date', 'id'], drop=False).sort_index()
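A quick spot-check of one cell (my addition) confirms the expected value:

# id 1, 2019-01-06: (4 - 3) / 3 ≈ 0.333333
print(df.loc[(date(2019, 1, 6), 1), 'change_pct'])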
