
Rolling average all values of pandas DataFrame

I have a pandas DataFrame and I want to calculate, on a rolling basis, the average of all the values: across all the columns, for all the observations in the rolling window.

I have a solution with loops, but it feels very inefficient. Note that I can have NaNs in my data, so calculating the sum and dividing by the size of the window would not be safe (I want a nanmean).
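For illustration, a minimal sketch of why the plain sum is unsafe (assuming NumPy is imported as in the Setup below): a single NaN poisons the sum, while np.nanmean simply ignores it.

window = np.array([1.0, np.nan, 3.0])
print(window.sum() / window.size)  # nan: the NaN propagates through the sum
print(np.nanmean(window))          # 2.0: the NaN is ignored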

Any better approach?

Setup

import numpy as np
import pandas as pd

np.random.seed(1)

df = pd.DataFrame(np.random.randint(0, 10, size=(10, 2)), columns=['A', 'B'])

df[df>5] = np.nan  # EDIT: add nans

My Attempt

n_roll = 2

# Flatten each window of n_roll rows (all columns) and take a NaN-aware mean
df_stacked = df.values
roll_avg = {}
for idx in range(n_roll, len(df_stacked)+1):
    roll_avg[idx-1] = np.nanmean(df_stacked[idx - n_roll:idx, :].flatten())

# Align with the original index; the first n_roll-1 positions have no
# complete window, so they become NaN after the reindex
roll_avg = pd.Series(roll_avg)
roll_avg.index = df.index[n_roll-1:]
roll_avg = roll_avg.reindex(df.index)

Desired Result

roll_avg
Out[33]: 
0         NaN
1    5.000000
2    1.666667
3    0.333333
4    1.000000
5    3.000000
6    3.250000
7    3.250000
8    3.333333
9    4.000000

Thanks!

Here's one NumPy solution using sliding windows from scikit-image's view_as_windows -

from skimage.util.shape import view_as_windows

# Setup o/p array
out = np.full(len(df),np.nan)

# Get sliding windows of length n_roll along axis=0
w = view_as_windows(df.values,(n_roll,1))[...,0]

# Assign nan-ignored mean values computed along last 2 axes into o/p
out[n_roll-1:] = np.nanmean(w, (1,2))

Memory efficiency with views -

In [62]: np.shares_memory(df,w)
Out[62]: True
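If you'd rather avoid the scikit-image dependency, NumPy 1.20+ ships np.lib.stride_tricks.sliding_window_view, which likewise returns a view; a minimal sketch of the same approach (assuming the Setup above):

from numpy.lib.stride_tricks import sliding_window_view

# Windows of n_roll rows along axis=0; the window axis is appended last,
# giving shape (len(df) - n_roll + 1, n_cols, n_roll)
w = sliding_window_view(df.values, n_roll, axis=0)

out = np.full(len(df), np.nan)
out[n_roll-1:] = np.nanmean(w, axis=(1, 2))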

To get the same result in the presence of NaN, you can use column_stack on all the df.shift(i).values for i in range(n_roll), take nanmean along axis=1, and then replace the first n_roll-1 values with NaN afterwards:

# Stack each row together with the previous n_roll-1 rows, then take a
# NaN-aware mean across every stacked row
roll_avg = pd.Series(np.nanmean(np.column_stack([df.shift(i).values for i in range(n_roll)]), 1))
roll_avg[:n_roll-1] = np.nan

and with the NaN-containing input you get, as expected:

0         NaN
1    5.000000
2    1.666667
3    0.333333
4    1.000000
5    3.000000
6    3.250000
7    3.250000
8    3.333333
9    4.000000
dtype: float64
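To see why this works: the column-stacked array has shape (len(df), n_roll * n_cols), so row i collects the values of rows i, i-1, ..., i-n_roll+1, and a row-wise nanmean is exactly the rolling window mean. A quick check of the intermediate (assuming the Setup above):

stacked = np.column_stack([df.shift(i).values for i in range(n_roll)])
print(stacked.shape)  # (10, 4): each output row holds one 2-row window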

Using the answer referenced in the comment, one can do:

wsize = n_roll
cols = df.shape[1]
out = df.stack(dropna=False).rolling(window=wsize * cols, min_periods=1).mean().reset_index(-1, drop=True).sort_index()
out = out.groupby(out.index).last()
out.iloc[:n_roll-1] = np.nan

In my case it was important to specify dropna=False in stack, otherwise the length of the rolling window would not be correct.
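To illustrate with the Setup above (where df contains NaNs): stack() drops NaN entries by default, which shortens the stacked Series and misaligns the fixed-size window.

print(df.stack().shape)              # NaN entries dropped: fewer than 20 values
print(df.stack(dropna=False).shape)  # (20,): all 10 rows x 2 columns kept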

But I am looking forward to other approaches as this does not feel very elegant/efficient.
