Simple moving average 2D array python
I am trying to compute a simple moving average for each line of a 2D array. The data in each row is a separate data set, so I can't just compute the SMA over the whole array; I need to do it separately in each line. I have tried a for loop, but it is taking the window as rows rather than as individual values.

The equation I am using to compute the SMA is: (a1 + a2 + ... + an) / n. This is the code I have so far:
import numpy as np

# make amplitude array
amplitude = [0, 1, 2, 3, 5.5, 6, 5, 2, 2, 4, 2, 3, 1, 6.5, 5,
             7, 1, 2, 2, 3, 8, 4, 9, 2, 3, 4, 8, 4, 9, 3]

# split array up into a line for each sample
traceno = 5    # number of traces in file
samplesno = 6  # number of samples in each trace. This won't change.
amplitude_split = np.array(amplitude, dtype=int).reshape((traceno, samplesno))

# define window to average over:
window_size = 3

# doesn't work for values that come before the window size,
# i.e. index 2 would not have enough values to divide by 3
# define limits:
lowerlimit = (window_size - 1)
upperlimit = samplesno
i = window_size
for row in range(traceno):
    for n in range(samplesno):
        while lowerlimit < i < upperlimit:
            this_window = amplitude_split[(i - window_size):i]
            window_average = sum(this_window) / window_size
            i += 1
            print(window_average)
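For reference, the loop can be fixed by slicing within each row instead of across rows. This sketch (my own, not from the original post) indexes `amplitude_split[row, i - window_size:i]` so the window slides over individual values in one row at a time:

```python
import numpy as np

amplitude = [0, 1, 2, 3, 5.5, 6, 5, 2, 2, 4, 2, 3, 1, 6.5, 5,
             7, 1, 2, 2, 3, 8, 4, 9, 2, 3, 4, 8, 4, 9, 3]
traceno, samplesno, window_size = 5, 6, 3
amplitude_split = np.array(amplitude, dtype=int).reshape((traceno, samplesno))

averages = []
for row in range(traceno):
    row_averages = []
    # slide the window over the samples of this row only
    for i in range(window_size, samplesno + 1):
        this_window = amplitude_split[row, i - window_size:i]
        row_averages.append(sum(this_window) / window_size)
    averages.append(row_averages)

print(averages)
```

Each row yields samplesno - window_size + 1 = 4 averages, matching the expected 5x4 output below.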
My expected output for this data set is:
[[1, 2, 3.33, 4.66]
[3, 2.66, 2.66, 3. ]
[4, 6, 4.33, 3.33]
[4.33, 5, 7, 5. ]
[5, 5.33, 7, 5.33]]
But I am getting:
[2. 3. 3. 4.66666667 2.66666667 3.66666667]
[2.66666667 3.66666667 5. 5. 4. 2.33333333]
[2. 4.33333333 7. 5. 6.33333333 2.33333333]
You can use convolution with a kernel of ones ([1, 1, ..., 1] of length window_size) and then divide by window_size to get the average (no need for a loop):
import numpy as np
from scipy.signal import convolve2d

window_average = convolve2d(amplitude_split, np.ones((1, window_size)), 'valid') / window_size
Convolution with ones basically adds up the elements in the window.
Output:
[[1. 2. 3.33333333 4.66666667]
[3. 2.66666667 2.66666667 3. ]
[4. 6. 4.33333333 3.33333333]
[4.33333333 5. 7. 5. ]
[5. 5.33333333 7. 5.33333333]]
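To see why this works, here is a small one-dimensional check (my own illustration, not part of the original answer): convolving with a ones kernel in 'valid' mode produces the sum of each length-3 window, and dividing by the window size turns those sums into averages.

```python
import numpy as np

# convolution with a ones kernel sums each sliding window
sums = np.convolve([0, 1, 2, 3], np.ones(3), 'valid')
print(sums)      # window sums: [3. 6.]
print(sums / 3)  # window averages: [1. 2.]
```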
That should be easy to compute with np.correlate, using the vector np.ones(window_size) / window_size, but unfortunately that function does not seem to be able to broadcast the correlation operation. So here is another simple way to compute it with np.cumsum:
import numpy as np
amplitude = [ 0, 1, 2, 3, 5.5, 6,
5, 2, 2, 4, 2, 3,
1, 6.5, 5, 7, 1, 2,
2, 3, 8, 4, 9, 2,
3, 4, 8, 4, 9, 3]
traceno = 5
samplesno = 6
amplitude_split = np.array(amplitude, dtype=int).reshape((traceno, samplesno))  # np.int was removed in NumPy 1.24; plain int behaves the same
window_size = 3
# Scale down by window size
a = amplitude_split * (1.0 / window_size)
# Cumsum across columns
b = np.cumsum(a, axis=1)
# Add an initial column of zeros
c = np.pad(b, [(0, 0), (1, 0)])
# Take difference to get means
result = c[:, window_size:] - c[:, :-window_size]
print(result)
# [[1. 2. 3.33333333 4.66666667]
# [3. 2.66666667 2.66666667 3. ]
# [4. 6. 4.33333333 3.33333333]
# [4.33333333 5. 7. 5. ]
# [5. 5.33333333 7. 5.33333333]]
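The cumsum trick relies on the identity sum(a[i-w:i]) = cumsum(a)[i] - cumsum(a)[i-w]. A quick sanity check (my own, on synthetic data) comparing it against a direct per-window mean:

```python
import numpy as np

a = np.arange(12, dtype=float).reshape(2, 6)
w = 3

# cumsum-based rolling mean, as in the answer above
b = np.cumsum(a / w, axis=1)
c = np.pad(b, [(0, 0), (1, 0)])       # prepend a zero column
fast = c[:, w:] - c[:, :-w]

# direct computation for comparison
slow = np.stack([a[:, i:i + w].mean(axis=1)
                 for i in range(a.shape[1] - w + 1)], axis=1)

assert np.allclose(fast, slow)
```

The cumsum version does O(n) work per row regardless of window size, whereas the direct loop does O(n * w).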