I'm trying to implement Eulerian Video Magnification as described in this paper, but when applying the Butterworth bandpass filter I keep running into "ValueError: object of too small depth for desired array".
This is my code for the Butterworth bandpass filter:
def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
    b, a = butter_bandpass(lowcut, highcut, fs, order=order)
    y = scipy.signal.lfilter([b], [a], data, axis=0)  # the line that errors
    return y

def butter_bandpass(lowcut, highcut, fs, order=5):
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    b, a = scipy.signal.butter(order, [low, high], btype='band')
    return b, a
I'm calling butter_bandpass_filter from magnify_motion, like so:
magnify_motion(tired_me, 0.4, 3)

def magnify_motion(video, low, high, n=4, sigma=3, amp=20):
    lap_video_lst = video.get_laplacian_lst(n=n, sigma=sigma)
    print("lap_video_lst shapes:")
    for i in range(n):
        print("{}:".format(i), get_list_shape(lap_video_lst[i]))
    ret_lst = []
    for layer in range(n):
        filtered_layer = butter_bandpass_filter(lap_video_lst[layer], low, high, video.fps)  # this line errors
        filtered_layer *= amp
        ret_lst.append(filtered_layer)
    return ret_lst
Each lap_video_lst[layer] is a NumPy array of all the frames of the video, with shape (frame_count, height, width, colour_channels). The printed shapes are:
0: (330, 360, 640, 3)
1: (330, 180, 320, 3)
2: (330, 90, 160, 3)
3: (330, 45, 80, 3)
Note that the layers have different dimensions because they are the levels of the Laplacian pyramid of the original video.
In case it is useful, these are the shapes of the b and a NumPy arrays, along with their respective values:
b: (1, 11)
[[ 0.00069339  0.         -0.00346694  0.          0.00693387  0.
  -0.00693387  0.          0.00346694  0.         -0.00069339]]
a: (1, 11)
[[  1.          -8.02213491  29.18702261 -63.4764537   91.44299881
  -91.21397148  63.81766134 -30.92689236   9.93534351  -1.91057439
    0.16700076]]
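For reference, here is a minimal reproduction of the shape mismatch (the frame rate of 30 fps is a stand-in value, not necessarily my video's actual fps):

```python
import numpy as np
import scipy.signal

fs = 30.0  # stand-in frame rate
nyq = 0.5 * fs
b, a = scipy.signal.butter(5, [0.4 / nyq, 3.0 / nyq], btype='band')

print(b.shape)               # (11,) -- butter returns 1-D coefficient arrays
print(np.array([b]).shape)   # (1, 11) -- wrapping in a list adds a dimension

# Passing the wrapped, 2-D coefficients to lfilter reproduces the error:
data = np.zeros((10, 2, 2, 3))
try:
    scipy.signal.lfilter([b], [a], data, axis=0)
except ValueError as e:
    print(e)  # object of too small depth for desired array
```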
This is the full error trace in case there's some detail there that I'm overlooking:
Traceback (most recent call last):
  File "d:\Desktop\Stuff\Uni notes B\2021 Fall\Cs194\Projects\Project Final 1\tester.py", line 84, in <module>
    main()
  File "d:\Desktop\Stuff\Uni notes B\2021 Fall\Cs194\Projects\Project Final 1\tester.py", line 71, in main
    magnify_motion(tired_me, 0.4, 3)
  File "d:\Desktop\Stuff\Uni notes B\2021 Fall\Cs194\Projects\Project Final 1\tester.py", line 32, in magnify_motion
    filtered_layer = butter_bandpass_filter(lap_video_lst[layer], low, high, video.fps)
  File "d:\Desktop\Stuff\Uni notes B\2021 Fall\Cs194\Projects\Project Final 1\tester.py", line 17, in butter_bandpass_filter
    y = scipy.signal.lfilter([b], [a], data, axis=0)
  File "C:\Users\nick-\AppData\Roaming\Python\Python38\site-packages\scipy\signal\signaltools.py", line 1972, in lfilter
    raise ValueError('object of too small depth for desired array')
ValueError: object of too small depth for desired array
Any tips would be helpful. Thanks :D
I haven't read the paper, but the error you are seeing is fixed by passing b and a directly, rather than wrapping them in lists:

y = scipy.signal.lfilter(b, a, data, axis=0)

scipy.signal.lfilter expects b and a to be 1-D coefficient arrays; wrapping them in lists makes them 2-D (note that the shapes you printed are (1, 11) rather than (11,)), which triggers the ValueError.

With axis=0, the filter treats each per-pixel time series data[:, x, y, c] as a signal, filtering each pixel's values over time (which will probably give the video some motion blur). This is different from spatial filtering, which is used to sharpen or smooth edges; for spatial filtering you would use axis=1 or axis=2.
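A self-contained sketch of the corrected filter (the 30 fps frame rate and the randomly generated test layer are stand-ins for your video data):

```python
import numpy as np
import scipy.signal

def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
    nyq = 0.5 * fs
    b, a = scipy.signal.butter(order, [lowcut / nyq, highcut / nyq], btype='band')
    # b and a are already 1-D; pass them directly so lfilter
    # filters each pixel's time series along axis 0
    return scipy.signal.lfilter(b, a, data, axis=0)

# stand-in layer: 330 frames of a 45x80 RGB image (the smallest pyramid level)
layer = np.random.rand(330, 45, 80, 3)
filtered = butter_bandpass_filter(layer, 0.4, 3.0, fs=30.0)
print(filtered.shape)  # (330, 45, 80, 3) -- output keeps the input shape
```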