I am trying to replicate the study in this paper to verify it.
Long story short:
I extracted the row of pixels from all the frames in the video using:
values_list = []
for filename in glob.glob('frames/*.png'):
    img = cv2.imread(filename, 0)  # read as grayscale
    values_list.append(img[100, :])  # extract the pixel row at y = 100
Then I created a plot using:
fig, ax = plt.subplots()
width = 10
xlim = 0, width*len(values_list)
ylim = 0, max([len(v) for v in values_list]) + 2
ax.set(xlim=xlim, ylim=ylim, autoscale_on=False)
for i in range(len(values_list)):
    plt.imshow(np.array(values_list[i]).reshape(-1, 1),
               extent=[i * width, (i + 1) * width, 0, len(values_list[i])],
               origin='lower', cmap='gray')
ax.set_aspect('auto')
fig.set_size_inches(20, 10.5)
plt.savefig('myimage.png', format='png', dpi=1000)
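As an aside, the per-frame imshow loop above can be replaced by a single imshow call on the stacked array, which is faster and avoids the extent bookkeeping. A minimal sketch, using random data as a hypothetical stand-in for values_list:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt

# Hypothetical stand-in for values_list: 300 frames, one 200-pixel row each.
values_list = np.random.randint(0, 256, size=(300, 200))

fig, ax = plt.subplots(figsize=(20, 10.5))
# Transpose so frames run along x and the pixel row runs along y,
# matching the orientation produced by the per-frame extent loop.
im = ax.imshow(values_list.T, origin='lower', cmap='gray', aspect='auto')
fig.savefig('myimage.png', dpi=200)
```

Because the whole image is one array, the figure also scales correctly when saved at any dpi.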
This gives the following plot:
The above is equivalent to what the authors show in panels b), c) and d) of the paper (but horizontal instead of vertical, and grayscale instead of color). How did they convert it to the equivalent of panels e), f) and g), shown below?
All the paper claims is:
This modulation (b), c) and d)) is poorly visible in the figures. To make this modulation more evident, we subtracted a slowly varying component along the vertical direction of the diagram separately from each pixel's time-variable value, thus enhancing the alternating component (AC) of the light modulation, which varies at the heartbeat rate or higher.
How do I subtract a slowly varying component from the pixels stored in values_list, which holds the extracted pixel row for every frame?
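For reference, one common reading of "subtracting a slowly varying component" is per-pixel temporal detrending: estimate the slow trend of each pixel's time series (e.g. with a moving average whose window spans several heartbeats' worth of frames) and subtract it, leaving the AC part. This is a sketch under that assumption, not necessarily the authors' exact procedure; the 31-frame window is a guess that depends on the video's frame rate:

```python
import numpy as np

def remove_slow_component(values, window=31):
    """Subtract a per-pixel temporal moving average (the slow component).

    values: 2-D array of shape (n_frames, n_pixels), e.g. np.array(values_list).
    Returns the AC component with the same shape.
    """
    values = np.asarray(values, dtype=float)
    kernel = np.ones(window) / window
    # Moving average along axis 0 (time), computed independently per pixel.
    slow = np.apply_along_axis(
        lambda ts: np.convolve(ts, kernel, mode='same'), 0, values)
    return values - slow
```

If the slow drift is roughly linear, `scipy.signal.detrend(values, axis=0)` would be a simpler alternative; the moving-average version also removes slow non-linear drifts.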
Extras:
To reproduce the values_list behind the graph: download the video linked at the bottom of the paper under Electronic supplementary material, use the following code to convert the video into frames, and then apply the code above.
import cv2

vidcap = cv2.VideoCapture('video/2.mp4')
success, image = vidcap.read()
count = 0
while success:
    cv2.imwrite("frames/%d.png" % count, image)  # save each frame as a PNG
    success, image = vidcap.read()
    count += 1
import glob
import cv2
import matplotlib.pyplot as plt
import numpy as np

values_list = []
values_mean = []
for filename in glob.glob('video/frames/*.png'):
    img = cv2.imread(filename, 0)  # read as grayscale
    values_list.append(img[100, :])  # extract the pixel row at y = 100
    values_mean.append(np.round(np.mean(img[100, :]), decimals=0))  # mean of that row
values_list = np.array(values_list)
values_mean = np.array(values_mean).reshape(-1,1)
new_column_value = values_mean - values_list
new_column_value_scaled = np.interp(new_column_value, (new_column_value.min(), new_column_value.max()),(0, 255))
plotted_values_list = new_column_value_scaled
fig, ax = plt.subplots()
width = 10
xlim = 0, width*len(values_list)
ylim = 0, max([len(v) for v in values_list]) + 2
ax.set(xlim=xlim, ylim=ylim, autoscale_on=False)
for i in range(len(plotted_values_list)):
    plt.imshow(plotted_values_list[i, :].reshape(-1, 1),
               extent=[i * width, (i + 1) * width, 0, len(plotted_values_list[i, :])],
               origin='lower', cmap='gray')
ax.set_aspect('auto')
fig.set_size_inches(20, 10.5)
plt.savefig('myimage_whole.png', format='png', dpi=500)
#plt.show()
which produces this image: