
Show original image pixels instead of mask in python

I have a deep learning model which returns an array which, when plotted like this,

res = deeplab_model.predict(np.expand_dims(resized2,0))
labels = np.argmax(res.squeeze(),-1) #remove single dimension values, gives the indices of maximum values in the array  
plt.imshow(labels[:-pad_x])

(the last line above just removes some unclear rows before plotting)

looks like this:

[image: masked output]

The original image is like this:

[image: original image]

When I run

print(labels[labels>0])
print(labels.shape)
print(len(labels))

I get this:

[12 12 12 ... 12 12 12]
(512, 512)
512

I want to keep the colored pixels of the original image where the mask appears and turn everything else black (or blur it, or use some other color I'll choose). How can I do that?

It's not entirely clear how the labels array works here. Assuming it contains values greater than zero where the cat and dog are, you can create the masked image with something like:

mask = labels > 0
newimage = np.zeros(image.shape, dtype=image.dtype)
newimage[mask] = image[mask]

where I've created a zero (black) image with the same shape and dtype as the original and copied in the original pixels wherever the label is greater than zero.
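The idea above can be sketched end-to-end with synthetic data (the 4x4 image and the class id 12 here are illustrative stand-ins, not values from the actual model output):

```python
import numpy as np

# Synthetic stand-ins: a small RGB "image" and a label map where class 12
# marks the segmented object, mimicking the (512, 512) labels array.
image = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
labels = np.zeros((4, 4), dtype=np.int64)
labels[1:3, 1:3] = 12  # pretend the segmenter found an object here

mask = labels > 0                # True wherever any class was predicted
newimage = np.zeros_like(image)  # black canvas, same shape and dtype
newimage[mask] = image[mask]     # copy only the masked pixels

print(newimage[1, 1])  # original pixel survives
print(newimage[0, 0])  # background stays black
```

A 2D boolean mask applied to a 3-channel array selects whole pixels, so all three channels are copied at once; `np.zeros_like` also preserves the `uint8` dtype, which matters when the result is displayed or saved.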

I was able to adapt this and achieve what I wanted:

mask = labels[:-pad_x] == 0
resizedOrig = cv2.resize(frame, (512, 384))
resizedOrig[mask] = 0
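Since the question also mentions blurring or using another color instead of black, here is a hedged sketch of the "fill the background with a chosen color" variant using only NumPy (the array sizes, the class id 12, and the fill color are illustrative; a blurred background would work the same way, with the fill array replaced by e.g. `cv2.GaussianBlur(frame, (51, 51), 0)`):

```python
import numpy as np

# Illustrative stand-ins for the resized frame and the label map.
frame = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
labels = np.zeros((8, 8), dtype=np.int64)
labels[2:6, 2:6] = 12  # pretend segmentation result

mask = labels > 0
background = np.zeros_like(frame)
background[:] = (30, 30, 30)  # any background color you like

# Broadcast the 2D mask across the color channels and pick per pixel:
# original frame where the mask is set, background color elsewhere.
out = np.where(mask[..., None], frame, background)
```

`np.where` leaves `frame` untouched, which is handy when you want to produce several variants (black, color, blur) from the same frame, unlike the in-place `resizedOrig[mask] = 0` approach above.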
