
Access pixel values within a contour boundary using OpenCV in Python

I'm using OpenCV 3.0.0 on Python 2.7.9. I'm trying to track an object in a video with a still background, and estimate some of its properties. Since there can be multiple moving objects in an image, I want to be able to differentiate between them and track them individually throughout the remaining frames of the video.

One way I thought I could do that was by converting the image to binary, getting the contours of the blobs (the tracked objects, in this case) and getting the coordinates of the object boundary. Then I can go to these boundary coordinates in the grayscale image, get the pixel intensities surrounded by that boundary, and track this color gradient/pixel intensities in the other frames. This way, I could keep two objects separate from each other, so they won't be considered as new objects in the next frame.

I have the contour boundary coordinates, but I don't know how to retrieve the pixel intensities within that boundary. Could someone please help me with that?

Thanks!

Going with our comments, what you can do is create a list of numpy arrays, where each element is the intensities that describe the interior of the contour of each object. Specifically, for each contour, create a binary mask that fills in the interior of the contour, find the (x,y) coordinates of the filled-in object, then index into your image and grab the intensities.

I don't know exactly how you set up your code, but let's assume you have a grayscale image called img. You may need to convert the image to grayscale first, because cv2.findContours expects a single-channel (grayscale or binary) image. With this, call cv2.findContours normally:

import cv2
import numpy as np

#... Put your other code here....
#....

# Call if necessary
#img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Call cv2.findContours (in OpenCV 3.x it returns image, contours, hierarchy)
_, contours, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

contours is now a list of 3D numpy arrays, where each is of size N x 1 x 2, with N being the total number of contour points for each object.
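
As a quick sanity check (a small sketch, not part of the original answer), you can inspect each contour's shape before going further:

# Each contour is an N x 1 x 2 array of (x, y) points
for i in range(len(contours)):
    print('Contour %d has %d points, shape %s' % (i, contours[i].shape[0], contours[i].shape))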

As such, we can create our list like so:

# Initialize empty list
lst_intensities = []

# For each list of contour points...
for i in range(len(contours)):
    # Create a mask image that contains the contour filled in
    cimg = np.zeros_like(img)
    cv2.drawContours(cimg, contours, i, color=255, thickness=-1)

    # Access the image pixels and create a 1D numpy array then add to list
    pts = np.where(cimg == 255)
    lst_intensities.append(img[pts[0], pts[1]])

For each contour, we create a blank image, then draw the filled-in contour in this blank image. You can fill in the area that the contour occupies by specifying the thickness parameter to be -1. I set the interior of the contour to 255. Afterwards, we use numpy.where to find all row and column locations in an array that match a certain condition. In our case, we want to find the values that are equal to 255. We then use these points to index into our image to grab the pixel intensities that are interior to the contour.

lst_intensities contains the list of 1D numpy arrays, where each element gives you the intensities that belong to the interior of the contour of each object. To access each array, simply do lst_intensities[i], where i is the contour you want to access.
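
For example (a small usage sketch building on the code above), you could summarize each object by the mean of its interior intensities:

# Report the mean intensity and pixel count for each object
for i in range(len(lst_intensities)):
    print('Object %d: mean intensity = %.2f over %d pixels' % (i, lst_intensities[i].mean(), lst_intensities[i].size))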

The answer from @rayryeng is excellent!

One small thing from my implementation is: np.where() returns a tuple, which contains an array of row indices and an array of column indices. So, pts[0] is a list of row indices, which correspond to the height of the image, and pts[1] is a list of column indices, which correspond to the width of the image. img.shape returns (rows, cols, channels). So I think it should be img[pts[0], pts[1]] to slice the ndarray behind img.
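
A tiny self-contained example (not from the answers above) that shows the (rows, cols) ordering of np.where:

import numpy as np

a = np.zeros((3, 4), dtype=np.uint8)
a[1, 2] = 255                    # row 1, column 2
pts = np.where(a == 255)
print(pts)                       # (array([1]), array([2])) -> (row indices, column indices)
print(a[pts[0], pts[1]])         # [255] -> index rows first, then columns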

I am sorry I cannot add this as a comment on the first correct answer, since I do not have enough reputation to do so.

Actually, there is a small improvement to the nice code above: we can skip the line in which we get the points, because the grayscale image and the np.zeros temp image have the same shape, so we can use that 'where' condition inside the brackets directly. Something like this:

# (...) opening image, converting into grayscale, detect contours (...)
intensityPer = 0.15
for c in contours:
    # Build a filled-in mask for this contour
    temp = np.zeros_like(grayImg)
    cv2.drawContours(temp, [c], 0, (255,255,255), -1)
    # Index the grayscale image with the mask condition directly
    if np.mean(grayImg[temp==255]) > intensityPer*255:
        pass # here your code

With this check, we ensure that a contour is only processed if the mean intensity of the area within it is at least 15% of the maximum intensity (255).
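
As a side note (a minimal alternative sketch, not from the original answer), cv2.mean accepts a mask directly, so the same check can be written without boolean indexing:

# Equivalent check using cv2.mean with the filled-in contour as the mask
for c in contours:
    temp = np.zeros_like(grayImg)
    cv2.drawContours(temp, [c], 0, 255, -1)
    meanIntensity = cv2.mean(grayImg, mask=temp)[0]  # first element of the returned 4-tuple
    if meanIntensity > intensityPer * 255:
        pass  # here your code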
