
Detect gray things with OpenCV

I'd like to detect an object using OpenCV that is distinctly different from other elements in the scene because it is gray. This is convenient because I can just test for R == G == B, which is independent of luminosity, but doing it pixel by pixel is slow.

Is there a faster way to detect gray things? Maybe there's an OpenCV method that does the R == G == B test... cv2.inRange does color thresholding, but it's not quite what I'm looking for.
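For reference, the per-pixel check I'm trying to avoid looks roughly like this (a minimal sketch; im is a BGR uint8 image, e.g. from cv2.imread, and gray_mask_slow is just an illustrative name):

import numpy as np

def gray_mask_slow(im):
    # im: BGR uint8 image; result is 255 where B == G == R, else 0
    mask = np.zeros(im.shape[:2], dtype=np.uint8)
    for y in range(im.shape[0]):
        for x in range(im.shape[1]):
            b, g, r = im[y, x]
            if b == g == r:
                mask[y, x] = 255
    return mask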

The fastest method I can find in Python is to use slicing to compare each channel. After a few test runs, this method is upwards of 200 times faster than two nested for-loops.

import numpy as np

# im is the BGR uint8 input image, e.g. loaded with cv2.imread
bg = im[:, :, 0] == im[:, :, 1]  # B == G
gr = im[:, :, 1] == im[:, :, 2]  # G == R
slices = np.bitwise_and(bg, gr, dtype=np.uint8) * 255

This will generate a binary image where gray objects are indicated by white pixels. If you do not need a binary image, but only a logical array where gray pixels are indicated by True values, this method gets even faster:

slices = np.bitwise_and(bg, gr)

Omitting the type cast and multiplication yields a method 500 times faster than nested loops.
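If you want to check those ratios on your own machine, a quick timing comparison could look like the sketch below (using a random stand-in image; the exact speedup depends on image size and hardware):

import timeit

import numpy as np

im = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in test image

def with_loops():
    # reference implementation: two nested for-loops over every pixel
    mask = np.zeros(im.shape[:2], dtype=bool)
    for y in range(im.shape[0]):
        for x in range(im.shape[1]):
            mask[y, x] = im[y, x, 0] == im[y, x, 1] == im[y, x, 2]
    return mask

def with_slicing():
    # vectorised channel comparison from above
    bg = im[:, :, 0] == im[:, :, 1]
    gr = im[:, :, 1] == im[:, :, 2]
    return np.bitwise_and(bg, gr)

print("nested loops:", timeit.timeit(with_loops, number=3), "s")
print("slicing:     ", timeit.timeit(with_slicing, number=3), "s")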

Running this operation on this test image:

[Image: test scene containing a gray object]

Gives the following result:

[Image: detection mask of the gray object]

As you can see, the gray object is correctly detected.
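If you need the gray pixels themselves rather than just the mask, the uint8 mask can be applied back to the original image, for example (a sketch, assuming im and the slices mask from above):

import cv2

# keep only the pixels flagged as gray; everything else becomes black
gray_only = cv2.bitwise_and(im, im, mask=slices)
cv2.imshow("gray pixels only", gray_only)
cv2.waitKey(0)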

I'm surprised that such a simple check is slow; you are probably not coding it efficiently.

Here is a short piece of code that should do that for you. It is optimal neither in speed nor in memory, but it is quite good in terms of lines of code :)

std::vector<cv::Mat> planes;
cv::split(image, planes);               // split the BGR image into its three channels
cv::Mat mask = planes[0] == planes[1];  // B == G
mask &= planes[1] == planes[2];         // ... and G == R

For the sake of comparison, here it is benchmarked against what would, in my opinion, be the fastest way to do it without parallelization:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <cassert>
#include <iostream>
#include <vector>

#include <sys/time.h> //gettimeofday

static
double
P_ellapsedTime(struct timeval t0, struct timeval t1)
{
  // return elapsed time in seconds
  return (t1.tv_sec-t0.tv_sec)*1.0 + (t1.tv_usec-t0.tv_usec)/1000000.0;
}



int
main(int argc, char* argv[])
{
  struct timeval t0, t1;
  cv::Mat image = cv::imread(argv[1]);
  assert(image.type() == CV_8UC3);
  std::vector<cv::Mat> planes;
  std::cout << "Image resolution=" << image.rows << "x" << image.cols << std::endl;
  gettimeofday(&t0, NULL);
  cv::split(image, planes);
  cv::Mat mask = planes[0] == planes[1];
  mask &= planes[1] == planes[2];
  gettimeofday(&t1, NULL);
  std::cout << "Time using split: " << P_ellapsedTime(t0, t1) << "s" << std::endl;

  cv::Mat mask2 = cv::Mat::zeros(image.size(), CV_8U);
  // raw pointers into the interleaved BGR buffer and the mask (assumes continuous data)
  unsigned char *imgBuf = image.data;
  unsigned char *maskBuf = mask2.data;
  gettimeofday(&t0, NULL);
  for (; imgBuf != image.dataend; imgBuf += 3, maskBuf++)
    *maskBuf = (imgBuf[0] == imgBuf[1] && imgBuf[1] == imgBuf[2]) ? 255 : 0;
  gettimeofday(&t1, NULL);
  std::cout << "Time using loop: " << P_ellapsedTime(t0, t1) << "s" << std::endl;

  cv::namedWindow("orig", 0);
  cv::imshow("orig", image);
  cv::namedWindow("mask", 0);
  cv::imshow("mask", mask);
  cv::namedWindow("mask2", 0);
  cv::imshow("mask2", mask2);
  cv::waitKey(0);

}

Benchmark on an image:

Image resolution=3171x2179
Time using split: 0.06353s
Time using loop: 0.029044s
