How to make my own feature detection method in OpenCV?

Let's take a look at the basic tutorial named Features2D + Homography to find a known object. It uses SurfFeatureDetector to detect features:

  SurfFeatureDetector detector( minHessian );
  std::vector<KeyPoint> keypoints_object, keypoints_scene;
  detector.detect( img_object, keypoints_object );
  detector.detect( img_scene, keypoints_scene );

Then it uses SurfDescriptorExtractor to calculate descriptors (feature vectors) from the detected features.
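For context, the tutorial's next step looks roughly like the following; this is a minimal sketch in the same OpenCV 2.x-era API as the snippet above, where descriptors_object and descriptors_scene receive the computed feature vectors:

  SurfDescriptorExtractor extractor;
  cv::Mat descriptors_object, descriptors_scene;
  // compute one descriptor (feature vector) per detected keypoint
  extractor.compute( img_object, keypoints_object, descriptors_object );
  extractor.compute( img_scene, keypoints_scene, descriptors_scene );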

My questions are:

  1. If I want to create my own feature detector (for example with the Trajkovic or Harris algorithms), which descriptor extractor should I use?
  2. Are the features found by SurfFeatureDetector just individual points, or areas (regions) of points?

**** Addition ****

1) In this example, the SURF feature detection algorithm was used. I have implemented my own algorithm (Trajkovic) and it works great: all the corners (image features) are found. Then I tried to use SurfDescriptorExtractor, as in the example. The problem is that SurfDescriptorExtractor doesn't use the points I found correctly (the resulting picture shows wrong connections, which means the extractor didn't calculate the vectors correctly); see the sketch after this list.

2) I need to do it specifically with OpenCV; that's the point.

3) A "feature detector" is an algorithm that tries to find keypoints (features or corners) in an image; a "descriptor extractor" is an algorithm that calculates feature vectors describing each keypoint's position and direction.

4) In conclusion, in the example all keypoints are connected across both images (as you can see in the last picture of the tutorial) and the object is then highlighted with a rectangle. But when I use the Trajkovic algorithm, they are connected in the wrong way, which is why there is no highlighted rectangle.
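A minimal sketch of how corners from a custom detector can be handed to SurfDescriptorExtractor, in the same OpenCV 2.x-era API as above. myTrajkovicCorners() is a hypothetical placeholder for your own detector; the detail worth checking is that every cv::KeyPoint gets a positive size, because the SURF extractor derives the scale of the described patch from that field, and keypoints without a meaningful size are a plausible cause of bad descriptors and therefore wrong matches:

  // Hypothetical: your own Trajkovic detector, returning corner positions.
  std::vector<cv::Point2f> myTrajkovicCorners( const cv::Mat& img );

  // Wrap the raw corner positions into cv::KeyPoint objects.
  // The size argument (9.0f here) is an arbitrary illustrative value;
  // SurfDescriptorExtractor uses it to choose the scale of the described patch.
  std::vector<cv::KeyPoint> keypoints_object;
  cv::KeyPoint::convert( myTrajkovicCorners( img_object ), keypoints_object, 9.0f );

  SurfDescriptorExtractor extractor;
  cv::Mat descriptors_object;
  extractor.compute( img_object, keypoints_object, descriptors_object );

If the descriptors still come out wrong, comparing how SURF's own detector fills these KeyPoint fields in the OpenCV source is a good next step.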

While we can't pinpoint the problem without looking at your implementation, and possibly even doing some investigating, I can point you in the direction of a solution: the OpenCV source code, which you can compare your implementation against.

Have a look at the detectAndCompute() function:
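To make that comparison concrete, here is a rough sketch of how detection and description are driven through that single entry point in the OpenCV 3.x+ API, where SURF lives in the xfeatures2d contrib module; passing useProvidedKeypoints = true is the path relevant to your case, because it computes descriptors for keypoints you supply instead of detecting its own:

  // OpenCV 3.x+ sketch; requires <opencv2/xfeatures2d.hpp> (contrib module).
  cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create( minHessian );

  std::vector<cv::KeyPoint> keypoints;   // assume: already filled by your own detector
  cv::Mat descriptors;

  // useProvidedKeypoints = true skips SURF's detection stage and only
  // computes descriptors for the keypoints passed in.
  surf->detectAndCompute( img_object, cv::noArray(), keypoints, descriptors, true );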

Harris corner detection works a bit differently, and so does its API:
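To illustrate the difference: cv::cornerHarris does not produce cv::KeyPoint objects at all; it fills a per-pixel response map that you threshold yourself, so a conversion step along these lines (the threshold and patch size below are arbitrary illustrative values) is needed before any descriptor extractor can be applied:

  // Harris returns a per-pixel response map, not keypoints.
  // img_object is already single-channel in the tutorial, as cornerHarris requires.
  cv::Mat response;
  cv::cornerHarris( img_object, response, /*blockSize=*/2, /*ksize=*/3, /*k=*/0.04 );

  // Threshold the response map and turn the surviving pixels into keypoints.
  std::vector<cv::KeyPoint> keypoints;
  for ( int y = 0; y < response.rows; ++y )
    for ( int x = 0; x < response.cols; ++x )
      if ( response.at<float>( y, x ) > 1e-4f )   // arbitrary threshold
        keypoints.push_back( cv::KeyPoint( (float)x, (float)y, 9.0f ) );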
