
How to get an efficient result from ORB using OpenCV 2.4.9?

// NOTE: img_object (the grayscale template image) is assumed to be loaded before this fragment
int method = 0; // 0 = use ORB directly, otherwise the separate detector/extractor pair below

double t = (double)cv::getTickCount(); // start timing (t is read at the bottom of the fragment)

std::vector<cv::KeyPoint> keypoints_object, keypoints_scene;
cv::Mat descriptors_object, descriptors_scene;

cv::ORB orb;

int minHessian = 500; // unused here (leftover from a SURF-style setup)
//cv::OrbFeatureDetector detector(500);
//ORB orb(25, 1.0f, 2, 10, 0, 2, 0, 10);
// ORB(nfeatures, scaleFactor, nlevels, edgeThreshold, firstLevel, WTA_K, scoreType, patchSize)
cv::OrbFeatureDetector detector(25, 1.0f, 2, 10, 0, 2, 0, 10);
//cv::OrbFeatureDetector detector(500,1.20000004768,8,31,0,2,ORB::HARRIS_SCORE,31);
cv::OrbDescriptorExtractor extractor;

//-- object
if( method == 0 ) { //-- ORB
    orb.detect(img_object, keypoints_object);
    //cv::drawKeypoints(img_object, keypoints_object, img_object, cv::Scalar(0,255,255));
    //cv::imshow("template", img_object);

    orb.compute(img_object, keypoints_object, descriptors_object);
} else { //-- separate detector + extractor pair
    detector.detect(img_object, keypoints_object);
    extractor.compute(img_object, keypoints_object, descriptors_object);
}
// http://stackoverflow.com/a/11798593 (conversion to CV_32F is only needed for FLANN matching)
//if(descriptors_object.type() != CV_32F)
//    descriptors_object.convertTo(descriptors_object, CV_32F);


//for(;;) {
    cv::Mat frame = cv::imread("E:\\Projects\\Images\\2-134-2.bmp", 1);
    cv::Mat img_scene = cv::Mat(frame.size(), CV_8UC1);
    cv::cvtColor(frame, img_scene, cv::COLOR_BGR2GRAY); // imread loads BGR, not RGB
    //frame.copyTo(img_scene);
    if( method == 0 ) { //-- ORB
        orb.detect(img_scene, keypoints_scene);
        orb.compute(img_scene, keypoints_scene, descriptors_scene);
    } else { //-- separate detector + extractor pair
        detector.detect(img_scene, keypoints_scene);
        extractor.compute(img_scene, keypoints_scene, descriptors_scene);
    }

    //-- matching descriptor vectors using a brute-force matcher
    cv::BFMatcher matcher; // NOTE: defaults to NORM_L2
    std::vector<cv::DMatch> matches;
    cv::Mat img_matches;
    if(!descriptors_object.empty() && !descriptors_scene.empty()) {
        matcher.match (descriptors_object, descriptors_scene, matches);

        double max_dist = 0; double min_dist = 100;

        //-- quick calculation of max and min distance between matched keypoints
        for( int i = 0; i < descriptors_object.rows; i++ ) {
            double dist = matches[i].distance;
            if( dist < min_dist ) min_dist = dist;
            if( dist > max_dist ) max_dist = dist;
        }
        //printf("-- Max dist : %f \n", max_dist );
        //printf("-- Min dist : %f \n", min_dist );

        //-- keep only the "good" matches (i.e. those whose distance is below max_dist/1.6)
        std::vector<cv::DMatch> good_matches;
        for( int i = 0; i < descriptors_object.rows; i++ ) {
            if( matches[i].distance < (max_dist/1.6) )
                good_matches.push_back( matches[i] );
        }

        cv::drawMatches(img_object, keypoints_object, img_scene, keypoints_scene,
                good_matches, img_matches, cv::Scalar::all(-1), cv::Scalar::all(-1),
                std::vector<char>(), cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

        //-- localize the object
        std::vector<cv::Point2f> obj;
        std::vector<cv::Point2f> scene;

        for( size_t i = 0; i < good_matches.size(); i++) {
            //-- get the keypoints from the good matches
            obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
            scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
        }
        if( !obj.empty() && !scene.empty() && good_matches.size() >= 4) {
            cv::Mat H = cv::findHomography( obj, scene, cv::RANSAC );

            //-- get the corners from the object to be detected
            std::vector<cv::Point2f> obj_corners(4);
            obj_corners[0] = cv::Point2f(0, 0);
            obj_corners[1] = cv::Point2f(img_object.cols, 0);
            obj_corners[2] = cv::Point2f(img_object.cols, img_object.rows);
            obj_corners[3] = cv::Point2f(0, img_object.rows);

            std::vector<cv::Point2f> scene_corners(4);

            cv::perspectiveTransform( obj_corners, scene_corners, H);

            //-- draw lines between the corners (the mapped object in the scene image);
            //-- the x-offset shifts the points into the scene half of the side-by-side image
            cv::line( img_matches,
                    scene_corners[0] + cv::Point2f(img_object.cols, 0),
                    scene_corners[1] + cv::Point2f(img_object.cols, 0),
                    cv::Scalar(0, 255, 0), 4 );
            cv::line( img_matches,
                    scene_corners[1] + cv::Point2f(img_object.cols, 0),
                    scene_corners[2] + cv::Point2f(img_object.cols, 0),
                    cv::Scalar(0, 255, 0), 4 );
            cv::line( img_matches,
                    scene_corners[2] + cv::Point2f(img_object.cols, 0),
                    scene_corners[3] + cv::Point2f(img_object.cols, 0),
                    cv::Scalar(0, 255, 0), 4 );
            cv::line( img_matches,
                    scene_corners[3] + cv::Point2f(img_object.cols, 0),
                    scene_corners[0] + cv::Point2f(img_object.cols, 0),
                    cv::Scalar(0, 255, 0), 4 );

        }
    }

    t = (double)cv::getTickCount() - t;
    printf("Time : %f ms\n", t * 1000. / cv::getTickFrequency());

    if( !img_matches.empty() ) // stays empty when no descriptors were found above
        cv::imshow("match result", img_matches);
    cv::waitKey();


return 0;

Here I am performing template matching between two images: I extract key points using the ORB algorithm and match them with the BF matcher, but I am not getting a good result. I am adding an image to illustrate the problem (finding the object image within the frame image).

As you can see, the dark blue line on the teddy bear is actually the rectangle that should be drawn around the object in the frame image once the object is recognized by matching key points. I am using OpenCV 2.4.9; what changes should I make to get a good result?

In any feature detection + extraction pipeline followed by a homography estimation, there are many parameters you can play with. However, the main point to realise is that it is almost always a trade-off between computation time and accuracy.

The most crucial failure point of your code is your ORB initialization:

cv::OrbFeatureDetector detector(25, 1.0f, 2, 10, 0, 2, 0, 10);
  1. The first parameter tells the extractor to use only the top 25 results from the detector. For a reliable estimation of an 8-DOF homography with no constraints on the parameters, you should have an order of magnitude more features than parameters, i.e. around 80, or just make it an even 100.
  2. The second parameter is for scaling the images down (or the detector patch up) between octaves (or levels). Using the value 1.0f means you don't change the scale between octaves, which makes no sense, especially since your third parameter, the number of levels, is 2 and not 1. The defaults are a scale of 1.2f with 8 levels; for fewer calculations, use a scaling of 1.5f with 4 levels (again, just a suggestion, other parameters will work too). These suggestions are put together in the sketch after this list.
  3. Your fourth and last parameters say that the patch size to calculate on is 10x10. That's pretty small, but if you work at low resolution it's fine.
  4. Your score type (the parameter before last) can change the runtime a bit: you can use ORB::FAST_SCORE instead of ORB::HARRIS_SCORE, but it doesn't matter much.
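Putting those suggestions together, the initialization could look like the sketch below. This is only an illustration against the OpenCV 2.4.x API, using the values suggested above; other combinations will work too:

// ORB(nfeatures, scaleFactor, nlevels, edgeThreshold, firstLevel, WTA_K, scoreType, patchSize)
cv::OrbFeatureDetector detector(100,   // an order of magnitude more features than the 8 DOF
                                1.5f,  // real scaling between levels
                                4,     // fewer levels for less computation
                                31,    // default edge threshold
                                0,     // first pyramid level
                                2,     // WTA_K: 2 points per intensity comparison
                                cv::ORB::HARRIS_SCORE, // or cv::ORB::FAST_SCORE for slightly less runtime
                                31);   // default 31x31 patch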

Last but not least, when you initialise the BruteForce matcher object, you should remember to use the cv::NORM_HAMMING type, since ORB is a binary feature; this makes the norm calculations in the matching process actually mean something.
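In code this is a one-line change to the matcher construction in the question; the cross-check variant in the comment is an optional extra, not part of the fix itself:

cv::BFMatcher matcher(cv::NORM_HAMMING); // Hamming distance is the meaningful norm for binary descriptors
//cv::BFMatcher matcher(cv::NORM_HAMMING, true); // optionally enable cross-check filtering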
