
Image Stitching warpPerspective size issue

I am trying to stitch two images. The tech stack is OpenCV C++ on VS 2017.

The images that I considered are:

image1 of the code:

and

image2 of the code:

I have found the homography matrix using this code, with image1 and image2 as given above.

    // Requires OpenCV built with the opencv_contrib xfeatures2d module:
    //   #include <opencv2/opencv.hpp>
    //   #include <opencv2/xfeatures2d.hpp>
    //   using namespace cv; using namespace cv::xfeatures2d; using namespace std;

    //-- Step 1: Detect keypoints using the SURF detector
    int minHessian = 400;
    Ptr<SURF> detector = SURF::create(minHessian);
    vector< KeyPoint > keypoints_object, keypoints_scene;
    detector->detect(gray_image1, keypoints_object);
    detector->detect(gray_image2, keypoints_scene);

    
    Mat img_keypoints;
    drawKeypoints(gray_image1, keypoints_object, img_keypoints);
    imshow("SURF Keypoints", img_keypoints);

    Mat img_keypoints1;
    drawKeypoints(gray_image2, keypoints_scene, img_keypoints1);
    imshow("SURF Keypoints1", img_keypoints1);
    //-- Step 2: Calculate descriptors (feature vectors)
    Mat descriptors_object, descriptors_scene;
    detector->compute(gray_image1, keypoints_object, descriptors_object);
    detector->compute(gray_image2, keypoints_scene, descriptors_scene);

    //-- Step 3: Matching descriptor vectors using FLANN matcher

    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(DescriptorMatcher::FLANNBASED);
    vector< DMatch > matches;
    matcher->match(descriptors_object, descriptors_scene, matches);


    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints 
    for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    printf("-- Max dist: %f \n", max_dist);
    printf("-- Min dist: %f \n", min_dist);


    //-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
    vector< DMatch > good_matches;
    Mat result, H;
    for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance < 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }
    Mat img_matches;
    drawMatches(gray_image1, keypoints_object, gray_image2, keypoints_scene, good_matches, img_matches, Scalar::all(-1),
        Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    imshow("Good Matches", img_matches);
    std::vector< Point2f > obj;
    std::vector< Point2f > scene;
    cout << "Good Matches detected" << good_matches.size() << endl;
    for (int i = 0; i < good_matches.size(); i++)
    {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }


    // Find the Homography Matrix for img 1 and img2
    H = findHomography(obj, scene, RANSAC);
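
As an aside, a common alternative to the 3 * min_dist heuristic above is Lowe's ratio test via knnMatch. A minimal sketch, assuming the same matcher and descriptors as above (the 0.75 threshold is a conventional choice, not a value from the original code):

    vector< vector<DMatch> > knnMatches;
    matcher->knnMatch(descriptors_object, descriptors_scene, knnMatches, 2);
    vector<DMatch> ratioGood;
    for (size_t i = 0; i < knnMatches.size(); i++)
    {
        // Keep a match only if it is clearly better than the second-best candidate
        if (knnMatches[i].size() == 2 &&
            knnMatches[i][0].distance < 0.75f * knnMatches[i][1].distance)
        {
            ratioGood.push_back(knnMatches[i][0]);
        }
    }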

The next step would be to warp these. I used the perspectiveTransform function to find the corners of image1 on the stitched image. Since H was computed from image1 points (query) to image2 points (train), it maps image1's corners into image2's coordinate frame, so I took the projected corner as the number of columns to be used in the Mat result. This is the code I wrote:

    vector<Point2f> imageCorners(4);
    imageCorners[0] = Point2f(0, 0);
    imageCorners[1] = Point2f((float)image1.cols, 0);
    imageCorners[2] = Point2f((float)image1.cols, (float)image1.rows);
    imageCorners[3] = Point2f(0, (float)image1.rows);
    vector<Point2f> projectedCorners(4);
    perspectiveTransform(imageCorners, projectedCorners, H);
    // result and H were already declared above, so no second Mat result here
    warpPerspective(image1, result, H, Size((int)projectedCorners[2].x, image1.rows));
    // Paste image2 over the left part of the warped canvas
    Mat half(result, Rect(0, 0, image2.cols, image2.rows));
    image2.copyTo(half);
    imshow("result", result);

I am getting a stitched output of these images, but the issue is the size of the result. I compared the output of the above code against the two original images combined manually, and the result from the code is larger. What should I do to make it the right size? The ideal width should be image1.cols + image2.cols - overlapping length (e.g. two 800 px wide images with a 200 px overlap should give a 1400 px wide panorama).

warpPerspective(image1, result, H, Size(projectedCorners[2].x, image1.rows));

This line seems problematic. You should choose the extremum points of the projected corners for the size.

Rect rec = boundingRect(projectedCorners);
warpPerspective(image1, result, H, rec.size());

But you will lose parts of the warp if rec.tl() falls on the negative axes, so you should shift the homography matrix so that the result falls in the first quadrant. See the Warping to perspective section of my answer to Fast and Robust Image Stitching Algorithm for many images in Python.
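
The shift itself can be done by pre-multiplying H with a translation. A minimal sketch under the variable names from the question (the translation matrix T and the canvas sizing are illustrative, not the exact code of the linked answer):

    // Assuming rec = boundingRect(projectedCorners) from the snippet above.
    double tx = std::max(0.0, -(double)rec.x);  // shift right if the warp spills past x = 0
    double ty = std::max(0.0, -(double)rec.y);  // shift down if the warp spills past y = 0
    Mat T = (Mat_<double>(3, 3) << 1, 0, tx,
                                   0, 1, ty,
                                   0, 0, 1);
    // The canvas must cover both the shifted warp of image1 and the shifted image2.
    int width = std::max(rec.br().x + (int)tx, image2.cols + (int)tx);
    int height = std::max(rec.br().y + (int)ty, image2.rows + (int)ty);
    Mat shifted;
    warpPerspective(image1, shifted, T * H, Size(width, height));
    // Paste image2 at its shifted position instead of at the origin.
    image2.copyTo(shifted(Rect((int)tx, (int)ty, image2.cols, image2.rows)));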
