
OpenCV: Assertion failed (src.checkVector(2, CV_32F)

I'm currently trying to correct the perspective of an image within a UIImage extension.

When getPerspectiveTransform gets called, I get the following assertion failure.

Error

OpenCV Error: Assertion failed (src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4) in getPerspectiveTransform, file /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/imgwarp.cpp, line 6748
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Volumes/build-storage/build/master_iOS-mac/opencv/modules/imgproc/src/imgwarp.cpp:6748: error: (-215) src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4 in function getPerspectiveTransform

Code

- (UIImage *)performPerspectiveCorrection {
    Mat src = [self genereateCVMat];
    Mat thr;
    cv::cvtColor(src, thr, CV_BGR2GRAY);

    cv::threshold(thr, thr, 70, 255, CV_THRESH_BINARY);

    std::vector< std::vector <cv::Point> > contours; // Vector for storing contour
    std::vector< cv::Vec4i > hierarchy;
    int largest_contour_index=0;
    int largest_area=0;

    cv::Mat dst(src.rows,src.cols, CV_8UC1, cv::Scalar::all(0)); //create destination image

    cv::findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0)); // Find the contours in the image

    for (int i = 0; i< contours.size(); i++) {
        double a = cv::contourArea(contours[i], false); //  Find the area of contour
        if (a > largest_area){
            largest_area=a;
            largest_contour_index=i; //Store the index of largest contour
        }
    }

    cv::drawContours( dst,contours, largest_contour_index, cvScalar(255,255,255),CV_FILLED, 8, hierarchy );

    std::vector<std::vector<cv::Point> > contours_poly(1);
    approxPolyDP( cv::Mat(contours[largest_contour_index]), contours_poly[0],5, true );
    cv::Rect boundRect = cv::boundingRect(contours[largest_contour_index]);

    if(contours_poly[0].size() >= 4){
        std::vector<cv::Point> quad_pts;
        std::vector<cv::Point> squre_pts;

        quad_pts.push_back(cv::Point(contours_poly[0][0].x,contours_poly[0][0].y));
        quad_pts.push_back(cv::Point(contours_poly[0][1].x,contours_poly[0][1].y));
        quad_pts.push_back(cv::Point(contours_poly[0][3].x,contours_poly[0][3].y));
        quad_pts.push_back(cv::Point(contours_poly[0][2].x,contours_poly[0][2].y));

        squre_pts.push_back(cv::Point(boundRect.x,boundRect.y));
        squre_pts.push_back(cv::Point(boundRect.x,boundRect.y+boundRect.height));
        squre_pts.push_back(cv::Point(boundRect.x+boundRect.width,boundRect.y));
        squre_pts.push_back(cv::Point(boundRect.x+boundRect.width,boundRect.y+boundRect.height));

        Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);
        Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
        warpPerspective(src, transformed, transmtx, src.size());

        return [UIImage imageByCVMat:transformed];
    }
    else {
        NSLog(@"Make sure that your are getting 4 corner using approxPolyDP...");
        return self;
    }
}

I know it's late, but I ran into the same issue, so maybe this will help someone.

The error occurs because src and dst in getPerspectiveTransform(src, dst) have to be of type vector<Point2f>, not vector<Point>.

So it should look like this:

std::vector<cv::Point2f> quad_pts;
std::vector<cv::Point2f> squre_pts;


quad_pts.push_back(cv::Point2f(contours_poly[0][0].x,contours_poly[0][0].y));

// etc.

squre_pts.push_back(cv::Point2f(boundRect.x,boundRect.y));

//etc.
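
For reference, the same constraint is easy to reproduce from Python (a minimal sketch, not part of the original answer): getPerspectiveTransform asserts exactly four 2-channel CV_32F (float32) points on each side.

import cv2
import numpy as np

# Four corner points, but dtype int64 -- not CV_32F:
quad = np.array([[0, 0], [10, 0], [10, 10], [0, 10]])
# cv2.getPerspectiveTransform(quad, quad)  # raises the (-215) checkVector(2, CV_32F) error

# The same four points as float32 (CV_32F) satisfy the assertion:
quad32 = quad.astype(np.float32)
M = cv2.getPerspectiveTransform(quad32, quad32)  # 3x3 homography (identity here)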

@Domaijnik Hi, OpenCV SIFT returns Point2f keypoints according to the docs, but it still gives me the same error. Here's my code:

import cv2
import numpy as np

# pts1,pts2 are sift keypoints
def ransac(pts1, pts2, img_l, img_r, max_iters=500, epsilon=1):
    best_matches = []
    # Number of samples
    N = 4

    for i in range(max_iters):
        # Get 4 random samples from features
        id1 = np.random.randint(0, len(pts1), N)
        id2 = np.random.randint(0, len(pts2), N)
        src = []
        dst = []
        for i in range(N):
            # Edit1 : pt is Point2f
            src.append(pts1[id1[i]].pt)
            dst.append(pts2[id2[i]].pt)

        src = np.mat(src)
        dst = np.mat(dst)

        # Calculate the homography matrix H
        H = cv2.getPerspectiveTransform(src, dst)
        Hp = cv2.perspectiveTransform(pts1[None], H)[0]

        # Find the inliers by computing SSD(p',Hp) and saving inliers (feature pairs) that have SSD(p',Hp) < epsilon
        inliers = []
        for i in range(len(pts1)):
            ssd = np.sum(np.square(pts2[i] - Hp[i]))
            if ssd < epsilon:
                inliers.append([pts1[i], pts2[i]])

        # Keep the largest set of inliers and the corresponding homography matrix
        if len(inliers) > len(best_matches):
            best_matches = inliers

    return best_matches
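
The assertion most likely comes from the array dtype rather than from the keypoints themselves: KeyPoint.pt is a tuple of Python floats, so np.mat(src) builds a float64 matrix, while getPerspectiveTransform requires CV_32F (float32). A minimal sketch of the fix under that assumption, reusing the names from the code above:

# KeyPoint.pt is a tuple of Python floats; np.mat() stores them as float64,
# but getPerspectiveTransform asserts four points of CV_32F (float32).
src = np.float32([pts1[id1[j]].pt for j in range(N)])  # shape (4, 2), float32
dst = np.float32([pts2[id2[j]].pt for j in range(N)])
H = cv2.getPerspectiveTransform(src, dst)

# perspectiveTransform likewise wants a floating-point 2-channel array,
# shaped (1, n_points, 2):
all_pts1 = np.float32([kp.pt for kp in pts1]).reshape(1, -1, 2)
Hp = cv2.perspectiveTransform(all_pts1, H)[0]  # (n_points, 2) projected points

Two further caveats about the surrounding loop: the SSD check compares pts2[i] (a KeyPoint object) against Hp[i], so it would need pts2[i].pt there as well, and sampling id1 and id2 independently breaks the src/dst correspondence; the four samples should come from matched pairs sharing the same index.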
