
Automatic perspective correction OpenCV

I am trying to implement automatic perspective correction in my iOS program. When I use the test image I found in the tutorial, everything works as expected, but when I take a picture myself I get back a weird result.

I am using the code found in this tutorial.

When I give it an image that looks like this:

[image: input photo]

I get this as the result:

[image: result with many extra green lines]

Here is what dst gives me, which might help:

[image: dst output]

I am using this to call the method which contains the code:

quadSegmentation(Img, bw, dst, quad);

Can anyone tell me why I am getting so many green lines compared to the tutorial, and how I might fix this and properly crop the image so that it contains only the card?

For the perspective transform you need:

source points -> coordinates of the quadrangle vertices in the source image.

destination points -> coordinates of the corresponding quadrangle vertices in the destination image.

Here we will calculate these points with contour processing.

Calculate Coordinates of quadrangle vertices in the source image

  • You can get the card as a contour just by blurring, thresholding, finding the contours, and picking the largest one.
  • After finding the largest contour, approximate it with a polygonal curve; you should get 4 points that represent the corners of your card. You can adjust the epsilon parameter until you get exactly 4 coordinates (see the sketch just after this list).
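
A minimal sketch of that epsilon adjustment, assuming the contours and largest_contour_index variables from the full code further down; the loop bounds are illustrative, not tuned values:

 // Sketch: grow epsilon (as a fraction of the contour perimeter) until
 // approxPolyDP returns exactly 4 corners for the card contour.
 vector<Point> corners;
 double peri = arcLength(contours[largest_contour_index], true);
 for (double eps = 0.01; eps <= 0.10; eps += 0.01) {
     approxPolyDP(contours[largest_contour_index], corners, eps * peri, true);
     if (corners.size() == 4)
         break;   // found a 4-point approximation of the card
 }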

[image: the 4 detected corner points]

Calculate Coordinates of the corresponding quadrangle vertices in the destination image

  • These can easily be found by calculating the bounding rectangle of the largest contour.

[image: bounding rectangle of the largest contour]

In the image below the red rectangle represents the source points and the green one the destination points.

[image: source points (red) and destination points (green)]

Adjust the coordinate order and apply the perspective transform (a sketch of an explicit corner-ordering step follows below).
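
The code below fills quad_pts using a fixed index order (0, 1, 3, 2) from approxPolyDP, which happens to match this image but is not guaranteed for an arbitrary photo. A minimal sketch of one explicit ordering step; the helper name orderCorners is an assumption, not part of the original answer, and it needs <algorithm>:

 // Hypothetical helper: order 4 corners as top-left, bottom-left, top-right,
 // bottom-right, i.e. the same order as the destination points (squre_pts) below.
 vector<Point2f> orderCorners(const vector<Point>& pts)
 {
     vector<Point2f> p;
     for (const Point& pt : pts)
         p.push_back(Point2f((float)pt.x, (float)pt.y));
     // Sort by y so p[0], p[1] are the top points and p[2], p[3] the bottom ones.
     sort(p.begin(), p.end(),
          [](const Point2f& a, const Point2f& b){ return a.y < b.y; });
     Point2f tl = p[0].x < p[1].x ? p[0] : p[1];
     Point2f tr = p[0].x < p[1].x ? p[1] : p[0];
     Point2f bl = p[2].x < p[3].x ? p[2] : p[3];
     Point2f br = p[2].x < p[3].x ? p[3] : p[2];
     return { tl, bl, tr, br };
 }

With such a helper, quad_pts could be built as orderCorners(contours_poly[0]) instead of relying on the raw approxPolyDP index order.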

See the final result:

[image: final perspective-corrected result]

Code

 Mat src=imread("card.jpg");
 Mat thr;
 cvtColor(src,thr,CV_BGR2GRAY);
 threshold( thr, thr, 70, 255,CV_THRESH_BINARY );

 vector< vector <Point> > contours; // Vector for storing contour
 vector< Vec4i > hierarchy;
 int largest_contour_index=0;
 int largest_area=0;

 Mat dst(src.rows,src.cols,CV_8UC1,Scalar::all(0)); //create destination image
 findContours( thr.clone(), contours, hierarchy,CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image
 for( int i = 0; i< contours.size(); i++ ){
    double a=contourArea( contours[i],false);  //  Find the area of contour
    if(a>largest_area){
    largest_area=a;
    largest_contour_index=i;                //Store the index of largest contour
    }
 }

 drawContours( dst,contours, largest_contour_index, Scalar(255,255,255),CV_FILLED, 8, hierarchy );
 vector<vector<Point> > contours_poly(1);
 approxPolyDP( Mat(contours[largest_contour_index]), contours_poly[0],5, true );
 Rect boundRect=boundingRect(contours[largest_contour_index]);
 if(contours_poly[0].size()==4){
    std::vector<Point2f> quad_pts;
    std::vector<Point2f> squre_pts;
    quad_pts.push_back(Point2f(contours_poly[0][0].x,contours_poly[0][0].y));
    quad_pts.push_back(Point2f(contours_poly[0][1].x,contours_poly[0][1].y));
    quad_pts.push_back(Point2f(contours_poly[0][3].x,contours_poly[0][3].y));
    quad_pts.push_back(Point2f(contours_poly[0][2].x,contours_poly[0][2].y));
    squre_pts.push_back(Point2f(boundRect.x,boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x,boundRect.y+boundRect.height));
    squre_pts.push_back(Point2f(boundRect.x+boundRect.width,boundRect.y));
    squre_pts.push_back(Point2f(boundRect.x+boundRect.width,boundRect.y+boundRect.height));

    Mat transmtx = getPerspectiveTransform(quad_pts,squre_pts);
    Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
    warpPerspective(src, transformed, transmtx, src.size());
    Point P1=contours_poly[0][0];
    Point P2=contours_poly[0][1];
    Point P3=contours_poly[0][2];
    Point P4=contours_poly[0][3];


    line(src,P1,P2, Scalar(0,0,255),1,CV_AA,0);
    line(src,P2,P3, Scalar(0,0,255),1,CV_AA,0);
    line(src,P3,P4, Scalar(0,0,255),1,CV_AA,0);
    line(src,P4,P1, Scalar(0,0,255),1,CV_AA,0);
    rectangle(src,boundRect,Scalar(0,255,0),1,8,0);
    rectangle(transformed,boundRect,Scalar(0,255,0),1,8,0);

    imshow("quadrilateral", transformed);
    imshow("thr",thr);
    imshow("dst",dst);
    imshow("src",src);
    imwrite("result1.jpg",dst);
    imwrite("result2.jpg",src);
    imwrite("result3.jpg",transformed);
    waitKey();
   }
   else
    cout<<"Make sure that your are getting 4 corner using approxPolyDP..."<<endl;

This typically happens when you rely on somebody else's code to solve your particular problem instead of adapting the code. Look at the processing stages and at the differences between their image and yours (by the way, it is a good idea to start with their image and make sure the code works):

  1. Get the edge map. - will probably work, since your edges are fine
  2. Detect lines with the Hough transform. - fails, since you have lines not only on the contour but also inside the card, so expect a lot of false-alarm lines
  3. Get the corners by finding intersections between lines. - fails for the reason above
  4. Check if the approximate polygonal curve has 4 vertices. - fails
  5. Determine the top-left, bottom-left, top-right, and bottom-right corners. - fails
  6. Apply the perspective transformation. - fails completely

To fix your problem you have to ensure that only lines on the periphery are extracted. If you always have a dark background, you can use this fact to discard the lines with other contrasts/polarities. Alternatively, you can extract all the lines and then select the ones that are closest to the image boundary (if your background doesn't have lines); a rough sketch of that filtering is shown below.
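
A minimal sketch of that second idea, assuming "closest to the image boundary" is read as a fixed-margin test; the HoughLinesP parameters, the margin value, and the edges variable (a Canny edge map of the photo) are illustrative assumptions, not tuned for this image:

 // Sketch: detect line segments on an edge map, then keep only those whose
 // endpoints both lie within `margin` pixels of the image border.
 vector<Vec4i> lines, border_lines;
 HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 50, 10);

 const int margin = 40; // assumed distance threshold, adjust for your images
 auto nearBorder = [&](int x, int y) {
     return x < margin || y < margin ||
            x > edges.cols - margin || y > edges.rows - margin;
 };
 for (const Vec4i& l : lines) {
     if (nearBorder(l[0], l[1]) && nearBorder(l[2], l[3]))
         border_lines.push_back(l); // the segment hugs the image boundary
 }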
