
iOS detect rectangles from camera with openCV

I'm trying to detect the edges of a business card (and draw them) with an iPhone camera, using openCV. I'm new to this framework, as well as to computer vision and C++.

I'm trying to use the solution explained here: https://stackoverflow.com/a/14123682/3708095 , whose GitHub project is https://github.com/foundry/OpenCVSquares

It works with a predefined image, but I'm trying to get it working with the camera.

To do so, I'm using the CvVideoCameraDelegate protocol, implementing it in CVViewController.mm as explained in http://docs.opencv.org/doc/tutorials/ios/video_processing/video_processing.html , like this:

#ifdef __cplusplus
- (void)processImage:(cv::Mat &)matImage
{
    // Run square detection off the main thread, then update the UI on the main queue.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{

        matImage = CVSquares::detectedSquaresInImage(matImage, self.tolerance, self.threshold, self.levels, [self accuracy]);

        UIImage *image = [[UIImage alloc] initWithCVMat:matImage orientation:UIImageOrientationDown];

        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = image;
        });
    });
}
#endif

EDIT:

If I do it like this, it gives me an EXC_BAD_ACCESS...

If I clone matImage before processing it, logging suggests it processes the image and even finds rectangles, but the rectangle is not drawn on the image shown in the imageView.

cv::Mat temp = matImage.clone();

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{

    UIImage *image = [[UIImage alloc] initWithCVMat:CVSquares::detectedSquaresInImage(temp, self.tolerance, self.threshold, self.levels, [self accuracy])
                                        orientation:UIImageOrientationDown];

    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;
    });
});

I'm pretty sure I'm missing something, probably because I'm not correctly passing some object (or pointer to an object), so the object I need modified isn't being modified.

Anyway, if this is not the right approach, I would really appreciate a tutorial or example where they do something like this, using either openCV or GPUImage (I'm not familiar with that either)...

So the solution was actually pretty simple...

Instead of trying to use matImage to set imageView.image, it just needed to be modified in place, since the CvVideoCamera was already initialized with (and linked to) the imageView:

self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];

Finally, the function looked like this:

#ifdef __cplusplus
// Modify matImage in place: CvVideoCamera was initialized with the
// imageView as its parent view, so it renders the modified frame itself.
- (void)processImage:(cv::Mat &)matImage
{
    matImage = CVSquares::detectedSquaresInImage(matImage, self.angleTolerance, self.threshold, self.levels, self.accuracy);
}
#endif
