
Understanding OpenCV's undistort function

I'm looking to undistort an image using the distortion coefficients that I've computed for my camera, without changing the camera matrix. This is exactly what undistort() does, but I wanted to draw the output to a larger canvas image.

When I tried this:

Mat drawtransform = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, size, 1.0, size * 2);
undistort(inputimage, undistorted, cameraMatrix, distCoeffs, drawtransform);

It still wrote out the same sized image, but only the top left quarter of the scaled-up-by-two undistorted result. Like the documentation says, undistort writes into a target image of the same size as the source.

It's pretty obvious that I can just go copy out and reimplement a slightly tweaked version of undistort(), but I am having some trouble understanding what it is doing. Here's the source:

void cv::undistort( InputArray _src, OutputArray _dst, InputArray _cameraMatrix,
                    InputArray _distCoeffs, InputArray _newCameraMatrix )
{
    Mat src = _src.getMat(), cameraMatrix = _cameraMatrix.getMat();
    Mat distCoeffs = _distCoeffs.getMat(), newCameraMatrix = _newCameraMatrix.getMat();

    _dst.create( src.size(), src.type() );
    Mat dst = _dst.getMat();

    CV_Assert( dst.data != src.data );

    int stripe_size0 = std::min(std::max(1, (1 << 12) / std::max(src.cols, 1)), src.rows);
    Mat map1(stripe_size0, src.cols, CV_16SC2), map2(stripe_size0, src.cols, CV_16UC1);

    Mat_<double> A, Ar, I = Mat_<double>::eye(3,3);

    cameraMatrix.convertTo(A, CV_64F);
    if( distCoeffs.data )
        distCoeffs = Mat_<double>(distCoeffs);
    else
    {
        distCoeffs.create(5, 1, CV_64F);
        distCoeffs = 0.;
    }

    if( newCameraMatrix.data )
        newCameraMatrix.convertTo(Ar, CV_64F);
    else
        A.copyTo(Ar);

    double v0 = Ar(1, 2);
    for( int y = 0; y < src.rows; y += stripe_size0 )
    {
        int stripe_size = std::min( stripe_size0, src.rows - y );
        Ar(1, 2) = v0 - y;
        Mat map1_part = map1.rowRange(0, stripe_size),
            map2_part = map2.rowRange(0, stripe_size),
            dst_part = dst.rowRange(y, y + stripe_size);

        initUndistortRectifyMap( A, distCoeffs, I, Ar, Size(src.cols, stripe_size),
                                 map1_part.type(), map1_part, map2_part );
        remap( src, dst_part, map1_part, map2_part, INTER_LINEAR, BORDER_CONSTANT );
    }
}

About half of the lines here are for sanity checking and initializing input parameters. What I'm confused about is what's going on with map1 and map2. These names are sadly less descriptive than most. I must be missing some explanation; maybe it's tucked away in some introduction page, or under the doc for another function.

map1 is a two-channel signed short integer matrix (CV_16SC2) and map2 is an unsigned short integer matrix (CV_16UC1); both have src.cols columns and stripe_size0 = min(max(1, 4096/src.cols), src.rows) rows. The question is, why? What will these maps contain? What is the significance and purpose of this striping? What is the significance and purpose of the strange dimension of the stripes?
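For what it's worth, here is a minimal sketch of how one could inspect the maps directly by calling initUndistortRectifyMap with floating-point maps instead of the fixed-point pair undistort() uses internally (the intrinsics and distortion coefficients below are placeholders, not values from my calibration):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Placeholder intrinsics; substitute your own calibrated values.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        800, 0, 320,
        0, 800, 240,
        0, 0, 1);
    cv::Mat distCoeffs = (cv::Mat_<double>(5, 1) << -0.2, 0.05, 0, 0, 0);

    cv::Size size(640, 480);
    cv::Mat mapX, mapY;

    // With CV_32FC1 maps, map1 holds the source x coordinate and map2 the
    // source y coordinate for every destination pixel.
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                cameraMatrix, size, CV_32FC1, mapX, mapY);

    // Destination pixel (row 100, col 100) samples the source image here:
    std::cout << "dst(100,100) <- src("
              << mapX.at<float>(100, 100) << ", "
              << mapY.at<float>(100, 100) << ")\n";
    return 0;
}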

You might want to read the description for the function remap. The map represents the pixel X,Y location in the source image for every pixel in the destination image: map1_part is every X location in the source, and map2_part is every Y location in the source. (Strictly speaking, that clean split applies to the CV_32FC1 map format; with the fixed-point CV_16SC2/CV_16UC1 pair used here, map1 packs both integer coordinates and map2 holds indices into the interpolation tables.)
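As a toy illustration of that convention (not tied to undistortion at all), a pair of CV_32FC1 maps that simply mirrors an image horizontally could be built like this; the helper name is just illustrative:

#include <opencv2/opencv.hpp>

// Mirror `src` left-right with remap: destination pixel (x, y) is told to
// sample the source at (cols - 1 - x, y).
cv::Mat mirrorWithRemap(const cv::Mat& src)
{
    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);
    for (int y = 0; y < src.rows; ++y)
        for (int x = 0; x < src.cols; ++x)
        {
            mapX.at<float>(y, x) = static_cast<float>(src.cols - 1 - x);
            mapY.at<float>(y, x) = static_cast<float>(y);
        }

    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_CONSTANT);
    return dst;
}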

Without reading into it much, the striping could be a method of speeding up the transformation process; note that stripe_size0 is chosen so each strip's maps cover at most about 1 << 12 pixels, so the temporary maps stay small regardless of image size.

EDIT:

Also, if you are looking to just scale your image to a larger dimension, you could simply resize the output image.

double scaleX = 2.0;
double scaleY = 2.0;
cv::Mat undistortedScaled;

cv::resize(undistorted, undistortedScaled, cv::Size(0,0), scaleX, scaleY);

Use initUndistortRectifyMap to obtain the transformation at the scale you desire, then apply its output (the two matrices you mention) to remap.

The first map is used to transform the x coordinate at each pixel position; the second is used to transform the y coordinate.
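A rough sketch of that approach applied to the original problem (a canvas twice the input size; the function and variable names here are just illustrative, and the alpha parameter of getOptimalNewCameraMatrix may need tuning):

#include <opencv2/opencv.hpp>

// Undistort `inputimage` onto a canvas twice as large as the input, without
// modifying cameraMatrix or distCoeffs themselves.
cv::Mat undistortToLargerCanvas(const cv::Mat& inputimage,
                                const cv::Mat& cameraMatrix,
                                const cv::Mat& distCoeffs)
{
    cv::Size size = inputimage.size();
    cv::Size bigSize(size.width * 2, size.height * 2);

    // New camera matrix that places the undistorted result on the big canvas.
    cv::Mat newCameraMatrix = cv::getOptimalNewCameraMatrix(
        cameraMatrix, distCoeffs, size, 1.0, bigSize);

    // Build the maps at the *output* size, then remap in one call.
    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                newCameraMatrix, bigSize, CV_16SC2, map1, map2);

    cv::Mat undistorted;
    cv::remap(inputimage, undistorted, map1, map2,
              cv::INTER_LINEAR, cv::BORDER_CONSTANT);
    return undistorted;  // bigSize canvas
}

This is essentially what undistort() does internally, minus the _dst.create(src.size(), src.type()) call that forces the output to match the input size.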
