
OpenGL doesn't like OpenCV resize

I'd like to use an OpenCV Mat's data as an OpenGL texture. I'm developing a Qt 4.8 application extending QGLWidget (going through QImage is something I don't really need). But something is wrong.

First the problem in screenshots, then the code I'm using.

If I don't resize the cv::Mat (grabbed from a video), everything is OK. If I scale it to half its size (scaleFactor = 2), everything is OK. If the scale factor is 2.8 or 2.9, everything is OK. But at some scale factors it gets buggy.

Here are the screenshots, with a nice red background to make the OpenGL quad dimensions visible:

scaleFactor = 2 [screenshot]

scaleFactor = 2.8 [screenshot]

scaleFactor = 3 [screenshot]

scaleFactor = 3.2 [screenshot]

Now the code of the paint method. I found the code for copying the cv::Mat data into the GL texture in this nice blog post.

void VideoViewer::paintGL()
{
    glClear (GL_COLOR_BUFFER_BIT);
    glClearColor (1.0, 0.0, 0.0, 1.0);

    glEnable(GL_BLEND);

    // Use a simple blendfunc for drawing the background
    glBlendFunc(GL_ONE, GL_ZERO);

    if (!cvFrame.empty()) {
        glEnable(GL_TEXTURE_2D);

        GLuint tex = matToTexture(cvFrame);
        glBindTexture(GL_TEXTURE_2D, tex);

        glBegin(GL_QUADS);
        glTexCoord2f(1, 1); glVertex2f(0, cvFrame.size().height);
        glTexCoord2f(1, 0); glVertex2f(0, 0);
        glTexCoord2f(0, 0); glVertex2f(cvFrame.size().width, 0);
        glTexCoord2f(0, 1); glVertex2f(cvFrame.size().width, cvFrame.size().height);
        glEnd();

        glDeleteTextures(1, &tex);
        glDisable(GL_TEXTURE_2D);

        glFlush();
    }
}

GLuint VideoViewer::matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
{
    // http://r3dux.org/2012/01/how-to-convert-an-opencv-cvmat-to-an-opengl-texture/

    // Generate a number for our textureID's unique handle
    GLuint textureID;
    glGenTextures(1, &textureID);

    // Bind to our texture handle
    glBindTexture(GL_TEXTURE_2D, textureID);

    // Catch silly-mistake texture interpolation method for magnification
    if (magFilter == GL_LINEAR_MIPMAP_LINEAR  ||
        magFilter == GL_LINEAR_MIPMAP_NEAREST ||
        magFilter == GL_NEAREST_MIPMAP_LINEAR ||
        magFilter == GL_NEAREST_MIPMAP_NEAREST)
    {
        std::cout << "VideoViewer::matToTexture > "
                  << "You can't use MIPMAPs for magnification - setting filter to GL_LINEAR"
                  << std::endl;
        magFilter = GL_LINEAR;
    }

    // Set texture interpolation methods for minification and magnification
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);

    // Set texture clamping method
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);

    // Set incoming texture format to:
    // GL_BGR       for CV_CAP_OPENNI_BGR_IMAGE,
    // GL_LUMINANCE for CV_CAP_OPENNI_DISPARITY_MAP,
    // Work out other mappings as required ( there's a list in comments in main() )
    GLenum inputColourFormat = GL_BGR;
    if (mat.channels() == 1)
    {
        inputColourFormat = GL_LUMINANCE;
    }

    // Create the texture
    glTexImage2D(GL_TEXTURE_2D,     // Type of texture
                 0,                 // Pyramid level (for mip-mapping) - 0 is the top level
                 GL_RGB,            // Internal colour format to convert to
                 mat.cols,          // Image width  i.e. 640 for Kinect in standard mode
                 mat.rows,          // Image height i.e. 480 for Kinect in standard mode
                 0,                 // Border width in pixels (can either be 1 or 0)
                 inputColourFormat, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
                 GL_UNSIGNED_BYTE,  // Image data type
                 mat.ptr());        // The actual image data itself

    return textureID;
}

And here is how the cv::Mat is loaded and scaled:

void VideoViewer::retriveScaledFrame()
{
    video >> cvFrame;

    cv::Size s = cv::Size(cvFrame.size().width/scaleFactor, cvFrame.size().height/scaleFactor);
    cv::resize(cvFrame, cvFrame, s);
}

Sometimes the image is rendered correctly, sometimes not. Why? For sure there is some mismatch in how pixels are stored between OpenCV and OpenGL, but how do I resolve it? And why is it sometimes OK and sometimes not?

Yes, it was a problem with how pixels are stored in memory. OpenCV and OpenGL can lay out pixel rows in different ways, and I had to understand better how this works.

In OpenGL you can specify these parameters with glPixelStorei and GL_UNPACK_ALIGNMENT, GL_UNPACK_ROW_LENGTH.

A nice answer about this can be found here.
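As far as I understand it, the key point is that OpenGL's default GL_UNPACK_ALIGNMENT is 4: the upload only looks right when the byte length of each row of the resized frame happens to be a multiple of 4, and at other widths the rows are read with padding that isn't there, which skews the image. Below is a minimal sketch (not the exact code I ended up with) of how the unpack state could be set just before the glTexImage2D call in matToTexture:

// A cv::Mat row is mat.step bytes long; tell OpenGL exactly that instead of
// letting it assume rows are padded to 4-byte boundaries.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(mat.step / mat.elemSize()));

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
             mat.cols, mat.rows, 0,
             inputColourFormat, GL_UNSIGNED_BYTE, mat.ptr());

// Restore the defaults so later uploads are not affected
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);

For a continuous cv::Mat (which cv::resize produces here) the GL_UNPACK_ALIGNMENT of 1 is the part that matters; GL_UNPACK_ROW_LENGTH becomes useful when the Mat has row padding, for example when it is a ROI of a larger image.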
