
OpenGL integer texture raising GL_INVALID_VALUE

I have an Nvidia GTX 970 with the latest (441.66) driver for Win 10 x64 (build 18362), which is obviously fully OpenGL 4.6 compliant, and I'm currently compiling the app with VS2017.

My problem is that I seem to be unable to use any texture type other than GL_UNSIGNED_BYTE. I'm currently trying to set up a single-channel, unsigned integer (32-bit) texture, but however I try to allocate it, OpenGL immediately raises a GL_INVALID_VALUE error and the shader's result turns all black.

So far I tried allocating immutably: glTexStorage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048);

And mutably: glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, textureData);

I tried signed int too, same thing. I also checked with Nsight VS Edition: for UINT 2D textures my max resolution is 16384x16384, so that's not the issue. Also, according to Nsight, uint textures are fully supported by the OpenGL driver.

What am I missing here?

Minimum reproducible version:

#include <iostream>
#include <GL/glew.h>
#include <GL/glu.h>   // needed for gluErrorString
#include <SDL.h>

void OGLErrorCheck()
{
    const GLenum errorCode = glGetError();
    if (errorCode != GL_NO_ERROR)
    {
        const GLubyte* const errorString = gluErrorString(errorCode);
        std::cout << errorString;
    }
}

int main(int argc, char* argv[])
{    
    glewInit();
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, nullptr);
    OGLErrorCheck();
    getchar();
    return 0;
}

This yields GL_INVALID_OPERATION. This is linked with the latest SDL and GLEW, both free software, available for download at https://www.libsdl.org/ and http://glew.sourceforge.net/ respectively.

From the specs about glTexStorage2D:

 void glTexStorage2D( GLenum target, GLsizei levels, GLenum internalformat, GLsizei width, GLsizei height );

[…]
GL_INVALID_VALUE is generated if width, height or levels are less than 1.

And the value for levels you pass to glTexStorage2D is 0 .
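
Given that, the immutable allocation needs at least one mipmap level; a minimal corrected sketch (single level, same dimensions as in the question):

glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, 3072, 2048);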

First of all, you have to create an OpenGL context, e.g.:
(See also Using OpenGL With SDL.)

if (SDL_Init(SDL_INIT_VIDEO) < 0)
    return 0;

SDL_Window *window = SDL_CreateWindow("ogl wnd", 0, 0, width, height, SDL_WINDOW_OPENGL);
if (window == nullptr)
    return 0;
SDL_GLContext context = SDL_GL_CreateContext(window);

if (glewInit() != GLEW_OK)
    return 0;
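
Optionally, you can ask SDL for a specific context version and profile before creating the window; a sketch assuming SDL2's attribute API:

SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 6);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);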

Then you have to generate a texture name by glGenTextures:

GLuint tobj;
glGenTextures(1, &tobj);

After that, you have to bind the named texture to a texturing target by glBindTexture:

glBindTexture(GL_TEXTURE_2D, tobj);

Finally, you can specify the two-dimensional texture image by glTexImage2D:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);

Note that the texture format has to be GL_RED_INTEGER rather than GL_RED, because the source texture image has to be interpreted as integral data rather than normalized floating-point data. The format and type parameters specify the format of the source data; the internalformat parameter specifies the format of the target texture image.

Set the texture parameters by glTexParameteri (this can be done before glTexImage2D, too):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

If you do not generate mipmaps (by glGenerateMipmap), then setting GL_TEXTURE_MIN_FILTER is important. Since the default minifying filter is GL_NEAREST_MIPMAP_LINEAR, the texture would be mipmap incomplete if you do not change the minifying function to GL_NEAREST or GL_LINEAR.
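
Putting the steps together, a minimal sketch of the texture setup (assuming the SDL window and context from above already exist, and no mipmaps are wanted):

GLuint tobj;
glGenTextures(1, &tobj);
glBindTexture(GL_TEXTURE_2D, tobj);

// without mipmaps, the minifying filter must not be a mipmap filter,
// otherwise the texture is mipmap incomplete
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// GL_R32UI internal format, with GL_RED_INTEGER/GL_UNSIGNED_INT describing the source data
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);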

And mutably: glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, textureData);

I tried signed int too, same thing. I also checked with Nsight VS Edition: for UINT 2D textures my max resolution is 16384x16384, so that's not the issue. Also, according to Nsight, uint textures are fully supported by the OpenGL driver.

What am I missing here?

For unnormalized integer texture formats, the format parameter of glTex[Sub]Image is not allowed to be just GL_RED; you have to use GL_RED_INTEGER. The format/type combination GL_RED, GL_UNSIGNED_INT is only for specifying normalized fixed-point or floating-point formats.
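
So the mutable allocation from the question would become (a sketch keeping the question's dimensions and textureData pointer):

glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, textureData);

In the GLSL shader, such a texture is then sampled through a usampler2D rather than a sampler2D.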
