glGenTextures works but returns the same texture name every single time

I have a class called Texture, which is responsible for managing a texture. At program startup the OpenGL context is initialised properly (which is what makes this question different from most involving unexpected glGenTextures behaviour). Texture gets a texture name from glGenTextures() and then loads and binds image data into memory for that name in the function texGLInit(). This works, and the texture is displayed exactly as expected.

However, I also want my Texture to be able to change the texture it displays when a user clicks a button and selects a new image from the HDD in an OpenFileDialog. The function for this is called reloadTexture(). It deletes the old texture name with glDeleteTextures and clears the old image/pixel data, then gets a new texture name and loads the new pixel data into memory via texGLInit(). But the new texture name is exactly the same as the one before (usually '1') 100% of the time.

The image displayed after this is odd. It has the new image's dimensions, but still the old image's pixels. Put simply, it distorts the old image down to the new image's size; it's still drawing from the supposedly deleted pixel data. What should happen is that the screen now displays the newly selected image. I believe this has something to do with the texture name not being unique.
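As an aside, the name reuse by itself is legal: glGenTextures only guarantees that it won't return names currently in use, and glDeleteTextures marks a name as unused again. A minimal sketch of that (expected) behaviour:

    GLuint id;
    glGenTextures(1, &id);     // suppose this returns 1
    glDeleteTextures(1, &id);  // the name 1 is now unused again
    glGenTextures(1, &id);     // may legitimately return 1 a second time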

The code is included below:

Texture::Texture(string filename)//---Constructor loads in the initial image. Works fine!
{
    textureID[0]=0;

    const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
    //printf(fnPtr);

    lodepng::load_file(buffer, fnPtr);//load the file into a buffer

    unsigned error = lodepng::decode(image,w,h,buffer);//lodepng's decode function will load the pixel data into image vector from the buffer
    //display any errors with the texture
    if(error)
    {
        cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) <<endl;
    }
    //execute the code that'll throw exceptions to do with the images size
    checkPOT(w);
    checkPOT(h);


    //image now contains our pixeldata. All ready for OpenGL to do its thing

    //let's get this texture up in the video memory
    texGLInit();

    Draw_From_Corner = CENTER;  
}

void Texture::reloadTexture(string filename)//Reload texture replaces the texture name and image/pixel data bound to this texture
{
    //first and foremost clear the image and buffer vectors back down to nothing so we can start afresh 
    buffer.clear();
    image.clear();
    w = 0;
    h = 0;
    //also delete the texture name we were using before
    glDeleteTextures(1, &textureID[0]);




    const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
    //printf(fnPtr);

    lodepng::load_file(buffer, fnPtr);//load the file into a buffer

    unsigned error = lodepng::decode(image,w,h,buffer);//lodepng's decode function will load the pixel data into image vector from the buffer
    //display any errors with the texture
    if(error)
    {
        cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) <<endl;
    }
    //execute the code that'll throw exceptions to do with the images size
    checkPOT(w);
    checkPOT(h);

    //image now contains our pixeldata. All ready for OpenGL to do its thing

    //let's get this texture up in the video memory
    texGLInit();

    Draw_From_Corner = CENTER;
}

void Texture::texGLInit()//Actually gets the new texture name and loads the pixel data into OpenGL
{
    glGenTextures(1, &textureID[0]);
    ////printf("\ntextureID = %u", textureID[0]);
    glBindTexture(GL_TEXTURE_2D, textureID[0]);//evrything we're about to do is about this texture
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    //glDisable(GL_COLOR_MATERIAL);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,w,h,0, GL_RGBA, GL_UNSIGNED_BYTE, &image[0]);
    //we COULD free the image vector's memory right about now, but we do it at the start of reloadTexture() instead, so it happens only when it's actually needed.



}

For what it's worth, here is the draw function in the Texture class.

void Texture::draw(point* origin, ANCHOR drawFrom)
{
    //let us set the DFC enum here.
    Draw_From_Corner = drawFrom;

    glEnable(GL_TEXTURE_2D);
    //printf("\nDrawing texture at (%f, %f)",centerPoint.x, centerPoint.y);
    glBindTexture(GL_TEXTURE_2D, textureID[0]);//bind the texture
    //create a quick vertex array for the primitive we're going to bind the texture to
    ////printf("TexID = %u",textureID[0]);
    GLfloat vArray[8];

#pragma region anchor switch
    switch (Draw_From_Corner)
    {
    case CENTER:

        vArray[0] = origin->x-(w/2); vArray[1] = origin->y-(h/2);//bottom left i0
        vArray[2] = origin->x-(w/2); vArray[3] = origin->y+(h/2);//top left i1
        vArray[4] = origin->x+(w/2); vArray[5] = origin->y+(h/2);//top right i2
        vArray[6] = origin->x+(w/2); vArray[7] = origin->y-(h/2);//bottom right i3
        break;

    case BOTTOMLEFT:

        vArray[0] = origin->x; vArray[1] = origin->y;//bottom left i0
        vArray[2] = origin->x; vArray[3] = origin->y+h;//top left i1
        vArray[4] = origin->x+w; vArray[5] = origin->y+h;//top right i2
        vArray[6] = origin->x+w; vArray[7] = origin->y;//bottom right i3

        break;

    case TOPLEFT:

        vArray[0] = origin->x; vArray[1] = origin->y-h;//bottom left i0
        vArray[2] = origin->x; vArray[3] = origin->y;//top left i1
        vArray[4] = origin->x+w; vArray[5] = origin->y;//top right i2
        vArray[6] = origin->x+w; vArray[7] = origin->y-h;//bottom right i3

        break;

    case TOPRIGHT:

        vArray[0] = origin->x-w; vArray[1] = origin->y-h;//bottom left i0
        vArray[2] = origin->x-w; vArray[3] = origin->y;//top left i1
        vArray[4] = origin->x; vArray[5] = origin->y;//top right i2
        vArray[6] = origin->x; vArray[7] = origin->y-h;//bottom right i3

        break;

    case BOTTOMRIGHT:

        vArray[0] = origin->x-w; vArray[1] = origin->y;//bottom left i0
        vArray[2] = origin->x-w; vArray[3] = origin->y+h;//top left i1
        vArray[4] = origin->x; vArray[5] = origin->y+h;//top right i2
        vArray[6] = origin->x; vArray[7] = origin->y;//bottom right i3

        break;

    default: //same as center

        vArray[0] = origin->x-(w/2); vArray[1] = origin->y-(h/2);//bottom left i0
        vArray[2] = origin->x-(w/2); vArray[3] = origin->y+(h/2);//top left i1
        vArray[4] = origin->x+(w/2); vArray[5] = origin->y+(h/2);//top right i2
        vArray[6] = origin->x+(w/2); vArray[7] = origin->y-(h/2);//bottom right i3

        break;


    }
#pragma  endregion          

    //create a quick texture array (we COULD create this on the heap rather than creating/destroying it every cycle)
    GLfloat tArray[8] = 
    {
        //this has been tinkered with from my normal order. LodePNG seems to decode the pixel data upside down, so flipping the V coordinates here was a quick fix.
        0.0f,1.0f,//0
        0.0f,0.0f,//1
        1.0f,0.0f,//2
        1.0f,1.0f//3
    };

    //and finally.. the index array...remember, we draw in triangles....(and we'll go CW)
    GLubyte iArray[6] =
    {
        0,1,2,
        0,2,3
    };

    //Activate arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    //Give openGL a pointer to our vArray and tArray
    glVertexPointer(2, GL_FLOAT, 0, &vArray[0]);
    glTexCoordPointer(2, GL_FLOAT, 0, &tArray[0]);

    //Draw it all
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, &iArray[0]);

    //glDrawArrays(GL_TRIANGLES,0,6);

    //Disable the vertex arrays
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_2D);
}

Could anybody tell me why OpenGL would not be loading and drawing from the new pixel data that I've loaded into it? Like I said, I suspect it's something to do with glGenTextures not giving me a new texture name.

You're calling glTexImage2D and passing a pointer to client memory. Beware, the documentation says:

If a non-zero named buffer object is bound to the GL_PIXEL_UNPACK_BUFFER target (see glBindBuffer ) while a texture image is specified, data is treated as a byte offset into the buffer object's data store.

You may wish to call glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0) to unbind any buffer object just to be safe.
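A minimal sketch of how that unbind could be slotted into texGLInit() (assuming pixel buffer objects are available, i.e. an OpenGL 2.1+ context or the ARB_pixel_buffer_object extension):

    void Texture::texGLInit()
    {
        //make sure no pixel-unpack buffer is bound, so that glTexImage2D
        //reads from client memory (&image[0]) rather than treating the
        //pointer as a byte offset into a bound PBO
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

        glGenTextures(1, &textureID[0]);
        glBindTexture(GL_TEXTURE_2D, textureID[0]);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, &image[0]);
    }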
