glGenTextures works but returns the same texture name every single time
I have a class called Texture which is responsible for managing textures. At program startup, the OpenGL context is already correctly initialized (which sets this question apart from most questions involving unexpected glGenTextures behavior). Texture obtains a texture name from glGenTextures(), then loads the image data into memory and binds it to that texture name, using the texGLInit() function. This works fine, and the texture displays as expected.
However, I also want my Texture to be able to change the displayed texture when the user clicks a button and picks a file from the HDD via an OpenFileDialog. The function for this is called reloadTexture(). This function tries to delete the old image/pixel data from memory and replace it with new data from the file the user selected. While doing so, it deletes the old texture name with glDeleteTextures, then allocates a new texture name and loads the new pixel data into memory with the texGLInit() function. But the new texture name is exactly the same as the old one, 100% of the time (usually "1").
The image displayed afterwards is strange. It has the new image's dimensions, but still shows the old image's pixels. In short, it distorts the old image to the new image's dimensions. It is still drawing with the supposedly deleted pixel data. What should happen is that the new image file is now displayed on screen. I believe this has something to do with the texture name not being unique.
The code is included below:
Texture::Texture(string filename)//---Constructor loads in the initial image. Works fine!
{
    textureID[0] = 0;
    const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
    //printf(fnPtr);
    lodepng::load_file(buffer, fnPtr);//load the file into a buffer
    unsigned error = lodepng::decode(image, w, h, buffer);//lodepng's decode function will load the pixel data into the image vector from the buffer
    //display any errors with the texture
    if(error)
    {
        cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) << endl;
    }
    //execute the code that'll throw exceptions to do with the image's size
    checkPOT(w);
    checkPOT(h);
    //image now contains our pixel data. All ready for OpenGL to do its thing
    //let's get this texture up in the video memory
    texGLInit();
    Draw_From_Corner = CENTER;
}
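checkPOT() itself is not shown in the question; judging by the comment, it throws when a dimension is not a power of two (a requirement for textures on older OpenGL implementations). A minimal sketch of such a check, with the function name and throwing behavior assumed from the comment above, might look like:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical sketch of checkPOT(): throws if the dimension is not a
// power of two. The test n && !(n & (n - 1)) is true exactly for
// 1, 2, 4, 8, 16, ...
void checkPOT(unsigned n)
{
    if (n == 0 || (n & (n - 1)) != 0)
        throw std::runtime_error("texture dimension " + std::to_string(n) +
                                 " is not a power of two");
}
```

Pre-2.0 OpenGL (without ARB_texture_non_power_of_two) rejects NPOT textures, so failing fast at load time like this is reasonable.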
void Texture::reloadTexture(string filename)//Reload texture replaces the texture name and image/pixel data bound to this texture
{
    //first and foremost clear the image and buffer vectors back down to nothing so we can start afresh
    buffer.clear();
    image.clear();
    w = 0;
    h = 0;
    //also delete the texture name we were using before
    glDeleteTextures(1, &textureID[0]);
    const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
    //printf(fnPtr);
    lodepng::load_file(buffer, fnPtr);//load the file into a buffer
    unsigned error = lodepng::decode(image, w, h, buffer);//lodepng's decode function will load the pixel data into the image vector from the buffer
    //display any errors with the texture
    if(error)
    {
        cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) << endl;
    }
    //execute the code that'll throw exceptions to do with the image's size
    checkPOT(w);
    checkPOT(h);
    //image now contains our pixel data. All ready for OpenGL to do its thing
    //let's get this texture up in the video memory
    texGLInit();
    Draw_From_Corner = CENTER;
}
void Texture::texGLInit()//Actually gets the new texture name and loads the pixel data into OpenGL
{
    glGenTextures(1, &textureID[0]);
    ////printf("\ntextureID = %u", textureID[0]);
    glBindTexture(GL_TEXTURE_2D, textureID[0]);//everything we're about to do is about this texture
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    //glDisable(GL_COLOR_MATERIAL);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, &image[0]);
    //we COULD free the image vector's memory right about now. But we'll do it when there's a need to. Doing it at the beginning of the reloadTexture func makes sure it happens when we need it to.
}
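With GL_UNPACK_ALIGNMENT set to 1 and a GL_RGBA/GL_UNSIGNED_BYTE upload, glTexImage2D reads exactly w*h*4 bytes starting at &image[0]. A small sanity check (a sketch; the helper name is hypothetical, the vector and dimension types mirror the members used above) can catch a mismatched decode before GL reads past the end of the vector:

```cpp
#include <cstddef>
#include <vector>

// Sketch: verify a decoded RGBA pixel buffer matches the reported
// dimensions before handing it to glTexImage2D. With an unpack
// alignment of 1 there is no row padding, so the expected size is
// exactly width * height * 4 bytes.
bool pixelBufferMatches(const std::vector<unsigned char>& image,
                        unsigned w, unsigned h)
{
    return image.size() == static_cast<std::size_t>(w) * h * 4;
}
```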
For what it's worth, here is the draw function in the Texture class.
void Texture::draw(point* origin, ANCHOR drawFrom)
{
    //let us set the DFC enum here.
    Draw_From_Corner = drawFrom;
    glEnable(GL_TEXTURE_2D);
    //printf("\nDrawing texture at (%f, %f)",centerPoint.x, centerPoint.y);
    glBindTexture(GL_TEXTURE_2D, textureID[0]);//bind the texture
    //create a quick vertex array for the primitive we're going to bind the texture to
    ////printf("TexID = %u",textureID[0]);
    GLfloat vArray[8];
#pragma region anchor switch
    switch (Draw_From_Corner)
    {
    case CENTER:
        vArray[0] = origin->x-(w/2); vArray[1] = origin->y-(h/2);//bottom left i0
        vArray[2] = origin->x-(w/2); vArray[3] = origin->y+(h/2);//top left i1
        vArray[4] = origin->x+(w/2); vArray[5] = origin->y+(h/2);//top right i2
        vArray[6] = origin->x+(w/2); vArray[7] = origin->y-(h/2);//bottom right i3
        break;
    case BOTTOMLEFT:
        vArray[0] = origin->x; vArray[1] = origin->y;//bottom left i0
        vArray[2] = origin->x; vArray[3] = origin->y+h;//top left i1
        vArray[4] = origin->x+w; vArray[5] = origin->y+h;//top right i2
        vArray[6] = origin->x+w; vArray[7] = origin->y;//bottom right i3
        break;
    case TOPLEFT:
        vArray[0] = origin->x; vArray[1] = origin->y-h;//bottom left i0
        vArray[2] = origin->x; vArray[3] = origin->y;//top left i1
        vArray[4] = origin->x+w; vArray[5] = origin->y;//top right i2
        vArray[6] = origin->x+w; vArray[7] = origin->y-h;//bottom right i3
        break;
    case TOPRIGHT:
        vArray[0] = origin->x-w; vArray[1] = origin->y-h;//bottom left i0
        vArray[2] = origin->x-w; vArray[3] = origin->y;//top left i1
        vArray[4] = origin->x; vArray[5] = origin->y;//top right i2
        vArray[6] = origin->x; vArray[7] = origin->y-h;//bottom right i3
        break;
    case BOTTOMRIGHT:
        vArray[0] = origin->x-w; vArray[1] = origin->y;//bottom left i0
        vArray[2] = origin->x-w; vArray[3] = origin->y+h;//top left i1
        vArray[4] = origin->x; vArray[5] = origin->y+h;//top right i2 (was origin->x-h, origin->y: a copy/paste slip)
        vArray[6] = origin->x; vArray[7] = origin->y;//bottom right i3
        break;
    default: //same as center
        vArray[0] = origin->x-(w/2); vArray[1] = origin->y-(h/2);//bottom left i0
        vArray[2] = origin->x-(w/2); vArray[3] = origin->y+(h/2);//top left i1
        vArray[4] = origin->x+(w/2); vArray[5] = origin->y+(h/2);//top right i2
        vArray[6] = origin->x+(w/2); vArray[7] = origin->y-(h/2);//bottom right i3
        break;
    }
#pragma endregion
    //create a quick texture array (we COULD create this on the heap rather than creating/destroying it every cycle)
    GLfloat tArray[8] =
    {
        //this has been tinkered with from my normal order. I think LodePNG is bringing the pixel data in upside down. SO A QUICK FIX HERE WAS NECESSARY.
        0.0f,1.0f,//0
        0.0f,0.0f,//1
        1.0f,0.0f,//2
        1.0f,1.0f//3
    };
    //and finally.. the index array...remember, we draw in triangles....(and we'll go CW)
    GLubyte iArray[6] =
    {
        0,1,2,
        0,2,3
    };
    //Activate arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    //Give OpenGL a pointer to our vArray and tArray
    glVertexPointer(2, GL_FLOAT, 0, &vArray[0]);
    glTexCoordPointer(2, GL_FLOAT, 0, &tArray[0]);
    //Draw it all
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, &iArray[0]);
    //glDrawArrays(GL_TRIANGLES,0,6);
    //Disable the vertex arrays
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_2D);
}
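The comment in tArray above flips the V coordinates because LodePNG decodes rows top-to-bottom, while OpenGL's default texture origin is bottom-left. An alternative is to flip the pixel rows once at load time and keep the usual texture coordinates; a sketch (a free function, RGBA layout assumed, not part of the original class):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: flip an RGBA pixel buffer vertically in place, so row 0
// becomes the bottom row. Swaps whole rows from the outside in.
void flipRowsVertically(std::vector<unsigned char>& image,
                        unsigned w, unsigned h)
{
    if (h < 2) return; // nothing to flip
    const std::size_t rowBytes = static_cast<std::size_t>(w) * 4;
    for (unsigned top = 0, bottom = h - 1; top < bottom; ++top, --bottom)
        std::swap_ranges(image.begin() + top * rowBytes,
                         image.begin() + top * rowBytes + rowBytes,
                         image.begin() + bottom * rowBytes);
}
```

Called once after lodepng::decode, this would let tArray use the conventional ordering (0,0 at the bottom-left vertex).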
Can anyone tell me why OpenGL won't load and draw from the new pixel data I have loaded into it? Like I said, I suspect it has something to do with glGenTextures not giving me a new texture name.
You are calling glTexImage2D and passing a pointer to client memory. Beware, the documentation says:

If a non-zero named buffer object is bound to the GL_PIXEL_UNPACK_BUFFER target (see glBindBuffer) while a texture image is specified, data is treated as a byte offset into the buffer object's data store.

You may want to call glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0) to unbind any buffer object first.