iPad texture loading differences (32-bit vs. 64-bit)
I'm working on a drawing application, and I've noticed a significant difference between the textures loaded on a 32-bit iPad and on a 64-bit iPad.

Here is the texture drawn on a 32-bit iPad:

Here is the texture drawn on a 64-bit iPad:

The 64-bit result is what I want, but it looks like it may be losing some data?

I create the default brush texture with this code:
UIGraphicsBeginImageContext(CGSizeMake(64, 64));
CGContextRef defBrushTextureContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(defBrushTextureContext);

size_t num_locations = 3;
CGFloat locations[3] = { 0.0, 0.8, 1.0 };
CGFloat components[12] = { 1.0, 1.0, 1.0, 1.0,
                           1.0, 1.0, 1.0, 1.0,
                           1.0, 1.0, 1.0, 0.0 };
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
CGGradientRef myGradient = CGGradientCreateWithColorComponents(myColorspace, components, locations, num_locations);

CGPoint myCentrePoint = CGPointMake(32, 32);
float myRadius = 20;

CGGradientDrawingOptions options = kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation;
CGContextDrawRadialGradient(UIGraphicsGetCurrentContext(), myGradient, myCentrePoint,
                            0, myCentrePoint, myRadius,
                            options);

CFRelease(myGradient);
CFRelease(myColorspace);
UIGraphicsPopContext();

[self setBrushTexture:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
Then the brush texture is actually set as follows:
-(void) setBrushTexture:(UIImage*)brushImage{
    // save our current texture.
    currentTexture = brushImage;

    // first, delete the old texture if needed
    if (brushTexture){
        glDeleteTextures(1, &brushTexture);
        brushTexture = 0;
    }

    // fetch the cgimage for us to draw into a texture
    CGImageRef brushCGImage = brushImage.CGImage;

    // Make sure the image exists
    if(brushCGImage) {
        // Get the width and height of the image
        GLint width = (GLint)CGImageGetWidth(brushCGImage);
        GLint height = (GLint)CGImageGetHeight(brushCGImage);

        // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
        // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.

        // Allocate memory needed for the bitmap context
        GLubyte* brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Use the bitmap creation function provided by the Core Graphics framework.
        CGContextRef brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushCGImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the image to the context.
        CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushCGImage);
        // You don't need the context at this point, so release it to avoid memory leaks.
        CGContextRelease(brushContext);
        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &brushTexture);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, brushTexture);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
        // Release the image data; it's no longer needed
        free(brushData);
    }
}
Update:

I've changed the CGFloats to GLfloats, without success. Maybe there's a problem with this rendering code?
if(frameBuffer){
    // draw the stroke element
    [self prepOpenGLStateForFBO:frameBuffer];
    [self prepOpenGLBlendModeForColor:element.color];
    CheckGLError();
}

// find our screen scale so that we can convert from
// points to pixels
GLfloat scale = self.contentScaleFactor;

// fetch the vertex data from the element
struct Vertex* vertexBuffer = [element generatedVertexArrayWithPreviousElement:previousElement forScale:scale];

glLineWidth(2);

// if the element has any data, then draw it
if(vertexBuffer){
    glVertexPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Position[0]);
    glColorPointer(4, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Color[0]);
    glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Texture[0]);
    glDrawArrays(GL_TRIANGLES, 0, (GLint)[element numberOfSteps] * (GLint)[element numberOfVerticesPerStep]);
    CheckGLError();
}

if(frameBuffer){
    [self unprepOpenGLState];
}
The vertex struct is as follows:

struct Vertex{
    GLfloat Position[2];    // x,y position
    GLfloat Color[4];       // rgba color
    GLfloat Texture[2];     // x,y texture coord
};
Update:

This issue actually doesn't appear to be 32-bit vs. 64-bit, but rather a difference between the A7 GPU and its GL driver. I discovered this by running both a 32-bit build and a 64-bit build on the 64-bit iPad; the texture ends up looking identical in both versions of the app.
I'd like you to check two things.

Check the alpha-blending logic (or options) in OpenGL.

Check the interpolation logic that scales with the drag speed.

It looks like you either don't have the second one or it isn't working, and it is essential for a drawing application.
I don't think the problem is in the texture, but in how the line elements are composited in the framebuffer.

Your code snippet looks like it draws segment by segment, so several overlapping segments are drawn on top of each other. If the framebuffer has a low color depth, artifacts appear, especially in the brighter parts of the blended areas.
You can inspect the framebuffer with Xcode's OpenGL debugger. Activate it by running the code on a device and clicking the "Capture OpenGL ES Frame" button.

Select the "glBindFramebuffer" command in the Debug Navigator, then look at the framebuffer description in the console area:

The interesting part is GL_FRAMEBUFFER_INTERNAL_FORMAT.
In my opinion, the problem is the blend mode you use when compositing the different image passes. I assume the texture you upload is only used for display, and you keep an in-memory image where you composite the different drawing operations, or do you read the image contents back with glReadPixels? Basically, your second image looks like a straight-alpha image being treated as a premultiplied-alpha image. To make sure it's not a texture problem, save the UIImage to a file before uploading it to the texture, and check whether that image is actually correct.