
iPad texture loading differences (32-bit vs. 64-bit)

I am working on a drawing application and I am noticing significant differences in textures loaded on a 32-bit iPad vs. a 64-bit iPad.

Here is the texture drawn on a 32-bit iPad:

[screenshot: brush stroke rendered on a 32-bit iPad]

Here is the texture drawn on a 64-bit iPad:

[screenshot: brush stroke rendered on a 64-bit iPad]

The 64-bit result is what I want, but it seems like it may be losing some data?

I create a default brush texture with this code:

UIGraphicsBeginImageContext(CGSizeMake(64, 64));
CGContextRef defBrushTextureContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(defBrushTextureContext);

size_t num_locations = 3;
CGFloat locations[3] = { 0.0, 0.8, 1.0 };
CGFloat components[12] = { 1.0,1.0,1.0, 1.0,
    1.0,1.0,1.0, 1.0,
    1.0,1.0,1.0, 0.0 };
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
CGGradientRef myGradient = CGGradientCreateWithColorComponents (myColorspace, components, locations, num_locations);

CGPoint myCentrePoint = CGPointMake(32, 32);
float myRadius = 20;

CGGradientDrawingOptions options = kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation;
CGContextDrawRadialGradient (UIGraphicsGetCurrentContext(), myGradient, myCentrePoint,
                             0, myCentrePoint, myRadius,
                             options);

CFRelease(myGradient);
CFRelease(myColorspace);
UIGraphicsPopContext();

[self setBrushTexture:UIGraphicsGetImageFromCurrentImageContext()];

UIGraphicsEndImageContext();

And then actually set the brush texture like this:

-(void) setBrushTexture:(UIImage*)brushImage{
// save our current texture.
currentTexture = brushImage;

// first, delete the old texture if needed
if (brushTexture){
    glDeleteTextures(1, &brushTexture);
    brushTexture = 0;
}

// fetch the cgimage for us to draw into a texture
CGImageRef brushCGImage = brushImage.CGImage;

// Make sure the image exists
if(brushCGImage) {
    // Get the width and height of the image
    GLint width = CGImageGetWidth(brushCGImage);
    GLint height = CGImageGetHeight(brushCGImage);

    // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
    // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.

    // Allocate the memory needed for the bitmap context
    GLubyte* brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
    // Use the bitmap creation function provided by the Core Graphics framework.
    CGContextRef brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushCGImage), kCGImageAlphaPremultipliedLast);
    // After you create the context, you can draw the  image to the context.
    CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushCGImage);
    // You don't need the context at this point, so you need to release it to avoid memory leaks.
    CGContextRelease(brushContext);

    // Use OpenGL ES to generate a name for the texture.
    glGenTextures(1, &brushTexture);
    // Bind the texture name.
    glBindTexture(GL_TEXTURE_2D, brushTexture);
    // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // Specify a 2D texture image, providing a pointer to the image data in memory
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
    // Release  the image data; it's no longer needed
    free(brushData);
}
}

Update:

I've updated the CGFloats to be GLfloats with no success. Maybe there is an issue with this rendering code?

if(frameBuffer){
    // draw the stroke element
    [self prepOpenGLStateForFBO:frameBuffer];
    [self prepOpenGLBlendModeForColor:element.color];
    CheckGLError();
}

// find our screen scale so that we can convert from
// points to pixels
GLfloat scale = self.contentScaleFactor;

// fetch the vertex data from the element
struct Vertex* vertexBuffer = [element generatedVertexArrayWithPreviousElement:previousElement forScale:scale];

glLineWidth(2);

// if the element has any data, then draw it
if(vertexBuffer){
    glVertexPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Position[0]);
    glColorPointer(4, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Color[0]);
    glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Texture[0]);
    glDrawArrays(GL_TRIANGLES, 0, (GLint)[element numberOfSteps] * (GLint)[element numberOfVerticesPerStep]);
    CheckGLError();
}

if(frameBuffer){
    [self unprepOpenGLState];
}

The vertex struct is the following:

struct Vertex{
    GLfloat Position[2];    // x,y position
    GLfloat Color [4];      // rgba color
    GLfloat Texture[2];    // x,y texture coord
};

Update:

The issue does not actually appear to be 32-bit vs. 64-bit based, but rather something different about the A7 GPU and its GL drivers. I found this out by running both a 32-bit build and a 64-bit build on the 64-bit iPad: the textures ended up looking exactly the same on both builds of the app.

I would like you to check two things.

  1. Check your alpha blending logic (or options) in OpenGL.

  2. Check your interpolation logic, which should be proportional to the velocity of dragging.

It seems you either don't have the second one or it isn't effective, and it is required for a drawing app.

I don't think the problem is in the texture but in the frame buffer into which you composite the line elements.

Your code fragments look like you draw segment by segment, so there are several overlapping segments drawn on top of each other. If the depth of the frame buffer is low there will be artifacts, especially in the lighter regions of the blended areas.

You can check the frame buffer using Xcode's OpenGL debugger. Activate it by running your code on the device and clicking the little "Capture OpenGL ES Frame" button.

Select a "glBindFramebuffer" command in the "Debug Navigator" and look at the frame buffer description in the console area:

[screenshot: frame buffer description in the Xcode console]

The interesting part is the GL_FRAMEBUFFER_INTERNAL_FORMAT.

In my opinion, the problem is in the blending mode you use when compositing the different image passes. I assume that you upload the texture for display only, and either keep an in-memory image where you composite the different drawing operations, or read back the image content using glReadPixels? Basically, your second image appears like a straight-alpha image drawn as a pre-multiplied-alpha image. To be sure that it isn't a texture problem, save the UIImage to a file before uploading it to the texture, and check that the image is actually correct.
