
How to get UIImage from EAGLView?

I am trying to get a UIImage from what is displayed in my EAGLView. Any suggestions on how to do this?

Here is a cleaned up version of Quakeboy's code. I tested it on iPad, and it works just fine. The improvements include:

  • works with any size EAGLView
  • works with retina display (point scale 2)
  • replaces the nested loop with memcpy
  • cleans up memory leaks
  • saves the UIImage to the photo album as a bonus

Use this as a method in your EAGLView:

-(void)snapUIImage
{
    int s = 1;
    UIScreen* screen = [ UIScreen mainScreen ];
    if ( [ screen respondsToSelector:@selector(scale) ] )
        s = (int) [ screen scale ];

    const int w = self.frame.size.width;
    const int h = self.frame.size.height;
    const NSInteger myDataLength = w * h * 4 * s * s;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < h*s; y++)
    {
        memcpy( buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s );
    }
    free(buffer); // work with the flipped buffer, so get rid of the original one.

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w * s;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [ UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp ];
    UIImageWriteToSavedPhotosAlbum( myImage, nil, nil, nil );
    CGImageRelease( imageRef );
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);
}
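A minimal usage sketch for the method above (a sketch only: it assumes the EAGLView exposes its EAGLContext as a context property and that glView is an outlet to it; both names are assumptions, not part of the answer):

// Hypothetical call site, e.g. a button action in the controller that owns the EAGLView.
- (IBAction)captureTapped:(id)sender
{
    // glReadPixels reads from the framebuffer of the current context,
    // so make the view's context current before snapping.
    [EAGLContext setCurrentContext:glView.context];
    [glView snapUIImage];   // reads the pixels and saves the snapshot to the photo album
}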

I was unable to get the other answers here to work correctly for me.

After a few days I finally got a working solution to this. There is code provided by Apple which produces a UIImage from an EAGLView. Then you simply need to flip the image vertically, since UIKit is upside down relative to OpenGL.

Apple-provided method, modified to sit inside the view you want to turn into an image:

- (UIImage *)drawableToCGImage
{
GLint backingWidth2, backingHeight2;
//Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note: replace "viewRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);

NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
NSInteger dataLength = width2 * height2 * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);

// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                ref, NULL, true, kCGRenderingIntentDefault);

// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // Set the scale parameter to your OpenGL ES view's contentScaleFactor
    // so that you get a high-resolution snapshot when its value is greater than 1.0
    CGFloat scale = self.contentScaleFactor;
    widthInPoints = width2 / scale;
    heightInPoints = height2 / scale;
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    widthInPoints = width2;
    heightInPoints = height2;
    UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}

CGContextRef cgcontext = UIGraphicsGetCurrentContext();

// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();

// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);

return image;

}

And here's a method to flip the image:

- (UIImage *) flipImageVertically:(UIImage *)originalImage {
UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
UIGraphicsBeginImageContext(tempImageView.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform flipVertical = CGAffineTransformMake(
                                                       1, 0, 0, -1, 0, tempImageView.frame.size.height
                                                       );
CGContextConcatCTM(context, flipVertical);

[tempImageView.layer renderInContext:context];

UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//[tempImageView release];

return flippedImage;

}

And here's a link to the Apple dev page where I found the first method, for reference: http://developer.apple.com/library/ios/#qa/qa1704/_index.html
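A short usage sketch that combines the two methods above, as the answer describes (method names are taken from the snippets; where you call this depends on your own rendering code):

// Called from inside the EAGLView, after the scene has been rendered:
UIImage *rawImage = [self drawableToCGImage];            // capture the framebuffer contents
UIImage *snapshot = [self flipImageVertically:rawImage]; // flip into UIKit's top-left orientation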

-(UIImage *) saveImageFromGLView
{
    NSInteger myDataLength = 320 * 480 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y <480; y++)
    {
        for(int x = 0; x <320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];

    CGImageRelease( imageRef );
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);

    return myImage;
}

EDIT: as demianturner notes below, you no longer need to render the layer; you can (and should) now use the higher-level [UIView drawViewHierarchyInRect:]. Other than that, this should work the same.

An EAGLView is just a kind of view, and its underlying CAEAGLLayer is just a kind of layer. That means the standard approach for converting a view/layer into a UIImage will work. (The fact that the linked question is about UIWebView doesn't matter; that's just yet another kind of view.)
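A minimal sketch of that standard view-to-image approach, assuming iOS 7 or later for drawViewHierarchyInRect:afterScreenUpdates: (eaglView is simply whatever view you want to capture):

// Render the view hierarchy into an image context sized in points.
UIGraphicsBeginImageContextWithOptions(eaglView.bounds.size, NO, 0.0); // scale 0.0 = use the screen's scale
[eaglView drawViewHierarchyInRect:eaglView.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();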

CGDataProviderCreateWithData takes a release callback for the data, and that callback is where you should free the buffer:

void releaseBufferData(void *info, const void *data, size_t size)
{
    free((void*)data);
}

Then proceed as in the other examples, but do NOT free the data here:

GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferData, bufferDataSize, releaseBufferData);
....
CGDataProviderRelease(provider);

Or simply use CGDataProviderCreateWithCFData instead, without the release-callback machinery:

GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
NSData *data = [NSData dataWithBytes:bufferData length:bufferDataSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
....
CGDataProviderRelease(provider);
free(bufferData); // Remember to free it

For more information, see this discussion:

What's the right memory management pattern for buffer->CGImageRef->UIImage?

To use Brad Larson's code above, you have to edit your EAGLView.m:

- (id)initWithCoder:(NSCoder*)coder{
    self = [super initWithCoder:coder];
    if (self) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = TRUE;
        eaglLayer.drawableProperties = 
            [NSDictionary dictionaryWithObjectsAndKeys: 
                [NSNumber numberWithBool:YES],  kEAGLDrawablePropertyRetainedBacking, 
                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
    }
    return self;
}

You have to set the numberWithBool: value for kEAGLDrawablePropertyRetainedBacking to YES, as shown above; without retained backing the renderbuffer contents are not guaranteed to survive presentation, and the captured image may come out blank.
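If you would rather not pay for retained backing, the usual alternative is to read the pixels before the renderbuffer is presented, while its contents are still defined. A rough sketch, assuming a typical per-frame draw method and the snapUIImage method from the first answer (drawFrame, context and captureRequested are illustrative names, not part of this answer):

- (void)drawFrame
{
    [EAGLContext setCurrentContext:context];
    // ... issue the GL drawing commands for this frame ...
    if (captureRequested) {
        [self snapUIImage];   // read the framebuffer before presenting, while its contents are valid
        captureRequested = NO;
    }
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}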
