
iPhone - Screenshot of multiple OpenGL (CAEAGLLayer) views

I am working on a paint app, taking the GLPaint sample app as a reference. The app has two canvas views: one moves from left to right (animating) and the other is used as the background view (as shown in the figure).

[figure: the animated canvas view moving over the background canvas view]

I am using CAEAGLLayer for filling colors in both views (using a subclassing technique), and that part works as expected. Now I have to take a screenshot of the complete view (the outlines and both OpenGL views), but I am getting a screenshot of only one view (either the moving view or the background view). The screenshot code is wired up to both views, but only one view's content is saved at a time.

The code snippet for the screenshot is as follows.

- (UIImage *)snapshot:(UIView *)eaglview {
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is
    // already bound at this point, this call is redundant, but it is needed
    // if you're dealing with multiple renderbuffers.
    // Note: replace "viewRenderbuffer" with the actual name of the
    // renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer.
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer.
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to
    // ignore the alpha channel; otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS; create a graphics context with the
    // target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to
        // take the scale into consideration. Set the scale parameter to your
        // OpenGL ES view's contentScaleFactor so that you get a
        // high-resolution snapshot when its value is greater than 1.0.
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext.
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz
    // coordinate system; flip the CGImage by rendering it into the flipped
    // bitmap context. The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context.
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up.
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}

Is there any way to combine the content of both CAEAGLLayer views?

Please help.

Thank you very much.

You could create a screenshot of each view separately and then combine them as follows:

UIGraphicsBeginImageContext(canvasSize);

[openGLImage1 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
[openGLImage2 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];

UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

You should use an appropriate canvasSize and frame to draw each generated UIImage; this is just a sample of how you could do it.
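For instance, a minimal sketch of that compositing step, assuming the snapshot: method from the question and two hypothetical view outlets, backgroundGLView and movingGLView (the names are illustrative):

// Sketch: composite per-view snapshots at their real frames.
// backgroundGLView and movingGLView are assumed outlets; snapshot:
// is the method from the question.
UIImage *bgImage = [self snapshot:backgroundGLView];
UIImage *fgImage = [self snapshot:movingGLView];

CGSize canvasSize = self.view.bounds.size;
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, [UIScreen mainScreen].scale);

// Draw each snapshot where its view actually sits on screen.
[bgImage drawInRect:backgroundGLView.frame];

// For a view that is mid-animation, the model layer's frame may not
// match what is on screen; the presentation layer reflects the
// in-flight position.
CALayer *presentation = (CALayer *)[movingGLView.layer presentationLayer];
CGRect movingFrame = presentation ? presentation.frame : movingGLView.frame;
[fgImage drawInRect:movingFrame];

UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Drawing each image at its view's frame, rather than stretching both over the whole canvas, keeps the moving view at its on-screen position in the combined screenshot.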

See here for a much better way of doing this. It basically lets you capture a larger view that contains all of your OpenGL (and other) views into one fully composed screenshot, identical to what you see on the screen.
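The idea behind that approach can be sketched roughly as follows: walk the container view's subviews and render each one into a single context, substituting the glReadPixels-based snapshot for EAGL-backed views, since OpenGL content is not captured by renderInContext:. The container variable and the reuse of the question's snapshot: method are assumptions here:

// Sketch: compose a parent view's entire hierarchy into one image.
UIView *container = self.view; // the view holding both canvases
UIGraphicsBeginImageContextWithOptions(container.bounds.size, NO,
                                       [UIScreen mainScreen].scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();

for (UIView *sub in container.subviews) {
    CGContextSaveGState(ctx);
    // Position the drawing at the subview's origin.
    CGContextTranslateCTM(ctx, sub.frame.origin.x, sub.frame.origin.y);
    if ([sub.layer isKindOfClass:[CAEAGLLayer class]]) {
        // OpenGL ES content does not render via renderInContext:,
        // so draw the glReadPixels-based snapshot instead.
        [[self snapshot:sub] drawInRect:sub.bounds];
    } else {
        [sub.layer renderInContext:ctx];
    }
    CGContextRestoreGState(ctx);
}

UIImage *composed = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();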
