Using the GPU on iOS for Overlaying one image on another Image (Video Frame)
I am working on some image processing in my app: taking live video and adding an image on top of it to use as an overlay. Unfortunately this is taking massive amounts of CPU, which causes other parts of the program to slow down and not work as intended. Essentially I want to make the following code use the GPU instead of the CPU.
- (UIImage *)processUsingCoreImage:(CVPixelBufferRef)input {
    CIImage *inputCIImage = [CIImage imageWithCVPixelBuffer:input];

    // Use Core Graphics for this
    UIImage *ghostImage = [self createPaddedGhostImageWithSize:CGSizeMake(1280, 720)]; // [UIImage imageNamed:@"myImage"];
    CIImage *ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];

    CIFilter *blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
    [blendFilter setValue:ghostCIImage forKey:kCIInputImageKey];
    [blendFilter setValue:inputCIImage forKey:kCIInputBackgroundImageKey];
    CIImage *blendOutput = [blendFilter outputImage];

    EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    NSDictionary *contextOptions = @{kCIContextWorkingColorSpace : [NSNull null],
                                     kCIContextUseSoftwareRenderer : @NO};
    CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext options:contextOptions];
    CGImageRef outputCGImage = [context createCGImage:blendOutput fromRect:[blendOutput extent]];
    UIImage *outputImage = [UIImage imageWithCGImage:outputCGImage];
    CGImageRelease(outputCGImage);
    return outputImage;
}
Suggestions, in order:

1. Is an AVCaptureVideoPreviewLayer with a UIImageView on top insufficient? You'd then just apply the current ghost transform to the image view (or its layer) and let the compositor glue the two together, for which it will use the GPU.

2. Failing that, turn both things into CIImages and use -imageByApplyingTransform: to adjust the ghost.

3. Otherwise, use a CVOpenGLESTextureCache to push Core Video frames to the GPU, and the ghost will simply live there permanently. Start from the GLCameraRipple sample for that, then look into GLKBaseEffect to save yourself from needing to know GLSL if you don't already. All you should need to do is package up some vertices and make a drawing call.

The biggest performance issue is that you create an EAGLContext and a CIContext every frame. This needs to be done only once, outside of your processUsingCoreImage: method.
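For example, a minimal sketch of creating both contexts once and reusing them every frame (the property names and the setup method here are illustrative, not from the original code):

```objc
// Illustrative sketch: create the contexts once, e.g. during view controller setup,
// and reuse them for every frame instead of allocating them in processUsingCoreImage:.
@property (nonatomic, strong) EAGLContext *eaglContext;
@property (nonatomic, strong) CIContext *ciContext;

- (void)setupContexts {
    self.eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    self.ciContext = [CIContext contextWithEAGLContext:self.eaglContext
                                               options:@{kCIContextWorkingColorSpace : [NSNull null],
                                                         kCIContextUseSoftwareRenderer : @NO}];
}
```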
Also, if you want to avoid the CPU-GPU roundtrip, instead of creating a Core Graphics image (createCGImage), and thus CPU processing, you can render directly into the EAGL layer like this:
[context drawImage:blendOutput inRect:destRect fromRect:[blendOutput extent]]; // destRect: target rect in the drawable, in pixels
[myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];
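A common way to set up this on-GPU rendering is with a GLKView that shares the EAGL context; the sketch below assumes a glView property and a reusable ciContext (names are illustrative assumptions, not from the original code):

```objc
// Illustrative sketch: draw the blended CIImage straight into a GLKView's drawable,
// keeping the whole pipeline on the GPU (no CGImage / UIImage round trip).
- (void)renderFrame:(CIImage *)blendOutput {
    [self.glView bindDrawable];                     // make the view's framebuffer current
    CGFloat scale = self.glView.contentScaleFactor; // the drawable is sized in pixels, not points
    CGRect destRect = CGRectMake(0, 0,
                                 self.glView.bounds.size.width * scale,
                                 self.glView.bounds.size.height * scale);
    [self.ciContext drawImage:blendOutput inRect:destRect fromRect:[blendOutput extent]];
    [self.glView display];                          // presents the renderbuffer
}
```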