
Using the GPU on iOS for Overlaying one image on another Image (Video Frame)

I am working on some image processing in my app: taking live video and adding an image on top of it to use it as an overlay. Unfortunately this is taking massive amounts of CPU, which is causing other parts of the program to slow down and not work as intended. Essentially I want to make the following code use the GPU instead of the CPU.

- (UIImage *)processUsingCoreImage:(CVPixelBufferRef)input {
    CIImage *inputCIImage = [CIImage imageWithCVPixelBuffer:input];

    // Use Core Graphics for this
    UIImage *ghostImage = [self createPaddedGhostImageWithSize:CGSizeMake(1280, 720)]; // [UIImage imageNamed:@"myImage"];
    CIImage *ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];

    CIFilter *blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
    [blendFilter setValue:ghostCIImage forKey:kCIInputImageKey];
    [blendFilter setValue:inputCIImage forKey:kCIInputBackgroundImageKey];

    CIImage *blendOutput = [blendFilter outputImage];

    EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    NSDictionary *contextOptions = @{ kCIContextWorkingColorSpace : [NSNull null],
                                      kCIContextUseSoftwareRenderer : @NO };
    CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext options:contextOptions];

    // Render the composite back out to a UIImage.
    CGImageRef outputCGImage = [context createCGImage:blendOutput fromRect:[blendOutput extent]];
    UIImage *outputImage = [UIImage imageWithCGImage:outputCGImage];
    CGImageRelease(outputCGImage);

    return outputImage;
}

Suggestions, in order:

  1. Do you really need to composite the two images? Is an AVCaptureVideoPreviewLayer with a UIImageView on top insufficient? You'd then just apply the current ghost transform to the image view (or its layer) and let the compositor glue the two together, for which it will use the GPU (see the first sketch after this list).
  2. If not, then the first port of call should be CoreImage — it wraps up GPU image operations into a relatively easy Swift/Objective-C package. There is a simple composition filter, so all you need to do is make the two things into CIImages and use -imageByApplyingTransform: to adjust the ghost (second sketch below).
  3. Failing both of those, you're looking at an OpenGL solution. You specifically want to use CVOpenGLESTextureCache to push Core Video frames to the GPU, and the ghost will simply permanently live there. Start from the GLCameraRipple sample for that stuff, then look into GLKBaseEffect to save yourself from needing to know GLSL if you don't already (third sketch below). All you should need to do is package up some vertices and make a drawing call.
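
For the first suggestion, here is a minimal sketch, assuming a view controller with an already-configured capture session in a self.captureSession property (the property name is illustrative):

#import <AVFoundation/AVFoundation.h>

// The preview layer and the image view are composited together by the
// system compositor on the GPU; no per-frame CPU work is needed.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];

UIImageView *ghostView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myImage"]];
[self.view addSubview:ghostView];

// Position the ghost by transforming the view; the compositor applies
// this on the GPU as well. The offsets are placeholder values.
ghostView.transform = CGAffineTransformMakeTranslation(100.0, 50.0);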
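
For the second suggestion, the filter setup is the same as in the question; the only extra piece is positioning the ghost with a transform instead of pre-padding it into a full-size bitmap. A sketch (the offsets are placeholders):

CIImage *ghostCIImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"myImage"]];
// Shift the ghost into place on the GPU rather than redrawing a padded bitmap.
CIImage *movedGhost =
    [ghostCIImage imageByApplyingTransform:CGAffineTransformMakeTranslation(200.0, 100.0)];
[blendFilter setValue:movedGhost forKey:kCIInputImageKey];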
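
And for the third, the core of the CVOpenGLESTextureCache route looks roughly like this, in the spirit of GLCameraRipple (a sketch only: error checking is omitted, myEAGLContext is assumed to be current, pixelBuffer is the CVPixelBufferRef from the capture output, and the buffers are assumed to be 32BGRA):

#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// One-time setup: a texture cache tied to the GL context.
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, myEAGLContext, NULL, &textureCache);

// Per frame: wrap the pixel buffer as a GL texture without a CPU copy.
CVOpenGLESTextureRef texture = NULL;
GLsizei width  = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                             pixelBuffer, NULL, GL_TEXTURE_2D,
                                             GL_RGBA, width, height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));

// ... draw textured quads for the frame and the resident ghost here ...

CFRelease(texture);
CVOpenGLESTextureCacheFlush(textureCache, 0);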

The biggest performance issue is that you create an EAGLContext and a CIContext for every frame. These need to be created only once, outside of your processUsingCoreImage: method.
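
A sketch of that change, with the contexts held in properties and created once (the class name and the viewDidLoad home are illustrative):

@interface CameraViewController () // hypothetical view controller
@property (nonatomic, strong) EAGLContext *myEAGLContext;
@property (nonatomic, strong) CIContext *ciContext;
@end

@implementation CameraViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Created once here and reused for every frame.
    self.myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    self.ciContext = [CIContext contextWithEAGLContext:self.myEAGLContext
                                               options:@{ kCIContextWorkingColorSpace : [NSNull null],
                                                          kCIContextUseSoftwareRenderer : @NO }];
}

// processUsingCoreImage: then uses self.ciContext instead of building a new context.

@end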

Also, if you want to avoid the CPU-GPU round trip of creating a Core Graphics image (createCGImage, and hence CPU processing), you can render directly into the EAGL layer like this:

[context drawImage:blendOutput inRect:destRect fromRect:[blendOutput extent]]; // destRect: target rectangle in the drawable, in pixels
[myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];
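
A convenient way to set up that render target is a GLKView bound to the same EAGLContext, which handles the framebuffer and presentation plumbing for you. A minimal sketch, assuming myEAGLContext and context are the objects created once as above:

#import <GLKit/GLKit.h>

// One-time setup: a view whose drawable the CIContext can render into.
GLKView *glkView = [[GLKView alloc] initWithFrame:self.view.bounds
                                          context:myEAGLContext];
[self.view addSubview:glkView];

// Per frame:
[glkView bindDrawable];
CGRect destRect = CGRectMake(0, 0, glkView.drawableWidth, glkView.drawableHeight); // drawable size is in pixels
[context drawImage:blendOutput inRect:destRect fromRect:[blendOutput extent]];
[glkView display]; // presents the renderbuffer for you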
