
Using the GPU on iOS for Overlaying one image on another Image (Video Frame)

I am working on some image processing in my app: taking live video and adding an image on top of it as an overlay. Unfortunately this consumes a massive amount of CPU, which causes other parts of the program to slow down and not work as intended. Essentially I want the following code to run on the GPU instead of the CPU.

- (UIImage *)processUsingCoreImage:(CVPixelBufferRef)input {
    CIImage *inputCIImage = [CIImage imageWithCVPixelBuffer:input];

    // Use Core Graphics for this
    UIImage *ghostImage = [self createPaddedGhostImageWithSize:CGSizeMake(1280, 720)]; //[UIImage imageNamed:@"myImage"];
    CIImage *ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];

    CIFilter *blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
    [blendFilter setValue:ghostCIImage forKey:kCIInputImageKey];
    [blendFilter setValue:inputCIImage forKey:kCIInputBackgroundImageKey];

    CIImage *blendOutput = [blendFilter outputImage];

    EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    // Note: key and value were swapped in the original; the option key must map to the value.
    NSDictionary *contextOptions = @{ kCIContextWorkingColorSpace : [NSNull null],
                                      kCIContextUseSoftwareRenderer : @NO };
    CIContext *context = [CIContext contextWithEAGLContext:myEAGLContext options:contextOptions];

    CGImageRef outputCGImage = [context createCGImage:blendOutput fromRect:[blendOutput extent]];
    UIImage *outputImage = [UIImage imageWithCGImage:outputCGImage];
    CGImageRelease(outputCGImage);

    return outputImage;
}

Suggestions in order:

  1. Do you really need to composite the two images? Is an AVCaptureVideoPreviewLayer with a UIImageView on top insufficient? You'd then just apply the current ghost transform to the image view (or its layer) and let the compositor glue the two together, for which it will use the GPU (see the sketch after this list).
  2. If not, then the first port of call should be Core Image — it wraps GPU image operations into a relatively easy Swift/Objective-C package. There is a simple composition filter, so all you need to do is turn the two things into CIImage objects and use -imageByApplyingTransform: to adjust the ghost.
  3. Failing both of those, you're looking at an OpenGL solution. You specifically want to use CVOpenGLESTextureCache to push Core Video frames to the GPU, and the ghost will simply live there permanently. Start from the GLCameraRipple sample for that, then look into GLKBaseEffect to save yourself from needing to know GLSL if you don't already. All you should need to do is package up some vertices and make a draw call.
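
A minimal sketch of option 1, assuming a configured AVCaptureSession (self.captureSession) and a bundled image named "ghost" — both names are illustrative:

// Hypothetical setup: self.captureSession and the "ghost" asset are assumed to exist.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];

// The ghost sits in an ordinary UIImageView above the preview layer;
// Core Animation composites the two on the GPU for free.
UIImageView *ghostView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"ghost"]];
ghostView.frame = self.view.bounds;
ghostView.contentMode = UIViewContentModeScaleAspectFit;
[self.view addSubview:ghostView];

// Move/scale the ghost by transforming the view; no per-frame CPU work needed.
ghostView.transform = CGAffineTransformMakeTranslation(40.0, -20.0);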

The biggest performance issue is that you create an EAGLContext and a CIContext on every frame. They need to be created only once, outside of your processUsingCoreImage: method.
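
For example (a sketch; the property and method names are illustrative), create both contexts once and reuse them for every frame:

// Declared once on the class that does the processing.
@property (nonatomic, strong) EAGLContext *eaglContext;
@property (nonatomic, strong) CIContext *ciContext;

// Called once, e.g. from -viewDidLoad.
- (void)setUpContexts {
    self.eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    self.ciContext = [CIContext contextWithEAGLContext:self.eaglContext
                                               options:@{ kCIContextWorkingColorSpace : [NSNull null],
                                                          kCIContextUseSoftwareRenderer : @NO }];
}

// processUsingCoreImage: then only builds the filter chain and renders,
// using self.ciContext instead of creating new contexts per frame.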

Also, if you want to avoid the CPU-GPU roundtrip, then instead of creating a Core Graphics image (createCGImage:, which forces CPU processing) you can render directly into the EAGL-backed layer, like this:

// destinationRect is hypothetical: the target rectangle in the drawable, in pixels.
[context drawImage:blendOutput inRect:destinationRect fromRect:[blendOutput extent]];
[myEAGLContext presentRenderbuffer:GL_RENDERBUFFER];
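
For those two calls to work, the EAGLContext needs a renderbuffer whose storage comes from a CAEAGLLayer, attached to a bound framebuffer. A minimal one-time setup sketch, assuming eaglLayer is the CAEAGLLayer of the view being rendered into:

#import <OpenGLES/ES2/gl.h>

[EAGLContext setCurrentContext:myEAGLContext];

// Back a renderbuffer with the layer's storage.
GLuint renderbuffer, framebuffer;
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
[myEAGLContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];

// Attach it to a framebuffer so Core Image has somewhere to draw.
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, renderbuffer);

// After this, drawImage:inRect:fromRect: followed by
// presentRenderbuffer:GL_RENDERBUFFER runs once per frame.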
