
Drawing frames from a movie into a CGBitmapContext

I have an app that needs to render frames from a video/movie into a CGBitmapContext with an arbitrary CGAffineTransform applied. I'd like it to have a decent frame rate, at least 20fps.

I've tried using AVURLAsset and [AVAssetImageGenerator copyCGImageAtTime:], and as the documentation for that method clearly states, it's quite slow, sometimes taking me down to 5fps.

What is a better way to do this? I'm thinking I could set up an AVPlayer with an AVPlayerLayer, then use [CALayer renderInContext:] with my transform applied to the context. Would this work? Or does an AVPlayerLayer stop running when it notices it's not being shown on screen?
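For what it's worth, the idea above would look roughly like this sketch. It is untested, and the names `videoURL`, `bitmapContext`, `transform`, `width`, and `height` are placeholders; note that renderInContext: is a CALayer method, and it's unclear whether an AVPlayerLayer actually produces video content when rendered offscreen this way:

```objc
// Untested sketch: drive an AVPlayer offscreen and snapshot its layer.
AVPlayer *player = [AVPlayer playerWithURL:videoURL];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = CGRectMake(0, 0, width, height);
[player play];

// Later, whenever a frame is wanted:
CGContextSaveGState(bitmapContext);
CGContextConcatCTM(bitmapContext, transform);  // the arbitrary CGAffineTransform
[playerLayer renderInContext:bitmapContext];   // CALayer method
CGContextRestoreGState(bitmapContext);
```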

Any other suggestions?

I ended up getting lovely, quick UIImages from the frames of a video by:
1) Creating an AVURLAsset with the video's URL.
2) Creating an AVAssetReader with the asset.
3) Setting the reader's timeRange property.
4) Creating an AVAssetReaderTrackOutput with the first track from the asset.
5) Adding the output to the reader.
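A sketch of steps 1–5, assuming `videoURL`, `startTime`, and `duration` are defined elsewhere. The output settings requesting BGRA are my addition (they make the pixel buffers directly usable by CGBitmapContextCreate later), and note that [reader startReading] has to be called before any buffers can be copied:

```objc
// Steps 1–2: asset and reader.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

// Step 3: only decode the range we care about.
reader.timeRange = CMTimeRangeMake(startTime, duration);

// Step 4: track output, decoded to BGRA for easy CoreGraphics use.
AVAssetTrack *videoTrack =
    [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSDictionary *settings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:settings];

// Step 5: attach the output and begin.
[reader addOutput:output];
[reader startReading];
```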

Then for each frame:
6) Calling [output copyNextSampleBuffer].
7) Passing the sample buffer into CMSampleBufferGetImageBuffer.
8) Passing the image buffer into CVPixelBufferLockBaseAddress, read-only.
9) Getting the base address of the image buffer with CVPixelBufferGetBaseAddress.
10) Calling CGBitmapContextCreate with dimensions from the image buffer, passing the base address in as the location of the CGBitmap's pixels.
11) Calling CGBitmapContextCreateImage to get the CGImageRef.
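Steps 6–11 for one frame might look like the following sketch, where `output` comes from the setup above; the cleanup at the end (unlocking the base address and releasing the copied buffer) is my addition, but it is required to avoid leaking a pixel buffer per frame:

```objc
// Step 6: pull the next decoded frame (NULL at end of range or on error).
CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
if (sampleBuffer == NULL) { return nil; }

// Steps 7–9: get at the raw pixels, read-only.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

// Steps 10–11: wrap the pixels in a bitmap context and snapshot it.
// The bitmap info assumes the BGRA output settings used at setup.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height,
    8, bytesPerRow, colorSpace,
    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];

// Clean up everything we created or locked.
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
CFRelease(sampleBuffer);
```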

I was very pleased to find that this works surprisingly well for scrubbing. If the user wants to go back to an earlier part of the video, simply create a new AVAssetReader with the new time range and go. It's quite fast!
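The scrub itself might be sketched like this, assuming `asset` from the setup and a hypothetical `targetTime`; an AVAssetReader can't be rewound or have its timeRange changed once reading has started, so the old one is cancelled and a fresh one built:

```objc
// Abandon the current read and start over at the requested time.
[reader cancelReading];
NSError *error = nil;
reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
reader.timeRange = CMTimeRangeMake(targetTime, kCMTimePositiveInfinity);
// Outputs are tied to one reader, so re-create the
// AVAssetReaderTrackOutput and re-add it here, then:
[reader startReading];
```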
