
Approach for recording grayscale video on iPhone?

I am building an iPhone app that needs to record grayscale video and save it to the camera roll. I'm stumped as to how best to approach this.

I am thinking along the following lines:

  1. Use a shader and OpenGL ES to transform the video to grayscale.
  2. Use AVFoundation (an AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor) to write the video to a file.

My questions are:

  1. Is this the right approach (simplest, best performance)?
  2. If so, what would be the best way to get from the OpenGL output to a CVPixelBufferRef input for the AVAssetWriterInputPixelBufferAdaptor?
  3. If not, what would be a better approach?

Any nudge in the right direction is much appreciated!

In general, I'd agree with this approach. Doing your processing in an OpenGL ES 2.0 shader should be the most performant way of doing video frame alteration like this, but it won't be very simple. Fortunately, you can start from a pre-existing template that already does this.

You can use the sample application I wrote here (and explained here) as a base. I use custom shaders in this example to track colors in an image, but you could easily alter this to convert the video frames to grayscale (I even saw someone do this once). The code for feeding camera video into a texture and processing it could be used verbatim from that sample.
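The shader itself is the easy part of that alteration. As a rough sketch (the varying and uniform names here are illustrative, not taken from that sample, and must match your own vertex shader), a luminance-weighted grayscale fragment shader embedded as a C string for glShaderSource() could look like this:

```objc
// Sketch: an OpenGL ES 2.0 fragment shader that converts each texel to
// grayscale using Rec. 709 luma weights, stored as a C string so it can
// be handed to glShaderSource() and compiled at runtime.
static const char *kGrayscaleFragmentShaderSource =
    "varying highp vec2 textureCoordinate;\n"
    "uniform sampler2D inputImageTexture;\n"
    "void main()\n"
    "{\n"
    "    highp vec4 color = texture2D(inputImageTexture, textureCoordinate);\n"
    // dot() collapses the RGB channels into a single brightness value.
    "    highp float luma = dot(color.rgb, vec3(0.2125, 0.7154, 0.0721));\n"
    "    gl_FragColor = vec4(vec3(luma), color.a);\n"
    "}\n";
```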

In one of the display options within that application, I render the processed image first to a framebuffer object, then use glReadPixels() to pull the resulting image back into bytes that I can work with on the CPU. You could use this to get the raw image data back after the GPU has processed a frame, then feed those bytes into CVPixelBufferCreateWithBytes() to generate your CVPixelBufferRef for writing to disk.
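Here is a minimal sketch of that readback-and-wrap step. It assumes a bound framebuffer, an already-configured AVAssetWriterInputPixelBufferAdaptor, and a frame timestamp from your capture callback; the method and variable names are hypothetical:

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>

// Frees the malloc'd pixel bytes once CoreVideo is done with the buffer.
static void FreePixelBufferBytes(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);
}

- (void)writeProcessedFrameOfWidth:(size_t)width
                            height:(size_t)height
                           adaptor:(AVAssetWriterInputPixelBufferAdaptor *)adaptor
                         frameTime:(CMTime)frameTime
{
    size_t bytesPerRow = width * 4;
    GLubyte *rawPixels = (GLubyte *)malloc(bytesPerRow * height);

    // Pull the rendered frame back from the currently bound FBO. If the
    // scene was rendered through a BGRA-swizzling shader (discussed below),
    // these bytes will already be in the order the encoder wants.
    glReadPixels(0, 0, (GLsizei)width, (GLsizei)height, GL_RGBA, GL_UNSIGNED_BYTE, rawPixels);

    // Wrap the raw bytes in a CVPixelBufferRef without copying them; the
    // callback above frees them when the buffer is released.
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height,
                                 kCVPixelFormatType_32BGRA, rawPixels, bytesPerRow,
                                 FreePixelBufferBytes, NULL, NULL, &pixelBuffer);

    if (adaptor.assetWriterInput.readyForMoreMediaData)
    {
        [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
    }
    CVPixelBufferRelease(pixelBuffer);
}
```

One practical wrinkle: OpenGL's origin is at the bottom left, so frames read back this way are vertically flipped, and you may want to flip them in the final render pass before writing.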

(Edit: 2/29/2012) As an update to this, I just implemented this kind of video recording in my open source GPUImage framework, so I can comment on the specific performance of the encoding part. It turns out that you can capture video from the camera, perform live filtering on it, grab it from OpenGL ES using glReadPixels(), and write it out as live 640x480 H.264 video on an iPhone 4 at 30 FPS (the maximum camera framerate).

There were a few things I needed to do to get this recording speed. You need to make sure that you set your AVAssetWriterInputPixelBufferAdaptor to use kCVPixelFormatType_32BGRA as the color format for its input pixel buffers. Then, you'll need to re-render your RGBA scene using a color-swizzling shader to provide BGRA output for glReadPixels(). Without this color setting, video recording framerates drop to 5-8 FPS on an iPhone 4; with it, they easily hit 30 FPS. You can look at the GPUImageMovieWriter class source code to see more about how I did this.
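Concretely, the adaptor configuration is just a matter of passing the right source pixel buffer attributes. A sketch, where writerInput stands in for your configured AVAssetWriterInput:

```objc
// Sketch: request BGRA pixel buffers for the writer's input, the color
// format that keeps the iPhone 4 encoding path at 30 FPS.
NSDictionary *sourceAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferWidthKey           : @640,
    (id)kCVPixelBufferHeightKey          : @480,
};

AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:sourceAttributes];
```

The swizzle on the shader side can be as small as one component reorder in the final render pass, e.g. `gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;`.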

Using the GPUImage framework, the above filtering and encoding task can be handled by simply creating a GPUImageVideoCamera, attaching a GPUImageSaturationFilter with its saturation set to 0 as a target, and then attaching a GPUImageMovieWriter as a target of that. The framework handles the OpenGL ES interactions for you. I've done this, and it works well on all the iOS devices I've tested.
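In code, that pipeline comes down to a few lines. This sketch follows the usage pattern from the GPUImage documentation; the output path and the 640x480 size are placeholders:

```objc
#import <unistd.h>
#import "GPUImage.h"

// Camera -> saturation filter (saturation 0 = grayscale) -> H.264 movie writer.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init];
saturationFilter.saturation = 0.0;
[videoCamera addTarget:saturationFilter];

// Placeholder output location; any writable file URL works.
NSString *moviePath =
    [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/grayscale.m4v"];
unlink([moviePath UTF8String]); // the writer won't overwrite an existing file
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:[NSURL fileURLWithPath:moviePath]
                                             size:CGSizeMake(640.0, 480.0)];
[saturationFilter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];

// ...later, when recording should stop:
// [saturationFilter removeTarget:movieWriter];
// [movieWriter finishRecording];
```

Saving the finished file to the camera roll is then a separate step, e.g. via ALAssetsLibrary's writeVideoAtPathToSavedPhotosAlbum:completionBlock:.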
