How do I control AVAssetWriter to write at the correct FPS

Let me see if I understood it correctly.

On the most advanced hardware at present, iOS allows me to record at the following frame rates: 30, 60, 120 and 240 fps.

But these frame rates behave differently. If I shoot at 30 or 60 fps, I expect the video files created from shooting at those rates to play at 30 and 60 fps respectively.

But if I shoot at 120 or 240 fps, I expect the video files created from shooting at those rates to play at 30 fps, or I will not see the slow motion.

A few questions:

  1. Am I right?
  2. Is there a way to shoot at 120 or 240 fps and play back at 120 and 240 fps respectively? I mean, play back at the fps the videos were shot at, without slo-mo?
  3. How do I control that frame rate when I write the file?

I am creating the AVAssetWriter input like this...

  NSDictionary *videoCompressionSettings = @{AVVideoCodecKey                  : AVVideoCodecH264,
                                             AVVideoWidthKey                  : @(videoWidth),
                                             AVVideoHeightKey                 : @(videoHeight),
                                             AVVideoCompressionPropertiesKey  : @{ AVVideoAverageBitRateKey      : @(bitsPerSecond),
                                                                                   AVVideoMaxKeyFrameIntervalKey : @(1)}
                                             };

    _assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoCompressionSettings];

and there is no apparent way to control that.

NOTE: I have tried different numbers where that 1 is (the AVVideoMaxKeyFrameIntervalKey value). I have tried 1.0/fps, I have tried fps, and I have removed the key entirely. No difference.

This is how I set up the AVAssetWriter:

  AVAssetWriter *newAssetWriter = [[AVAssetWriter alloc] initWithURL:_movieURL fileType:AVFileTypeQuickTimeMovie
                                          error:&error];

  _assetWriter = newAssetWriter;
  _assetWriter.shouldOptimizeForNetworkUse = NO;

  CGFloat videoWidth = size.width;
  CGFloat videoHeight  = size.height;

  NSUInteger numPixels = videoWidth * videoHeight;
  NSUInteger bitsPerSecond;

  // Assume that lower-than-SD resolutions are intended for streaming, and use a lower bitrate
  //  if ( numPixels < (640 * 480) )
  //    bitsPerPixel = 4.05; // This bitrate matches the quality produced by AVCaptureSessionPresetMedium or Low.
  //  else
  CGFloat bitsPerPixel = 11.4; // This bitrate matches the quality produced by AVCaptureSessionPresetHigh.
                               // (Must be a floating-point type; an NSUInteger would silently truncate 11.4 to 11.)

  bitsPerSecond = (NSUInteger)(numPixels * bitsPerPixel);

  NSDictionary *videoCompressionSettings = @{AVVideoCodecKey                  : AVVideoCodecH264,
                                             AVVideoWidthKey                  : @(videoWidth),
                                             AVVideoHeightKey                 : @(videoHeight),
                                             AVVideoCompressionPropertiesKey  : @{ AVVideoAverageBitRateKey      : @(bitsPerSecond)}
                                             };

  if (![_assetWriter canApplyOutputSettings:videoCompressionSettings forMediaType:AVMediaTypeVideo]) {
    NSLog(@"Couldn't add asset writer video input.");
    return;
  }

 _assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                              outputSettings:videoCompressionSettings
                                                            sourceFormatHint:formatDescription];
  _assetWriterVideoInput.expectsMediaDataInRealTime = YES;      

  NSDictionary *adaptorDict = @{
                                (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                (id)kCVPixelBufferWidthKey : @(videoWidth),
                                (id)kCVPixelBufferHeightKey : @(videoHeight)
                                };

  _pixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                         initWithAssetWriterInput:_assetWriterVideoInput
                         sourcePixelBufferAttributes:adaptorDict];


  // Add asset writer input to asset writer
  if (![_assetWriter canAddInput:_assetWriterVideoInput]) {
    return;
  }

  [_assetWriter addInput:_assetWriterVideoInput];

The captureOutput method is very simple. I get the image from the filter and write it to the file using:

if (videoJustStartWriting)
    [_assetWriter startSessionAtSourceTime:presentationTime];

  CVPixelBufferRef renderedOutputPixelBuffer = NULL;
  OSStatus err = CVPixelBufferPoolCreatePixelBuffer(nil,
                                                    _pixelBufferAdaptor.pixelBufferPool,
                                                    &renderedOutputPixelBuffer);

  if (err) return; //          NSLog(@"Cannot obtain a pixel buffer from the buffer pool");

  //_ciContext is a metal context
  [_ciContext render:finalImage
     toCVPixelBuffer:renderedOutputPixelBuffer
              bounds:[finalImage extent]
          colorSpace:_sDeviceRgbColorSpace];

   [self writeVideoPixelBuffer:renderedOutputPixelBuffer
                  withInitialTime:presentationTime];


- (void)writeVideoPixelBuffer:(CVPixelBufferRef)pixelBuffer withInitialTime:(CMTime)presentationTime
{

  if ( _assetWriter.status == AVAssetWriterStatusUnknown ) {
    // If the asset writer status is unknown, implies writing hasn't started yet, hence start writing with start time as the buffer's presentation timestamp
    if ([_assetWriter startWriting]) {
      [_assetWriter startSessionAtSourceTime:presentationTime];
    }
  }

  if ( _assetWriter.status == AVAssetWriterStatusWriting ) {
    // If the asset writer status is writing, append sample buffer to its corresponding asset writer input

      if (_assetWriterVideoInput.readyForMoreMediaData) {
        if (![_pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime]) {
          NSLog(@"error", [_assetWriter.error localizedFailureReason]);
        }
      }
  }

  if ( _assetWriter.status == AVAssetWriterStatusFailed ) {
    NSLog(@"failed");
  }

}

I set the whole thing up to shoot at 240 fps. These are the presentation times of the frames being appended:

time ======= 113594.311510508
time ======= 113594.324011508
time ======= 113594.328178716
time ======= 113594.340679424
time ======= 113594.344846383

If you do some calculation between them you will see that the frame rate is about 240 fps. So the frames are being stored with the correct timestamps.

But when I watch the video the movement is not in slow motion, and QuickTime says the video is 30 fps.

Note: this app grabs frames from the camera, the frames go into CIFilters, and the result of those filters is converted back to a sample buffer that is stored to a file and displayed on the screen.

I'm reaching here, but I think this is where you're going wrong. Think of your video capture as a pipeline.

(1) Capture buffer -> (2) Do Something With buffer -> (3) Write buffer as frames in video.

Sounds like you've successfully completed (1) and (2): you're getting the buffers fast enough and you're processing them, so you can vend them as frames.

The problem is almost certainly in (3), writing the video frames.

https://developer.apple.com/reference/avfoundation/avmutablevideocomposition

Check out the frameDuration setting on your AVMutableVideoComposition; you'll need something like CMTimeMake(1, 60) // 60 fps or CMTimeMake(1, 240) // 240 fps to get what you're after (telling the composition to write this many frames and encode at this rate).
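
A minimal sketch of that idea, assuming the written movie is played or exported through a video composition (the variable names are illustrative, not taken from the question's code):

  AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
  videoComposition.frameDuration = CMTimeMake(1, 240); // each output frame lasts 1/240 s, i.e. 240 fps
  // hand videoComposition to the AVPlayerItem or AVAssetExportSession that plays or exports the asset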

Using AVAssetWriter it's exactly the same principle, but you set the frame rate in the AVAssetWriterInput outputSettings by adding AVVideoExpectedSourceFrameRateKey (it belongs inside the AVVideoCompressionPropertiesKey dictionary):

  NSDictionary *videoCompressionSettings = @{AVVideoCodecKey                 : AVVideoCodecH264,
                                             AVVideoWidthKey                 : @(videoWidth),
                                             AVVideoHeightKey                : @(videoHeight),
                                             AVVideoCompressionPropertiesKey : @{ AVVideoAverageBitRateKey          : @(bitsPerSecond),
                                                                                  AVVideoExpectedSourceFrameRateKey : @(60),
                                                                                  AVVideoMaxKeyFrameIntervalKey     : @(1)}
                                             };

To expand a little more: you can't strictly control or sync your camera capture exactly to the output / playback rate; the timing just doesn't work that way and isn't that exact, and of course the processing pipeline adds overhead. When you capture frames they are time stamped, as you've seen, but in the writing / compression phase it uses only the frames it needs to produce the output specified for the composition.

It goes both ways: you could capture only 30 fps and write out at 240 fps, and the video would display fine; you'd just have a lot of frames "missing" and being filled in by the algorithm. You can even vend only 1 frame per second and play back at 30 fps; the two are separate from each other (how fast I capture vs. how many frames I present per second).

As to how to play it back at a different speed, you just need to tweak the playback speed: slow it down as needed.

If you've correctly set the time base (frameDuration), it will always play back "normal": you're telling it "playback is X frames per second". Of course your eye may notice a difference (almost certainly between low FPS and high FPS), and the screen may not refresh that high (above 60 fps), but regardless the video will be at a "normal" 1x speed for its timebase. By slowing the video down: if my timebase is 120 and I slow it to 0.5x, I now effectively see 60 fps and one second of playback takes two seconds.

You control the playback speed by setting the rate property on AVPlayer: https://developer.apple.com/reference/avfoundation/avplayer
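
For example, a minimal sketch (player and movieURL are illustrative names; any AVPlayer showing the written movie will do):

  AVPlayer *player = [AVPlayer playerWithURL:movieURL]; // movieURL: the file the asset writer produced
  [player play];
  player.rate = 0.125f; // 240 fps content shown at an effective 30 fps, i.e. 8x slow motion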

The iOS screen refresh is locked at 60fps, so the only way to "see" the extra frames is, as you say, to slow down the playback rate, aka slow motion.

So

  1. Yes, you are right.
  2. The screen refresh rate (and perhaps limitations of the human visual system, assuming you're human?) means that you cannot perceive 120 and 240 fps frame rates. You can play them at normal speed by downsampling to the screen refresh rate. Surely this is what AVPlayer already does, although I'm not sure if that's the answer you're looking for.
  3. You control the frame rate of the file when you write it, via the CMSampleBuffer presentation timestamps. If your frames are coming from the camera, you're probably passing the timestamps straight through, in which case check that you really are getting the frame rate you asked for (a log statement in your capture callback should be enough to verify this). If you're procedurally creating frames, then you choose the presentation timestamps so that they're spaced 1.0/desiredFrameRate seconds apart (see the sketch after this list)!
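
A minimal sketch of that last point, assuming procedurally generated frames, a caller-maintained frame counter frameIndex, and the pixel-buffer adaptor from the question (frameIndex and desiredFrameRate are illustrative names):

  int32_t desiredFrameRate = 240;
  CMTime presentationTime = CMTimeMake(frameIndex, desiredFrameRate); // frame i is presented at i/240 s
  [_pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
  frameIndex++; // consecutive frames end up spaced 1.0/desiredFrameRate seconds apart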

Is 3. not working for you?

p.s. you can discard and ignore AVVideoMaxKeyFrameIntervalKey; it's a quality setting and has nothing to do with playback frame rate.
