
How do you write audio to the first frame with AVAssetWriter while capturing video/audio on iOS?

Long story short, I am trying to implement a naive solution for streaming video from the iOS camera/microphone to a server.

I am using an AVCaptureSession with audio and video AVCaptureOutputs, and then using an AVAssetWriter / AVAssetWriterInput to capture video and audio in the captureOutput:didOutputSampleBuffer:fromConnection method and write the resulting video to a file.
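Roughly, the writer side looks like the simplified Swift sketch below (not my exact code; ChunkRecorder, startNewChunk(at:), and the output settings are just illustrative):

```swift
import AVFoundation

final class ChunkRecorder: NSObject,
                           AVCaptureVideoDataOutputSampleBufferDelegate,
                           AVCaptureAudioDataOutputSampleBufferDelegate {

    var writer: AVAssetWriter?
    var videoInput: AVAssetWriterInput?
    var audioInput: AVAssetWriterInput?
    private var sessionStarted = false

    /// Builds a fresh AVAssetWriter for the next 1-second chunk.
    func startNewChunk(at url: URL) throws {
        let newWriter = try AVAssetWriter(outputURL: url, fileType: .mp4)

        let video = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720
        ])
        let audio = AVAssetWriterInput(mediaType: .audio, outputSettings: [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVNumberOfChannelsKey: 1,
            AVSampleRateKey: 44_100
        ])
        video.expectsMediaDataInRealTime = true
        audio.expectsMediaDataInRealTime = true
        newWriter.add(video)
        newWriter.add(audio)

        guard newWriter.startWriting() else {
            throw newWriter.error ?? CocoaError(.fileWriteUnknown)
        }

        writer = newWriter
        videoInput = video
        audioInput = audio
        sessionStarted = false
    }

    // One delegate method serves both the video and the audio data output.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let writer = writer, writer.status == .writing else { return }

        // The writer session starts at the timestamp of the first buffer it sees;
        // if that first buffer happens to be video, audio earlier than this time
        // will not make it into the file.
        if !sessionStarted {
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }

        let input = output is AVCaptureVideoDataOutput ? videoInput : audioInput
        if let input = input, input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }
}
```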

To make this a stream, I am using an NSTimer to break the video into 1-second chunks (by hot-swapping in a different AVAssetWriter that has a different outputURL) and uploading these chunks to a server over HTTP.
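The rotation, continuing the ChunkRecorder sketch above, is roughly as follows (rotateChunk(), uploadChunk(at:), recorder, and captureQueue are illustrative names; the swap is meant to run on the same serial queue that receives the sample buffers so it never races an append):

```swift
import AVFoundation

extension ChunkRecorder {

    func rotateChunk() {
        guard let finishedWriter = writer,
              let finishedVideo = videoInput,
              let finishedAudio = audioInput else { return }

        // Swap in a fresh writer first so incoming buffers go straight to the new chunk.
        let nextURL = FileManager.default.temporaryDirectory
            .appendingPathComponent("chunk-\(Date().timeIntervalSince1970).mp4")
        try? startNewChunk(at: nextURL)

        // Then close out the previous chunk and upload it.
        finishedVideo.markAsFinished()
        finishedAudio.markAsFinished()
        finishedWriter.finishWriting {
            self.uploadChunk(at: finishedWriter.outputURL)
        }
    }

    func uploadChunk(at url: URL) {
        // Placeholder for the HTTP upload of the finished 1-second file.
    }
}

// Fired once per second, e.g. from whatever object owns the recorder:
// Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
//     captureQueue.async { recorder.rotateChunk() }
// }
```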

This is working, but the issue I'm running into is this: the beginning of the .mp4 files always appears to be missing audio in the first frame, so when the video files are concatenated on the server (running ffmpeg) there is a noticeable audio skip at the boundaries between these files. The video is just fine - no skipping.

[Image: blank audio]

I tried many ways of making sure no CMSampleBuffers were dropped, and checked their timestamps to make sure they were going to the right AVAssetWriter, but to no avail.
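The timestamp checks were roughly of this form (simplified; logTiming is just an illustrative helper):

```swift
import CoreMedia

// Compare each buffer's presentation time against the start time of the chunk
// it is being routed to.
func logTiming(_ label: String, _ sampleBuffer: CMSampleBuffer, chunkStart: CMTime) {
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    let duration = CMSampleBufferGetDuration(sampleBuffer)
    let offsetIntoChunk = CMTimeSubtract(pts, chunkStart)
    print(label,
          "pts=\(CMTimeGetSeconds(pts))s",
          "dur=\(CMTimeGetSeconds(duration))s",
          "offset=\(CMTimeGetSeconds(offsetIntoChunk))s")
}
```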

Checking the AVCam example (which uses AVCaptureMovieFileOutput) and the AVCaptureLocation example (which uses AVAssetWriter), it appears the files they generate do the same thing.

Maybe there is something fundamental I am misunderstanding here about the nature of audio/video files, as I'm new to video/audio capture - but I thought I'd check before trying to work around this by learning to use ffmpeg to fragment the stream, as some seem to do (if you have any tips on that, too, let me know!). Thanks in advance!

I had the same problem and solved it by recording audio with a different API, Audio Queue. This seems to solve it; you just need to take care of timing in order to avoid sound delay.
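A minimal sketch of the kind of Audio Queue input setup I mean (the format, buffer sizes, and callback body here are illustrative; the actual writing/upload and timing alignment are up to you):

```swift
import AudioToolbox

// 16-bit mono linear PCM at 44.1 kHz; any concrete format works, this one is an example.
var format = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0
)

// The input callback receives the recorded audio plus `startTime`, the timestamp
// of the first frame in the buffer - that is what lets you line the audio up
// with the video chunks.
let inputCallback: AudioQueueInputCallback = { _, queue, buffer, startTime, _, _ in
    // Hand buffer.pointee.mAudioData / mAudioDataByteSize to whatever writes or
    // uploads the audio, then re-enqueue the buffer so recording continues.
    _ = AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
if AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue) == noErr,
   let queue = queue {
    // Prime the queue with a few buffers, then start recording.
    for _ in 0..<3 {
        var buffer: AudioQueueBufferRef?
        if AudioQueueAllocateBuffer(queue, 4096, &buffer) == noErr, let buffer = buffer {
            _ = AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }
    }
    _ = AudioQueueStart(queue, nil)
}
```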
