
iOS: Process audio from AVPlayer video track

I plan to refactor the recording system in my iOS app. Context: up to now, I have recorded video and audio separately, starting both recordings at approximately the same time. Once recording is finished, still within the same system, I play the video and audio back separately, applying AudioUnits to the audio on the fly. Finally, I merge the video and the modified audio. It happens that the two recordings don't start at exactly the same time (for whatever reason), producing an unsynchronized result.

Would it be possible to refactor my system like this:

1) Record a normal video with audio into a .mov file --> I would be sure that the audio and video are synchronized.

2) While viewing the result with AVPlayer, process the audio part on the fly (I will use AudioKit) --> that's the part I'm not confident about.
   Would I be able to send the audio buffer to AudioKit (which would process it) and hand the processed audio back to AVPlayer, as if it were the original AVPlayer audio track?

3) Save a final file with the video and the modified audio --> the easy part, with AVFoundation (a rough sketch of this step follows below).
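For reference, here is roughly how I would do the final merge in step 3 (a minimal sketch; videoURL, processedAudioURL and outputURL are placeholder names):

import AVFoundation

// Merge the original video track with the processed audio track into one .mov file
func merge(videoURL: URL, processedAudioURL: URL, to outputURL: URL,
           completion: @escaping (Error?) -> Void) {
    let composition = AVMutableComposition()
    let videoAsset = AVURLAsset(url: videoURL)
    let audioAsset = AVURLAsset(url: processedAudioURL)

    guard
        let videoAssetTrack = videoAsset.tracks(withMediaType: .video).first,
        let audioAssetTrack = audioAsset.tracks(withMediaType: .audio).first,
        let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid),
        let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid)
    else {
        completion(NSError(domain: "merge", code: -1, userInfo: nil))
        return
    }

    do {
        // Copy the full duration of the video and lay the processed audio alongside it
        let range = CMTimeRange(start: .zero, duration: videoAsset.duration)
        try videoTrack.insertTimeRange(range, of: videoAssetTrack, at: .zero)
        try audioTrack.insertTimeRange(range, of: audioAssetTrack, at: .zero)
    } catch {
        completion(error)
        return
    }

    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetHighestQuality) else {
        completion(NSError(domain: "merge", code: -2, userInfo: nil))
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously {
        completion(export.error)
    }
}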

Please ask if you need any further information ;)

I can think of one fairly simple way to do this.

Basically you just need to open your video file in an AKPlayer instance. Then you mute the video's audio. Now you have the video's audio in AudioKit. It's pretty simple to lock the video and audio together using a common clock. Pseudo-code of the flow:

import AVFoundation
import AudioKit
import CoreMedia

// This will represent a common clock using the host time
let audioClock = CMClockGetHostTimeClock()

// your video player -- muted, so that only the AudioKit chain is audible
let videoPlayer = AVPlayer(url: videoURL)
videoPlayer.masterClock = audioClock
videoPlayer.automaticallyWaitsToMinimizeStalling = false
videoPlayer.isMuted = true

....

var audioPlayer: AKPlayer?

// your video-audio player -- AKPlayer reads the audio track of the same file
if let player = try? AKPlayer(url: videoURL) {
    audioPlayer = player
}

// Schedule both players against the same host time so they start in sync
func schedulePlayback(videoTime: TimeInterval, audioTime: TimeInterval, hostTime: UInt64) {
    audioPlay(at: audioTime, hostTime: hostTime)
    videoPlay(at: videoTime, hostTime: hostTime)
}

func audioPlay(at time: TimeInterval = 0, hostTime: UInt64 = 0) {
    audioPlayer?.play(when: time, hostTime: hostTime)
}

func videoPlay(at time: TimeInterval = 0, hostTime: UInt64 = 0) {
    // Convert the host time to a CMTime and offset it by the requested start time
    let cmHostTime = CMClockMakeHostTimeFromSystemUnits(hostTime)
    let cmVTime = CMTimeMakeWithSeconds(time, 1000000)
    let futureTime = CMTimeAdd(cmHostTime, cmVTime)
    videoPlayer.setRate(1, time: kCMTimeInvalid, atHostTime: futureTime)
}

You can connect the player up to any AudioKit processing chain in the normal way.
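For example (a minimal sketch, assuming AudioKit 4.x; AKMoogLadder stands in for whatever processing you actually want):

import AudioKit

// Route the AKPlayer through an effect node and make that the engine output
let filter = AKMoogLadder(audioPlayer)
filter.cutoffFrequency = 800

AudioKit.output = filter
do {
    try AudioKit.start()
} catch {
    print("AudioKit failed to start: \(error)")
}

// With the engine running, schedulePlayback(...) above starts the muted
// AVPlayer video and the AudioKit audio against the same host time.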

When you want to export your audio, run an AKNodeRecorder on the final output processing chain. Record this to a file, then merge that audio into your video. I'm not sure whether the offline processing being worked on in AudioKit is ready yet, so you may need to play the audio back in real time to capture the processed output.
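A rough sketch of that capture step (again assuming AudioKit 4.x; filter is the last node of the chain from the previous snippet):

import AudioKit

do {
    // Tap the last node of the chain; with no file given, AKNodeRecorder
    // records into a temporary file of its own
    let recorder = try AKNodeRecorder(node: filter)

    try recorder.record()   // start capturing, then play the whole clip in real time
    // ... real-time playback happens here ...
    recorder.stop()

    // recorder.audioFile now contains the processed audio; merge it into the
    // video with AVFoundation (e.g. AVMutableComposition + AVAssetExportSession).
} catch {
    print("Recording failed: \(error)")
}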
