Playing a stereo audio buffer from memory with AVAudioEngine

I am trying to play a stereo audio buffer from memory (not from a file) in my iOS app, but my application crashes when I attempt to attach the AVAudioPlayerNode 'playerNode' to the AVAudioEngine 'audioEngine'. The error that I get is as follows:

Thread 1: Exception: "required condition is false: _outputFormat.channelCount == buffer.format.channelCount"

I don't know if this is due to the way I have declared the AVAudioEngine and the AVAudioPlayerNode, whether there is something wrong with the buffer I am generating, or whether I am attaching the nodes incorrectly (or something else!). I have a feeling that it is something to do with how I am creating the new buffer. I am trying to make a stereo buffer from two separate 'mono' arrays, and perhaps its format is not correct.

I have declared audioEngine: AVAudioEngine! and playerNode: AVAudioPlayerNode! globally:

var audioEngine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!

I then load a mono source audio file that my app is going to process (the data from this file will not be played directly; it will be loaded into an array, processed, and then loaded into a new buffer):

    // Read audio file
    let audioFileFormat = audioFile.processingFormat
    let frameCount = UInt32(audioFile.length)
    let audioBuffer = AVAudioPCMBuffer(pcmFormat: audioFileFormat, frameCapacity: frameCount)!
    
    // Read audio data into buffer
    do {
        try audioFile.read(into: audioBuffer)
    } catch let error {
        print(error.localizedDescription)
    }
    // Convert buffer to array of floats
    let input: [Float] = Array(UnsafeBufferPointer(start: audioBuffer.floatChannelData![0], count: Int(audioBuffer.frameLength)))

The array is then sent twice to a convolution function, which returns a new array each time. This is because the mono source file needs to become a stereo audio buffer:

    maxSignalLength = input.count + 256
    let leftAudioArray: [Float] = convolve(inputAudio: input, impulse: normalisedLeftImpulse)
    let rightAudioArray: [Float] = convolve(inputAudio: input, impulse: normalisedRightImpulse)

The maxSignalLength variable is currently the length of the input signal plus the length of the impulse response (normalisedImpulseResponse) it is being convolved with, which at the moment is 256. This will become an appropriate variable at some point.
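(For reference, convolve is not shown here; a minimal direct time-domain convolution with the same signature could be sketched as follows. This is only an illustration, not necessarily the exact implementation used:)

    // Illustrative sketch only: direct time-domain convolution.
    // A full convolution produces inputAudio.count + impulse.count - 1 samples;
    // the real implementation may pad differently or use Accelerate/vDSP instead.
    func convolve(inputAudio: [Float], impulse: [Float]) -> [Float] {
        var output = [Float](repeating: 0, count: inputAudio.count + impulse.count - 1)
        for i in 0 ..< inputAudio.count {
            for j in 0 ..< impulse.count {
                output[i + j] += inputAudio[i] * impulse[j]
            }
        }
        return output
    }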

I then declare and load the new buffer and its format. I have a feeling that the mistake is somewhere around here, as this will be the buffer that is played:

    let bufferFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: hrtfSampleRate, channels: 2, interleaved: false)!
    let outputBuffer = AVAudioPCMBuffer(pcmFormat: bufferFormat, frameCapacity: AVAudioFrameCount(maxSignalLength))!

Notice that I am not creating an interleaved buffer. I load the stereo audio data into the buffer as follows (which I think may also be wrong):

    for ch in 0 ..< 2 {
        for i in 0 ..< maxSignalLength {
            
            var val: Float!
            
            if ch == 0 { // Left
                
                val = leftAudioArray[i]
                // Limit to [-1, 1]
                if val > 1 {
                    val = 1
                }
                if val < -1 {
                    val = -1
                }
                
            } else if ch == 1 { // Right
                
                val = rightAudioArray[i]
                // Limit to [-1, 1]
                if val > 1 {
                    val = 1
                }
                if val < -1 {
                    val = -1
                }
            }
            
            outputBuffer.floatChannelData![ch][i] = val
        }
    }

The audio is also limited to values between -1 and 1.
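(As a side note, the same limiting could also be done in one call per channel with Accelerate's vDSP.clip, available from iOS 13; this is just an alternative, not part of the code above:)

    import Accelerate

    // Alternative to the per-sample limiting (assumes iOS 13+).
    let limitedLeft = vDSP.clip(leftAudioArray, to: -1 ... 1)
    let limitedRight = vDSP.clip(rightAudioArray, to: -1 ... 1)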

Then I finally come to (attempting to) load the buffer into the audio node, attach the audio node to the audio engine, start the audio engine, and then play the node.

    let frameCapacity = AVAudioFramePosition(outputBuffer.frameCapacity)
    let frameLength = outputBuffer.frameLength
    
    playerNode.scheduleBuffer(outputBuffer, at: nil, options: AVAudioPlayerNodeBufferOptions.interrupts, completionHandler: nil)
    playerNode.prepare(withFrameCount: frameLength)
    let time = AVAudioTime(sampleTime: frameCapacity, atRate: hrtfSampleRate)
    
    audioEngine.attach(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: outputBuffer.format)
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch let error {
        print(error.localizedDescription)
    }
    
    playerNode.play(at: time)

The error that I get at runtime is:

AVAEInternal.h:76    required condition is false: [AVAudioPlayerNode.mm:712:ScheduleBuffer: (_outputFormat.channelCount == buffer.format.channelCount)]

It doesn't show the line that this error occurs on. I have been stuck on this for a while now and have tried lots of different things, but from what I could find there doesn't seem to be much clear information about playing audio from memory, rather than from files, with AVAudioEngine. Any help would be greatly appreciated.

Thanks!

Edit #1: Better title

Edit #2: UPDATE - I have found out why I was getting the error. It seemed to be caused by setting up the playerNode before attaching it to the audioEngine. Swapping the order stopped the program from crashing and throwing the error:

    let frameCapacity = AVAudioFramePosition(outputBuffer.frameCapacity)
    let frameLength = outputBuffer.frameLength
    
    audioEngine.attach(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: outputBuffer.format)
    audioEngine.prepare()
    
    playerNode.scheduleBuffer(outputBuffer, at: nil, options: AVAudioPlayerNodeBufferOptions.interrupts, completionHandler: nil)
    playerNode.prepare(withFrameCount: frameLength)
    let time = AVAudioTime(sampleTime: frameCapacity, atRate: hrtfSampleRate)
    
    do {
        try audioEngine.start()
    } catch let error {
        print(error.localizedDescription)
    }

    playerNode.play(at: time)

However, I don't have any sound. After creating an array of floats from the outputBuffer with the same method as used for the input signal, and taking a look at its contents with a breakpoint, it seems to be empty, so I must also be storing the data in the outputBuffer incorrectly.
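(The check I mean is along these lines, reading each channel of the outputBuffer back into a Swift array in the same way the input signal was read earlier:)

    // Inspection only: read each channel of outputBuffer back into an array.
    let leftCheck: [Float] = Array(UnsafeBufferPointer(start: outputBuffer.floatChannelData![0],
                                                       count: Int(outputBuffer.frameLength)))
    let rightCheck: [Float] = Array(UnsafeBufferPointer(start: outputBuffer.floatChannelData![1],
                                                        count: Int(outputBuffer.frameLength)))

If frameLength is still at its default of 0 at this point, these arrays will come out empty regardless of what was written to floatChannelData, which ties in with the fix described below.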

You might be creating and filling your buffer incorrectly. Try doing it thus:

let fileURL = Bundle.main.url(forResource: "my_file", withExtension: "aiff")!
let file = try! AVAudioFile(forReading: fileURL)
let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: UInt32(file.length))!
try! file.read(into: buffer)

I have fixed the issue!

I tried a lot of solutions and ended up completely rewriting the audio engine section of my app. I now have the AVAudioEngine and AVAudioPlayerNode declared within the ViewController class as follows:

class ViewController: UIViewController {

var audioEngine: AVAudioEngine = AVAudioEngine()
var playerNode: AVAudioPlayerNode = AVAudioPlayerNode()

...

I am still unclear whether it is better to declare these globally or as class variables in iOS, but I can confirm that my application plays audio with them declared within the ViewController class. I do know that they shouldn't be declared inside a function, as they will go out of scope and playback will stop when the function returns.

However, I still was not getting any audio output until I set the AVAudioPCMBuffer.frameLength to frameCapacity.

I could find very little information online about creating a new AVAudioPCMBuffer from an array of floats, but this seems to be the missing step that I needed to make my outputBuffer playable. Before I set it, frameLength was 0 by default.

The frameLength property isn't something you pass in when creating the buffer (only frameCapacity is part of the AVAudioPCMBuffer initializer). But it is important, and my buffer wasn't playable until I set it manually, after creating the buffer instance:

let bufferFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: hrtfSampleRate, channels: 2, interleaved: false)!
let frameCapacity = UInt32(audioFile.length)
guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: bufferFormat, frameCapacity: frameCapacity) else {
    fatalError("Could not create output buffer.")
}
outputBuffer.frameLength = frameCapacity // Important!
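Putting the pieces together, the overall flow then looks roughly like this (a condensed sketch reusing the variable names from earlier, not a verbatim copy of the final code):

// Condensed sketch: fill the buffer from the processed channel arrays...
let frames = min(Int(outputBuffer.frameLength), leftAudioArray.count, rightAudioArray.count)
for i in 0 ..< frames {
    outputBuffer.floatChannelData![0][i] = min(max(leftAudioArray[i], -1), 1)
    outputBuffer.floatChannelData![1][i] = min(max(rightAudioArray[i], -1), 1)
}

// ...then attach, connect, start and play.
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: outputBuffer.format)
audioEngine.prepare()
do {
    try audioEngine.start()
} catch {
    print(error.localizedDescription)
}

playerNode.scheduleBuffer(outputBuffer, at: nil, options: .interrupts, completionHandler: nil)
playerNode.play()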

This took a long time to figure out; hopefully it will help someone else in the future.
