
Playing a stereo audio buffer from memory with AVAudioEngine

I am trying to play a stereo audio buffer from memory (not from a file) in my iOS app but my application crashes when I attempt to attach the AVAudioPlayerNode 'playerNode' to the AVAudioEngine 'audioEngine'. The error code that I get is as follows:

Thread 1: Exception: "required condition is false: _outputFormat.channelCount == buffer.format.channelCount"

I don't know if this is due to the way I have declared the AVAudioEngine or the AVAudioPlayerNode, whether there is something wrong with the buffer I am generating, or whether I am attaching the nodes incorrectly (or something else!). I have a feeling that it is something to do with how I am creating the new buffer. I am trying to make a stereo buffer from two separate 'mono' arrays, and perhaps its format is not correct.

I have declared audioEngine: AVAudioEngine! and playerNode: AVAudioPlayerNode! globally:

var audioEngine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!

I then load a mono source audio file that my app is going to process (the data from this file will not be played directly; it will be loaded into an array, processed, and then loaded into a new buffer):

    // Read audio file
    let audioFileFormat = audioFile.processingFormat
    let frameCount = UInt32(audioFile.length)
    let audioBuffer = AVAudioPCMBuffer(pcmFormat: audioFileFormat, frameCapacity: frameCount)!
    
    // Read audio data into buffer
    do {
        try audioFile.read(into: audioBuffer)
    } catch let error {
        print(error.localizedDescription)
    }
    // Convert buffer to array of floats
    let input: [Float] = Array(UnsafeBufferPointer(start: audioBuffer.floatChannelData![0], count: Int(audioBuffer.frameLength)))

The array is then passed to a convolution function twice, which returns a new array each time. This is because the mono source file needs to become a stereo audio buffer:

    maxSignalLength = input.count + 256
    let leftAudioArray: [Float] = convolve(inputAudio: input, impulse: normalisedLeftImpulse)
    let rightAudioArray: [Float] = convolve(inputAudio: input, impulse: normalisedRightImpulse)

The maxSignalLength variable is currently the length of the input signal plus the length of the impulse response it is being convolved with, which at the moment is 256 samples. This will become an appropriate variable at some point.
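
The convolve function itself isn't shown here. For context, a minimal time-domain sketch of what such a function might look like (an assumption about its shape; a real implementation could just as well use Accelerate/vDSP or FFT-based convolution) is:

    func convolve(inputAudio: [Float], impulse: [Float]) -> [Float] {
        // Direct time-domain convolution: the output length is
        // input length + impulse length, matching maxSignalLength above.
        var output = [Float](repeating: 0, count: inputAudio.count + impulse.count)
        for i in 0 ..< inputAudio.count {
            for j in 0 ..< impulse.count {
                output[i + j] += inputAudio[i] * impulse[j]
            }
        }
        return output
    }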

I then declare the new buffer and its format, and load the data into it. I have a feeling that the mistake is somewhere around here, as this will be the buffer that is played:

    let bufferFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: hrtfSampleRate, channels: 2, interleaved: false)!
    let outputBuffer = AVAudioPCMBuffer(pcmFormat: bufferFormat, frameCapacity: AVAudioFrameCount(maxSignalLength))!

Notice that I am not creating an interleaved buffer. I load the stereo audio data into the buffer as follows (which I think may also be wrong):

    for ch in 0 ..< 2 {
        for i in 0 ..< maxSignalLength {

            var val: Float!

            if ch == 0 { // Left

                val = leftAudioArray[i]
                // Limit
                if val > 1 {
                    val = 1
                }
                if val < -1 {
                    val = -1
                }

            } else if ch == 1 { // Right

                val = rightAudioArray[i]
                // Limit
                if val > 1 {
                    val = 1
                }
                if val < -1 {
                    val = -1
                }
            }

            outputBuffer.floatChannelData![ch][i] = val
        }
    }

The audio is also limited to values between -1 and 1.
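
Incidentally, a more compact way to write the same fill (a sketch, assuming leftAudioArray and rightAudioArray each contain at least maxSignalLength samples) would be:

    // Copy each mono array into its channel of the non-interleaved buffer,
    // clamping samples to [-1, 1].
    for (ch, samples) in [leftAudioArray, rightAudioArray].enumerated() {
        let dst = outputBuffer.floatChannelData![ch]
        for i in 0 ..< maxSignalLength {
            dst[i] = min(max(samples[i], -1), 1)
        }
    }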

Then I finally come to (attempting to) schedule the buffer on the audio node, attach the audio node to the audio engine, start the audio engine, and then play the node.

    let frameCapacity = AVAudioFramePosition(outputBuffer.frameCapacity)
    let frameLength = outputBuffer.frameLength
    
    playerNode.scheduleBuffer(outputBuffer, at: nil, options: AVAudioPlayerNodeBufferOptions.interrupts, completionHandler: nil)
    playerNode.prepare(withFrameCount: frameLength)
    let time = AVAudioTime(sampleTime: frameCapacity, atRate: hrtfSampleRate)
    
    audioEngine.attach(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: outputBuffer.format)
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch let error {
        print(error.localizedDescription)
    }
    
    playerNode.play(at: time)

The error that I get at runtime is:

AVAEInternal.h:76    required condition is false: [AVAudioPlayerNode.mm:712:ScheduleBuffer: (_outputFormat.channelCount == buffer.format.channelCount)]

It doesn't show the line that this error occurs on. I have been stuck on this for a while now and have tried lots of different things, but from what I can find there doesn't seem to be much clear information about playing audio from memory (rather than from files) with AVAudioEngine. Any help would be greatly appreciated.

Thanks!

Edit #1: Better title

Edit #2: UPDATE - I have found out why I was getting the error. It seemed to be caused by setting up the playerNode before attaching it to the audioEngine. Swapping the order stopped the program from crashing and throwing the error:

    let frameCapacity = AVAudioFramePosition(outputBuffer.frameCapacity)
    let frameLength = outputBuffer.frameLength
    
    audioEngine.attach(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: outputBuffer.format)
    audioEngine.prepare()
    
    playerNode.scheduleBuffer(outputBuffer, at: nil, options: AVAudioPlayerNodeBufferOptions.interrupts, completionHandler: nil)
    playerNode.prepare(withFrameCount: frameLength)
    let time = AVAudioTime(sampleTime: frameCapacity, atRate: hrtfSampleRate)
    
    do {
        try audioEngine.start()
    } catch let error {
        print(error.localizedDescription)
    }

    playerNode.play(at: time)

However, I don't get any sound. After creating an array of floats from the outputBuffer (using the same method as for the input signal) and taking a look at its contents with a breakpoint, it seems to be empty, so I must also be storing the data in the outputBuffer incorrectly.

You might be creating and filling your buffer incorrectly. Try doing it thus:

let fileURL = Bundle.main.url(forResource: "my_file", withExtension: "aiff")!
let file = try! AVAudioFile(forReading: fileURL)
let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: UInt32(file.length))!
try! file.read(into: buffer)

I have fixed the issue!

I tried a lot of solutions and ended up completely rewriting the audio engine section of my app. I now have the AVAudioEngine and AVAudioPlayerNode declared within the ViewController class as follows:

class ViewController: UIViewController {

    var audioEngine: AVAudioEngine = AVAudioEngine()
    var playerNode: AVAudioPlayerNode = AVAudioPlayerNode()

    ...

I am still unclear whether it is better to declare these globally or as class variables in iOS; however, I can confirm that my application plays audio with them declared within the ViewController class. I do know that they shouldn't be declared inside a function, as they will be deallocated and the audio will stop when the function goes out of scope.

However, I was still not getting any audio output until I set the AVAudioPCMBuffer's frameLength to frameCapacity.

I could find very little information online regarding creating a new AVAudioPCMBuffer from an array of floats, but this seems to be the missing step that I needed to do to make my outputBuffer playable. Before I set this, it was at 0 by default.

The frameLength property isn't set when the AVAudioPCMBuffer is created (it belongs to the buffer, not to AVAudioFormat). But it is important: my buffer wasn't playable until I set it manually, after creating the buffer instance:

let bufferFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: hrtfSampleRate, channels: 2, interleaved: false)!
let frameCapacity = UInt32(audioFile.length)
guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: bufferFormat, frameCapacity: frameCapacity) else {
    fatalError("Could not create output buffer.")
}
outputBuffer.frameLength = frameCapacity // Important!

This took a long time to find out; hopefully it will help someone else in the future.
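
For reference, here is a minimal sketch that pulls the working steps together. It assumes audioEngine and playerNode are the class properties declared above, that hrtfSampleRate, leftAudioArray and rightAudioArray come from the earlier processing steps, and playProcessedAudio() is just a hypothetical method name:

func playProcessedAudio() {
    // 1. Create a non-interleaved stereo buffer and copy the two mono arrays into it.
    let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                               sampleRate: hrtfSampleRate,
                               channels: 2,
                               interleaved: false)!
    let frameCount = AVAudioFrameCount(min(leftAudioArray.count, rightAudioArray.count))
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return }
    buffer.frameLength = frameCount // without this the buffer reports zero valid frames

    for (ch, samples) in [leftAudioArray, rightAudioArray].enumerated() {
        for i in 0 ..< Int(frameCount) {
            buffer.floatChannelData![ch][i] = min(max(samples[i], -1), 1)
        }
    }

    // 2. Attach and connect the player node before scheduling anything on it.
    audioEngine.attach(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
    audioEngine.prepare()

    // 3. Schedule the buffer, start the engine, then play.
    playerNode.scheduleBuffer(buffer, at: nil, options: .interrupts, completionHandler: nil)
    do {
        try audioEngine.start()
        playerNode.play()
    } catch {
        print(error.localizedDescription)
    }
}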
