
How can I specify the format of AVAudioEngine Mic-Input?

I'd like to record some audio using AVAudioEngine and the user's microphone. I already have a working sample, but I just can't figure out how to specify the format of the output that I want...

My requirement is that I need the AVAudioPCMBuffer as I speak, which it currently delivers...

Would I need to add a separate node that does some transcoding? I can't find much documentation/samples on that problem...

I am also a noob when it comes to audio. I know that I want NSData containing 16-bit PCM with a max sample rate of 16000 (8000 would be better).

Here's my working sample:

private var audioEngine = AVAudioEngine()
private let bus = 0

func startRecording() {

  let format = audioEngine.inputNode!.inputFormatForBus(bus)

  audioEngine.inputNode!.installTapOnBus(bus, bufferSize: 1024, format: format) { (buffer: AVAudioPCMBuffer, time: AVAudioTime) -> Void in

     let audioFormat = buffer.format
     print("\(audioFormat)")
  }

  audioEngine.prepare()
  do {
     try audioEngine.start()
  } catch { /* Imagine some super awesome error handling here */ }
}

If I change the format to, let's say,

let format = AVAudioFormat(commonFormat: AVAudioCommonFormat.PCMFormatInt16, sampleRate: 8000.0, channels: 1, interleaved: false)

then it will produce an error saying that the sample rate needs to be the same as the hardware input's...

Any help is very much appreciated!!!

EDIT: I just found AVAudioConverter, but I need to be compatible with iOS 8 as well...

You cannot change the audio format directly on the input or output nodes. For the microphone, the format will always be 44.1 kHz, 1 channel, 32-bit float. To work around this, you insert a mixer in between. Then, when you connect inputNode > changeformatMixer > mainEngineMixer, you can specify the details of the format you want.

Something like:

var inputNode = audioEngine.inputNode
var downMixer = AVAudioMixerNode()

//The engine's I/O nodes are already attached to it by default, so we attach only the downMixer here:
audioEngine.attachNode(downMixer)

//You can tap the downMixer to intercept the audio and do something with it:
downMixer.installTapOnBus(0, bufferSize: 2048, format: downMixer.outputFormatForBus(0)) { //originally 1024
    (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
    print("downMixer Tap")
    print("Downmixer Tap Format: " + downMixer.outputFormatForBus(0).description)
}

//let's get the input audio format right as it is
let format = inputNode.inputFormatForBus(0)
//initialize the 16 kHz mono format we want:
let format16KHzMono = AVAudioFormat(commonFormat: AVAudioCommonFormat.PCMFormatInt16, sampleRate: 16000.0, channels: 1, interleaved: true)

//connect the nodes inside the engine:
//INPUT NODE --format--> downMixer --16K format--> mainMixer
//as you can see, we downsample the default 44.1 kHz we get from the input to the 16 kHz we want
audioEngine.connect(inputNode, to: downMixer, format: format)//use default input format
audioEngine.connect(downMixer, to: audioEngine.outputNode, format: format16KHzMono)//use new audio format
//run the engine
audioEngine.prepare()
try! audioEngine.start()

I would recommend using an open framework such as EZAudio, instead, though.

The only thing I found that worked to change the sampling rate was

try AVAudioSession.sharedInstance().setPreferredSampleRate(...)
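A minimal sketch of that approach might look like this (the category/mode choices here are assumptions; note the rate is only a *preference*, so check the session's actual rate afterwards):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .measurement)
    try session.setPreferredSampleRate(16000.0)
    try session.setActive(true)
    // The hardware may ignore the preference; always read back the real rate:
    print("actual hardware rate: \(session.sampleRate)")
} catch {
    print("session setup failed: \(error)")
}
```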

You can tap off engine.inputNode and use the input node's output format:

engine.inputNode.installTap(onBus: 0, bufferSize: 2048,
                            format: engine.inputNode.outputFormat(forBus: 0)) { buffer, time in
    // process the buffer here
}

Unfortunately, there is no guarantee that you will get the sample rate that you want, although it seems like 8000, 12000, 16000, 22050, 44100 all worked.

The following did NOT work:

  1. Setting my custom format in a tap off engine.inputNode. (Exception)
  2. Adding a mixer with my custom format and tapping that. (Exception)
  3. Adding a mixer, connecting it with the inputNode's format, connecting the mixer to the main mixer with my custom format, then removing the input of the outputNode so as not to send the audio to the speaker and get instant feedback. (Worked, but got all zeros)
  4. Not using my custom format at all in the AVAudioEngine, and using AVAudioConverter to convert from the hardware rate in my tap. (Length of the buffer was not set, no way to tell if results were correct)

This was with iOS 12.3.1.

In order to change the sample rate of the input node, you have to first connect the input node to a mixer node and specify the new format in the parameter.

let input = avAudioEngine.inputNode
let mainMixer = avAudioEngine.mainMixerNode
let newAudioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
avAudioEngine.connect(input, to: mainMixer, format: newAudioFormat)

Now you can call installTap function on input node with the newAudioFormat.
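Continuing the snippet above, a tap using that format might look like the following. (This is a sketch: whether the tap actually accepts a format different from the hardware rate is exactly what the answers on this page disagree about, so verify on your target device.)

```swift
input.installTap(onBus: 0, bufferSize: 1024, format: newAudioFormat) { buffer, _ in
    // Inspect what actually arrives -- it should match newAudioFormat
    print(buffer.format)
}
```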

One more thing I'd like to point out: since the launch of the iPhone 12, the default sample rate of the input node is no longer 44100. It has been upgraded to 48000.

You cannot change the configuration of the input node. Instead, create a mixer node with the format that you want, attach it to the engine, connect it to the input node, and then connect the mainMixer to the node you just created. Now you can install a tap on this node to get PCM data.

Note that, for some strange reason, you don't have a lot of choice for the sample rate! At least not on iOS 9.1: use the standard 11025, 22050, or 44100. Any other sample rate will fail!
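The steps above can be sketched like this in modern Swift syntax (`formatMixer` and the 22050 Hz format are illustrative choices, not part of the original answer):

```swift
import AVFoundation

let engine = AVAudioEngine()
let formatMixer = AVAudioMixerNode() // intermediate mixer that carries the new format

let desiredFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                  sampleRate: 22050, channels: 1, interleaved: false)!

engine.attach(formatMixer)
// input -> formatMixer keeps the hardware format;
// formatMixer -> mainMixer carries the format we want
engine.connect(engine.inputNode, to: formatMixer,
               format: engine.inputNode.outputFormat(forBus: 0))
engine.connect(formatMixer, to: engine.mainMixerNode, format: desiredFormat)

// Tap the mixer, not the input node, to receive PCM data in the desired format:
formatMixer.installTap(onBus: 0, bufferSize: 1024, format: desiredFormat) { buffer, _ in
    print(buffer.format)
}
```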

If you just need to change the sample rate and channel count, I recommend using the low-level API. You do not need to use a mixer or converter. Here you can find the Apple document about low-level recording. If you want, you can convert it to an Objective-C class and add a protocol.

Audio Queue Services Programming Guide
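For reference, a bare-bones Audio Queue recording setup along the lines of that guide might look like this (buffer count and sizes are arbitrary choices; real code needs error checking on every call):

```swift
import AudioToolbox

// Describe the exact format we want: 16 kHz, mono, 16-bit signed integer PCM.
var format = AudioStreamBasicDescription(
    mSampleRate: 16000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2, mFramesPerPacket: 1,
    mBytesPerFrame: 2, mChannelsPerFrame: 1,
    mBitsPerChannel: 16, mReserved: 0)

// Called each time a buffer fills with recorded audio.
let callback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    // buffer.pointee.mAudioData holds mAudioDataByteSize bytes of 16-bit PCM
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil) // re-enqueue to keep recording
}

var queue: AudioQueueRef?
AudioQueueNewInput(&format, callback, nil, nil, nil, 0, &queue)
if let q = queue {
    for _ in 0..<3 { // a few buffers, ~100 ms each at 16 kHz mono 16-bit
        var buf: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(q, 3200, &buf)
        if let b = buf { AudioQueueEnqueueBuffer(q, b, 0, nil) }
    }
    AudioQueueStart(q, nil)
}
```

Because you hand the queue an AudioStreamBasicDescription up front, the conversion from the hardware rate happens inside the queue and you get buffers in your format directly.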

If your goal is simply to end up with AVAudioPCMBuffers that contain audio in your desired format, you can convert the buffers returned in the tap block using AVAudioConverter. This way, you actually don't need to know or care what the format of the inputNode is.

class MyBufferRecorder {
    
    private let audioEngine:AVAudioEngine = AVAudioEngine()
    private var inputNode:AVAudioInputNode!
    private let audioQueue:DispatchQueue = DispatchQueue(label: "Audio Queue 5000")
    private var isRecording:Bool = false
    
    func startRecording() {
        
        if (isRecording) {
            return
        }
        isRecording = true
        
        // must convert (unknown until runtime) input format to our desired output format
        inputNode = audioEngine.inputNode
        let inputFormat:AVAudioFormat! = inputNode.outputFormat(forBus: 0)
    
        // 9600 is somewhat arbitrary... min seems to be 4800, max 19200... it doesn't matter what we set
        // because we don't reuse this value -- we query the buffer returned in the tap block for its true length.
        // Using [weak self] in the tap block is probably a better idea, but it results in weird warnings for now
        inputNode.installTap(onBus: 0, bufferSize: AVAudioFrameCount(9600), format: inputFormat) { (buffer, time) in
            
            // not sure if this is necessary
            if (!self.isRecording) {
                print("\nDEBUG - rejecting callback, not recording")
                return }
            
            // not really sure if/why this needs to be async
            self.audioQueue.async {

                // Convert recorded buffer to our preferred format
                
                let convertedPCMBuffer = AudioUtils.convertPCMBuffer(bufferToConvert: buffer, fromFormat: inputFormat, toFormat: AudioUtils.desiredFormat)
            
                // do something with converted buffer
            }
        }
        do {
            // important not to start engine before installing tap
            try audioEngine.start()
        } catch {
            print("\nDEBUG - couldn't start engine!")
            return
        }
        
    }
    
    func stopRecording() {
        print("\nDEBUG - recording stopped")
        isRecording = false
        inputNode.removeTap(onBus: 0)
        audioEngine.stop()
    }
    
}

Separate class:

import Foundation
import AVFoundation

// assumes we want 16bit, mono, 44100hz
// change to what you want
class AudioUtils {
    
    static let desiredFormat:AVAudioFormat! = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: Double(44100), channels: 1, interleaved: false)
    
    // PCM <--> PCM
    static func convertPCMBuffer(bufferToConvert: AVAudioPCMBuffer, fromFormat: AVAudioFormat, toFormat: AVAudioFormat) -> AVAudioPCMBuffer {
        
        // scale the capacity by the sample-rate ratio so up-sampling doesn't truncate the output
        let ratio = toFormat.sampleRate / fromFormat.sampleRate
        let capacity = AVAudioFrameCount(Double(bufferToConvert.frameLength) * ratio)
        let convertedPCMBuffer = AVAudioPCMBuffer(pcmFormat: toFormat, frameCapacity: capacity)!
        var error: NSError? = nil
        
        // hand the source buffer to the converter exactly once, then report that no more data is coming
        var haveProvidedData = false
        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            if haveProvidedData {
                outStatus.pointee = .noDataNow
                return nil
            }
            haveProvidedData = true
            outStatus.pointee = .haveData
            return bufferToConvert
        }
        let formatConverter = AVAudioConverter(from: fromFormat, to: toFormat)!
        formatConverter.convert(to: convertedPCMBuffer, error: &error, withInputFrom: inputBlock)
        
        if let error = error {
            print("\nDEBUG - " + error.localizedDescription)
        }
        
        return convertedPCMBuffer
    }
}

This is by no means production-ready code -- I'm also learning iOS audio... so please, please let me know any errors, best practices, or dangerous things going on in that code, and I'll keep this answer updated.
