Swift 3 on iOS 10.2: How can I subscribe to individual samples from the microphone? Need to process audio in realtime

I'm trying to build an application for iOS 10.2 that works by processing the microphone input in realtime. For this application, I need to be able to perform a calculation on each individual sample of PCM audio as the microphone receives it.

I have a prototype that uses the microphone with AudioUnits and AVAudioSession: it polls the microphone every few milliseconds with an AURenderCallback function and pulls the samples collected since the last callback, but this is too slow and unreliable for my use case.

Is there anything like an event I can handle to pull a sample and then execute my code each time a single sample of audio is recorded by the microphone? How would I do this?

If you find the iOS audio render callbacks to be unreliable, you may be doing something wrong in your code (trying to do too much processing per sample, using them outside the audio unit callback, etc.). See my example RemoteIO recording code in this RecordAudio.swift gist.
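A minimal sketch of the pattern used in that kind of RemoteIO recording code: an input render callback pulls the newly captured frames with `AudioUnitRender` and then loops over them one sample at a time. The global `gAudioUnit` and the mono `Float32` format are assumptions for illustration; your actual unit setup and stream format may differ.

```swift
import AudioToolbox

// Assumed: a RemoteIO AudioUnit already configured for input (bus 1),
// mono Float32 samples, with this callback installed via
// kAudioOutputUnitProperty_SetInputCallback.
var gAudioUnit: AudioUnit?

let inputCallback: AURenderCallback = { (inRefCon, ioActionFlags, inTimeStamp,
                                         inBusNumber, inNumberFrames, ioData) -> OSStatus in
    guard let audioUnit = gAudioUnit else { return noErr }
    // Let AudioUnitRender supply the buffer (mData == nil).
    var bufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(mNumberChannels: 1,
                              mDataByteSize: inNumberFrames * 4,
                              mData: nil))
    let status = AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp,
                                 inBusNumber, inNumberFrames, &bufferList)
    if status == noErr, let data = bufferList.mBuffers.mData {
        let samples = data.assumingMemoryBound(to: Float32.self)
        for i in 0 ..< Int(inNumberFrames) {
            // Per-sample processing goes here. Keep it lightweight:
            // no locks, allocation, or Objective-C dispatch on this
            // real-time audio thread.
            _ = samples[i]
        }
    }
    return noErr
}
```

The key point is that you never get an event per sample; you get a callback per buffer and iterate over the samples inside it.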

Older iOS devices may be limited to a minimum of 5.8 millisecond buffers, or even longer when the app is in the background. On newer iOS devices, you may be able to get reliable render callbacks for as few as 16 samples. But there's very little reason to process samples that often, as the iOS hardware IO DMA and the ADCs and DACs likely have much greater latency than 16 samples, and the display only updates at 60 Hz.
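To get callbacks as often as the hardware allows, you can ask AVAudioSession for a short IO buffer. This is a hedged sketch (Swift 3 era API names); the session treats the duration as a preference and may grant a longer buffer, so read back the actual value rather than assuming the request was honored:

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
    // Request ~64 samples at 44.1 kHz (about 1.45 ms). Older devices
    // may round this up to roughly 5.8 ms, as noted above.
    try session.setPreferredIOBufferDuration(64.0 / 44100.0)
    try session.setActive(true)
    // The granted duration is what your render callback will actually see.
    print("granted IO buffer duration: \(session.ioBufferDuration) s")
} catch {
    print("audio session setup failed: \(error)")
}
```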
