Which approach to use for reading audio samples in real-time
For a particular project, I need to:
I've looked at various approaches (Audio Units, Audio Graphs, AVAudioEngine, AudioTapProcessor) but I'm not sure which is the right path for a Swift project targeting iOS 8 and iOS 9 only. The AudioTapProcessor works well for accessing the audio samples from an audio file, but I'm not sure about mic input or about Swift support.
Which approach best fits these requirements? Thanks for reading.
UPDATE: I went with AVAudioEngine and so far it has been a great fit.
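For anyone landing here later: reading mic samples with AVAudioEngine comes down to installing a tap on the engine's input node. A minimal sketch, in current Swift syntax rather than the Swift 2 of the iOS 8/9 era; the buffer size and the processing stub are illustrative, not from my actual project:

```swift
import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.inputFormat(forBus: 0)

// The tap block is invoked off the main thread with chunks of
// samples as they arrive from the microphone.
input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let channelData = buffer.floatChannelData else { return }
    let samples = channelData[0]             // channel 0, raw Float32 samples
    let frameCount = Int(buffer.frameLength)
    // ...monitor/process samples[0..<frameCount] here...
    _ = (samples, frameCount)
}

do {
    try engine.start()
} catch {
    print("Engine failed to start: \(error)")
}
```

Note that the tap delivers samples in sizable chunks, so it is simpler than a render callback but not as low-latency; for my requirements that trade-off was fine.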
Audio Units and graphs go hand in hand: audio units are the components, and the graph is the mechanism that connects them together. Using units and a graph will give you the best real-time (low-latency) performance and options. I find that Objective-C fits better with Core Audio, since Core Audio was originally a C API.
I recently answered a question concerning ring buffers and used this project as a demo. The project plays a tone while it records from the mic, and lets you process the audio by reading the latest samples from a ring buffer. It may be a good starting point; you can remove the tone playing if needed.
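The ring idea itself is framework-independent. A minimal sketch of such a buffer, in current Swift syntax (the type and method names are illustrative, not taken from the linked project): the writer overwrites the oldest data when full, and the reader asks for the most recent samples.

```swift
// A minimal fixed-capacity ring buffer for Float samples.
// write() overwrites the oldest data when the buffer is full;
// latest(n) returns the most recent n samples, oldest first.
struct RingBuffer {
    private var storage: [Float]
    private var writeIndex = 0   // next slot to write
    private var count = 0        // valid samples stored, <= capacity

    init(capacity: Int) {
        storage = [Float](repeating: 0, count: capacity)
    }

    mutating func write(_ samples: [Float]) {
        for sample in samples {
            storage[writeIndex] = sample
            writeIndex = (writeIndex + 1) % storage.count
            count = min(count + 1, storage.count)
        }
    }

    func latest(_ n: Int) -> [Float] {
        let m = min(n, count)
        var out = [Float]()
        out.reserveCapacity(m)
        for i in 0..<m {
            let index = (writeIndex - m + i + storage.count) % storage.count
            out.append(storage[index])
        }
        return out
    }
}
```

A production version would need lock-free or atomic index updates, since the audio thread writes while another thread reads; this sketch only shows the indexing.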
I believe that a minimalistic low-level approach with a single audio component of subtype kAudioUnitSubType_RemoteIO would perform reliably in terms of both low latency and Swift support. Presuming that the interface (named myAudioController here for convenience) is properly declared and initialized, the following code inside the registered render callback should do the real-time I/O mapping (written in C here, though):
myAudioController *myController = (myAudioController *)inRefCon;

// Pull inNumberFrames of data from the audio input...
AudioUnitRender(myController->audioUnit,
                ioActionFlags,
                inTimeStamp,
                1,              // bus 1 is the input (mic) bus
                inNumberFrames,
                ioData);

// From here on, individual samples can be monitored and processed...
AudioBuffer buffer = ioData->mBuffers[0];
The equivalent code snippet in Swift would probably look like this:
let myController = UnsafeMutablePointer<myAudioController>(inRefCon).memory

AudioUnitRender(myController.audioUnit,
                ioActionFlags,
                inTimeStamp,
                1,              // bus 1 is the input (mic) bus
                inNumberFrames,
                ioData)

// Iterate over the rendered buffers to reach the individual samples...
for buffer in UnsafeMutableAudioBufferListPointer(ioData) { ... }
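Once the samples are reachable, monitoring them is plain arithmetic. As a framework-free illustration (a plain [Float] stands in for an AudioBuffer's mData, and the function name is mine, not an API), here is a peak-level scan over one buffer:

```swift
// Peak absolute amplitude of one buffer of Float32 samples.
// A plain [Float] stands in for the AudioBuffer's mData here.
func peakLevel(_ samples: [Float]) -> Float {
    var peak: Float = 0
    for sample in samples {
        peak = max(peak, abs(sample))
    }
    return peak
}
```

Inside the render callback you would run the same loop over the `inNumberFrames` samples behind mData rather than over an array, but the logic is identical.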
Please feel free to consult Apple's reference page for details; it is very well documented: https://developer.apple.com/library/ios/documentation/AudioUnit/Reference/AUComponentServicesReference/#//apple_ref/c/func/AudioUnitRender
This might also be a valuable site, with C examples from a must-read textbook rewritten in Swift: https://github.com/AlesTsurko/LearningCoreAudioWithSwift2.0
What matters most is understanding what you are doing; everything else should be pretty much self-explanatory and should not involve too much work. Hope this helps…