
Which approach to use for reading audio samples in real-time

For a particular project, I need to:

  1. Access individual audio samples from the microphone or an audio file,
  2. Extract a property from these values every 250 ms or so, and then
  3. Display that property on-screen in ~real-time (up to 100 ms delay is fine).

I've looked at various approaches (Audio Units, Audio Graphs, AVAudioEngine, AudioTapProcessor) but I'm not sure which is the right path for a Swift project aimed at iOS 8 and iOS 9 only. The AudioTapProcessor works well for accessing the audio samples from an audio file, but I'm not sure about the mic input or the Swift support.

Which approach best fits these requirements? Thanks for reading.

UPDATE: I went with AVAudioEngine and so far it's been a pretty great fit.
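A minimal sketch of what a tap-based AVAudioEngine setup can look like (written in current Swift syntax; the RMS value here is only a placeholder for whatever property is extracted, and microphone permission / audio session configuration is assumed to be handled elsewhere):

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// The tap delivers captured audio buffers on a background thread.
input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    guard let channel = buffer.floatChannelData?[0] else { return }
    let frames = Int(buffer.frameLength)

    // Placeholder property: RMS level of this buffer.
    var sum: Float = 0
    for i in 0..<frames { sum += channel[i] * channel[i] }
    let rms = sqrt(sum / Float(max(frames, 1)))

    // Hand the value to the main thread for display (aggregate across
    // buffers or drive the UI from a 250 ms timer if a slower cadence is wanted).
    DispatchQueue.main.async {
        print("rms: \(rms)")
    }
}

do {
    try engine.start()
} catch {
    print("could not start engine: \(error)")
}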

Audio Units and graphs go hand in hand. Audio units are the components, and the graph is the mechanism that connects them together. Using units and a graph will give you the best real-time (low-latency) performance and options. I find that Objective-C fits better with Core Audio, since Core Audio was originally a C API.
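To make the units-plus-graph relationship concrete, here is a minimal sketch (current Swift syntax, OSStatus error checking omitted) that creates a graph, adds a RemoteIO node and a mixer node, and lets the graph wire them together; the actual processing would still happen in a render callback registered on one of the units:

import AudioToolbox

var graph: AUGraph?
NewAUGraph(&graph)

// Describe the two units (components) the graph will host.
var ioDesc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                       componentSubType: kAudioUnitSubType_RemoteIO,
                                       componentManufacturer: kAudioUnitManufacturer_Apple,
                                       componentFlags: 0, componentFlagsMask: 0)
var mixerDesc = AudioComponentDescription(componentType: kAudioUnitType_Mixer,
                                          componentSubType: kAudioUnitSubType_MultiChannelMixer,
                                          componentManufacturer: kAudioUnitManufacturer_Apple,
                                          componentFlags: 0, componentFlagsMask: 0)

var ioNode = AUNode()
var mixerNode = AUNode()
AUGraphAddNode(graph!, &ioDesc, &ioNode)
AUGraphAddNode(graph!, &mixerDesc, &mixerNode)
AUGraphOpen(graph!)

// The graph is what connects the units:
// mixer output (bus 0) feeds the RemoteIO output element (bus 0).
AUGraphConnectNodeInput(graph!, mixerNode, 0, ioNode, 0)

AUGraphInitialize(graph!)
AUGraphStart(graph!)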

I recently answered a question concerning ring buffers and used this project as a demo. The project plays a tone while recording from the mic and lets you process audio by reading the latest samples from a ring buffer. It may be a good starting point; you can remove the tone playback if you don't need it.
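For illustration only (this is not the linked project's code), the idea behind such a ring buffer is that the audio callback keeps overwriting a fixed-size buffer while the UI side periodically copies out the newest samples. A deliberately naive Swift sketch of that idea follows; a production version would need lock-free synchronization, since the writer runs on the real-time audio thread:

// Naive ring buffer sketch: the audio callback writes, a timer reads the newest samples.
// NOTE: illustrative only; no thread synchronization is shown here.
final class SampleRing {
    private var storage: [Float]
    private var writeIndex = 0
    private let capacity: Int

    init(capacity: Int) {
        self.capacity = capacity
        self.storage = [Float](repeating: 0, count: capacity)
    }

    // Called from the render/tap callback with each new block of samples.
    func write(_ samples: UnsafePointer<Float>, count: Int) {
        for i in 0..<count {
            storage[(writeIndex + i) % capacity] = samples[i]
        }
        writeIndex = (writeIndex + count) % capacity
    }

    // Called from a timer (e.g. every 250 ms) to fetch the most recent `count` samples.
    func latest(_ count: Int) -> [Float] {
        precondition(count <= capacity)
        var out = [Float](repeating: 0, count: count)
        let start = (writeIndex - count + capacity) % capacity
        for i in 0..<count {
            out[i] = storage[(start + i) % capacity]
        }
        return out
    }
}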

I believe that a minimalistic low-level approach with a single audio component of type kAudioUnitSubType_RemoteIO would perform reliably in terms of both low latency and Swift support. Presuming that the interface myAudioController (named here for convenience) is properly declared and initialized, the following code inside the registered render callback should do the real-time I/O mapping (written here in C, though):

static OSStatus renderCallback(void                        *inRefCon,
                               AudioUnitRenderActionFlags  *ioActionFlags,
                               const AudioTimeStamp        *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList             *ioData)
{
    myAudioController *myController = (myAudioController *)inRefCon;

    // this renders inNumberFrames of data from the audio input
    // (bus 1 is the input/microphone element of the RemoteIO unit)...
    const AudioUnitElement bus1 = 1;
    AudioUnitRender(myController->audioUnit,
                    ioActionFlags,
                    inTimeStamp,
                    bus1,
                    inNumberFrames,
                    ioData);

    // from here on, individual samples can be monitored and processed...
    AudioBuffer buffer = ioData->mBuffers[0];

    return noErr;
}

The equivalent code snippet in Swift would probably look like this (Swift 2 syntax):

let myController = UnsafeMutablePointer<myAudioController>(inRefCon).memory

// bus 1 is the input (microphone) element of the RemoteIO unit
let bus1: AudioUnitElement = 1

AudioUnitRender(myController.audioUnit,
                ioActionFlags,
                inTimeStamp,
                bus1,
                inNumberFrames,
                ioData)

// iterate over the buffers and inspect the individual samples
for buffer in UnsafeMutableAudioBufferListPointer(ioData) { ... }

Please feel free to consult Apple's reference page for details; it is very well documented: https://developer.apple.com/library/ios/documentation/AudioUnit/Reference/AUComponentServicesReference/#//apple_ref/c/func/AudioUnitRender

This might also be a valuable resource, with the C examples from a must-read textbook rewritten in Swift: https://github.com/AlesTsurko/LearningCoreAudioWithSwift2.0

What matters most is understanding what you are doing. Everything else should be pretty much self-explanatory and should not involve too much work. Hope this can help…
