How to do real-time audio convolution in Swift for an iOS app?
I am trying to build an iOS app that takes a mono input (real-time from the microphone) and convolves it with a two-channel impulse response to produce a two-channel (stereo) output. Is there a way to do that on iOS with Apple's Audio Toolbox?
You should first decide whether you will be doing the convolution in the time domain or the frequency domain - each has benefits depending on the lengths of your signal and impulse response. This is somewhere you should do your own research.
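Whichever domain you pick, real-time processing means you receive the input in fixed-size blocks and must carry each block's convolution tail into the following blocks (overlap-add). A minimal pure-Swift sketch of that bookkeeping, using a naive time-domain kernel for clarity (for a long impulse response you would swap in an FFT-based kernel; the type and function names here are illustrative, not from any framework):

```swift
// Naive time-domain convolution, used here only as the per-block kernel.
func directConvolve(_ x: [Float], _ h: [Float]) -> [Float] {
    guard !x.isEmpty, !h.isEmpty else { return [] }
    var y = [Float](repeating: 0, count: x.count + h.count - 1)
    for i in 0..<x.count {
        for j in 0..<h.count { y[i + j] += x[i] * h[j] }
    }
    return y
}

// Overlap-add: each block's convolution is (blockSize + irLength - 1)
// samples long; the extra (irLength - 1) samples are saved and added
// into subsequent blocks.
struct OverlapAddConvolver {
    let ir: [Float]
    private var tail: [Float]

    init(ir: [Float]) {
        self.ir = ir
        self.tail = [Float](repeating: 0, count: max(ir.count - 1, 0))
    }

    /// Feed one block of input, get one block of output.
    mutating func process(_ block: [Float]) -> [Float] {
        let full = directConvolve(block, ir)
        var out = [Float](repeating: 0, count: block.count)
        for i in 0..<block.count {
            out[i] = full[i] + (i < tail.count ? tail[i] : 0)
        }
        // Carry the tail of this block (plus any leftover old tail,
        // if the IR is longer than the block) into the next call.
        var newTail = [Float](repeating: 0, count: tail.count)
        for i in 0..<tail.count {
            let idx = block.count + i
            newTail[i] = (idx < full.count ? full[idx] : 0)
                       + (idx < tail.count ? tail[idx] : 0)
        }
        tail = newTail
        return out
    }
}
```

Feeding the blocks `[1, 2]` then `[3, 4]` through an IR of `[1, 1]` yields `[1, 3]` then `[5, 7]`, matching the first four samples of convolving `[1, 2, 3, 4]` in one shot.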
For the time domain, rolling your own convolution should be straightforward enough. For the frequency domain you will need an FFT function; you could roll your own, but more efficient implementations already exist. The Accelerate framework, for example, has this implemented already.
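For the stereo case in the question, the time-domain approach just runs the mono input against each impulse-response channel independently. A minimal pure-Swift sketch (in practice you would replace the inner loops with an optimized routine such as Accelerate's `vDSP_conv`; the function names below are illustrative):

```swift
// Naive time-domain convolution: O(N*M), acceptable for short IRs.
func convolve(_ input: [Float], _ ir: [Float]) -> [Float] {
    guard !input.isEmpty, !ir.isEmpty else { return [] }
    var out = [Float](repeating: 0, count: input.count + ir.count - 1)
    for i in 0..<input.count {
        for j in 0..<ir.count { out[i + j] += input[i] * ir[j] }
    }
    return out
}

// Mono in, stereo out: convolve the same input with the left and
// right IR channels separately.
func convolveStereo(mono: [Float],
                    irLeft: [Float],
                    irRight: [Float]) -> (left: [Float], right: [Float]) {
    (convolve(mono, irLeft), convolve(mono, irRight))
}
```

Convolving with a unit impulse input reproduces each IR channel in the corresponding output channel, which is a handy sanity check.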
But for basic I/O, Audio Toolbox is a valid choice.