
How to capture audio output in iOS?

I'm playing an audio stream from the internet in my app, and I would like to display a graphic equalizer. The library that I'm using for the streaming is FreeStreamer. For drawing the graphic equalizer I'm using ZLHistogramAudioPlot. These two libraries are the only ones that fit my needs. The problem is I can't get them to work together.

The ZLHistogramAudioPlot requires a buffer and bufferSize in order to update its view. Here is its update method:

- (void)updateBuffer:(float *)buffer withBufferSize:(UInt32)bufferSize {
    [self setSampleData:buffer length:bufferSize];
}
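
For context, this is roughly how that method gets called from the app side; the plot instance name (audioPlot) and the sample source are assumptions for illustration, not part of ZLHistogramAudioPlot's documentation:

// Hypothetical call site: push the latest block of float samples to the plot.
float samples[1024];
// ... fill `samples` with the most recent audio data ...
[self.audioPlot updateBuffer:samples withBufferSize:1024];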

Unfortunately, the FreeStreamer library doesn't provide a method to read the audio output as it goes out towards the sound card. So, what I need is a way to read the audio output stream that's about to play through the speakers (not the byte stream from the internet, because that's received in chunks and then buffered, which means the histogram wouldn't be in real time).

I've discovered that AURemoteIO from Apple's CoreAudio framework can be used to do this, but Apple's sample project is complex beyond understanding, and there are very few examples online of using AURemoteIO.
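
For what it's worth, the lowest-level hook I've found for observing exactly what is about to reach the speaker is a render-notify callback on the remote I/O audio unit. A minimal sketch, assuming you can somehow get hold of the AudioUnit instance (called ioUnit here, which FreeStreamer does not expose out of the box):

#import <AudioToolbox/AudioToolbox.h>

// Hypothetical tap: Core Audio calls this before and after every render cycle.
static OSStatus OutputTapCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // ioData now holds the samples that are about to reach the speaker;
        // copy them into a thread-safe buffer here (never block or allocate in this callback).
    }
    return noErr;
}

// During setup, once the remote I/O unit is available:
// AudioUnitAddRenderNotify(ioUnit, OutputTapCallback, (__bridge void *)self);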

Is this the best way to achieve this? If so, any helpful info/links would be greatly appreciated.

Here is a possible answer from looking through the FreeStreamer headers:

#import <Accelerate/Accelerate.h>
#import "TPCircularBuffer.h"
#import "FSAudioController.h"
#import "ZLHistogramAudioPlot.h"

#define minForSpectrum 1024

@implementation MyClass {
    TPCircularBuffer SpectrumAnalyzerBuffer;
    ZLHistogramAudioPlot *histogram;   // the plot view that draws the equalizer
}

- (void)dealloc {
    TPCircularBufferCleanup(&SpectrumAnalyzerBuffer);
}

- (instancetype)init {
    self = [super init];
    if (self) {
        TPCircularBufferInit(&SpectrumAnalyzerBuffer, 16384);
        // assumes self.audioController (an FSAudioController) is set up elsewhere
        self.audioController.activeStream.delegate = self;
    }
    return self;
}

- (void)audioStream:(FSAudioStream *)audioStream samplesAvailable:(const int16_t *)samples count:(NSUInteger)count {
    // incoming data is 16-bit integer PCM
    const SInt16 *buffer = samples;
    Float32 *floatBuffer = malloc(sizeof(Float32) * count);

    // convert to float
    vDSP_vflt16(buffer, 1, floatBuffer, 1, count);

    // scale into a small floating-point range for the visualiser
    static float scale = 1.f / (INT16_MAX / 2);
    static float zero = 0.f;
    vDSP_vsmsa(floatBuffer, 1, &scale, &zero, floatBuffer, 1, count);

    // hand the float samples to the consumer side via the circular buffer
    TPCircularBufferProduceBytes(&SpectrumAnalyzerBuffer, floatBuffer, (int32_t)(count * sizeof(Float32)));

    free(floatBuffer);
}

- (void)timerCallback:(NSTimer *)timer {
    // availableSpectrum is reported in bytes, not in samples
    int32_t availableSpectrum = 0;
    Float32 *spectrumBufferData = TPCircularBufferTail(&SpectrumAnalyzerBuffer, &availableSpectrum);

    if (availableSpectrum >= minForSpectrum * sizeof(Float32)) {
        // note: the visualiser may want chunks of a fixed size if it's doing an FFT
        [histogram updateBuffer:spectrumBufferData withBufferSize:minForSpectrum];
        TPCircularBufferConsume(&SpectrumAnalyzerBuffer, (int32_t)(minForSpectrum * sizeof(Float32)));
    }
}
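
To drive timerCallback: above, something along these lines is also needed; the 60 Hz interval is an assumption, not a requirement of either library:

// Hypothetical setup: poll the circular buffer on the main thread,
// which is where ZLHistogramAudioPlot expects to be updated.
[NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                 target:self
                               selector:@selector(timerCallback:)
                               userInfo:nil
                                repeats:YES];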
