
Core Audio Ring Buffer Data comes out blank

I am working from a demo in the book "Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS." Chapter 8 shows how to set up a simple AudioUnit graph to play through from the AUHAL input unit to an output unit. This setup doesn't actually connect the audio units; instead, both units use a render callback and pass audio data through an instance of CARingBuffer. I'm coding for macOS 10.15.6, using code taken directly from the publisher here. Here's a picture of how it works:

[diagram: input AUHAL unit and output unit exchanging audio through a CARingBuffer]

The code builds and runs, but I get no audio. Note that later, after introducing a speech synthesis unit, I do get playback, so I know the basics are working.

InputRenderProc asks the AUHAL unit for input and stores it in the ring buffer:

    MyAUGraphPlayer *player = (MyAUGraphPlayer*) inRefCon;
    
    // have we ever logged input timing? (for offset calculation)
    if (player->firstInputSampleTime < 0.0) {
        player->firstInputSampleTime = inTimeStamp->mSampleTime;
        if ((player->firstOutputSampleTime > -1.0) &&
            (player->inToOutSampleTimeOffset < 0.0)) {
            player->inToOutSampleTimeOffset = player->firstInputSampleTime - player->firstOutputSampleTime;
        }
    }
    
    // render into our buffer
    OSStatus inputProcErr = noErr;
    inputProcErr = AudioUnitRender(player->inputUnit,
                                   ioActionFlags,
                                   inTimeStamp,
                                   inBusNumber,
                                   inNumberFrames,
                                   player->inputBuffer);

    if (! inputProcErr) {
        inputProcErr = player->ringBuffer->Store(player->inputBuffer,
                                                 inNumberFrames,
                                                 inTimeStamp->mSampleTime);
        
        UInt32 sz = sizeof(player->inputBuffer);  // NB: sizeof a pointer (8 bytes), not the buffer
        printf("stored %d frames at time %f (%d bytes)\n",
               inNumberFrames, inTimeStamp->mSampleTime, sz);

        for (int i = 0; i < player->inputBuffer->mNumberBuffers; i++) {
            //printf("stored audio string[%d]: %s\n", i, player->inputBuffer->mBuffers[i].mData);
        }
    }

If I uncomment the printf statement, I see what looks like audio data being stored:

stored audio string[1]: #P'\274a\353\273\336^\274x\205 \2741\330B\2747'\274\371\361U\274\346\274\274}\212C\274\334\365%\274\261\367\273\340\307/\274E
stored 512 frames at time 134610.000000 (8 bytes)

However, when I fetch from the ring buffer in GraphRenderCallback like this...

    MyAUGraphPlayer *player = (MyAUGraphPlayer*) inRefCon;
    
    // have we ever logged output timing? (for offset calculation)
    if (player->firstOutputSampleTime < 0.0) {
        player->firstOutputSampleTime = inTimeStamp->mSampleTime;
        if ((player->firstInputSampleTime > -1.0) &&
            (player->inToOutSampleTimeOffset < 0.0)) {
            player->inToOutSampleTimeOffset = player->firstInputSampleTime - player->firstOutputSampleTime;
        }
    }
    
    
    // copy samples out of ring buffer
    OSStatus outputProcErr = noErr;
    // new CARingBuffer doesn't take bool 4th arg
    outputProcErr = player->ringBuffer->Fetch(ioData,
                                              inNumberFrames,
                                              inTimeStamp->mSampleTime + player->inToOutSampleTimeOffset);
    

...I get nothing. (I know I can't expect proper null-terminated string output, but I thought I'd see something.)

fetched 512 frames at time 160776.000000
fetched audio string[0, size 2048]: xx
fetched audio string[1, size 2048]: xx
fetched 512 frames at time 161288.000000
fetched audio string[0, size 2048]: xx
fetched audio string[1, size 2048]: xx

This is not a permission problem; I have other non-AudioUnit code that can get mic input. In addition, I created a plist that makes this app prompt for mic access every time, so I know that part is working. I cannot understand why data goes into this ring buffer but never comes out.

These days you need to declare that you want to use the microphone and provide an explanation string. This wasn't the case in 2012, when Learning Core Audio was published.

In short, you now need to:

  1. add an NSMicrophoneUsageDescription string to your Info.plist
  2. add the sandboxing capability and enable Audio Input
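For step 1, the Info.plist entry looks like this (the description string here is just an example; any explanation of why the app needs the mic will do, and macOS shows it in the permission prompt):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This tool plays microphone input through to the output device.</string>
```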

The sample code you're using is a command line tool, so adding an Info.plist to it in Xcode isn't as simple as with a .app package. The code also doesn't seem to work if you run it from Xcode; in my case it has to be run from Terminal.app. This may be because my Terminal already has microphone permission (viewable in System Preferences > Security & Privacy > Microphone). You can, and probably should, explicitly request microphone access from the user (yourself in this case!) by calling requestAccessForMediaType on AVCaptureDevice. That's right, AVFoundation code in a Core Audio tutorial; what's the world coming to?
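A minimal sketch of that request for an Objective-C command-line tool (the semaphore is just one way to keep a CLI alive until the prompt is answered; in the real program you'd start the AUGraph only after access is granted):

```objc
#import <AVFoundation/AVFoundation.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        // Triggers the system microphone-permission prompt on first run
        // (or returns the cached decision) before any capture is attempted.
        dispatch_semaphore_t sema = dispatch_semaphore_create(0);
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio
                                 completionHandler:^(BOOL granted) {
            fprintf(stderr, "mic access %s\n", granted ? "granted" : "denied");
            dispatch_semaphore_signal(sema);
        }];
        // Block until the user has answered the prompt.
        dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    }
    return 0;
}
```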

There are more details on the above steps in this answer.

P.S. I think whoever decided that capturing zeroes instead of returning an error was a good idea is probably good friends with whoever invented returning HTTP 200 with an error code in the body.
