
Adding an AULowPass filter between two AudioUnits

I have modified the code provided by Tim Bolstad at http://timbolstad.com/2010/03/16/core-audio-getting-started-pt2/ (may God bless him), and added a small slider to change the output tone frequency from 40 Hz to 200000 Hz. I now want to be able to apply an LPF to the generated tone.

First of all, does anyone have a detailed guide that explains how to do this? I've tried simply adding a node in between, but it doesn't work. Apparently I need to convert the 16-bit integer samples to the floating 8.24 format before giving audio sample inputs to the filter, and then convert them back to 16-bit integers afterwards. Is this the problem, or have I connected the node wrongly? Where am I supposed to set the filter's cutoff frequency and other parameters?

Can anyone explain what AudioUnitGetProperty does? Apple's documentation on these topics is EXTREMELY fragmented and utterly worthless :(

-(void) initializeAUGraph
{

    OSStatus result = noErr;

    result = NewAUGraph(&mGraph);

    AUNode outputNode;
    AUNode mixerNode;
    AUNode effectsNode;

    AudioComponentDescription effects_desc;
    effects_desc.componentType = kAudioUnitType_Effect;
    effects_desc.componentSubType = kAudioUnitSubType_LowPassFilter;
    effects_desc.componentFlags = 0;
    effects_desc.componentFlagsMask = 0;
    effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponentDescription mixer_desc;
    mixer_desc.componentType = kAudioUnitType_Mixer;
    mixer_desc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
    mixer_desc.componentFlags = 0;
    mixer_desc.componentFlagsMask = 0;
    mixer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponentDescription output_desc;
    output_desc.componentType = kAudioUnitType_Output;
    output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
    output_desc.componentFlags = 0;
    output_desc.componentFlagsMask = 0;
    output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
    result = AUGraphAddNode(mGraph, &mixer_desc, &mixerNode);
    result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode);

    result = AUGraphConnectNodeInput(mGraph, mixerNode, 0, effectsNode, 0);
    result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, outputNode, 0);

    result = AUGraphOpen(mGraph);

    // Get the mixer and effects Audio Units from their nodes

    result = AUGraphNodeInfo(mGraph, mixerNode, NULL, &mMixer);
    result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);

    UInt32 numbuses = 1;
    UInt32 size = sizeof(numbuses);
    result = AudioUnitSetProperty(mMixer, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &numbuses, size);


    //=====

    CAStreamBasicDescription desc;

    // Loop through and setup a callback for each source you want to send to the mixer.
    // Right now we are only doing a single bus so we could do without the loop.
    for (int i = 0; i < numbuses; ++i) 
    {

        // Setup render callback struct
        // This struct describes the function that will be called
        // to provide a buffer of audio samples for the mixer unit.
        AURenderCallbackStruct renderCallbackStruct;
        renderCallbackStruct.inputProc = &renderInput;
        renderCallbackStruct.inputProcRefCon = self;

        // Set a callback for the specified node's specified input
        result = AUGraphSetNodeInputCallback(mGraph, mixerNode, i, &renderCallbackStruct);

        // Zero the CAStreamBasicDescription so there are no spurious values,
        // then get the current stream format from the mixer input bus.
        // Note: the memset must come BEFORE AudioUnitGetProperty, otherwise
        // it wipes out the values that were just fetched.
        memset(&desc, 0, sizeof(desc));
        size = sizeof(desc);
        result = AudioUnitGetProperty(  mMixer,
                                      kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input,
                                      i,
                                      &desc,
                                      &size);

        // Make modifications to the CAStreamBasicDescription
        // We're going to use 16 bit Signed Ints because they're easier to deal with
        // The Mixer unit will accept either 16 bit signed integers or
        // 32 bit 8.24 fixed point integers.

        desc.mSampleRate = kGraphSampleRate; // set sample rate
        desc.mFormatID = kAudioFormatLinearPCM;
        desc.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        desc.mBitsPerChannel = sizeof(AudioSampleType) * 8; // AudioSampleType == 16 bit signed ints
        desc.mChannelsPerFrame = 1;
        desc.mFramesPerPacket = 1;
        desc.mBytesPerFrame = ( desc.mBitsPerChannel / 8 ) * desc.mChannelsPerFrame;
        desc.mBytesPerPacket = desc.mBytesPerFrame * desc.mFramesPerPacket;

        printf("Mixer file format: "); desc.Print();
        // Apply the modified CAStreamBasicDescription to the mixer input bus
        result = AudioUnitSetProperty(  mMixer,
                                      kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input,
                                      i,
                                      &desc,
                                      sizeof(desc));
    }

    // Apply the CAStreamBasicDescription to the mixer output bus
    result = AudioUnitSetProperty(   mMixer,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  0,
                                  &desc,
                                  sizeof(desc));

    //************************************************************
    //*** Setup the audio output stream ***
    //************************************************************

    // Zero the structure to ensure there are no spurious values, then get
    // the current stream format from the mixer's output bus.
    memset(&desc, 0, sizeof(desc));
    size = sizeof(desc);
    result = AudioUnitGetProperty(  mMixer,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  0,
                                  &desc,
                                  &size);

    // Make modifications to the CAStreamBasicDescription
    // AUCanonical on the iPhone is the 8.24 integer format that is native to the iPhone.
    // The Mixer unit does the format shifting for you.
    desc.SetAUCanonical(1, true);
    desc.mSampleRate = kGraphSampleRate;

    // Apply the modified CAStreamBasicDescription to the output Audio Unit
    result = AudioUnitSetProperty(  mMixer,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  0,
                                  &desc,
                                  sizeof(desc));

    // Once everything is set up call initialize to validate connections
    result = AUGraphInitialize(mGraph);
}

Can anyone explain what AudioUnitGetProperty does?

Well, it gets the value of a property from an Audio Unit. A "property" is typically something you deal with as a programmer (e.g. audio stream format, connection state), whereas a "parameter" is usually something you expose to the user (e.g. low-pass cutoff frequency, mixer volume). Notice that there are AudioUnitGetParameter and AudioUnitSetParameter functions to complement the AudioUnitGetProperty and AudioUnitSetProperty functions.

You're basically expected to "just know" what an Audio Unit's properties and parameters are and what values they expect. The best sources of documentation on this are two headers in AudioUnit.framework, namely AudioUnitProperties.h and AudioUnitParameters.h. The next best source is Xcode's autocomplete. For example, the AULowPass's parameters are kLowPassParam_CutoffFrequency and kLowPassParam_Resonance, so you can just type kLowPassParam and Xcode will show you what's available. The other AUs typically follow this scheme.

...but it doesn't work, apparently

I'm going to need more information. Do you mean you just can't hear the difference? The AULowPass starts with a very high cutoff frequency, so unless you set it to something lower you probably won't hear any difference at all.

Try setting the cutoff frequency to something quite low, for example 500 Hz. You do that like this:

AudioUnitSetParameter(mEffects,
                      kLowPassParam_CutoffFrequency,
                      kAudioUnitScope_Global,
                      0,
                      500,
                      0);
