
Duplex Audio communication using AudioUnits

I am developing an application with the following requirements:

  1. Record live audio from the iOS device (iPhone/iPad) and send it to a server over the network
  2. Play audio received from the network server on the iOS device (iPhone/iPad)

Both of the above need to happen at the same time.

I am using AudioUnit for this.

I am facing an issue where I hear the same audio that I speak into the iPhone mic, instead of the audio received from the network server.

I have searched a lot for ways to avoid this, but have not found a solution yet.

If anyone has faced the same problem and found a solution, sharing it would be very helpful.

Here is my code to initialize the audio unit:

-(void)initializeAudioUnit
{
    audioUnit = NULL;

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio unit
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));

    // Enable IO for playback
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  kOutputBus,
                                  &flag,
                                  sizeof(flag));

    // Describe format: 16 kHz, 16-bit signed, mono PCM
    AudioStreamBasicDescription audioStreamBasicDescription;
    audioStreamBasicDescription.mSampleRate       = 16000;
    audioStreamBasicDescription.mFormatID         = kAudioFormatLinearPCM;
    audioStreamBasicDescription.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kLinearPCMFormatFlagIsNonInterleaved;
    audioStreamBasicDescription.mFramesPerPacket  = 1;
    audioStreamBasicDescription.mChannelsPerFrame = 1;
    audioStreamBasicDescription.mBitsPerChannel   = 16;
    audioStreamBasicDescription.mBytesPerPacket   = 2;
    audioStreamBasicDescription.mBytesPerFrame    = 2;

    // Apply the format to the mic side (output scope of the input bus)...
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioStreamBasicDescription,
                                  sizeof(audioStreamBasicDescription));
    NSLog(@"Status[%d]", (int)status);

    // ...and to the speaker side (input scope of the output bus)
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &audioStreamBasicDescription,
                                  sizeof(audioStreamBasicDescription));
    NSLog(@"Status[%d]", (int)status);

    AURenderCallbackStruct callbackStruct;

    // Set input (recording) callback
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  kInputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    // Set render (playback) callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    // We allocate our own buffers in the recording callback
    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));
}
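
(A side note: a VoiceProcessingIO unit can only capture and play simultaneously if the audio session allows it. The session configuration is not shown above; below is a minimal sketch of what it would typically look like, assuming AVFoundation is linked. The method name configureAudioSession is hypothetical.)

#import <AVFoundation/AVFoundation.h>

// Hypothetical helper, not part of the original post: duplex audio
// requires the PlayAndRecord session category.
- (void)configureAudioSession
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;

    // PlayAndRecord is required for simultaneous capture and playback
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];

    // Match the 16 kHz stream format used in initializeAudioUnit
    [session setPreferredSampleRate:16000 error:&error];

    [session setActive:YES error:&error];
    if (error != nil) {
        NSLog(@"AVAudioSession setup error: %@", error);
    }
}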

Recording callback:

static OSStatus recordingCallback (void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    // Temporary buffer for the rendered mic samples (16-bit mono: 2 bytes per frame)
    AudioBuffer tempBuffer;
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = inNumberFrames * 2;
    tempBuffer.mData = malloc(inNumberFrames * 2);

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = tempBuffer;

    OSStatus status;
    status = AudioUnitRender(THIS->audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             kInputBus,
                             inNumberFrames,
                             &bufferList);

    if (noErr != status) {
        printf("AudioUnitRender error: %d", (int)status);
        return noErr;
    }

    // Encode and send the captured audio
    [THIS processAudio:&bufferList];

    free(bufferList.mBuffers[0].mData);

    return noErr;
}

Playback callback:

static OSStatus playbackCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    NSLog(@"In play back call back");

    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    int32_t availableBytes = 0;

    char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);
    NSLog(@"bytes available in buffer[%d]", availableBytes);
    decodeSpeexData(inBuffer, availableBytes, (__bridge void *)(THIS));
    ConsumeReadBytes(&(THIS->mybuffer), availableBytes);

    // Copy the decoded samples into the buffer the audio unit will play
    memcpy(ioData->mBuffers[0].mData, THIS->outTemp, inNumberFrames * 2);

    return noErr;
}

Processing the audio recorded from the mic:

- (void) processAudio: (AudioBufferList*) bufferList
{
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    //    NSLog(@"Origin size: %d", (int)sourceBuffer.mDataByteSize);
    int size = 0;
    encodeAudioDataSpeex((spx_int16_t*)sourceBuffer.mData, sourceBuffer.mDataByteSize, &size, (__bridge void *)(self));
    [self performSelectorOnMainThread:@selector(SendAudioData:) withObject:[NSData dataWithBytes:self->jitterBuffer length:size] waitUntilDone:NO];

    NSLog(@"Encoded size: %i", size);

} 

Your playbackCallback render callback (not shown) is responsible for the audio sent to the RemoteIO speaker output. If this RemoteIO render callback does not put any data into its callback buffers, whatever garbage happens to be left in those buffers (for instance, the contents of earlier recording-callback buffers) may be sent to the speaker.
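
For example, here is a sketch of a more defensive playbackCallback built from the helpers in the question. It assumes, as the question's code does, that decodeSpeexData leaves the decoded PCM in THIS->outTemp; the output buffer is zeroed first, so an empty circular buffer produces silence rather than stale samples:

static OSStatus playbackCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    UInt32 bytesNeeded = inNumberFrames * 2;   // 16-bit mono
    char *target = (char *)ioData->mBuffers[0].mData;

    // Start from silence so stale buffer contents are never played
    memset(target, 0, bytesNeeded);

    int32_t availableBytes = 0;
    char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);

    if (inBuffer != NULL && availableBytes > 0) {
        // Assumed, as in the question, to fill THIS->outTemp with decoded PCM
        decodeSpeexData(inBuffer, availableBytes, (__bridge void *)(THIS));
        ConsumeReadBytes(&(THIS->mybuffer), availableBytes);
        memcpy(target, THIS->outTemp, bytesNeeded);
    } else {
        // Nothing arrived from the network: flag this slice as silence
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
    }

    return noErr;
}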

Also, Apple DTS strongly recommends that your recording callback not contain any memory-management calls, such as malloc(). So that may also be a bug contributing to the problem.
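
A sketch of a malloc-free recordingCallback, assuming a member buffer (here hypothetically named recordBuffer) that is allocated once in initializeAudioUnit and sized for the largest render slice:

static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    // Reuse a buffer allocated outside the render thread.
    // THIS->recordBuffer is a hypothetical member, e.g. allocated in
    // initializeAudioUnit with malloc(4096 * 2) for up to 4096 frames.
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * 2;
    bufferList.mBuffers[0].mData = THIS->recordBuffer;

    OSStatus status = AudioUnitRender(THIS->audioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      kInputBus,
                                      inNumberFrames,
                                      &bufferList);
    if (noErr != status) {
        return status;
    }

    [THIS processAudio:&bufferList];
    return noErr;
}

(Strictly speaking, the Objective-C message send and the NSLog calls inside the render callbacks are discouraged for the same real-time reasons, but they are kept here to stay close to the original code.)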
