
Duplex Audio communication using AudioUnits

I am working on an app which has the following requirements:

  1. Record real-time audio from the iOS device (iPhone/iPad) and send it to a server over the network
  2. Play audio received from the network server on the iOS device (iPhone/iPad)

Both of these need to happen simultaneously.

I have used an AudioUnit for this.

I have run into a problem where I hear the same audio I speak into the iPhone mic instead of the audio received from the network server.

I have searched a lot for how to avoid this but haven't found a solution.

If anyone has had the same problem and found a solution, sharing it would help a lot.

Here is my code for initializing the audio unit:

-(void)initializeAudioUnit
{
    OSStatus status = noErr;

    audioUnit = NULL;
    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);


    UInt32 flag = 1;
    //enable IO for recording
    status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));

    //enable IO for playback
    status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &flag,
                              sizeof(flag));


    AudioStreamBasicDescription audioStreamBasicDescription;

    // Describe format: 16 kHz, 16-bit signed integer, mono, non-interleaved linear PCM
    audioStreamBasicDescription.mSampleRate         = 16000;
    audioStreamBasicDescription.mFormatID           = kAudioFormatLinearPCM;
    audioStreamBasicDescription.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kLinearPCMFormatFlagIsNonInterleaved;
    audioStreamBasicDescription.mFramesPerPacket    = 1;
    audioStreamBasicDescription.mChannelsPerFrame   = 1;
    audioStreamBasicDescription.mBitsPerChannel     = 16;
    audioStreamBasicDescription.mBytesPerPacket     = 2;
    audioStreamBasicDescription.mBytesPerFrame      = 2;



    // Format for audio coming out of the input element (mic capture)
    status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &audioStreamBasicDescription,
                              sizeof(audioStreamBasicDescription));
    NSLog(@"Status[%d]",(int)status);


    // Format for audio going into the output element (speaker playback)
    status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &audioStreamBasicDescription,
                              sizeof(audioStreamBasicDescription));
    NSLog(@"Status[%d]",(int)status);


    AURenderCallbackStruct callbackStruct;


    // Set input callback
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

    // Set render callback (supplies audio to the speaker)
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

    // We supply our own buffer to AudioUnitRender in the recording callback
    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &flag,
                              sizeof(flag));

}
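
kInputBus and kOutputBus above are the standard RemoteIO element numbers. They are not shown in the snippet, but would presumably be defined as:

#define kOutputBus 0   // element 0: output to the speaker
#define kInputBus  1   // element 1: input from the microphone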

Recording Callback

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    AudioBuffer tempBuffer;
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = inNumberFrames * 2;   // 16-bit mono samples
    tempBuffer.mData = malloc(inNumberFrames * 2);

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = tempBuffer;

    // Pull the captured mic samples into our own buffer
    OSStatus status;
    status = AudioUnitRender(THIS->audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             kInputBus,
                             inNumberFrames,
                             &bufferList);

    if (noErr != status) {
        printf("AudioUnitRender error: %d", (int)status);
        free(bufferList.mBuffers[0].mData);
        return noErr;
    }

    [THIS processAudio:&bufferList];

    free(bufferList.mBuffers[0].mData);

    return noErr;
}

Playback Callback

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    NSLog(@"In playback callback");

    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    int32_t availableBytes = 0;

    // Read network audio from the circular buffer and decode it into outTemp
    char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);
    NSLog(@"bytes available in buffer[%d]", availableBytes);
    decodeSpeexData(inBuffer, availableBytes, (__bridge void *)(THIS));
    ConsumeReadBytes(&(THIS->mybuffer), availableBytes);

    // Copy the decoded samples into the unit's output buffer
    memcpy(ioData->mBuffers[0].mData, THIS->outTemp, inNumberFrames * 2);

    return noErr;
}

Processing the audio recorded from the mic

- (void) processAudio: (AudioBufferList*) bufferList
{
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    //    NSLog(@"Origin size: %d", (int)sourceBuffer.mDataByteSize);
    int size = 0;
    encodeAudioDataSpeex((spx_int16_t*)sourceBuffer.mData, sourceBuffer.mDataByteSize, &size, (__bridge void *)(self));
    [self performSelectorOnMainThread:@selector(SendAudioData:) withObject:[NSData dataWithBytes:self->jitterBuffer length:size] waitUntilDone:NO];

    NSLog(@"Encoded size: %i", size);

} 

Your playbackCallback render callback is responsible for the audio that is sent to the RemoteIO speaker output. If this RemoteIO render callback puts no data into its callback buffers, whatever junk was left in those buffers (perhaps what was previously in the record callback buffers) may be sent to the speaker instead.
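
For example, a minimal version of the render callback that always fills ioData, writing silence whenever the circular buffer has not yet received enough network data, might look like this (a sketch reusing the question's buffer helpers and skipping the Speex decode step for clarity):

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
    UInt32 bytesNeeded = inNumberFrames * 2;   // 16-bit mono

    int32_t availableBytes = 0;
    char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);

    if (inBuffer == NULL || availableBytes < (int32_t)bytesNeeded) {
        // Underrun: output silence so stale buffer contents never reach the speaker
        memset(ioData->mBuffers[0].mData, 0, bytesNeeded);
        return noErr;
    }

    memcpy(ioData->mBuffers[0].mData, inBuffer, bytesNeeded);
    ConsumeReadBytes(&THIS->mybuffer, bytesNeeded);
    return noErr;
}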

Also, Apple DTS strongly recommends that your recordingCallback not include any memory management calls, such as malloc(). So this may be a bug contributing to the problem as well.
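
One common way to follow that advice is to allocate the capture buffer once, outside the render thread, and reuse it on every callback. A sketch under that assumption (kMaxFrames is a hypothetical upper bound on inNumberFrames):

#define kMaxFrames 4096   // hypothetical upper bound on inNumberFrames

static SInt16 captureBuffer[kMaxFrames];   // allocated once, reused on every callback

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData = captureBuffer;   // no malloc()/free() on the render thread

    OSStatus status = AudioUnitRender(THIS->audioUnit, ioActionFlags, inTimeStamp,
                                      kInputBus, inNumberFrames, &bufferList);
    if (status != noErr) {
        return status;
    }

    [THIS processAudio:&bufferList];
    return noErr;
}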
