
Duplex Audio communication using AudioUnits

I am developing an application with the following requirements:

  1. Record live audio from the iOS device (iPhone/iPad) and send it to a server over the network
  2. Play audio received from the network server on the iOS device (iPhone/iPad)

Both of these need to happen simultaneously.

I am using an AudioUnit for this.
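
(Not shown below: for capture and playback to run at the same time, the audio session has to be configured for play-and-record before the audio unit is created. A minimal sketch of that setup using standard AVAudioSession calls; configureAudioSession is a hypothetical helper name:)

#import <AVFoundation/AVFoundation.h>

// Configure the shared audio session for full-duplex
// (simultaneous record + play) before creating the audio unit.
- (BOOL)configureAudioSession
{
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];

    // PlayAndRecord enables the mic and the speaker at the same time
    if (![session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error]) {
        NSLog(@"setCategory failed: %@", error);
        return NO;
    }

    // Ask for the same rate the stream format uses (16 kHz); non-fatal if refused
    [session setPreferredSampleRate:16000 error:&error];

    if (![session setActive:YES error:&error]) {
        NSLog(@"setActive failed: %@", error);
        return NO;
    }
    return YES;
}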

The problem I am facing is that what I hear is the same audio I speak into the iPhone mic, rather than the audio received from the network server.

I have searched a lot for ways to avoid this, but have not found a solution yet.

If anyone has faced the same problem and found a solution, it would be very helpful if you could share it.

Here is my code for initializing the audio unit:

-(void)initializeAudioUnit
{

    audioUnit = NULL;
    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Create the audio unit instance
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    // Enable IO for recording on the input bus
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));

    // Enable IO for playback on the output bus
    status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &flag,
                              sizeof(flag));


    // Describe the stream format: 16 kHz, 16-bit, mono linear PCM
    AudioStreamBasicDescription audioStreamBasicDescription;
    audioStreamBasicDescription.mSampleRate         = 16000;
    audioStreamBasicDescription.mFormatID           = kAudioFormatLinearPCM;
    audioStreamBasicDescription.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kLinearPCMFormatFlagIsNonInterleaved;
    audioStreamBasicDescription.mFramesPerPacket    = 1;
    audioStreamBasicDescription.mChannelsPerFrame   = 1;
    audioStreamBasicDescription.mBitsPerChannel     = 16;
    audioStreamBasicDescription.mBytesPerPacket     = 2;
    audioStreamBasicDescription.mBytesPerFrame      = 2;

    // Apply the format to the output scope of the input bus (the mic data we read)
    status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &audioStreamBasicDescription,
                              sizeof(audioStreamBasicDescription));
    NSLog(@"Status[%d]",(int)status);


    // Apply the same format to the input scope of the output bus (what we play)
    status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &audioStreamBasicDescription,
                              sizeof(audioStreamBasicDescription));
    NSLog(@"Status[%d]",(int)status);


    // Set the input (recording) callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

    // Set the render (playback) callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

    // Disable buffer allocation on the input bus; recordingCallback supplies its own buffer
    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &flag,
                              sizeof(flag));

}
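
(One thing worth noting: the snippet above never calls AudioUnitInitialize or AudioOutputUnitStart, so presumably that happens elsewhere. For reference, a minimal start sequence would look like the sketch below; startAudioUnit is a hypothetical helper on the same class:)

// Call after initializeAudioUnit, once every property is set
- (void)startAudioUnit
{
    status = AudioUnitInitialize(audioUnit);
    if (status != noErr) {
        NSLog(@"AudioUnitInitialize failed: %d", (int)status);
        return;
    }

    status = AudioOutputUnitStart(audioUnit);
    if (status != noErr) {
        NSLog(@"AudioOutputUnitStart failed: %d", (int)status);
    }
}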

Recording callback

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    // Buffer for the captured samples (16-bit mono, so 2 bytes per frame)
    AudioBuffer tempBuffer;
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = inNumberFrames * 2;
    tempBuffer.mData = malloc(inNumberFrames * 2);

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = tempBuffer;

    // Pull the mic samples from the input bus into our own buffer
    OSStatus status;
    status = AudioUnitRender(THIS->audioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             kInputBus,
                             inNumberFrames,
                             &bufferList);
    if (noErr != status) {
        printf("AudioUnitRender error: %d\n", (int)status);
        return noErr;
    }

    // Encode the captured audio and hand it off for sending
    [THIS processAudio:&bufferList];

    free(bufferList.mBuffers[0].mData);

    return noErr;
}

Playback callback

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    NSLog(@"In playback callback");

    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    // Pull the compressed data received from the network out of the ring buffer
    int32_t availableBytes = 0;
    char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);
    NSLog(@"bytes available in buffer[%d]", availableBytes);

    // Decode it; decodeSpeexData writes the PCM samples into THIS->outTemp
    decodeSpeexData(inBuffer, availableBytes, (__bridge void *)(THIS));
    ConsumeReadBytes(&(THIS->mybuffer), availableBytes);

    // Copy the decoded samples into the output buffer handed to this callback
    memcpy(ioData->mBuffers[0].mData, THIS->outTemp, inNumberFrames * 2);

    return noErr;
}

Processing the audio recorded from the mic

- (void)processAudio:(AudioBufferList *)bufferList
{
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    //    NSLog(@"Origin size: %d", (int)sourceBuffer.mDataByteSize);

    // Speex-encode the captured PCM; the encoder writes into self->jitterBuffer
    int size = 0;
    encodeAudioDataSpeex((spx_int16_t *)sourceBuffer.mData, sourceBuffer.mDataByteSize, &size, (__bridge void *)(self));

    // Hand the encoded packet to the main thread to be sent over the network
    [self performSelectorOnMainThread:@selector(SendAudioData:)
                           withObject:[NSData dataWithBytes:self->jitterBuffer length:size]
                        waitUntilDone:NO];

    NSLog(@"Encoded size: %i", size);
}

Your playbackCallback render callback is responsible for the audio sent to the RemoteIO speaker output. If this RemoteIO render callback puts no data into its callback buffers, whatever garbage is left over in those buffers (possibly the contents of a previous recording callback's buffer) may be sent to the speaker instead.
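
In other words, the render callback should fill all inNumberFrames of every buffer in ioData on each call, and explicitly output silence when not enough network audio has arrived. A sketch of that pattern, reusing the question's ring-buffer helpers and assuming the ring buffer already holds decoded 16-bit mono PCM (the Speex decode step is omitted here):

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;
    UInt32 bytesNeeded = inNumberFrames * 2; // 16-bit mono PCM

    int32_t availableBytes = 0;
    char *inBuffer = GetDataFromCircularBuffer(&THIS->mybuffer, &availableBytes);

    if (inBuffer == NULL || availableBytes < (int32_t)bytesNeeded) {
        // Not enough data yet: write silence instead of leaving stale samples behind
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        return noErr;
    }

    // Consume exactly one render quantum from the ring buffer
    memcpy(ioData->mBuffers[0].mData, inBuffer, bytesNeeded);
    ConsumeReadBytes(&THIS->mybuffer, bytesNeeded);
    return noErr;
}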

Also, Apple DTS strongly suggests that the recording callback not contain any memory-management calls such as malloc(). So that may be another bug contributing to the problem.
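
One way to follow that advice is to allocate the capture buffer once, outside the callback, and reuse it on every render cycle. A sketch of the same recording callback without malloc()/free(); recordBuffer is a hypothetical ivar allocated up front (e.g. in initializeAudioUnit) and sized for the largest expected slice:

// recordBuffer is allocated once, e.g. in initializeAudioUnit:
//   recordBuffer = malloc(kMaxFramesPerSlice * sizeof(SInt16));
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    MyAudioViewController *THIS = (__bridge MyAudioViewController *)inRefCon;

    // Point the buffer list at the preallocated buffer; nothing is malloc'd here
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * 2;
    bufferList.mBuffers[0].mData = THIS->recordBuffer;

    OSStatus status = AudioUnitRender(THIS->audioUnit, ioActionFlags, inTimeStamp,
                                      kInputBus, inNumberFrames, &bufferList);
    if (status != noErr) {
        return status;
    }

    [THIS processAudio:&bufferList];
    return noErr;
}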
