
Get microphone input using Audio Queue in Swift 3

I am working on an app that records speech through the built-in microphone and sends it to a server in real time. So I need to get the byte stream from the microphone while recording.

After googling and searching Stack Overflow for a while, I think I have figured out how it should work, but it does not. I think using Audio Queues might be the way to go.

This is what I have tried so far:

func test() {
    func callback(_ a: UnsafeMutableRawPointer?, _ b: AudioQueueRef, _ c: AudioQueueBufferRef, _ d: UnsafePointer<AudioTimeStamp>, _ e: UInt32, _ f: UnsafePointer<AudioStreamPacketDescription>?) {
        print("test")
    }

    var inputQueue: AudioQueueRef? = nil

    var aqData = AQRecorderState(
        mDataFormat: AudioStreamBasicDescription(
            mSampleRate: 16000,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: 0,
            mBytesPerPacket: 2,
            mFramesPerPacket: 1,     // Must be set to 1 for uncompressed formats
            mBytesPerFrame: 2,
            mChannelsPerFrame: 1,    // Mono recording
            mBitsPerChannel: 2 * 8,  // 2 Bytes
            mReserved: 0),  // Must be set to 0 according to https://developer.apple.com/reference/coreaudio/audiostreambasicdescription
        mQueue: inputQueue!,
        mBuffers: [AudioQueueBufferRef](),
        bufferByteSize: 32,
        mCurrentPacket: 0,
        mIsRunning: true)

    var error = AudioQueueNewInput(&aqData.mDataFormat,
                                   callback,
                                   nil,
                                   nil,
                                   nil,
                                   0,
                                   &inputQueue)
    AudioQueueStart(inputQueue!, nil)
}

It compiles and the app launches, but as soon as I call test() I get an exception:

fatal error: unexpectedly found nil while unwrapping an Optional value

The exception is raised by

mQueue: inputQueue!

I know why this happens (inputQueue has no value), but I don't know how to initialize inputQueue correctly. The problem is that there is very little Audio Queue documentation for Swift users, and I could not find any working example on the internet.

Can anybody tell me what I am doing wrong?

Before using an audio queue, initialize it with AudioQueueNewInput(...) (or the output counterpart):

let sampleRate = 16000
let numChannels = 2
var inFormat = AudioStreamBasicDescription(
        mSampleRate:        Double(sampleRate),
        mFormatID:          kAudioFormatLinearPCM,
        mFormatFlags:       kAudioFormatFlagsNativeFloatPacked,
        mBytesPerPacket:    UInt32(numChannels * MemoryLayout<UInt32>.size),
        mFramesPerPacket:   1,
        mBytesPerFrame:     UInt32(numChannels * MemoryLayout<UInt32>.size),
        mChannelsPerFrame:  UInt32(numChannels),
        mBitsPerChannel:    UInt32(8 * (MemoryLayout<UInt32>.size)),
        mReserved:          UInt32(0))

var inQueue: AudioQueueRef? = nil
AudioQueueNewInput(&inFormat, callback, nil, nil, nil, 0, &inQueue)

var aqData = AQRecorderState(
    mDataFormat:    inFormat, 
    mQueue:         inQueue!, // inQueue is initialized now and can be unwrapped
    mBuffers: [AudioQueueBufferRef](),
    bufferByteSize: 32,
    mCurrentPacket: 0,
    mIsRunning:     true)

You can find the details in the Apple documentation.
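A bufferByteSize of 32 as in the snippets above holds only 16 samples of 16-bit mono audio; for real-time streaming it is usually derived from the stream format and a target buffer duration, along the lines of Apple's Audio Queue Services guide. A minimal sketch of that arithmetic, assuming a 0.5-second buffer (the duration is a choice of mine, not something from the answer):

```swift
import Foundation

// Sketch: derive a buffer size from the stream format and a target
// duration. Longer buffers mean fewer callbacks but higher latency.
func deriveBufferSize(sampleRate: Double,
                      bytesPerFrame: UInt32,
                      seconds: Double) -> UInt32 {
    // Number of frames needed for `seconds` of audio,
    // times the size of one frame in bytes.
    let frames = UInt32(ceil(sampleRate * seconds))
    return frames * bytesPerFrame
}

// 16 kHz, 16-bit mono (2 bytes per frame), 0.5 s => 16000 bytes
let bufferByteSize = deriveBufferSize(sampleRate: 16000, bytesPerFrame: 2, seconds: 0.5)
```

Each of the queue's buffers would then be allocated with this size via AudioQueueAllocateBuffer before AudioQueueStart is called.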

This piece of code from our project works fine:

AudioQueueRef queue;
AudioStreamBasicDescription fmt = { 0 };

static void HandleInputBuffer (void                               *aqData,
                               AudioQueueRef                       inAQ,
                               AudioQueueBufferRef                 inBuffer,
                               const AudioTimeStamp               *inStartTime,
                               UInt32                              inNumPackets,
                               const AudioStreamPacketDescription *inPacketDesc) {
}

- (void) initialize {
    thisClass = self;

    __block struct AQRecorderState aqData;

    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mSampleRate       = 44100.0;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerFrame    = sizeof (SInt16);
    fmt.mBytesPerPacket   = sizeof (SInt16);
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;

    OSStatus status = AudioQueueNewInput (&fmt,
                                          HandleInputBuffer,
                                          &aqData,
                                          NULL,
                                          kCFRunLoopCommonModes,
                                          0,
                                          &queue);

    AudioQueueBufferRef buffers[kNumberBuffers];
    UInt32 bufferByteSize = kSamplesSize;
    for (int i = 0; i < kNumberBuffers; ++i) {
        OSStatus allocateStatus = AudioQueueAllocateBuffer (queue, bufferByteSize, &buffers[i]);
        NSLog(@"allocateStatus = %d", (int) allocateStatus);

        OSStatus enqueueStatus = AudioQueueEnqueueBuffer (queue, buffers[i], 0, NULL);
        NSLog(@"enqueueStatus = %d", (int) enqueueStatus);
    }

    AudioQueueStart (queue, NULL);
}
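The empty HandleInputBuffer above is where the question's actual goal, getting the byte stream, would be realized. A hedged Swift sketch of such a callback, using the AudioQueueInputCallback signature from the question; sendToServer is a hypothetical placeholder for your own network code:

```swift
import AudioToolbox

// Sketch: the captured audio lives in inBuffer.pointee.mAudioData,
// with mAudioDataByteSize valid bytes. Copying it into a Data value
// lets you hand it to a network layer.
let inputCallback: AudioQueueInputCallback = { userData, inAQ, inBuffer, startTime, numPackets, packetDesc in
    let byteCount = Int(inBuffer.pointee.mAudioDataByteSize)
    let audioData = Data(bytes: inBuffer.pointee.mAudioData, count: byteCount)

    // sendToServer(audioData)  // hypothetical: stream the raw PCM bytes out

    // Re-enqueue the buffer so the queue can keep filling it;
    // without this the queue runs out of buffers and capture stops.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
}
```

Re-enqueueing each buffer at the end of the callback is what keeps the fixed pool of buffers (kNumberBuffers in the Objective-C code above) cycling for the lifetime of the recording.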
