
Playing audio from a continuous stream of data (iOS)

I've been banging my head against this problem all morning.

I have set up a connection to a data source that returns audio data. (It is a recording device, so there is no set length on the data; it just streams in, like an open stream to a radio station.)

I have managed to receive all the packets of data in my code. Now I just need to play it. I want to play the data as it comes in, so I do not want to queue up a few minutes of audio or anything; I want to use the data I am receiving at that exact moment and play it.

I have been searching all morning and found different examples, but none of them were really laid out clearly.

In the

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data

delegate method, the "data" argument is the audio packet. I tried streaming it with AVPlayer and MFVideoPlayer, but nothing has worked for me so far. I also tried looking at mattgallagher's AudioStreamer, but I was still unable to get it working.

Can anyone here help, preferably with some working examples?

Careful: the answer below is only valid if you receive PCM data from the server. That, of course, never happens. That's why you need another step between receiving the data and rendering the audio: data conversion.

Depending on the format, this could be more or less tricky, but in general you should use Audio Converter Services for this step.
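As an illustration only, here is a minimal sketch of creating such a converter, assuming the server happens to send AAC and that you want 16-bit mono PCM at 44.1 kHz out. Both formats are placeholders to adapt to your actual stream:

#include <AudioToolbox/AudioToolbox.h>

// Source format: an assumption for this sketch (AAC); fill in whatever
// your server actually sends.
AudioStreamBasicDescription srcFormat = {0};
srcFormat.mFormatID        = kAudioFormatMPEG4AAC;
srcFormat.mSampleRate      = 44100.0;
srcFormat.mChannelsPerFrame = 1;
srcFormat.mFramesPerPacket = 1024; // AAC packets carry 1024 frames

// Destination format: 16-bit signed mono PCM, matching the render
// callback further down.
AudioStreamBasicDescription dstFormat = {0};
dstFormat.mFormatID         = kAudioFormatLinearPCM;
dstFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
dstFormat.mSampleRate       = 44100.0;
dstFormat.mChannelsPerFrame = 1;
dstFormat.mBitsPerChannel   = 16;
dstFormat.mBytesPerFrame    = 2;
dstFormat.mBytesPerPacket   = 2;
dstFormat.mFramesPerPacket  = 1;

AudioConverterRef converter = NULL;
OSStatus status = AudioConverterNew(&srcFormat, &dstFormat, &converter);
// Feed compressed packets through AudioConverterFillComplexBuffer(...)
// and write the resulting PCM into your stream buffer.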

You should use -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data only to fill a buffer with the data that comes from the server; playing it should not have anything to do with this method.
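As a sketch, the delegate method would then do nothing but append the bytes to that buffer. ringBufferWrite() and _streamBuffer are hypothetical names here; a lock-free circular buffer is the usual choice, since the render callback below runs on a real-time thread:

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    // Sketch only: ringBufferWrite() is a helper you implement yourself.
    // It appends the raw bytes to the shared stream buffer; the audio
    // conversion step can run here or on a separate thread.
    ringBufferWrite(&_streamBuffer, [data bytes], [data length]);
}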

Now, to play the data you 'stored' in memory using the buffer, you need to use RemoteIO and Audio Units. Here is a good, comprehensive tutorial. You can remove the "record" part from the tutorial, as you don't really need it.
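For context, the audioUnit used in the snippet below is the RemoteIO output unit. A minimal sketch of obtaining and starting it (assuming kOutputBus is 0 as in the tutorial, and the dstFormat PCM description sketched earlier) looks like this:

static const AudioUnitElement kOutputBus = 0; // bus 0 is the output bus

AudioComponentDescription desc = {0};
desc.componentType         = kAudioUnitType_Output;
desc.componentSubType      = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent component = AudioComponentFindNext(NULL, &desc);
AudioComponentInstance audioUnit;
AudioComponentInstanceNew(component, &audioUnit);

// Tell the unit what format your stream buffer holds (here: the PCM
// format produced by the converter sketched above).
AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, kOutputBus,
                     &dstFormat, sizeof(dstFormat));
AudioUnitInitialize(audioUnit);
AudioOutputUnitStart(audioUnit);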

As you can see, the tutorial defines a callback for playback:

callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_SetRenderCallback, 
                              kAudioUnitScope_Global, 
                              kOutputBus,
                              &callbackStruct, 
                              sizeof(callbackStruct));

and the playbackCallback function looks like this:

static OSStatus playbackCallback(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData) {

    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        // Assuming 16-bit mono samples, each frame is 2 bytes wide.
        for (int j = 0; j < inNumberFrames * 2; j++) {
            // getNextPacket() is a function you have to write yourself;
            // it returns the next byte available in your stream buffer.
            frameBuffer[j] = getNextPacket();
        }
    }

    return noErr;
}

Basically, what it does is fill up the ioData buffer with the next chunk of bytes that needs to be played. Be sure to zero out (silence) the ioData buffer if there is no new data to play (the player is silenced if there is not enough data in the stream buffer).
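For example, a hedged variant of the loop body that falls back to silence on underrun might look like this (bytesAvailable() and ringBufferRead() are hypothetical helpers matching the buffer sketched earlier; streamBuffer would typically be recovered from inRefCon):

AudioBuffer buffer = ioData->mBuffers[i];
UInt32 bytesNeeded = inNumberFrames * 2; // 16-bit mono, as above
if (bytesAvailable(streamBuffer) < bytesNeeded) {
    // Not enough data yet: output silence instead of stale bytes.
    memset(buffer.mData, 0, buffer.mDataByteSize);
} else {
    ringBufferRead(streamBuffer, buffer.mData, bytesNeeded);
}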

Also, you can achieve the same thing with OpenAL, using alSourceQueueBuffers and alSourceUnqueueBuffers to queue buffers one after the other.
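A minimal sketch of that OpenAL approach is below; the buffer and source setup is omitted, and AL_FORMAT_MONO16 with a 44.1 kHz rate is an assumption matching the PCM format above (pcmChunk and pcmChunkSize stand in for your own stream-buffer reads):

#include <OpenAL/al.h>

// Recycle processed buffers and refill them with fresh PCM from the stream.
ALint processed = 0;
alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
while (processed-- > 0) {
    ALuint buf;
    alSourceUnqueueBuffers(source, 1, &buf);
    alBufferData(buf, AL_FORMAT_MONO16, pcmChunk, pcmChunkSize, 44100);
    alSourceQueueBuffers(source, 1, &buf);
}

// If the source starved and stopped, restart it.
ALint state = 0;
alGetSourcei(source, AL_SOURCE_STATE, &state);
if (state != AL_PLAYING) {
    alSourcePlay(source);
}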

That's it. Happy coding!
