iOS Stream Audio from one iOS Device to Another

I get a song from the device's iTunes library and shove it into an AVAsset:

- (void)mediaPicker:(MPMediaPickerController *)mediaPicker didPickMediaItems:(MPMediaItemCollection *)mediaItemCollection
{
    NSArray *arr = mediaItemCollection.items;

    MPMediaItem *song = [arr objectAtIndex:0];

    // Load the song's raw bytes via its media library asset URL
    NSData *songData = [NSData dataWithContentsOfURL:[song valueForProperty:MPMediaItemPropertyAssetURL]];
}

Then I have this Game Center method for receiving data:

- (void)match:(GKMatch *)match didReceiveData:(NSData *)data fromPlayer:(NSString *)playerID

I'm having a LOT of trouble figuring out how to send this AVAsset via GameCenter and then have it play on the receiving device.

I've read through: http://developer.apple.com/library/ios/#documentation/MusicAudio/Reference/AudioStreamReference/Reference/reference.html#//apple_ref/doc/uid/TP40006162

http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/MultimediaPG/UsingAudio/UsingAudio.html#//apple_ref/doc/uid/TP40009767-CH2-SW5

http://developer.apple.com/library/mac/#documentation/AVFoundation/Reference/AVAudioPlayerClassReference/Reference/Reference.html

http://developer.apple.com/library/mac/#documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/Introduction/Introduction.html

I am just lost. Information overload.

I've implemented Cocoa With Love's audio stream code, but I can't figure out how to take the NSData I receive through GameCenter and shove it into his code. http://cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html

Can someone please help me figure this out? So again, the part I need help with is simply breaking the song data up into packets (or however it works), iterating through those packets and sending them through GameKit, then parsing that data as it comes in on the receiving device and playing it as it comes in.

The API you need to look at is "Audio Queue Services".

Right, here's a basic overview of what you need to do.

When you play back audio, you set up a queue or a service. That queue will ask for some audio data. Then, when it has played all that back, it will ask for some more. This goes on until you either stop the queue or there's no more data to play back.
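To make that concrete, here is a minimal sketch of creating an output queue and its callback (the function names and user-data plumbing are my own placeholders, not SpeakHere's code):

#include <AudioToolbox/AudioToolbox.h>

// The queue invokes this whenever it wants more audio to play.
static void MyAQOutputCallback(void *inUserData,
                               AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer)
{
    // Fill inBuffer->mAudioData, set inBuffer->mAudioDataByteSize,
    // then hand the buffer back to the queue for playback.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static AudioQueueRef CreateOutputQueue(AudioStreamBasicDescription format,
                                       void *userData)
{
    AudioQueueRef queue = NULL;
    AudioQueueNewOutput(&format, MyAQOutputCallback, userData,
                        NULL, NULL, 0, &queue);
    return queue;
}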

The two main lower-level APIs in iOS are Audio Unit and Audio Queue. By lower level, I mean an API that is a bit more nitty-gritty than saying "just play back this mp3" or whatever.

My experience has been that Audio Unit gives lower latency, but that Audio Queue is more suited to streaming audio. So, I think for you the latter is a better option.

A key part of what you need to do is buffering. That means loading data far enough ahead that there are no gaps in your playback. You might want to handle this by initially loading a larger amount of data. Then you are playing ahead: you'll have a sufficiently large buffer in memory whilst simultaneously receiving more data on a background thread.
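Building on the sketch above, one way to get that head start is to prime several buffers by hand before starting the queue (the buffer count and size below are assumptions for you to tune):

// Assumed sizes: three buffers, each holding roughly 2 seconds of
// 16-bit stereo 44.1 kHz audio. Tune for your format and latency needs.
enum { kNumberBuffers = 3 };
static const UInt32 kBufferByteSize = 2 * 44100 * 2 * sizeof(SInt16);

static void PrimeAndStartPlayback(AudioStreamBasicDescription format,
                                  void *userData)
{
    AudioQueueRef queue = CreateOutputQueue(format, userData);

    // Fill and enqueue each buffer up front by invoking the callback
    // directly, so the queue starts with several seconds in hand.
    for (int i = 0; i < kNumberBuffers; ++i) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, kBufferByteSize, &buffer);
        MyAQOutputCallback(userData, queue, buffer);
    }

    AudioQueueStart(queue, NULL);
}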

The sample project I would recommend studying closely is SpeakHere. In particular, look at the classes SpeakHereController.mm and AQPlayer.mm.

The controller handles things like starting and stopping AQPlayer. AQPlayer represents an AudioQueue. Look closely at AQPlayer::AQBufferCallback. That's the callback method that is invoked when the queue wants more data.

You'll need to make sure that the setup of the queue data and the format of the data you receive match exactly. Check things like the number of channels (mono or stereo?), the number of frames, integers or floats, and the sample rate. If anything doesn't match up, you'll either get EXC_BAD_ACCESS errors as you work your way through the respective buffers, or you'll get white noise, or - in the case of wrong sample rates - audio that sounds slowed down or sped up.

Note that SpeakHere runs two audio queues: one for recording, and one for playback. All audio stuff works using buffers of data, so you're always passing pointers to the buffers around. For example, during playback you might have a memory buffer that holds 20 seconds of audio. Perhaps every second your callback will be invoked by the queue, essentially saying "give me another second's worth of data, please". You could think of it as a playback head that moves through your data requesting more information.

Let's look at this in a bit more detail. Unlike SpeakHere, you're going to be working with in-memory buffers rather than writing the audio out to a temporary file.

Note that if you're dealing with large amounts of data on an iOS device, you'll have no choice but to hold the bulk of it on disk. Especially if the user can replay the audio, rewind it, etc., you'll need to hold it all somewhere!

Anyway, assuming that AQPlayer will be reading from memory, we'll need to alter it as follows.

First, somewhere to hold the data, in AQPlayer.h:

void SetAudioBuffer(float *inAudioBuffer) { mMyAudioBuffer = inAudioBuffer; }
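For that to compile, AQPlayer also needs the backing members. Something like the following (these names are my own assumptions, chosen to match the snippets below; they are not SpeakHere's actual fields):

float  *mMyAudioBuffer;          // the raw samples received over Game Kit
SInt64  mPlayBufferPosition;     // playback head position, in bytes
SInt64  mPlayBufferEndPosition;  // total size of the audio data, in bytes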

You already have that data in an NSData object, so you can just pass in the pointer returned from a call to [myData bytes].
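For example (the player and receivedData variables here are hypothetical):

// Hypothetical call site: hand the received bytes to the player.
player->SetAudioBuffer((float *)[receivedData bytes]);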

What provides that data to the audio queue? That's the callback method set up in AQPlayer:

void AQPlayer::AQBufferCallback(void *                  inUserData,
                            AudioQueueRef           inAQ,
                            AudioQueueBufferRef     inCompleteAQBuffer) 

The function we'll use to add part of our data to the audio queue is AudioQueueEnqueueBuffer:

AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);

inAQ is the reference to the queue as received by our callback. inCompleteAQBuffer is the pointer to an audio queue buffer.

So how do you get your data - that is, the pointer returned by calling the bytes method on your NSData object - into the audio queue buffer inCompleteAQBuffer?

Using memcpy:

// Copy the next chunk of our in-memory audio into the queue's buffer
memcpy(inCompleteAQBuffer->mAudioData, THIS->mMyAudioBuffer + (THIS->mPlayBufferPosition / sizeof(float)), numBytesToCopy);

You'll also need to set the buffer size:

    inCompleteAQBuffer->mAudioDataByteSize = numBytesToCopy;

numBytesToCopy is always going to be the same, unless you're just about to run out of data. For example, if your buffer holds 2 seconds' worth of audio data and you have 9 seconds to play back, then for the first four callbacks you will pass 2 seconds' worth. For the final callback you will only have 1 second's worth of data left. numBytesToCopy must reflect that.

    // Calculate how many bytes are remaining. It could be less than a normal
    // buffer size. For example, if the buffer size is 0.5 seconds and recording
    // stopped halfway through that, we copy across only the recorded bytes
    // and we don't enqueue any more buffers.
    SInt64 numRemainingBytes = THIS->mPlayBufferEndPosition - THIS->mPlayBufferPosition;

    SInt64 numBytesToCopy = numRemainingBytes < THIS->mBufferByteSize ? numRemainingBytes : THIS->mBufferByteSize;

Finally, we advance the playback head. In our callback, we've given the queue some data to play. What happens next time we get the callback? We don't want to give the same data again. Not unless you're doing some funky DJ loop stuff!

So we advance the head, which is basically just a pointer into our audio buffer. The pointer moves through the buffer like the needle on a record:

    THIS->mPlayBufferPosition += numBytesToCopy;

That's it! There's some other logic, but you can get that from studying the full callback method in SpeakHere.
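Pulled together, the modified callback might look roughly like this - a sketch under the member-name assumptions above, not SpeakHere's actual code:

void AQPlayer::AQBufferCallback(void *inUserData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inCompleteAQBuffer)
{
    AQPlayer *THIS = (AQPlayer *)inUserData;

    // How much is left? It may be less than a full buffer's worth.
    SInt64 numRemainingBytes = THIS->mPlayBufferEndPosition - THIS->mPlayBufferPosition;
    SInt64 numBytesToCopy = numRemainingBytes < THIS->mBufferByteSize
                                ? numRemainingBytes : THIS->mBufferByteSize;
    if (numBytesToCopy <= 0)
        return;  // out of data: don't enqueue any more buffers

    // Copy the next chunk into the queue's buffer and mark its size.
    memcpy(inCompleteAQBuffer->mAudioData,
           THIS->mMyAudioBuffer + (THIS->mPlayBufferPosition / sizeof(float)),
           numBytesToCopy);
    inCompleteAQBuffer->mAudioDataByteSize = numBytesToCopy;

    // Hand the filled buffer back to the queue, then advance the head.
    AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    THIS->mPlayBufferPosition += numBytesToCopy;
}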

A couple of points I must emphasize. First, don't just copy and paste my code above. Absolutely make sure you understand what you are doing. Undoubtedly you'll hit problems, and you'll need to understand what's happening.

Secondly, make sure the audio formats are the same - and, even better, that you understand the audio format. This is covered in the Audio Queue Services Programming Guide, in Recording Audio. Look at Listing 2-8, Specifying an audio queue's audio data format.

It's crucial to understand that you have the most primitive unit of data, either an integer or a float. Mono or stereo, you have one or two channels in a frame. That defines how many integers or floats are in that frame. Then you have frames per packet (probably 1). Your sample rate determines how many of those packets you have per second.
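For illustration, here's how those pieces fit together for one plausible format - 16-bit signed integer, stereo, 44.1 kHz linear PCM (the values are examples; use whatever your actual data is in):

static AudioStreamBasicDescription MakeExampleFormat(void)
{
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100.0;               // packets per second
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                             | kLinearPCMFormatFlagIsPacked;
    format.mChannelsPerFrame = 2;                     // stereo
    format.mBitsPerChannel   = 16;                    // 16-bit integers
    format.mBytesPerFrame    = format.mChannelsPerFrame * (format.mBitsPerChannel / 8);
    format.mFramesPerPacket  = 1;                     // always 1 for linear PCM
    format.mBytesPerPacket   = format.mBytesPerFrame * format.mFramesPerPacket;
    return format;
}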

It's all covered in the docs. Just make sure everything matches up or you will have some pretty strange sounds!

Good luck!

What's your purpose? And how do you call [match sendDataToAllPlayers:...]? That decides how you get the AVAsset back from the data received.

These are the steps involved:

  1. get NSData from the mediaPicker for the music
  2. create an AVAsset (are you really doing that?)
  3. send it to the GKPlayers?
  4. receive NSData from Game Center
  5. get back an AVAsset, if you are doing step 2
  6. use AVPlayer's playerWithPlayerItem: to get an AVPlayer and play

If you are doing the encoding and transmission with your own method - for example, using AVAssetReader to get the raw data and sending that - then just do the same thing in reverse on the receiving side.
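For the sending step, a rough sketch of chunking the data for Game Kit (the chunk size and method name are my own assumptions; each GKMatch message has a size limit, so the song has to be split up):

static const NSUInteger kChunkSize = 4 * 1024;  // assumed per-message size

- (void)sendSongData:(NSData *)songData toMatch:(GKMatch *)match
{
    for (NSUInteger offset = 0; offset < songData.length; offset += kChunkSize) {
        NSUInteger size = MIN(kChunkSize, songData.length - offset);
        NSData *chunk = [songData subdataWithRange:NSMakeRange(offset, size)];

        NSError *error = nil;
        // Reliable mode delivers chunks in order, so the receiver can
        // simply append each one to an NSMutableData as it arrives.
        [match sendDataToAllPlayers:chunk
                       withDataMode:GKMatchSendDataReliable
                              error:&error];
    }
}

On the receiving side, append each chunk to an NSMutableData in match:didReceiveData:fromPlayer: and begin playback once enough has been buffered.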
