
Which framework should I use to play an audio file (WAV, MP3, AIFF) in iOS with low latency?

iOS offers a range of audio frameworks, from high-level APIs that simply play a specified file, down to low-level ones that give you access to the raw PCM data, and everything in between. For our app, we just need to play external files (WAV, AIFF, MP3), but we need to do so in response to a button press, and we need that latency to be as small as possible. (It's for cueing in live productions.)

Now, AVAudioPlayer and the like work for playing simple file assets (via their URLs), but the latency before the sound actually starts is too great. With larger files of more than five minutes in length, the delay before the sound starts can be over a second, which renders it all but useless for timing in a live performance.

Now I know things like OpenAL can be used for very-low-latency playback, but then you're waist-deep in audio buffers, audio sources, listeners, etc.

That said, does anyone know of any frameworks that work at a higher level (i.e. play 'MyBeddingTrack.mp3') with very low latency? Pre-buffering is fine; it's just that the trigger has to be fast.

Bonus if we can do things like set the start and end points of playback within the file, change the volume, or even perform ducking, etc.

The lowest latency you can get is with Audio Units, specifically the RemoteIO unit.

Remote I/O Unit

The Remote I/O unit (subtype kAudioUnitSubType_RemoteIO) connects to device hardware for input, output, or simultaneous input and output. Use it for playback, recording, or low-latency simultaneous input and output where echo cancelation is not needed.
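As a rough sketch of what driving RemoteIO looks like (iOS only, so this won't run outside a device/simulator; a real app would also configure the stream format and the audio session, which are omitted here):

```swift
import AudioToolbox

// Describe and instantiate the Remote I/O unit.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)

guard let component = AudioComponentFindNext(nil, &desc) else {
    fatalError("RemoteIO unit not found")
}
var unit: AudioComponentInstance?
AudioComponentInstanceNew(component, &unit)

// The render callback runs on a real-time thread: it must not block,
// take locks, or allocate. Here it just outputs silence; a player would
// copy pre-decoded PCM into ioData instead.
var callback = AURenderCallbackStruct(
    inputProc: { _, _, _, _, _, ioData -> OSStatus in
        if let abl = ioData {
            for buffer in UnsafeMutableAudioBufferListPointer(abl) {
                memset(buffer.mData, 0, Int(buffer.mDataByteSize))
            }
        }
        return noErr
    },
    inputProcRefCon: nil)

AudioUnitSetProperty(unit!, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 0,
                     &callback, UInt32(MemoryLayout<AURenderCallbackStruct>.size))

AudioUnitInitialize(unit!)
AudioOutputUnitStart(unit!)
```

Because your code hands samples directly to the hardware buffer, the trigger latency is bounded only by the hardware buffer duration, which is what makes this the lowest-latency option.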

Take a look at these tutorials:

http://atastypixel.com/blog/using-remoteio-audio-unit/

http://atastypixel.com/blog/playing-audio-in-time-using-remote-io/

Although the Audio Queue framework is relatively easy to use, it does a lot of DSP heavy lifting behind the scenes (e.g. if you supply it with VBR/compressed audio, it automatically converts it to PCM before playing it through the speaker, and it handles many of the threading issues opaquely for you), which is good news for someone writing a lightweight, non-real-time application.

You mentioned that you need it for cueing in live productions. I'm not sure if that means your app is real-time, because if it is, then Audio Queues will struggle to meet your needs. A good article to read about this is Ross Bencina's piece on real-time audio programming. The takeaway is that you can't afford to let third-party frameworks or libraries do anything potentially expensive behind the scenes, like taking locks or allocating and freeing memory; that's simply too expensive and risky for developing real-time audio apps.

That's where the Audio Unit framework comes in. Audio Queues are actually built on top of Audio Units (they automate much of the work), but Audio Units bring you as close to the metal as it gets on iOS. They are as responsive as you need them to be and can easily handle a real-time app. Audio Units have a huge learning curve, though. There are some open-source wrappers that simplify them (see Novocaine).

If I were you, I'd at least skim through Learning Core Audio; it's the go-to book for any iOS Core Audio developer. It discusses Audio Queues, Audio Units, etc. in detail and has excellent code examples.

From my own experience: I worked on a real-time audio app with some intensive audio requirements. I found the Audio Queue framework and thought it was too good to be true. My app worked when I prototyped it under light constraints, but it simply choked under stress testing, and that's when I had to dive deep into Audio Units and change the architecture (it wasn't pretty). My advice: work with Audio Queues at least as an introduction to Audio Units. Stick with them if they meet your needs, but don't be afraid to move to Audio Units if it becomes clear that Audio Queues no longer meet your app's demands.

You need the System Sound Services framework. It is made for things like user-interface sounds and other quick, responsive sounds. Take a look here.
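A minimal sketch of System Sound Services (iOS/macOS only; "Click.wav" is a placeholder asset name, and note the API is intended for short sounds of roughly 30 seconds or less):

```swift
import AudioToolbox

// Register the sound once, e.g. at launch. The system caches the
// decoded audio so later triggers are nearly instantaneous.
var soundID: SystemSoundID = 0
if let url = Bundle.main.url(forResource: "Click", withExtension: "wav") {
    AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
}

// On the button press: fire-and-forget, very low trigger latency.
AudioServicesPlaySystemSound(soundID)

// When the sound is no longer needed:
AudioServicesDisposeSystemSoundID(soundID)
```

The trade-off is control: there is no volume adjustment, no seeking, and no ducking with this API, so it suits short stingers more than full bedding tracks.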

AVAudioPlayer has a prepareToPlay method to preload its audio buffers. This might speed up response time significantly.
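A short sketch of that approach (iOS only; "MyBeddingTrack" is the file name from the question, used here as a placeholder):

```swift
import AVFoundation

// Create the player ahead of time, e.g. when the cue list loads.
guard let url = Bundle.main.url(forResource: "MyBeddingTrack",
                                withExtension: "mp3") else {
    fatalError("Audio file not found in bundle")
}
let player = try! AVAudioPlayer(contentsOf: url)

// Preload the audio buffers now, so play() only has to start the hardware.
player.prepareToPlay()

// Later, on the button press:
player.play()
```

This keeps the high-level conveniences the question asks for (AVAudioPlayer also exposes `volume` and `currentTime`, which cover the volume and start-point bonus requirements) while moving the expensive setup out of the trigger path.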

I ran into the same problem as you, but after some research I found a great framework. I am currently using kstenerud's ObjectAL sound framework. It is based on OpenAL and is well documented. You can play background music and sound effects on multiple layers.

Here is the project on GitHub: https://github.com/kstenerud/ObjectAL-for-iPhone and here is the website: http://kstenerud.github.com/ObjectAL-for-iPhone/index.html
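A hedged sketch of ObjectAL's high-level facade, OALSimpleAudio (the Swift method names below assume the default Objective-C bridging of that library, and "Stinger.wav" is a placeholder asset):

```swift
import ObjectAL  // assumes the ObjectAL library is added to the project

let audio = OALSimpleAudio.sharedInstance()

// Preload the effect so the decoded buffer is cached; the later
// playEffect call then triggers with minimal latency.
audio?.preloadEffect("Stinger.wav")

// Background bed (streamed) plus a low-latency effect layer on top.
audio?.playBg("MyBeddingTrack.mp3", loop: true)
audio?.playEffect("Stinger.wav")
```

This gives OpenAL-class trigger latency without hand-managing buffers, sources, and listeners yourself.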

The following SO question contains working code that plays a file using Audio Units, specifically the AudioFilePlayer. Even though the question states that it is not working, it worked out of the box for me; I only had to add an AUGraphStart(_graph) call at the end.

The 'ScheduledFilePrime' property of the AudioFilePlayer controls how much of the file is loaded before playback starts. You may want to play around with that.
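Setting that property looks roughly like this (a sketch assuming `filePlayerUnit` is the AudioFilePlayer node's AudioUnit obtained from the AUGraph in the linked answer):

```swift
import AudioToolbox

// Prime more frames up front to trade memory for a faster, glitch-free
// start when the play trigger fires. A value of 0 uses the default.
func primeFilePlayer(_ filePlayerUnit: AudioUnit, frames: UInt32) {
    var primeFrames = frames
    AudioUnitSetProperty(filePlayerUnit,
                         kAudioUnitProperty_ScheduledFilePrime,
                         kAudioUnitScope_Global, 0,
                         &primeFrames,
                         UInt32(MemoryLayout<UInt32>.size))
}
```

Call it after scheduling the file region but before starting the graph, then experiment with the frame count against your largest files.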

But as the others note, Audio Units have a steep learning curve.
