
Metronome. Timer, music and animations

I'm developing an app where the user has a few cells into which they can put sounds and then play the built sequence. There is a metronome that can tick with sound. Users can set the metronome speed, which is the same as setting the speed of moving to the next cell. I implemented this mechanism with a timer and a handler that highlights the current cell and plays its sounds. Everything works fine, but when I animate some views my timer stumbles. When the animation finishes, the timer works as expected. How can I resolve this issue?

I have tried implementing the timer via NSTimer , dispatch_after , performSelector:afterDelay: , CADisplayLink and dispatch_source_t . In every case I get problems during animations. I even tried implementing my own animations via CADisplayLink , calculating the animated views' frames myself, but this didn't help either.
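For reference, here is a minimal sketch of the kind of timer-driven sequencer I mean (the selector names are placeholders, not my real methods):

- (void)startMetronomeWithBPM:(double)bpm
{
    NSTimeInterval interval = 60.0 / bpm;
    NSTimer *timer = [NSTimer timerWithTimeInterval:interval
                                             target:self
                                           selector:@selector(_tick:)
                                           userInfo:nil
                                            repeats:YES];
    // Common modes keep the timer firing during tracking/scrolling,
    // but the ticks still stumble while view animations run.
    [[NSRunLoop mainRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];
}

- (void)_tick:(NSTimer *)timer
{
    [self highlightCurrentCell]; // placeholder: update the UI
    [self playCurrentCellSound]; // placeholder: trigger the cell's sound
}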

The only 100% reliable way I found of doing this is to set up, via Core Audio or AudioToolbox (https://developer.apple.com/documentation/audiotoolbox), an audio stream data provider that iOS calls at regular, fixed intervals to supply audio samples to the audio system.

It may look daunting at first, but once you've got it set up, you have full and precise control over the generated audio.

This is the code I used to set up the audio unit using AudioToolbox:

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Forward declaration of the render callback defined further below.
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData);

static AudioComponentInstance _audioUnit;
static int _outputAudioBus = 0; // RemoteIO output element 0

...

#pragma mark - Audio Unit

+(void)_activateAudioUnit
{
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    if([self _createAudioUnitInstance]
       && [self _setupAudioUnitOutput]
       && [self _setupAudioUnitFormat]
       && [self _setupAudioUnitRenderCallback]
       && [self _initializeAudioUnit]
       && [self _startAudioUnit]
       )
    {
        [self _adjustOutputLatency];
//        NSLog(@"Audio unit initialized");
    }
}

+(BOOL)_createAudioUnitInstance
{
    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    OSStatus status = AudioComponentInstanceNew(inputComponent, &_audioUnit);
    [self _logStatus:status step:@"instantiate"];
    return (status == noErr );
}

+(BOOL)_setupAudioUnitOutput
{
    UInt32 flag = 1;
    OSStatus status = AudioUnitSetProperty(_audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  _outputAudioBus,
                                  &flag,
                                  sizeof(flag));
    [self _logStatus:status step:@"set output bus"];
    return (status == noErr );
}

+(BOOL)_setupAudioUnitFormat
{
    AudioStreamBasicDescription audioFormat = {0};
    audioFormat.mSampleRate         = 44100.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 2;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket     = 4;
    audioFormat.mBytesPerFrame      = 4;

    OSStatus status = AudioUnitSetProperty(_audioUnit,
                                           kAudioUnitProperty_StreamFormat,
                                           kAudioUnitScope_Input,
                                           _outputAudioBus,
                                           &audioFormat,
                                           sizeof(audioFormat));
    [self _logStatus:status step:@"set audio format"];
    return (status == noErr );
}

+(BOOL)_setupAudioUnitRenderCallback
{
    AURenderCallbackStruct audioCallback;
    audioCallback.inputProc = playbackCallback;
    audioCallback.inputProcRefCon = (__bridge void *)(self);
    OSStatus status = AudioUnitSetProperty(_audioUnit,
                                           kAudioUnitProperty_SetRenderCallback,
                                           kAudioUnitScope_Global,
                                           _outputAudioBus,
                                           &audioCallback,
                                           sizeof(audioCallback));
    [self _logStatus:status step:@"set render callback"];
    return (status == noErr);
}


+(BOOL)_initializeAudioUnit
{
    OSStatus status = AudioUnitInitialize(_audioUnit);
    [self _logStatus:status step:@"initialize"];
    return (status == noErr);
}

+(void)start
{
    [self clearFeeds]; // app-specific helper (not shown here)
    [self _startAudioUnit];
}

+(void)stop
{
    [self _stopAudioUnit];
}

+(BOOL)_startAudioUnit
{
    OSStatus status = AudioOutputUnitStart(_audioUnit);
    [self _logStatus:status step:@"start"];
    return (status == noErr);
}

+(BOOL)_stopAudioUnit
{
    OSStatus status = AudioOutputUnitStop(_audioUnit);
    [self _logStatus:status step:@"stop"];
    return (status == noErr);
}

+(void)_logStatus:(OSStatus)status step:(NSString *)step
{
    if( status != noErr )
    {
        NSLog(@"AudioUnit failed to %@, error: %d", step, (int)status);
    }
}

Finally, once this is started, my registered audio callback will be the one providing the audio:

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {

    @autoreleasepool {
        AudioBuffer *audioBuffer = ioData->mBuffers;

        // .. fill in audioBuffer with Metronome sample data, fill the in-between ticks with 0s
    }
    return noErr;
}

You can use a sound editor like Audacity (https://www.audacityteam.org/download/mac/) to edit and save your file as a raw PCM mono/stereo data file, or you can use one of the AVFoundation APIs to retrieve the audio samples from any of the supported audio file formats. Load your samples into a buffer, keep track of where you left off between audio callback frames, and feed in your metronome sample interleaved with zeros.
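As a sketch of that bookkeeping (the names _tickSamples, _tickLength, _framesPerBeat and _framePosition are assumed state your app would own, matching the 16-bit interleaved stereo format configured above):

// Assumed state, loaded/computed elsewhere:
static SInt16 *_tickSamples;   // raw PCM samples of one metronome tick (mono)
static UInt32  _tickLength;    // tick length in frames
static UInt32  _framesPerBeat; // sampleRate * 60 / bpm
static UInt64  _framePosition; // running frame counter across callbacks

static void fillBuffer(AudioBuffer *audioBuffer, UInt32 inNumberFrames)
{
    SInt16 *out = (SInt16 *)audioBuffer->mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        UInt32 posInBeat = (UInt32)(_framePosition % _framesPerBeat);
        // Inside the tick: copy sample data; between ticks: silence (0).
        SInt16 sample = (posInBeat < _tickLength) ? _tickSamples[posInBeat] : 0;
        out[2 * i]     = sample; // left channel
        out[2 * i + 1] = sample; // right channel
        _framePosition++;
    }
}

Keep the render path free of Objective-C allocations and locks: precompute everything on the main thread and only read it from the callback.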

The beauty of this is that you can now rely on iOS's AudioToolbox to prioritize your code, so the audio and the view animations don't interfere with each other.

Cheers and Good Luck!

I found a solution by playing with Apple's AVAudioEngine sample HelloMetronome, and I understood the main idea: schedule the sounds, and handle the callbacks to update the UI. Using timers to start playing sounds and to update the UI was absolutely wrong.
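For anyone else who lands here, this is roughly the shape of the scheduling (a sketch of the idea, not Apple's HelloMetronome code itself; _player, _tickBuffer, self.bpm and highlightCellForBeat: are assumptions standing in for your own setup):

// Sketch only: _player is an AVAudioPlayerNode attached to a running
// AVAudioEngine, and _tickBuffer is a pre-loaded AVAudioPCMBuffer.
- (void)scheduleBeat:(NSUInteger)beatNumber
{
    double secondsPerBeat = 60.0 / self.bpm;
    double sampleRate = _tickBuffer.format.sampleRate;
    AVAudioFramePosition beatFrame =
        (AVAudioFramePosition)(beatNumber * secondsPerBeat * sampleRate);
    AVAudioTime *when = [AVAudioTime timeWithSampleTime:beatFrame atRate:sampleRate];

    [_player scheduleBuffer:_tickBuffer
                     atTime:when
                    options:0
          completionHandler:^{
        // Audio timing is owned by the engine; only the UI hops to the main queue.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self highlightCellForBeat:beatNumber]; // hypothetical UI hook
        });
        [self scheduleBeat:beatNumber + 1]; // chain the next beat
    }];
}

Because each buffer is scheduled at an explicit sample time, the beat stays accurate even if the completion handler fires a little late, which is exactly what a run-loop timer cannot guarantee during animations.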
