Metronome. Timer, music and animations

I am developing an app where the user has a few cells into which they can put sounds, and can then play back the built sequence. There is a metronome that can tick with sound. The user can set the metronome speed, which is the same as setting the speed at which playback advances to the next cell. I implemented this mechanism with a timer whose handler highlights the current cell and plays its sounds. Everything works fine, but when I animate some views, my timer stumbles. Once the animation finishes, the timer works as expected again. How can I resolve this issue?

I have tried implementing the timer with NSTimer, dispatch_after, performSelector:afterDelay:, CADisplayLink, and dispatch_source_t. In every case I get problems during animations. I even tried implementing my own animation with CADisplayLink, calculating the animated views' frames myself, but that didn't help either.

The only 100% reliable way I found of doing this is to set up, via Core Audio or AudioToolbox (https://developer.apple.com/documentation/audiotoolbox), an audio stream data provider that iOS calls at regular, fixed intervals so that it can supply the audio samples to the audio system.

It may look daunting at first, but once you have it set up, you have full and precise control over what is generated as audio.

This is the code I used to set up the AudioUnit using AudioToolbox:

#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

static AudioComponentInstance _audioUnit;
static int _outputAudioBus; // bus 0 is the RemoteIO output bus

...

#pragma mark - Audio Unit

+(void)_activateAudioUnit
{
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    if([self _createAudioUnitInstance]
       && [self _setupAudioUnitOutput]
       && [self _setupAudioUnitFormat]
       && [self _setupAudioUnitRenderCallback]
       && [self _initializeAudioUnit]
       && [self _startAudioUnit]
       )
    {
        [self _adjustOutputLatency]; // not shown here; presumably compensates for output latency
//        NSLog(@"Audio unit initialized");
    }
}

+(BOOL)_createAudioUnitInstance
{
    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    OSStatus status = AudioComponentInstanceNew(inputComponent, &_audioUnit);
    [self _logStatus:status step:@"instantiate"];
    return (status == noErr );
}

+(BOOL)_setupAudioUnitOutput
{
    UInt32 flag = 1;
    OSStatus status = AudioUnitSetProperty(_audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  _outputAudioBus,
                                  &flag,
                                  sizeof(flag));
    [self _logStatus:status step:@"set output bus"];
    return (status == noErr );
}

+(BOOL)_setupAudioUnitFormat
{
    AudioStreamBasicDescription audioFormat = {0};
    audioFormat.mSampleRate         = 44100.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 2;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket     = 4;
    audioFormat.mBytesPerFrame      = 4;

    OSStatus status = AudioUnitSetProperty(_audioUnit,
                                           kAudioUnitProperty_StreamFormat,
                                           kAudioUnitScope_Input,
                                           _outputAudioBus,
                                           &audioFormat,
                                           sizeof(audioFormat));
    [self _logStatus:status step:@"set audio format"];
    return (status == noErr );
}

+(BOOL)_setupAudioUnitRenderCallback
{
    AURenderCallbackStruct audioCallback;
    audioCallback.inputProc = playbackCallback;
    audioCallback.inputProcRefCon = (__bridge void *)(self);
    OSStatus status = AudioUnitSetProperty(_audioUnit,
                                           kAudioUnitProperty_SetRenderCallback,
                                           kAudioUnitScope_Global,
                                           _outputAudioBus,
                                           &audioCallback,
                                           sizeof(audioCallback));
    [self _logStatus:status step:@"set render callback"];
    return (status == noErr);
}


+(BOOL)_initializeAudioUnit
{
    OSStatus status = AudioUnitInitialize(_audioUnit);
    [self _logStatus:status step:@"initialize"];
    return (status == noErr);
}

+(void)start
{
    [self clearFeeds]; // not shown here; presumably resets the sample-feed state before restarting
    [self _startAudioUnit];
}

+(void)stop
{
    [self _stopAudioUnit];
}

+(BOOL)_startAudioUnit
{
    OSStatus status = AudioOutputUnitStart(_audioUnit);
    [self _logStatus:status step:@"start"];
    return (status == noErr);
}

+(BOOL)_stopAudioUnit
{
    OSStatus status = AudioOutputUnitStop(_audioUnit);
    [self _logStatus:status step:@"stop"];
    return (status == noErr);
}

+(void)_logStatus:(OSStatus)status step:(NSString *)step
{
    if( status != noErr )
    {
        NSLog(@"AudioUnit failed to %@, error: %d", step, (int)status);
    }
}

Finally, once this is started, my registered audio callback will be the one providing the audio:

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {

    @autoreleasepool {
        AudioBuffer *audioBuffer = ioData->mBuffers;

        // .. fill in audioBuffer with Metronome sample data, fill the in-between ticks with 0s
    }
    return noErr;
}

You can use a sound editor like Audacity (https://www.audacityteam.org/download/mac/) to edit your tick sound and save it as a RAW PCM mono/stereo data file, or you can use one of the AVFoundation APIs to retrieve the audio samples from any of the supported audio file formats. Load your samples into a buffer, keep track of where you left off between audio callback frames, and feed in your metronome sample interleaved with zeros.
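To make that concrete, here is a minimal sketch of what the fill logic inside playbackCallback could look like for the 16-bit interleaved stereo format configured above. The _tickSamples, _tickSampleCount, _samplesPerBeat, and _beatPhase names are hypothetical state assumed to be set up elsewhere; they are not part of the original code.

static SInt16 *_tickSamples;       // hypothetical: interleaved L/R 16-bit PCM tick data, loaded elsewhere
static UInt32 _tickSampleCount;    // hypothetical: tick length, in frames
static UInt32 _samplesPerBeat;     // hypothetical: e.g. 44100 * 60 / bpm
static UInt32 _beatPhase;          // hypothetical: frames elapsed since the last beat

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        SInt16 left = 0, right = 0;              // silence between ticks
        if (_beatPhase < _tickSampleCount) {     // still inside the tick sound
            left  = _tickSamples[2 * _beatPhase];
            right = _tickSamples[2 * _beatPhase + 1];
        }
        *out++ = left;
        *out++ = right;
        if (++_beatPhase >= _samplesPerBeat) {   // beat boundary reached
            _beatPhase = 0;
        }
    }
    return noErr;
}

Because this runs on the real-time audio thread, the fill loop should stay allocation-free and avoid locks and Objective-C messaging.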

The beauty of this is that you can now rely on iOS's AudioToolbox to prioritize your code, so the audio and the view animations don't interfere with each other.

Cheers and Good Luck!

I found a solution by playing with Apple's AVAudioEngine example HelloMetronome. I understood the main idea: you have to schedule the sounds ahead of time and handle the callbacks in the UI. Using any kind of timer to start playing sounds and to update the UI was absolutely wrong.
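As a rough sketch of that idea (my own illustration of the HelloMetronome approach, not the sample's actual code; _player, _tickBuffer, and _bpm are assumed to be set up elsewhere, with the engine started and the player node attached and connected): schedule each tick buffer on an AVAudioPlayerNode at a sample-accurate AVAudioTime and, from the completion handler, schedule the following beat and dispatch the UI update to the main queue.

#import <AVFoundation/AVFoundation.h>

static AVAudioPlayerNode *_player;        // assumed: attached to a running AVAudioEngine elsewhere
static AVAudioPCMBuffer *_tickBuffer;     // assumed: one metronome tick, loaded elsewhere
static AVAudioFramePosition _nextBeatSampleTime;
static double _bpm = 120.0;

static void scheduleNextBeat(void)
{
    double sampleRate = _tickBuffer.format.sampleRate;
    AVAudioFramePosition samplesPerBeat = (AVAudioFramePosition)(sampleRate * 60.0 / _bpm);
    AVAudioTime *beatTime = [AVAudioTime timeWithSampleTime:_nextBeatSampleTime
                                                     atRate:sampleRate];
    [_player scheduleBuffer:_tickBuffer
                     atTime:beatTime
                    options:0
          completionHandler:^{
              scheduleNextBeat();   // keep the chain going from the audio side
              dispatch_async(dispatch_get_main_queue(), ^{
                  // highlight the current cell and do other UI work here
              });
          }];
    _nextBeatSampleTime += samplesPerBeat;
}

Since every buffer's start time is given in samples, the audio engine keeps the beat exact on its own; the completion handler only does bookkeeping and UI updates, so view animations can no longer disturb the timing.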
