
Swift: How to wait for an asynchronous, @escaping closure (inline)

How is it possible to wait for an @escaping closure to complete inline before proceeding?

I am using the write method from AVSpeechSynthesizer, which takes an @escaping closure, so the first AVAudioBuffer from the callback only arrives after createSpeechToBuffer has already returned.

func write(_ utterance: AVSpeechUtterance, toBufferCallback bufferCallback: @escaping AVSpeechSynthesizer.BufferCallback)

My method writes speech to a buffer, then resamples and manipulates the output, for a workflow in which speech is generated faster than real time.

The goal is to perform the task inline, to avoid restructuring the workflow to wait for the 'didFinish' delegate callback:

speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance)

I believe this question can be generalized to dealing with @escaping closures within a function/method.

import Cocoa
import AVFoundation

let _speechSynth = AVSpeechSynthesizer()

func resampleBuffer( inSource: AVAudioPCMBuffer, newSampleRate: Float) -> AVAudioPCMBuffer
{
    // simulate resample data here
    let testCapacity     = 1024
    let audioFormat      = AVAudioFormat(standardFormatWithSampleRate: Double(newSampleRate), channels: 2)
    let simulateResample = AVAudioPCMBuffer(pcmFormat: audioFormat!, frameCapacity: UInt32(testCapacity))
    return simulateResample!
}

func createSpeechToBuffer( stringToSpeak: String, sampleRate: Float) -> AVAudioPCMBuffer?
{
    var outBuffer    : AVAudioPCMBuffer? = nil
    let utterance    = AVSpeechUtterance(string: stringToSpeak)
    var speechIsBusy = true
    utterance.voice  = AVSpeechSynthesisVoice(language: "en-us")
    let semaphore = DispatchSemaphore(value: 0)
    
    _speechSynth.write(utterance) { (buffer: AVAudioBuffer) in

        guard let pcmBuffer = buffer as? AVAudioPCMBuffer else {
            fatalError("unknown buffer type: \(buffer)")
        }
        
        if ( pcmBuffer.frameLength == 0 ) {
            print("buffer is empty")
        } else {
            print("buffer has content \(buffer)")
        }
        
        outBuffer    = resampleBuffer( inSource: pcmBuffer, newSampleRate: sampleRate)
        speechIsBusy = false
//        semaphore.signal()
    }
    
    // wait for completion of func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance)
    
//        while ( _speechSynth.isSpeaking )
//        {
//            /* arbitrary task waiting for write to complete */
//        }
//
//        while ( speechIsBusy )
//        {
//            /* arbitrary task waiting for write to complete */
//        }
//    semaphore.wait()
    return outBuffer
}

print("SUCCESS is waiting, returning the non-nil output from the resampleBuffer method.")

for indx in 1...10
{
    let sentence  = "This is sentence number \(indx). [[slnc 3000]] \n"
    let outBuffer = createSpeechToBuffer( stringToSpeak: sentence, sampleRate: 48000.0)
    print("outBuffer: \(String(describing: outBuffer))")
}

After I wrote the createSpeechToBuffer method and it failed to produce the desired output (inline), I realized that it returns before getting the results of the resampling. The callback is escaping, so the first AVAudioBuffer from the callback only arrives after createSpeechToBuffer has already returned. The actual resampling does work; however, I currently must save the result and continue only after being notified via the delegate's "didFinish utterance" callback.

Attempts at waiting on _speechSynth.isSpeaking, the speechIsBusy flag, a dispatch queue, and a semaphore all block the write method (_speechSynth.write) from completing.

How is it possible to wait for the result inline, rather than rebuilding the workflow around the "didFinish utterance" delegate callback?

I'm on macOS 11.4 (Big Sur), but I believe this question applies to both macOS and iOS.

It looks to me as though the commented-out code for DispatchSemaphore would work if the @escaping closure were run concurrently, and I think the problem is that it is run serially, or more accurately, not run at all, because it is scheduled to run serially. I'm not specifically familiar with the AVSpeechSynthesizer API, but from your description, it sounds to me as though it's calling back on the main dispatch queue, which is a serial queue. You call wait to block until _speechSynth.write completes, but that blocks the main thread, which prevents it from ever continuing to the next iteration of the run loop, so the actual work of _speechSynth.write never even starts.

Let's back up. Somewhere behind the scenes your closure is almost certainly called via DispatchQueue.main's async method, either because that's where speechSynth.write does its work and then calls your closure synchronously on the current thread at the time, or because it explicitly calls it on the main thread.

A lot of programmers are sometimes confused as to exactly what async does. All async means is "schedule this task and return control to the caller immediately". That's it. It does not mean that the task will be run concurrently, only that it will be run later. Whether it is run concurrently or serially is an attribute of the DispatchQueue whose async method is being called. Concurrent queues spin up threads for their tasks, which either can be run in parallel on different CPU cores (true concurrency), or interleaved with the current thread on the same core (preemptive multitasking). Serial queues, on the other hand, have a run loop as in NSRunLoop, and run their scheduled tasks synchronously after dequeuing them.
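
As a quick, self-contained illustration of that distinction (nothing here is specific to AVSpeechSynthesizer; the queue labels are arbitrary):

import Foundation

// async only means "enqueue this task and return to the caller immediately".
// Whether enqueued tasks may overlap is a property of the queue itself.
let serialQueue     = DispatchQueue(label: "demo.serial")                               // serial by default
let concurrentQueue = DispatchQueue(label: "demo.concurrent", attributes: .concurrent)

serialQueue.async     { print("serial task 1") }
serialQueue.async     { print("serial task 2") }      // always runs after task 1 finishes
concurrentQueue.async { print("concurrent task A") }
concurrentQueue.async { print("concurrent task B") }  // may run at the same time as task A

Thread.sleep(forTimeInterval: 1)   // crude wait so the background tasks get a chance to run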

To illustrate what I mean, the main run loop looks vaguely like this, and other run loops are similar:

while !quit
{
    if an event is waiting {
        dispatch the event <-- Your code is likely blocking in here
    }
    else if a task is waiting in the queue 
    {
        dequeue the task
        execute the task <-- Your closure would be run here
    }
    else if a timer has expired {
       run timer task
    }
    else if some view needs updating {
        call the view's draw(rect:) method
    }
    else { probably other things I'm forgetting }
}

createSpeechToBuffer is almost certainly being run in response to some event processing, which means that when it blocks, it does not return back to the run loop to continue to the next iteration where it checks for tasks in the queue... which, from the behavior you describe, seems to include the work being done by _speechSynth.write... the very thing you're waiting for.
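
The same failure mode can be reproduced without AVSpeechSynthesizer at all. A minimal sketch (run from the main thread, this deadlocks for exactly the reason described above):

let semaphore = DispatchSemaphore(value: 0)

// Scheduled on the main (serial) queue, so it can only run once the
// current main-thread work returns to the run loop...
DispatchQueue.main.async {
    semaphore.signal()
}

// ...but this blocks the main thread right now, so the signal above
// never gets a chance to run: a classic deadlock.
semaphore.wait()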

You can try explicitly creating a .concurrent DispatchQueue and using it to wrap the call to _speechSynth.write in an explicit async call, but that probably won't work, and even if it does, it will be fragile to changes Apple might make to AVSpeechSynthesizer's implementation.
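
For concreteness, the shape of that attempt would be roughly the following (a hypothetical wrapper that reuses _speechSynth and resampleBuffer from the question's code; as noted, it still blocks the calling thread and will deadlock if the callback needs that same thread):

let speechWorkQueue = DispatchQueue(label: "speech.work", attributes: .concurrent)

func createSpeechToBufferBlocking(stringToSpeak: String, sampleRate: Float) -> AVAudioPCMBuffer?
{
    var outBuffer: AVAudioPCMBuffer? = nil
    let utterance = AVSpeechUtterance(string: stringToSpeak)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-us")
    let semaphore = DispatchSemaphore(value: 0)

    // Schedule the write on the concurrent queue and hope its callback
    // is not delivered on the thread we are about to block.
    speechWorkQueue.async {
        _speechSynth.write(utterance) { buffer in
            if let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 {
                outBuffer = resampleBuffer(inSource: pcmBuffer, newSampleRate: sampleRate)
            }
            semaphore.signal()
        }
    }

    semaphore.wait()   // still blocks the caller; deadlocks if the callback needs this thread
    return outBuffer
}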

The safe way is to not block... but that means re-thinking your workflow a little. Basically, whatever code would be called after createSpeechToBuffer returns should be called at the end of your closure. Of course, as currently written, createSpeechToBuffer doesn't know what that code is (nor should it). The solution is to inject it as a parameter... meaning createSpeechToBuffer itself would also take an @escaping closure. And of course, that means it can't return the buffer, but instead passes it to the closure.

func createSpeechToBuffer(
    stringToSpeak: String,
    sampleRate: Float,
    onCompletion: @escaping (AVAudioPCMBuffer?) -> Void) 
{
    let utterance    = AVSpeechUtterance(string: stringToSpeak)
    utterance.voice  = AVSpeechSynthesisVoice(language: "en-us")
    
    _speechSynth.write(utterance) { (buffer: AVAudioBuffer) in

        guard let pcmBuffer = buffer as? AVAudioPCMBuffer else {
            fatalError("unknown buffer type: \(buffer)")
        }
        
        if ( pcmBuffer.frameLength == 0 ) {
            print("buffer is empty")
        } else {
            print("buffer has content \(buffer)")
        }
        
        onCompletion(
            resampleBuffer(
                inSource: pcmBuffer, 
                newSampleRate: sampleRate
            )
        )
    }
}

If you really want to maintain the existing API, the other approach is to move the entire workflow itself to a .concurrent DispatchQueue, which you can block to your heart's content without worrying that it will block the main thread. AVSpeechSynthesizer could schedule its work wherever it likes without a problem.
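
A rough sketch of that approach, assuming the question's original semaphore-based createSpeechToBuffer with its semaphore.signal()/semaphore.wait() lines uncommented, so that all of the blocking happens off the main thread (the queue label is illustrative):

let workflowQueue = DispatchQueue(label: "speech.workflow", attributes: .concurrent)

workflowQueue.async {
    // The loop below may block freely; the main run loop keeps running,
    // so AVSpeechSynthesizer can still deliver its buffers and the waits can finish.
    for indx in 1...10 {
        let sentence  = "This is sentence number \(indx). [[slnc 3000]] \n"
        let outBuffer = createSpeechToBuffer(stringToSpeak: sentence, sampleRate: 48000.0)
        print("outBuffer: \(String(describing: outBuffer))")
    }
}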

If using Swift 5.5 is an option, you might look into its async and await keywords. The compiler enforces a proper async context for them so that you don't block the main thread.
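
For example, a hedged sketch of bridging the callback into async/await with a checked continuation (the function name is hypothetical, and this simplification resumes on the first non-empty buffer only; a production version would also need to handle the case where no non-empty buffer ever arrives):

func speechBuffer(for stringToSpeak: String, sampleRate: Float) async -> AVAudioPCMBuffer? {
    let utterance   = AVSpeechUtterance(string: stringToSpeak)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-us")

    return await withCheckedContinuation { continuation in
        var resumed = false
        _speechSynth.write(utterance) { buffer in
            // The write callback can fire more than once, but a checked
            // continuation must be resumed exactly once.
            guard !resumed,
                  let pcmBuffer = buffer as? AVAudioPCMBuffer,
                  pcmBuffer.frameLength > 0
            else { return }
            resumed = true
            continuation.resume(returning: resampleBuffer(inSource: pcmBuffer, newSampleRate: sampleRate))
        }
    }
}

// From an async context:
// let buffer = await speechBuffer(for: "Hello", sampleRate: 48000.0)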

Update to answer how to call my version.更新以回答如何调用我的版本。

Let's say your code that calls createSpeechToBuffer currently looks like this:

guard let buffer = createSpeechToBuffer(stringToSpeak: "Hello", sampleRate: sampleRate)
else { fatalError("Could not create speechBuffer") }

doSomethingWithSpeechBuffer(buffer)

You'd call the new version like this:

createSpeechToBuffer(stringToSpeak: "Hello", sampleRate: sampleRate) 
{
    guard let buffer = $0 else {
        fatalError("Could not create speechBuffer")
    }

    doSomethingWithSpeechBuffer(buffer)
}
