
AudioKit v5 output problems, no sound when AVAudioSession defaultToSpeaker is used

EDIT #2: OK, I missed something big here, but I still have a problem. The reason the sound is soft and I have to amplify it is that it is coming from the earpiece, not the speaker. When I add the option .defaultToSpeaker to setCategory, I get no sound at all.

So, this is the real problem: when I set the category to .playAndRecord and the option to .defaultToSpeaker, why do I get no sound at all on a real phone? In addition to no sound, I did not receive input from the mic either. The sound is fine in the simulator.

EDIT #3: I began observing route changes and my code reports the following when the .defaultToSpeaker option is included.

2020-12-26 12:17:56.212366-0700 SST[13807:3950195] Current route:

2020-12-26 12:17:56.213275-0700 SST[13807:3950195] <AVAudioSessionRouteDescription: 0x2816af8e0, 
inputs = (
    "<AVAudioSessionPortDescription: 0x2816af900, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
); 
outputs = (
    "<AVAudioSessionPortDescription: 0x2816af990, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
)>

The output is set to Speaker. Is it significant that the selectedDataSource is (null)? Before the .defaultToSpeaker option was added, this reported the output as Receiver, also with selectedDataSource = (null), so I would guess not.
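For reference, the route-change observation itself is just the standard AVFoundation notification; a minimal sketch (not necessarily my exact code) that produces a dump like the one above, assuming the returned observer token is kept alive by the caller:

import AVFoundation

// Log the reason and the current route every time iOS re-routes audio.
let routeChangeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main) { notification in
        if let value = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
           let reason = AVAudioSession.RouteChangeReason(rawValue: value) {
            print("Route change reason: \(reason.rawValue)")
        }
        print("Current route:")
        print(AVAudioSession.sharedInstance().currentRoute)
    }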

EDIT: I added the code to set the Audio Session category. The new code is shown below. So far it seems to have no effect. If I leave it in or comment it out, I don't see any difference. I also have code (that I deleted here for simplicity) that modifies the microphone pattern. That too had no discernible effect. Perhaps though, that is to be expected?

In addition to the symptoms below, if I use Settings/Bluetooth to select the AirPods, I get no output from the app at all, even after I disconnect the AirPods.

What am I missing here?

/EDIT

After getting this to work well on the simulator, I moved to debugging on my 11 Pro Max. When playing notes on the MandolinString, the sound from the simulator (11 Pro Max or an 8) is loud and clear. On the real phone, the sound is barely audible and comes from the phone speaker only. It does not go to an attached audio device, be that a HomePod or AirPods. Is this a v5 bug? Do I need to do something with the output?

A second, less important issue is that when I instantiate this object, the MandolinString triggers without me calling anything. The extra fader and the reset of the gain from 0 to 1 after a delay suppress this sound (a sketch of that delayed reset follows the code listing below).

    private let engine    = AudioEngine()
    private let mic       : AudioEngine.InputNode
    private let micAmp    : Fader
    private let mixer1    : Mixer
    private let mixer2    : Mixer
    private let silence   : Fader
    private let stringAmp : Fader
    private var pitchTap  : PitchTap
    
    private var tockAmp   : Fader
    private var metro     = Timer()
    
    private let sampler   = MIDISampler(name: "click")

    private let startTime = NSDate.timeIntervalSinceReferenceDate
    private var ampThreshold: AUValue = 0.12
    private var ampJumpSize: AUValue = 0.05

    private var samplePause = 0
    private var trackingNotStarted = true
    private var tracking = false
    private var ampPrev: AUValue = 0.0
    private var freqArray: [AUValue] = []

    // These three were not shown in the original post; they are inferred from
    // their use in init() below so the snippet reads as a complete property list.
    private let audioSession = AVAudioSession.sharedInstance()
    private var pluckedString: MandolinString!
    private var akStartSucceeded = false
    
    init() {
        
        // Set up mic input and pitchtap
        mic = engine.input!
        micAmp = Fader(mic, gain: 1.0)
        mixer1 = Mixer(micAmp)
        silence = Fader(mixer1, gain: 0)
        mixer2 = Mixer(silence)
        pitchTap = PitchTap(mixer1, handler: {_ , _ in })
        
        // All sound is fed into mixer2
        // Mic input is faded to zero
        
        // Now add String sound to Mixer2 with a Fader
        pluckedString = MandolinString()
        stringAmp = Fader(pluckedString, gain: 4.0)
        mixer2.addInput(stringAmp)
        
        // Create a sound for the metronome (tock), add as input to mixer2
        try! sampler.loadWav("Click")
        tockAmp = Fader(sampler, gain: 1.0)
        mixer2.addInput(tockAmp)

        engine.output = mixer2

        self.pitchTap = PitchTap(micAmp,
                                 handler:
        { freq, amp in
            if (self.samplePause <= 0 && self.tracking) {
                self.samplePause = 0
                self.sample(freq: freq[0], amp: amp[0])
            }
        })
        
        do {
            //try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.measurement)
            try audioSession.setCategory(AVAudioSession.Category.playAndRecord)
            //, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
            try audioSession.setActive(true)
        } catch let error as NSError {
            print("Unable to create AudioSession: \(error.localizedDescription)")
        }
        
        do {
            try engine.start()
            akStartSucceeded = true
        } catch {
            akStartSucceeded = false
        }
    } // init
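The delayed gain reset mentioned above is not in the listing; a minimal sketch of what it could look like at the end of init(), assuming the string's fader (stringAmp here) is the one being reset and that the Fader's gain is settable after creation (the 0.5 s delay is illustrative):

// Start the string's fader silent so the spurious pluck at instantiation is inaudible,
// then restore an audible gain after a short delay.
stringAmp.gain = 0
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { [weak self] in
    self?.stringAmp.gain = 1
}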

Xcode 12, iOS 14, SPM. Everything is up to date.

Most likely this is not an AudioKit issue per se; it has to do with AVAudioSession. You probably need to set the session on the device to use .defaultToSpeaker. AudioKit 5 has less automatic session management compared to version 4, opting to make fewer assumptions and let the developer have control.

The answer was indeed to add code for AVAudioSession. However, it did not work where I first put it. It only worked for me when I put it in the App delegate's didFinishLaunchingWithOptions. I found this in the AudioKit Cookbook. This works:

import AVFoundation
import AudioKit
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.

        #if os(iOS)
            self.audioSetup()
        #endif

        return true
    }

    #if os(iOS)
    func audioSetup() {
        let session = AVAudioSession.sharedInstance()
        
        do {
            Settings.bufferLength = .short
            try session.setPreferredIOBufferDuration(Settings.bufferLength.duration)
            try session.setCategory(.playAndRecord,
                                    options: [.defaultToSpeaker, .mixWithOthers])
            try session.setActive(true)
        } catch let err {
            print(err)
        }
    
        // Other AudioSession stuff here
        
        do {
            try session.setActive(true)
        } catch let err {
            print(err)
        }
    }
    #endif

}
