
AudioKit v5 output problems, no sound when AVAudioSession defaultToSpeaker is used

EDIT #2: OK, I missed something big here, but I still have a problem. The reason the sound is soft and has to be amplified is that it is coming from the earpiece, not the speaker. When I add the .defaultToSpeaker option to setCategory, I get no sound at all.

So this is the real problem: when I set the category to .playAndRecord and the option to .defaultToSpeaker, why do I get no sound at all on a real phone? In addition to no sound, I get no input from the mic either. The sound is fine in the simulator.

EDIT #3: I began observing route changes and my code reports the following when the .defaultToSpeaker option is included.

2020-12-26 12:17:56.212366-0700 SST[13807:3950195] Current route:

2020-12-26 12:17:56.213275-0700 SST[13807:3950195] <AVAudioSessionRouteDescription: 0x2816af8e0, 
inputs = (
    "<AVAudioSessionPortDescription: 0x2816af900, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
); 
outputs = (
    "<AVAudioSessionPortDescription: 0x2816af990, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
)>

The output is set to Speaker. Is it significant that the selectedDataSource is (null)? Before the .defaultToSpeaker option was added this reported output set to Receiver, also with selectedDataSource = (null), so I would guess not.
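For reference, the route dump above can be produced by observing route-change notifications on the shared audio session. A minimal sketch of such an observer (the helper name is hypothetical, not from the original code):

```swift
import AVFoundation

// Hypothetical helper: print the current route whenever it changes.
func observeRouteChanges() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil,
        queue: .main
    ) { _ in
        let route = AVAudioSession.sharedInstance().currentRoute
        print("Current route:")
        print(route)
    }
}
```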

EDIT: I added the code to set the audio session category; the new code is shown below. So far it seems to have no effect: whether I leave it in or comment it out, I see no difference. I also have code (deleted here for simplicity) that modifies the microphone pattern. That too had no discernible effect. Perhaps, though, that is to be expected?

In addition to the symptoms below, if I use Settings/Bluetooth to select the AirPods, I get no output from the app at all, even after I remove the AirPods.

What am I missing here?

/EDIT

After getting this to work well on the simulator, I moved to debugging on my 11 Pro Max. When playing notes on the MandolinString, the sound from the simulator (11 Pro Max or 8) is loud and clear. On the real phone, the sound is barely audible and comes from the phone only; it does not go to an attached audio device, be that a HomePod or AirPods. Is this a v5 bug? Do I need to do something with the output?

A second, less important issue is that when I instantiate this object, the MandolinString triggers without my calling anything. An extra fader, with its gain reset from 0 to 1 after a delay, suppresses this sound.
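The workaround described above might be sketched as follows; the fader name and delay value are assumptions, since the actual suppression code was not posted:

```swift
import AudioKit
import Foundation

// Hypothetical suppression of the spurious pluck heard at instantiation:
// wrap the string chain in an extra Fader that starts at zero gain,
// then raise the gain shortly after init completes.
let stringMute = Fader(stringAmp, gain: 0)
mixer2.addInput(stringMute)
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
    stringMute.gain = 1
}
```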

    private let engine    = AudioEngine()
    private let mic       : AudioEngine.InputNode
    private let micAmp    : Fader
    private let mixer1    : Mixer
    private let mixer2    : Mixer
    private let silence   : Fader
    private let stringAmp : Fader
    private var pitchTap  : PitchTap
    private let pluckedString: MandolinString     // assigned in init() below
    private let audioSession = AVAudioSession.sharedInstance()
    private var akStartSucceeded = false
    
    private var tockAmp   : Fader
    private var metro     = Timer()
    
    private let sampler   = MIDISampler(name: "click")

    private let startTime = NSDate.timeIntervalSinceReferenceDate
    private var ampThreshold: AUValue = 0.12
    private var ampJumpSize: AUValue = 0.05

    private var samplePause = 0
    private var trackingNotStarted = true
    private var tracking = false
    private var ampPrev: AUValue = 0.0
    private var freqArray: [AUValue] = []
    
    init() {
        
        // Set up mic input and pitchtap
        mic = engine.input!
        micAmp = Fader(mic, gain: 1.0)
        mixer1 = Mixer(micAmp)
        silence = Fader(mixer1, gain: 0)
        mixer2 = Mixer(silence)
        pitchTap = PitchTap(mixer1, handler: { _, _ in })
        
        // All sound is fed into mixer2
        // Mic input is faded to zero
        
        // Now add String sound to Mixer2 with a Fader
        pluckedString = MandolinString()
        stringAmp = Fader(pluckedString, gain: 4.0)
        mixer2.addInput(stringAmp)
        
        // Create a sound for the metronome (tock), add as input to mixer2
        try! sampler.loadWav("Click")
        tockAmp = Fader(sampler, gain: 1.0)
        mixer2.addInput(tockAmp)

        engine.output = mixer2

        self.pitchTap = PitchTap(micAmp) { freq, amp in
            if self.samplePause <= 0 && self.tracking {
                self.samplePause = 0
                self.sample(freq: freq[0], amp: amp[0])
            }
        }
        
        do {
            //try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.measurement)
            try audioSession.setCategory(AVAudioSession.Category.playAndRecord)
            //, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
            try audioSession.setActive(true)
        } catch let error as NSError {
            print("Unable to create AudioSession: \(error.localizedDescription)")
        }
        
        do {
            try engine.start()
            akStartSucceeded = true
        } catch {
            akStartSucceeded = false
        }
    } // init

Xcode 12, iOS 14, SPM; everything up to date.

Most likely this is not an AudioKit issue per se; it has to do with AVAudioSession. You probably need to set the session on the device to default to the speaker. AudioKit 5 does less automatic session management than version 4, opting to make fewer assumptions and leave control to the developer.

The answer was indeed to add code for AVAudioSession. However, it did not work where I first put it; it only worked when I put it in the app delegate's didFinishLaunchingWithOptions. I found this in the AudioKit Cookbook. This works:

class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.

        #if os(iOS)
            self.audioSetup()
        #endif

        return true
    }

    #if os(iOS)
    func audioSetup() {
        let session = AVAudioSession.sharedInstance()
        
        do {
            Settings.bufferLength = .short
            try session.setPreferredIOBufferDuration(Settings.bufferLength.duration)
            try session.setCategory(.playAndRecord,
                                    options: [.defaultToSpeaker, .mixWithOthers])
            try session.setActive(true)
        } catch let err {
            print(err)
        }
    
        // Other AudioSession stuff here
        
        do {
            try session.setActive(true)
        } catch let err {
            print(err)
        }
    }
    #endif

}
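Besides the `.defaultToSpeaker` category option, AVAudioSession also offers a per-session override that forces output to the built-in speaker. A sketch of that alternative, for comparison (not from the original answer):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord)
    // Force the current route's output to the built-in speaker.
    // Unlike the .defaultToSpeaker category option, this override
    // is cleared when the audio route changes.
    try session.overrideOutputAudioPort(.speaker)
    try session.setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}
```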
