
Unity with Azure Speech SDK

I am using the Azure Speech SDK in Unity. When I test it on the computer it works fine: I can speak, it recognizes what I say, and it responds with speech as expected.

When I build for Android or iOS it doesn't work. On both platforms it skips past the recognition points without trying to recognize anything, and if I just trigger a simple speech output from the SDK it doesn't produce anything either.

How can I resolve this problem?

Here's the code, which works in the Unity Editor and in a Windows build:


---------------------------------------------
    void Start()
    {
        anim = gameObject.GetComponent<Animator>();

        // Subscription key and service region of the Speech resource.
        var config = SpeechConfig.FromSubscription("xxxxxxxxxxxx", "northeurope");

        cred(config);   // fire-and-forget: the returned Task is not awaited
    }

    async Task cred(SpeechConfig config)
    {
        texttest.GetComponent<Text>().text = config.ToString();

        // Recognize a single utterance from the default microphone.
        var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
        var synthesizer2 = new SpeechRecognizer(config, audioConfig);

        var result = await synthesizer2.RecognizeOnceAsync();

        var synthesizer = new SpeechSynthesizer(config);

        SynthesizeAudioAsync(config, synthesizer2, result);   // also not awaited
    }

    async Task SynthesizeAudioAsync(SpeechConfig config, SpeechRecognizer synthesizer2, SpeechRecognitionResult result)
    {
        texttest.GetComponent<Text>().text = "syn1 " + result.Text;

        OutputSpeechRecognitionResult(result);
        if (result.Reason == ResultReason.RecognizedSpeech)
        {
            if (result.Text == "xx" || result.Text == "xx" || result.Text == "xx." || result.Text == "xx")
            {
                var synthesizer = new SpeechSynthesizer(config);

                anim.Play("helloAll", 0, 0);
                await synthesizer.SpeakTextAsync("Helloxx");
                chooseTopic(config, synthesizer, result.Text);
                // ... (rest of the method omitted)
            }
        }
    }
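One thing worth noting about the snippet: cred(config) in Start() and the SynthesizeAudioAsync(...) call are never awaited, so any exception thrown inside them on a device is silently dropped. A minimal sketch of an awaited entry point (it keeps the same method names, which are assumptions about the rest of the class) that logs failures so they show up in logcat or the Xcode console:

---------------------------------------------
    // Sketch: Unity accepts an async void Start(); awaiting the chain here
    // surfaces SDK exceptions via Debug.LogException instead of losing them.
    async void Start()
    {
        anim = gameObject.GetComponent<Animator>();

        var config = SpeechConfig.FromSubscription("xxxxxxxxxxxx", "northeurope");

        try
        {
            await cred(config);   // assumes cred() also awaits SynthesizeAudioAsync(...)
        }
        catch (System.Exception e)
        {
            Debug.LogException(e);   // visible in adb logcat / the Xcode console
        }
    }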

On iOS it gives me this in the console:


--------------------------------------------------
CANCELED: Did you set the speech resource key and region values?
speakTest:OutputSpeechRecognitionResult(SpeechRecognitionResult)
<SynthesizeAudioAsync>d__10:MoveNext()


CANCELED: ErrorDetails=0x15 (SPXERR_MIC_ERROR)

[CALL STACK BEGIN]



3   UnityFramework                      0x0000000109336810 _ZN9Microsoft17CognitiveServices6Speech4Impl22CSpxMicrophonePumpBase9StartPumpENSt3__110shared_ptrINS2_18ISpxAudioProcessorEEE + 756

4   UnityFramework                      0x000000010931c010 _ZN9Microsoft17CognitiveServices6Speech4Impl25ISpxDelegateAudioPumpImpl9StartPumpENSt3__110shared_ptrINS2_18ISpxAudioProcessorEEE + 84

5   UnityFramework                      0x000000010932cc0c _ZN9Microsoft17CognitiveServices6Speech4Impl27CSpxAudioPumpDelegateHelperINS2_29CSpxDelegateToSharedPtrHelperINS2_13ISpxAudioPumpELb0EEEE17DelegateStartPumpENSt3__110shared_ptrINS2_18ISpxAudioProcessorEEE + 220

6   UnityFramework                      0x0000000109325e1c _ZN9Microsoft17CognitiveServices6Speech4Impl41ISpxAudioSourceControlAdaptsAudioPumpImplINS2_32CSpxMicrophoneAudioSourceAdapterEE9StartPumpEv + 304

7   UnityFramework                      0x0000000109325664 _ZN9Microsoft17CognitiveServices6Speech4Impl41ISpxAudioSourceControlAdaptsAudioPumpImplINS2_32CSpxMicrophoneAudioSourceAdapterEE10StartAudioENSt3__110shared_ptrINS2_12ISpxNotifyMeIJRKNS7_INS2_15ISpxAudioSourceEEERKNS7_INS2_14ISpxBufferDataEEEEEEEE + 184

8   UnityFramework                      0x00000001093221d4 _ZN9Microsoft17CognitiveServices6Speech4Impl34ISpxAudioSourceControlDelegateImplINS2_29CSpxDelegateToSharedPtrHelperINS2_22ISpxAudioSourceControlELb0EEEE10StartAudioENSt3__110shared_ptrINS2_12ISpxNotifyMeIJRKNS9_INS2_15ISpxAudioSourceEEERKNS9_INS2_14ISpxBufferDataEEEEEEEE + 220

9   UnityFramework                      0x00000001094a41f4 _ZN9Microsoft17CognitiveServices6Speech4Impl28CSpxSessionAudioSourceHelperINS2_20CSpxAudioSessionShimEE16StartAudioSourceERKNSt3__110shared_ptrINS2_15ISpxAudioSourceEEE + 504

10  UnityFramework                      0x00000001094a0dbc _ZN9Microsoft17CognitiveServices6Speech4Impl28CSpxSessionAudioSourceHelperINS2_20CSpxAudioSessionShimEE22EnsureStartAudioSourceEv + 124

11  UnityFramework                      0x0000000109408dcc _ZN9Microsoft17CognitiveServices6Speech4Impl22CSpxAudioStreamSession14StartAudioPumpENS3_15RecognitionKindENSt3__110shared_ptrINS2_12ISpxKwsModelEEE + 2300

12  UnityFramework                      0x0000000109406760 _ZN9Microsoft17CognitiveServices6Speech4Impl22CSpxAudioStreamSession16StartRecognizingENS3_15RecognitionKindENSt3__110shared_ptrINS2_12ISpxKwsModelEEE + 616

13  UnityFramework                      0x0000000109406098 _ZN9Microsoft17CognitiveServices6Speech4Impl22CSpxAudioStreamSession18RecognizeOnceAsyncERKNSt3__110shared_ptrINS3_9OperationEEENS5_INS2_12ISpxKwsModelEEE + 464

14  UnityFramework                      0x0000000109424d4c _ZN9Microsoft17CognitiveServices6Speech4Impl22CSpxAudioStreamSession9OperationC2ENS3_15RecognitionKindE + 1040

15  UnityFramework                      0x0000000109420af4 _ZN9Microsoft17CognitiveServices6Speech4Impl7SpxTermINS2_21ISpxAudioStreamReaderEEEvRKNSt3__110shared_ptrIT_EE + 2004

16  UnityFramework                      0x0000000109354c28 _ZNSt3__113packaged_taskIFvvEEclEv + 96

17  UnityFramework                      0x0000000109354bb4 _ZN9Microsoft17CognitiveServices6Speech4Impl17CSpxThreadService4Task3RunEv + 32

18  UnityFramework                      0x00000001093566fc _ZN9Microsoft17CognitiveServices6Speech4Impl17CSpxThreadService6Thread7RunTaskINSt3__14pairINS6_10shared_ptrINS3_4TaskEEENS6_7promiseIbEEEEEEvRNS6_11unique_lockINS6_5mutexEEERNS6_5dequeIT_NS6_9allocatorISJ_EEEE + 332

19  UnityFramework                      0x0000000109354d8c _ZN9Microsoft17CognitiveServices6Speech4Impl17CSpxThreadService6Thread10WorkerLoopENSt3__110shared_ptrIS4_EE + 216

[CALL STACK END]

The issue is caused by configuration problems on iOS and Android. Check the documentation for the platform-specific configuration on Android and iOS.
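SPXERR_MIC_ERROR, together with the CSpxMicrophonePumpBase::StartPump frame in the call stack, indicates that the SDK could not open the microphone. On Android that commonly happens when the record-audio permission has not been granted at runtime (required since Android 6.0). A minimal sketch, assuming a MonoBehaviour in the scene that runs before the recognizer is created (the component name is hypothetical):

---------------------------------------------
    using UnityEngine;
    #if UNITY_ANDROID
    using UnityEngine.Android;
    #endif

    // Hypothetical helper: request microphone access before any SpeechRecognizer
    // is created, otherwise the microphone pump fails to start on device.
    public class MicPermissionCheck : MonoBehaviour
    {
        void Start()
        {
    #if UNITY_ANDROID
            if (!Permission.HasUserAuthorizedPermission(Permission.Microphone))
            {
                // Shows the Android runtime permission dialog. The call returns
                // immediately, so recognition should only be started after the
                // permission has actually been granted.
                Permission.RequestUserPermission(Permission.Microphone);
            }
    #endif
        }
    }

The quickstart sample linked below takes a similar approach: it requests the microphone permission first and only starts recognition once the check passes.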

There is a GitHub sample repo that deals with the same scenario; check it:

https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/unity/from-microphone
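On iOS, microphone access also has to be configured: the Info.plist of the Xcode project that Unity generates needs an NSMicrophoneUsageDescription entry, and the user has to accept the resulting prompt. In Unity this can be set in Player Settings (iOS > Other Settings > Microphone Usage Description) or written by a post-build editor script. A sketch of the latter, assuming the script lives in an Editor folder (the class name and description text are placeholders):

---------------------------------------------
    #if UNITY_IOS
    using System.IO;
    using UnityEditor;
    using UnityEditor.Callbacks;
    using UnityEditor.iOS.Xcode;

    // Hypothetical post-build step: adds the microphone usage description to the
    // Info.plist of the Xcode project that Unity generates.
    public static class IosMicUsagePostProcess
    {
        [PostProcessBuild]
        public static void OnPostProcessBuild(BuildTarget target, string pathToBuiltProject)
        {
            if (target != BuildTarget.iOS)
                return;

            string plistPath = Path.Combine(pathToBuiltProject, "Info.plist");
            var plist = new PlistDocument();
            plist.ReadFromFile(plistPath);

            // iOS shows this text in the microphone permission prompt.
            plist.root.SetString("NSMicrophoneUsageDescription",
                                 "The microphone is used for speech recognition.");

            plist.WriteToFile(plistPath);
        }
    }
    #endif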

The iOS libraries are developed in Objective-C and do not support JavaScript libraries; on Android, in the same way, Cordova is used to handle the JavaScript libraries. Each of these is unsupported on the opposite platform. So check the configuration of the platform you are building for, and the language of the libraries it supports.

Cordova Platforms: android 7.1.4, ios 4.5.5
Ionic Framework: ionic-angular 3.9.2
iOS: Objective-C
