
C# System.Speech.Recognition without commands?

I think this is the same question as C# system.speech.recognition alternate words, but the answer there doesn't work. The Microsoft API requires a command to be registered: if I don't load a command, it displays a message saying one is required, and if I add a single command, that is the only text that ever comes across. I want to write something that transcribes every word I'm saying, like MS Agent did back in the day. Does anyone have some direction? I'm stuck, I don't want to use the Google Cloud API, and I want this to run locally.

using System;
using System.Speech.Recognition;
using System.Speech.Synthesis;
using System.Globalization;
using System.Threading;
using System.Collections.Generic;

namespace S2TextDemo
{
    class Program
    {
        static SpeechSynthesizer ss = new SpeechSynthesizer();
        static SpeechRecognitionEngine sre;
        static bool speechOn = true;
        private static AutoResetEvent _quitEvent;

        static void Main(string[] args)
        {
            try
            {
                _quitEvent = new AutoResetEvent(false);

                ss.SetOutputToDefaultAudioDevice();
                CultureInfo ci = new CultureInfo("en-us");
                sre = new SpeechRecognitionEngine(ci);
                sre.SetInputToDefaultAudioDevice();
                sre.SpeechRecognized += sre_SpeechRecognized;
                //sre.SpeechRecognized += SpeechRecognizedHandler;
                Choices ch_StartStopCommands = new Choices();

                ch_StartStopCommands.Add("quit");

                GrammarBuilder gb_StartStop = new GrammarBuilder();
                gb_StartStop.Append(ch_StartStopCommands);
                Grammar g_StartStop = new Grammar(gb_StartStop);
                sre.LoadGrammarAsync(g_StartStop);

                sre.RecognizeAsync(RecognizeMode.Multiple);
                Console.WriteLine("Listening...\n");
                ss.SpeakAsync("I'm now listening.");

                _quitEvent.WaitOne();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
                Console.ReadLine();
            }
        } // Main

        static void SpeechRecognizedHandler(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result == null) return;
            string txt = e.Result.Text;

            // Add event handler code here.

            // The following code illustrates some of the information available
            // in the recognition result.
            Console.WriteLine("Grammar({0}), {1}: {2}",
              e.Result.Grammar.Name, e.Result.Audio.Duration, e.Result.Text);

            // Display the semantic values in the recognition result.
            foreach (KeyValuePair<String, SemanticValue> child in e.Result.Semantics)
            {
                Console.WriteLine(" {0} key: {1}",
                  child.Key, child.Value.Value ?? "null");
            }
            Console.WriteLine();

            // Display information about the words in the recognition result.
            foreach (RecognizedWordUnit word in e.Result.Words)
            {
                RecognizedAudio audio = e.Result.GetAudioForWordRange(word, word);
                Console.WriteLine(" {0,-10} {1,-10} {2,-10} {3} ({4})",
                  word.Text, word.LexicalForm, word.Pronunciation,
                  audio.Duration, word.DisplayAttributes);
            }

            // Display the recognition alternates for the result.
            foreach (RecognizedPhrase phrase in e.Result.Alternates)
            {
                Console.WriteLine(" alt({0}) {1}", phrase.Confidence, phrase.Text);
            }
        }

        static void sre_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            double minConfidence = 0.90;
            string txt = e.Result.Text;
            float confidence = e.Result.Confidence;

            Console.WriteLine("\nRecognized: " + txt);
            if (confidence < minConfidence)
            {
                Console.WriteLine($"Failed confidence: {minConfidence} with {confidence}");
                return;
            }

            if (txt.IndexOf("quit") >= 0)
            {
                if(speechOn)
                    ss.SpeakAsync("Shutting down.");
                else
                    Console.WriteLine("Shutting down.");

                Thread.Sleep(1000);
                _quitEvent.Set();
            }
        } // sre_SpeechRecognized
    } // Program
} // ns

When you make a new Grammar, use this:

sre.LoadGrammarAsync(new DictationGrammar());
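Putting the answer together with the question's code, a minimal sketch of free dictation might look like the following. This assumes Windows with a desktop speech recognizer installed (System.Speech only works against the local Windows recognizer); the event-handler and console output here are illustrative, not part of the original answer:

```csharp
using System;
using System.Globalization;
using System.Speech.Recognition;

class DictationDemo
{
    static void Main()
    {
        // Requires Windows and an installed desktop recognizer for the culture.
        using (var sre = new SpeechRecognitionEngine(new CultureInfo("en-US")))
        {
            sre.SetInputToDefaultAudioDevice();

            // DictationGrammar matches arbitrary speech, so no fixed
            // command list is needed.
            sre.LoadGrammar(new DictationGrammar());

            // Print everything the recognizer hears, with its confidence.
            sre.SpeechRecognized += (s, e) =>
                Console.WriteLine($"{e.Result.Confidence:F2}: {e.Result.Text}");

            sre.RecognizeAsync(RecognizeMode.Multiple);
            Console.WriteLine("Dictating. Press Enter to stop.");
            Console.ReadLine();
            sre.RecognizeAsyncStop();
        }
    }
}
```

A command grammar (like the "quit" grammar in the question) can be loaded alongside the `DictationGrammar`; the recognizer will then report matches from either.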

See the original documentation for the DictationGrammar class.
