
ASP.NET Web Api calling a non Task async method

In my Web Api project I have an [HttpPost] method, public HttpResponseMessage saveFiles() {}, which saves some audio files to the server. After the files are saved, I need to call a method from the Microsoft.Speech server api; that method is asynchronous, but it returns void:

public void RecognizeAsync(RecognizeMode mode);

I want to wait for this method to finish before returning all the collected information to the client. I can't use await here, because this function returns void. So I implemented an event: public event RecognitionFinishedHandler RecognitionFinished;

This event is raised when the function finishes.
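The usual way to make such a call awaitable is to bridge the completion event to a Task with a TaskCompletionSource; the edit below does exactly that. A minimal, self-contained sketch of the pattern, using placeholder types rather than the real project code:

using System;
using System.Threading.Tasks;

// Stand-in for any API that starts work with a void-returning call and
// signals completion through an event (like RecognizeAsync plus a
// "finished" event); these types are illustrative only.
class EventBasedWorker
{
    public event Action Finished;

    public void StartAsync()
    {
        // Simulate background work that raises Finished when done.
        Task.Run(() => Finished?.Invoke());
    }
}

static class AwaitEventSketch
{
    public static Task RunWorkerAsync(EventBasedWorker worker)
    {
        var tcs = new TaskCompletionSource<object>();
        worker.Finished += () => tcs.TrySetResult(null); // complete the task when the event fires
        worker.StartAsync();                             // the void-returning "async" call
        return tcs.Task;                                 // callers can now await this
    }
}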

--EDIT I wrapped this event with a Task, but I think I did something wrong, because I can't get the RecognizeAsync function to actually do its work. It seems the function no longer works; here is my code:

The function that contains the speech recognition:

public delegate void RecognitionFinishedHandler(object sender);
public class SpeechActions
{
    public event RecognitionFinishedHandler RecognitionFinished;
    private SpeechRecognitionEngine sre;
    public Dictionary<string, List<TimeSpan>> timeTags; // contains the times of each tag: "tag": [00:00, 00:23 .. ]

    public SpeechActions()
    {
        sre = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US"));
        sre.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(sre_SpeechRecognized);
        sre.AudioStateChanged += new EventHandler<AudioStateChangedEventArgs>(sre_AudioStateChanged);
    }

    /// <summary>
    /// Calculates the tags appearances in a voice over wav file.
    /// </summary>
    /// <param name="path">The path to the voice over wav file.</param>
    public void CalcTagsAppearancesInVO(string path, string[] tags, TimeSpan voLength)
    {
        timeTags = new Dictionary<string, List<TimeSpan>>();
        sre.SetInputToWaveFile(path);

        foreach (string tag in tags)
        {
            GrammarBuilder gb = new GrammarBuilder(tag);
            gb.Culture = new System.Globalization.CultureInfo("en-US");
            Grammar g = new Grammar(gb);
            sre.LoadGrammar(g);
        }

        sre.RecognizeAsync(RecognizeMode.Multiple);
    }

    void sre_AudioStateChanged(object sender, AudioStateChangedEventArgs e)
    {
        if (e.AudioState == AudioState.Stopped)
        {
            sre.RecognizeAsyncStop();
            if (RecognitionFinished != null)
            {
                RecognitionFinished(this);
            }
        }
    }

    void sre_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        string word = e.Result.Text;
        TimeSpan time = e.Result.Audio.AudioPosition;
        if(!timeTags.ContainsKey(word))
        {
            timeTags.Add(word, new List<TimeSpan>());
        } 

        // add the found time
        timeTags[word].Add(time);
    }
}

And the function where I call it, plus the event handler:

    [HttpPost]
    public HttpResponseMessage saveFiles()
    {
        if (HttpContext.Current.Request.Files.AllKeys.Any())
        {
            string originalFolder = HttpContext.Current.Server.MapPath("~/files/original/");
            string lowFolder = HttpContext.Current.Server.MapPath("~/files/low/");
            string audioFolder = HttpContext.Current.Server.MapPath("~/files/audio/");
            string voiceoverPath = Path.Combine(originalFolder, Path.GetFileName(HttpContext.Current.Request.Files["voiceover"].FileName));
            string outputFile = HttpContext.Current.Server.MapPath("~/files/output/") + "result.mp4";
            string voiceoverWavPath = Path.Combine(audioFolder, "voiceover.wav");
            var voiceoverInfo = Resource.From(voiceoverWavPath).LoadMetadata().Streams.OfType<AudioStream>().ElementAt(0).Info;
            DirectoryInfo di = new DirectoryInfo(originalFolder);
            // speech recognition
            // get tags from video filenames
            string sTags = "";
            di = new DirectoryInfo(HttpContext.Current.Server.MapPath("~/files/low/"));

            foreach (var item in di.EnumerateFiles())
            {
                string filename = item.Name.Substring(0, item.Name.LastIndexOf("."));
                if (item.Name.ToLower().Contains("thumbs") || filename == "voiceover")
                {
                    continue;
                }
                sTags += filename + ",";
            }
            if (sTags.Length > 0) // remove last ','
            {
                sTags = sTags.Substring(0, sTags.Length - 1);
            }
            string[] tags = sTags.Split(new char[] { ',' });

            // HERE STARTS THE PROBLEMATIC PART! ----------------------------------------------------
            var task = GetSpeechActionsCalculated(voiceoverWavPath, tags, voiceoverInfo.Duration);

            // now return the times to the client
            var finalTimes = GetFinalTimes(HttpContext.Current.Server.MapPath("~/files/low/"), task.Result.timeTags);
            var goodResponse = Request.CreateResponse(HttpStatusCode.OK, finalTimes);
            return goodResponse;
        }
        return Request.CreateResponse(HttpStatusCode.OK, "no files");
    }
    private Task<SpeechActions> GetSpeechActionsCalculated(string voPath, string[] tags, TimeSpan voLength)
    {
        var tcs = new TaskCompletionSource<SpeechActions>();
        SpeechActions sa = new SpeechActions();
        sa.RecognitionFinished += (s) =>
        {
            tcs.TrySetResult((SpeechActions)s);
        };
        sa.CalcTagsAppearancesInVO(voPath, tags, voLength);

        return tcs.Task;
    }

Your edit is almost there; you just need to await the task:

[HttpPost]
public async Task<HttpResponseMessage> saveFiles()
{
    if (HttpContext.Current.Request.Files.AllKeys.Any())
    {
        ...

        string[] tags = sTags.Split(new char[] { ',' });

        var speechActions = await GetSpeechActionsCalculated(voiceoverWavPath, tags, voiceoverInfo.Duration);

        // now return the times to the client
        var finalTimes = GetFinalTimes(HttpContext.Current.Server.MapPath("~/files/low/"), speechActions.timeTags);
        var goodResponse = Request.CreateResponse(HttpStatusCode.OK, finalTimes);
        return goodResponse;
    }
    return Request.CreateResponse(HttpStatusCode.OK, "no files");
}
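One caveat with the event-to-Task bridge: the returned task only completes if AudioStateChanged fires with AudioState.Stopped and RecognitionFinished is raised, so a recognition error or cancellation would leave the request awaiting forever. A possible hardening, sketched here on the assumption that the engine in use exposes the RecognizeCompleted event (both the System.Speech and Microsoft.Speech recognition engines do), is to raise RecognitionFinished from that event as well, for example in the SpeechActions constructor:

// Sketch: also signal completion when the engine reports that RecognizeAsync
// has finished or was cancelled, so the awaited task cannot hang.
sre.RecognizeCompleted += (sender, e) =>
{
    // e.Error is non-null when recognition failed; a fuller version could
    // surface it to the caller (for example via TaskCompletionSource.TrySetException).
    if (RecognitionFinished != null)
    {
        RecognitionFinished(this);
    }
};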
