
pocketsphinx can't efficiently recognize words (commands) recorded via mic

I compiled the example C code for the speech recognition library pocketsphinx on my Debian 9 system.

I recorded a sample audio file named goforward.raw containing the command "go forward".

Neither the pocketsphinx_continuous program nor the example code given below can effectively recognize words recorded through a headset mic with the arecord tool on Linux. Recognition is only partial: the "go forward" command is recognized as "move forward", which is fine, but other commands are recognized very poorly. If you say "hello", it comes back as "who are you"?

Interestingly, when the words come from a wav file created with the text-to-speech tool pico2wave, they are recognized very effectively, with around 80% accuracy.
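For reference, the default en-us model expects 16 kHz, 16-bit, mono audio, so the recording format matters. The commands below are only a sketch of how such test inputs can be produced (the exact flags and file names are illustrative, not necessarily the ones used here):

# synthesize a test file with pico2wave (recognized well)
pico2wave -w goforward.wav "go forward"

# record raw mic audio in the format the decoder expects:
# 16 kHz, 16-bit little-endian, mono, headerless
arecord -t raw -f S16_LE -r 16000 -c 1 goforward.raw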

Here is the example pocketsphinx code:

#include <pocketsphinx.h>


int
main(int argc, char *argv[])
{
    ps_decoder_t *ps;
    cmd_ln_t *config;
    FILE *fh;
    char const *hyp;
    int16 buf[512];
    int rv;
    int32 score;

    /* Point the decoder at the default US English model and dictionary. */
    config = cmd_ln_init(NULL, ps_args(), TRUE,
                 "-hmm", MODELDIR "/en-us/en-us",
                 "-lm", MODELDIR "/en-us/en-us.lm.bin",
                 "-dict", MODELDIR "/en-us/cmudict-en-us.dict",
                 NULL);
    if (config == NULL) {
        fprintf(stderr, "Failed to create config object, see log for details\n");
        return -1;
    }

    ps = ps_init(config);
    if (ps == NULL) {
        fprintf(stderr, "Failed to create recognizer, see log for details\n");
        return -1;
    }

    fh = fopen("goforward.raw", "rb");
    if (fh == NULL) {
        fprintf(stderr, "Unable to open input file goforward.raw\n");
        return -1;
    }

    rv = ps_start_utt(ps);

    /* Feed the raw 16 kHz, 16-bit mono samples in 512-sample chunks. */
    while (!feof(fh)) {
        size_t nsamp;
        nsamp = fread(buf, 2, 512, fh);
        rv = ps_process_raw(ps, buf, nsamp, FALSE, FALSE);
    }

    rv = ps_end_utt(ps);
    hyp = ps_get_hyp(ps, &score);
    printf("Recognized: %s\n", hyp);

    fclose(fh);
    ps_free(ps);
    cmd_ln_free_r(config);

    return 0;
}
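This is the standard "hello world" example from the CMU tutorial. Assuming the pocketsphinx and sphinxbase development packages install pkg-config metadata, it can be compiled roughly like this (goforward.c is whatever name you saved the example under):

gcc -o goforward goforward.c \
    -DMODELDIR=\"`pkg-config --variable=modeldir pocketsphinx`\" \
    `pkg-config --cflags --libs pocketsphinx sphinxbase`
./goforward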

Here is the pocketsphinx_continuous tool code provided by the official pocketsphinx package (I have hard-coded my model paths and commented out the original argument parsing):

/* -*- c-basic-offset: 4; indent-tabs-mode: nil -*- */
/* ====================================================================
 * Copyright (c) 1999-2010 Carnegie Mellon University.  All rights
 * reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer. 
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * This work was supported in part by funding from the Defense Advanced 
 * Research Projects Agency and the National Science Foundation of the 
 * United States of America, and the CMU Sphinx Speech Consortium.
 *
 * THIS SOFTWARE IS PROVIDED BY CARNEGIE MELLON UNIVERSITY ``AS IS'' AND 
 * ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, 
 * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL CARNEGIE MELLON UNIVERSITY
 * NOR ITS EMPLOYEES BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ====================================================================
 *
 */
/*
 * continuous.c - Simple pocketsphinx command-line application to test
 *                both continuous listening/silence filtering from microphone
 *                and continuous file transcription.
 */

/*
 * This is a simple example of pocketsphinx application that uses continuous listening
 * with silence filtering to automatically segment a continuous stream of audio input
 * into utterances that are then decoded.
 * 
 * Remarks:
 *   - Each utterance is ended when a silence segment of at least 1 sec is recognized.
 *   - Single-threaded implementation for portability.
 *   - Uses audio library; can be replaced with an equivalent custom library.
 */

#include <stdio.h>
#include <string.h>
#include <assert.h>

#if !defined(_WIN32_WCE)
#include <signal.h>
#include <setjmp.h>
#endif
#if defined(WIN32) && !defined(GNUWINCE)
#include <time.h>
#else
#include <sys/types.h>
#include <sys/time.h>
#endif

#include <sphinxbase/err.h>
#include <sphinxbase/ad.h>

#include "pocketsphinx.h"

static const arg_t cont_args_def[] = {
    POCKETSPHINX_OPTIONS,
    /* Argument file. */
    {"-argfile",
     ARG_STRING,
     NULL,
     "Argument file giving extra arguments."},
    {"-adcdev",
     ARG_STRING,
     NULL,
     "Name of audio device to use for input."},
    {"-infile",
     ARG_STRING,
     NULL,
     "Audio file to transcribe."},
    {"-time",
     ARG_BOOLEAN,
     "no",
     "Print word times in file transcription."},
    CMDLN_EMPTY_OPTION
};

static ps_decoder_t *ps;
static cmd_ln_t *config = cmd_ln_init(NULL, ps_args(), TRUE,
                                     "-hmm", "/home/bsnayak/Trainguard_MT2/pocketsphinx/model9/hmm/trainguard/",
                                     "-jsgf",  "/home/bsnayak/Trainguard_MT2/pocketsphinx/model9/lm2/trainguardmt_adv_2.jsgf",
                                     "-dict",  "/home/bsnayak/Trainguard_MT2/pocketsphinx/model9/dict/trainguard.dic",
                                     NULL);




static FILE *rawfd;

static void
print_word_times(int32 start)
{
    ps_seg_t *iter = ps_seg_iter(ps, NULL);
    while (iter != NULL) {
        int32 sf, ef, pprob;
        float conf;

        ps_seg_frames(iter, &sf, &ef);
        pprob = ps_seg_prob(iter, NULL, NULL, NULL);
        conf = logmath_exp(ps_get_logmath(ps), pprob);
        printf("%s %f %f %f\n", ps_seg_word(iter), (sf + start) / 100.0,
               (ef + start) / 100.0, conf);
        iter = ps_seg_next(iter);
    }
}

/*
 * Continuous recognition from a file
 */
static void
recognize_from_file()
{

    int16 adbuf[4096];

    const char *hyp;
    const char *uttid;

    int32 k;
    uint8 cur_vad_state, vad_state;

    char waveheader[44];
    if ((rawfd = fopen(cmd_ln_str_r(config, "-infile"), "rb")) == NULL) {
        E_FATAL_SYSTEM("Failed to open file '%s' for reading",
                       cmd_ln_str_r(config, "-infile"));
    }

    //skip wav header
    fread(waveheader, 1, 44, rawfd);
    cur_vad_state = 0;
    ps_start_utt(ps, NULL);
    while ((k = fread(adbuf, sizeof(int16), 4096, rawfd)) > 0) {
        ps_process_raw(ps, adbuf, k, FALSE, FALSE);
        vad_state = ps_get_vad_state(ps);
        if (cur_vad_state && !vad_state) {
            //speech->silence transition,
            //time to end utterance and start new one
            ps_end_utt(ps);
            hyp = ps_get_hyp(ps, NULL, &uttid);
            printf("%s: %s\n", uttid, hyp);
            fflush(stdout);
            ps_start_utt(ps, NULL);
        }
        cur_vad_state = vad_state;
    }
    ps_end_utt(ps);
    hyp = ps_get_hyp(ps, NULL, &uttid);
    printf("%s: %s\n", uttid, hyp);
    fflush(stdout);

    fclose(rawfd);
}

/* Sleep for specified msec */
static void
sleep_msec(int32 ms)
{
#if (defined(WIN32) && !defined(GNUWINCE)) || defined(_WIN32_WCE)
    Sleep(ms);
#else
    /* ------------------- Unix ------------------ */
    struct timeval tmo;

    tmo.tv_sec = 0;
    tmo.tv_usec = ms * 1000;

    select(0, NULL, NULL, NULL, &tmo);
#endif
}

/*
 * Main utterance processing loop:
 *     for (;;) {
 *     start utterance and wait for speech to process
 *     decoding till end-of-utterance silence will be detected
 *     print utterance result;
 *     }
 */
static void
recognize_from_microphone()
{
    ad_rec_t *ad;
    int16 adbuf[4096];
    uint8 cur_vad_state, vad_state;
    int32 k;
    char const *hyp;
    char const *uttid;

    if ((ad = ad_open_dev(cmd_ln_str_r(config, "-adcdev"),
                          (int) cmd_ln_float32_r(config,
                                                 "-samprate"))) == NULL)
        E_FATAL("Failed to open audio device\n");
    if (ad_start_rec(ad) < 0)
        E_FATAL("Failed to start recording\n");

    if (ps_start_utt(ps, NULL) < 0)
        E_FATAL("Failed to start utterance\n");
    cur_vad_state = 0;
    /* Indicate listening for next utterance */
    printf("READY....\n");
    fflush(stdout);
    fflush(stderr);
    for (;;) {
        if ((k = ad_read(ad, adbuf, 4096)) < 0)
            E_FATAL("Failed to read audio\n");
        sleep_msec(100);
        ps_process_raw(ps, adbuf, k, FALSE, FALSE);
        vad_state = ps_get_vad_state(ps);
        if (vad_state && !cur_vad_state) {
            //silence -> speech transition,
            // let user know that he is heard
            printf("Listening...\n");
            fflush(stdout);
        }
        if (!vad_state && cur_vad_state) {
            //speech -> silence transition, 
            //time to start new utterance
            ps_end_utt(ps);
            hyp = ps_get_hyp(ps, NULL, &uttid);
            printf("%s: %s\n", uttid, hyp);
            fflush(stdout);
            //Exit if the first word spoken was GOODBYE
            if (hyp && (strcmp(hyp, "good bye") == 0))
                break;
            if (ps_start_utt(ps, NULL) < 0)
                E_FATAL("Failed to start utterance\n");
            /* Indicate listening for next utterance */
            printf("READY....\n");
            fflush(stdout);
            fflush(stderr);
        }
        cur_vad_state = vad_state;
    }
    ad_close(ad);
}

static jmp_buf jbuf;
static void
sighandler(int signo)
{
    longjmp(jbuf, 1);
}

int
main(int argc, char *argv[])
{
    /* Original argument handling, replaced by the hard-coded config above:

    char const *cfg;

    config = cmd_ln_parse_r(NULL, cont_args_def, argc, argv, TRUE);

    // Handle argument file as -argfile.
    if (config && (cfg = cmd_ln_str_r(config, "-argfile")) != NULL) {
        config = cmd_ln_parse_file_r(config, cont_args_def, cfg, FALSE);
    }
    if (config == NULL)
        return 1;

    ps_default_search_args(config);
    */

    if (config == NULL)
        return 1;
    ps = ps_init(config);
    if (ps == NULL)
        return 1;

    E_INFO("%s COMPILED ON: %s, AT: %s\n\n", argv[0], __DATE__, __TIME__);

    if (cmd_ln_str_r(config, "-infile") != NULL) {
        recognize_from_file();
    }
    else {

        /* Make sure we exit cleanly (needed for profiling among other things) */
        /* Signals seem to be broken in arm-wince-pe. */
#if !defined(GNUWINCE) && !defined(_WIN32_WCE) && !defined(__SYMBIAN32__)
        signal(SIGINT, &sighandler);
#endif

        if (setjmp(jbuf) == 0) {
            recognize_from_microphone();
        }
    }

    ps_free(ps);
    return 0;
}

/** Silvio Moioli: Windows CE/Mobile entry point added. */
#if defined(_WIN32_WCE)
#pragma comment(linker,"/entry:mainWCRTStartup")
#include <windows.h>

//Windows Mobile has the Unicode main only
int
wmain(int32 argc, wchar_t * wargv[])
{
    char **argv;
    size_t wlen;
    size_t len;
    int i;

    argv = malloc(argc * sizeof(char *));
    for (i = 0; i < argc; i++) {
        wlen = lstrlenW(wargv[i]);
        len = wcstombs(NULL, wargv[i], wlen);
        argv[i] = malloc(len + 1);
        wcstombs(argv[i], wargv[i], wlen);
    }

    //assuming ASCII parameters
    return main(argc, argv);
}
#endif

What do I have to do to make this work with my commands, so that they are recognized reliably even with slight mispronunciations or differences in accent?

This is for anyone who may have the same problem. The reason I am answering my own question is that few people talk about the pocketsphinx speech recognition library, so it is hard to learn and use, as there is almost no active community around it. The official site does not provide an easy-to-follow guide; I found the official documentation more research-oriented than a practical guide for a developer who just wants to build an application on top of the pocketsphinx library.

So, if you have reached the point where speech is successfully recognized with the default language model and dictionary but you want better efficiency and accuracy, then you have to create your own language model and dictionary, or you may want to adapt the default model to your accent.

All you have to do is create a sample language corpus: a text file containing the words or sentences you want recognized. Then use the Sphinx lmtool to create a language model (lm file) and dictionary (dic file) from it.

The next step is: instead of pointing the decoder at the default language model and dictionary, supply these new lm and dic files as arguments, as sketched below.
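A minimal sketch of the whole workflow (the corpus contents and the 0892 file names are just examples; lmtool assigns its own number when you upload the corpus):

# 1. Write a corpus: one word or sentence per line
cat > corpus.txt <<EOF
go forward
move backward
turn left
turn right
hello
EOF

# 2. Upload corpus.txt to the CMU lmtool page
#    (http://www.speech.cs.cmu.edu/tools/lmtool-new.html) and download
#    the generated .lm and .dic files.

# 3. Point the decoder at the new files instead of the defaults
pocketsphinx_continuous -inmic yes -lm 0892.lm -dict 0892.dic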

That's it: it will then recognize words very quickly, with 100% accuracy. Here is a link describing the whole process: http://ghatage.com/tech/2012/12/13/Make-Pocketsphinx-recognize-new-words/

At the risk of cross-posting (or avoiding it?), https://raspberrypi.stackexchange.com/questions/10384/speech-processing-on-the-raspberry-pi/18222#18222 covers some of the same ground.

This is for posterity.

I used pocketsphinx_continuous and a $4 sound card.

To manage the fact that listening needs to stop while the speech synthesizer is talking, I used amixer to handle the mic input volume (this was the best practice recommended by CMU, since stopping and starting the engine results in poorer recognition):

echo "SETTING MIC IN TO 15 (94%)" >> ./audio.log
amixer -c 1 set Mic 15 unmute 2>&1 >/dev/null 

With a matching command to mute listening while the voice synth is playing:

FILE: mute.sh
#!/bin/sh

sleep $1;
amixer -c 1 set Mic 0 unmute >/dev/null 2>&1 ; 
echo  "** MIC OFF **" >> /home/pi/PIXIE/audio.log

To calculate the right length of time to mute for, I simply run soxi via Lua and then set unmute.sh (the opposite of mute.sh) to run "x" seconds from launch. There are no doubt many ways to handle this; I am happy with the results of this method.
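unmute.sh itself is not shown above; a minimal sketch of what the counterpart could look like, assuming the same card and control as mute.sh and the restore level of 15 used earlier:

FILE: unmute.sh
#!/bin/sh

# wait while the synth output plays, then restore the mic input gain
sleep $1;
amixer -c 1 set Mic 15 unmute >/dev/null 2>&1 ;
echo "** MIC ON **" >> /home/pi/PIXIE/audio.log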

Lua snippet:

-- Begin parallel timing  
-- MUTE UNTIL THE SOUNDCARD FREES UP 
-- "filename" is a fully qualified path to a wav file 
-- outputted by voice synth in previous operation

-- GET THE LENGTH
local sample_length = io.popen('soxi -D '..filename);
local total_length  = sample_length:read("*a"); 
clean_length = string.gsub(total_length, "\n", "") +1;  
sample_length:close();

-- EXAMPLE LOGGING OUTPUT...
--os.execute( 'echo LENGTH WAS "'.. clean_length .. '" Seconds  >> ./audio.log');   



-- we are about to play something... 
-- MUTE, then schedule UNMUTE.sh in x seconds, then play synth output
-- (have unrolled mute.sh here for clarity)

os.execute( 'amixer -c 1 set Mic '..mic_level..' unmute 2>&1 >/dev/null ');
os.execute( 'echo "** MIC OFF **"  >> ./audio.log ');

-- EXAMPLE LOGGING OUTPUT...    
-- os.execute( 'echo PLAYING: "'.. filename..'" circa ' .. clean_length .. ' Seconds  >> ./audio.log ');

os.execute( './unmute.sh "'.. clean_length ..'" &');


-- THEN PLAY THE THING WHILE THE OTHER PROCESS IS SLEEPING  

os.execute( './sounds-uncached.sh '..filename..' 21000')

To actually grab the sound on the pi, I use:

pocketsphinx_continuous -bestpath 0 -adcdev plughw:1  -samprate 20000  \
-nfft 512 -ds 2 -topn 2 -maxwpf 5 -kdtreefn 3000 -kdmaxdepth 7 -kdmaxbbi 15 \
-pl_window 10 -lm ./LANGUAGE/0892-min.lm -dict ./LANGUAGE/0892-min.dic 2>&1 \
| tee -i 2>/dev/null >( sed -u -n -e 's/^.\{9\}: //p' ) \
>( sed -u -n -e 's/^READY//p' \
-e 's/^Listening//p' -e 's/^FATAL_ERROR: \"continuous\.c\"\, //p') \
> /dev/null

Again, there are other ways, but I like my output this way.

For the synth I used Cepstral's fledgling pi solution, but it is not available online; you have to contact them directly to arrange a purchase, and it costs around $30. The results are acceptable, however the speech does produce some nasty clicks and pops; the company replied that they no longer have a RasPi and are unwilling to improve the product. YMMV.

The speech recognition sits at about 12% CPU when "idle", and spikes briefly when doing a chunk of recognition.

Voice creation spikes at about 50-80% CPU while rendering.

play / sox weighs in pretty heavily, but I do apply real-time effects to the rendered voices as I play them ;)
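Purely as an illustration (this exact effect chain is an assumption, not the one used above), playing a rendered file through SoX with effects applied on the fly looks like:

# play a rendered wav while applying SoX effects in real time
play rendered.wav tempo 1.1 pitch 150 reverb 20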

The pi was stripped down heavily using every guide I could find, stopping unneeded services and running in full CLI mode. Overclocked to 800 MHz (the minimum).

scaling_governor set to: performance
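On Raspbian the governor can be set through the standard cpufreq sysfs interface, e.g.:

echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor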

Running flat out: about 50ºC in direct sunlight, about 38ºC in the shade. I have heat sinks fitted.

Last point: I actually run all this gear out to an "internet driven" AI as a nice extra.

The pi handles all this seamlessly, playing any networked audio in real time and fully looping audio out to any other Unix box, etc.

To handle the heavy CPU overhead of speech synthesis, I implemented an md5sum-based cache so the same utterance is never rendered twice (about 1000 files @ 220 MB total covers 70% of the utterances I generally get back from the AI). This really helps bring the overall CPU load down.
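A minimal sketch of such a cache, assuming a hypothetical synth_to_wav helper that renders text to a wav file (the script name, paths, and that helper are all illustrative):

#!/bin/sh
# speak-cached.sh -- render each utterance only once, keyed on the md5 of its text

TEXT="$1"
CACHE=/home/pi/PIXIE/cache
KEY=$(printf '%s' "$TEXT" | md5sum | cut -d' ' -f1)
WAV="$CACHE/$KEY.wav"

mkdir -p "$CACHE"
if [ ! -f "$WAV" ]; then
    # cache miss: render the utterance (synth_to_wav is a placeholder)
    synth_to_wav "$TEXT" "$WAV"
fi
play "$WAV"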

In short, this is all entirely doable. However, speech recognition is only as good as the quality of your mic, your language model, how closely your speakers' voices match the original target audience (I use an en_US model on en_UK children, not perfect), and other fine details that, with effort, you can whittle down to a decent result.

For the record, I already did all of this once before on a Kindle (and that also worked, with cmu sphinx and flite). Hope this helps.
