
How to get const char * from external dll written in C++ into C#

I am integrating CMU's PocketSphinx with Unity by compiling my own C++ project into a DLL in Visual Studio 2010, which I call from a C# script in Unity Pro. I know the DLL works, because I compiled another project as an exe with the same code and it runs perfectly as a standalone program. I am using the pocketsphinx_continuous example project, which takes microphone input and prints text to the console. I have customized the code to be called from inside Unity, and it should return the result to my C# code as a string instead of printing to the console. I feel like I almost have it working, but the const char * is not coming back as a string. If I use the following declaration, I end up with an access violation error:

private static extern string recognize_from_microphone();

So I tried this instead:

private static extern IntPtr recognize_from_microphone();

Then I use the following line of code to try to print the output of the function:

print("you just said " + Marshal.PtrToStringAnsi(recognize_from_microphone()));

However, all I get back is "you just said". I can manage to get a memory address back if I do this: `print("you just said " + recognize_from_microphone());` So I know something is being returned.

Here is my C++ code (most of which was originally written in C, as sample code from PocketSphinx):

char* MakeStringCopy (const char* str) 
{
  if (str == NULL) return NULL;
  char* res = (char*)malloc(strlen(str) + 1);
  strcpy(res, str);
  return res;
}


extern __declspec(dllexport) const char * recognize_from_microphone()
{
//this is a near complete duplication of the code from main()
char const *cfg;
config = cmd_ln_init(NULL, ps_args(), TRUE,
"-hmm", MODELDIR "\\hmm\\en_US\\hub4wsj_sc_8k",
"-lm", MODELDIR "\\lm\\en\\turtle.DMP",
"-dict", MODELDIR "\\lm\\en\\turtle.dic",
NULL);

if (config == NULL)
{
   return "config is null";
}

ps = ps_init(config);
if (ps == NULL)
{
   return "ps is null";
}

ad_rec_t *ad;
int16 adbuf[4096];
int32 k, ts, rem;
char const *hyp;
char const *uttid;
cont_ad_t *cont;
char word[256];
char words[1024] = "";
//char temp[] = "hypothesis";
//hyp = temp;

if ((ad = ad_open_dev(cmd_ln_str_r(config, "-adcdev"),
                      (int)cmd_ln_float32_r(config, "-samprate"))) == NULL)
    E_FATAL("Failed to open audio device\n");

/* Initialize continuous listening module */
if ((cont = cont_ad_init(ad, ad_read)) == NULL)
    E_FATAL("Failed to initialize voice activity detection\n");
if (ad_start_rec(ad) < 0)
    E_FATAL("Failed to start recording\n");
if (cont_ad_calib(cont) < 0)
    E_FATAL("Failed to calibrate voice activity detection\n");

for (;;) {
    /* Indicate listening for next utterance */
    //printf("READY....\n");
    fflush(stdout);
    fflush(stderr);

    /* Wait data for next utterance */
    while ((k = cont_ad_read(cont, adbuf, 4096)) == 0)
        sleep_msec(100);

    if (k < 0)
        E_FATAL("Failed to read audio\n");

    /*
     * Non-zero amount of data received; start recognition of new utterance.
     * NULL argument to uttproc_begin_utt => automatic generation of utterance-id.
     */
    if (ps_start_utt(ps, NULL) < 0)
        E_FATAL("Failed to start utterance\n");

    ps_process_raw(ps, adbuf, k, FALSE, FALSE);
    //printf("Listening...\n");
    fflush(stdout);

    /* Note timestamp for this first block of data */
    ts = cont->read_ts;

    /* Decode utterance until end (marked by a "long" silence, >1sec) */
    for (;;) {

        /* Read non-silence audio data, if any, from continuous listening module */
        if ((k = cont_ad_read(cont, adbuf, 4096)) < 0)
            E_FATAL("Failed to read audio\n");
        if (k == 0) {
            /*
             * No speech data available; check current timestamp with most recent
             * speech to see if more than 1 sec elapsed.  If so, end of utterance.
             */
            if ((cont->read_ts - ts) > DEFAULT_SAMPLES_PER_SEC)
                break;
        }
        else {
            /* New speech data received; note current timestamp */
            ts = cont->read_ts;
        }

        /*
         * Decode whatever data was read above.
         */
        rem = ps_process_raw(ps, adbuf, k, FALSE, FALSE);

        /* If no work to be done, sleep a bit */
        if ((rem == 0) && (k == 0))
            sleep_msec(20);
    }

    /*
     * Utterance ended; flush any accumulated, unprocessed A/D data and stop
     * listening until current utterance completely decoded
     */
    ad_stop_rec(ad);
    while (ad_read(ad, adbuf, 4096) >= 0);
    cont_ad_reset(cont);
    fflush(stdout);
    /* Finish decoding, obtain and print result */
    ps_end_utt(ps);

    hyp = ps_get_hyp(ps, NULL, &uttid);
    fflush(stdout);

    /* Exit if the first word spoken was GOODBYE */
   //actually, for unity, exit if any word was spoken at all! this will avoid an infinite loop of doom!
    if (hyp) {
        /*sscanf(hyp, "%s", words);
        if (strcmp(word, "goodbye") == 0)*/
            break;
    }
   else
     return "nothing returned";
    /* Resume A/D recording for next utterance */
    if (ad_start_rec(ad) < 0)
        E_FATAL("Failed to start recording\n");
}
cont_ad_close(cont);
ad_close(ad);
ps_free(ps);
const char *temp = new char[1024];
temp = MakeStringCopy(hyp);
return temp;
}

If I change `return temp;` to `return "some string here";` then I see the text appear inside Unity. But that doesn't help, because I don't want hard-coded text; I need the output of the speech recognition code, which ends up stored in the hyp variable.

Can anyone help me figure out what I'm doing wrong? Thanks!

The problem is that you should not allocate raw memory in C++ and consume it this way in C#. Who is going to free the memory allocated in the MakeStringCopy function?

Try something like this:

[DllImport("MyLibrary.dll")]
[return: MarshalAs(UnmanagedType.LPStr)] 
public static extern string GetStringValue();

This way you tell the marshaller that the CLR owns the memory returned by the function, and it will take care of deallocating it.

Also, .NET strings contain Unicode characters, which is why you get an access violation error when you try to assign ANSI characters to one. The UnmanagedType.LPStr attribute also tells the marshaller what kind of characters to expect, so it can do the conversion for you.

Finally, for the memory allocation on the C++ side, you should use CoTaskMemAlloc instead of malloc in your MakeStringCopy function, as in this example from MSDN.

Finally got it working! I ended up having to pass a StringBuilder object into the C++ function and get the string from that object in C#, as I found in this post: http://www.pcreview.co.uk/forums/passing-and-retrieving-strings-calling-c-function-c-t1367069.html

The code is slower than I'd like, but at least it works now. Here is my final code:

C#:

[DllImport ("pocketsphinx_unity",CallingConvention=CallingConvention.Cdecl,CharSet = CharSet.Ansi)]
private static extern void recognize_from_microphone(StringBuilder str);

StringBuilder mytext = new StringBuilder(1000);
recognize_from_microphone(mytext);
print("you just said " + mytext.ToString());

C++:

extern __declspec(dllexport) void recognize_from_microphone(char * fromUnity){
static ps_decoder_t *ps;
static cmd_ln_t *config;

config = cmd_ln_init(NULL, ps_args(), TRUE,
"-hmm", MODELDIR "\\hmm\\en_US\\hub4wsj_sc_8k",
"-lm", MODELDIR "\\lm\\en\\turtle.DMP",
"-dict", MODELDIR "\\lm\\en\\turtle.dic",
NULL);

if (config == NULL)
{
    //return "config is null";
}

ps = ps_init(config);
if (ps == NULL)
{
    //return "ps is null";
}

ad_rec_t *ad;
int16 adbuf[4096];
int32 k, ts, rem;
char const *hyp;
char const *uttid;
cont_ad_t *cont;
//char word[256];
char * temp;

if ((ad = ad_open_dev(cmd_ln_str_r(config, "-adcdev"),
                      (int)cmd_ln_float32_r(config, "-samprate"))) == NULL)
    printf("Failed to open audio device\n");

/* Initialize continuous listening module */
if ((cont = cont_ad_init(ad, ad_read)) == NULL)
    printf("Failed to initialize voice activity detection\n");
if (ad_start_rec(ad) < 0)
    printf("Failed to start recording\n");
if (cont_ad_calib(cont) < 0)
    printf("Failed to calibrate voice activity detection\n");

for (;;) {
    /* Indicate listening for next utterance */
    //printf("READY....\n");
    fflush(stdout);
    fflush(stderr);

    /* Wait data for next utterance */
    while ((k = cont_ad_read(cont, adbuf, 4096)) == 0)
        sleep_msec(100);

    if (k < 0)
        printf("Failed to read audio\n");

    /*
     * Non-zero amount of data received; start recognition of new utterance.
     * NULL argument to uttproc_begin_utt => automatic generation of utterance-id.
     */
    if (ps_start_utt(ps, NULL) < 0)
        printf("Failed to start utterance\n");

    ps_process_raw(ps, adbuf, k, FALSE, FALSE);
    //printf("Listening...\n");
    fflush(stdout);

    /* Note timestamp for this first block of data */
    ts = cont->read_ts;

    /* Decode utterance until end (marked by a "long" silence, >1sec) */
    for (;;) {

        /* Read non-silence audio data, if any, from continuous listening module */
        if ((k = cont_ad_read(cont, adbuf, 4096)) < 0)
            printf("Failed to read audio 2nd\n");
        if (k == 0) {
            /*
             * No speech data available; check current timestamp with most recent
             * speech to see if more than 1 sec elapsed.  If so, end of utterance.
             */
            if ((cont->read_ts - ts) > DEFAULT_SAMPLES_PER_SEC)
                break;
        }
        else {
            /* New speech data received; note current timestamp */
            ts = cont->read_ts;
        }

        /*
         * Decode whatever data was read above.
         */
        rem = ps_process_raw(ps, adbuf, k, FALSE, FALSE);

        /* If no work to be done, sleep a bit */
        if ((rem == 0) && (k == 0))
            sleep_msec(20);
    }

    /*
     * Utterance ended; flush any accumulated, unprocessed A/D data and stop
     * listening until current utterance completely decoded
     */
    ad_stop_rec(ad);
    while (ad_read(ad, adbuf, 4096) >= 0);
    cont_ad_reset(cont);
    fflush(stdout);
    /* Finish decoding, obtain and print result */
    ps_end_utt(ps);

    hyp = ps_get_hyp(ps, NULL, &uttid);
    fflush(stdout);

    /* Exit if the first word spoken was GOODBYE */
    //actually, for unity, exit if any word was spoken at all! this will avoid an infinite loop of doom!
    if (hyp) {
        strcpy(fromUnity,hyp);
        break;               
    }
    else
        //return "nothing returned";
    /* Resume A/D recording for next utterance */
    if (ad_start_rec(ad) < 0)
        printf("Failed to start recording\n");
}

cont_ad_close(cont);
ad_close(ad);
ps_free(ps);
}
