
IBM Watson speech to text dependency

I want to get started with speech recognition using IBM Watson. As a next step, I will run the code on the Pepper humanoid robot. Right now I cannot resolve the following import:

import com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechResults;

Now I am looking for the Maven dependency that provides `SpeechResults` so I can fix the compile errors in my Java code. I created a Java Maven project and added its dependencies. My pom.xml is here:

 <?xml version="1.0" encoding="UTF-8"?>
 <project xmlns="http://maven.apache.org/POM/4.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.ibm.watson.developer_cloud</groupId>
   <artifactId>java-sdk</artifactId>
   <version>6.14.0</version>
   <name>Watson Developer Cloud Java SDK</name>
   <description>Client library to use the IBM Watson Services and AlchemyAPI</description>
   <url>https://www.ibm.com/watson/developer</url>
   <licenses>
     <license>
       <name>The Apache License, Version 2.0</name>
       <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
     </license>
   </licenses>
   <developers>
     <developer>
       <id>german</id>
       <name>German Attanasio</name>
       <email>germanatt@us.ibm.com</email>
     </developer>
   </developers>
   <scm>
     <connection>scm:git:git@github.com:watson-developer-cloud/java-sdk.git</connection>
     <developerConnection>scm:git:git@github.com:watson-developer-cloud/java-sdk.git</developerConnection>
     <url>https://github.com/watson-developer-cloud/java-sdk</url>
   </scm>
   <dependencies>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>assistant</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>conversation</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>core</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>discovery</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>language-translator</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>natural-language-classifier</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>natural-language-understanding</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>personality-insights</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>speech-to-text</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>text-to-speech</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>tone-analyzer</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.ibm.watson.developer_cloud</groupId>
       <artifactId>visual-recognition</artifactId>
       <version>6.14.0</version>
       <scope>compile</scope>
     </dependency>
     <dependency>
       <groupId>com.squareup.okhttp3</groupId>
       <artifactId>mockwebserver</artifactId>
       <version>3.11.0</version>
       <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>ch.qos.logback</groupId>
       <artifactId>logback-classic</artifactId>
       <version>1.2.3</version>
       <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
       <version>20.0</version>
       <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>junit</groupId>
       <artifactId>junit</artifactId>
       <version>4.12</version>
       <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>javax.websocket</groupId>
       <artifactId>javax.websocket-api</artifactId>
       <version>1.0</version>
     </dependency>
   </dependencies>
 </project>

I am also attaching my Java code here; my Eclipse IDE reports errors on lines 47, 53 and 55:

package com.ibm.watson.developer_cloud.java_sdk;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

import com.ibm.watson.developer_cloud.http.HttpMediaType;
import com.ibm.watson.developer_cloud.speech_to_text.v1.SpeechToText;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.RecognizeOptions;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechResults;
import com.ibm.watson.developer_cloud.speech_to_text.v1.websocket.BaseRecognizeCallback;

public class SpeechToTextUsingWatson {
    SpeechToText service = new SpeechToText();
    boolean keepListeningOnMicrophone = true;
    String transcribedText = "";

    public SpeechToTextUsingWatson() {
        service = new SpeechToText();
        service.setUsernameAndPassword("<username>", "<password>");
    }

    public String recognizeTextFromMicrophone() {
        keepListeningOnMicrophone = true;
        try {
            // Signed PCM AudioFormat with 16kHz, 16 bit sample size, mono
            int sampleRate = 16000;
            AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, false);
            DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

            if (!AudioSystem.isLineSupported(info)) {
                System.err.println("Line not supported");
                return null;
            }

            TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
            line.open(format);
            line.start();

            AudioInputStream audio = new AudioInputStream(line);

            RecognizeOptions options = new RecognizeOptions.Builder()
              .continuous(true)
              .interimResults(true)
              .inactivityTimeout(5) // use this to stop listening when the speaker pauses, i.e. for 5s
              .contentType(HttpMediaType.AUDIO_RAW + "; rate=" + sampleRate)
              .build();

            service.recognizeUsingWebSocket(audio, options, new BaseRecognizeCallback() {
                @Override
                public void onTranscription(SpeechResults speechResults) {
                    // System.out.println(speechResults);
                    String transcript = speechResults.getResults().get(0).getAlternatives().get(0).getTranscript();
                    if (speechResults.getResults().get(0).isFinal()) {
                        keepListeningOnMicrophone = false;
                        transcribedText = transcript;
                        System.out.println("Sentence " + (speechResults.getResultIndex() + 1) + ": " + transcript + "\n");
                    } else {
                        System.out.print(transcript + "\r");
                    }
                }
            });

            do {
                Thread.sleep(1000);
            } while (keepListeningOnMicrophone);

            // closing the WebSockets underlying InputStream will close the WebSocket itself.
            line.stop();
            line.close();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return transcribedText;
    }

    public static void main(String[] args) {
        SpeechToTextUsingWatson speechToTextUsingWatson = new SpeechToTextUsingWatson();
        String recognizedText = speechToTextUsingWatson.recognizeTextFromMicrophone();
        System.out.println("Recognized Text = " + recognizedText);
        System.exit(0);
    }
}

A class that cannot be found is usually a classpath problem. Perhaps the dependencies are not being pulled from Maven Central. Let's try a basic example.
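One quick way to confirm whether the failing class is actually on your runtime classpath is a small check with `Class.forName` (a minimal sketch; the class name below is simply the one from your failing import):

```java
public class ClasspathCheck {

    // Returns true if the named class can be loaded from the current classpath.
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String cls = "com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechResults";
        if (isOnClasspath(cls)) {
            System.out.println(cls + " is on the classpath");
        } else {
            System.out.println(cls + " is NOT on the classpath -- check your Maven dependencies");
        }
    }
}
```

If it reports the class as missing even though the pom.xml declares the dependency, Maven is not resolving the artifact (offline mode, a proxy, or a broken local repository), which would match the compile errors you see in Eclipse.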

The `java-sdk` Maven dependency includes all Watson services, including Speech to Text.

Add this to your pom.xml:

<dependency>
    <groupId>com.ibm.watson.developer_cloud</groupId>
    <artifactId>java-sdk</artifactId>
    <version>6.14.0</version>
</dependency>

Create a Java file in Eclipse and paste the following:

import java.io.File;
import java.io.FileNotFoundException;

import com.ibm.watson.developer_cloud.service.security.IamOptions;
import com.ibm.watson.developer_cloud.speech_to_text.v1.SpeechToText;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.RecognizeOptions;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechRecognitionResults;

public class SpeechToTextExample {

  public static void main(String[] args) throws FileNotFoundException {
    IamOptions iamOptions = new IamOptions.Builder()
        .apiKey("<iam_api_key>")
        .build();

    SpeechToText service = new SpeechToText();
    service.setIamCredentials(iamOptions);
    // In case you are using the Frankfurt instance
    service.setEndPoint("https://stream-fra.watsonplatform.net/speech-to-text/api");

    File audio = new File("<path-to-wav-audio-file>");
    RecognizeOptions recognizeOptions = new RecognizeOptions.Builder()
        .audio(audio)
        .contentType(RecognizeOptions.ContentType.AUDIO_WAV)
        .build();
    SpeechRecognitionResults transcript = service.recognize(recognizeOptions).execute();

    System.out.println(transcript);
  }

}

Make sure you are fetching the dependencies from Maven Central, either with `mvn clean compile` or with the Eclipse M2E plugin.

If the code above works, you can move on to the WebSocket and microphone example, which is similar to what you are trying to accomplish with your code.
