
How to use the AWS Transcribe javascript sdk

I'm trying to use @aws-sdk/client-transcribe-streaming in an Angular project, without any luck.

The following code is the only example AWS provides:

// ES6+ example
import {
  TranscribeStreamingClient,
  StartStreamTranscriptionCommand,
} from "@aws-sdk/client-transcribe-streaming";

// a client can be shared by different commands.
const client = new TranscribeStreamingClient({ region: "REGION" });

const params = {
  /** input parameters */
};
const command = new StartStreamTranscriptionCommand(params);
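
For reference, sending the command follows the usual v3 SDK pattern; this minimal sketch is based on the SDK typings (the response exposes the transcript as a TranscriptResultStream async iterable):

// Send the command and consume the transcript events as they arrive
const response = await client.send(command);

for await (const event of response.TranscriptResultStream) {
    if (event.TranscriptEvent) {
        console.log(event.TranscriptEvent.Transcript.Results);
    }
}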

As the SDK documentation states, the StartStreamTranscriptionCommand object expects the params parameter to be of type StartStreamTranscriptionCommandInput.

This StartStreamTranscriptionCommandInput object has an AudioStream field of type AsyncIterable&lt;AudioStream&gt;, which I assume is the audio stream that will be sent to AWS Transcribe.
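
To make that concrete, here is roughly what I understand the input should look like, based on the StartStreamTranscriptionCommandInput typings:

const params = {
    LanguageCode: "en-US",
    MediaEncoding: "pcm",
    MediaSampleRateHertz: 16000,
    AudioStream: audioStream, // AsyncIterable<AudioStream>
};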

The problem is that I have no idea how to create this AudioStream object. The only hint the documentation gives us is that it is a "PCM-encoded stream of audio blobs. The audio stream is encoded as HTTP/2 data frames."

Any help on how to create an AsyncIterable&lt;AudioStream&gt; would be greatly appreciated.

I'm not answering your question directly, but I may have some useful advice for you. I managed to implement Transcribe websocket audio streaming in a website I built recently. I used VueJS, but the process should be very similar. Instead of using the AWS Transcribe JavaScript SDK, I based my code on the information in an AWS blog post and the GitHub repository it links to.

Both of those resources were crucial to getting it working. If you clone the git repository and run the code, you should have a working example, if I remember correctly. To this day I don't fully understand how the code works, since I know nothing about audio, but it works.

I ended up adapting the GitHub code into a few JS files of my own, which I then added to my project. I then had to compute some AWS Signature V4 values that I could send to the Transcribe API, which returns a websocket link I can open with JS. The data sent to the Transcribe websocket comes from the connected microphone, which can be obtained with MediaDevices.getUserMedia(). The GitHub code mentioned above contains the files needed to convert the microphone audio into what Transcribe expects, since it only accepts sample rates of 8000 and 16000, depending on the language you choose.

Understanding the Transcribe documentation and finding all the pieces I had to put together was tricky, since streaming to Transcribe seems to be a bit of an edge case, but I hope the resources I mentioned make it a little easier for you.

Edit: added the source code.

Getting the Transcribe websocket link

I have this set up in an AWS Lambda function running Node, but you can copy everything inside exports.handler into a plain JS file. You will need the crypto-js, aws-sdk and moment node modules!

//THIS SCRIPT IS BASED ON https://docs.aws.amazon.com/transcribe/latest/dg/websocket.html
const crypto = require('crypto-js');
const moment = require('moment');
const aws = require('aws-sdk');
const awsRegion = '!YOUR-REGION!'
const accessKey = '!YOUR-IAM-ACCESS-KEY!';
const secretAccessKey = '!YOUR-IAM-SECRET-KEY!';

exports.handler = async (event) => {
    console.log(event);
    
    // let body = JSON.parse(event.body); I made a body object below for you to test with
    let body = {
        languageCode: "en-US", //or en-GB etc. I found en-US works better, even for British people due to the higher sample rate, which makes the audio clearer.
        sampleRate: 16000
    }
    
    let method = "GET"
    let region = awsRegion;
    let endpoint = "wss://transcribestreaming." + region + ".amazonaws.com:8443"
    let host = "transcribestreaming." + region + ".amazonaws.com:8443"
    let amz_date = moment.utc().format('YYYYMMDD[T]HHmmss') + 'Z'; // SigV4 requires UTC timestamps (Lambda already runs in UTC)
    let datestamp = moment.utc().format('YYYYMMDD');
    let service = 'transcribe';
    let linkExpirationSeconds = 60;
    let languageCode = body.languageCode;
    let sampleRate = body.sampleRate
    let canonical_uri = "/stream-transcription-websocket"
    let canonical_headers = "host:" + host + "\n"
    let signed_headers = "host" 
    let algorithm = "AWS4-HMAC-SHA256"
    let credential_scope = datestamp + "%2F" + region + "%2F" + service + "%2F" + "aws4_request"
    // Date and time of request - NOT url formatted
    let credential_scope2 = datestamp + "/" + region + "/" + service + "/" + "aws4_request"
  
    
    let canonical_querystring  = "X-Amz-Algorithm=" + algorithm
    canonical_querystring += "&X-Amz-Credential="+ accessKey + "%2F" + credential_scope
    canonical_querystring += "&X-Amz-Date=" + amz_date 
    canonical_querystring += "&X-Amz-Expires=" + linkExpirationSeconds
    canonical_querystring += "&X-Amz-SignedHeaders=" + signed_headers
    canonical_querystring += "&language-code=" + languageCode + "&media-encoding=pcm&sample-rate=" + sampleRate
    
    //Empty hash as payload is unknown
    let emptyHash = crypto.SHA256("");
    let payload_hash = crypto.enc.Hex.stringify(emptyHash);
    
    let canonical_request = method + '\n' 
    + canonical_uri + '\n' 
    + canonical_querystring + '\n' 
    + canonical_headers + '\n' 
    + signed_headers + '\n' 
    + payload_hash
    
    let hashedCanonicalRequest = crypto.SHA256(canonical_request);
    
    let string_to_sign = algorithm + "\n"
    + amz_date + "\n"
    + credential_scope2 + "\n"
    + crypto.enc.Hex.stringify(hashedCanonicalRequest);
    
    //Create the signing key
    let signing_key = getSignatureKey(secretAccessKey, datestamp, region, service);
    
    //Sign the string_to_sign using the signing key
    let inBytes = crypto.HmacSHA256(string_to_sign, signing_key);
    
    let signature = crypto.enc.Hex.stringify(inBytes);
    
    canonical_querystring += "&X-Amz-Signature=" + signature;
    
    let request_url = endpoint + canonical_uri + "?" + canonical_querystring;
    
    //The final product
    console.log(request_url);
    
    let response = {
        statusCode: 200,
        headers: {
          "Access-Control-Allow-Origin": "*"  
        },
        body: JSON.stringify(request_url)
    };
    return response;    
};

function getSignatureKey(key, dateStamp, regionName, serviceName) {
    var kDate = crypto.HmacSHA256(dateStamp, "AWS4" + key);
    var kRegion = crypto.HmacSHA256(regionName, kDate);
    var kService = crypto.HmacSHA256(serviceName, kRegion);
    var kSigning = crypto.HmacSHA256("aws4_request", kService);
    return kSigning;
};
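
A quick way to sanity-check this locally, assuming you saved the Lambda code as handler.js (a hypothetical filename):

// quick local test (Node) -- the handler ignores event.body and uses the hardcoded test body above
const { handler } = require('./handler');
handler({}).then(res => console.log(JSON.parse(res.body)));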

Code for opening the websocket, sending the audio and receiving the responses

Install these npm modules: microphone-stream (not sure if it's still available, but it's in the source code of the GitHub repository; I may have just pasted it into my node_modules folder), @aws-sdk/util-utf8-node, @aws-sdk/eventstream-marshaller.

import audioUtils from "../js/audioUtils.js"; //For encoding audio data as PCM
import mic from "microphone-stream"; //Collect microphone input as a stream of raw bytes
import * as util_utf8_node from "@aws-sdk/util-utf8-node"; //Utilities for encoding and decoding UTF8
import * as marshaller from "@aws-sdk/eventstream-marshaller"; //For converting binary event stream messages to and from JSON

let micStream;
let mediaStream;
let inputSampleRate; // The sample rate your mic is producing
let transcribeSampleRate = 16000; // The sample rate you requested from Transcribe
let transcribeLanguageCode = "en-US"; // The language you want Transcribe to use
let websocket;

// first we get the microphone input from the browser (as a promise)...
// (this section is assumed to run inside an async function, since it uses await and return)
try {
    mediaStream = await window.navigator.mediaDevices.getUserMedia({
        video: false,
        audio: true
    });
}
catch (error) {
    console.log(error);
    alert("Error. Please make sure you allow this website to access your microphone");
    return;
}



const eventStreamMarshaller = new marshaller.EventStreamMarshaller(util_utf8_node.toUtf8, util_utf8_node.fromUtf8);

// let's get the mic input from the browser, via the microphone-stream module
micStream = new mic();

micStream.on("format", data => {
    inputSampleRate = data.sampleRate;
});

micStream.setStream(mediaStream);


//THIS IS WHERE YOU NEED TO GET YOURSELF A LINK FROM TRANSCRIBE
//AS MENTIONED I USED AWS LAMBDA FOR THIS
//LOOK AT THE ABOVE CODE FOR GETTING A TRANSCRIBE LINK

getTranscribeLink(transcribeLanguageCode, transcribeSampleRate) // Not a real function, you need to make this! The options are what would be in the body object in AWS Lambda (see the sketch after this code)

let url = "!YOUR-GENERATED-URL!"


//Configure your websocket
websocket = new WebSocket(url);
websocket.binaryType = "arraybuffer";

websocket.onopen = () => {
    //Make the spinner disappear
    micStream.on('data', rawAudioChunk => {
        // the audio stream is raw audio bytes. Transcribe expects PCM with additional metadata, encoded as binary
        let binary = convertAudioToBinaryMessage(rawAudioChunk);

        if (websocket.readyState === websocket.OPEN)
            websocket.send(binary);
    });
};

// handle messages, errors, and close events
websocket.onmessage = async message => {

    //convert the binary event stream message to JSON
    var messageWrapper = eventStreamMarshaller.unmarshall(Buffer.from(message.data));

    var messageBody = JSON.parse(String.fromCharCode.apply(String, messageWrapper.body));

    //THIS IS WHERE YOU DO SOMETHING WITH WHAT YOU GET FROM TRANSCRIBE
    console.log("Got something from Transcribe!:");
    console.log(messageBody);
}
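
If it helps, the useful part of each successful message lives under Transcript.Results (exception messages have a different shape). A minimal sketch of pulling out the text, based on the response shape in the Transcribe streaming docs:

function extractTranscript(messageBody) {
    let results = messageBody.Transcript.Results;
    if (!results || results.length === 0) return "";

    // Each result carries one or more alternatives; the first is the most likely
    let transcript = results[0].Alternatives[0].Transcript;

    // Results with IsPartial === true get replaced by later, refined versions
    return results[0].IsPartial ? "(partial) " + transcript : transcript;
}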

// FUNCTIONS

function convertAudioToBinaryMessage(audioChunk) {
    var raw = mic.toRaw(audioChunk);
    if (raw == null) return;

    // downsample and convert the raw audio bytes to PCM
    var downsampledBuffer = audioUtils.downsampleBuffer(raw, inputSampleRate, transcribeSampleRate);
    var pcmEncodedBuffer = audioUtils.pcmEncode(downsampledBuffer);

    // add the right JSON headers and structure to the message
    var audioEventMessage = getAudioEventMessage(Buffer.from(pcmEncodedBuffer));

    // convert the JSON object + headers into a binary event stream message
    var binary = eventStreamMarshaller.marshall(audioEventMessage);
    return binary;
}

function getAudioEventMessage(buffer) {
    // wrap the audio data in a JSON envelope
    return {
        headers: {
            ':message-type': {
                type: 'string',
                value: 'event'
            },
            ':event-type': {
                type: 'string',
                value: 'AudioEvent'
            }
        },
        body: buffer
    };
}
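
Since getTranscribeLink above is left for you to write, here is a minimal sketch of what it could look like, assuming the Lambda is exposed through an API Gateway endpoint (the URL below is a placeholder):

async function getTranscribeLink(languageCode, sampleRate) {
    // Hypothetical API Gateway endpoint fronting the Lambda above
    const response = await fetch("https://YOUR-API-GATEWAY-URL/transcribe-link", {
        method: "POST",
        body: JSON.stringify({ languageCode, sampleRate })
    });
    // The Lambda returns the presigned websocket URL as a JSON-encoded string
    return response.json();
}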

audioUtils.js

export default {
    pcmEncode: pcmEncode,
    downsampleBuffer: downsampleBuffer
}

export function pcmEncode(input) {
    var offset = 0;
    var buffer = new ArrayBuffer(input.length * 2);
    var view = new DataView(buffer);
    for (var i = 0; i < input.length; i++, offset += 2) {
        var s = Math.max(-1, Math.min(1, input[i]));
        view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
    }
    return buffer;
}

export function downsampleBuffer(buffer, inputSampleRate = 44100, outputSampleRate = 16000) {
        
    if (outputSampleRate === inputSampleRate) {
        return buffer;
    }

    var sampleRateRatio = inputSampleRate / outputSampleRate;
    var newLength = Math.round(buffer.length / sampleRateRatio);
    var result = new Float32Array(newLength);
    var offsetResult = 0;
    var offsetBuffer = 0;
    
    while (offsetResult < result.length) {

        var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio);

        var accum = 0,
        count = 0;
        
        for (var i = offsetBuffer; i < nextOffsetBuffer && i < buffer.length; i++ ) {
            accum += buffer[i];
            count++;
        }

        result[offsetResult] = accum / count;
        offsetResult++;
        offsetBuffer = nextOffsetBuffer;

    }

    return result;

}
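
One design note: downsampleBuffer simply averages the input samples that fall into each output sample, which acts as a very crude low-pass filter. That was good enough for speech in my case, but it is not a proper resampler.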

I think that's about it. It should certainly be enough to get it working.

It turns out that, for some reason, they removed the only explanation of how to create this AsyncIterable&lt;AudioStream&gt; from the README. Searching through their GitHub issues, someone pointed me to this old version of the README from an old commit. That version contains some examples of how to create this object.
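
For future readers: the examples in that old README boiled down to wrapping each audio chunk in an AudioEvent via an async generator, roughly like this (device here is a stand-in for whatever produces your audio chunks):

const audioStream = async function* () {
    while (!device.ended) {
        const chunk = await device.read(); // raw PCM bytes from your audio source
        yield { AudioEvent: { AudioChunk: chunk } };
    }
};

const command = new StartStreamTranscriptionCommand({
    LanguageCode: "en-US",
    MediaEncoding: "pcm",
    MediaSampleRateHertz: 16000,
    AudioStream: audioStream(),
});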
