NodeJS: Convert Int16Array binary Buffer to LINEAR16-encoded raw stream for Google Speech API
I am trying to convert speech to text in a Node server; the speech is recorded in the browser with AudioContext. I am able to send the Int16Array buffer (the recorded data) to my Node server over a WebSocket connection with binaryType: 'arraybuffer'.
this.processor.onaudioprocess = (e) => {
  // this.processAudio(e)
  // Convert the Float32 samples from the AudioContext to 16-bit integers
  const float32Array =
    e.inputBuffer.getChannelData(0) || new Float32Array(this.bufferSize);
  const int16Array = new Int16Array(float32Array.length);
  for (let i = 0; i < float32Array.length; i++) {
    int16Array[i] = 32767 * Math.min(1, float32Array[i]);
  }
  this.socket.send(int16Array.buffer);
};
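One caveat in the loop above: it only clamps at +1 (`Math.min(1, …)`), so a sample below -1 would overflow the signed 16-bit range. A symmetric conversion, written as a stand-alone helper (the function name and scaling choice are illustrative, not from the post):

```javascript
// Convert Float32 samples in [-1, 1] to signed 16-bit PCM, clamping
// on both ends so out-of-range samples cannot wrap around.
function floatTo16BitPCM(float32Array) {
  const int16Array = new Int16Array(float32Array.length);
  for (let i = 0; i < float32Array.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Array[i])); // clamp to [-1, 1]
    int16Array[i] = s < 0 ? s * 0x8000 : s * 0x7fff;      // scale to int16 range
  }
  return int16Array;
}
```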
In the server, the data is received as:
<Buffer 66 6f 6f ...>
Now I want to parse it, or convert it into a readable stream, so that I can pipe it into the Google Speech recognition stream.
function processAudioBuffer(int16ArrayBuffer) {
  console.log('Received stream:', int16ArrayBuffer, typeof recognizeStreams[userId]);
  const recognizer = getGoogleSpeechStreamRecognizer();
  if (recognizer) {
    /* HERE I NEED SOMETHING WHICH MAKES MY BUFFER COMPATIBLE WITH GOOGLE SPEECH API */
    // tried with streamifier but no luck
    // streamifier.createReadStream(int16ArrayBuffer).pipe(recognizer);
    // also tried with Record, which the google-cloud-node samples use to
    // record a stream from a connected mic device, but no luck
    var file = new Record({
      path: `${userId}.raw`,
      encoding: 'arraybuffer',
      contents: int16ArrayBuffer
    });
    file.pipe(recognizer);
  } else {
    console.log('user stream is not yet created');
  }
}
The recognizer throws the following error:
Error: write after end
at writeAfterEnd (/Users/demo/node_modules/duplexify/node_modules/readable-stream/lib/_stream_writable.js:222:12)
at Writable.write (/Users/demo/node_modules/duplexify/node_modules/readable-stream/lib/_stream_writable.js:262:20)
at Duplexify.end (/Users/demo/node_modules/duplexify/index.js:223:18)
at Record.pipe (/Users/demo/node_modules/record/index.js:70:14)
at processAudioBuffer (/Users/demo/app.js:87:10)
at WebSocket.incoming (/Users/demo/app.js:104:7)
at emitTwo (events.js:106:13)
at WebSocket.emit (events.js:191:7)
at Receiver._receiver.onmessage (/Users/demo/node_modules/ws/lib/WebSocket.js:146:54)
at Receiver.dataMessage (/Users/demo/node_modules/ws/lib/Receiver.js:380:14)
Solved! The buffer can be written directly to the recognizer stream created from Google Speech, like this:
const recognizer = getGoogleSpeechStreamRecognizer();
recognizer.write(int16ArrayBuffer);
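The post never shows getGoogleSpeechStreamRecognizer, but for the raw Int16Array samples above to be understood, the streaming request presumably uses LINEAR16 encoding. A hypothetical request config (all values are assumptions, not from the post; sampleRateHertz must match the browser AudioContext's actual sample rate, which is often 44100 or 48000 rather than 16000):

```javascript
// Hypothetical streamingRecognize request config (assumed, not from the post):
const request = {
  config: {
    encoding: 'LINEAR16',   // raw little-endian 16-bit PCM, i.e. the Int16Array data
    sampleRateHertz: 16000, // must match the AudioContext sample rate
    languageCode: 'en-US',
  },
  interimResults: false,    // only deliver final transcription results
};
```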