
How to convert a getUserMedia audio stream into a blob or buffer?

I am getting an audio stream from getUserMedia and want to convert it into a blob or buffer and send it to the server as the audio comes in. I am using socket.io to emit it to the server. How can I convert the audio MediaStream into a buffer?

Following is the code that I have written so far:

navigator.getUserMedia({audio: true, video: false},
  function(stream) {
    webcamstream = stream;
    var media = stream.getAudioTracks();
    socket.emit("sendaudio", media);
  },
  function(e) {
    console.log(e);
  }
);

How can I convert the stream into a buffer and emit it to the node.js server as it comes from the getUserMedia callback?

Per @MuazKhan's comment, use MediaRecorder (in Firefox; eventually it will be in Chrome) or RecordRTC/etc. to capture the data into blobs. Then you can export it via one of several methods to the server for distribution: WebSockets, WebRTC DataChannels, etc. Note that these are NOT guaranteed to transfer the data in realtime, and MediaRecorder does not yet have bitrate controls. If transmission is delayed, data may build up locally.
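A minimal sketch of that blob-capture approach, assuming `socket` is an already-connected socket.io client (the event name `audio-chunk` and the 250 ms timeslice are illustrative choices, not part of the original answer):

```javascript
// Sketch: forward MediaRecorder output chunks to a server via socket.io.
// Browser-only code: MediaRecorder and the stream come from getUserMedia.
function streamAudio(stream, socket) {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      // event.data is a Blob; socket.io can serialize Blob/ArrayBuffer payloads
      socket.emit('audio-chunk', event.data);
    }
  };
  // Passing a timeslice makes ondataavailable fire periodically (here ~250 ms)
  // instead of delivering one big blob only when recording stops.
  recorder.start(250);
  return recorder; // caller can invoke recorder.stop() to end streaming
}
```

On the server, each `audio-chunk` event then arrives as a Buffer that can be appended to a file or relayed to other clients.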

If realtime (re)transmission is important, strongly consider instead using a PeerConnection to a server (per @Robert's comment) and then transforming it there into a stream. (How that is done will depend on the server, but you should have encoded Opus data to either repackage or decode and re-encode.) While re-encoding is generally not good, in this case you would do best to decode through NetEq (the webrtc.org stack's jitter-buffer and PacketLossConcealment code) and get a clean realtime audio stream to re-encode for streaming, with loss and jitter dealt with.

mediaRecorder = new MediaRecorder(stream); // Create a recorder for the stream

let chunks = []; // Array that will receive the recorded parts
mediaRecorder.ondataavailable = (event) => {
  chunks.push(event.data); // Append each part to the array
};
mediaRecorder.onstop = () => { // Runs when recording is stopped
  // Build the Blob from the parts collected in the array
  const blob = new Blob(chunks, { type: 'audio/wav' });
};
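Once the chunks are assembled, the Blob can be turned into raw bytes before emitting it to the server. A minimal sketch (the helper name `chunksToBuffer` is illustrative; `Blob` and `Blob.arrayBuffer()` are available in browsers and in Node 18+):

```javascript
// Convert recorded chunks into a Blob, then into a byte buffer for sending.
async function chunksToBuffer(chunks, mimeType = 'audio/webm') {
  const blob = new Blob(chunks, { type: mimeType });
  const arrayBuffer = await blob.arrayBuffer(); // raw bytes of the recording
  return new Uint8Array(arrayBuffer);           // portable byte view
}

// Example with plain byte chunks standing in for MediaRecorder output:
chunksToBuffer([new Uint8Array([1, 2]), new Uint8Array([3, 4])])
  .then((bytes) => console.log(bytes.length)); // → 4
```

The resulting `Uint8Array` can be passed directly to `socket.emit`, which transmits it as binary data.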

// You can also create a reader to inspect the data
var reader = new FileReader();
// FileReader is asynchronous: read the result in onload, not immediately
reader.onload = function() {
  // You can view the generated data as text
  alert(reader.result);
  // Or assign it to a variable
  var enviar_dados = reader.result;
  // and send it to the server via AJAX and/or jQuery, save it to the database...
};
// Pass the Blob as the parameter
reader.readAsText(blob);

PS -> The Blob type can also be, for example:
//const blob = new Blob(chunks, { type: 'audio/ogg; codecs=opus' });
//const blob = new Blob(chunks, { type: 'application/octet-stream' });
//const blob = new Blob(chunks, { type: 'text/plain' });
//const blob = new Blob(chunks, { type: 'text/html' });
...
