
How to play audio stream chunks recorded with WebRTC?

I'm trying to create an experimental application that streams audio in real time from client 1 to client 2.

So following some tutorials and questions about the same subject, I used WebRTC and binaryjs. So far this is what I get:

1- Client 1 and Client 2 have connected to BinaryJS to send/receive data chunks.

2- Client 1 uses WebRTC to record audio and gradually sends it to BinaryJS.

3- Client 2 receives the chunks and tries to play them.

Well, I'm getting an error in the last part. This is the error message I get:

Uncaught RangeError: Source is too large
    at Float32Array.set (native)

And this is the code:

Client 1

var WSClient;
var AudioStream;

function load(){
    var session = {
        audio: true,
        video: false
    };

    var recordRTC = null;

    navigator.getUserMedia(session, startRecording, onError);
    WSClient = new BinaryClient('ws://localhost:9001');
    WSClient.on('open',function(){
        console.log('client opened')
        AudioStream = WSClient.createStream();
    })
}

function startRecording(stream){
    var context = new AudioContext();
    var audio_input = context.createMediaStreamSource(stream);
    var buffer_size = 2048;

    var recorder = context.createScriptProcessor(buffer_size, 1, 1);

    recorder.onaudioprocess = function(e){
        console.log('chunk')
        // each channel of the input buffer is a Float32Array of raw PCM samples
        var left = e.inputBuffer.getChannelData(0);
        AudioStream.write(left);
    };

    audio_input.connect(recorder);
    recorder.connect(context.destination);
}

Client 2

var WSClient;
var audioContext;
var sourceNode;

function load(){
    audioContext = new AudioContext();
    sourceNode = audioContext.createBufferSource();
    sourceNode.connect(audioContext.destination);

    sourceNode.start(0);

    WSClient = new BinaryClient('ws://localhost:9001');

    WSClient.on('open',function(){
        console.log('client opened');
    });

    WSClient.on('stream', function(stream, meta){
        // collect stream data
        stream.on('data', function(data){
            console.log('received chunk')
            var integers = new Int16Array(data);
            var audioBuffer = audioContext.createBuffer(1, 2048, 4410);
        audioBuffer.getChannelData(0).set(integers); // apparently this is where the error occurs
            sourceNode.buffer = audioBuffer;
        });
    });
}

Server

var wav = require('wav');
var binaryjs = require('binaryjs');

var binaryjs_server = binaryjs.BinaryServer;

var server = binaryjs_server({port: 9001});

server.on('connection', function(client){
    console.log('server connected');

    var file_writter = null;

    client.on('stream', function(stream, meta){
        console.log('streaming', server.clients)
        //send to other clients
        for(var id in server.clients){
            if(server.clients.hasOwnProperty(id)){
                var otherClient = server.clients[id];
                if(otherClient != client){
                    var send = otherClient.createStream(meta);
                    stream.pipe(send);
                }
            }
        }
    });

    client.on('close', function(stream){
        console.log('client closed')
        if(file_writter != null) file_writter.end();
    });
});

The error occurs here:

audioBuffer.getChannelData(0).set(integers);

So I have two questions:

Is it possible to send the chunks I captured in client 1 and then reproduce them in client 2?

What is the deal with the error I'm having?

Thanks guys!

@edit 1

Since I'm getting code snippets from other questions, I'm still trying to understand them. I commented out the line in the client 2 code that creates an Int16Array, and I now get a different error (but I don't know which version of the code is more correct):

Uncaught DOMException: Failed to set the 'buffer' property on 'AudioBufferSourceNode': Cannot set buffer after it has been already been set

Probably because I'm setting it every time I get a new chunk of data.

The DOMException about AudioBufferSourceNode means you need to create a new AudioBufferSourceNode for every new AudioBuffer that you're creating. So something like:

sourceNode = new AudioBufferSourceNode(audioContext, {buffer: audioBuffer})

And an AudioBuffer holds Float32Arrays. You need to convert your Int16Array to a Float32Array before assigning it to an AudioBuffer; dividing every sample by 32768 is probably good enough. Putting both fixes together, the receiving handler could look like the sketch below.
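A minimal sketch of the client 2 'data' handler with both fixes applied, assuming the incoming chunks really are 16-bit PCM. The buffer length is taken from the chunk itself rather than hard-coded to 2048, and the context's own sample rate is used instead of 4410 (which looks like a typo for 44100):

WSClient.on('stream', function(stream, meta){
    stream.on('data', function(data){
        // interpret the raw bytes as 16-bit integers
        var integers = new Int16Array(data);

        // convert to floats in [-1, 1) by dividing by 32768
        var floats = new Float32Array(integers.length);
        for(var i = 0; i < integers.length; i++){
            floats[i] = integers[i] / 32768;
        }

        // size the buffer to the actual chunk and reuse the context's sample rate
        var audioBuffer = audioContext.createBuffer(1, floats.length, audioContext.sampleRate);
        audioBuffer.getChannelData(0).set(floats);

        // a fresh, single-use source node per chunk
        var node = new AudioBufferSourceNode(audioContext, {buffer: audioBuffer});
        node.connect(audioContext.destination);
        node.start(0);
    });
});

Note that this plays each chunk the moment it arrives, so playback won't be gapless; for smoother streaming you'd queue the chunks and schedule each node.start() at an accumulated context time instead of 0.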
