
Stream live audio to Node.js server

I'm working on a project and I need to send an audio stream to a Node.js server. I'm able to capture microphone sound with this function:

function micCapture(){
    'use strict';

    navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

    var constraints = {
        audio: true,
        video: false
    };

    var video = document.querySelector('video');

    function successCallback(stream) {
        window.stream = stream; // make the stream available from the console
        if ('srcObject' in video) {
            video.srcObject = stream;
        } else {
            // Fallback for older browsers
            video.src = window.URL.createObjectURL(stream);
        }
        //Send audio stream
        //server.send(stream);
    }

    function errorCallback(error) {
        console.log('navigator.getUserMedia error: ', error);
    }

    navigator.getUserMedia(constraints, successCallback, errorCallback);
}

As you can see, I'm able to capture audio and play it on the website.

Now I want to send that audio stream to a Node.js server and then send it back to other clients, like a voice chat, but I don't want to use WebRTC because I need the stream on the server. How can I achieve this? Can I use socket.io-stream to do this? In the examples I saw, they recorded the audio and sent a file, but I need "live" audio.

I have recently done live audio upload using socket.io from the browser to the server. I am going to answer here in case someone else needs it.

    var stream;
    var socket = io();
    var bufferSize = 1024 * 16;
    var audioContext = new AudioContext();
    // createScriptProcessor is deprecated. Let me know if anyone finds an alternative
    var processor = audioContext.createScriptProcessor(bufferSize, 1, 1);
    processor.connect(audioContext.destination);

    navigator.mediaDevices.getUserMedia({ video: false, audio: true }).then(handleMicStream).catch(err => {
      console.log('error from getUserMedia', err);
    });

handleMicStream will run when the user grants permission to use the microphone.

  function handleMicStream(streamObj) {
    // keep the stream in a global variable so it can be stopped later
    stream = streamObj;

    input = audioContext.createMediaStreamSource(stream);

    input.connect(processor);

    processor.onaudioprocess = e => {
      microphoneProcess(e); // receives data from microphone
    };
  }


  function microphoneProcess(e) {
    const left = e.inputBuffer.getChannelData(0); // get only one audio channel
    const left16 = convertFloat32ToInt16(left); // skip if you don't need this
    socket.emit('micBinaryStream', left16); // send to server via web socket
  }

// Converts Float32 samples to 16-bit PCM, keeping every third sample
// (a crude downsample, e.g. 48 kHz -> 16 kHz)
function convertFloat32ToInt16(buffer) {
    let l = buffer.length;
    const buf = new Int16Array(l / 3);

    while (l--) {
      if (l % 3 === 0) {
        // clamp to [-1, 1] and scale to the Int16 range
        const s = Math.max(-1, Math.min(1, buffer[l]));
        buf[l / 3] = s < 0 ? s * 0x8000 : s * 0x7FFF;
      }
    }
    return buf.buffer;
  }



Have your socket.io server listen to micBinaryStream and you should get the data. I needed the data in BINARY16 format for the Google API; if you don't need this, you can skip the call to convertFloat32ToInt16().
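For completeness, here is a minimal sketch of the receiving side, assuming a plain Node.js server with socket.io (v4); the event name micBinaryStream matches the client code above, while the port and the broadcast back to other clients are just illustrative:

    const http = require('http');
    const { Server } = require('socket.io');

    const server = http.createServer();
    const io = new Server(server);

    io.on('connection', (socket) => {
      // Each chunk is the ArrayBuffer of 16-bit PCM samples sent by the browser
      socket.on('micBinaryStream', (chunk) => {
        // Forward it to every other connected client (voice-chat style),
        // or feed it to a speech API, write it to a file, pipe it to ffmpeg, etc.
        socket.broadcast.emit('micBinaryStream', chunk);
      });
    });

    server.listen(3000, () => console.log('listening on :3000'));

Receiving clients would then listen for the same event and feed the raw PCM into their own playback path (for example via the Web Audio API).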

Important

When you need to stop listening you MUST disconnect the processor and end the stream. Run the closeAll() function below.

function closeAll() {
    const tracks = stream ? stream.getTracks() : null;
    const track = tracks ? tracks[0] : null;

    if (track) {
      track.stop();
    }

    if (processor) {
      if (input) {
        try {
          input.disconnect(processor);
        } catch (error) {
          console.warn('Attempt to disconnect input failed.');
        }
      }
      processor.disconnect(audioContext.destination);
    }

    if (audioContext) {
      audioContext.close().then(() => {
        input = null;
        processor = null;
        audioContext = null;
      });
    }
  }

It's an old question, I see. I'm doing the same thing (except my server doesn't run Node.js and is written in C#) and stumbled upon this.

I don't know if someone is still interested, but I've elaborated a bit. The current alternative to the deprecated createScriptProcessor is the AudioWorklet interface.

From: https://webaudio.github.io/web-audio-api/#audioworklet

1.32.1. Concepts

The AudioWorklet object allows developers to supply scripts (such as JavaScript or WebAssembly code) to process audio on the rendering thread, supporting custom AudioNodes. This processing mechanism ensures synchronous execution of the script code with other built-in AudioNodes in the audio graph.

You cannot implement interfaces in JavaScript as far as I know, but you can extend a class derived from it.

And the one we need is: https://developer.mozilla.org/en-US/docs/Web/API/AudioWorkletProcessor

So I wrote a processor that just mirrors the input values to the output and displays them.

class CustomAudioProcessor extends AudioWorkletProcessor {
    process (inputs, outputs, parameters) {
        const input = inputs[0];
        const output = outputs[0];
        for (let channel = 0; channel < input.length; ++channel) {
            for (let i = 0; i < input[channel].length; ++i) {
                // Just copying all the data from input to output
                output[channel][i] = input[channel][i];
                // The next one will make the app crash but yeah, the values are there
                // console.log(output[channel][i]);
            }
        }
        // Return true to keep the processor alive
        return true;
    }
}

// Register under the name used by audioWorklet.addModule() below
registerProcessor('custom-audio-processor', CustomAudioProcessor);
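The class above only mirrors the audio locally. If you also need the samples on the main thread, for example to send them to the server over socket.io as in the question, an AudioWorkletProcessor can post each block through its MessagePort; the following is only a sketch of that idea, not part of the original answer:

    // Inside process() in custom-audio-processor.js:
    //     this.port.postMessage(inputs[0][0]); // Float32Array for the first channel

    // On the main thread, after the AudioWorkletNode has been created:
    customAudioProcessor.port.onmessage = (event) => {
      // event.data is the Float32Array posted by the processor; send it as-is,
      // or run it through convertFloat32ToInt16() first if you need 16-bit PCM
      socket.emit('micBinaryStream', event.data.buffer);
    };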

The processor must then be placed into the audio pipeline, after the microphone and before the speakers.

function record() {
    const constraints = { audio: true };

    navigator.mediaDevices.getUserMedia(constraints)
        .then(function(stream) {
            audioCtx = new AudioContext();
            var source = audioCtx.createMediaStreamSource(stream);

            // Load the worklet module, then wire mic -> processor -> speakers.
            // Playback starts automatically once the nodes are connected.
            audioCtx.audioWorklet.addModule("custom-audio-processor.js").then(() => {
                customAudioProcessor = new AudioWorkletNode(audioCtx, "custom-audio-processor");
                source.connect(customAudioProcessor);
                customAudioProcessor.connect(audioCtx.destination);
            });
        });
}
Works! Good luck! :)
