Record mic and audio from SIP call using sip.js

Good evening Stack Overflow! I really need help with a project of mine where I'm using sip.js and a VoIP provider to make real calls to a phone number.

The Goal

I want to allow the user to record both the incoming audio and the microphone and save the data on a server (base64-encoded or as a file), so that after the conversation I can listen to it again and use it for whatever my purpose (employee training) was.
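For the server side of that goal, a minimal sketch of the upload step might look like the following. The `/api/recordings` endpoint is hypothetical, just a placeholder for whatever your server exposes; the base64 helper itself is plain JavaScript.

```javascript
// Sketch: encode a recorded audio buffer as base64 and POST it to a server.
// NOTE: '/api/recordings' is a hypothetical endpoint -- substitute your own.
function arrayBufferToBase64(buffer) {
  const bytes = new Uint8Array(buffer);
  let binary = '';
  // Build the binary string in chunks to avoid call-stack limits on large buffers
  const chunkSize = 0x8000;
  for (let i = 0; i < bytes.length; i += chunkSize) {
    binary += String.fromCharCode.apply(null, bytes.subarray(i, i + chunkSize));
  }
  return btoa(binary);
}

function uploadRecording(blob) {
  // Convert the recorded Blob to base64 and send it as JSON
  return blob.arrayBuffer().then(function (buffer) {
    return fetch('/api/recordings', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ audio: arrayBufferToBase64(buffer) })
    });
  });
}
```

Depending on file size, sending the raw Blob as `multipart/form-data` instead of base64 JSON may be more efficient, since base64 inflates the payload by about a third.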

The Problem

I can't capture the sound of the person speaking, which comes through an <audio> HTML tag (rendered by the sip.js plugin). So far I haven't found any way to successfully save the sound streaming through this audio tag.

What I've done so far

I've successfully figured out how to record the microphone using a plugin called AudioRecorder, which lets me capture audio through the microphone and save it. I slightly changed the code so the result is saved base64-encoded. This all works as expected, but I only get the audio of my own voice, not of the person I'm talking with.

Because I succeeded in recording my own voice, I looked into the AudioRecorder plugin and tried to adapt it to record from an <audio> tag instead. I found the createMediaStreamSource function inside AudioRecorder and tried to point it at the <audio> tag, but that did not work, as I suspected, because the <audio> element itself is not a MediaStream (as far as I understand).
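As a side note, modern browsers can turn a media element into a MediaStream via captureStream() (Firefox exposes it as mozCaptureStream), which createMediaStreamSource does accept. Support varies by browser, so this is only a hedged sketch with feature detection:

```javascript
// Sketch: turn an <audio> element into a MediaStream so it can be fed to
// createMediaStreamSource. captureStream() is not supported everywhere
// (Firefox uses the prefixed mozCaptureStream), so feature-detect first.
function audioElementToSource(audioContext, audioElement) {
  const capture = audioElement.captureStream || audioElement.mozCaptureStream;
  if (!capture) {
    throw new Error('captureStream() is not supported in this browser');
  }
  const stream = capture.call(audioElement);
  return audioContext.createMediaStreamSource(stream);
}
```

If captureStream() is unavailable, createMediaElementSource() (used in the accepted solution below) is the fallback that works with a plain <audio> element.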

The Code

I'm basically using the sip.js plugin to establish a call to a phone number with the code below (a simplified example matching my code; my real code contains some extra values that don't need to be shown here):

// Create a user agent called bob, connect, and register to receive invitations.
var userAgent = new SIP.UA({
  uri: 'bob@example.com',
  wsServers: ['wss://sip-ws.example.com'],
  register: true
});
var options = {
  media: {
    constraints: { audio: true, video: false },
    render: { remote: document.getElementById("audio") }
  }
};

Then I use the built-in invite function to call a phone number, which does the rest. Audio and microphone are now up and running.

userAgent.invite("+4512345678", options);

I can now talk with my new best friend Bob, but as of now I can't record anything other than my own voice.
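For reference, some sip.js versions expose the underlying RTCPeerConnection through the session's sessionDescriptionHandler, which lets you pull the remote audio tracks directly instead of going through the <audio> tag. The exact property names vary between sip.js releases, so treat this as an assumption-laden sketch and verify against your version:

```javascript
// Sketch: collect the remote audio tracks of an established sip.js session.
// The sessionDescriptionHandler/peerConnection properties exist in several
// sip.js releases, but names differ between versions -- verify against yours.
function getRemoteAudioStream(session) {
  const pc = session.sessionDescriptionHandler.peerConnection;
  const remoteStream = new MediaStream();
  pc.getReceivers().forEach(function (receiver) {
    // Keep only audio tracks; a call may also carry video receivers
    if (receiver.track && receiver.track.kind === 'audio') {
      remoteStream.addTrack(receiver.track);
    }
  });
  return remoteStream;
}
```

The returned MediaStream can then be fed to createMediaStreamSource just like the microphone stream.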

What's Next?

I would really like some help understanding how I can record the sound of "Bob" and store it, preferably in the same file as my own voice. If I have to record two separate files and play them back in sync, I won't mind, but a single file is preferred.
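For the "same file" requirement, one built-in alternative to Recorder.js is to mix both streams into a MediaStreamAudioDestinationNode and feed that to a MediaRecorder. This is a hedged sketch, assuming you already hold the microphone stream and the remote stream; the output format depends on the browser (typically audio/webm with Opus):

```javascript
// Sketch: mix the local microphone and the remote party's audio into a
// single recording using built-in browser APIs (no Recorder.js needed).
function recordBothSides(audioContext, micStream, remoteStream) {
  // A destination node acts as a mixer: everything connected to it
  // ends up in its .stream property
  const destination = audioContext.createMediaStreamDestination();
  audioContext.createMediaStreamSource(micStream).connect(destination);
  audioContext.createMediaStreamSource(remoteStream).connect(destination);

  const recorder = new MediaRecorder(destination.stream);
  const chunks = [];
  recorder.ondataavailable = function (e) { chunks.push(e.data); };
  recorder.start();

  return {
    // stop() resolves with a single Blob containing both voices mixed
    stop: function () {
      return new Promise(function (resolve) {
        recorder.onstop = function () {
          resolve(new Blob(chunks, { type: recorder.mimeType }));
        };
        recorder.stop();
      });
    }
  };
}
```

The resulting Blob can be uploaded as-is or base64-encoded, depending on how the server expects it.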

I know this might look like a call for help without showing any real code of what I've tried myself, but I have to admit I just fiddled with the code for hours without any good results, and now I'm screaming for help.

Thank you all in advance, and sorry for the bad grammar and (mis)use of language.

Okay, so I finally found a solution to my problem, which I want to share here.

What I did to solve the problem was to add ONE simple line of code to the "normal" microphone recording script. The script to record mic audio is:

window.AudioContext = window.AudioContext || window.webkitAudioContext;

var audioGlobalContext = new AudioContext();
var audioOutputAnalyser;
var inputPoint = null,
    audioRecorder = null;
var recording = false;

// Controls the start and stop of recording
function toggleRecording( e ) {
    if (recording == true) {
        recording = false;
        audioRecorder.stop();
        audioRecorder.getBuffers( gotBuffers );
        console.log("Stop recording");
    } else {
        if (!audioRecorder)
            return;
        recording = true;
        audioRecorder.clear();
        audioRecorder.record();
        console.log("Start recording");
    }
}

function gotBuffers(buffers) {
    audioRecorder.exportWAV(doneEncoding);
}

function doneEncoding(blob) {
    document.getElementById("outputAudio").pause();
    Recorder.setupDownload(blob);
}

function gotAudioMicrophoneStream(stream) {
    var source = audioGlobalContext.createMediaStreamSource(stream);
    source.connect(inputPoint);
}

function initAudio() {
        if (!navigator.getUserMedia)
            navigator.getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
        if (!navigator.cancelAnimationFrame)
            navigator.cancelAnimationFrame = navigator.webkitCancelAnimationFrame || navigator.mozCancelAnimationFrame;
        if (!navigator.requestAnimationFrame)
            navigator.requestAnimationFrame = navigator.webkitRequestAnimationFrame || navigator.mozRequestAnimationFrame;

    inputPoint = audioGlobalContext.createGain();

    navigator.getUserMedia({
        "audio": {
            "mandatory": {
                "googEchoCancellation": "true",
                "googAutoGainControl": "false",
                "googNoiseSuppression": "true",
                "googHighpassFilter": "false"
            },
            "optional": []
        },
    }, gotAudioMicrophoneStream, function(e) {
        alert('Error recording microphone');
        console.log(e);
    });

    var analyserNode = audioGlobalContext.createAnalyser();
    analyserNode.fftSize = 2048;
    inputPoint.connect(analyserNode);
    var zeroGain = audioGlobalContext.createGain();
    zeroGain.gain.value = 0.0;
    inputPoint.connect(zeroGain);
    zeroGain.connect(audioGlobalContext.destination);

    audioRecorder = new Recorder(inputPoint);
}

window.addEventListener('load', initAudio );

The function I was looking for to turn the audio tag's sound into an audio source was createMediaElementSource(), so what I did was add this function:

function gotAudioOutputStream() {
    var source = audioGlobalContext.createMediaElementSource(document.getElementById("outputAudio"));
    source.connect(inputPoint);
    source.connect(audioGlobalContext.destination);
}

Then, in the initAudio() function just after navigator.getUserMedia, I added a call to that function. The finished code (with HTML) looks like this:

window.AudioContext = window.AudioContext || window.webkitAudioContext;

var audioGlobalContext = new AudioContext();
var audioOutputAnalyser;
var inputPoint = null,
    audioRecorder = null;
var recording = false;

// Controls the start and stop of recording
function toggleRecording( e ) {
    if (recording == true) {
        recording = false;
        audioRecorder.stop();
        audioRecorder.getBuffers( gotBuffers );
        console.log("Stop recording");
    } else {
        if (!audioRecorder)
            return;
        recording = true;
        audioRecorder.clear();
        audioRecorder.record();
        console.log("Start recording");
    }
}

function gotBuffers(buffers) {
    audioRecorder.exportWAV(doneEncoding);
}

function doneEncoding(blob) {
    document.getElementById("outputAudio").pause();
    Recorder.setupDownload(blob);
}

function gotAudioMicrophoneStream(stream) {
    var source = audioGlobalContext.createMediaStreamSource(stream);
    source.connect(inputPoint);
}

function gotAudioOutputStream() {
    var source = audioGlobalContext.createMediaElementSource(document.getElementById("outputAudio"));
    source.connect(inputPoint);
    source.connect(audioGlobalContext.destination);
}

function initAudio() {
        if (!navigator.getUserMedia)
            navigator.getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
        if (!navigator.cancelAnimationFrame)
            navigator.cancelAnimationFrame = navigator.webkitCancelAnimationFrame || navigator.mozCancelAnimationFrame;
        if (!navigator.requestAnimationFrame)
            navigator.requestAnimationFrame = navigator.webkitRequestAnimationFrame || navigator.mozRequestAnimationFrame;

    inputPoint = audioGlobalContext.createGain();

    navigator.getUserMedia({
        "audio": {
            "mandatory": {
                "googEchoCancellation": "true",
                "googAutoGainControl": "false",
                "googNoiseSuppression": "true",
                "googHighpassFilter": "false"
            },
            "optional": []
        },
    }, gotAudioMicrophoneStream, function(e) {
        alert('Error recording microphone');
        console.log(e);
    });

    gotAudioOutputStream();

    var analyserNode = audioGlobalContext.createAnalyser();
    analyserNode.fftSize = 2048;
    inputPoint.connect(analyserNode);
    var zeroGain = audioGlobalContext.createGain();
    zeroGain.gain.value = 0.0;
    inputPoint.connect(zeroGain);
    zeroGain.connect(audioGlobalContext.destination);

    audioRecorder = new Recorder(inputPoint);
}

window.addEventListener('load', initAudio );

<!doctype html>
<html>
<head>
    <meta name="viewport" content="width=device-width,initial-scale=1">
    <title>Audio Recorder</title>
    <script src="assets/js/AudioRecorder/js/recorderjs/recorder.js"></script>
    <script src="assets/js/AudioRecorder/js/main.js"></script>
</head>
<body>
    <audio id="outputAudio" autoplay="true" src="test.mp3" type="audio/mpeg"></audio>
    <audio id="playBack"></audio>
    <div id="controls">
        <img id="record" src="assets/js/AudioRecorder/img/mic128.png" onclick="toggleRecording(this);">
    </div>
</body>
</html>

This records your voice together with the sound coming from the audio element. Simple. I hope everyone out there who had the same problem wrapping their head around the Audio API will find this helpful.

The code snippets shown above require Recorder.js to work.
