
WebRTC, getDisplayMedia() does not capture sound from the remote stream

I have a web application of my own, based on the peerjs library (it is a video-conferencing app).

I'm trying to make a recording with MediaRecorder, but I've run into a very unpleasant problem.

The code for capturing my desktop stream is the following:

let desktopStream;
const chooseScreen = document.querySelector('.chooseScreenBtn');
chooseScreen.onclick = async () => {
    desktopStream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
};
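Note that browsers may grant the video portion of `getDisplayMedia()` while silently omitting audio (depending on the platform and on what the user picks in the share dialog), so it is worth verifying that the returned stream actually contains an audio track. A minimal sketch; `hasSystemAudio` is a hypothetical helper, not part of any API:

```javascript
// Hypothetical helper: true when the captured display stream actually
// contains at least one audio track. getDisplayMedia() may resolve with
// video only, even when { audio: true } was requested.
function hasSystemAudio(stream) {
  return stream.getAudioTracks().length > 0;
}

// Browser-only usage (inside the onclick handler above):
// if (!hasSystemAudio(desktopStream)) {
//   console.warn('Display capture returned no audio track on this platform.');
// }
```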

I then successfully attach the received desktopStream to a videoElement in the DOM:

const videoElement = document.querySelector('.videoElement');
videoElement.srcObject = desktopStream;
videoElement.muted = false;
videoElement.onloadedmetadata = () => { videoElement.play(); };

For example, I obtain desktopStream on a page with an active conference, where everyone hears and sees each other.

To check the video and audio in desktopStream, I play a video in a media player on the desktop. I can hear any audio from my desktop, but audio from the participants cannot be heard. Naturally, when I feed the desktopStream into MediaRecorder, I get a video file with no sound from anyone except my desktop. Any ideas on how to solve this?

Audio capture with getDisplayMedia is only fully supported in Chrome for Windows. Other platforms have a number of limitations:

  • there is no support for audio capture at all under Firefox or Safari;
  • on Chrome/Chromium for Linux and macOS, only the audio of a Chrome/Chromium tab can be captured, not the audio of a non-browser application window.
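Because the participants' audio already arrives as separate remote MediaStreams in peerjs, one workaround is to collect those remote streams as calls come in and later merge their audio tracks into the recording, instead of relying on desktop capture to pick them up. A sketch under assumptions: `peer` is a connected peerjs instance, `localStream` is your local media, and `remoteStreams`/`addRemoteStream` are hypothetical names:

```javascript
// Registry of remote participant streams, keyed by peer id, so their
// audio tracks can later be merged into the recording.
const remoteStreams = new Map();

// Hypothetical helper: register (or replace) a participant's stream.
function addRemoteStream(registry, peerId, stream) {
  registry.set(peerId, stream);
  return registry;
}

// Browser/peerjs wiring (not runnable outside a page):
// peer.on('call', (call) => {
//   call.answer(localStream);
//   call.on('stream', (remoteStream) => {
//     addRemoteStream(remoteStreams, call.peer, remoteStream);
//   });
// });
```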

Chrome's MediaRecorder API can only output one audio track. createMediaStreamSource can take streams from desktop audio and from the microphone; by connecting both into a single node created by createMediaStreamDestination, you can pipe that one combined stream into the MediaRecorder API.

const mergeAudioStreams = (desktopStream, voiceStream) => {
    const context = new AudioContext();

    // Create a couple of sources
    const source1 = context.createMediaStreamSource(desktopStream);
    const source2 = context.createMediaStreamSource(voiceStream);
    const destination = context.createMediaStreamDestination();

    const desktopGain = context.createGain();
    const voiceGain = context.createGain();

    desktopGain.gain.value = 0.7;
    voiceGain.gain.value = 0.7;

    source1.connect(desktopGain).connect(destination);
    // Connect source2
    source2.connect(voiceGain).connect(destination);

    return destination.stream.getAudioTracks();
}; 
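The merged audio tracks can then be combined with the display stream's video track before handing everything to MediaRecorder. A minimal sketch; `buildRecordingTracks` is a hypothetical helper, and the commented usage assumes the `desktopStream`/`voiceStream` variables from the surrounding answer:

```javascript
// Hypothetical helper: assemble the final track list for MediaRecorder —
// the display video track(s) followed by the merged audio tracks.
function buildRecordingTracks(videoTracks, mergedAudioTracks) {
  return [...videoTracks, ...mergedAudioTracks];
}

// Browser-only usage:
// const tracks = buildRecordingTracks(
//   desktopStream.getVideoTracks(),
//   mergeAudioStreams(desktopStream, voiceStream)
// );
// const recorder = new MediaRecorder(new MediaStream(tracks), {
//   mimeType: 'video/webm; codecs=vp8,opus',
// });
```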

It is also possible to combine two or more audio inputs with a video input:

window.onload = () => {
    const warningEl = document.getElementById('warning');
    const videoElement = document.getElementById('videoElement');
    const captureBtn = document.getElementById('captureBtn');
    const startBtn = document.getElementById('startBtn');
    const stopBtn = document.getElementById('stopBtn');
    const download = document.getElementById('download');
    const audioToggle = document.getElementById('audioToggle');
    const micAudioToggle = document.getElementById('micAudioToggle');
    
    if('getDisplayMedia' in navigator.mediaDevices) warningEl.style.display = 'none';

    let blobs;
    let blob;
    let rec;
    let stream;
    let voiceStream;
    let desktopStream;
    
    const mergeAudioStreams = (desktopStream, voiceStream) => {
        const context = new AudioContext();
        const destination = context.createMediaStreamDestination();
        let hasDesktop = false;
        let hasVoice = false;

        if (desktopStream && desktopStream.getAudioTracks().length > 0) {
            // If you don't want to share audio from the desktop, it should still work with just the voice.
            const source1 = context.createMediaStreamSource(desktopStream);
            const desktopGain = context.createGain();
            desktopGain.gain.value = 0.7;
            source1.connect(desktopGain).connect(destination);
            hasDesktop = true;
        }

        if (voiceStream && voiceStream.getAudioTracks().length > 0) {
            const source2 = context.createMediaStreamSource(voiceStream);
            const voiceGain = context.createGain();
            voiceGain.gain.value = 0.7;
            source2.connect(voiceGain).connect(destination);
            hasVoice = true;
        }

        return (hasDesktop || hasVoice) ? destination.stream.getAudioTracks() : [];
    };

    captureBtn.onclick = async () => {
        download.style.display = 'none';
        const audio = audioToggle.checked || false;
        const mic = micAudioToggle.checked || false;

        desktopStream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: audio });

        if (mic === true) {
            voiceStream = await navigator.mediaDevices.getUserMedia({ video: false, audio: mic });
        }

        const tracks = [
            ...desktopStream.getVideoTracks(),
            ...mergeAudioStreams(desktopStream, voiceStream)
        ];

        console.log('Tracks to add to stream', tracks);
        stream = new MediaStream(tracks);
        console.log('Stream', stream);
        videoElement.srcObject = stream;
        videoElement.muted = true;

        blobs = [];

        rec = new MediaRecorder(stream, { mimeType: 'video/webm; codecs=vp8,opus' });
        rec.ondataavailable = (e) => blobs.push(e.data);
        rec.onstop = async () => {
            blob = new Blob(blobs, { type: 'video/webm' });
            let url = window.URL.createObjectURL(blob);
            download.href = url;
            download.download = 'test.webm';
            download.style.display = 'block';
        };
        startBtn.disabled = false;
        captureBtn.disabled = true;
        audioToggle.disabled = true;
        micAudioToggle.disabled = true;
    };

    startBtn.onclick = () => {
        startBtn.disabled = true;
        stopBtn.disabled = false;
        rec.start();
    };

    stopBtn.onclick = () => {
        captureBtn.disabled = false;
        audioToggle.disabled = false;
        micAudioToggle.disabled = false;
        startBtn.disabled = true;
        stopBtn.disabled = true;
        
        rec.stop();
        
        stream.getTracks().forEach(s => s.stop());
        videoElement.srcObject = null;
        stream = null;
    };
};
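One more portability note: the mimeType hard-coded above is not supported everywhere. `MediaRecorder.isTypeSupported()` lets you pick a working container/codec combination at runtime. A sketch; `pickMimeType` is a hypothetical helper with the support check injected as a predicate so the selection logic is testable outside a browser:

```javascript
// Hypothetical helper: return the first mimeType the predicate accepts,
// or an empty string (MediaRecorder then falls back to its own default).
// In a page, pass (t) => MediaRecorder.isTypeSupported(t) as the predicate.
function pickMimeType(candidates, isSupported) {
  return candidates.find((t) => isSupported(t)) || '';
}

// Browser-only usage:
// const mimeType = pickMimeType(
//   ['video/webm; codecs=vp8,opus', 'video/webm', 'video/mp4'],
//   (t) => MediaRecorder.isTypeSupported(t)
// );
// const rec = new MediaRecorder(stream, mimeType ? { mimeType } : undefined);
```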
