
How to add video stream with WebRTC connection from another peer?

I am learning WebRTC and trying to create a simple chat with video call capabilities. I am using Django Channels to handle the WebSockets, and I connected the peers through them. Unfortunately, I am unable to get the other peer's media and display it on the screen.

The connection seems to be successful, the messages travel through the sockets correctly, and no errors appear in the console. What am I missing?

The logic is:

  • User1 enters the room

  • User2 enters the room

  • User1 sends a message to User2 through sockets

  • User1 presses "call" to call User2, gets local media and starts WebRTC connection User1按“call”呼叫User2,获取本地媒体并启动WebRTC连接

  • User2 presses "respond" to accept call from User2, accepts offer and responds with his local media用户 2 按“响应”接受用户 2 的呼叫,接受报价并通过其本地媒体进行响应

Edit 1: It seems to work if the steps are done in the following order:

  • User1 enters the room

  • User1 presses "call" to call User2, gets local media and starts WebRTC connection User1按“call”呼叫User2,获取本地媒体并启动WebRTC连接

  • User2 enters the room

  • User2 presses "respond" to accept call from User2, accepts offer and responds with his local media用户 2 按“响应”接受用户 2 的呼叫,接受报价并通过其本地媒体进行响应

  • User1 presses "respond"用户 1 按“响应”

I do not quite understand why this works. Why is "pc.ontrack" triggered only in this specific order, and why am I able to start a WebRTC connection before the second peer enters the room?

room.html:

<!-- chat/templates/chat/room.html -->
<!DOCTYPE html>
{% load static %}
{% extends 'main/header.html' %}

{% block content %}

<body>

<div class="container">
    <a class="waves-effect waves-light btn prefix" id='call'>call</a>
    <a class="waves-effect waves-light btn prefix" id='respond'>respond</a>
  <div class="copy">Send your URL to a friend to start a video call</div>
  <video id="localVideo" autoplay muted></video>
  <video id="remoteVideo" autoplay></video>
    <textarea id="chat-log"  class="materialize-textarea" ></textarea><br/>

<div class="input-field col s12 ">
    <input id="chat-message-input" type="text" />
    <a class="waves-effect waves-light btn prefix" id="chat-message-submit"><i class="material-icons right">send</i></a>

</div>

</div>

</body>
<script src="{% static 'main/js/client.js' %}"></script>
{% endblock  %}

client.js:

// Generate random room name if needed

var roomName = "{{ room_name|escapejs }}";
var drone = new WebSocket(
        'ws://' + window.location.host +
        '/ws/chat/' + roomName + '/');


const configuration = {
  iceServers: [{
    urls: 'stun:stun.l.google.com:19302'
  }]
};

pc = new RTCPeerConnection(configuration);


function onSuccess() {};
function onError(error) {
  console.error(error);
};



document.getElementById('call').onclick = function() {startWebRTC(true);};
document.getElementById('respond').onclick = function() {startWebRTC(false);};



// Send signaling data via Scaledrone
function sendMessage(message) {
  var user = "{{user.username}}"
  drone.send(JSON.stringify({
            'message': message,
            'user': user

        }));
  console.log("Message sent")
};


function startWebRTC(isOfferer) {


  // 'onicecandidate' notifies us whenever an ICE agent needs to deliver a
  // message to the other peer through the signaling server
    pc.onicecandidate = event => {
        if (event.candidate) {
            sendMessage({'candidate': event.candidate});
    }
  };

  // If user is offerer let the 'negotiationneeded' event create the offer
  if (isOfferer) {

    pc.onnegotiationneeded = () => {
      pc.createOffer().then(localDescCreated).catch(onError);
      console.log("Offer created")
    }
  }

   // This part does not seem to be working
  // When a remote stream arrives display it in the #remoteVideo element
  pc.ontrack = event => {
    const stream = event.streams[0];
    if (!remoteVideo.srcObject || remoteVideo.srcObject.id !== stream.id) {
      remoteVideo.srcObject = stream;
      console.log("Remote stream added")
    }
  };

  navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  }).then(stream => {
    // Display your local video in #localVideo element
    localVideo.srcObject = stream;
    console.log("Local stream added")
    // Add your stream to be sent to the connecting peer
    stream.getTracks().forEach(track => pc.addTrack(track, stream));
    console.log("Added local stream to track")
  }, onError);



}




function localDescCreated(desc) {
    pc.setLocalDescription(
    desc,
    () => sendMessage({'sdp': pc.localDescription}),
    onError
  );
};

    document.querySelector('#chat-message-input').focus();
    document.querySelector('#chat-message-input').onkeyup = function(e) {
        if (e.keyCode === 13) {  // enter, return
            document.querySelector('#chat-message-submit').click();
        }
    };

    document.querySelector('#chat-message-submit').onclick = function(e) {
        var messageInputDom = document.querySelector('#chat-message-input');
        var message = messageInputDom.value;
        sendMessage(message);

        messageInputDom.value = '';
    };

// Listen to signaling data
drone.onmessage = function(e) {
        var data = JSON.parse(e.data);
        console.info(e)
        var message = data['message'];
        var user = data['user'];
    // Message was sent by us
    if (user === '{{user.username}}') {
        document.querySelector('#chat-log').value += (user +": " + message + '\n');
        elem = document.getElementById("chat-log")
        M.textareaAutoResize(elem);
        console.log("Echo")
      return;
    }
    if (message[0]){
    sdp = message[0]['sdp']
    candidate = message[0]['candidate']
    };

    console.log("Message recieved")
    if (sdp) {


pc.setRemoteDescription(new RTCSessionDescription(sdp),  () => {
        // When receiving an offer lets answer it
    if (pc.remoteDescription.type === 'offer') {
            pc.createAnswer().then(localDescCreated).catch(onError);
            console.log("Offer answerd")
        }
      }, onError);

      // This is called after receiving an offer or answer from another peer

    } else if (candidate) {
      // Add the new ICE candidate to our connections remote description
        pc.addIceCandidate(
        new RTCIceCandidate(candidate), onSuccess, onError);
        console.log("Ice candidate added")
    } else {
        document.querySelector('#chat-log').value += (user +": " + message + '\n');
        elem = document.getElementById("chat-log")
        M.textareaAutoResize(elem);
    }
  };

Console output after sending the "hello" message and then pressing "call" from user1: (screenshot)

Console output after receiving the "hello" message and then pressing "respond" from user2: (screenshot)

On the side that is answering, you are answering immediately, without waiting for getUserMedia and the resulting addTrack calls. These operations are asynchronous and must have completed before you call createAnswer. Consequently, the answer does not contain a media stream, and ontrack will not be called on the calling end.

Using promise syntax makes this easier, along the lines of:

pc.setRemoteDescription(offer)
.then(() => {
  // Get local media only after the remote offer has been applied
  return navigator.mediaDevices.getUserMedia({audio: true, video: true});
})
.then(stream => {
  // Add the local tracks BEFORE creating the answer so they end up in the SDP
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
  return pc.createAnswer();
})
.then(answer => pc.setLocalDescription(answer))
// ... then signal the local description (the answer) back to the offerer
.then(() => sendMessage({'sdp': pc.localDescription}))
.catch(onError);
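For context, here is a minimal sketch of how the responder's flow could be reorganized with async/await, assuming the pc, localVideo, sendMessage and onError from the question's client.js are in scope; the helper name handleOffer is hypothetical:

// Hypothetical helper; assumes pc, localVideo, sendMessage and onError
// from the question's client.js are available in scope.
async function handleOffer(offer) {
  try {
    // Apply the remote offer first.
    await pc.setRemoteDescription(offer);
    // Capture local media and add its tracks BEFORE creating the answer,
    // so the generated SDP actually contains the audio/video sections.
    const stream = await navigator.mediaDevices.getUserMedia({audio: true, video: true});
    localVideo.srcObject = stream;
    stream.getTracks().forEach(track => pc.addTrack(track, stream));
    // Only now create and apply the answer, then signal it to the offerer.
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    sendMessage({'sdp': pc.localDescription});
  } catch (err) {
    onError(err);
  }
}

Such a helper could be invoked from the drone.onmessage handler in place of the current inline setRemoteDescription call when the received SDP is an offer.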
