
HTML5 Video stream from websocket via MediaSource and MediaSourceBuffer

I'm trying to play video from a websocket:

<video id="output" width="320" height="240" autoplay></video>

<script>
    function sockets(buffer) {
        const socket = new WebSocket('wss://localhost:5002/ws')

        socket.onmessage = async function (event) {
            // event.data is a blob
            buffer.appendBuffer(new Uint8Array(event.data))
        }
    }

    let ms = new MediaSource()
    let output = document.getElementById('output')
    output.src = URL.createObjectURL(ms)
    ms.onsourceopen = () => {
        let buffer = ms.addSourceBuffer('video/webm; codecs="vorbis,vp8"')
        sockets(buffer)
    }
</script>

I receive MediaRecorder chunks here as Blobs and try to play them sequentially using the MediaSource API. There are no errors, and nothing happens. Is there something fundamentally wrong here?

I tried:

  • Using different codecs
  • Playing with media source modes, e.g. sequence/segments
  • Different approaches that don't use the MediaSource API at all, but I faced other challenges there, and MediaSource seems to be the best approach in my case.

UPDATE: this is how the video is produced:

let options = { mimeType: 'video/webm;codecs=vp8' }
let stream = await navigator.mediaDevices.getUserMedia({ video: true })
mediaRecorder = new MediaRecorder(stream, options)
mediaRecorder.ondataavailable = event => {
    if (event.data && event.data.size > 0) {
        send(event.data)
    }
}
mediaRecorder.start(1000) // emit a chunk roughly every second

The fundamental problem here is that you cannot stream the data coming out of MediaRecorder and expect the other end to play it; it is not a complete video. It will only work if the receiving end is able to receive the initialization bytes -- which I doubt will work in a real-world scenario.
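To illustrate the point about the initialization bytes: the first chunk a MediaRecorder emits contains the WebM headers that every later chunk depends on, so any relay has to hand that first chunk to viewers who join mid-stream before forwarding live chunks. This is a minimal sketch of that relay logic only (plain state, no sockets); the `ChunkRelay` name and shape are illustrative assumptions, not part of the original code.

```javascript
// Sketch: cache the first MediaRecorder chunk as the initialization
// segment and replay it to every late joiner before live chunks.
class ChunkRelay {
  constructor() {
    this.initSegment = null;   // first chunk ever seen (WebM headers)
    this.viewers = new Set();  // each viewer is a send(chunk) callback
  }

  addViewer(send) {
    // A viewer joining mid-stream must still get the init bytes first,
    // otherwise none of the following chunks are decodable.
    if (this.initSegment !== null) send(this.initSegment);
    this.viewers.add(send);
  }

  onChunk(chunk) {
    if (this.initSegment === null) this.initSegment = chunk;
    for (const send of this.viewers) send(chunk);
  }
}
```

In a real setup the broadcaster's socket would feed `onChunk` and each viewer's socket would be registered via `addViewer`.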

What you can do is create an interval that starts/stops the MediaRecorder, for example every 1 second, to produce 1-second video chunks that you can transmit over the wire (the best transport I know of and have tested is websockets).
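The start/stop idea above could be sketched like this, assuming a browser context with an already-open `socket`; the function name and `chunkMs` parameter are illustrative. The key difference from a plain `start(timeslice)` is that stopping and recreating the recorder makes each blob a complete, independently playable WebM file, not a mid-stream fragment:

```javascript
// Sketch: restart the recorder on a timer so every emitted blob is a
// self-contained one-second WebM video.
function recordInChunks(stream, socket, chunkMs = 1000) {
  let recorder;

  const startOne = () => {
    recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8' });
    recorder.ondataavailable = (event) => {
      if (event.data && event.data.size > 0) socket.send(event.data);
    };
    recorder.start(); // no timeslice: stop() flushes one whole file
  };

  startOne();
  return setInterval(() => {
    recorder.stop(); // triggers ondataavailable with a complete video
    startOne();      // begin the next chunk with fresh headers
  }, chunkMs);
}
```

On the receiving side, each blob can then be played on its own (e.g. via `URL.createObjectURL`) rather than appended to a single SourceBuffer.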

I strongly suggest not using MediaRecorder if you are doing real-time video streaming, which was not indicated in your post; but if you are, it would be better to create a canvas to copy the stream into and do some requestAnimationFrame work that captures your video stream into something you can transmit.
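A minimal sketch of that canvas approach, assuming a browser context with an open `socket`; the function name and the JPEG quality value are illustrative choices, and each frame is sent as an independent image rather than as part of a container format:

```javascript
// Sketch: paint the camera stream into a hidden <video>, copy each
// animation frame to a canvas, and ship frames as JPEG blobs.
function streamFrames(mediaStream, socket) {
  const video = document.createElement('video');
  video.muted = true;
  video.srcObject = mediaStream;
  video.play();

  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');

  function capture() {
    if (video.videoWidth > 0) {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0);
      canvas.toBlob((blob) => {
        if (blob && socket.readyState === WebSocket.OPEN) socket.send(blob);
      }, 'image/jpeg', 0.7);
    }
    requestAnimationFrame(capture); // capture again on the next frame
  }
  requestAnimationFrame(capture);
}
```

The receiver can simply draw each incoming frame onto an `<img>` or canvas, which avoids the MediaSource/codec machinery entirely at the cost of higher bandwidth.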

Take a look at this demo for reference: https://github.com/cyberquarks/quarkus-websockets-streamer/blob/master/src/main/resources/META-INF/resources/index.html

In my experience, MediaRecorder's response is delayed, which generally adds quite a delay to the video, not to mention the delay that the socket itself also introduces.

Generally, other developers would suggest that you just take the WebRTC route; however, based on my experience, WebRTC is not generally faster either.
