
Merging input Streams with nodejs/ffmpeg

I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as binary blobs to my server.

From there, I'm planning to either use the fluent-ffmpeg library or just spawn ffmpeg myself and pipe that raw data to ffmpeg, which in turn does some magic and pushes it out as an HLS stream to an Amazon AWS service (for instance), which then gets displayed in a web browser for all participants in the video chat.
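Roughly, this is what I have in mind for the spawn-it-myself variant. It's just a sketch under a few assumptions: ffmpeg is on the PATH, the client sends WebM chunks, and the handler name, codec choices and HLS settings are placeholders I haven't verified:

    // Rough sketch: spawn ffmpeg, pipe the incoming blob data to its stdin,
    // and let it write an HLS playlist plus segments to disk.
    // Assumes the browser sends WebM chunks; all settings are placeholders.
    const { spawn } = require('child_process');

    const ffmpeg = spawn('ffmpeg', [
      '-i', 'pipe:0',          // read the piped WebM data from stdin
      '-c:v', 'libx264',       // HLS players generally expect H.264/AAC
      '-c:a', 'aac',
      '-f', 'hls',
      '-hls_time', '2',        // segment duration in seconds
      '-hls_list_size', '5',
      'stream.m3u8',
    ]);

    ffmpeg.stderr.on('data', (line) => console.error(line.toString()));

    // Hypothetical handler, called for every binary blob received from a client
    // (e.g. over a WebSocket connection).
    function onVideoChunk(chunk) {
      ffmpeg.stdin.write(chunk);
    }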

So far, I think all of this should be fairly easy to implement, but what keeps my head spinning is the question of how I can create a "combined" or "merged" frame and stream, so that the HLS output from my server to the distributing cloud service is just one combined data stream.

If there are 3 people in that video chat, my server receives 3 data streams from those clients and has to combine these data streams (from the individual webcam sources) into one output stream.

How could that be accomplished? Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads-up here; maybe I'm thinking in a completely wrong direction.
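To make it a bit more concrete, this is the kind of thing I was imagining, as a rough sketch using ffmpeg's hstack filter; the three file inputs are just stand-ins for the live client streams, and I don't know whether this is even the right tool for it:

    // Rough sketch: merge three inputs side by side into one frame with the
    // hstack filter and encode the combined picture once.
    // input1.webm etc. are placeholders for the three per-client streams.
    const { spawn } = require('child_process');

    spawn('ffmpeg', [
      '-i', 'input1.webm',
      '-i', 'input2.webm',
      '-i', 'input3.webm',
      '-filter_complex', '[0:v][1:v][2:v]hstack=inputs=3[v]',  // one combined frame
      '-map', '[v]',
      '-c:v', 'libx264',
      'combined.mp4',
    ]);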

Another question that arises is whether I can really just "dump" any data I receive as binary blobs from getUserMedia or MultiStreamRecorder into ffmpeg, or whether I have to specify somewhere, and somehow, the exact codecs being used, etc.
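For reference, this is roughly what I have in mind on the client side; it's only a sketch, the WebSocket URL is a placeholder, and I picked the mimeType mainly so that the container and codecs being sent are at least explicit:

    // Client-side sketch: capture the webcam and record it with an explicit
    // mimeType, so the server side knows the container/codecs of the blobs.
    // The WebSocket URL is a placeholder.
    const socket = new WebSocket('wss://example.com/ingest');

    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
      .then((stream) => {
        const recorder = new MediaRecorder(stream, {
          mimeType: 'video/webm; codecs=vp8,opus',  // container + codecs made explicit
        });
        recorder.ondataavailable = (event) => {
          if (event.data.size > 0) socket.send(event.data);  // one blob per timeslice
        };
        recorder.start(1000);  // emit a blob roughly every second
      });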

The huge drawback of using HLS streaming in a video conferencing application is the latency. You can have latencies of up to 10 seconds, which isn't ideal for live chat.

What you are looking for is an SFU (Selective Forwarding Unit), which can forward the data live from browser -> server -> other browsers. Latency is very low there, and there is no need to store anything.

There are several technologies you can use, such as janus-gateway, Kurento Media Server, or Jitsi, for example. Personally, I use mediasoup, which gives a little more flexibility.

HERE is a simple video-conferencing project using mediasoup that can get you started.
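To give an idea of what that looks like on the server, here is a tiny bootstrap sketch, assuming mediasoup v3 from npm; the codec list and IPs are minimal example values, not taken from the project above:

    // Tiny mediasoup (v3) bootstrap sketch: one worker, one router, and a
    // WebRTC transport per connected browser. Values are example settings only.
    const mediasoup = require('mediasoup');

    async function startSfu() {
      const worker = await mediasoup.createWorker();
      const router = await worker.createRouter({
        mediaCodecs: [
          { kind: 'audio', mimeType: 'audio/opus', clockRate: 48000, channels: 2 },
          { kind: 'video', mimeType: 'video/VP8', clockRate: 90000 },
        ],
      });

      // Producers/consumers are created on top of transports like this one to
      // route each participant's media to the others.
      const transport = await router.createWebRtcTransport({
        listenIps: [{ ip: '0.0.0.0', announcedIp: null }],  // set announcedIp to the public IP in production
        enableUdp: true,
        enableTcp: true,
      });

      return { worker, router, transport };
    }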

If in the end you still want to use HLS for streaming, as it can also be handy for viewing video from the past, you can still use mediasoup to send the video to the server and then to ffmpeg, which converts it to HLS directly.

Here is a recording project with an implementation of recording with ffmpeg. In this code it saves as WebM, but with some parameter tweaking you can save it as HLS. (Send me a message if you want an implementation of that.)
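As a hedged sketch of that parameter tweaking: assuming the RTP the server forwards to ffmpeg is described by an SDP file (a common pattern for mediasoup recording; the path and all values below are example settings, not taken from the project), the output side is swapped from WebM to the HLS muxer roughly like this:

    // Rough sketch: read RTP described by an SDP file and write rolling HLS
    // segments plus a playlist instead of a WebM file. Example values only.
    const { spawn } = require('child_process');

    spawn('ffmpeg', [
      '-protocol_whitelist', 'file,udp,rtp',
      '-i', 'input.sdp',                // SDP describing the RTP ports the server sends to
      '-c:v', 'libx264',
      '-c:a', 'aac',
      '-f', 'hls',
      '-hls_time', '4',                 // segment duration in seconds
      '-hls_list_size', '6',            // keep the last 6 segments in the playlist
      '-hls_flags', 'delete_segments',  // drop old segments for a live window
      'live.m3u8',
    ]);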
