
Combine multiple videos into one

I have three videos:

  • a lecture that was filmed with a video camera
  • a video capture of the desktop of the computer used in the lecture
  • and a video of the whiteboard

I want to create a final video in which each of these three components occupies a certain region of the screen.

Is there open-source software that would allow me to do this (mencoder, ffmpeg, VirtualDub, …)? Which do you recommend?

Or is there a C/C++ API that would enable me to create something like that programmatically?

Edit
There will be multiple recorded lectures in the future, which means I need a generic/automated solution.

I'm currently checking out whether I could write an application with GStreamer to do this job. Any comments on that?

Solved!
I succeeded in doing this with GStreamer's videomixer element. I use the gst-launch syntax to create a pipeline and then load it with gst_parse_launch. It's a really productive way to implement complex pipelines.

Here's a pipeline that takes two incoming video streams and a logo image, blends them into one stream, and then duplicates it so that it is simultaneously displayed and saved to disk.

  desktop. ! queue
           ! ffmpegcolorspace
           ! videoscale
           ! video/x-raw-yuv,width=640,height=480
           ! videobox right=-320
           ! ffmpegcolorspace
           ! vmix.sink_0
  webcam. ! queue
          ! ffmpegcolorspace
          ! videoscale
          ! video/x-raw-yuv,width=320,height=240
          ! vmix.sink_1
  logo. ! queue
        ! jpegdec
        ! ffmpegcolorspace
        ! videoscale
        ! video/x-raw-yuv,width=320,height=240
        ! vmix.sink_2
  vmix. ! t.
  t. ! queue
     ! ffmpegcolorspace
     ! ffenc_mpeg2video
     ! filesink location="recording.mpg"
  t. ! queue
     ! ffmpegcolorspace
     ! dshowvideosink
  videotestsrc name="desktop"
  videotestsrc name="webcam"
  multifilesrc name="logo" location="logo.jpg"
  videomixer name=vmix
             sink_0::xpos=0 sink_0::ypos=0 sink_0::zorder=0
             sink_1::xpos=640 sink_1::ypos=0 sink_1::zorder=1
             sink_2::xpos=640 sink_2::ypos=240 sink_2::zorder=2
  tee name="t"
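Since more lectures will be recorded, one way to automate this is to generate the pipeline description for each lecture and then hand it to gst_parse_launch. Here is a minimal Python sketch of that idea (the element layout, test sources, and file names are illustrative assumptions, not the original program):

```python
# Sketch: build a gst-launch-style description for each recorded lecture,
# then load it with gst_parse_launch (C) or Gst.parse_launch (Python bindings).
# Element layout and file names below are illustrative assumptions.

def build_mix_pipeline(recording_path):
    """Return a description mixing a desktop and a webcam stream to disk."""
    parts = [
        'videotestsrc name="desktop"',  # stand-in for the real desktop source
        'videotestsrc name="webcam"',   # stand-in for the real camera source
        'videomixer name=vmix sink_0::xpos=0 sink_1::xpos=640',
        'desktop. ! queue ! ffmpegcolorspace ! vmix.sink_0',
        'webcam. ! queue ! ffmpegcolorspace ! vmix.sink_1',
        'vmix. ! ffmpegcolorspace ! ffenc_mpeg2video'
        ' ! filesink location="%s"' % recording_path,
    ]
    return " ".join(parts)

desc = build_mix_pipeline("lecture01.mpg")
# In C: GstElement *pipeline = gst_parse_launch(desc, &error);
```

Generating the description string this way keeps a single code path for every lecture; only the source locations and the output file name change.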

It can be done with ffmpeg; I've done it myself. That said, it is technically complex. Then again, that is what any other software you might use is going to do at its core.

The process works like this:

  1. Demux audio from source 1 to raw WAV
  2. Demux audio from source 2
  3. Demux audio from source 3
  4. Demux video from source 1 to MPEG-1
  5. Demux video from source 2
  6. Demux video from source 3
  7. Concatenate audio 1 + audio 2 + audio 3
  8. Concatenate video 1 + video 2 + video 3
  9. Mux audio 123 and video 123 into the target
  10. Encode to the target format
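The steps above can be sketched as a list of ffmpeg invocations. This is a hedged reconstruction, not the author's actual script: the file names are placeholders, and the codec/format flags may need adjusting for your ffmpeg build.

```python
import subprocess

# Hedged sketch of the demux/concat/mux steps as ffmpeg calls.
# File names are placeholders; flags may differ between ffmpeg versions.

def demux_cmds(src, idx):
    """Commands to split one source into concatenatable audio and video."""
    return [
        # audio -> raw PCM WAV
        ["ffmpeg", "-i", src, "-vn", "-acodec", "pcm_s16le", "a%d.wav" % idx],
        # video -> MPEG-1 elementary stream
        ["ffmpeg", "-i", src, "-an", "-vcodec", "mpeg1video",
         "-f", "mpegvideo", "v%d.m1v" % idx],
    ]

def mux_cmd(target):
    """Mux the concatenated streams and encode to the target format."""
    return ["ffmpeg", "-i", "v.m1v", "-i", "a.wav", target]

cmds = [c for i, s in enumerate(("s1.avi", "s2.avi", "s3.avi"), 1)
        for c in demux_cmds(s, i)]
cmds.append(mux_cmd("out.mpg"))
# The concatenation step itself is plain byte-level appending,
# e.g. `cat v1.m1v v2.m1v v3.m1v > v.m1v`.
# To actually run: for c in cmds: subprocess.run(c, check=True)
```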

I think what surprises folks is that you can literally concatenate two raw PCM WAV audio files, and the result is valid. What really, really surprises people is that you can do the same with MPEG-1/H.261 video.
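That PCM claim is easy to try with Python's standard wave module. A literal byte-level cat leaves the first file's RIFF length header in place (which is what the answer relies on players accepting); the sketch below appends the frames via the wave module instead, which yields a header-correct file. The file names and tone values here are made up:

```python
import struct
import wave

# Demonstrate concatenating two PCM WAV streams of the same format.
# We synthesize two short mono 16-bit files, then append their frames.

def write_tone(path, nframes, value):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit PCM
        w.setframerate(8000)   # 8 kHz
        w.writeframes(struct.pack("<h", value) * nframes)

write_tone("part1.wav", 8000, 1000)    # 1.0 s
write_tone("part2.wav", 4000, -1000)   # 0.5 s

with wave.open("part1.wav", "rb") as a, wave.open("part2.wav", "rb") as b:
    params = a.getparams()
    frames = a.readframes(a.getnframes()) + b.readframes(b.getnframes())

with wave.open("joined.wav", "wb") as out:
    out.setparams(params)      # frame count is patched on close
    out.writeframes(frames)
```

This only works when both inputs share the same channel count, sample width, and sample rate, which is why the process above demuxes everything to one common raw format first.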

Like I've said, I've done it. There are some specifics left out, but it most definitely works. My program was done as a bash script with ffmpeg. While I've never used the ffmpeg C API, I don't see why you could not use it to do the same thing.

It's a highly educational project to do, if you are so inclined. If your goal is just to slap some videos together for a one-off project, then maybe using a GUI tool is a better idea.

If you just want to combine the footage into one video and crop it, I would use VirtualDub.

You can combine multiple video files/streams into one picture with VLC:

There is a command-line interface, so you can script/automate it.

http://wiki.videolan.org/Mosaic

Avisynth can do it rather easily; look under the Mosaic section for an example.

I've used ffmpeg quite a bit and I have never stumbled upon this functionality, but that doesn't mean it isn't there. You can always do it yourself in C or C++ with libavformat and libavcodec (the ffmpeg libraries) if you're looking for a project, but you will have to get your hands very dirty compositing the video yourself. If you are just looking to get the video done and not tinker with code, definitely use a pre-made tool like Avisynth or VirtualDub.
