
Pipe numpy array to virtual video device

I want to pipe images to a virtual video device (e.g. /dev/video0). The images are created inside a loop at the desired frame rate.

In this minimal example I only have two arrays, which alternate in the cv2 window. Now I am looking for a good solution to pipe the arrays to the virtual device.

I saw that ffmpeg-python can run asynchronously with ffmpeg.run_async(), but so far I could not make anything work with this package.

Example code without the ffmpeg part:

#!/usr/bin/env python3

import cv2
import numpy as np
import time

window_name = 'virtual-camera'
cv2.namedWindow(window_name, cv2.WINDOW_GUI_EXPANDED)

img1 = np.random.uniform(0, 255, (1080, 1440, 3)).astype('uint8')
img2 = np.random.uniform(0, 255, (1080, 1440, 3)).astype('uint8')

for i in range(125):
    time.sleep(0.04)
    if i % 2:
        img = img1
    else:
        img = img2
    cv2.imshow(window_name, img)
    cv2.waitKey(1)
cv2.destroyAllWindows()

First of all, you would have to set up a virtual camera, for example with v4l2loopback. See here for how to install it (ignore the usage examples).
Then, you can just write to the virtual camera like to a normal file (that is, let OpenCV write the images to, say, /dev/video0; how to do that you will have to find out yourself, because I am not an expert with OpenCV).
In the end, you can use ffmpeg-python with /dev/video0 as the input file, do something with the video, and that's it!
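As a rough sketch of that setup step: loading the module creates the dummy device. The option values below (device number, label) are assumptions for this example and may differ per distribution; exclusive_caps=1 is often needed so consumers such as browsers recognize the device.

```shell
# Load the v4l2loopback kernel module, creating one dummy device at /dev/video0.
# Assumes the module is already installed for the running kernel.
sudo modprobe v4l2loopback devices=1 video_nr=0 card_label="virtual-camera" exclusive_caps=1

# Verify the device exists:
ls /dev/video*
```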

As Programmer wrote in his answer, it is possible to create a dummy device with the package v4l2loopback. Publishing images, videos or the desktop to the dummy device was already easy with ffmpeg, but I want to pipe the images directly from the Python script, where I capture them, to the dummy device. I still think it is possible with ffmpeg-python, but I found this great answer from Alp which sheds light on the darkness: the package pyfakewebcam is a perfect solution for the problem.
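For reference, the ffmpeg route I had in mind would look roughly like the sketch below: feed raw rgb24 frames to an ffmpeg subprocess on stdin and let ffmpeg's v4l2 muxer write them to the loopback device. This is a sketch under assumptions (an ffmpeg binary on PATH, a loopback device at /dev/video0), not something I have running; the helper name ffmpeg_cmd is mine.

```python
#!/usr/bin/env python3
import subprocess

WIDTH, HEIGHT, FPS = 1440, 1080, 25
DEVICE = '/dev/video0'

def ffmpeg_cmd(width, height, fps, device):
    # Build the ffmpeg argument list: uncompressed rgb24 frames on stdin,
    # written out through the v4l2 output muxer to the dummy device.
    return [
        'ffmpeg',
        '-f', 'rawvideo',            # raw, uncompressed input
        '-pix_fmt', 'rgb24',         # matches a (HEIGHT, WIDTH, 3) uint8 array
        '-s', f'{width}x{height}',   # frame geometry
        '-r', str(fps),              # input frame rate
        '-i', 'pipe:0',              # read frames from stdin
        '-f', 'v4l2',                # v4l2 output muxer
        device,
    ]

# Usage (needs ffmpeg and the loopback device, so not run here):
# proc = subprocess.Popen(ffmpeg_cmd(WIDTH, HEIGHT, FPS, DEVICE),
#                         stdin=subprocess.PIPE)
# proc.stdin.write(img.tobytes())  # one rgb24 frame per write
```

pyfakewebcam does essentially this kind of frame delivery for you, which is why I went with it below.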

For the sake of completeness, here is my extended minimal working example:

#!/usr/bin/env python3

import time

import cv2
import numpy as np
import pyfakewebcam

WIDTH = 1440
HEIGHT = 1080
DEVICE = '/dev/video0'

fake_cam = pyfakewebcam.FakeWebcam(DEVICE, WIDTH, HEIGHT)

window_name = 'virtual-camera'
cv2.namedWindow(window_name, cv2.WINDOW_GUI_EXPANDED)

img1 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')
img2 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')

for i in range(125):
    time.sleep(0.04)
    if i % 2:
        img = img1
    else:
        img = img2
    fake_cam.schedule_frame(img)
    cv2.imshow(window_name, img)
    cv2.waitKey(1)
cv2.destroyAllWindows()
