
Overlay transparent video to camera feed OpenCV

I've been searching for how to overlay a transparent video onto a camera feed in Python (or any language, really), and the closest thing I've found uses opencv.

I followed the tutorials here and ran some experiments. One was adding a new VideoCapture inside the while loop to play video from a file while capturing video from the camera, but the video never showed up.

Other approaches I came across mix the video and the camera feed but don't really do an overlay.

I'm stuck, and any tutorials or links on how to do this programmatically would be highly appreciated.

UPDATE: This loads the camera feed and the overlay video simultaneously, frame by frame, keeping the two synchronized in time.
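The time synchronization in the code below boils down to mapping elapsed wall-clock time to a position in the video file. As a sanity check, the same mapping can be expressed as a frame index; this is a minimal sketch with made-up fps and timing values, not part of the original code:

```python
# Map elapsed wall-clock milliseconds to a zero-based video frame index.
# The fps and millisecond values below are made-up examples.
def frame_index_at(time_passed_ms, fps):
    """Return the frame that should be displayed at a given elapsed time."""
    return int(time_passed_ms * fps / 1000)

# At 30 fps, 500 ms into playback we should be showing frame 15
print(frame_index_at(500, 30))  # 15
# Just before the next frame boundary (frame 16 starts at ~533.3 ms),
# we are still on frame 15
print(frame_index_at(533, 30))  # 15
```

Seeking by milliseconds with `CAP_PROP_POS_MSEC` (as below) delegates this mapping to the video backend, which keeps the overlay in sync even if the camera loop runs slower than the video's frame rate.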

import cv2
import time
import numpy as np

current_milli_time = lambda: int(round(time.time() * 1000))

# Camera feed
cap_cam = cv2.VideoCapture(0)
if not cap_cam.isOpened():
    print('Cannot open camera')
    exit()
ret, frame_cam = cap_cam.read()
if not ret:
    print('Cannot open camera stream')
    cap_cam.release()
    exit()

# Video feed
filename = 'myvideo.mp4'
cap_vid = cv2.VideoCapture(filename)
if not cap_vid.isOpened():
    print('Cannot open video: ' + filename)
    cap_cam.release()
    exit()
ret, frame_vid = cap_vid.read()
if not ret:
    print('Cannot open video stream: ' + filename)
    cap_cam.release()
    cap_vid.release()
    exit()

# Specify maximum video time in milliseconds
max_time = 1000 * cap_vid.get(cv2.CAP_PROP_FRAME_COUNT) / cap_vid.get(cv2.CAP_PROP_FPS)

# Camera frames will be resized in the loop to match the video's size
height = int(cap_vid.get(cv2.CAP_PROP_FRAME_HEIGHT))
width = int(cap_vid.get(cv2.CAP_PROP_FRAME_WIDTH))

# Starting from now, synchronize the videos
start = current_milli_time()

while True:
    # Capture the next frame from camera
    ret, frame_cam = cap_cam.read()
    if not ret:
        print('Cannot receive frame from camera')
        break
    frame_cam = cv2.resize(frame_cam, (width, height), interpolation = cv2.INTER_AREA)

    # Capture the frame at the current time point
    time_passed = current_milli_time() - start
    if time_passed > max_time:
        print('Video time exceeded. Quitting...')
        break
    ret = cap_vid.set(cv2.CAP_PROP_POS_MSEC, time_passed)
    if not ret:
        print('An error occurred while setting video time')
        break
    ret, frame_vid = cap_vid.read()
    if not ret:
        print('Cannot read from video stream')
        break

    # Blend the two images and show the result
    tr = 0.3 # transparency between 0 and 1; shows only the camera at 0
    # np.float was removed in recent NumPy versions; use the builtin float instead
    frame = ((1 - tr) * frame_cam.astype(float) + tr * frame_vid.astype(float)).astype(np.uint8)
    cv2.imshow('Transparent result', frame)
    if cv2.waitKey(1) == 27: # ESC is pressed
        break

cap_cam.release()
cap_vid.release()
cv2.destroyAllWindows()
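The blend above applies one global transparency value to the whole frame. If the overlay video carries real per-pixel transparency (note that `cv2.VideoCapture` typically discards alpha channels, so in practice the alpha would come from elsewhere, e.g. PNG frame sequences read with `cv2.IMREAD_UNCHANGED`), the same math generalizes to an "over" composite with a per-pixel alpha mask. A minimal NumPy-only sketch with tiny synthetic frames:

```python
import numpy as np

def composite(bg, fg, alpha):
    """Per-pixel 'over' compositing.

    bg, fg: uint8 images of shape (H, W, 3).
    alpha:  float array in [0, 1], broadcastable to (H, W, 3);
            1 shows the foreground, 0 shows the background.
    """
    out = (1.0 - alpha) * bg.astype(np.float32) + alpha * fg.astype(np.float32)
    return out.astype(np.uint8)

# Synthetic 2x2 example: black background, white foreground,
# overlay fully opaque in the top row, fully transparent in the bottom row.
bg = np.zeros((2, 2, 3), dtype=np.uint8)
fg = np.full((2, 2, 3), 255, dtype=np.uint8)
alpha = np.array([1.0, 0.0], dtype=np.float32).reshape(2, 1, 1)

result = composite(bg, fg, alpha)
print(result[0, 0])  # [255 255 255] -- top row shows the overlay
print(result[1, 0])  # [0 0 0]       -- bottom row shows the background
```

Replacing the scalar `tr` in the loop with such a mask would make the overlay genuinely transparent where its alpha is zero, instead of ghosting the whole frame.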
