
How to set background subtraction to detect stationary objects

Hi, I'm working on a thesis and I'm looking for a way to convert human objects in a room into white pixels. I'm using background subtraction to detect the objects. The problem is that background subtraction keeps updating the background model on every frame, so an object that stays in place for a long time eventually gets absorbed into the background. How do I make my algorithm compare each frame only against the first frame, captured while the room is empty, so that any difference from that first frame is treated as an object? I'm thinking of using the first frame as a fixed background image and comparing every subsequent frame against it, so that anything that differs from this background is treated as an object. How can I do that? This is my code:

import numpy as np
import cv2
import datetime
from playsound import playsound

# Resize dimensions
dim = (480, 360)

# Capture frames from the video
cap = cv2.VideoCapture('sample1.mp4')

# Background subtraction (MOG2: history=500, varThreshold=16, detectShadows=True)
fgbg = cv2.createBackgroundSubtractorMOG2(500, 16, True)

# Structuring elements for morphological filters
kernelOp = np.ones((3, 3), np.uint8)
kernelCl = np.ones((11, 11), np.uint8)

#  Read an image of the video source
ret, frame = cap.read()
frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)

#  Read an image of the video source
ret, frame = cap.read()

while cap.isOpened():
    #  Read an image of the video source
    ret, frame = cap.read()
    frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
    # Apply background subtraction
    fgmask2 = fgbg.apply(frame)
    # eliminate shadows (gray color)
    ret, imBin2 = cv2.threshold(fgmask2, 254, 255, cv2.THRESH_BINARY)
    mask2 = cv2.morphologyEx(imBin2, cv2.MORPH_OPEN, kernelOp)
    mask2 = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernelCl)

    cv2.imshow('Original Video', frame)  # display original video
    cv2.imshow('Masked Video', mask2)  # display B & W video
    # press ESC to exit
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
# End of while(cap.isOpened())
# release video and close all windows
cap.release()
cv2.destroyAllWindows()


You already grab the initial frame of the video, but you assign it to the same variable inside the while loop, so it gets overwritten. It is also better to work in grayscale.

Assign the initial frame to a different variable:

ret, initial_frame = cap.read()
initial_frame = cv2.resize(initial_frame, dim, interpolation=cv2.INTER_AREA)
initial_frame_gray = cv2.cvtColor(initial_frame, cv2.COLOR_BGR2GRAY)

Within the while loop, read the video frame by frame, convert each frame to grayscale, and subtract it from initial_frame_gray:

while cap.isOpened():
    #  Read an image of the video source
    ret, frame = cap.read()
    frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

Since we are working in grayscale, you can use cv2.subtract() to find the differences between the current frame and the initial frame. It also ensures the pixel intensities stay within the range [0, 255]:

    difference = cv2.subtract(initial_frame_gray, frame_gray)
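
A caveat worth noting: cv2.subtract() saturates negative values to zero, so with this argument order only regions that are darker in the current frame than in the empty-room background remain in the result. If you want to keep changes in both directions, cv2.absdiff() is a common alternative (a suggested variation, not part of the answer above):

    # Alternative: absolute difference keeps both brighter and darker changes
    difference = cv2.absdiff(initial_frame_gray, frame_gray)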

From here onwards, you can perform the subsequent operations (eliminating shadows, morphology, etc.).
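
Putting the pieces together, a minimal sketch of the whole pipeline could look like the following. It reuses the resize dimensions and morphology kernels from the question; the threshold value of 30 on the grayscale difference and the ret checks are assumptions you would adjust for your own video:

import numpy as np
import cv2

dim = (480, 360)
cap = cv2.VideoCapture('sample1.mp4')

# Structuring elements for morphological filters (same as in the question)
kernelOp = np.ones((3, 3), np.uint8)
kernelCl = np.ones((11, 11), np.uint8)

# Read the first (empty-room) frame once and keep it as the fixed background
ret, initial_frame = cap.read()
if not ret:
    raise RuntimeError('Could not read the first frame')
initial_frame = cv2.resize(initial_frame, dim, interpolation=cv2.INTER_AREA)
initial_frame_gray = cv2.cvtColor(initial_frame, cv2.COLOR_BGR2GRAY)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Compare against the fixed empty-room background instead of an updating model
    difference = cv2.subtract(initial_frame_gray, frame_gray)

    # Binarize the difference; 30 is an assumed threshold, tune it for your footage
    _, mask = cv2.threshold(difference, 30, 255, cv2.THRESH_BINARY)

    # Same morphological clean-up as in the original code
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernelOp)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernelCl)

    cv2.imshow('Original Video', frame)  # original video
    cv2.imshow('Masked Video', mask)     # binary foreground mask
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # press ESC to exit
        break

cap.release()
cv2.destroyAllWindows()

Because the background never updates, a person who stands still in the room keeps differing from the empty-room frame and therefore stays white in the mask, which is the behaviour asked for in the question.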
