
How to make background subtraction treat stationary objects as foreground

Hi, I'm working on a thesis and I'm looking for a way to convert human objects in a room into white pixels. I use background subtraction to detect the objects. The problem is that background subtraction continuously updates the background model with each frame, so objects that stay in place for a long time are eventually absorbed into the background. How do I make my algorithm compare each frame only against the first frame, captured when the room is empty, so that any difference from that first frame is treated as an object? My idea is to use the first frame as the background image and compare every subsequent frame against it. How can I do this? Here is my code:

import numpy as np
import cv2
import datetime
from playsound import playsound

# Output dimensions for resizing
dim = (480, 360)

# Capture frames from the video source
cap = cv2.VideoCapture('sample1.mp4')

# background subtraction
fgbg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

# Structuring elements for morphographic filters
kernelOp = np.ones((3, 3), np.uint8)
kernelCl = np.ones((11, 11), np.uint8)

#  Read an image of the video source
ret, frame = cap.read()
frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)

#  Read an image of the video source
ret, frame = cap.read()

while cap.isOpened():
    #  Read an image of the video source
    ret, frame = cap.read()
    if not ret:  # stop when the video ends
        break
    frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
    # Apply background subtraction
    fgmask2 = fgbg.apply(frame)
    # eliminate shadows (gray color)
    ret, imBin2 = cv2.threshold(fgmask2, 254, 255, cv2.THRESH_BINARY)
    mask2 = cv2.morphologyEx(imBin2, cv2.MORPH_OPEN, kernelOp)
    mask2 = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernelCl)

    cv2.imshow('Original Video', frame)  # display original video
    cv2.imshow('Masked Video', mask2)  # display B & W video
    # press ESC to exit
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
# End of while(cap.isOpened())
# release video and close all windows
cap.release()
cv2.destroyAllWindows()


You have read the initial frame of the video, but you assigned it to the same variable inside the while loop, so it gets overwritten. Also, it's better to work in grayscale.

Assign the initial frame a different variable:

ret, initial_frame = cap.read()
initial_frame = cv2.resize(initial_frame, dim, interpolation=cv2.INTER_AREA)
initial_frame_gray = cv2.cvtColor(initial_frame, cv2.COLOR_BGR2GRAY)

Within the while loop, read the video frame by frame, convert each frame to grayscale, and subtract it from initial_frame_gray:

while cap.isOpened():
    #  Read an image of the video source
    ret, frame = cap.read()
    frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

Since we are working in grayscale, you can use cv2.subtract() to find the differences between the current frame and the initial frame. It also ensures that pixel intensities stay within the range [0, 255].

    difference = cv2.subtract(initial_frame_gray, frame_gray)
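One caveat worth knowing: cv2.subtract saturates at 0 on uint8 images, so pixels where the object is brighter than the background come out as 0 and are lost; cv2.absdiff keeps differences in both directions. The sketch below demonstrates the two behaviors using NumPy equivalents of the OpenCV arithmetic:

```python
import numpy as np

# Two 1x2 grayscale "images": one background pixel value, two frame values,
# one darker (30) and one brighter (180) than the background (100).
background = np.array([[100, 100]], dtype=np.uint8)
frame      = np.array([[ 30, 180]], dtype=np.uint8)

# Equivalent of cv2.subtract(background, frame) for uint8 inputs:
# negative results are clipped (saturated) to 0.
sub = np.clip(background.astype(np.int16) - frame, 0, 255).astype(np.uint8)

# Equivalent of cv2.absdiff(background, frame): magnitude of the change.
absdiff = np.abs(background.astype(np.int16) - frame).astype(np.uint8)

print(sub)      # the brighter-than-background pixel is lost (becomes 0)
print(absdiff)  # both changes are detected
```

If people in your scene can be either darker or brighter than the empty room, cv2.absdiff(initial_frame_gray, frame_gray) is usually the safer choice.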

From here onwards, you can perform subsequent operations (eliminating shadows, morphology, etc.)
