
How to get the foreground mask when you already have a background image

I know that with cv2.createBackgroundSubtractorMOG2() we can subtract the foreground mask using a background estimation method based on the last 500 frames (the default history). But what if I already have a background picture and just want to subtract the foreground from each frame using that picture? What I'm trying is like this:

import numpy as np
import cv2

video = "xx.avi"
cap = cv2.VideoCapture(video)
bg = cv2.imread("bg.png")

while True:
    ret, frame = cap.read()
    if ret:
        original_frame = frame.copy()
        # get foremask?
        fgmask = frame - bg

        # filter kernel for denoising:
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

        opening = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)

        closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)

        # Dilate to merge adjacent blobs
        dilation = cv2.dilate(closing, kernel, iterations = 2)

        # show fg:dilation
        cv2.imshow('fg mask', dilation)
        cv2.imshow('original', original_frame)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            cap.release()
            cv2.destroyAllWindows()
            break
    else:
        break

However I got colourful frames when doing frame = frame - bg . How could I get the correct foreground mask?

You are getting colourful images because you are subtracting two colour images, so the value you get at each pixel is the difference between the two images on each channel (B, G and R). In order to perform background subtraction, as dhanushka comments, the simplest option is to use MOG2 and feed it your background image for some (500) frames so it will learn it as the background. MOG2 is designed to learn the variability of each pixel's colour with a Gaussian model, so if you always feed it the same image it will not learn that variability; still, it should work for what you intend to do. The nice thing about this approach is that MOG2 takes care of many more things, like updating the model over time, dealing with shadows, and so on.

Another option is to implement your own background subtraction method, as you tried to do. If you want to test it, you need to convert your fgmask colour image into something you can easily threshold, so that you can decide for each pixel whether it is background or foreground. A simple option is to convert it to grayscale and then apply a simple threshold; the lower the threshold, the more "sensitive" your subtraction method is (play with the thresh value), ie:

...
# get foremask?
    fgmask = frame - bg

    gray_image = cv2.cvtColor(fgmask, cv2.COLOR_BGR2GRAY)
    thresh = 20
    im_bw = cv2.threshold(gray_image, thresh, 255, cv2.THRESH_BINARY)[1]

    # filter kernel for denoising:
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

    opening = cv2.morphologyEx(im_bw, cv2.MORPH_OPEN, kernel)
...
