
How to combine background subtraction with dense optical flow tracking in OpenCV Python

I am using the BackgroundSubtractorGMG method and Gunnar Farneback's method for dense optical flow, and I wish to find a way of combining the two so as to improve the accuracy of detection for a moving object. Perhaps the optical flow method could be made to focus only on the larger segmented regions, to reduce erroneous results/noise. I tried simply feeding the output of the background subtraction video to the optical flow method, but this did not work. I read this Stack Overflow link, but I am at a loss on how to do this using the methods above. I apologise if this is all basic or if there is a misunderstanding, as I am quite new to OpenCV and image processing.

Background-subtracted frames will not be helpful with a dense optical flow method, but they can be put to use with sparse optical flow.

Dense Optical Flow:

Gunnar Farneback's optical flow method tracks all the pixels (coordinates) in the frame using the current and previous frames. Hence it is called dense optical flow.

So all you have to pass is the pair of frames to track between. If you pass background-subtracted frames (black and white), the algorithm will not work well: nearly all pixels share the same intensity (either 0 or 255), which provides no good features for the algorithm to track.

Since the algorithm tracks every pixel in the frame, the tracking process is also very slow.

Sparse Optical Flow:

Lucas-Kanade's optical flow method uses the current and previous frames along with a set of good features to track, so you have to pass specific pixels for the algorithm to follow. Since it tracks only the specified pixels, it is known as sparse optical flow.

To find these features, you can use different methods, some of them being goodFeaturesToTrack, Harris corners, etc. You can use a background subtraction method to find these features in the following way.

Step 1: Background Subtraction with MOG or GMG

Step 2: Find Contours using the background subtracted frames.

Step 3: Pass the contour points you just found, the center pixels of the contours, or all the points inside the contours (whichever is favorable) to the Lucas-Kanade method (sparse optical flow), along with the grayscale frames, not the background-subtracted ones.

The background-subtracted frames should be used only for finding the features.

Hope this helps!

