
Is there a way to decrease a value while counting objects using a Pi camera?

I'm currently doing a college project counting objects with a camera on a Raspberry Pi. The count starts at 100 and needs to decrease by 1 each time an object is detected. I'm using OpenCV, but I do not require the camera feed. When an object is picked up, the value of qtty_of_count should decrease by one, and this value is then sent to a Firebase database. Is the qtty_of_count - 1 in the incorrect place? Please help.

import datetime
import math
import cv2
import numpy as np

import firebase
##from firebase import firebase



# global variables
from firebase.firebase import FirebaseApplication

width = 0
height = 0
EntranceCounter = 0
ExitCounter = 0
min_area = 3000  # Adjust this value according to your usage
_threshold = 70  # Adjust this value according to your usage
OffsetRefLines = 150  # Adjust this value according to your usage


# Check if an object is entering the monitored zone
def check_entrance_line_crossing(y, coor_y_entrance, coor_y_exit):
    abs_distance = abs(y - coor_y_entrance)

    if ((abs_distance <= 2) and (y < coor_y_exit)):
        return 1
    else:
        return 0


# Check if an object is exiting the monitored zone
def check_exit_line_crossing(y, coor_y_entrance, coor_y_exit):
    abs_distance = abs(y - coor_y_exit)

    if ((abs_distance <= 2) and (y > coor_y_entrance)):
        return 1
    else:
        return 0


camera = cv2.VideoCapture(0)

# force 640x480 webcam resolution
camera.set(3, 640)
camera.set(4, 480)

ReferenceFrame = None

# Discard some frames while the camera adjusts to the light
for i in range(0, 20):
    (grabbed, Frame) = camera.read()

while True:
    (grabbed, Frame) = camera.read()
    height = np.size(Frame, 0)
    width = np.size(Frame, 1)

    # if a frame cannot be grabbed, the program ends here
    if not grabbed:
        break

    # apply gray-scale conversion and a Gaussian blur filter
    GrayFrame = cv2.cvtColor(Frame, cv2.COLOR_BGR2GRAY)
    GrayFrame = cv2.GaussianBlur(GrayFrame, (21, 21), 0)

    if ReferenceFrame is None:
        ReferenceFrame = GrayFrame
        continue

    # Background subtraction and image manipulation
    FrameDelta = cv2.absdiff(ReferenceFrame, GrayFrame)
    FrameThresh = cv2.threshold(FrameDelta, _threshold, 255, cv2.THRESH_BINARY)[1]

    # Dilate image and find all the contours
    FrameThresh = cv2.dilate(FrameThresh, None, iterations=2)
    cnts, _ = cv2.findContours(FrameThresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    qtty_of_count = 100

    # plot reference lines (entrance and exit lines)
    coor_y_entrance = (height // 2) - OffsetRefLines
    coor_y_exit = (height // 2) + OffsetRefLines
    cv2.line(Frame, (0, coor_y_entrance), (width, coor_y_entrance), (255, 0, 0), 2)
    cv2.line(Frame, (0, coor_y_exit), (width, coor_y_exit), (0, 0, 255), 2)

    # check all found contours
    for c in cnts:
        # if a contour has small area, it'll be ignored
        if cv2.contourArea(c) < min_area:
            continue

        qtty_of_count = qtty_of_count - 1
        app = FirebaseApplication('https://appproject-d5d51.firebaseio.com/', None)
        update = app.put('/car', "spaces", qtty_of_count)
        print("Updated value in FB: " + str(update))
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(Frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # find object's centroid
        coor_x_centroid = (x + x + w) // 2
        coor_y_centroid = (y + y + h) // 2
        ObjectCentroid = (coor_x_centroid, coor_y_centroid)
        cv2.circle(Frame, ObjectCentroid, 1, (0, 0, 0), 5)

        if (check_entrance_line_crossing(coor_y_centroid, coor_y_entrance, coor_y_exit)):
            EntranceCounter += 1

        if (check_exit_line_crossing(coor_y_centroid, coor_y_entrance, coor_y_exit)):
            ExitCounter += 1

print("Total contours found: " + str(qtty_of_count))

# Write entrance and exit counter values on the frame and show it
cv2.putText(Frame, "Entrances: {}".format(str(EntranceCounter)), (10, 50),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (250, 0, 1), 2)
cv2.putText(Frame, "Exits: {}".format(str(ExitCounter)), (10, 70),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
cv2.imshow("Original Frame", Frame)
cv2.waitKey(1)

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()

I need the qtty_of_count to decrease by one every time an object is detected. Thank you.

Besides the problem indicated by @Kevin, your code runs the evaluation on every frame it grabs. If an object stays in view for 100 frames, your count will go to zero.
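
To make the problem concrete, here is a minimal sketch (not your exact fix, and relying on the rest of the question's code for the preprocessing) of moving qtty_of_count and the Firebase client out of the while loop so they are created once instead of on every frame. Even with this change, a stationary object would still be decremented once per grabbed frame, which is what the tagging suggestion below addresses.

# Sketch only: initialise the counter and the Firebase client ONCE, before the
# capture loop, instead of inside it. All names come from the question's code.
qtty_of_count = 100
app = FirebaseApplication('https://appproject-d5d51.firebaseio.com/', None)

while True:
    (grabbed, Frame) = camera.read()
    if not grabbed:
        break

    # ... gray-scale, blur, thresholding and cv2.findContours as in the question ...

    for c in cnts:
        if cv2.contourArea(c) < min_area:
            continue
        # this still fires on every frame the object is visible in, so a
        # further check for "new" objects is needed (see below)
        qtty_of_count -= 1
        update = app.put('/car', "spaces", qtty_of_count)
        print("Updated value in FB: " + str(update))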

To overcome this, you should tag every object in the image and count only the new objects. This can be done in several ways (see Kalman filter tracking), but if there is no occlusion, one simple solution is to store each object's x, y position and define a maximum position deviation within which the tag stays attached to that object, as sketched below.
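
For example, a rough sketch of that position-deviation tagging (no Kalman filter). MAX_DEVIATION, tracked_objects and is_new_object are illustrative names that do not appear in the original code, and the threshold value is only a starting point to tune.

import math

MAX_DEVIATION = 50        # pixels; tune to your camera and how fast objects move
tracked_objects = []      # centroids of objects that have already been counted

def is_new_object(cx, cy):
    # Return True only if (cx, cy) is farther than MAX_DEVIATION from every
    # centroid seen so far; otherwise refresh that object's stored position.
    for i, (tx, ty) in enumerate(tracked_objects):
        if math.hypot(cx - tx, cy - ty) <= MAX_DEVIATION:
            tracked_objects[i] = (cx, cy)   # same object as before, keep its tag
            return False
    tracked_objects.append((cx, cy))
    return True

# inside the contour loop of the question's code:
#     if is_new_object(coor_x_centroid, coor_y_centroid):
#         qtty_of_count -= 1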
