
Is there a way to decrease a value while counting objects using a Pi camera?

I'm currently doing a college project counting objects using a camera on a Pi. Each time an object is detected, I need to decrease a count (starting at 100) by 1. I'm using OpenCV, but I do not require the camera feed to be displayed. When an object is picked up, the value of qtty_of_count should be decreased by one, and this value is then sent to a Firebase database. Is the qtty_of_count - 1 in the incorrect place? Please help.

import datetime
import math
import cv2
import numpy as np

import firebase
##from firebase import firebase



# global variables
from firebase.firebase import FirebaseApplication

width = 0
height = 0
EntranceCounter = 0
ExitCounter = 0
min_area = 3000  # Adjust this value according to your usage
_threshold = 70  # Adjust this value according to your usage
OffsetRefLines = 150  # Adjust this value according to your usage


# Check if an object is entering the monitored zone
def check_entrance_line_crossing(y, coor_y_entrance, coor_y_exit):
    abs_distance = abs(y - coor_y_entrance)

    if ((abs_distance <= 2) and (y < coor_y_exit)):
        return 1
    else:
        return 0


# Check if an object is exiting the monitored zone
def check_exit_line_crossing(y, coor_y_entrance, coor_y_exit):
    abs_distance = abs(y - coor_y_exit)

    if ((abs_distance <= 2) and (y > coor_y_entrance)):
        return 1
    else:
        return 0


camera = cv2.VideoCapture(0)

# force 640x480 webcam resolution
camera.set(3, 640)
camera.set(4, 480)

ReferenceFrame = None

# Discard some frames while the camera adjusts to the light
for i in range(0, 20):
    (grabbed, Frame) = camera.read()

while True:
    (grabbed, Frame) = camera.read()
    height = np.size(Frame, 0)
    width = np.size(Frame, 1)

    # if a frame cannot be grabbed, the program ends here.
    if not grabbed:
        break

    # gray-scale and Gaussian blur filter applying
    GrayFrame = cv2.cvtColor(Frame, cv2.COLOR_BGR2GRAY)
    GrayFrame = cv2.GaussianBlur(GrayFrame, (21, 21), 0)

    if ReferenceFrame is None:
        ReferenceFrame = GrayFrame
        continue

    # Background subtraction and image manipulation
    FrameDelta = cv2.absdiff(ReferenceFrame, GrayFrame)
    FrameThresh = cv2.threshold(FrameDelta, _threshold, 255, cv2.THRESH_BINARY)[1]

    # Dilate image and find all the contours
    FrameThresh = cv2.dilate(FrameThresh, None, iterations=2)
    cnts, _ = cv2.findContours(FrameThresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    qtty_of_count = 100

    # plot reference lines (entrance and exit lines)
    coor_y_entrance = (height // 2) - OffsetRefLines
    coor_y_exit = (height // 2) + OffsetRefLines
    cv2.line(Frame, (0, coor_y_entrance), (width, coor_y_entrance), (255, 0, 0), 2)
    cv2.line(Frame, (0, coor_y_exit), (width, coor_y_exit), (0, 0, 255), 2)

    # check all found contours
    for c in cnts:
        # if a contour has small area, it'll be ignored
        if cv2.contourArea(c) < min_area:
            continue

        qtty_of_count = qtty_of_count - 1
        app = FirebaseApplication('https://appproject-d5d51.firebaseio.com/', None)
        update = app.put('/car', "spaces", qtty_of_count)
        print("Updated value in FB value: " + str(update))
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(Frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # find object's centroid
        coor_x_centroid = (x + x + w) // 2
        coor_y_centroid = (y + y + h) // 2
        ObjectCentroid = (coor_x_centroid, coor_y_centroid)
        cv2.circle(Frame, ObjectCentroid, 1, (0, 0, 0), 5)

        if (check_entrance_line_crossing(coor_y_centroid, coor_y_entrance, coor_y_exit)):
            EntranceCounter += 1

        if (check_exit_line_crossing(coor_y_centroid, coor_y_entrance, coor_y_exit)):
            ExitCounter += 1

print("Total contours found: " + str(qtty_of_count))

# Write entrance and exit counter values on frame and shows it
cv2.putText(Frame, "Entrances: {}".format(str(EntranceCounter)), (10, 50),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (250, 0, 1), 2)
cv2.putText(Frame, "Exits: {}".format(str(ExitCounter)), (10, 70),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
cv2.imshow("Original Frame", Frame)
cv2.waitKey(1)

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()

I need the qtty_of_count to decrease by one every time an object is detected. Thank you.

Besides the problem indicated by @Kevin, your code re-evaluates the image on every frame you grab. If your object stays in view for 100 frames, your count will go down to zero.
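As a minimal sketch of the first fix (the frame data here is just an illustrative list of per-frame detection counts, not the OpenCV loop from your code): initialise qtty_of_count once, before the loop, so it is not reset to 100 on every iteration.

```python
def count_spaces(frames, start=100):
    """Decrement the free-space count once per detected object."""
    qtty_of_count = start        # initialise ONCE, before the loop
    for detections in frames:    # each item: objects detected in that frame
        qtty_of_count -= detections
    return qtty_of_count

# Three frames with 1, 0 and 2 detections leave 97 spaces.
print(count_spaces([1, 0, 2]))  # 97
```

In your code this means moving the `qtty_of_count = 100` line above `while True:`; the Firebase update can then report a value that persists across frames.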

To overcome this, you should tag every object in the image and count only the new objects. This could be done in several ways (see Kalman filter tracking), but with no occlusion, one simple solution might be to store the x, y position of each object and establish a maximum position deviation within which the tag stays attached to that object.
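A rough sketch of that simple solution, assuming a hypothetical max_dev threshold of 50 pixels and made-up centroid coordinates: an object only decrements the count if no centroid from the previous frame lies within max_dev of it.

```python
import math

def is_new_object(centroid, tracked, max_dev=50):
    """Return True if no tracked centroid lies within max_dev pixels."""
    for (tx, ty) in tracked:
        if math.hypot(centroid[0] - tx, centroid[1] - ty) <= max_dev:
            return False
    return True

tracked = []   # centroids seen in the previous frame
count = 100
# Frame 1 and 2 show the same object drifting slightly; frame 3 a new one.
for frame_centroids in [[(100, 200)], [(102, 198)], [(300, 50)]]:
    for c in frame_centroids:
        if is_new_object(c, tracked):
            count -= 1           # only genuinely new objects decrement
    tracked = frame_centroids    # remember this frame's centroids

print(count)  # 98: the (102, 198) centroid matched the tracked object
```

In your loop, the centroid is already computed as ObjectCentroid, so the check would go right before the `qtty_of_count = qtty_of_count - 1` line. For objects that overlap or cross paths, a proper tracker (e.g. Kalman filter based) is needed instead.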


 