
Is there a way to decrease a value while counting objects using a Pi camera?

I am working on a university project that counts objects with a camera on a Pi. Each time an object is detected, I need to decrease a count of 100 by 1. I am using OpenCV, and I do not need the Pi camera module. When an object is picked up, I need to decrease the value of qtty_of_count by one and then send that value to a Firebase database. Is qtty_of_count - 1 in the wrong place? Please help.

import datetime
import math
import cv2
import numpy as np

from firebase.firebase import FirebaseApplication


# global variables

width = 0
height = 0
EntranceCounter = 0
ExitCounter = 0
min_area = 3000  # Adjust this value according to your usage
_threshold = 70  # Adjust this value according to your usage
OffsetRefLines = 150  # Adjust this value according to your usage


# Check if an object is entering the monitored zone
def check_entrance_line_crossing(y, coor_y_entrance, coor_y_exit):
    abs_distance = abs(y - coor_y_entrance)

    if ((abs_distance <= 2) and (y < coor_y_exit)):
        return 1
    else:
        return 0


# Check if an object is exiting the monitored zone
def check_exit_line_crossing(y, coor_y_entrance, coor_y_exit):
    abs_distance = abs(y - coor_y_exit)

    if ((abs_distance <= 2) and (y > coor_y_entrance)):
        return 1
    else:
        return 0


camera = cv2.VideoCapture(0)

# force 640x480 webcam resolution
camera.set(3, 640)
camera.set(4, 480)

ReferenceFrame = None

# Discard the first frames while the camera adjusts to the light
for i in range(0, 20):
    (grabbed, Frame) = camera.read()

while True:
    (grabbed, Frame) = camera.read()

    # if a frame cannot be grabbed, the program ends here
    if not grabbed:
        break

    height = np.size(Frame, 0)
    width = np.size(Frame, 1)

    # gray-scale and Gaussian blur filter applying
    GrayFrame = cv2.cvtColor(Frame, cv2.COLOR_BGR2GRAY)
    GrayFrame = cv2.GaussianBlur(GrayFrame, (21, 21), 0)

    if ReferenceFrame is None:
        ReferenceFrame = GrayFrame
        continue

    # Background subtraction and image manipulation
    FrameDelta = cv2.absdiff(ReferenceFrame, GrayFrame)
    FrameThresh = cv2.threshold(FrameDelta, _threshold, 255, cv2.THRESH_BINARY)[1]

    # Dilate image and find all the contours
    FrameThresh = cv2.dilate(FrameThresh, None, iterations=2)
    cnts, _ = cv2.findContours(FrameThresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    qtty_of_count = 100

    # plot reference lines (entrance and exit lines)
    coor_y_entrance = (height // 2) - OffsetRefLines
    coor_y_exit = (height // 2) + OffsetRefLines
    cv2.line(Frame, (0, coor_y_entrance), (width, coor_y_entrance), (255, 0, 0), 2)
    cv2.line(Frame, (0, coor_y_exit), (width, coor_y_exit), (0, 0, 255), 2)

    # check all contours found
    for c in cnts:
        # if a contour has small area, it'll be ignored
        if cv2.contourArea(c) < min_area:
            continue

        qtty_of_count = qtty_of_count - 1
        app = FirebaseApplication('https://appproject-d5d51.firebaseio.com/', None)
        update = app.put('/car', "spaces", qtty_of_count)
        print("Updated value in FB value: " + str(update))
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(Frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # find object's centroid
        coor_x_centroid = (x + x + w) // 2
        coor_y_centroid = (y + y + h) // 2
        ObjectCentroid = (coor_x_centroid, coor_y_centroid)
        cv2.circle(Frame, ObjectCentroid, 1, (0, 0, 0), 5)

        if (check_entrance_line_crossing(coor_y_centroid, coor_y_entrance, coor_y_exit)):
            EntranceCounter += 1

        if (check_exit_line_crossing(coor_y_centroid, coor_y_entrance, coor_y_exit)):
            ExitCounter += 1

print("Total countours found: " + str(qtty_of_count))

# Write entrance and exit counter values on frame and shows it
cv2.putText(Frame, "Entrances: {}".format(str(EntranceCounter)), (10, 50),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (250, 0, 1), 2)
cv2.putText(Frame, "Exits: {}".format(str(ExitCounter)), (10, 70),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
cv2.imshow("Original Frame", Frame)
cv2.waitKey(1)

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()

Each time an object is detected, I need qtty_of_count to decrease by 1. Thank you.

In addition to the problem @Kevin pointed out, your code performs this evaluation on every frame you draw. If your object stays in view for 100 frames, your count will reach zero.

To solve this, you should label each object in the image and only count new objects. This can be done in several ways (see Kalman filter tracking), but with no occlusions a simple solution could be to store the x, y position of the object and establish a maximum position deviation to keep the label attached to that object.
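Below is a minimal sketch of that idea, not a drop-in replacement for your script: it reuses the webcam capture, background subtraction, and the python-firebase client and '/car' / "spaces" endpoint from your code, and matches each detection to the nearest centroid seen in the previous frame. MAX_DEVIATION is an assumed value you would have to tune for your camera; only a detection with no previous centroid nearby is treated as a new object, decrements qtty_of_count, and is pushed to Firebase.

import math

import cv2
from firebase.firebase import FirebaseApplication

MIN_AREA = 3000         # minimum contour area, as in your code
THRESHOLD = 70          # binary threshold, as in your code
MAX_DEVIATION = 50      # assumed max centroid movement (px) to keep a label on an object

app = FirebaseApplication('https://appproject-d5d51.firebaseio.com/', None)
qtty_of_count = 100     # initialised once, outside the frame loop
tracked_centroids = []  # centroids of objects that have already been counted

camera = cv2.VideoCapture(0)
reference_gray = None

while True:
    grabbed, frame = camera.read()
    if not grabbed:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if reference_gray is None:
        reference_gray = gray
        continue

    # Same background subtraction as in your code
    delta = cv2.absdiff(reference_gray, gray)
    thresh = cv2.threshold(delta, THRESHOLD, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    cnts, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    current_centroids = []
    for c in cnts:
        if cv2.contourArea(c) < MIN_AREA:
            continue
        x, y, w, h = cv2.boundingRect(c)
        centroid = (x + w // 2, y + h // 2)
        current_centroids.append(centroid)

        # Distance to every centroid seen in the previous frame; if none is
        # within MAX_DEVIATION, this detection is a new object.
        distances = [math.hypot(centroid[0] - cx, centroid[1] - cy)
                     for (cx, cy) in tracked_centroids]
        if not distances or min(distances) > MAX_DEVIATION:
            qtty_of_count -= 1
            update = app.put('/car', "spaces", qtty_of_count)
            print("New object, spaces left: " + str(qtty_of_count))

        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # The labels "follow" the objects by carrying this frame's centroids
    # into the comparison for the next frame.
    tracked_centroids = current_centroids

    cv2.imshow("Tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

camera.release()
cv2.destroyAllWindows()

Note that this naive matcher will recount an object if its detection drops out for a frame; that is the kind of situation where Kalman-filter tracking pays off.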

