
Using a webcam with a trained Roboflow model

I am trying to run my trained Roboflow model with my webcam in Visual Studio Code. The webcam does load in a pop-up window, but it is just a small rectangle in the corner and you can't see anything else. If I change 'image', image to 'image', 1 or anything else in the cv2.imshow line, the webcam lights up for a second and then it returns this error:

cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

Here is the code I got from the Roboflow GitHub:

# load config
import json
with open('roboflow_config.json') as f:
    config = json.load(f)

    ROBOFLOW_API_KEY = "********"
    ROBOFLOW_MODEL = "penguins-ojf2k"
    ROBOFLOW_SIZE = "416"

    FRAMERATE = config["FRAMERATE"]
    BUFFER = config["BUFFER"]

import asyncio
import cv2
import base64
import numpy as np
import httpx
import time

# Construct the Roboflow Infer URL
# (if running locally replace https://detect.roboflow.com/ with eg http://127.0.0.1:9001/)
upload_url = "".join([
    "https://detect.roboflow.com/",
    ROBOFLOW_MODEL,
    "?api_key=",
    ROBOFLOW_API_KEY,
    "&format=image", # Change to json if you want the prediction boxes, not the visualization
    "&stroke=5"
])

# Get webcam interface via opencv-python
video = cv2.VideoCapture(0,cv2.CAP_DSHOW)

# Infer via the Roboflow Infer API and return the result
# Takes an httpx.AsyncClient as a parameter
async def infer(requests):
    # Get the current image from the webcam
    ret, img = video.read()
   
    # Resize (while maintaining the aspect ratio) to improve speed and save bandwidth
    height, width, channels = img.shape
    scale = min(height, width)
    img = cv2.resize(img, (2000, 1500))

    # Encode image to base64 string
    retval, buffer = cv2.imencode('.jpg', img)
    img_str = base64.b64encode(buffer)

    # Get prediction from Roboflow Infer API
    resp = await requests.post(upload_url, data=img_str, headers={
        "Content-Type": "application/x-www-form-urlencoded"
    })


    # Parse result image
    image = np.asarray(bytearray(resp.content), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)

    return image


# Main loop; infers at FRAMERATE frames per second until you press "q"
async def main():
    # Initialize
    last_frame = time.time()

    # Initialize a buffer of images
    futures = []

    async with httpx.AsyncClient() as requests:
        while True:
            
            # On "q" keypress, exit
            if(cv2.waitKey(1) == ord('q')):
                break

            # Throttle to FRAMERATE fps and print actual frames per second achieved
            elapsed = time.time() - last_frame
            await asyncio.sleep(max(0, 1/FRAMERATE - elapsed))
            print((1/(time.time()-last_frame)), " fps")
            last_frame = time.time()

            # Enqueue the inference request and save it to our buffer
            task = asyncio.create_task(infer(requests))
            futures.append(task)

            # Wait until our buffer is big enough before we start displaying results
            if len(futures) < BUFFER * FRAMERATE:
                continue

            # Remove the first image from our buffer
            # wait for it to finish loading (if necessary)
            image = await futures.pop(0)
            # And display the inference results
            img = cv2.imread('img.jpg')
            cv2.imshow('image', image)
            
            
            
# Run our main loop
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
asyncio.run(main())


# Release resources when finished
video.release()
cv2.destroyAllWindows()

It looks like you are missing the version number of your model, so the API is probably returning a 404 error, which OpenCV then tries to read as an image.
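
One way to confirm this is to check the HTTP status before handing the response bytes to OpenCV. Below is a minimal sketch of a helper (the name decode_inference_image is hypothetical, not part of the Roboflow sample) that you could call from infer() in place of the two imdecode lines:

import cv2
import numpy as np
import httpx

def decode_inference_image(resp: httpx.Response) -> np.ndarray:
    # An error response (e.g. a 404 for a missing model version) contains text/JSON,
    # not JPEG bytes, so cv2.imdecode would return None and cv2.imshow would fail.
    if resp.status_code != 200:
        raise RuntimeError(f"Roboflow API returned {resp.status_code}: {resp.text}")
    image = cv2.imdecode(np.frombuffer(resp.content, dtype=np.uint8), cv2.IMREAD_COLOR)
    if image is None:
        raise RuntimeError("Roboflow API response could not be decoded as an image")
    return image

Returning decode_inference_image(resp) at the end of infer() turns the silent empty image into an error message that shows the status code and the response body.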

I found your project on Roboflow Universe based on the ROBOFLOW_MODEL in your code; it looks like you are looking for version 3.

So try changing the line

ROBOFLOW_MODEL = "penguins-ojf2k"

to

ROBOFLOW_MODEL = "penguins-ojf2k/3"

It also looks like your model was trained at 640x640 (not 416x416), so you should change ROBOFLOW_SIZE to 640 for best results.
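
Putting both changes together, the top of the script would look something like this (a sketch; the API key stays masked as in the question):

ROBOFLOW_API_KEY = "********"          # your own API key
ROBOFLOW_MODEL = "penguins-ojf2k/3"    # model id followed by the version number
ROBOFLOW_SIZE = "640"                  # the model was trained at 640x640

upload_url = "".join([
    "https://detect.roboflow.com/",
    ROBOFLOW_MODEL,
    "?api_key=",
    ROBOFLOW_API_KEY,
    "&format=image",  # change to "json" if you want the prediction boxes instead of a rendered image
    "&stroke=5"
])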
