
Image classification using tensorflow lite without Google Coral USB

I am trying to evaluate a Raspberry Pi's performance on an image-classification task over a video file, both with a Google Coral Edge TPU USB device and without it. I have already managed to evaluate the performance using the Edge TPU USB device. However, when I try running tensorflow lite code to run inference, I get an error telling me I need to plug in the device:

ValueError: Failed to load delegate from libedgetpu.so.1

Specifically, I am running inference on a video using the Coral device and saving every frame of the video in order to benchmark the hardware.

import argparse
import time
import cv2
import numpy as np
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter
from utils import visualization as visual

WINDOW_NAME = "Edge TPU Image classification"


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", help="File path of Tflite model.", required=True)
    parser.add_argument("--label", help="File path of label file.", required=True)
    parser.add_argument("--top_k", help="keep top k candidates.", default=2, type=int)
    parser.add_argument("--threshold", help="Score threshold.", default=0.0, type=float)
    parser.add_argument("--width", help="Resolution width.", default=640, type=int)
    parser.add_argument("--height", help="Resolution height.", default=480, type=int)
    parser.add_argument("--videopath", help="File path of Videofile.", default="")
    args = parser.parse_args()

    # Initialize window.
    cv2.namedWindow(WINDOW_NAME)
    cv2.moveWindow(WINDOW_NAME, 100, 200)

    # Initialize engine and load labels.
    count = 0
    interpreter = make_interpreter(args.model)
    interpreter.allocate_tensors()
    labels = read_label_file(args.label) if args.label else None
    elapsed_list = []
    cap = cv2.VideoCapture('/home/pi/coral-usb/pycoral/test_data/video.mkv')
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:  # stop when the video ends or a frame fails to decode
            break
        im = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        cv2.imwrite("/home/pi/Desktop/frames/frame_%d.jpeg" % count, frame)
        print("saved frame_%d" % count)
        count += 1
        cap_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        cap_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        # Run inference.
        start = time.perf_counter()

        _, scale = common.set_resized_input(
            interpreter, (cap_width, cap_height), lambda size: cv2.resize(im, size)
        )
        interpreter.invoke()

        # Check result.
        results = classify.get_classes(interpreter, args.top_k, args.threshold)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if results:
            for i in range(len(results)):
                label = "{0} ({1:.2f})".format(labels[results[i][0]], results[i][1])
                pos = 60 + (i * 30)
                visual.draw_caption(frame, (10, pos), label)


        # display
        cv2.imshow(WINDOW_NAME, frame)
        if cv2.waitKey(10) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()

This code runs inference with the Coral device. How can I do the same thing without the Coral? I would like to test the difference between running my model with and without the Edge TPU USB device.

Lastly, I have tried image classification from this link using tensorflow lite. However, I am getting the following error:

RuntimeError: Encountered unresolved custom op: edgetpu-custom-op. Node number 0 (edgetpu-custom-op) failed to prepare.

I recently ran into this while supervising a thesis. We tested face detection on a Raspberry Pi 4 with a Coral USB and without it (inference on the RPi CPU). Are you using the same model file for both? If so, that is the problem. You need the bare tflite model for CPU inference and the TPU-compiled model for inference on the TPU. You can take a look at this repo, where you can find the code I mentioned (it is not well documented but it works; look at the inference CPU and inference CORAL files).
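Concretely, the CPU-only variant only needs two changes to the script in the question: build the interpreter with the plain TFLite `Interpreter` instead of pycoral's `make_interpreter` (which loads the `libedgetpu` delegate), and point it at the bare `.tflite` model rather than the `*_edgetpu.tflite` one. Below is a minimal sketch under those assumptions; the model path and the helper names (`make_cpu_interpreter`, `top_k_classes`, `classify_frame`) are illustrative, not from the original post, and `tflite_runtime` is assumed to be installed (`pip install tflite-runtime`).

```python
import numpy as np


def make_cpu_interpreter(model_path):
    """Plain TFLite interpreter with no Edge TPU delegate.

    Must be given the bare .tflite model; the *_edgetpu.tflite variant
    raises the 'edgetpu-custom-op' RuntimeError on CPU.
    """
    from tflite_runtime.interpreter import Interpreter
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    return interpreter


def top_k_classes(scores, k=2):
    """Return the k highest-scoring (class_id, score) pairs."""
    order = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in order]


def classify_frame(interpreter, rgb_frame, top_k=2):
    """Resize a frame to the model's input shape and run CPU inference."""
    import cv2
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    _, height, width, _ = inp["shape"]
    resized = cv2.resize(rgb_frame, (width, height))
    interpreter.set_tensor(
        inp["index"], np.expand_dims(resized, 0).astype(inp["dtype"])
    )
    interpreter.invoke()
    scores = np.squeeze(interpreter.get_tensor(out["index"]))
    if scores.dtype == np.uint8:  # dequantize quantized classifier outputs
        scale, zero_point = out["quantization"]
        scores = scale * (scores.astype(np.float32) - zero_point)
    return top_k_classes(scores, top_k)
```

With this in place, the benchmarking loop from the question can stay the same; only the interpreter construction and the model file change, so the timing comparison isolates the effect of the Edge TPU.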
