
How to fix "error: (-215) pbBlob.raw_data_type() == caffe::FLOAT16 in function blobFromProto" when running a neural network in OpenCV

I am currently trying to use Nvidia DIGITS to train a CNN on a custom dataset for object detection, and eventually I want to run that network on an Nvidia Jetson TX2. I followed the recommended instructions to download the DIGITS image from Docker, and I am able to successfully train a network with reasonable accuracy. But when I try to run my network in Python using OpenCV, I get this error:

"error: (-215) pbBlob.raw_data_type() == caffe::FLOAT16 in function blobFromProto" “错误:(-215)pbBlob.raw_data_type()==函数blobFromProto中的caffe :: FLOAT16”

I have read in a few other threads that this is because DIGITS stores its networks in a form that is incompatible with OpenCV's DNN module.
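
If you want to confirm this for a given .caffemodel, the short sketch below dumps how each layer's weights are serialized. It assumes NVIDIA's Caffe fork (the flavor that DIGITS ships) is importable, since the raw_data / raw_data_type fields mentioned in the error are NVCaffe extensions to BlobProto; the snapshot file name is just a placeholder.

# Diagnostic sketch: inspect how weights are stored in a DIGITS/NVCaffe
# .caffemodel. Assumes NVIDIA's Caffe fork is on the PYTHONPATH (its BlobProto
# defines the raw_data / raw_data_type fields referenced by the OpenCV error).
from caffe.proto import caffe_pb2

net_param = caffe_pb2.NetParameter()
with open("snapshot_iter_XXXX.caffemodel", "rb") as f:  # placeholder file name
    net_param.ParseFromString(f.read())

for layer in net_param.layer:
    for blob in layer.blobs:
        # raw_data_type is an enum (FLOAT, FLOAT16, ...); data is the plain
        # repeated-float field that stock BVLC Caffe uses instead
        print(layer.name,
              "raw_data_type:", blob.raw_data_type,
              "raw bytes:", len(blob.raw_data),
              "float entries:", len(blob.data))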

Before training my network, I tried selecting the option in DIGITS that is supposed to make the network compatible with other software, but that doesn't seem to change the network at all, and I get the same error when running my Python script. This is the script that produces the error (it comes from this tutorial: https://www.pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/):

# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["dontcare", "HatchPanel"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))
# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

# load the input image and construct an input blob for the image
# by resizing to a fixed 300x300 pixels and then normalizing it
# (note: normalization is done via the authors of the MobileNet SSD
# implementation)
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843,
    (300, 300), 127.5)
# pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()

# loop over the detections
for i in np.arange(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the  
    # prediction
    confidence = detections[0, 0, i, 2]

    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence > args["confidence"]:
        # extract the index of the class label from the `detections`,
        # then compute the (x, y)-coordinates of the bounding box for
        # the object
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")

        # display the prediction
        label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
        print("[INFO] {}".format(label))
        cv2.rectangle(image, (startX, startY), (endX, endY),
            COLORS[idx], 2)
        y = startY - 15 if startY - 15 > 15 else startY + 15
        cv2.putText(image, label, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)

This should output the image specified in the call to the script, with the neural network's detections drawn on top of the image. Instead, the script crashes with the error mentioned above. I have seen other threads from people hitting this same error, but so far none of them have arrived at a solution that works with the current version of DIGITS.

My full setup is as follows:

OS: Ubuntu 16.04
Nvidia DIGITS Docker Image Version: 19.01-caffe
DIGITS Version: 6.1.1
Caffe Version: 0.17.2
Caffe Flavor: Nvidia
OpenCV Version: 4.0.0
Python Version: 3.5

Any help is much appreciated.

Harrison McIntyre, thank you! This PR fixes it: https://github.com/opencv/opencv/pull/13800. Please note that there is a layer with type "ClusterDetections". It is not supported by OpenCV, but you can implement it using the custom-layers mechanism (see the OpenCV custom layers tutorial).
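
For reference, a minimal sketch of that custom-layer mechanism is shown below. The class name, the file names, and the pass-through forward() are placeholders, not a real implementation: an actual ClusterDetections layer would have to reproduce DetectNet's bounding-box clustering (for example with cv2.groupRectangles) instead of returning its input, and you still need an OpenCV build that includes the PR above for the weights to load.

# A sketch of registering a custom layer with OpenCV's DNN module so that
# unsupported layer types can be handled from Python.
import cv2

class ClusterDetectionsLayer(object):
    def __init__(self, params, blobs):
        # 'params' holds the layer parameters parsed from the prototxt
        pass

    def getMemoryShapes(self, inputs):
        # declare a single output with the same shape as the first input
        return [inputs[0]]

    def forward(self, inputs):
        # placeholder: pass the first input through unchanged; replace this
        # with DetectNet's actual clustering / post-processing logic
        return [inputs[0]]

# register the class for layers whose type is "ClusterDetections", then load
# the network as usual (file names are placeholders)
cv2.dnn_registerLayer("ClusterDetections", ClusterDetectionsLayer)
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "snapshot.caffemodel")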
