
Issue when converting ONNX model to Caffe2

I converted a TF model to ONNX, and then the ONNX model to Caffe2. The conversion succeeded; however, I get a RuntimeError when trying to load the resulting model and run inference.

This is the error that I am receiving. How do I add the 'is_test' attribute to the SpatialBN node?

I went through the PyTorch repo and saw this issue; however, it is unresolved. The ONNX code base here adds the is_test attribute for opset >= 7, and I am using opset 8. However, I am still getting the error.

[W common_gpu.cc:35] Insufficient cuda driver. Cannot use cuda.
[W init.h:137] Caffe2 GlobalInit should be run before any other API calls.
[W init.h:137] Caffe2 GlobalInit should be run before any other API calls.
[W predictor_config.cc:90] Caffe2 is compiled without optimization passes.
[E operator_schema.cc:101] Argument 'is_test' is required for Operator 'SpatialBN'.
Traceback (most recent call last):
  File "main.py", line 91, in <module>
    test_caffe("mod-caffe-net.pb", "mod-caffe-init-net.pb", "../data/mouth")
  File "main.py", line 70, in test_caffe
    predictor = workspace.Predictor(param_values, model_net)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/caffe2/python/workspace.py", line 187, in Predictor
    return C.Predictor(StringifyProto(init_net), StringifyProto(predict_net))
RuntimeError: [enforce fail at operator.cc:199] schema->Verify(operator_def). Operator def did not pass schema checking: input: "conv1/Relu:0" input: "batchNorm1/gamma/read/_1__cf__1:0" input: "batchNorm1/beta/read/_0__cf__0:0" input: "batchNorm2/moving_mean/read/_6__cf__6:0" input: "batchNorm1/moving_variance/read/_3__cf__3:0" output: "batchNorm1/FusedBatchNorm:0" name: "batchNorm1/FusedBatchNorm" type: "SpatialBN" arg { name: "epsilon" f: 0.001 } device_option { device_type: 0 device_id: 0 }
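To answer the question in the post directly: the missing argument can be patched into the predict net before it is handed to the Predictor. Below is a minimal sketch; `add_is_test` is a hypothetical helper name, and the commented usage assumes the net is a `caffe2_pb2.NetDef` loaded from `mod-caffe-net.pb`.

```python
# Hypothetical helper: patch a Caffe2 NetDef so that every SpatialBN op
# carries the required 'is_test' argument (1 = inference mode). Written
# against the protobuf repeated-field API, so it applies to a
# caffe2_pb2.NetDef parsed from the predict-net file.

def add_is_test(net, value=1):
    """Add is_test to every SpatialBN op that lacks it; return patch count."""
    patched = 0
    for op in net.op:
        if op.type != "SpatialBN":
            continue
        if any(arg.name == "is_test" for arg in op.arg):
            continue  # argument already present, leave the op alone
        arg = op.arg.add()  # protobuf repeated-message add()
        arg.name = "is_test"
        arg.i = value
        patched += 1
    return patched


# Usage sketch (assumes Caffe2 is installed; file names from the post):
#
# from caffe2.proto import caffe2_pb2
# net = caffe2_pb2.NetDef()
# with open("mod-caffe-net.pb", "rb") as f:
#     net.ParseFromString(f.read())
# add_is_test(net)
# with open("mod-caffe-net-fixed.pb", "wb") as f:
#     f.write(net.SerializeToString())
```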

The issue is resolved. I was using the command-line utility suggested in their README; however, it points to a tutorial for a deprecated version of the code.

The command-line utility (installed via pip install onnx-caffe2) still has _known_opset_version = 3, and this was causing the error. After I switched to the conversion utility exposed through the Python API in the PyTorch library by importing

from caffe2.python.onnx.backend import Caffe2Backend as c2

I was successfully able to run inference on the converted model.
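For reference, the working conversion path can be sketched as below. This is a sketch under stated assumptions, not the exact script: it assumes onnx and the PyTorch-bundled caffe2 are installed, `onnx_graph_to_caffe2_net` is the Caffe2Backend entry point in that package, and the file names are placeholders.

```python
# Sketch: convert an ONNX model with the Caffe2 backend that ships inside
# PyTorch (not the deprecated onnx-caffe2 pip package). The imports are
# deferred into the function so the module loads without Caffe2 present.

def convert_onnx_to_caffe2(onnx_path, init_path, predict_path):
    """Convert an ONNX model file into Caffe2 init/predict NetDef files."""
    import onnx
    from caffe2.python.onnx.backend import Caffe2Backend as c2

    model = onnx.load(onnx_path)
    onnx.checker.check_model(model)  # fail early on a malformed graph

    # Returns (init_net, predict_net) as caffe2_pb2.NetDef protos.
    init_net, predict_net = c2.onnx_graph_to_caffe2_net(model)

    with open(init_path, "wb") as f:
        f.write(init_net.SerializeToString())
    with open(predict_path, "wb") as f:
        f.write(predict_net.SerializeToString())


if __name__ == "__main__":
    convert_onnx_to_caffe2("model.onnx",
                           "mod-caffe-init-net.pb",
                           "mod-caffe-net.pb")
```

The resulting .pb files can then be loaded with workspace.Predictor as in the traceback above.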
