
C++ Tensorflow API with TensorRT

My goal is to run a TensorRT-optimized TensorFlow graph in a C++ application. I am using TensorFlow 1.8 with TensorRT 4. Using the Python API I am able to optimize the graph and see a nice performance increase.

Trying to run the graph in C++ fails with the following error:

Not found: Op type not registered 'TRTEngineOp' in binary running on e15ff5301262. Make sure the Op and Kernel are registered in the binary running in this process.

Other, non-TensorRT graphs work. I had a similar error with the Python API, but solved it by importing tensorflow.contrib.tensorrt. From the error I am fairly certain the kernel and op are not registered, but I don't know how to register them in the application after TensorFlow has been built. On a side note, I cannot use Bazel but am required to use CMake. So far I link against libtensorflow_cc.so and libtensorflow_framework.so.
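
For reference, the graph is loaded with the usual C++ API pattern, roughly like the sketch below (the model path, node names and input shape are placeholders):

#include <memory>
#include <vector>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  // Read the frozen, TensorRT-optimized graph from disk.
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          "frozen_model_trt.pb", &graph_def));

  // Importing the graph is where "Op type not registered 'TRTEngineOp'" shows up.
  std::unique_ptr<tensorflow::Session> session(
      tensorflow::NewSession(tensorflow::SessionOptions()));
  TF_CHECK_OK(session->Create(graph_def));

  // Run a single inference with a dummy input.
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 224, 224, 3}));
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input:0", input}}, {"output:0"}, {}, &outputs));
  return 0;
}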

Can anyone help me here? thanks!

Update: Using the C or C++ API to load _trt_engine_op.so does not throw an error while loading, but running the graph then fails with:

Invalid argument: No OpKernel was registered to support Op 'TRTEngineOp' with these attrs.  Registered devices: [CPU,GPU], Registered kernels:
  <no registered kernels>

     [[Node: my_trt_op3 = TRTEngineOp[InT=[DT_FLOAT, DT_FLOAT], OutT=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], input_nodes=["tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer", "tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer"], output_nodes=["tower_0/down_0/conv_skip/Relu", "tower_0/down_1/conv_skip/Relu", "tower_0/down_2/conv_skip/Relu", "tower_0/down_3/conv_skip/Relu"], serialized_engine="\220{I\000...00\000\000"](tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer)]]

Another way to solve the problem with the error "Not found: Op type not registered 'TRTEngineOp'" on TensorFlow 1.8:

1) In the file tensorflow/contrib/tensorrt/BUILD, add a new section with the following content:

cc_library(
    name = "trt_engine_op_kernel_cc",
    srcs = [
        "kernels/trt_calib_op.cc",
        "kernels/trt_engine_op.cc",
        "ops/trt_calib_op.cc",
        "ops/trt_engine_op.cc",
        "shape_fn/trt_shfn.cc",
    ],
    hdrs = [
        "kernels/trt_calib_op.h",
        "kernels/trt_engine_op.h",
        "shape_fn/trt_shfn.h",
    ],
    copts = tf_copts(),
    visibility = ["//visibility:public"],
    deps = [
        ":trt_logging",
        ":trt_plugins",
        ":trt_resources",
        "//tensorflow/core:gpu_headers_lib",
        "//tensorflow/core:lib_proto_parsing",
        "//tensorflow/core:stream_executor_headers_lib",
    ] + if_tensorrt([
        "@local_config_tensorrt//:nv_infer",
    ]) + tf_custom_op_library_additional_deps(),
    alwayslink = 1,  # buildozer: disable=alwayslink-with-hdrs
)

2) Add //tensorflow/contrib/tensorrt:trt_engine_op_kernel_cc as a dependency to the corresponding Bazel target you want to build.

PS: There is no need to load the library _trt_engine_op.so with TF_LoadLibrary.

Here are my findings (and a solution of sorts) for this problem (TensorFlow 1.8.0, TensorRT 3.0.4):

I wanted to include TensorRT support in a library that loads a graph from a given *.pb file.

Just adding //tensorflow/contrib/tensorrt:trt_engine_op_kernel to my Bazel BUILD file didn't do the trick for me. I still got messages indicating that the ops were not registered:

2018-05-21 12:22:07.286665: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTCalibOp" device_type: "GPU"') for unknown op: TRTCalibOp
2018-05-21 12:22:07.286856: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTEngineOp" device_type: "GPU"') for unknown op: TRTEngineOp
2018-05-21 12:22:07.296024: E tensorflow/examples/tf_inference_lib/cTfInference.cpp:56] Not found: Op type not registered 'TRTEngineOp' in binary running on ***. 
Make sure the Op and Kernel are registered in the binary running in this process.

The solution was that I had to load the ops library (tf_custom_op_library) within my C++ code using the C API:

#include "tensorflow/c/c_api.h"
...
TF_Status status = TF_NewStatus();
TF_LoadLibrary("_trt_engine_op.so", status);

The shared object _trt_engine_op.so is created by the Bazel target //tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so:

bazel build --config=opt --config=cuda --config=monolithic \
     //tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so

Now I only have to make sure that _trt_engine_op.so is available whenever it is needed, e.g. via LD_LIBRARY_PATH.

If anybody has an idea how to do this in a more elegant way (why do we have two artifacts that have to be built? Can't we just have one?), I'm happy to hear any suggestion.

tldr

  1. Add //tensorflow/contrib/tensorrt:trt_engine_op_kernel as a dependency to the corresponding Bazel target you want to build.

  2. Load the ops library _trt_engine_op.so in your code using the C API (see the sketch below).
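
Putting both steps together, the library must be loaded before the graph is imported; a minimal sketch (library and model paths are placeholders) could look like this:

#include <memory>

#include "tensorflow/c/c_api.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  // 1. Register TRTEngineOp / TRTCalibOp by loading the custom-op library.
  TF_Status* status = TF_NewStatus();
  TF_LoadLibrary("_trt_engine_op.so", status);
  if (TF_GetCode(status) != TF_OK) {
    // Library not found or not loadable, e.g. LD_LIBRARY_PATH is not set.
    TF_DeleteStatus(status);
    return 1;
  }
  TF_DeleteStatus(status);

  // 2. The TensorRT-optimized graph can now be imported as usual.
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          "frozen_model_trt.pb", &graph_def));
  std::unique_ptr<tensorflow::Session> session(
      tensorflow::NewSession(tensorflow::SessionOptions()));
  TF_CHECK_OK(session->Create(graph_def));
  return 0;
}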

For TensorFlow r1.8, the additions shown below to two BUILD files, together with building libtensorflow_cc.so with the monolithic option, worked for me.

diff --git a/tensorflow/BUILD b/tensorflow/BUILD
index cfafffd..fb8eb31 100644
--- a/tensorflow/BUILD
+++ b/tensorflow/BUILD
@@ -525,6 +525,8 @@ tf_cc_shared_object(
         "//tensorflow/cc:scope",
         "//tensorflow/cc/profiler",
         "//tensorflow/core:tensorflow",
+        "//tensorflow/contrib/tensorrt:trt_conversion",
+        "//tensorflow/contrib/tensorrt:trt_engine_op_kernel",
     ],
 )

diff --git a/tensorflow/contrib/tensorrt/BUILD b/tensorflow/contrib/tensorrt/BUILD
index fd3582e..a6566b9 100644
--- a/tensorflow/contrib/tensorrt/BUILD
+++ b/tensorflow/contrib/tensorrt/BUILD
@@ -76,6 +76,8 @@ cc_library(
     srcs = [
         "kernels/trt_calib_op.cc",
         "kernels/trt_engine_op.cc",
+        "ops/trt_calib_op.cc",
+        "ops/trt_engine_op.cc",
     ],
     hdrs = [
         "kernels/trt_calib_op.h",
@@ -86,6 +88,7 @@ cc_library(
     deps = [
         ":trt_logging",
         ":trt_resources",
+        ":trt_shape_function",
         "//tensorflow/core:gpu_headers_lib",
         "//tensorflow/core:lib_proto_parsing",
         "//tensorflow/core:stream_executor_headers_lib",

As you mentioned, it should work when you add //tensorflow/contrib/tensorrt:trt_engine_op_kernel to the dependency list. Currently the TensorFlow-TensorRT integration is still in progress and may work well only for the Python API; for C++ you'll need to call ConvertGraphDefToTensorRT() from tensorflow/contrib/tensorrt/convert/convert_graph.h for the conversion.
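
A rough sketch of such a call is shown below; the exact parameter list should be checked against convert_graph.h in your TensorFlow tree, and the output node names, batch size, workspace size and precision mode are placeholder values:

#include "tensorflow/contrib/tensorrt/convert/convert_graph.h"
#include "tensorflow/core/framework/graph.pb.h"

// frozen_graph_def has been read with ReadBinaryProto as usual.
tensorflow::GraphDef trt_graph_def;
tensorflow::Status s =
    tensorflow::tensorrt::convert::ConvertGraphDefToTensorRT(
        frozen_graph_def,
        {"output_node"},                        // graph output node names (placeholder)
        /*max_batch_size=*/1,
        /*max_workspace_size_bytes=*/1 << 30,
        &trt_graph_def,
        /*precision_mode=*/1,                   // integer-coded precision; see convert_graph.h
        /*minimum_segment_size=*/3);
// trt_graph_def can then be passed to Session::Create() as shown above.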

Let me know if you have any questions.

Solution: add this import:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

Related discussion: https://github.com/tensorflow/tensorflow/issues/26525

Here is my solution for TensorFlow 1.14. In your BUILD file, e.g. tensorflow/examples/your_workspace/BUILD:

In the tf_cc_binary rule:

srcs = [..., "//tensorflow/compiler/tf2tensorrt:ops/trt_engine_op.cc"]
deps = [..., "//tensorflow/compiler/tf2tensorrt:trt_op_kernels"]

I built TensorFlow with Bazel using --config=tensorrt, and then TRTEngineOp can be found in tensorflow_cc.so.

BUT I still encounter the error "No OpKernel was registered to support Op 'TRTEngineOp' with these attrs" when I load the graph into a session with the C++ TF API.

Did you solve the problem?
