
How to run a pretrained TensorFlow model using NVIDIA's TensorRT on the Jetson TX1?

In NVIDIA's blog, they introduce TensorRT as follows:

NVIDIA TensorRT™ is a high performance neural network inference engine for production deployment of deep learning applications. TensorRT can be used to rapidly optimize, validate and deploy trained neural network for inference to hyperscale data centers, embedded, or automotive product platforms.

So I am wondering: if I have a pre-trained TensorFlow model, can I use it with TensorRT on the Jetson TX1 for inference?

UPDATE (2020.01.03): Both TensorFlow 1.x and 2.0 are now supported by TensorRT through the TF-TRT integration (tested on TensorRT 6 and 7); see this tutorial: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html .
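For reference, a minimal TF-TRT conversion sketch for TensorFlow 2.x, following the pattern in that guide; the SavedModel paths below are placeholders, and FP16 is just one reasonable precision choice for Jetson-class GPUs:

```python
# TF-TRT conversion of a TensorFlow 2.x SavedModel.
# Assumes a TensorFlow build with TensorRT support (e.g. NVIDIA's Jetson wheels).
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)      # FP16 suits Jetson GPUs
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="saved_model",           # pretrained model (hypothetical path)
    conversion_params=params)
converter.convert()                                # replace supported subgraphs with TRT ops
converter.save("saved_model_trt")                  # write the optimized SavedModel
```

The result is still a SavedModel, so on the Jetson it can be loaded with the ordinary `tf.saved_model.load("saved_model_trt")`.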

Based on this post from the NVIDIA forum, it seems that you can use TensorRT for inference with a caffemodel, but not with a TensorFlow model yet. Besides TensorRT, building TensorFlow on the TX1 is another issue in itself (refer here: https://github.com/ugv-tracking/cfnet ).
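For completeness, the Caffe route looks roughly like this in TensorRT's later Python API (the early GIE/TensorRT releases were C++-only; the prototxt/caffemodel paths and the "prob" output blob here are hypothetical):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.CaffeParser()

# Parse the Caffe deploy/weights pair into the TRT network (placeholder file names).
model_tensors = parser.parse(deploy="deploy.prototxt", model="net.caffemodel",
                             network=network, dtype=trt.float32)
network.mark_output(model_tensors.find("prob"))    # "prob" is the hypothetical output blob

builder.max_workspace_size = 1 << 28               # 256 MiB of scratch space
engine = builder.build_cuda_engine(network)        # ready for serialization/inference
```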

Since JetPack 3.1, NVIDIA has added TensorRT support for TensorFlow as well, so a trained TF model can be deployed directly on the Jetson TX1/TK1/TX2.
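That standalone route (separate from the TF-TRT update above) goes through the UFF format: freeze the TensorFlow graph, convert it to UFF, then parse it into a TensorRT engine. A rough sketch, assuming a frozen graph at frozen_model.pb with hypothetical input/output tensor names (note that the UFF path has since been deprecated in recent TensorRT releases):

```python
import tensorrt as trt
import uff

# Convert a frozen TensorFlow graph to UFF (tensor names are hypothetical).
uff.from_tensorflow_frozen_model(
    "frozen_model.pb", output_nodes=["logits"], output_filename="model.uff")

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.UffParser()

# Register I/O tensors, then parse the UFF file into the TRT network.
parser.register_input("input", (3, 224, 224))      # CHW input shape; hypothetical
parser.register_output("logits")
parser.parse("model.uff", network)

builder.max_workspace_size = 1 << 28               # 256 MiB of scratch space
engine = builder.build_cuda_engine(network)        # serializable inference engine
```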
