
How to run a pretrained TensorFlow model using Nvidia's TensorRT on the Jetson TX1?

In Nvidia's blog, they introduce TensorRT as follows:

NVIDIA TensorRT™ is a high performance neural network inference engine for production deployment of deep learning applications. TensorRT can be used to rapidly optimize, validate and deploy trained neural networks for inference to hyperscale data centers, embedded, or automotive product platforms.

So I am wondering: if I have a pre-trained TensorFlow model, can I use it with TensorRT on the Jetson TX1 for inference?

UPDATE (2020.01.03): Both TensorFlow 1.X and 2.0 are now supported by TensorRT (tested on TRT 6 and 7; see this tutorial: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html ).
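With TF-TRT in TensorFlow 2.x, the conversion the tutorial above describes can be sketched roughly as below. This is a minimal sketch, assuming a TensorFlow build with TensorRT support (such as NVIDIA's JetPack wheels); the directory paths are placeholders, not values from this thread.

```python
def convert_savedmodel_to_trt(saved_model_dir, output_dir):
    """Optimize a TF 2.x SavedModel with TF-TRT and save the result.

    Placeholder paths; requires a TensorRT-enabled TensorFlow build.
    """
    # Import inside the function so this file can be loaded on machines
    # without a TensorRT-enabled TensorFlow install.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    # FP16 is a common precision choice on Jetson-class GPUs.
    params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
        precision_mode=trt.TrtPrecisionMode.FP16)

    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=saved_model_dir,
        conversion_params=params)
    converter.convert()         # replace supported subgraphs with TRT ops
    converter.save(output_dir)  # write the optimized SavedModel
```

The saved model can then be loaded with `tf.saved_model.load(output_dir)` and run like any other SavedModel; unsupported ops fall back to plain TensorFlow execution.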

Based on this post from the Nvidia forum, it seems that you can currently use TensorRT for inference with Caffe models but not TensorFlow models. Besides TensorRT, building TensorFlow on the TX1 is another issue (see here: https://github.com/ugv-tracking/cfnet ).

From JetPack 3.1, NVIDIA has added TensorRT support for TensorFlow as well, so a trained TF model can be deployed directly on the Jetson TX1/TK1/TX2.
