Has anyone used TensorFlow Lite on any Nvidia Jetson product? I want to use my Jetson Nano for inference and would like to do so with TF Lite, utilizing the GPU.
Confusingly, there does not seem to be a Python API for creating a GPU delegate in TF Lite.
Is there a clear reason for this?
Is the alternative to use the full TensorFlow library? (I would prefer not to use the Nvidia TensorRT engine.)
Yes, I have used TF Lite on the Jetson Nano before.
You can refer to my previous article on Medium (apologies, the article is written in Chinese); it covers how to set up the TF Lite environment on the Jetson Nano.
Note: adjust the following command to match your own environment (Python version and architecture):
pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp36-cp36m-linux_aarch64.whl
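Once the wheel is installed, a minimal inference sketch with the `tflite_runtime` Interpreter looks like the following; the model path `model.tflite` is a placeholder for your own model. Note that this runs on the CPU only, since the Python API does not expose a GPU delegate constructor.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # provided by the wheel above

# Load a TFLite model from disk (path is a placeholder for your own model).
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# Read back the first output tensor.
result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```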
In case you are interested in running inference from C++, you can compile TF Lite 2.4.1 on your Jetson device, as I did on the Xavier NX:
$ sudo apt-get install cmake curl
$ wget -O tensorflow.zip https://github.com/tensorflow/tensorflow/archive/v2.4.1.zip
$ unzip tensorflow.zip
$ mv tensorflow-2.4.1 tensorflow
$ cd tensorflow
$ ./tensorflow/lite/tools/make/download_dependencies.sh
$ ./tensorflow/lite/tools/make/build_aarch64_lib.sh
After that, you will also have to build and install the FlatBuffers library that ships with TF Lite:
$ cd ./tensorflow/tensorflow/lite/tools/make/downloads/flatbuffers
$ mkdir build && cd build
$ cmake ..
$ make -j
$ sudo make install
$ sudo ldconfig
After that you will find the static library at tensorflow/tensorflow/lite/tools/make/gen/linux_aarch64/libtensorflow-lite.a
You can link your inference application against it like this (note that `-llibtensorflow-lite.a` is not valid `-l` syntax; pass the archive path directly, and use g++ so the C++ standard library is linked):
$ g++ main.cpp -I tensorflow -I tensorflow/tensorflow/lite/tools/make/downloads/flatbuffers/include tensorflow/tensorflow/lite/tools/make/gen/linux_aarch64/libtensorflow-lite.a -ledgetpu -lpthread -ldl -o main
If you use a Coral Edge TPU accelerator, you will also need to install libedgetpu.so as shown on Coral.ai; otherwise you can drop -ledgetpu from the link command.
Best, Alexander
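For reference, a minimal C++ inference sketch against the static library built above might look like this (the model path is a placeholder, and error handling is deliberately simplified):

```cpp
#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model from disk (path is a placeholder for your own model).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) {
    std::fprintf(stderr, "failed to load model\n");
    return 1;
  }

  // Build an interpreter with the built-in op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) {
    std::fprintf(stderr, "failed to build interpreter\n");
    return 1;
  }

  // Access input tensor 0 (assumed float here), run inference, read output 0.
  float* input = interpreter->typed_input_tensor<float>(0);
  (void)input;  // ...populate the input buffer here...
  if (interpreter->Invoke() != kTfLiteOk) return 1;
  float* output = interpreter->typed_output_tensor<float>(0);
  std::printf("first output value: %f\n", output[0]);
  return 0;
}
```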