
How to package a tensorflow-gpu model to run on most machines?

The TensorFlow GPU version, NVIDIA driver version, and cuDNN version have a specific compatibility matrix. This matrix makes it hard to package and distribute a TensorFlow model so that others can run it without any hassle when I hand it over. Container-based technologies (e.g. Docker) also have a problem, since the container does not know the NVIDIA driver version on the host. I am wondering if anyone knows the best way to package a TensorFlow model so that it automatically configures itself according to the underlying NVIDIA driver on a Linux system. How can I achieve this?

The easiest way is to use the TensorFlow image from Docker Hub (https://hub.docker.com/r/tensorflow/tensorflow/). For GPU support, you can use the NVIDIA toolkit together with the existing pre-built images, or add what you need on top of them with a Dockerfile.
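A minimal sketch of that Dockerfile approach is shown below, assuming the official tensorflow/tensorflow:latest-gpu image; the saved_model/ directory and serve.py script are placeholder names, not part of the original answer.

    # Build on the official TensorFlow GPU image, which already bundles
    # a CUDA/cuDNN stack matched to that TensorFlow release.
    FROM tensorflow/tensorflow:latest-gpu

    # Copy the exported model and an inference script into the image
    # (saved_model/ and serve.py are placeholder names).
    COPY saved_model/ /app/saved_model/
    COPY serve.py /app/
    WORKDIR /app

    # Run the inference script when the container starts.
    CMD ["python", "serve.py"]

With the NVIDIA Container Toolkit installed on the target machine, the image can be built with docker build -t my-tf-model . and run with docker run --gpus all my-tf-model. The host only needs a sufficiently recent NVIDIA driver; CUDA and cuDNN come from inside the image, so the compatibility matrix is resolved at build time rather than on each machine you distribute to.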
