
Can I let people use a different tensorflow-gpu version than the one they have installed, with different CUDA dependencies?

I was trying to package and release a project which uses tensorflow-gpu. Since my intention is to make the installation as easy as possible, I do not want users to compile tensorflow-gpu from scratch, so I decided to use pipenv to install whatever version pip provides.
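A minimal sketch of such a pipenv setup (the version pin is hypothetical; note that the 1.x GPU wheels on PyPI are each linked against one specific CUDA release — e.g. tensorflow-gpu 1.5–1.12 expect CUDA 9.0's libcublas.so.9.0):

```toml
# Hypothetical Pipfile; pipenv installs the prebuilt wheel from PyPI,
# so nothing is compiled from source.
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
# This wheel dlopens the CUDA 9.0 runtime libraries at import time.
tensorflow-gpu = "==1.12.0"
```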

I realized that although everything works in my original local environment, I cannot import tensorflow in the virtualenv version.

ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

Although this seems easily fixable by changing local symlinks, doing so might break my local tensorflow and goes against the concept of virtualenv. I also have no idea how other people will have installed CUDA on their machines, so this approach doesn't seem promising for portability.
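One way to see what a given machine can actually provide, without touching tensorflow at all, is to ask the dynamic loader which CUDA libraries it can resolve (a minimal diagnostic sketch, assuming a Linux machine with `ldconfig` on the path):

```python
# Check which CUDA runtime libraries the dynamic loader can resolve.
# tensorflow-gpu 1.x dlopens these at import time, so a missing entry
# here predicts the "cannot open shared object file" ImportError.
import ctypes.util

for name in ("cublas", "cudart", "cudnn"):
    path = ctypes.util.find_library(name)
    print("lib%s: %s" % (name, path or "NOT FOUND"))
```

Running this on the target machine (inside or outside the virtualenv, it makes no difference, since the loader search path is system-wide) shows whether the wheel's expectations can be met at all.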

What can I do to ensure that tensorflow-gpu works when someone from the internet gets my project with no more guidance than "install CUDA XX"? Should I fall back to the CPU-only tensorflow package to ensure compatibility, and let my users install tensorflow-gpu manually?

Getting a working tensorflow-gpu on a machine involves a series of steps, including installing CUDA and cuDNN, the latter requiring an NVIDIA developer account. Many machines will not even meet the required configuration for tensorflow-gpu, e.g. any machine without a modern NVIDIA GPU. You may want to declare the tensorflow-gpu requirement and leave it to the user to meet it, with appropriate pointers for guidance. If the project can work acceptably on the CPU-only tensorflow build, that is a much easier fallback option.
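If the CPU fallback is acceptable, the project can also degrade gracefully at runtime instead of surfacing a raw ImportError. A minimal sketch (the wording of the hint is the only assumption; the import logic works with either build):

```python
# Graceful fallback: try to import whichever TensorFlow build is
# installed, and turn a loader failure into an actionable message.
try:
    import tensorflow as tf  # tensorflow or tensorflow-gpu, whichever is present
except ImportError as exc:
    tf = None
    # libcublas/libcudart errors usually mean the CUDA runtime is
    # missing, or the installed wheel expects a different CUDA version.
    print("TensorFlow failed to load: %s" % exc)
    print("For tensorflow-gpu, install the matching CUDA and cuDNN;")
    print("otherwise `pip install tensorflow` for the CPU-only build.")
```

The rest of the code can then check `tf is None` and either exit with the hint above or continue on a CPU-only code path.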
