
TensorFlow 2.0 custom layers on GPU

Will completely custom-made layers in TensorFlow automatically be run on GPUs? I noticed in this document ( https://www.tensorflow.org/guide/keras/rnn#rnn_layers_and_rnn_cells ) that the RNN wrappers apparently won't be using cuDNN. Does that mean they wouldn't run on the GPU?

Your custom layers will still run on the GPU, and you can confirm that as explained in this answer.
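As a minimal sketch (assuming TensorFlow 2.x with a visible GPU), you can turn on device-placement logging and watch where the ops of a fully custom layer land; the layer name `ScaleLayer` and its contents are purely illustrative, not from the original question:

```python
import tensorflow as tf

# Log the device every op is placed on (GPU:0 if one is available).
tf.debugging.set_log_device_placement(True)

class ScaleLayer(tf.keras.layers.Layer):
    """A fully custom layer: multiplies its input by a trainable scalar."""

    def build(self, input_shape):
        self.scale = self.add_weight(
            name="scale", shape=(), initializer="ones", trainable=True)

    def call(self, inputs):
        return inputs * self.scale

layer = ScaleLayer()
x = tf.random.normal([4, 8])
y = layer(x)
# With a GPU present, the log shows ops like Mul executing on /device:GPU:0.
print(y.device)
```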

You are right, though, that the custom layers won't use cuDNN. Why does that matter? To quote NVIDIA:

cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers

In other words, using these optimised primitives improves training performance. A number of examples with detailed explanations are provided in the cuDNN: Efficient Primitives for Deep Learning paper. Take spatial convolutions, for instance: a non-optimised implementation would use the "naive" approach, while cuDNN uses all sorts of tricks to reduce the number of operations and batch them appropriately. A GPU is still fast compared to a classical CPU; cuDNN just makes it faster. For more recent, independent benchmarks, check out e.g. this article.
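To make the difference concrete for the RNN case from the question, here is a rough sketch (assuming TF 2.x and an available GPU) comparing `tf.keras.layers.LSTM`, which with its default arguments dispatches to the fused cuDNN kernel on GPU, against the `tf.keras.layers.RNN(tf.keras.layers.LSTMCell(...))` wrapper from the linked guide, which runs as generic ops; the shapes and the timing loop are arbitrary illustration, not a rigorous benchmark:

```python
import time
import tensorflow as tf

batch, timesteps, features, units = 32, 100, 64, 128
x = tf.random.normal([batch, timesteps, features])

# Built-in LSTM: with default arguments on a GPU this uses the fused cuDNN kernel.
cudnn_lstm = tf.keras.layers.LSTM(units)

# Generic RNN wrapper around an LSTMCell: same math, plain (non-cuDNN) ops.
generic_lstm = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(units))

def bench(layer, runs=10):
    layer(x)  # warm-up call: builds weights and compiles kernels
    start = time.time()
    for _ in range(runs):
        layer(x)
    return (time.time() - start) / runs

print("cuDNN-backed LSTM   :", bench(cudnn_lstm), "s/step")
print("generic RNN(LSTMCell):", bench(generic_lstm), "s/step")
```

Both variants execute on the GPU; the difference the timings expose is only how much faster the fused cuDNN kernel is than generic op-by-op execution.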

Still, if TensorFlow runs in GPU mode, the complete computational graph will be executed on the GPU (to my knowledge there is not even a simple way to take a portion of the graph, i.e. an intermediate layer, and put it on the CPU).
