I am using an NVIDIA RTX GPU with Tensor Cores, and I want to make sure PyTorch/TensorFlow is utilizing them. I noticed in a few articles that Tensor Cores process float16, while PyTorch/TensorFlow uses float32 by default. Those articles introduced some library that does "mixed precision and distributed training", but that answer is somewhat old. I want to know whether PyTorch or TensorFlow now supports Tensor Core processing on the GPU out of the box.
Mixed precision is available in both libraries, and it is what engages the Tensor Cores.

For PyTorch it is torch.cuda.amp, the Automatic Mixed Precision package:
https://pytorch.org/docs/stable/amp.html
https://pytorch.org/docs/stable/notes/amp_examples.html
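A minimal sketch of the usual torch.cuda.amp training-loop pattern, assuming a CUDA-capable GPU (the model, data, and loss here are placeholders):

```python
# Minimal mixed-precision training loop with torch.cuda.amp.
# autocast runs eligible ops (e.g. matmuls) in float16, which is what
# engages the Tensor Cores; GradScaler scales the loss to avoid
# float16 gradient underflow. Falls back to plain float32 on CPU.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = torch.nn.Linear(64, 8).to(device)       # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

inputs = torch.randn(32, 64, device=device)     # placeholder data
targets = torch.randn(32, 8, device=device)

for _ in range(3):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_amp):
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)
    scaler.scale(loss).backward()   # backward on the scaled loss
    scaler.step(optimizer)          # unscales gradients, then steps
    scaler.update()                 # adjusts the scale factor
```

Note that autocast keeps the master weights and the loss in float32; only the forward-pass compute is cast down, so accuracy is largely preserved.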
TensorFlow documents it here: https://www.tensorflow.org/guide/mixed_precision
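In TensorFlow/Keras, a single global policy switches a model to mixed precision; a minimal sketch (the layer sizes are placeholders):

```python
# Mixed precision in TensorFlow/Keras via a global dtype policy.
# Under "mixed_float16", layers compute in float16 (Tensor Cores)
# while keeping their variables in float32 for stability.
import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(8),
    # Keras recommends a float32 output layer for numeric stability.
    tf.keras.layers.Activation("linear", dtype="float32"),
])
```

After setting the policy, the Dense layer's compute dtype is float16 while its variables stay float32, so the rest of the training code needs no changes.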