
How do I use Tensor Cores in PyTorch and TensorFlow?

I am using an NVIDIA RTX GPU with Tensor Cores, and I want to make sure PyTorch/TensorFlow is actually utilizing them. I noticed in a few articles that Tensor Cores process float16, while by default PyTorch/TensorFlow uses float32. Those articles introduced libraries for "mixed precision and distributed training", but that information is somewhat old. I want to know whether PyTorch or TensorFlow now supports Tensor Core processing out of the box.

Mixed precision is available in both libraries.

For PyTorch it is torch.cuda.amp, the Automatic Mixed Precision package:

https://pytorch.org/docs/stable/amp.html

https://pytorch.org/docs/stable/notes/amp_examples.html
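The AMP examples page above boils down to wrapping the forward pass in autocast and scaling the loss. Here is a minimal sketch with a toy linear model and random data (both assumptions, not from the original answer); it falls back to full precision when no CUDA device is available:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # autocast/GradScaler become no-ops on CPU

model = torch.nn.Linear(64, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# GradScaler rescales the loss so small fp16 gradients don't underflow.
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(32, 64, device=device)
y = torch.randint(0, 10, (32,), device=device)

for _ in range(3):
    optimizer.zero_grad()
    # Ops inside autocast run in float16 where it is safe, which is
    # what lets eligible matmuls/convolutions use Tensor Cores.
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = torch.nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()

print(loss.item())
```

On recent RTX GPUs no extra flags are needed beyond this: float16 matmuls of suitable shapes are dispatched to Tensor Cores automatically by cuDNN/cuBLAS.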

TensorFlow documents it here: https://www.tensorflow.org/guide/mixed_precision
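In TensorFlow/Keras the guide above comes down to setting one global dtype policy. A minimal sketch with a made-up two-layer model (the layer sizes and data are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Layers now compute in float16 (Tensor Core eligible on RTX GPUs)
# while keeping their variables in float32.
mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(16,)),
    # Keep the final layer in float32 for numeric stability,
    # as the mixed-precision guide recommends for outputs.
    layers.Dense(10, dtype="float32"),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

x = tf.random.normal((32, 16))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)

print(model.layers[0].compute_dtype)  # compute dtype under the policy
```

On a CPU-only machine this still runs but emits a warning, since mixed_float16 only pays off on GPUs with Tensor Cores.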

NVIDIA's Apex documentation is a guide to apex.amp (Automatic Mixed Precision), a tool that enables Tensor Core-accelerated training in only three lines of Python. Note that apex.amp has since been deprecated in favor of the native torch.cuda.amp shown above.

You can also check the quick-start guide for the apex API.
