I am aware that TensorFlow can explicitly place computation on a specific device with "/cpu:0" or "/gpu:0". However, this is hard-coded. Is there a way to iterate over all visible devices with a built-in API?
Here is what you would like to have:
import tensorflow as tf
from tensorflow.python.client import device_lib

def get_all_devices():
    # Returns the names of all devices visible to the local process,
    # e.g. "/device:CPU:0" or "/device:GPU:0".
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]

all_devices = get_all_devices()
for device_name in all_devices:
    with tf.device(device_name):
        # Recent TensorFlow versions report names like "/device:CPU:0"
        # (upper-case), so compare case-insensitively.
        if "cpu" in device_name.lower():
            # Do something
            pass
        if "gpu" in device_name.lower():
            # Do something else
            pass
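Since the device-name format has changed across TensorFlow versions ("/gpu:0" in older releases, "/device:GPU:0" in newer ones), the substring checks above are worth isolating. Below is a minimal, TensorFlow-free sketch of such a check; classify_device is a hypothetical helper name, not a TensorFlow API:

```python
def classify_device(device_name):
    """Return 'cpu', 'gpu', or 'other' for a TensorFlow device name.

    Handles both the older format ("/cpu:0") and the newer format
    ("/device:CPU:0") by matching case-insensitively.
    """
    name = device_name.lower()
    if "cpu" in name:
        return "cpu"
    if "gpu" in name:
        return "gpu"
    return "other"

# classify_device("/device:CPU:0") -> "cpu"
# classify_device("/gpu:0")        -> "gpu"
```

This keeps the device-dispatch logic in one place, so the loop body only branches on the returned label.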
This code is inspired by the top answer here: How to get current available GPUs in tensorflow?