
Limit GPU devices in Tensorflow

I am developing a Python application that uses TensorFlow together with another model that also uses GPUs. I have a PC with several GPUs (3x NVIDIA GTX 1080). Because every model tries to use all available GPUs, I get an OUT_OF_MEMORY_ERROR. I have found that you can assign a specific GPU to a Python script with

os.environ['CUDA_VISIBLE_DEVICES'] = '1'
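A minimal standalone sketch of this mechanism (assuming the variable must be set before TensorFlow initializes the GPUs in the process):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'  # hide every GPU except physical GPU 1

import tensorflow as tf  # any Session created from here on sees only that GPU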

Here is a snippet of my FCN class:

class FCN:
  def __init__(self):
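    # Restrict this process to physical GPU 1; only effective if CUDA has not been initialized yet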
    os.environ['CUDA_VISIBLE_DEVICES'] = '1'
    self.keep_probability = tf.placeholder(tf.float32, name="keep_probabilty")
    self.image = tf.placeholder(tf.float32, shape=[None, IMAGE_SIZE, IMAGE_SIZE, 3], name="input_image")
    self.annotation = tf.placeholder(tf.int32, shape=[None, IMAGE_SIZE, IMAGE_SIZE, 1], name="annotation")

    self.pred_annotation, logits = inference(self.image, self.keep_probability)
    tf.summary.image("input_image", self.image, max_outputs=2)
    tf.summary.image("ground_truth", tf.cast(self.annotation, tf.uint8), max_outputs=2)
    tf.summary.image("pred_annotation", tf.cast(self.pred_annotation, tf.uint8), max_outputs=2)
    self.loss = tf.reduce_mean((tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
                                                                               labels=tf.squeeze(self.annotation,
                                                                                                 squeeze_dims=[3]),
                                                                               name="entropy")))
    tf.summary.scalar("entropy", self.loss)

...

Inside the same file, FCN.py, I have a small main that uses the class, and when TensorFlow prints its output I can see that only 1 GPU is used, as I expect.

if __name__ == "__main__":
  fcn = FCN()
  fcn.train_model()

  images_dir = '/home/super/datasets/MeterDataset/full-dataset-gas-images/'
  for img_file in os.listdir(images_dir):
    fcn.segment(os.path.join(images_dir, img_file))

Output:

2018-01-09 11:31:57.351029: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:09:00.0
Total memory: 7.92GiB
Free memory: 7.60GiB
2018-01-09 11:31:57.351047: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2018-01-09 11:31:57.351051: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y 
2018-01-09 11:31:57.351057: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:09:00.0)

The problem arises when I try to instantiate the FCN object from another script.

def main(args):
  start_time = datetime.now()

  font = cv2.FONT_HERSHEY_SIMPLEX

  results_file = "../results.txt"
  if os.path.exists(results_file):
    os.remove(results_file)

  results_file = open(results_file, "a")

  fcn = FCN()

Here the creation of the object always uses all 3 GPUs instead of only the one assigned in the __init__() method.

Here is the undesired output:

2018-01-09 11:41:02.537548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 1 2 
2018-01-09 11:41:02.537555: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y Y Y 
2018-01-09 11:41:02.537558: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 1:   Y Y Y 
2018-01-09 11:41:02.537561: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 2:   Y Y Y 
2018-01-09 11:41:02.537567: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:0b:00.0)
2018-01-09 11:41:02.537571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080, pci bus id: 0000:09:00.0)
2018-01-09 11:41:02.537574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:2) -> (device: 2, name: GeForce GTX 1080, pci bus id: 0000:05:00.0)

Most likely, CUDA_VISIBLE_DEVICES is being set too late: it only takes effect if it is set before the CUDA runtime is initialized in the process, and in the second script TensorFlow has already seen all three GPUs by the time __init__() runs. Here's what you can do:

  • Run your script with the CUDA_VISIBLE_DEVICES environment variable already set, as discussed here:

     CUDA_VISIBLE_DEVICES=1 python another_script.py 
  • Provide an explicit configuration to the Session constructor:

     config = tf.ConfigProto(device_count={'GPU': 1})
     sess = tf.Session(config=config)

    ... to force TensorFlow to use only one GPU, no matter how many are available. You can also set a fine-grained list of devices via visible_device_list (see config.proto for the details).
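    For example, a minimal sketch of the visible_device_list option (TF 1.x API; the GPU index '1' is illustrative):

      import tensorflow as tf

      # Expose only physical GPU 1; inside this session it is addressed as /gpu:0
      gpu_options = tf.GPUOptions(visible_device_list='1')
      sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))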
