
CPU/GPU Memory Usage with Tensorflow

I want to run a Python script that uses Tensorflow on a server. When I ran it with no session configuration, the process allocated all of the GPU memory, preventing any other process from accessing the GPU.

The server specs are the following:

  • CPU: 2x 12cores@2.5 GHz,
  • RAM: 256GB,
  • Disks: 2x 240GB SSD, 6x 4TB@7200RPM,
  • GPU: 2x Nvidia Titan X.

This server is shared with colleagues, so I am not allowed to allocate all of the GPU memory.

On the Tensorflow website, I found these instructions for capping the amount of GPU memory a process uses:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)

I have two questions regarding this:

  1. If the allocated GPU memory is not enough, will the process automatically fall back to the CPU, or will it crash?
  2. What happens if a process wants to use the GPU but the GPU is already fully allocated?

Thank you.

  1. If the allocated GPU memory is not enough, TF will throw an out-of-memory (OOM) error and crash; it will not fall back to the CPU.

  2. TF will also crash with an OOM error in this case, since the memory it needs is already claimed by another process.
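To make the failure mode concrete, here is a minimal TF 1.x sketch of what that crash looks like and how it can be caught. The tiny matmul graph is only a stand-in for your real model; on a capped session, an allocation that exceeds the cap raises tf.errors.ResourceExhaustedError rather than silently moving work to the CPU. (This is a sketch; it needs a machine with TensorFlow 1.x and a GPU to actually run.)

```python
import tensorflow as tf  # TF 1.x API

# Cap this process at 40% of each visible GPU's memory.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4

# Stand-in graph; replace with your real model.
a = tf.random_normal([1000, 1000])
b = tf.random_normal([1000, 1000])
product = tf.matmul(a, b)

try:
    with tf.Session(config=config) as sess:
        result = sess.run(product)
except tf.errors.ResourceExhaustedError as e:
    # Raised when an allocation does not fit under the cap (or the GPU
    # is already full). There is no automatic CPU fallback.
    print("Out of GPU memory:", e)
```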

Tensorflow provides a few options as alternatives to its default behavior of allocating all available GPU memory (which it does to avoid memory fragmentation and operate more efficiently). These options are:

  • config.gpu_options.allow_growth - when set to True, TF allocates memory on demand as the process needs it, but never releases memory back to the system
  • config.gpu_options.per_process_gpu_memory_fraction - when set to a float between 0 and 1, TF statically allocates only that fraction of the available memory instead of all of it
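The two options above can be sketched side by side like this (TF 1.x API; typically you would pick one or the other, not both):

```python
import tensorflow as tf  # TF 1.x API

config = tf.ConfigProto()

# Option 1: start small and grow the allocation on demand.
# Note: memory is never released back once grown.
config.gpu_options.allow_growth = True

# Option 2: hard cap at a fixed fraction of each visible GPU's memory,
# e.g. 40% here.
# config.gpu_options.per_process_gpu_memory_fraction = 0.4

session = tf.Session(config=config)
```

On a shared machine, the fixed fraction gives colleagues a predictable amount of free memory, while allow_growth is convenient when your own footprint is small but hard to predict.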

See https://www.tensorflow.org/tutorials/using_gpu#allowing_gpu_memory_growth for more detail.
