
Freeing and Reusing GPU in Tensorflow

I would like to free and reuse the GPU while using Tensorflow in a Jupyter notebook.

I imagine a workflow like this:

  1. Make a TF calculation.
  2. Free the GPU.
  3. Wait a while.
  4. Step 1 again.

This is the code I use right now. Steps 1 to 3 work; step 4 does not:

import time

import tensorflow as tf
from numba import cuda 


def free_gpu():
    device = cuda.get_current_device()
    cuda.close()

def test_calc():
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])   
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

    # Run on the GPU
    c = tf.matmul(a, b)

test_calc()
free_gpu()
time.sleep(10)
test_calc()

If I run this code in Jupyter Notebook, my kernel just dies. Is there an alternative to cuda.get_current_device() and cuda.close() that frees the GPU while not breaking TF?

Yes, building somewhat off of what @talonmies said: do not bring numba into this whatsoever. It's basically incompatible with the TensorFlow API.

Here is a solution that completely frees the GPU. Basically, you launch the TF computation in a separate process, return any result you care about, and then close the process. TensorFlow notably has issues with freeing GPU memory.

from multiprocessing import Process, Queue
import tensorflow as tf

def test_calc(q):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

    # Run on the GPU
    c = tf.matmul(a, b)
    q.put(c.numpy())  # send the result back to the parent as a NumPy array

# The child process owns the TF/CUDA context; when it exits,
# all GPU memory it allocated is released.
q = Queue()
p = Process(target=test_calc, args=(q,))
p.start()
p.join()
result = q.get()
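
To get back to the original workflow (compute, free, wait, compute again), the same pattern can simply be repeated. Below is a minimal sketch along those lines; the run_once helper is not part of the original answer, and it assumes the default fork start method on Linux so that a function defined in the notebook can be handed to the child process:

import time
from multiprocessing import Process, Queue

import tensorflow as tf


def test_calc(q):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    q.put(tf.matmul(a, b).numpy())  # NumPy arrays can be pickled back to the parent


def run_once():
    # Hypothetical helper: run test_calc in a child process; the GPU memory
    # it allocates is released when the process exits.
    q = Queue()
    p = Process(target=test_calc, args=(q,))
    p.start()
    result = q.get()  # read before join so the child never blocks on a full queue
    p.join()
    return result


run_once()        # step 1: TF calculation
time.sleep(10)    # steps 2-3: GPU was freed with the child process; wait a while
run_once()        # step 4: works again, TF is re-initialised in a fresh process

Reading from the queue before joining avoids the stall that can occur when a large result is still buffered in the queue while the parent waits for the child to exit.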

Try this:

from numba import cuda

# Reset the current device's CUDA context instead of closing it
device = cuda.get_current_device()
device.reset()
