TensorFlow: use CPU and GPU independently

I have a simple issue: I have two small processes using TensorFlow 1.3.0 (GPU) with Python 3 on Windows 10 with CUDA 8. I want to run one on the GPU and the other on the CPU, completely segregated from each other (they should NOT cooperate!). Here is a small nonsense example:

import tensorflow as tf
import time
import sys

dim = 6096

# Device selection comes from the first CLI argument ("cpu" or "gpu"),
# defaulting to "gpu" if nothing is given.
try:
    device = sys.argv[1]
except IndexError:
    device = "gpu"

print("Running for " + device)
cur_graph = tf.Graph()

with cur_graph.as_default():
    with tf.device("/" + device + ":0"):
        # Three large random matrices and two chained matmuls,
        # just to produce some load on the chosen device.
        x = tf.Variable(tf.random_normal([dim, dim]), dtype=tf.float32)
        y = tf.Variable(tf.random_normal([dim, dim]), dtype=tf.float32)
        z = tf.Variable(tf.random_normal([dim, dim]), dtype=tf.float32)

        a = tf.matmul(x, y)
        b = tf.matmul(a, z)

        # Time session creation, variable initialization and one matmul.
        training_start = time.time()
        with tf.Session() as sess:
            init_op = tf.global_variables_initializer()
            sess.run(init_op)
            sess.run(a)
        training_time = time.time() - training_start
        print("Time: %5.3f" % training_time)

If I start it on CLI with

my_prog.py cpu

it takes ~10 seconds to finish and utilizes the CPU while the GPU is idling.

If I start it on CLI with

my_prog.py gpu

it takes ~1 second to finish and utilizes the GPU while the CPU is idling.
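(Side note: to double-check on which device the ops really end up, TensorFlow can log the placement of every node. A minimal, self-contained sketch of that check, separate from my script above:)

import tensorflow as tf

# Small standalone check (my addition, not part of the script above): let
# TensorFlow log on which device every op is placed.
with tf.Graph().as_default():
    with tf.device("/cpu:0"):  # or "/gpu:0"
        m = tf.matmul(tf.random_normal([512, 512]),
                      tf.random_normal([512, 512]))
    config = tf.ConfigProto(log_device_placement=True)
    with tf.Session(config=config) as sess:
        sess.run(m)
# The console then prints lines like
# "MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0"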

So far so good. Now I want to start it in two different CMD windows in parallel, and I expect both processes to work independently, utilizing the CPU and the GPU respectively. But I always get an exception from my GPU process:

G:\Workspace\Python>device_test.py gpu

Running for gpu
2017-11-05 17:27:52.940625: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-11-05 17:27:52.940879: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-05 17:27:53.373137: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 780
major: 3 minor: 5 memoryClockRate (GHz) 0.941
pciBusID 0000:01:00.0
Total memory: 3.00GiB
Free memory: 2.45GiB
2017-11-05 17:27:53.377292: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:976] DMA: 0
2017-11-05 17:27:53.393137: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:986] 0:   Y
2017-11-05 17:27:53.394838: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 780, pci bus id: 0000:01:00.0)
2017-11-05 17:27:54.406490: E C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_blas.cc:366] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
2017-11-05 17:27:54.431348: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\stream.cc:1756] attempting to perform BLAS operation using StreamExecutor without BLAS support

...

> InternalError (see above for traceback): Blas GEMM launch failed :
> a.shape=(6096, 6096), b.shape=(6096, 6096), m=6096, n=6096, k=6096
>          [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](Variable/read, Variable_1/read)]]
>          [[Node: MatMul/_1 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_9_MatMul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Is it possible to run TensorFlow computations independently on the CPU and on the GPU?

Eureka! I solved the issue, though I don't know exactly why it works... I added the following lines to my code:

[...]

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:

[...]

Now it works as expected.
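My understanding of why this helps (so others can judge whether it applies to them): by default, each TensorFlow process tries to map almost all of the free GPU memory when its first session is created, and it does so even if the graph is pinned to the CPU. So whichever of my two processes starts first grabs the card, and the second one can no longer allocate its cuBLAS handle, hence CUBLAS_STATUS_ALLOC_FAILED. With allow_growth each process only claims GPU memory as it actually needs it. A sketch of how the whole script could look with this, plus the (untested by me) option of hiding the GPU from the CPU-only process entirely via CUDA_VISIBLE_DEVICES:

import os
import sys

# Hide the GPU from the CPU-only process so it cannot claim any GPU memory.
# This has to happen before tensorflow is imported. (My addition, not tested
# on the exact setup above.)
device = sys.argv[1] if len(sys.argv) > 1 else "gpu"
if device == "cpu":
    os.environ["CUDA_VISIBLE_DEVICES"] = ""

import tensorflow as tf

dim = 6096
with tf.device("/" + device + ":0"):
    x = tf.Variable(tf.random_normal([dim, dim]))
    y = tf.Variable(tf.random_normal([dim, dim]))
    a = tf.matmul(x, y)

# Don't pre-allocate (almost) the whole GPU up front; grow on demand instead.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternative: cap this process at a fixed share of the GPU, e.g. 40%:
# config.gpu_options.per_process_gpu_memory_fraction = 0.4

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(a)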
