Tensorflow matmul calculations on GPU are slower than on CPU
I'm experimenting with GPU computations for the first time and was, of course, hoping for a big speed-up. However, with a basic example in TensorFlow, it was actually worse:
On cpu:0, each of the ten runs takes 2 seconds on average; gpu:0 takes 2.7 seconds, and gpu:1 is 50% worse than cpu:0 at 3 seconds.
Here's the code:
import tensorflow as tf
import numpy as np
import time
import random

for _ in range(10):
    with tf.Session() as sess:
        start = time.time()
        with tf.device('/gpu:0'):  # swap for 'cpu:0' or whatever
            a = tf.constant([random.random() for _ in xrange(1000 * 1000)], shape=[1000, 1000], name='a')
            b = tf.constant([random.random() for _ in xrange(1000 * 1000)], shape=[1000, 1000], name='b')
            c = tf.matmul(a, b)
            d = tf.matmul(a, c)
            e = tf.matmul(a, d)
            f = tf.matmul(a, e)
        for _ in range(1000):
            sess.run(f)
        end = time.time()
        print(end - start)
What am I observing here? Is the run time maybe dominated mainly by copying data between RAM and the GPU?
The way you generate the data is executed on the CPU (random.random() is a regular Python function, not a TF one). Also, executing it 10^6 times will be slower than requesting 10^6 random numbers in one run. Change the code to:
a = tf.random_uniform([1000, 1000], name='a')
b = tf.random_uniform([1000, 1000], name='b')
so that the data will be generated on the GPU in parallel and no time is wasted transferring it from RAM to the GPU.