How to split a model between 2 GPUs with Keras in TensorFlow?
Essentially, I am looking for something like

with tf.device('/device:GPU:0'):

for Keras. I want to place my operations on different GPUs. I am using a Sequential model along the following lines:
...
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
...
model.fit(train_images, train_labels, epochs=5)
Probably you can use

with K.tf.device('/gpu:1'):

as a context. Or, if the backend is TensorFlow, then the way you assign a GPU in tf should also work for Keras.