AssertionError: Could not compute output Tensor when using multi_gpu_model() in Keras
I have 2 Keras submodels (model_1, model_2) out of which I form my full model using keras.models.Model() by stacking them logically in "series". By this I mean that model_2 accepts the output of model_1 plus an extra input tensor, and the output of model_2 is the output of my full model. The full model is created successfully and I am also able to use compile/train/predict.
However, I want to parallelize the training of model by running it on 2 GPUs, so I use multi_gpu_model(), which fails with the error:
AssertionError: Could not compute output Tensor("model_2/Dense_Decoder/truediv:0", shape=(?, 33, 22), dtype=float32)
I have tried parallelizing the two submodels individually using multi_gpu_model(model_1, gpus=2) and multi_gpu_model(model_2, gpus=2), and both succeed. The problem appears only with the full model.
I am using Tensorflow 1.12.0 and Keras 2.2.4. A snippet that demonstrates the problem (at least on my machine) is:
from keras.layers import Input, Dense,TimeDistributed, BatchNormalization
from keras.layers import CuDNNLSTM as LSTM
from keras.models import Model
from keras.utils import multi_gpu_model
dec_layers = 2
codelayer_dim = 11
bn_momentum = 0.9
lstm_dim = 128
td_dense_dim = 0
output_dims = 22
dec_input_shape = [33, 44]
# MODEL 1
latent_input = Input(shape=(codelayer_dim,), name="Latent_Input")
# Initialize list of state tensors for the decoder
decoder_state_list = []
for dec_layer in range(dec_layers):
    # The tensors for the initial states of the decoder
    name = "Dense_h_" + str(dec_layer)
    h_decoder = Dense(lstm_dim, activation="relu", name=name)(latent_input)
    name = "BN_h_" + str(dec_layer)
    decoder_state_list.append(BatchNormalization(momentum=bn_momentum, name=name)(h_decoder))
    name = "Dense_c_" + str(dec_layer)
    c_decoder = Dense(lstm_dim, activation="relu", name=name)(latent_input)
    name = "BN_c_" + str(dec_layer)
    decoder_state_list.append(BatchNormalization(momentum=bn_momentum, name=name)(c_decoder))
# Define model_1
model_1 = Model(latent_input, decoder_state_list)
# MODEL 2
inputs = []
decoder_inputs = Input(shape=dec_input_shape, name="Decoder_Inputs")
inputs.append(decoder_inputs)
xo = decoder_inputs
for dec_layer in range(dec_layers):
    name = "Decoder_State_h_" + str(dec_layer)
    state_h = Input(shape=[lstm_dim], name=name)
    inputs.append(state_h)
    name = "Decoder_State_c_" + str(dec_layer)
    state_c = Input(shape=[lstm_dim], name=name)
    inputs.append(state_c)
    # RNN layer
    decoder_lstm = LSTM(lstm_dim,
                        return_sequences=True,
                        name="Decoder_LSTM_" + str(dec_layer))
    xo = decoder_lstm(xo, initial_state=[state_h, state_c])
    xo = BatchNormalization(momentum=bn_momentum, name="BN_Decoder_" + str(dec_layer))(xo)
    if td_dense_dim > 0:  # Squeeze LSTM interconnections using Dense layers
        xo = TimeDistributed(Dense(td_dense_dim), name="Time_Distributed_" + str(dec_layer))(xo)
# Final Dense layer to return probabilities
outputs = Dense(output_dims, activation='softmax', name="Dense_Decoder")(xo)
# Define model_2
model_2 = Model(inputs=inputs, outputs=[outputs])
# FULL MODEL
latent_input = Input(shape=(codelayer_dim,), name="Latent_Input")
decoder_inputs = Input(shape=dec_input_shape, name="Decoder_Inputs")
# Stack the two models
# Propagate tensors through 1st model
x = model_1(latent_input)
# Insert decoder_inputs as the first input of the 2nd model
x.insert(0, decoder_inputs)
# Propagate tensors through 2nd model
x = model_2(x)
# Define full model
model = Model(inputs=[latent_input, decoder_inputs], outputs=[x])
# Parallelize the model
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.summary()
Thanks a lot for any help / tips.
I found the solution to my problem, though I am not sure how to justify it.
The problem is caused by x.insert(0, decoder_inputs), which I substituted with x = [decoder_inputs] + x. Both seem to result in the same list of tensors, however multi_gpu_model complains in the first case.
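For reference, here is the relevant part of the full-model section with the substitution applied (a minimal sketch; everything above it is unchanged):

# FULL MODEL
latent_input = Input(shape=(codelayer_dim,), name="Latent_Input")
decoder_inputs = Input(shape=dec_input_shape, name="Decoder_Inputs")
# Propagate tensors through 1st model
x = model_1(latent_input)
# Build a fresh list with decoder_inputs first, instead of calling
# x.insert(0, decoder_inputs) on the list returned by model_1
x = [decoder_inputs] + x
# Propagate tensors through 2nd model
x = model_2(x)
# Define and parallelize the full model
model = Model(inputs=[latent_input, decoder_inputs], outputs=[x])
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.summary()

The only functional difference is that insert() mutates the list returned by model_1(latent_input) in place, while the concatenation builds a new list; that in-place mutation may be what multi_gpu_model stumbles over.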