
AssertionError: Could not compute output Tensor("softmax_layer/Identity:0", shape=(None, 27, 8870), dtype=float32)

I am trying to develop a chatbot with an attention mechanism, but it gives an error like the one below. My x_train input shape is (None, 27) and the output shape is (None, 27, 8870), but I can't work out what is causing the error.

from tensorflow.keras.layers import (Input, Embedding, LSTM, Bidirectional,
                                     Concatenate, Dense, TimeDistributed)
from tensorflow.keras.models import Model
# AttentionLayer is a custom Bahdanau-style attention layer, not defined in this snippet

def chatbot_model(embedding_size, max_sentence_length, vocab_size, embedding_matrix, batch_size=None):

  if batch_size:
    encoder_inputs = Input(batch_shape=(batch_size, max_sentence_length, ), name='encoder_inputs')
    decoder_inputs = Input(batch_shape=(batch_size, max_sentence_length, ), name='decoder_inputs')
  else:
    encoder_inputs = Input(shape=(max_sentence_length, ), name='encoder_inputs')
    decoder_inputs = Input(shape=(max_sentence_length, ), name='decoder_inputs')

  embedding_layer = Embedding(vocab_size, embedding_size, weights=[embedding_matrix], input_length=max_sentence_length)
  encoder_inputs_embed = embedding_layer(encoder_inputs)
  decoder_inputs_embed = embedding_layer(decoder_inputs)

  encoder_lstm = Bidirectional(LSTM(embedding_size, return_sequences=True, return_state=True, name='encoder_lstm'), name='bidirectional_encoder')
  encoder_out, encoder_fwd_state_h, encoder_fwd_state_c, encoder_back_state_h, encoder_back_state_c = encoder_lstm(encoder_inputs_embed)
  state_h = Concatenate()([encoder_fwd_state_h, encoder_back_state_h])
  state_c = Concatenate()([encoder_fwd_state_c, encoder_back_state_c])
  enc_states = [state_h, state_c]

  decoder_lstm = LSTM(embedding_size*2, return_sequences=True, return_state=True, name='decoder_lstm')
  decoder_out, decoder_state, *_ = decoder_lstm(
        decoder_inputs_embed, initial_state=enc_states
    )

  attn_layer = AttentionLayer(name='attention_layer')
  attn_out, attn_states = attn_layer([encoder_out, decoder_out])

  decoder_concat_input = Concatenate(axis=-1, name='concat_layer')([decoder_out, attn_out])

  print('decoder_concat_input', decoder_concat_input)

  dense = Dense(vocab_size, activation='softmax', name='softmax_layer')
  dense_time = TimeDistributed(dense, name='time_distributed_layer')
  decoder_pred = dense_time(decoder_concat_input)

  full_model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_pred)
  full_model.compile(optimizer='adam', loss='categorical_crossentropy')

  full_model.summary()

  """ Inference model """
  batch_size = 1

  encoder_inf_inputs = Input(batch_shape=(batch_size, max_sentence_length, ), name='encoder_inf_inputs')
  encoder_inf_inputs_embed = embedding_layer(encoder_inf_inputs)
  encoder_inf_out, encoder_inf_fwd_state_h, encoder_inf_fwd_state_c, encoder_inf_back_state_h, encoder_inf_back_state_c = encoder_lstm(encoder_inf_inputs_embed)
  inf_state_h = Concatenate()([encoder_inf_fwd_state_h, encoder_inf_back_state_h])
  inf_state_c = Concatenate()([encoder_inf_fwd_state_c, encoder_inf_back_state_c])
  enc_inf_states = [inf_state_h, inf_state_c]
  encoder_model = Model(inputs=encoder_inf_inputs, outputs=[encoder_inf_out, encoder_inf_fwd_state_h, encoder_inf_fwd_state_c, encoder_inf_back_state_h, encoder_inf_back_state_c])

  decoder_inf_inputs = Input(batch_shape=(batch_size, 1, ), name='decoder_word_inputs')
  decoder_inf_inputs_embed = embedding_layer(decoder_inf_inputs)
  encoder_inf_states = Input(batch_shape=(batch_size, max_sentence_length, 2*embedding_size), name='encoder_inf_states')
  decoder_init_state_h = Input(batch_shape=(batch_size, 2*embedding_size), name='decoder_init_state_h')
  decoder_init_state_c = Input(batch_shape=(batch_size, 2*embedding_size), name='decoder_init_state_c')
  decoder_init_states = [decoder_init_state_h, decoder_init_state_c]

  decoder_inf_out, decoder_inf_state_h, decoder_inf_state_c = decoder_lstm(decoder_inf_inputs_embed, initial_state=decoder_init_states)
  decoder_inf_states = [decoder_inf_state_h, decoder_inf_state_c]
  attn_inf_out, attn_inf_states = attn_layer([encoder_inf_states, decoder_inf_out])
  decoder_inf_concat = Concatenate(axis=-1, name='concat')([decoder_inf_out, attn_inf_out])
  decoder_inf_pred = TimeDistributed(dense)(decoder_inf_concat)
  decoder_model = Model(inputs=[encoder_inf_states, decoder_init_states, decoder_inf_inputs],
                        outputs=[decoder_inf_pred, attn_inf_states, decoder_inf_states])

  return full_model, encoder_model, decoder_model

It gives an error like this:

AssertionError                            Traceback (most recent call last)

in () ----> 1 full_model.fit(x_train[:1000, :], outs, epochs=1, batch_size=BATCH_SIZE)

AssertionError: in user code:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function  *
    outputs = self.distribute_strategy.run(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:531 train_step  **
    y_pred = self(x, training=True)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:927 __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py:719 call
    convert_kwargs_to_constants=base_layer_utils.call_context().saving)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py:899 _run_internal_graph
    assert str(id(x)) in tensor_dict, 'Could not compute output ' + str(x)

AssertionError: Could not compute output Tensor("time_distributed_layer/Identity:0", shape=(None, 27, 8870), dtype=float32)

Is the model failing to predict after successful training, or is the model not yet compiled?

I posted an issue on the tensorflow github page.

Link to the issue

In my case the model isn't predicting the dataset after successful training. Follow that issue for more details. Thank you.
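
For reference, a minimal greedy-decoding sketch for the returned encoder_model / decoder_model is shown below. It is an illustration only: start_token_id, end_token_id and index_to_word are hypothetical names that do not appear in the original post, and it assumes the nested input/output lists of decoder_model are fed and returned in flattened order, which may need adjusting for a given TensorFlow version.

import numpy as np

def decode_sequence(input_seq, encoder_model, decoder_model,
                    start_token_id, end_token_id, index_to_word,
                    max_decode_len=27):
  # input_seq: integer token ids with shape (1, max_sentence_length)
  enc_out, fwd_h, fwd_c, back_h, back_c = encoder_model.predict(input_seq)
  # The decoder was initialised with concatenated forward/backward states during training
  state_h = np.concatenate([fwd_h, back_h], axis=-1)
  state_c = np.concatenate([fwd_c, back_c], axis=-1)

  target_seq = np.array([[start_token_id]])  # shape (1, 1): one token at a time
  decoded_words = []
  for _ in range(max_decode_len):
    # assumed flattened I/O: [enc_states, h, c, word] -> [pred, attn_states, h, c]
    dec_pred, _, state_h, state_c = decoder_model.predict(
        [enc_out, state_h, state_c, target_seq])
    token_id = int(np.argmax(dec_pred[0, -1, :]))
    if token_id == end_token_id:
      break
    decoded_words.append(index_to_word.get(token_id, ''))
    target_seq = np.array([[token_id]])  # feed the prediction back in
  return ' '.join(decoded_words)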

full_model.fit(x_train[:1000, :], outs, epochs=1, batch_size=BATCH_SIZE)

passes only one input, while you declared two inputs in

  full_model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_pred)
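
A minimal sketch of the corrected call, assuming decoder_input_data holds the teacher-forcing decoder sequences aligned with outs (the name decoder_input_data is hypothetical, not from the original post):

full_model.fit(
    [x_train[:1000, :], decoder_input_data[:1000, :]],  # one array per declared Input
    outs,
    epochs=1,
    batch_size=BATCH_SIZE)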
