
Cannot restore from checkpoint: bidirectional/backward_lstm/bias

I am trying to create a simple LSTM-based RNN in tensor2tensor.

The training seems to work so far, but I cannot restore the model. Trying to do so throws a NotFoundError pointing to a bias node of the LSTM:

NotFoundError: .. 

Key bidirectional/backward_lstm/bias not found in checkpoint

and I don't know why this is the case.

This was actually supposed to be a workaround for another issue, where I ran into a similar problem using an LSTM from tensor2tensor ( https://github.com/tensorflow/tensor2tensor/issues/1616 ).

Environment

$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.12.0
tensorboard==1.12.0
tensorflow-datasets==1.0.2
tensorflow-estimator==1.13.0
tensorflow-gpu==1.12.0
tensorflow-metadata==0.9.0
tensorflow-probability==0.5.0

Model body

# Imports assumed for this snippet (TF 1.12, tf.keras):
import tensorflow as tf
from tensorflow.keras.layers import LSTM, Activation, Bidirectional, concatenate, dot

def body(self, features):

    # Drop the dummy spatial axis: [batch, time, 1, depth] -> [batch, time, depth]
    inputs = features['inputs'][:, :, 0, :]

    hparams = self._hparams
    problem = hparams.problem
    encoders = problem.feature_info

    max_input_length = 350
    max_output_length = 350

    # Bidirectional encoder; 'concat' merge yields 2 * 128 = 256 features per step
    encoder = Bidirectional(LSTM(128, return_sequences=True, unroll=False),
                            merge_mode='concat')(inputs)
    # Last encoder time step, used to initialize both decoder states (h and c)
    encoder_last = encoder[:, -1, :]

    decoder = LSTM(256, return_sequences=True, unroll=False)(
        inputs, initial_state=[encoder_last, encoder_last])

    # Dot-product attention over the encoder outputs
    attention = dot([decoder, encoder], axes=[2, 2])
    attention = Activation('softmax', name='attention')(attention)

    context = dot([attention, encoder], axes=[2, 1])
    concat = concatenate([context, decoder])

    # Restore the spatial axis expected by tensor2tensor: [batch, time, 1, depth]
    return tf.expand_dims(concat, 2)
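
For context, a body like the one above would normally live on a T2TModel subclass registered with tensor2tensor's registry. A minimal sketch of that wiring (the class name here is hypothetical):

from tensor2tensor.utils import registry
from tensor2tensor.utils import t2t_model

@registry.register_model
class LstmAttention(t2t_model.T2TModel):  # hypothetical model name

    def body(self, features):
        ...  # the body shown above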

Full error

NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key while/lstm_keras/parallel_0_4/lstm_keras/lstm_keras/body/bidirectional/backward_lstm/bias not found in checkpoint
     [[node save/RestoreV2 (defined at /home/sfalk/tmp/pycharm_project_265/asr/model/persistence.py:282)  = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
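
To confirm which keys the checkpoint actually contains (and whether they carry the while/ prefix), the checkpoint can be inspected directly. A minimal sketch; the checkpoint directory is a placeholder:

import tensorflow as tf

ckpt = tf.train.latest_checkpoint('/path/to/output_dir')  # placeholder path
reader = tf.train.NewCheckpointReader(ckpt)
for name in sorted(reader.get_variable_to_shape_map()):
    if 'lstm' in name or 'bias' in name:
        print(name)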

Any idea what the issue might be and how to fix it?

This seems to be related to https://github.com/tensorflow/tensor2tensor/issues/1486 . "while" seems to get prepended to key names during restoration from a checkpoint with tensor2tensor. It appears to be an unaddressed bug; your input on GitHub would be appreciated.
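
If that is the cause, one possible workaround, sketched here under that assumption and not verified against tensor2tensor's restore path, is to restore with a tf.train.Saver whose var_list maps the prefix-free checkpoint keys back to the graph variables (build_restore_map is a hypothetical helper, and the checkpoint path is a placeholder):

import tensorflow as tf

def build_restore_map(prefix='while/'):
    # Hypothetical helper: map checkpoint key (prefix stripped) -> graph variable
    var_list = {}
    for var in tf.global_variables():
        name = var.op.name
        key = name[len(prefix):] if name.startswith(prefix) else name
        var_list[key] = var
    return var_list

saver = tf.train.Saver(var_list=build_restore_map())
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('/path/to/output_dir'))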

I would comment this if I could, but my reputation is too low. Cheers.
