How to implement a bidirectional wrapper in the functional API?
Does the bidirectional layer connect encoder to decoder, or decoder to decoder? These are the 3 parts of the encoder which feed into the decoders below.
# imports for the snippet; embed_layer, maxLen and vocab_size are defined elsewhere
from tensorflow.keras.layers import Input, LSTM, Dense, TimeDistributed
from tensorflow.keras.models import Model

# encoder layers
input_context = Input(shape=(maxLen,), dtype='int32', name='input_context')
input_ctx_embed = embed_layer(input_context)
encoder_lstm, h1, c1 = LSTM(256, return_state=True, return_sequences=True)(input_ctx_embed)
encoder_lstm2, h2, c2 = LSTM(256, return_state=True, return_sequences=True)(encoder_lstm)
_, h3, c3 = LSTM(256, return_state=True)(encoder_lstm2)
encoder_states = [h1, c1, h2, c2, h3, c3]

# decoder layers
input_target = Input(shape=(maxLen,), dtype='int32', name='input_target')
input_tar_embed = embed_layer(input_target)
# each decoder LSTM uses the final states of the matching encoder LSTM as its initial state
decoder_lstm, context_h, context_c = LSTM(256, return_state=True, return_sequences=True)(input_tar_embed, initial_state=[h1, c1])
decoder_lstm2, context_h2, context_c2 = LSTM(256, return_state=True, return_sequences=True)(decoder_lstm, initial_state=[h2, c2])
final, context_h3, context_c3 = LSTM(256, return_state=True, return_sequences=True)(decoder_lstm2, initial_state=[h3, c3])

dense_layer = Dense(vocab_size, activation='softmax')
output = TimeDistributed(dense_layer)(final)
# output = Dropout(0.3)(output)
model = Model([input_context, input_target], output)
Not sure where the bidirectional layer is in your code. If you would like to use keras.layers.LSTM() to build a bidirectional RNN structure without using keras.layers.Bidirectional(), there is a setting in keras.layers.LSTM() called go_backwards; its default is False, and setting it to True makes the LSTM process the sequence backwards.

And if you are just asking where to put a Bidirectional LSTM in an encoder-decoder structure, then my answer is: you can put it wherever you want, if that makes your model better.
If I mixed up anything, let me know.