
Understanding Keras LSTM Tensorboard Graph

I am confused about the graph I am getting in TensorBoard for my Keras LSTM network. I defined the network like this:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.regularizers import l2

model = Sequential()
model.add(LSTM(neurons, return_sequences=True, input_shape=(look_back, 2)))
# model.add(Bidirectional(LSTM(neurons, return_sequences=True), input_shape=(look_back, 2)))

# Eleven identical stacked LSTM layers, each with L2 regularization and dropout.
for _ in range(11):
    model.add(LSTM(neurons, return_sequences=True,
                   recurrent_regularizer=l2(weight_decay),
                   kernel_regularizer=l2(weight_decay),
                   bias_regularizer=l2(weight_decay),
                   dropout=dropout, recurrent_dropout=dropout))

model.add(LSTM(20, return_sequences=False,
               recurrent_regularizer=l2(weight_decay),
               kernel_regularizer=l2(weight_decay),
               bias_regularizer=l2(weight_decay),
               dropout=dropout, recurrent_dropout=dropout))
model.add(Dense(outputs, kernel_regularizer=l2(weight_decay),
                bias_regularizer=l2(weight_decay), activation='linear'))
# Note: 'accuracy' is not a meaningful metric for a regression loss.
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
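The question doesn't show how the graph was written out; presumably it came from the `TensorBoard` callback. A minimal sketch of that setup, using a tiny stand-in model (2 LSTM layers instead of 12) and random data, with `"./logs"` as a hypothetical log directory:

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import LSTM, Dense

# Tiny stand-in for the model in the question.
model = Sequential([
    LSTM(4, return_sequences=True, input_shape=(5, 2)),
    LSTM(4),
    Dense(1),
])
model.compile(loss="mean_squared_error", optimizer="adam")

# Random toy data, just to run one training epoch.
x = np.random.rand(8, 5, 2).astype("float32")
y = np.random.rand(8, 1).astype("float32")

# write_graph=True logs the model graph that TensorBoard renders.
history = model.fit(x, y, epochs=1, verbose=0,
                    callbacks=[TensorBoard(log_dir="./logs", write_graph=True)])
```

After training, `tensorboard --logdir ./logs` shows the graph in the Graphs tab.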

I thought that would give me a sequential model, where each LSTM takes the output of the previous LSTM. That is mostly what I see, but one of the LSTM layers also appears as an input to every single subsequent layer:

[screenshot: TensorBoard graph of the full model]

In the graph, lstm_2 appears to feed into every layer. I wouldn't have expected that. So my question is: is this expected, and if so, why?

Thanks.

I figured out why it is shown like that. It turns out that Keras creates a learning_phase placeholder and places it inside the second hidden layer's scope. The learning_phase tensor branches out to every single layer, but the LSTM's output itself does not. See this answer for more details.
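The fan-out makes sense once you notice that every layer here uses dropout: dropout is active during training and disabled at inference, so each layer must read the shared training/learning-phase flag. A minimal sketch (hypothetical layer sizes, modern `tf.keras`, where the flag is the `training` argument rather than a graph placeholder):

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Every dropout-carrying layer depends on the learning phase,
# which is why that single flag fans out to all layers in the graph.
model = Sequential([
    LSTM(4, return_sequences=True, input_shape=(5, 2), dropout=0.5),
    LSTM(4, dropout=0.5),
    Dense(1),
])

x = np.ones((8, 5, 2), dtype="float32")
train_out = model(x, training=True)   # dropout active
infer_out = model(x, training=False)  # dropout disabled
```

In TF1-era graph-mode Keras, where the question's screenshot comes from, that flag was a single backend placeholder (`keras.backend.learning_phase()`), and TensorBoard drew an edge from it to every layer that consumed it.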

Here's what the inside of my lstm_1 layer looks like in the TensorBoard graph:

[screenshot: expanded view of the lstm_1 node in TensorBoard]
