
How to feed output of keras LSTM layer into input layer?

I am fairly new to tensorflow and keras and have a question. I want to do time series prediction using an LSTM layer, with some modifications. I started with the example given in the tensorflow tutorial:

def build_LSTM(neurons, batch_size, history_size, features):
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.LSTM(neurons,
                                   batch_input_shape=(batch_size, history_size, features),
                                   stateful=True))
    model.add(tf.keras.layers.Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
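To make the tutorial's shape convention concrete, here is a minimal sketch (shapes are illustrative, not from the tutorial; `stateful` and `batch_input_shape` are left out for simplicity) showing that an input of shape (observations, time steps, features) yields one number per observation:

```python
import numpy as np
import tensorflow as tf

# Illustrative shapes: 4 observations, 10 time steps of history,
# 3 features per time step.
x = np.random.rand(4, 10, 3).astype("float32")

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(10, 3)),
    tf.keras.layers.LSTM(8),    # return_sequences defaults to False
    tf.keras.layers.Dense(1),   # one prediction per observation
])

y = model(x)
print(y.shape)  # (4, 1)
```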

In its current state, the model's input has the form (observations, time steps, features), and it returns a single number (the prediction for the next time step).

What I want to do is use the mode return_sequences=True in the LSTM layer.

Is it correct that this returns a tensor T of shape (time steps, features)?

Is there a way to transfer this tensor from one step (say observation = 1) to the next step (observation = 2)? I guess the corresponding graph would look like this:

[image: diagram of the proposed feedback graph]

To answer your question, "Is it correct that this returns a tensor T of shape (time steps, features)?"

Almost: with return_sequences=True the layer does return an output for every time step, but the resulting tensor has shape (batch size, time steps, units), where units is the number of LSTM units, not the number of input features.
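A quick sketch of that shape (the batch/step/feature sizes here are arbitrary, chosen only for illustration):

```python
import numpy as np
import tensorflow as tf

# (batch, time steps, features) = (4, 10, 3)
x = np.random.rand(4, 10, 3).astype("float32")

seq_lstm = tf.keras.layers.LSTM(8, return_sequences=True)
out = seq_lstm(x)
print(out.shape)  # (4, 10, 8): one 8-unit hidden state per time step
```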

As for the other question: is there a way to transfer this tensor from one step (say observation = 1) to the next step (observation = 2)?

This question is harder to answer. Technically, when you specify return_sequences=True, the layer already computes each time step in order, feeding the current state back to itself as the initial state for the next time step, until it has processed all your data and produced the tensor described in question 1. So if you want this tensor for further computation, for example summing the outputs of all odd time steps, that is possible. Moreover, if you want to pass your last state on to the next batch of inputs, you can achieve that with the stateful=True argument.
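The odd-time-step example mentioned above can be written as plain tensor slicing on the returned sequence (shapes again illustrative):

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(4, 10, 3).astype("float32")
T = tf.keras.layers.LSTM(8, return_sequences=True)(x)  # (4, 10, 8)

# Keep only the odd time steps (indices 1, 3, 5, 7, 9) and sum them.
odd_steps = T[:, 1::2, :]                    # (4, 5, 8)
summed = tf.reduce_sum(odd_steps, axis=1)    # (4, 8)
print(summed.shape)
```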

However, if you want to feed the output of the last time step into the current time step (something like closed-loop control), then regardless of the given model you need to create your own recurrent cell and use it with the RNN layer: custom_model = RNN(custom_recurrent_cell, return_sequences=True).
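One possible sketch of such a custom cell (the class name, feedback scheme, and all sizes are my own assumptions, not from the original answer): it keeps the previous step's scalar output as an extra state and concatenates it onto the current input before an inner LSTMCell.

```python
import numpy as np
import tensorflow as tf

class FeedbackCell(tf.keras.layers.Layer):
    """Hypothetical cell: feeds the previous step's output back in
    by concatenating it with the current input (one way to close
    the loop, not the only one)."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.inner = tf.keras.layers.LSTMCell(units)
        self.out = tf.keras.layers.Dense(1)
        # States: inner LSTM (h, c) plus the previous scalar output.
        self.state_size = [units, units, 1]
        self.output_size = 1

    def call(self, inputs, states):
        h, c, prev_out = states
        # Close the loop: previous output becomes part of this input.
        x = tf.concat([inputs, prev_out], axis=-1)
        out_h, new_states = self.inner(x, [h, c])
        y = self.out(out_h)
        return y, [new_states[0], new_states[1], y]

x = np.random.rand(4, 10, 3).astype("float32")
rnn = tf.keras.layers.RNN(FeedbackCell(8), return_sequences=True)
print(rnn(x).shape)  # one scalar output per time step
```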

