How to feed output of keras LSTM layer into input layer?
I am fairly new to tensorflow and keras and have a question. I want to do time series prediction using an LSTM layer, with some modifications. I started with the example given in the tensorflow tutorial:
def build_LSTM(neurons, batch_size, history_size, features):
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.LSTM(neurons,
                                   batch_input_shape=(batch_size, history_size, features),
                                   stateful=True))
    model.add(tf.keras.layers.Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
In the current state from the example, the input for the model is of the form (observations, time steps, features), and it returns a single number (the prediction for the next time step).
What I want to do is use the mode return_sequences=True in the LSTM layer.
Is it correct that this returns a tensor T of shape (time steps, features)?
Is there a way to transfer this tensor from one step (let's say observation = 1) to the next step (observation = 2)? I guess the corresponding graph would look like this:
To answer your question, is it correct that this returns a tensor T of shape (time steps, features)?
The answer is yes: the output is a tensor containing the layer's output for each time step (strictly, of shape (batch, time steps, units)).
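As a quick check (the sizes below are arbitrary, not from your model), you can compare the output shapes with and without return_sequences=True; note the batch dimension is included:

```python
import numpy as np
import tensorflow as tf

batch_size, history_size, features, neurons = 4, 10, 3, 8
x = np.random.rand(batch_size, history_size, features).astype("float32")

# Without return_sequences: one output vector per whole sequence.
out_last = tf.keras.layers.LSTM(neurons)(x)
print(out_last.shape)  # (4, 8)

# With return_sequences=True: one output vector per time step.
out_seq = tf.keras.layers.LSTM(neurons, return_sequences=True)(x)
print(out_seq.shape)  # (4, 10, 8)
```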
Another question: is there a way to transfer this tensor from one step (let's say observation = 1) to the next step (observation = 2)?
This question is quite hard to answer. Technically, when you specify return_sequences=True, the layer automatically computes each time step and feeds its "current state" back to itself as the initial state when computing the next time step, until it has processed all of your data and produced the tensor output you asked about in question 1. So if you want this tensor for further computation, for example summing up the outputs from the odd time steps, that is possible. Moreover, if you want to pass your last state on to the next batch of input, you can achieve that with the stateful=True argument.
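A minimal sketch of the stateful case (layer sizes are arbitrary): the final LSTM state from one call becomes the initial state for the next call, until you reset it explicitly:

```python
import numpy as np
import tensorflow as tf

batch_size, history_size, features = 2, 5, 1

# stateful=True requires a fixed batch size via batch_input_shape.
model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8, stateful=True,
                         batch_input_shape=(batch_size, history_size, features)),
    tf.keras.layers.Dense(1),
])

x = np.random.rand(batch_size, history_size, features).astype("float32")

# Consecutive calls continue from the carried-over LSTM state.
y1 = model(x)
y2 = model(x)

# Reset before feeding an unrelated sequence.
model.reset_states()
```

When training such a model with fit(), the number of samples must be a multiple of batch_size and shuffle=False is needed to keep consecutive batches in order.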
However, if you want to feed the output of the last time step into the current time step (something like closed-loop control), regardless of the given model, you need to create your own recurrent cell and use it with the RNN layer: custom_model = RNN(custom_recurrent_cell, return_sequences=True).
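A sketch of such a custom cell (everything below is illustrative, not your model): it wraps an LSTMCell, keeps the previous step's output as part of the carried state, and concatenates it with the current input, so the RNN layer feeds each prediction back in at the next step:

```python
import tensorflow as tf

class FeedbackCell(tf.keras.layers.Layer):
    """Recurrent cell that feeds its previous output back in as input."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.lstm_cell = tf.keras.layers.LSTMCell(units)
        self.out_layer = tf.keras.layers.Dense(1)
        # State carried between steps: (h, c, previous output).
        self.state_size = [units, units, 1]
        self.output_size = 1

    def build(self, input_shape):
        # Each step sees the input concatenated with the previous output.
        self.lstm_cell.build((input_shape[0], input_shape[-1] + 1))
        self.out_layer.build((input_shape[0], self.units))
        self.built = True

    def call(self, inputs, states):
        h, c, prev_out = states
        step_input = tf.concat([inputs, prev_out], axis=-1)
        lstm_out, (new_h, new_c) = self.lstm_cell(step_input, [h, c])
        out = self.out_layer(lstm_out)
        return out, [new_h, new_c, out]

rnn = tf.keras.layers.RNN(FeedbackCell(8), return_sequences=True)
y = rnn(tf.random.normal((4, 10, 3)))  # input: (batch, time steps, features)
print(y.shape)  # (4, 10, 1)
```

Here the fed-back signal is the Dense prediction; whether you feed back the raw LSTM output or the final prediction is a design choice for your use case.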