
keras tensor reshaping (LSTM input shape error)

I am using an LSTM in Keras, with a Reshape layer before it, in the hope that I don't have to specify the shape for the LSTM layer.

The input is 84600 x 6:

84600 seconds in 2 months; 6 metrics/[labels] I'm measuring throughout the 2 months.

So far I have:

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Reshape((86400,1,6), input_shape=(84600, 6)))
model.add(tf.keras.layers.LSTM(128, activation='relu', input_shape=(x_train.shape), return_sequences=True))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

This throws an error:

ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 86400, 1, 6]

This is understandable: the batch dimension plus the 3 dimensions from the Reshape equals 4. However, when I change the reshape from

model.add(tf.keras.layers.Reshape((86400, 1, 6), input_shape=(84600, 6)))

to

model.add(tf.keras.layers.Reshape((86400, 6), input_shape=(84600, 6)))

it throws:

ValueError: Error when checking input: expected reshape_input to have 3 dimensions, but got array with shape (86400, 6)

It seems to ignore the batch size as an array element and treats the input as having just 2 indexes; it jumps from 4 dimensions to 2.
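For reference, this second ValueError is consistent with how Keras counts dimensions: input_shape excludes the batch axis, so the raw (86400, 6) array is itself missing one dimension. A minimal NumPy sketch (array values are placeholders) of adding a batch axis of one:

```python
import numpy as np

# One sequence of shape (timesteps, features), as in the question.
x = np.zeros((86400, 6))

# Keras' input_shape excludes the batch dimension, so a lone sequence
# must be wrapped in a batch of one: (batch_size, timesteps, features).
x_batched = np.expand_dims(x, axis=0)
print(x_batched.shape)  # (1, 86400, 6)
```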

The problem is that an LSTM takes 3 dimensions as input, and I can't seem to produce that. Ideally I want an 86400 x 1 x 6 array/tensor, so it becomes 84600 examples of 1x6 data.

Thank you very much!

The problem is that the way you are reshaping your input is incompatible with an LSTM layer. An LSTM layer expects an input with 3 dimensions: (batch_size, timesteps, features). However, you are feeding it an input with shape (batch_size, 84600, 1, 6).

In your case it seems like 84600 is the number of timesteps and 6 is the number of features per timestep. So it makes more sense to leave out the Reshape layer and simply use input_shape=(84600, 6) for your LSTM layer:

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(128, activation='relu', input_shape=(84600, 6), return_sequences=True))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
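As a sanity check (a sketch, assuming TensorFlow 2.x), you can build this model and inspect its output shape without running any data through it. It also shows that with return_sequences=True the Dense layer is applied independently at every timestep:

```python
import tensorflow as tf

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(128, activation='relu',
                               input_shape=(84600, 6),
                               return_sequences=True))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

# With return_sequences=True the LSTM emits a 128-vector per timestep,
# and the Dense layer maps each of them to a 10-way softmax, so the
# model output has shape (batch_size, 84600, 10).
print(model.output_shape)  # (None, 84600, 10)
```

If you instead want a single prediction per sequence rather than one per timestep, drop return_sequences=True so the LSTM returns only its final state.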
