
Keras in Python: LSTM Dimensions

I am building an LSTM network. My data looks as follows:

X_train.shape = (134, 300000, 4)

X_train contains 134 sequences, each with 300000 timesteps and 4 features.

Y_train.shape = (134, 2)

Y_train contains 134 labels: [1, 0] for True and [0, 1] for False.
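For reference, here is a minimal sketch that builds placeholder arrays with exactly these shapes (the random values are purely illustrative, not my real dataset):

import numpy as np

# Placeholder data with the shapes described above (illustrative only;
# roughly 0.6 GB as float32 because of the 300000 timesteps).
X_train = np.random.rand(134, 300000, 4).astype("float32")   # (samples, timesteps, features)
Y_train = np.zeros((134, 2), dtype="float32")
Y_train[np.arange(134), np.random.randint(0, 2, size=134)] = 1.0  # one-hot rows: [1, 0] = True, [0, 1] = False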

Below is my model in Keras.

from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(4, input_shape=(300000, 4), return_sequences=True))
model.compile(loss='categorical_crossentropy', optimizer='adam')

Whenever I run the model, I get the following error:

Error when checking target: expected lstm_52 to have 3 dimensions, but got array with shape (113, 2)

The error seems to be related to my Y_train data, as its shape is (113, 2).

Thank you!

The output shape of your LSTM layer is (batch_size, 300000, 4) (because of return_sequences=True). Therefore your model expects the target y_train to have 3 dimensions, but you are passing an array with only 2 dimensions, (batch_size, 2).

You probably want to use return_sequences=False instead. In this case the output shape of the LSTM layer will be (batch_size, 4). Moreover, you should add a final softmax layer to your model in order to get the desired output shape of (batch_size, 2):

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(4, input_shape=(300000, 4), return_sequences=False))
model.add(Dense(2, activation='softmax'))  # 2 neurons because you have 2 classes
model.compile(loss='categorical_crossentropy', optimizer='adam')
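As a quick sanity check, here is a small sketch (assuming the corrected model above and data with the shapes from your question) that confirms the output shape and runs training; the batch_size and epochs values are arbitrary illustrative choices, not taken from your setup:

# The last Dense layer should now report an output shape of (None, 2),
# which matches Y_train's shape of (134, 2).
model.summary()

# Training should run without the dimension error.
# batch_size and epochs below are arbitrary illustrative values.
model.fit(X_train, Y_train, batch_size=8, epochs=5)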
