
LSTM autoencoder issue with input dimensions in Keras

I am trying to make an autoencoder with Keras, and I am getting the following error:

ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (480, 7)

This is the relevant data info:

df.shape => (480, 7)

from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps = 15
dim = 7
lH = LossHistory()  # custom loss-logging callback defined elsewhere

model = Sequential()
model.add(LSTM(50, input_shape=(timesteps, dim), return_sequences=True))
model.add(Dense(dim))
model.compile(loss='mae', optimizer='adam')

and here is the problem when calling fit:

model.fit(data, data, epochs=20, batch_size=100, validation_data=(data, data), verbose=0, shuffle=False, callbacks=[lH])

From this link, you can set up an autoencoder as follows:

from keras.models import Model
from keras.layers import Input, LSTM, RepeatVector

inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim)(inputs)

decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)

sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)

But you should decide on the number of time steps per sample. For example, if you decide to have 10 steps per sample, you can chop your whole data of 480 observations into 48 samples, each with 10 time steps. The input shape would then be (48, 10, 7).
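
A minimal end-to-end sketch of that reshaping, assuming non-overlapping windows of 10 steps; the random array stands in for df.values from the question, and the latent_dim of 32, the batch size of 8, and the 'mae'/'adam' compile settings are illustrative choices rather than part of the answer:

import numpy as np
from keras.models import Model
from keras.layers import Input, LSTM, RepeatVector

timesteps = 10    # steps per sample, as suggested above
input_dim = 7     # features per step, matching df.shape[1]
latent_dim = 32   # arbitrary size of the encoded representation

# Stand-in for df.values from the question; any (480, 7) float array works here.
values = np.random.rand(480, input_dim).astype('float32')

# 480 rows chopped into 48 non-overlapping windows of 10 steps each.
data = values.reshape(-1, timesteps, input_dim)   # -> (48, 10, 7)

inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim)(inputs)                          # (batch, latent_dim)
decoded = RepeatVector(timesteps)(encoded)                  # (batch, timesteps, latent_dim)
decoded = LSTM(input_dim, return_sequences=True)(decoded)   # (batch, timesteps, input_dim)
sequence_autoencoder = Model(inputs, decoded)

# The autoencoder reconstructs its own input, so the same array is both x and y.
sequence_autoencoder.compile(loss='mae', optimizer='adam')
sequence_autoencoder.fit(data, data, epochs=20, batch_size=8, shuffle=False)

Because the decoder LSTM uses return_sequences=True, its output has the same (timesteps, input_dim) shape as the input, which is what allows the model to be trained against its own input.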
