
Understanding the role of timesteps in Keras LSTM

Given a dataset of 10232 signals, each of dimension 200K, i.e. shape (10232, 200000), intended for classification. My understanding of the Keras LSTM is that it accepts data in the format (samples, timesteps, features). After reading many articles, it turns out that the Keras LSTM accepts only 3D input, so we should first expand the input data from 2D to 3D. The following is a code snippet:

import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dropout, Dense
from tensorflow.keras.models import Model

X = np.expand_dims(X, -1)  # (10232, 200000) -> (10232, 200000, 1)
input_layer = Input(shape=(X.shape[1], X.shape[2]))  # (timesteps, features) = (200000, 1)
lstm_ = LSTM(64, return_sequences=True)(input_layer)
lstm_ = Dropout(0.2)(lstm_)
lstm_ = LSTM(32, return_sequences=True)(lstm_)
lstm_ = Dropout(0.2)(lstm_)
lstm_ = LSTM(8)(lstm_)
output_layer = Dense(1, activation='sigmoid')(lstm_)
model = Model(inputs=input_layer, outputs=output_layer)

This makes the number of timesteps equal to the length of each time series, X.shape[1]. My question is: what happens if I choose timesteps=1000, i.e., Input(shape=(1000, X.shape[2]))? I assume this means each time series, originally of length 200K, is somehow divided into samples of length 1000 each. Would Keras create these length-1000 samples for me, or should I perform that split manually?

Thank you

If you make this change, you should feed the network sequences of length 1000 instead of 200K. Keras will not split them automatically; you have to do it manually. You also need to prepare a label for every sequence of 1000.
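A minimal sketch of that manual split, using NumPy's reshape to cut each long signal into non-overlapping windows of 1000 and np.repeat to give every window the label of its parent signal. The array sizes here are small stand-ins for the (10232, 200000) dataset; the window length of 1000 matches the question:

```python
import numpy as np

# Stand-in data: 8 signals of length 200,000 (the real data is 10232 signals).
X = np.random.randn(8, 200000).astype("float32")
y = np.random.randint(0, 2, size=(8,))  # one binary label per original signal

window = 1000
n_windows = X.shape[1] // window  # 200 windows per signal

# Cut each signal into consecutive non-overlapping windows of 1000,
# then add the trailing feature axis the LSTM expects:
# (samples, timesteps, features) = (8 * 200, 1000, 1).
X_windows = X.reshape(X.shape[0] * n_windows, window, 1)

# Each window inherits the label of the signal it came from.
y_windows = np.repeat(y, n_windows)

print(X_windows.shape)  # (1600, 1000, 1)
print(y_windows.shape)  # (1600,)
```

Because reshape uses row-major order, each row of X_windows is a contiguous 1000-sample slice of one original signal, so no data is shuffled across signals. At prediction time you would aggregate the per-window outputs (e.g. average the sigmoid scores) to get one decision per original signal.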

