
How to combine LSTM and CNN in timeseries classification

Most commonly, CNNs are used when the data are images. However, I have seen that CNNs are sometimes also used for time series. Therefore, I tried both LSTM and CNN models separately for my time-series classification problem. My two models are as follows.

LSTM:

model = Sequential()
model.add(LSTM(200, input_shape=(25,3)))
model.add(Dense(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

CNN:

model = Sequential()
model.add(Conv1D(200, kernel_size=3, input_shape=(25,3)))
model.add(Conv1D(200, kernel_size=2))
model.add(GlobalMaxPooling1D())
model.add(Dense(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

I think LSTMs and CNNs have their own unique characteristics, and combining the two in my prediction could produce better results. However, I am struggling to find a suitable resource that fits my problem.

Is it possible to do this for my problem? If so, how can I do it? Will it produce better results?

I am happy to provide more details if needed.

EDIT:

My problem setting is as follows. I have a dataset with about 5000 data points. Each data point has 3 time series, each exactly 25 values long. My label is 1 or 0 (i.e., binary classification). More specifically, my dataset looks as follows.

node, time_series1, time_series2, time_series3, Label
n1, [1.2, 2.5, 3.7, 4.2, ..., 5.6, 8.8], [6.2, 5.2, 4.7, 3.2, ..., 2.6, 1.8], [1.0, 2.8, 3.9, 4.1, ..., 5.2, 8.6], 1
n2, [5.2, 4.5, 3.7, 2.2, ..., 1.6, 0.8], [8.2, 7.5, 6.7, 5.2, ..., 4.6, 1.8], [1.2, 2.5, 3.7, 4.2, ..., 5.2, 8.5], 0
and so on.

I input these data to my LSTM and CNN models.
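For completeness, this is roughly how I stack the three series into the (25, 3) input shape that both models expect (a minimal sketch; series1, series2, series3 and labels are placeholders standing in for my actual loading code):

import numpy as np

# Placeholder arrays standing in for the real data described above:
# 5000 nodes, each with three time series of length 25 and a 0/1 label.
series1 = np.random.rand(5000, 25)
series2 = np.random.rand(5000, 25)
series3 = np.random.rand(5000, 25)
labels = np.random.randint(0, 2, size=5000)

# Stack the three series along the last axis so each sample is (25, 3),
# matching input_shape=(25, 3) in both models.
X = np.stack([series1, series2, series3], axis=-1)
y = labels
print(X.shape)  # (5000, 25, 3)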

Have you tried just putting one layer after the other? This sounds pretty standard...

model = Sequential()
model.add(Conv1D(200, kernel_size=3, activation='relu', input_shape=(25,3)))  # any activation works here
model.add(LSTM(200))
model.add(Dense(100))
model.add(Dense(1, activation='sigmoid'))

Do you want to try the opposite?

model = Sequential()
model.add(LSTM(200, return_sequences=True, input_shape=(25,3)))
model.add(Conv1D(200, kernel_size=3, activation='relu'))  # any activation works here
model.add(GlobalMaxPooling1D())
model.add(Dense(100))
model.add(Dense(1, activation='sigmoid'))

Do you want a huge model?

model = Sequential()
model.add(Conv1D(15, kernel_size=3, activation='relu', input_shape=(25,3)))  # any activation
model.add(LSTM(30, return_sequences=True))
model.add(Conv1D(70, kernel_size=3, activation='relu'))
# ... more Conv1D / LSTM(return_sequences=True) blocks as desired ...
model.add(LSTM(100))
model.add(Dense(100))
model.add(Dense(1, activation='sigmoid'))

Try many things (see the sketch after this list):

  • Conv, LSTM
  • LSTM, Conv
  • Conv, Conv, .., Conv, LSTM, ..., LSTM
  • LSTM, LSTM, ..., Conv, Conv, ....
  • C, L, C, L, C, L, ....
  • L, C, L, C, L, C, ....
  • C, C, L, L, C, C, ....
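
If you want to sweep over several of these orderings without rewriting the model each time, a small helper along these lines can generate them (just a sketch; build_model and the 'C'/'L' spec format are made up here, not a standard API, and the layer sizes are arbitrary):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, LSTM, GlobalMaxPooling1D, Dense

def build_model(spec, input_shape=(25, 3)):
    """Build a stacked Conv1D/LSTM binary classifier from a spec like ['C', 'L'].

    Intermediate layers keep the time dimension so the next Conv1D/LSTM
    still receives a sequence; only the final layer collapses it.
    """
    model = Sequential()
    for i, kind in enumerate(spec):
        first, last = i == 0, i == len(spec) - 1
        kwargs = {'input_shape': input_shape} if first else {}
        if kind == 'C':
            model.add(Conv1D(64, kernel_size=3, padding='same', activation='relu', **kwargs))
        else:  # 'L'
            model.add(LSTM(64, return_sequences=not last, **kwargs))
    if spec[-1] == 'C':
        model.add(GlobalMaxPooling1D())  # collapse the time dimension if a Conv1D comes last
    model.add(Dense(100, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

# e.g. compare a few of the orderings listed above
for spec in (['C', 'L'], ['L', 'C'], ['C', 'C', 'L', 'L']):
    build_model(spec).summary()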

A two-sided model?

inputs = Input((25,3))

side1 = Bidirectional(LSTM(100, return_sequences=True))(inputs) #200 total units
side2 = Conv1D(200, kernel_size=3, padding='same', activation='tanh')(inputs)
     # kernel_size is required (3 here is arbitrary); padding='same' keeps the
     # same length, and tanh matches the LSTM side's output range

merged = Add()([side1, side2]) 
     #or Concatenate()([side1, side2]) if different number of units/channels/features

outputs = Conv1D(200, kernel_size=3)(merged)
outputs = GlobalMaxPooling1D()(outputs)
outputs = Dense(100)(outputs)
outputs = Dense(1, activation='sigmoid')(outputs)

model = Model(inputs, outputs)
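
Whichever variant you pick, training is the same as with your original models (sketch; X and y stand for your (5000, 25, 3) input array and 0/1 labels):

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2)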

You may want to look at LSTNet, which does exactly this: https://arxiv.org/abs/1703.07015 and https://github.com/laiguokun/LSTNet
