
4D input in LSTM layer in Keras

I have data with shape (10000, 20, 15, 4), where num_samples = 10000, num_timesteps = 20, height = 15, width = 4. So I have a 15x4 table that evolves over time. Here is the model I want to train on this data:

...
model.add((LSTM(nums-1,return_sequences=True,input_shape=(20,15,4), activation='relu')))
model.add((LSTM(nums-1,return_sequences=False,input_shape=(20,15,4), activation='tanh')))
model.add(Dense(15,activation='relu'))
...

However, I get the following error:

ValueError: Input 0 is incompatible with layer lstm_1: expected ndim=3, 
found ndim=4

How do I define an LSTM layer with a 4D input shape?

The LSTM layer accepts a 3D array as input, with shape (n_samples, n_timesteps, n_features). Since the features of each timestep in your data form a (15, 4) array, you first need to flatten them into a feature vector of length 60 and then pass the data to your model:

X_train = X_train.reshape(10000, 20, -1)

# ...
model.add(LSTM(...,input_shape=(20,15*4), ...)) # modify input_shape accordingly
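The reshape step above can be demonstrated with plain NumPy (a smaller sample count than 10000 is used here just to keep the example light):

```python
import numpy as np

# Dummy data standing in for the original (10000, 20, 15, 4) array
X_train = np.random.rand(100, 20, 15, 4)

# Collapse the trailing (15, 4) feature map of each timestep into a
# single vector of length 60, giving (samples, timesteps, features)
X_train = X_train.reshape(X_train.shape[0], 20, -1)

print(X_train.shape)  # (100, 20, 60)
```

The `-1` lets NumPy infer the last dimension (15 * 4 = 60) automatically.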

Alternatively, you can use a Flatten layer wrapped in a TimeDistributed layer as the first layer of your model to flatten each timestep:

model.add(TimeDistributed(Flatten(), input_shape=(20, 15, 4)))
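Putting this together, a minimal runnable sketch of the whole model could look like the following (the unit counts 32 and the activations are placeholders, not values from the question):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Flatten, TimeDistributed

model = Sequential()
# Flatten each (15, 4) timestep into a 60-dim vector: the 4D input
# (batch, 20, 15, 4) becomes 3D (batch, 20, 60), which the LSTM accepts
model.add(TimeDistributed(Flatten(), input_shape=(20, 15, 4)))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32, return_sequences=False))
model.add(Dense(15, activation='relu'))

model.summary()
```

With this approach the data can be fed to the model in its original 4D shape, with no explicit `reshape` call beforehand.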

Further, note that if each timestep (i.e. each (15, 4) array) is a feature map with a local spatial relationship between its elements, like an image patch, you can also use a ConvLSTM2D layer instead of an LSTM layer. Otherwise, flattening the timesteps and using LSTM is fine.
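For reference, a hedged sketch of the ConvLSTM2D alternative: ConvLSTM2D expects 5D input of shape (batch, time, rows, cols, channels), so the data would first need an explicit channel dimension, i.e. a reshape from (10000, 20, 15, 4) to (10000, 20, 15, 4, 1). The filter count (8) and kernel size are illustrative placeholders:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, Flatten, Dense

model = Sequential()
# Convolve over the (15, 4) spatial grid at each timestep;
# input is (batch, 20, 15, 4, 1) after adding a channel axis
model.add(ConvLSTM2D(8, kernel_size=(3, 2), input_shape=(20, 15, 4, 1)))
model.add(Flatten())
model.add(Dense(15, activation='relu'))
```

Whether this helps depends on whether the 15x4 grid really has spatial structure; for unrelated features, plain flattening plus LSTM is the simpler choice.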


As a side note: you only need to specify the input_shape argument on the first layer of the model. Specifying it on subsequent layers is redundant and will be ignored, since Keras infers their input shapes automatically.
