
How to feed into LSTM with 4 dimensional input?

I have a sequence input in this shape: (6000, 64, 100, 50)

The 6000 is just the number of sample sequences. Each sequence is 64 steps long.

I plan to fit this input into an LSTM using Keras.

I set up my input this way:

input = Input(shape=(64, 100, 50))

This gives me an input shape of (?, 64, 100, 50)

However, when I put input into my LSTM like so:

x = LSTM(256, return_sequences=True)(input)

I get this error:

Input 0 is incompatible with layer lstm_37: expected ndim=3, found ndim=4

This would have worked if my input shape were something like (?, 64, 100), but not when I have a 4th dimension.

Does this mean that an LSTM can only take a 3-dimensional input? How can I feed a 4- or even higher-dimensional input into an LSTM using Keras?

The short answer is that you can't feed a 4D tensor directly into an LSTM layer.

The Keras documentation provides the following information for recurrent layers:

Input shape

3D tensor with shape (batch_size, timesteps, input_dim) .

In your case you have 64 timesteps, where each step has shape (100, 50). The easiest way to get the model working is to flatten each timestep into a single vector of length 100*50 = 5000, so the data becomes 3D.

Numpy provides an easy function to do so:

import numpy

X = numpy.zeros((6000, 64, 100, 50), dtype=numpy.uint8)
X = numpy.reshape(X, (6000, 64, 100 * 50))  # -> (6000, 64, 5000)

Whether this is reasonable or not depends highly on your data.
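To see what the flattening does, here is a toy example with made-up small dimensions (2 sequences, 3 timesteps, (4, 5) features per step). NumPy's row-major reshape keeps all values of one timestep together, so each (4, 5) block simply becomes a flat 20-element feature vector:

```python
import numpy as np

# Toy batch: 2 sequences, 3 timesteps, each step a (4, 5) feature map.
X = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

# Flatten the last two axes: (2, 3, 4, 5) -> (2, 3, 20).
X_flat = X.reshape(2, 3, 4 * 5)

# Each flattened timestep is exactly the original (4, 5) block,
# laid out row by row.
assert np.array_equal(X_flat[0, 0], X[0, 0].ravel())
print(X_flat.shape)  # (2, 3, 20)
```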

You can also consider wrapping the LSTM in TimeDistributed(LSTM(...)), which applies the same inner LSTM to each of the 64 outer timesteps:

from keras.layers import Input, LSTM, TimeDistributed
from keras.models import Model

inp = Input(shape=(64, 100, 50))
x = TimeDistributed(LSTM(256, return_sequences=True))(inp)

model = Model(inp, x)
model.compile('adam', 'mse')
model.summary()
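For intuition on what this wrapper produces: TimeDistributed slices the input along the 64-axis and runs the same LSTM over the 100-axis of each slice, so with return_sequences=True the output stays 4D. A minimal shape check, assuming TensorFlow 2.x Keras is installed:

```python
from tensorflow.keras.layers import Input, LSTM, TimeDistributed
from tensorflow.keras.models import Model

inp = Input(shape=(64, 100, 50))
x = TimeDistributed(LSTM(256, return_sequences=True))(inp)
model = Model(inp, x)

# One shared LSTM runs over the 100 inner steps of each of the
# 64 outer steps, keeping all four dimensions in the output.
print(model.output_shape)  # (None, 64, 100, 256)
```

Whether the 100-axis really is a meaningful inner time axis (rather than just a feature dimension) again depends on your data.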
