
How to define input_dim for Keras recurrent layers properly

I am trying to train a neural network to predict a time series. I am using the Sequential model to define my network structure. It looks like this:

from keras.models import Sequential
from keras.layers import Dense, Activation, SimpleRNN, Embedding
from keras import optimizers
from keras import losses
model = Sequential()
#model.add(Dense(units=5, input_dim=3, activation = 'tanh'))
model.add(SimpleRNN(units=5, input_dim = 3, activation = 'tanh'))
model.add(Dense(units=16, activation='tanh'))
model.add(Dense(1, activation='linear'))
prop = optimizers.rmsprop(lr=0.01)
sgd = optimizers.sgd(lr=0.01, nesterov=True, momentum=0.005)
model.compile(optimizer=prop, loss='mean_squared_error')

It does not execute, and the returned error is:

ValueError: Error when checking input: expected simple_rnn_9_input to have 3 dimensions, but got array with shape (221079, 3)

When I use the commented-out Dense layer instead, everything works fine. I read the Keras documentation and see that they use an Embedding layer, although I do not really understand why an Embedding layer should be necessary to use recurrent layers like SimpleRNN or LSTM.

train_set is a 2D array with 4 columns - the 4th one is the target column, the rest are inputs.

Is there any simple way to use Keras' recurrent layers together with traditional Dense layers? I would appreciate an explanation and some code examples.

Best regards, Maks

I am no expert on this, but this may help: a SimpleRNN expects 3D input of shape (samples, timesteps, features), so the 2D data needs to be reshaped first.

import numpy as np

# Dummy data standing in for train_set: 10 rows, 4 columns
data = np.zeros((10, 4))

# Reshape the three input columns to (samples, timesteps, features) = (10, 1, 3)
X = data[:, 0:3].reshape(-1, 1, 3)
# Targets as a (samples, 1) column vector
y = data[:, 3].reshape(-1, 1)

print(X.shape)
print(y.shape)

prints:

(10, 1, 3)
(10, 1)

then:

model.fit(X, y)
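
For completeness, here is a rough end-to-end sketch (using random dummy data in place of train_set, and the layer sizes from the question). No Embedding layer is needed for numeric inputs; the key point is that the recurrent layer receives 3D input, declared here via input_shape=(timesteps, features):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN
from keras import optimizers

# Dummy stand-in for train_set: 4 columns, last one is the target
train_set = np.random.rand(100, 4)

# Reshape inputs to (samples, timesteps, features); here one timestep of 3 features
X = train_set[:, 0:3].reshape(-1, 1, 3)
y = train_set[:, 3].reshape(-1, 1)

model = Sequential()
# input_shape=(timesteps, features) replaces the ambiguous input_dim=3
model.add(SimpleRNN(units=5, input_shape=(1, 3), activation='tanh'))
model.add(Dense(units=16, activation='tanh'))
model.add(Dense(1, activation='linear'))

model.compile(optimizer=optimizers.RMSprop(lr=0.01), loss='mean_squared_error')
model.fit(X, y, epochs=2, batch_size=32)

The Embedding layer you saw in the documentation is only there to turn integer word indices into dense vectors; for numeric time-series features you can feed the reshaped array directly into the recurrent layer.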

