
Reduce output dimensions on an LSTM in Keras

Below is the architecture of my model. The data is a time series for which I need to predict only the last value, hence return_sequences=False.

But this is exactly what creates the problem. I have been able to run the network with return_sequences=True, but that is not what I need.

I need an input of size (32, 50, 88) = (batch_size, timesteps, features) and an output of size (32, 88) = (batch_size, labels).

Features and labels happen to have the same size, but that is irrelevant here.

The error produced by this code is:

ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (32, 50, 88)

which happens in the training phase (meaning the architecture itself is valid).

The data comes in chunks of (32, 50, 88) from a generator, and the labels have the same size. Since I use keras, I need to create the batches through a generator. I have also tried feeding a single (50, 88) sample, but it simply doesn't work.

How can I keep this kind of architecture, take an input of (32, 50, 88), but get only (32, 88) as output?

In short, I need the prediction at timestep + 50... I think.

from keras.layers import Input, Dense, LSTM, Bidirectional
from keras.models import Model
from keras.regularizers import l1
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint

def note_model():
    # batch of 32 sequences, 50 timesteps, 88 features
    visible = Input(shape=(50, 88), batch_shape=(32, 50, 88))
    hidden1 = Bidirectional(LSTM(200, stateful=False, return_sequences=False, kernel_regularizer=l1(10**(-4)), dropout=0.5))(visible)
    #flat = Flatten()(hidden1)
    output = Dense(88, activation='sigmoid')(hidden1)

    model = Model(inputs=visible, outputs=output)
    print(model.summary())
    return model


def train_note_model(model):
    checkpoint_path_notes = "1Layer-200units-loss=BCE-Model-{epoch:02d}-{val_acc:.2f}.hdf5"
    model.compile(optimizer='SGD', loss='binary_crossentropy', metrics=['accuracy'])  # mean_squared_error
    monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=10, verbose=0, mode='min')
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=10, min_lr=0.001)
    checkpoint = ModelCheckpoint(checkpoint_path_notes, monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1)
    model.fit_generator(training_generator(), steps_per_epoch=2,
                        callbacks=[monitor, reduce_lr, checkpoint],
                        validation_data=validation_generator(), validation_steps=2,
                        verbose=1, epochs=10, shuffle=True)

model_try = note_model()
train_note_model(model_try)

Your model is correct; the issue is raised when checking the target, which means that your training_generator is returning the wrong target shapes.

Have a look at print(next(training_generator())) and ensure that it returns a tuple of arrays with shapes (32, 50, 88) and (32, 88).
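As a minimal sketch of what that could look like, assuming your label chunks come in per-timestep form (32, 50, 88) and the target for each window is the last timestep. The generator below uses random placeholder data and is not your actual pipeline; only the shapes matter:

import numpy as np

def training_generator():
    # Yields (inputs, targets) batches shaped (32, 50, 88) and (32, 88).
    while True:
        x = np.random.rand(32, 50, 88).astype('float32')       # placeholder inputs
        y_seq = np.random.randint(0, 2, size=(32, 50, 88))      # placeholder per-timestep labels
        y = y_seq[:, -1, :]                                     # keep only the last timestep -> (32, 88)
        yield x, y

x, y = next(training_generator())
print(x.shape, y.shape)  # (32, 50, 88) (32, 88)

With return_sequences=False the Bidirectional LSTM emits a single vector per sequence, so the Dense layer outputs (32, 88); the generator's targets must match that shape, and the same applies to validation_generator.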
