
Keras LSTM val_loss always returns NaN during training

So I am training my model on stock data, using this code:

....



generator = batch_generator(
    sequence_length=SEQ, testsize=testsize, x_train_g=x_train, y_train_g=y_train)
test_generator = batch_generator(
    sequence_length=SEQ, testsize=testsize, x_train_g=x_test, y_train_g=y_test_reshaped)
x_batch, y_batch = next(generator)

...
model.add(Dense(num_y_signals, activation='sigmoid'))

model.compile(loss='mse', optimizer='rmsprop', metrics=["mae"])




history = model.fit_generator(generator=generator,
                              verbose=1,
                              validation_data=test_generator,
                              validation_steps=10,
                              epochs=80,
                              steps_per_epoch=20)

import numpy as np

def batch_generator(sequence_length, testsize, x_train_g, y_train_g, batch_size=256):

    warmup_steps = 30  # (unused in this snippet)
    num_x_signals = len(x_train_g[0])
    num_y_signals = 1
    while True:
        # Allocate one batch of input sequences.
        x_shape = (batch_size, sequence_length, num_x_signals)
        x_batch = np.zeros(shape=x_shape, dtype=np.float16)

        # Allocate the matching batch of target sequences.
        y_shape = (batch_size, sequence_length, num_y_signals)
        y_batch = np.zeros(shape=y_shape, dtype=np.float16)

        for i in range(batch_size):
            # Pick a random start index and copy one contiguous sequence.
            idx = np.random.randint(testsize - sequence_length)
            x_batch[i] = x_train_g[idx:idx+sequence_length]
            y_batch[i] = y_train_g[idx:idx+sequence_length]

        yield (x_batch, y_batch)

However, during training the validation loss is always NaN. I have tried different activation functions and optimizers, but nothing helped.

I believe the error is simple, but I just can't figure it out.

OK, I found the error: my validation set contained NaN values.
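
A minimal check for this, as a sketch using the question's variable names (x_test and y_test_reshaped are assumed to be 2-D NumPy arrays), would be:

import numpy as np

# Report whether either validation array contains any NaN values.
print("NaNs in x_test:", np.isnan(x_test).any())
print("NaNs in y_test_reshaped:", np.isnan(y_test_reshaped).any())

# One possible cleanup: keep only the rows where both arrays are NaN-free.
# (For time-series data, interpolation or forward-filling may fit better,
# since dropping rows breaks the contiguity the generator relies on.)
mask = ~np.isnan(x_test).any(axis=1) & ~np.isnan(y_test_reshaped).any(axis=1)
x_test = x_test[mask]
y_test_reshaped = y_test_reshaped[mask]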
