
Keras stateful LSTM gets low accuracy when testing on the training set

I am using a stateful LSTM for prediction. While training the LSTM, the reported accuracy is high. However, when I test the model on the training set itself, the accuracy is much lower! That really confuses me; I thought the two numbers should be the same. Here are my code and the output. Does anyone know why this happens? Thanks!

import keras
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
adam = keras.optimizers.Adam(lr=0.0001)
model.add(LSTM(512, batch_input_shape=(12, 1, 120), return_sequences=False, stateful=True))
model.add(Dense(8, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])

print('Train...')
for epoch in range(30):
    mean_tr_acc = []
    mean_tr_loss = []
    current_data, current_label, origin_label, is_shuffled = next(train_iter)
    for i in range(current_data.shape[1]):
        if i % 1000 == 0:
            print('current iter at {} with {} iteration'.format(i, epoch))
        # Data slice dims: [batch_size=12, time_step=1, feature_dim=120]
        data_slice = current_data[:, i, :]
        data_slice = np.expand_dims(data_slice, axis=1)
        label_slice = current_label[:, i, :]
        one_hot_labels = keras.utils.to_categorical(label_slice, num_classes=8)
        last_element = one_hot_labels[:, -1, :]
        tr_loss, tr_acc = model.train_on_batch(np.array(data_slice), np.array(last_element))
        mean_tr_acc.append(tr_acc)
        mean_tr_loss.append(tr_loss)
    model.reset_states()

    print('accuracy training = {}'.format(np.mean(mean_tr_acc)))
    print('loss training = {}'.format(np.mean(mean_tr_loss)))
    print('___________________________________')

    # Here, just evaluate the model on the same training data
    mean_te_acc = []
    mean_te_loss = []
    for i in range(current_data.shape[1]):
        if i % 1000 == 0:
            print('current val iter at {} with {} iteration'.format(i, epoch))
        data_slice = current_data[:, i, :]
        data_slice = np.expand_dims(data_slice, axis=1)
        label_slice = current_label[:, i, :]
        one_hot_labels = keras.utils.to_categorical(label_slice, num_classes=8)
        last_element = one_hot_labels[:, -1, :]
        te_loss, te_acc = model.test_on_batch(np.array(data_slice), np.array(last_element))
        mean_te_acc.append(te_acc)
        mean_te_loss.append(te_loss)
    model.reset_states()

    print('accuracy testing = {}'.format(np.mean(mean_te_acc)))
    print('loss testing = {}'.format(np.mean(mean_te_loss)))
    print('___________________________________')

Here is the program output:

current iter at 0 with 13 iteration
current iter at 1000 with 13 iteration
accuracy training = 0.991784930229
loss training = 0.0320105217397
___________________________________
Batch shuffled
current val iter at 0 with 13 iteration
current val iter at 1000 with 13 iteration
accuracy testing = 0.927557885647
loss testing = 0.230829760432
___________________________________

OK, here is the problem: in my code (stateful LSTM), the training accuracy reported during the epoch does not really reflect the model's actual accuracy on the training data. In other words, more iterations are needed before the model works on the validation set (before the model is actually trained). As usual, it was a silly mistake :P
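For reference, the gap between the two numbers is easy to reproduce in isolation: the "accuracy training" printed above is a mean over per-batch metrics collected while the weights (and, for a stateful LSTM, the hidden state) are still changing, whereas `test_on_batch` after the epoch measures only the finished model. The averaged number can land on either side of the clean evaluation. A toy sketch (plain Python, no Keras; the accuracy trajectory is made up) showing why the two need not agree:

```python
# Toy illustration: per-batch accuracy while the model is being updated,
# improving steadily over one epoch of 100 batches (made-up numbers).
per_batch_acc = [0.50 + 0.005 * i for i in range(100)]  # 0.500 -> 0.995

# What the training loop above reports: the mean over the whole epoch,
# mixing early (weak) and late (strong) versions of the model.
reported_train_acc = sum(per_batch_acc) / len(per_batch_acc)

# What an evaluation pass after the epoch measures: the final model only.
final_model_acc = per_batch_acc[-1]

print(reported_train_acc)  # 0.7475
print(final_model_acc)     # 0.995
```

In this sketch the averaged metric underestimates the clean evaluation; with stateful layers and shuffled batches the state carried between batches can push it in the other direction as well, which is why comparing the two numbers directly is misleading.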
