
Training and evaluating accuracies different in keras LSTM model: why does evaluation not produce same results as during training?

We are building an LSTM to model a physical optics process.

So far I have written the following Python code, using Keras with the TensorFlow backend.

# Define model
model = Sequential()
model.add(LSTM(128, batch_size=BATCH_SIZE,
               input_shape=(train_x.shape[1], train_x.shape[2]),
               return_sequences=True, stateful=False))
model.add(Dense(2, activation='softmax'))
opt = tf.keras.optimizers.Adam(lr=0.01, decay=1e-6)

#Compile model
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy']
)

model.fit(
    train_x, train_y,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    verbose=1)

# Now I want to make sure that we can predict the training set (using evaluate)
# and that it gives the same result as during training
score = model.evaluate(train_x, train_y, batch_size=BATCH_SIZE, verbose=0)
print(' Train accuracy:', score[1])

The output of the code is:

Epoch 1/10 5872/5872 [==============================] - 0s 81us/sample - loss: 0.6954 - acc: 0.4997
Epoch 2/10 5872/5872 [==============================] - 0s 13us/sample - loss: 0.6924 - acc: 0.5229 
Epoch 3/10 5872/5872 [==============================] - 0s 14us/sample - loss: 0.6910 - acc: 0.5256
Epoch 4/10 5872/5872 [==============================] - 0s 13us/sample - loss: 0.6906 - acc: 0.5243 
Epoch 5/10 5872/5872 [==============================] - 0s 13us/sample - loss: 0.6908 - acc: 0.5238

Train accuracy: 0.52480716

So the problem is that the final training accuracy (0.5238) should equal the accuracy returned by `evaluate` (0.52480716), but it does not. What am I doing wrong here? Any help is appreciated.

"Because your model changes over time, the loss over the first epochs is generally higher than over the last epochs."

https://keras.io/getting-started/faq/#why-is-the-training-loss-much-higher-than-the-testing-loss

For evaluation, the fully trained model is used, hence the higher accuracy.
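The effect can be sketched without Keras at all. Below is a minimal, hypothetical illustration (the per-batch accuracies are made up): the `acc` Keras prints for an epoch is a running average over batches, each computed with still-changing weights, while `evaluate` scores every batch with the final weights, so the two numbers generally differ even on the same data:

```python
import numpy as np

# Hypothetical per-batch accuracies over one epoch: the model improves
# as it trains, so early batches score lower than late ones.
batch_acc = np.linspace(0.45, 0.60, num=20)

# What Keras reports as the epoch's `acc`: the average over batches,
# each measured with *different* (still-improving) weights.
epoch_metric = batch_acc.mean()

# What `model.evaluate` approximates: every batch scored with the
# *final* weights, i.e. roughly the accuracy of the last batches.
eval_metric = batch_acc[-1]

print(epoch_metric)  # 0.525 — dragged down by the early batches
print(eval_metric)   # 0.60  — final-weights accuracy
```

The gap shrinks as training converges, which is why the two numbers in the question (0.5238 vs. 0.5248) are already close after five epochs.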

Thank you, but no, I still don't understand; see below.

model = Sequential()
model.add(LSTM(32, batch_size=BATCH_SIZE,
               input_shape=(train_x.shape[1], train_x.shape[2]),
               return_sequences=True, stateful=False))
model.add(Dense(2, activation='softmax'))
opt = tf.keras.optimizers.Adam(lr=0.01, decay=1e-6)

#Compile model
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy']
)

#Train model
model.fit(
    train_x, train_y,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    verbose=1,
    shuffle=False,
    validation_data=(validation_x, validation_y)
)

score = model.evaluate(validation_x, validation_y, batch_size=BATCH_SIZE, verbose=0)

print(' Validation accuracy:', score[1])

This yields:

 Epoch 1/5 5872/5872 [==============================] - 3s 554us/sample - loss: 0.6923 - acc: 0.5154 - val_loss: 0.7149 - val_acc: 0.4668
 Epoch 2/5 5872/5872 [==============================] - 2s 406us/sample - loss: 0.6895 - acc: 0.4983 - val_loss: 0.7218 - val_acc: 0.4821
 Epoch 3/5 5872/5872 [==============================] - 2s 404us/sample - loss: 0.6890 - acc: 0.4940 - val_loss: 0.7230 - val_acc: 0.4821
 Epoch 4/5 5872/5872 [==============================] - 2s 406us/sample - loss: 0.6883 - acc: 0.4928 - val_loss: 0.7336 - val_acc: 0.4592
 Epoch 5/5 5872/5872 [==============================] - 2s 404us/sample - loss: 0.6881 - acc: 0.4934 - val_loss: 0.7278 - val_acc: 0.4745

 Validation accuracy: 0.45663264
