Keras - Plot training, validation and test set accuracy
I want to plot the output of this simple neural network:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True)
model.test_on_batch(x_test, y_test)
model.metrics_names
I have plotted the training and validation accuracy and loss:
print(history.history.keys())
# "Accuracy"
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
Now I want to add and plot the accuracy on the test set, from model.test_on_batch(x_test, y_test), but from model.metrics_names I get the same name 'acc' that I already use to plot the training accuracy with plt.plot(history.history['acc']). How can I plot the accuracy of the test set?
import keras
from matplotlib import pyplot as plt
history = model1.fit(train_x, train_y,validation_split = 0.1, epochs=50, batch_size=4)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
It is the same because you are training on the test set, not on the training set. Don't do that; just train on the training set:
history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True)
becomes:
history = model.fit(x_train, y_train, nb_epoch=10, validation_split=0.2, shuffle=True)
Validate the model on the test data as shown below, and then plot the accuracy and loss:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, nb_epoch=10, validation_data=(X_test, y_test), shuffle=True)
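With validation_data=(X_test, y_test), Keras records the per-epoch test accuracy under the 'val_acc' key ('val_accuracy' in newer versions), so it can be plotted just like the validation curve. A minimal sketch, using hypothetical numbers standing in for history.history since no trained model is available here:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Hypothetical per-epoch values standing in for history.history;
# with validation_data=(X_test, y_test), 'val_acc' is the test accuracy.
history_dict = {
    "acc":     [0.61, 0.70, 0.76, 0.80, 0.83],
    "val_acc": [0.58, 0.66, 0.71, 0.73, 0.74],
}

plt.plot(history_dict["acc"], label="train")
plt.plot(history_dict["val_acc"], label="test")
plt.title("model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(loc="upper left")
plt.savefig("accuracy.png")
```

In a real run you would replace history_dict with history.history from model.fit.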
You can also do it like this....
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error',metrics=['accuracy'])
earlyStopCallBack = EarlyStopping(monitor='loss', patience=3)
history=regressor.fit(X_train, y_train, validation_data=(X_test, y_test), epochs = EPOCHS, batch_size = BATCHSIZE, callbacks=[earlyStopCallBack])
For plotting, I like Plotly... so:
import plotly.graph_objects as go
from plotly.subplots import make_subplots
# Create figure with secondary y-axis
fig = make_subplots(specs=[[{"secondary_y": True}]])
# Add traces
fig.add_trace(
    go.Scatter(y=history.history['val_loss'], name="val_loss"),
    secondary_y=False,
)
fig.add_trace(
    go.Scatter(y=history.history['loss'], name="loss"),
    secondary_y=False,
)
fig.add_trace(
    go.Scatter(y=history.history['val_accuracy'], name="val accuracy"),
    secondary_y=True,
)
fig.add_trace(
    go.Scatter(y=history.history['accuracy'], name="accuracy"),
    secondary_y=True,
)
# Add figure title
fig.update_layout(
    title_text="Loss/Accuracy of LSTM Model"
)
# Set x-axis title
fig.update_xaxes(title_text="Epoch")
# Set y-axes titles
fig.update_yaxes(title_text="<b>primary</b> Loss", secondary_y=False)
fig.update_yaxes(title_text="<b>secondary</b> Accuracy", secondary_y=True)
fig.show()
Neither approach is wrong. Note that the Plotly figure has two scales: one for loss and one for accuracy.
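The same dual-scale figure can be drawn in plain matplotlib with twinx(), which adds a second y-axis sharing the x-axis. A minimal sketch with hypothetical per-epoch numbers standing in for history.history:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Hypothetical values standing in for history.history
loss     = [0.90, 0.60, 0.45, 0.38, 0.33]
val_loss = [0.95, 0.70, 0.55, 0.50, 0.48]
acc      = [0.55, 0.68, 0.75, 0.79, 0.82]
val_acc  = [0.52, 0.63, 0.70, 0.72, 0.73]

fig, ax_loss = plt.subplots()
ax_acc = ax_loss.twinx()  # second y-axis on the right, shared x-axis

ax_loss.plot(loss, color="tab:red", label="loss")
ax_loss.plot(val_loss, color="tab:orange", label="val_loss")
ax_acc.plot(acc, color="tab:blue", label="accuracy")
ax_acc.plot(val_acc, color="tab:cyan", label="val accuracy")

ax_loss.set_xlabel("Epoch")
ax_loss.set_ylabel("Loss")
ax_acc.set_ylabel("Accuracy")
fig.legend(loc="upper right")
fig.suptitle("Loss/Accuracy of LSTM Model")
fig.savefig("loss_accuracy.png")
```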
accuracy and val_accuracy
accuracy:
plt.plot(model.history.history["accuracy"], label="training accuracy")
plt.plot(model.history.history["val_accuracy"], label="validation accuracy")
plt.legend()
plt.show()
loss:
plt.plot(model.history.history["loss"], label="training loss")
plt.plot(model.history.history["val_loss"], label="validation loss")
plt.legend()
plt.show()
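To get the test-set accuracy onto the same picture, one option is to evaluate once after training and draw the result as a horizontal reference line. A minimal sketch with hypothetical numbers; in practice the scalar would come from `test_loss, test_acc = model.evaluate(x_test, y_test)`:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Hypothetical values standing in for model.history.history and
# for the result of model.evaluate(x_test, y_test)
accuracy     = [0.64, 0.72, 0.78, 0.81, 0.84]
val_accuracy = [0.60, 0.67, 0.71, 0.73, 0.74]
test_acc = 0.72

plt.plot(accuracy, label="training accuracy")
plt.plot(val_accuracy, label="validation accuracy")
# Single test-set score drawn as a dashed horizontal line
plt.axhline(test_acc, linestyle="--", color="gray",
            label="test accuracy (model.evaluate)")
plt.legend()
plt.savefig("test_accuracy.png")
```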