
Plotting learning curve in keras gives KeyError: 'val_acc'

I was trying to plot training and test learning curves in Keras, but the following code produces KeyError: 'val_acc'.

The official documentation <https://keras.io/callbacks/> states that in order to use 'val_acc' I need to enable validation and accuracy monitoring, which I don't understand and don't know how to apply in my code.

Any help would be much appreciated. Thanks.

# imports assumed by the code below (written against the old Keras 1.x / scikit-learn < 0.18 API used here)
import numpy as np
import pandas
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.cross_validation import StratifiedKFold
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

seed = 7
np.random.seed(seed)

dataframe = pandas.read_csv("iris.csv", header=None)
dataset = dataframe.values
X = dataset[:,0:4].astype(float)
Y = dataset[:,4]

encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
dummy_y = np_utils.to_categorical(encoded_Y)

kfold = StratifiedKFold(y=Y, n_folds=10, shuffle=True, random_state=seed)
cvscores = []

for i, (train, test) in enumerate(kfold):

    model = Sequential()
    model.add(Dense(12, input_dim=4, init='uniform', activation='relu'))
    model.add(Dense(3, init='uniform', activation='sigmoid'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    history=model.fit(X[train], dummy_y[train], nb_epoch=200, batch_size=5, verbose=0)
    scores = model.evaluate(X[test], dummy_y[test], verbose=0)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
    cvscores.append(scores[1] * 100)

print( "%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores))) 


print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

It looks like in Keras with TensorFlow 2.0, val_acc was renamed to val_accuracy.

history_dict = history.history
print(history_dict.keys())

If you print the keys of history_dict, you will see which names your Keras version uses, e.g. dict_keys(['loss', 'acc', 'val_loss', 'val_acc']).

Then adjust your code accordingly:

acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
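
From there, a minimal plotting sketch (assuming matplotlib is imported as plt, as in the question, and that the key names above match your Keras version):

epochs = range(1, len(acc) + 1)

# accuracy curves
plt.plot(epochs, acc, label='train')
plt.plot(epochs, val_acc, label='validation')
plt.title('model accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(loc='upper left')
plt.show()

# loss curves
plt.plot(epochs, loss, label='train')
plt.plot(epochs, val_loss, label='validation')
plt.title('model loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(loc='upper left')
plt.show()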


You may need to enable validation on your training set; a common choice is to hold out about a third of the training data for validation. In your code, make the change given below:

history=model.fit(X[train], dummy_y[train],validation_split=0.33,nb_epoch=200, batch_size=5, verbose=0) 

It works!
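
Dropping that into the question's loop, a minimal sketch of the relevant part (everything else unchanged; once validation_split is set, the val_* entries appear in history.history):

history = model.fit(X[train], dummy_y[train],
                    validation_split=0.33,   # hold out 33% of this fold's training data
                    nb_epoch=200, batch_size=5, verbose=0)
print(history.history.keys())
# e.g. dict_keys(['loss', 'acc', 'val_loss', 'val_acc']) on older Keras,
# or dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy']) on Keras 2.3+ / TF 2.x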

The main point everyone misses is that this KeyError is related to how the metric is named in model.compile(...). You need to be consistent with the way you name your accuracy metric inside model.compile(..., metrics=['<metric name>']). The History callback object will contain a dictionary whose key-value pairs follow the names defined in metrics.

So, if your metric is metrics=['acc'], you access it in the history object with history.history['acc'], but if you define the metric as metrics=['accuracy'], you need to use history.history['accuracy'] to avoid the KeyError. I hope this helps.

NB: the Keras documentation lists all the metrics you can use.
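
A quick sketch of the difference (hypothetical, already-built model; only the metric name changes):

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
# -> history.history has keys 'acc' and 'val_acc' (plus 'loss' and 'val_loss')

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# -> history.history has keys 'accuracy' and 'val_accuracy' (plus 'loss' and 'val_loss')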

If you upgrade an older Keras version (e.g. 2.2.5) to 2.3.0 (or newer), which is compatible with TensorFlow 2.0, you might get such an error (e.g. KeyError: 'acc'). Both acc and val_acc have been renamed to accuracy and val_accuracy respectively. Renaming them in your script will solve the issue.
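
If the same script has to run on both older and newer Keras versions, one defensive sketch is to look up whichever key is present:

# plt and history as in the question
acc_key = 'accuracy' if 'accuracy' in history.history else 'acc'
plt.plot(history.history[acc_key])
plt.plot(history.history['val_' + acc_key])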

To get any val_* data (val_acc, val_loss, ...), you first need to enable validation.

First method (validates on data you provide explicitly):

model.fit(X_train, Y_train, validation_data=(X_test, Y_test))

Second method (validates on a fraction of the training data):

model.fit(X_train, Y_train, validation_split=0.5)
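
If you don't already have a held-out set to pass as validation_data, a common sketch (assuming scikit-learn is available, X and Y are your full arrays, and model is already compiled) is to split one off first:

from sklearn.model_selection import train_test_split

# hold out 20% of the data for validation
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
history = model.fit(X_train, Y_train, epochs=200, batch_size=5,
                    validation_data=(X_test, Y_test), verbose=0)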

I changed acc to accuracy and my problem was solved (TensorFlow 2+).

E.g.:

accuracy = history_dict['accuracy']
val_accuracy = history_dict['val_accuracy']

This error also happens when you specify validation_data=(X_test, Y_test) and your X_test and/or Y_test are empty. To check this, print the shapes of X_test and Y_test. In that case, model.fit(validation_data=(X_test, Y_test), ...) runs, but because the validation set is empty it never creates a val_loss key in the history.history dictionary.
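
A cheap guard (a sketch, using the same hypothetical X_train / Y_train / X_test / Y_test names) is to check the shapes before calling fit:

print(X_test.shape, Y_test.shape)   # the first dimension of both must be > 0
assert len(X_test) > 0 and len(Y_test) > 0, "validation set is empty"
history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=200, verbose=0)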

What worked for me was changing objective='val_accuracy' to objective=["val_accuracy"] in

import keras_tuner as kt   # KerasTuner (the import name is keras_tuner in recent releases)

tuner = kt.BayesianOptimization(model_builder,
                                objective=["val_accuracy"],
                                max_trials=80,
                                seed=123)
tuner.search(X_train, y_train, epochs=50, validation_split=0.2)

I have TensorFlow 2+.
