
Plot model loss and model accuracy from history.history in a Keras Sequential model

Plotting model loss and model accuracy for a Keras Sequential model seems straightforward. However, how can they be plotted when the data is split into X_train, Y_train, X_test, Y_test and cross-validation is used? I get errors because 'val_acc' is not found in history.history, which means I cannot plot the results on the test set.

Here is my code:

# Imports used by the code below
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop
from keras.constraints import maxnorm
from keras import regularizers
from sklearn.model_selection import StratifiedKFold
import matplotlib.pyplot as plt

# Create the model
def create_model(neurons=379, init_mode='uniform', activation='relu', inputDim=8040, dropout_rate=0.1, learn_rate=0.001, momentum=0.7, weight_constraint=6):
    model = Sequential()
    # Single hidden layer with a max-norm weight constraint and L2 regularization
    model.add(Dense(neurons, input_dim=inputDim, kernel_initializer=init_mode, activation=activation, kernel_constraint=maxnorm(weight_constraint), kernel_regularizer=regularizers.l2(0.002)))
    #model.add(Dense(200, activation=activation))  # optional second hidden layer
    #model.add(Dense(60, activation=activation))   # optional second hidden layer
    model.add(Dropout(dropout_rate))  # the dropout rate must lie in [0, 1)
    model.add(Dense(1, activation='sigmoid'))
    optimizer = RMSprop(lr=learn_rate)
    # Compile the model, passing the optimizer object so that learn_rate is actually used
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

model = create_model()  # weight_constraint = 3 or 4

seed = 7
# Define the k-fold cross-validation test harness
kfold = StratifiedKFold(n_splits=3, shuffle=True, random_state=seed)
cvscores = []
for train, test in kfold.split(X_train, Y_train):
    print("TRAIN:", train, "VALIDATION:", test)

    # Fit the model (note: no validation data is passed here)
    history = model.fit(X_train, Y_train, epochs=40, batch_size=50, verbose=0)

    # Plot model loss and model accuracy
    # list all data in history
    print(history.history.keys())
    # summarize history for accuracy
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])  # raises KeyError: 'val_acc'
    plt.title('model accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()
    # summarize history for loss
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])  # raises KeyError: 'val_loss'
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()

I would appreciate the necessary changes to get those plots for the test set as well.

According to the Keras documentation (keras.io), it seems that in order to use 'val_acc' and 'val_loss' you need to enable validation and accuracy monitoring. Doing so is as simple as adding a validation_split argument to model.fit in your code!

Instead of:

history = model.fit(X_train, Y_train, epochs=40, batch_size=50, verbose=0)

You would need to do something like:

history = model.fit(X_train, Y_train, validation_split=0.33, epochs=40, batch_size=50, verbose=0)

With validation_split=0.33, the last third of the training data is held out and evaluated at the end of each epoch, which is what produces the 'val_loss' and 'val_acc' entries in history.history.
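
Since the question already splits the data with StratifiedKFold, another option (just a sketch, assuming X_train and Y_train are NumPy arrays and that a fresh model is built for every fold) is to pass each held-out fold explicitly via validation_data, so the 'val_*' curves reflect exactly that fold:

for fold, (train_idx, val_idx) in enumerate(kfold.split(X_train, Y_train)):
    model = create_model()  # build a fresh model per fold so the folds stay independent
    history = model.fit(X_train[train_idx], Y_train[train_idx],
                        validation_data=(X_train[val_idx], Y_train[val_idx]),
                        epochs=40, batch_size=50, verbose=0)
    # history.history now contains 'val_loss' and 'val_acc' for this fold
    plt.plot(history.history['loss'], label='train fold %d' % fold)
    plt.plot(history.history['val_loss'], label='validation fold %d' % fold)
plt.title('model loss per fold')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(loc='upper left')
plt.show()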

Here's an additional potentially helpful source:

Plotting learning curve in keras gives KeyError: 'val_acc'
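
One more detail worth checking before plotting: the metric key names depend on the Keras version (older versions report 'acc'/'val_acc', newer ones 'accuracy'/'val_accuracy'), so it is worth printing history.history.keys() and selecting the key from what is actually there. A minimal sketch:

print(history.history.keys())
acc_key = 'acc' if 'acc' in history.history else 'accuracy'  # key name varies by Keras version
plt.plot(history.history[acc_key], label='train')
plt.plot(history.history['val_' + acc_key], label='validation')
plt.title('model accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(loc='upper left')
plt.show()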

Hope it helps!
