
Evaluating my model on a new unseen dataset

I have trained my model ( .fit() ) and am satisfied with its performance on the test split when making predictions ( .predict() ), so I saved the model to disk ( .save('model.h5') ).

Now I'm given a new unseen dataset and asked to evaluate my already saved model on it. I am required to report not only accuracy but also things like precision/recall, a confusion matrix, etc.

I then loaded my saved model ( .load_model('model.h5') ).

Question: What is the appropriate function to use to prepare a report of the model's performance on this new dataset? Should I use the .predict() function or .evaluate() ?

If you want the loss/accuracy or whatever other metrics you tracked during training, you need the .evaluate() method. If all you need is the actual probabilities or regression values, you need the .predict() method.
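A minimal sketch of the two calls side by side, assuming a Keras model; the tiny model and the random x_new / y_new arrays here are hypothetical stand-ins for your loaded model and new dataset:

```python
import numpy as np
from tensorflow import keras

# Stand-in for the model you would get back from keras.models.load_model('model.h5').
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical "new unseen dataset".
x_new = np.random.rand(10, 4).astype("float32")
y_new = np.random.randint(0, 2, size=10)

# .evaluate() -> loss plus the metrics declared in compile().
loss, acc = model.evaluate(x_new, y_new, verbose=0)

# .predict() -> raw outputs (here, per-class probabilities).
probs = model.predict(x_new, verbose=0)
y_pred = np.argmax(probs, axis=1)  # probabilities -> predicted class labels
```

For a full report (precision/recall, confusion matrix) you feed y_pred, not the .evaluate() result, into sklearn's metric functions.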

You can use sklearn's classification_report to generate all the relevant metrics.
Code:

import numpy as np
from sklearn.metrics import classification_report

preds = model.predict(x1)          # class probabilities for the new data
y_pred = np.argmax(preds, axis=1)  # convert probabilities to class labels
print(classification_report(y1, y_pred))

Output:

              precision    recall  f1-score   support

           0       0.71      0.08      0.15        59
           1       0.42      0.95      0.58        41

    accuracy                           0.44       100
   macro avg       0.57      0.52      0.37       100
weighted avg       0.59      0.44      0.33       100

You can see all available metrics here.
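The confusion matrix the question asks for comes from the same predicted labels; a short sketch with illustrative y1 / y_pred values standing in for your real labels and predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Illustrative true labels and predicted labels (replace with your own
# y1 and the y_pred obtained from np.argmax(model.predict(x1), axis=1)).
y1 = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])

# Rows are true classes, columns are predicted classes.
cm = confusion_matrix(y1, y_pred)
print(cm)  # -> [[2 1]
           #     [1 2]]
```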

model.evaluate() :
This method returns the loss and the metrics you defined in your compile method, which is generally not enough for a full report.
