
Keras save and load model, accuracy drops

Link to Colab: https://colab.research.google.com/drive/1u_jRl3uMlxEne667aCxt5Qh8eMlhme8V?usp=sharing

Link to training data: https://drive.google.com/file/d/1jcu7ZTnTF2obGb5OM4dD6T_GlU0sMWmL/view?usp=sharing

So I trained a model that reached about 70% accuracy, saved it to Drive, and deleted the runtime.

Then I restarted the runtime and loaded the model from Drive. Using the exact same code, the accuracy dropped to 40%-50%.

Why?

I tried saving and loading only the weights, the JSON architecture, the .h5 file, saving and loading with pickle, etc. It doesn't work. After I delete the runtime or open a new ipynb file and load the model, the accuracy is never the same.
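For reference, here is a minimal sketch of the save-then-reload pattern described above (the data, model, and Drive path are hypothetical placeholders, not the asker's actual code). Evaluating on the same fixed test split before and after reloading makes the two accuracies directly comparable:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data; replace with the real training CSV from Drive.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=1000)

# Fix the split (and any preprocessing) so the evaluation before and after
# reloading uses exactly the same held-out rows.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, verbose=0)

print("accuracy before save:", model.evaluate(X_test, y_test, verbose=0)[1])

# Save the full model (architecture + weights + optimizer state) to Drive.
model.save("/content/drive/MyDrive/my_model.h5")

# ... after restarting the runtime and remounting Drive:
reloaded = tf.keras.models.load_model("/content/drive/MyDrive/my_model.h5")
print("accuracy after load:", reloaded.evaluate(X_test, y_test, verbose=0)[1])
```

If the reloaded model is instead evaluated on a freshly re-shuffled split, or on data passed through a re-fitted tokenizer/scaler, the numbers can differ even though the saved weights are identical.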

I see your question and would like to clarify your understanding of the following:

  1. Your understanding of Model Training
  2. Your understanding of Training Accuracy and Validation Accuracy
  3. General rules of thumb regarding model evaluation.

When training your model, you do not want a "perfect" model accuracy (100% accuracy during training).

At the same time, you do not want your accuracies to be too low (anything below 70%).

During training, you want your training and validation accuracies to be as similar as possible. A large gap between them indicates one of two problems: overfitting or underfitting.

# Example 1
Epoch 12/60
44/44 [==============================] - 0s 5ms/step - loss: 0.5669 - acc: 0.7429 - val_loss: 0.6224 - val_acc: 0.7133

Overfitting means your model does not generalize to new and different information. Underfitting means your model has not learned the information, or was trained on bad information.

Now, I refer your attention to Example 1, an epoch I selected at random from your training. This epoch shows a decent training dynamic: the difference between your acc and val_acc is 0.0296 (2.96%).

However, your last epoch:

Epoch 60/60
44/44 [==============================] - 0s 6ms/step - loss: 0.0697 - acc: 1.0000 - val_loss: 0.5494 - val_acc: 0.7400

Has an acc difference of 0.2600 (26%). This tells me you have overfitted your model: it has more or less memorized your training dataset, so any new or different data passed into it is predicted less accurately. That is why, when you evaluate the reloaded model against a fresh shuffle of your dataset, the accuracy drops (the size of that drop does not have to match the acc/val_acc gap of any single epoch).
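One common way to stop training before that gap opens up is an EarlyStopping callback that watches val_loss. A minimal sketch, reusing the hypothetical model and split names from the earlier sketch (the patience value is illustrative, not tuned for this dataset):

```python
import tensorflow as tf

# Stop when val_loss stops improving, and restore the weights from the
# best epoch instead of the last (overfitted) one.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,                 # epochs to wait after the last improvement
    restore_best_weights=True,
)

history = model.fit(
    X_train, y_train,
    validation_split=0.2,       # hold out part of the training data for val_acc
    epochs=60,
    callbacks=[early_stop],
)
```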

For a macro view, you can refer to your model's training graph (the validation accuracy plot).
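If that plot is not already being generated, here is a minimal sketch for drawing it from the History object returned by model.fit (the metric key names depend on the Keras version, which is handled below):

```python
import matplotlib.pyplot as plt

# `history` is the object returned by model.fit(...).
# Older Keras versions log "acc"/"val_acc"; newer ones log "accuracy"/"val_accuracy".
acc_key = "accuracy" if "accuracy" in history.history else "acc"

plt.plot(history.history[acc_key], label="training accuracy")
plt.plot(history.history["val_" + acc_key], label="validation accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.title("Training vs validation accuracy")
plt.show()
```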

As a general rule of thumb, good training and validation accuracies fall between 70% (0.7) and 89% (0.89). This can change depending on your model requirements.

Disclaimer: information in this post may not be 100% accurate
