
Why does my val_loss fluctuate and reach enormous values while val_categorical_accuracy stays more or less constant across all epochs?

I adapted the AlexNet architecture, preprocessed the images, augmented the images, used the LeakyReLU activation function, added dropout, and tried adjusting the learning rate. However, none of these attempts improved my model's val_loss or val_categorical_accuracy. What should I do? Embedded below are my model's compilation, fitting, and training history visualized:

[Screenshot: my model.compile()]

[Screenshot: my model's fitting]

[Screenshot: training and validation loss graph]
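
For context, here is a rough, heavily simplified sketch of the kind of setup the screenshots describe (an AlexNet-style convnet with LeakyReLU and dropout); the layer sizes, optimizer, and learning rate below are assumptions for illustration, not the asker's actual code:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Simplified AlexNet-style model; the sizes are illustrative only.
model = keras.Sequential([
    keras.Input(shape=(227, 227, 3)),
    layers.Conv2D(96, 11, strides=4),
    layers.LeakyReLU(),                   # LeakyReLU as mentioned in the question
    layers.MaxPooling2D(3, strides=2),
    layers.Flatten(),
    layers.Dense(512),
    layers.LeakyReLU(),
    layers.Dropout(0.5),                  # dropout as mentioned in the question
    layers.Dense(10, activation='softmax'),
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # assumed optimizer/LR
    loss='categorical_crossentropy',
    metrics=['categorical_accuracy'],     # yields val_categorical_accuracy
)

# train_gen / val_gen would be the augmented image generators (assumed names):
# history = model.fit(train_gen, validation_data=val_gen, epochs=50)
```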

This can occur for several reasons; the most common are:


  • The validation set does not contain enough examples

  • The number of validation steps per epoch is too small

  • The validation set contains examples unlike anything in the training set

  • The training batch size is poorly chosen relative to the validation set size

  • The model is strongly biased (high bias) against some classes in the training set, and those classes are more likely to appear in the validation set (a quick class-distribution check is sketched below)

  • etc.
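
One quick sanity check for the class-related points above is to compare the class distribution of the training and validation sets. A minimal sketch, using dummy labels in place of the real y_train / y_val arrays (assumed names):

```python
import numpy as np

def class_distribution(labels, num_classes):
    """Fraction of examples per class for a 1-D array of integer labels."""
    counts = np.bincount(labels, minlength=num_classes)
    return counts / counts.sum()

# Dummy labels stand in for the real y_train / y_val (assumed to be
# integer class indices); replace them with your own label arrays.
num_classes = 5
y_train = np.random.randint(0, num_classes, size=5000)
y_val = np.random.randint(0, num_classes, size=500)

for c, (t, v) in enumerate(zip(class_distribution(y_train, num_classes),
                               class_distribution(y_val, num_classes))):
    print(f"class {c}: train {t:.2%}  val {v:.2%}")
```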


Checking n randomly chosen misclassified examples from the validation set is usually the best thing to do.
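
A minimal sketch of that check, assuming a trained Keras classifier and in-memory validation arrays (model, x_val, y_val are stand-in names; the tiny dummy model and random data below only make the snippet self-contained):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Tiny stand-in model and data so the snippet runs on its own; in practice
# use your trained model and real validation arrays instead.
num_classes = 5
model = keras.Sequential([
    keras.Input(shape=(8, 8, 3)),
    layers.Flatten(),
    layers.Dense(num_classes, activation='softmax'),
])
x_val = np.random.rand(100, 8, 8, 3).astype('float32')
y_val = keras.utils.to_categorical(np.random.randint(0, num_classes, 100), num_classes)

pred_classes = np.argmax(model.predict(x_val), axis=1)
true_classes = np.argmax(y_val, axis=1)
misclassified = np.flatnonzero(pred_classes != true_classes)

# Inspect n randomly chosen misclassified validation examples.
n = 10
for i in np.random.choice(misclassified, size=min(n, len(misclassified)), replace=False):
    print(f"index {i}: true={true_classes[i]}  predicted={pred_classes[i]}")
```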

Turns out I tweaked my model too much. I tried using plain AlexNet with the ReLU function and it did the job, albeit too slowly (175 s/epoch). I also didn't know that for a layer (both Conv and FC layers) to use an activation function, the function must be specified within the layer's own parameters (before, I had used .add() to attach the activation after said layer, and it turned out it didn't register as that layer's activation function, so the layer didn't learn anything throughout the fitting process). I really need to learn more about building CNNs, I guess.
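
For reference, the two ways of attaching an activation in Keras that are being contrasted above look roughly like this; the layer sizes are assumptions for illustration, not the asker's actual code:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Pattern the asker ended up using: the activation is passed as a
# parameter of the Conv/Dense layer itself.
model_a = keras.Sequential([
    keras.Input(shape=(227, 227, 3)),
    layers.Conv2D(96, 11, strides=4, activation='relu'),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])

# Pattern the asker moved away from: the activation is .add()-ed as a
# separate layer after the Conv/Dense layer.
model_b = keras.Sequential()
model_b.add(keras.Input(shape=(227, 227, 3)))
model_b.add(layers.Conv2D(96, 11, strides=4))
model_b.add(layers.LeakyReLU())
model_b.add(layers.Flatten())
model_b.add(layers.Dense(10, activation='softmax'))
```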
