
Constant training and validation accuracy while training CNN

I am running a CNN on a software defect dataset that I converted into images using the DeepInsight library. After every training epoch I see constant training and validation accuracy. I have not used any regularization technique such as Dropout or Batch Norm. Besides the constant training and validation accuracies, you can see that the model has high bias and high variance. I would be happy if you could suggest steps that would help me get past these constant values and improve my model's accuracy.

[Plot of per-epoch training and validation accuracy omitted]

It seems you have fallen into a local minimum. That is not unusual, but it may mean you have reached the limits of your current implementation. There are a couple of ways to improve the accuracies:

  1. Increase dataset size: more data usually results in better generalization and therefore higher accuracy. Datasets should be on the order of hundreds to thousands of items, especially for CNNs. A useful technique is image augmentation (see the sketch after this list).
  2. Increase model complexity: perhaps you have enough data but the model is not complex enough to solve the problem, so adding more depth to the model may help (a sketch of a deeper network follows below).
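For point 1, here is a minimal sketch of on-the-fly augmentation with Keras' `ImageDataGenerator`. The input shape (64×64 single-channel images) and the augmentation parameters are assumptions, not your actual setup; note also that DeepInsight images map features to fixed pixel positions, so only mild geometric transforms are likely to be safe, and flips are deliberately left out:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder data standing in for your DeepInsight images:
# 100 samples of 64x64 single-channel images with binary defect labels.
X_train = np.random.rand(100, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(100,))

# Mild geometric augmentation; flips are omitted because DeepInsight
# pixel positions encode feature identity, so mirroring may not be valid.
datagen = ImageDataGenerator(
    rotation_range=5,        # small random rotations (degrees)
    width_shift_range=0.05,  # horizontal shifts up to 5%
    height_shift_range=0.05, # vertical shifts up to 5%
    zoom_range=0.05,         # slight random zoom in/out
)

# Feed augmented batches to an already compiled model:
# model.fit(datagen.flow(X_train, y_train, batch_size=32), epochs=50)
```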
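For point 2, this is one way to add depth in Keras. The input shape, filter counts, and the binary sigmoid output are placeholders for illustration, not your actual architecture:

```python
from tensorflow.keras import layers, models

# A deeper stack of Conv/Pool blocks than a single-layer CNN;
# each block doubles the filter count as spatial size shrinks.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary defect prediction
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

If accuracy stays flat even after these changes, it is worth checking that the labels are balanced and that the optimizer's learning rate is not so high that training diverges to a trivial solution.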
