
Odd results for Image Recognition using AlexNet in Deep Learning

I am using a modified AlexNet (the cifar-10 model) available in the TensorFlow tutorials to do image recognition on images of mechanical parts, but I am getting very weird results.

The training accuracy reaches 100% very quickly, but the testing accuracy starts as high as 45% and then decreases rapidly to as low as 9%.

I am using a training set of 20,000 images and a testing set of 2,500 images across 8 categories. I train and test in batches of size 1024.

The accuracy and training loss are shown below, and you can see that:

  1. The testing accuracy starts as high as 45%, which doesn't make sense.
  2. The mechanical images are almost always classified as 'left bracket'.

[Plots: accuracy curves and classification results]

Your testing accuracy is decreasing; I think this happens because of overfitting. Try a simpler model or a regularization method to tune the model.
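To illustrate the regularization idea, here is a minimal sketch of L2 weight decay, one common regularization method: a penalty proportional to the squared weights is added to the training loss, discouraging the large weights that let a model memorize the training set. The names `l2_penalty`, `regularized_loss`, and `lmbda` are hypothetical; in TensorFlow you would typically add `tf.nn.l2_loss(w)` terms for each weight tensor instead.

```python
# Minimal sketch of L2 regularization (weight decay).
# Hypothetical helper names; framework code would operate on weight tensors.

def l2_penalty(weights, lmbda=0.01):
    """Return lmbda times the sum of squared weights."""
    return lmbda * sum(w * w for w in weights)

def regularized_loss(data_loss, weights, lmbda=0.01):
    """Total loss = data-fitting loss + L2 penalty on the weights."""
    return data_loss + l2_penalty(weights, lmbda)

# Example: the penalty grows with the magnitude of the weights,
# so minimizing the total loss pushes weights toward smaller values.
print(regularized_loss(1.0, [1.0, 2.0], lmbda=0.1))  # 1.0 + 0.1 * 5.0 = 1.5
```

Increasing `lmbda` trades training accuracy for better generalization; it is usually tuned on a validation set.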

You might want to check your data or feature extraction for errors. I once did protein structure prediction with 3 labels, but I was using a wrong extraction method. My validation accuracy also started at 45% and then fell quickly.

Once I knew where my errors were, I started from scratch: now I do protein structure prediction with 8 labels. The accuracy in the first epoch is 60% and rises steadily to 64.9% (the current Q8 record for CB513 is 68.9%).

So a validation accuracy starting at 45% is not a problem in itself, but falling quickly is. I suspect you have an error somewhere in your data or extraction pipeline rather than just overfitting.
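One quick sanity check along these lines: compare the class distribution of the predictions with that of the true labels. A sketch, assuming `labels` and `preds` are lists of class names (the sample values below are made up) — if nearly all predictions collapse onto one class, as with 'left bracket' here, this makes the problem visible immediately:

```python
from collections import Counter

def class_distribution(items):
    """Return the fraction of examples belonging to each class."""
    counts = Counter(items)
    total = len(items)
    return {cls: n / total for cls, n in counts.items()}

# Hypothetical example data: four true labels vs. a degenerate predictor
# that outputs the same class for everything.
labels = ['left bracket', 'gear', 'bolt', 'gear']
preds = ['left bracket'] * 4

print(class_distribution(labels))  # spread over three classes
print(class_distribution(preds))   # {'left bracket': 1.0}
```

If the predicted distribution is degenerate while the label distribution is balanced, the fault is usually upstream of the model: shuffled labels, a broken extraction step, or train/test sets drawn from different pipelines.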
