
What is wrong with this code? Why is the loss not decreasing?

I have implemented VGG-16 in TensorFlow. VGG-16 is a reasonably deep network, so the loss should definitely decrease, but in my code it does not. However, when I run the model on the same batch again and again, the loss does decrease. Any idea why this might happen?

The VGG-net architecture is taken from here.

Training was done on the dogs-vs-cats dataset, with image size 224x224x3.

Network parameters are the following:

lr_rate = 0.001
batch_size = 16
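For context, in the TF 1.x API these hyperparameters would typically enter the graph roughly as in the sketch below; the placeholder names, the label dtype, and the choice of optimizer are assumptions for illustration, not taken from the gist.

import tensorflow as tf

lr_rate = 0.001
batch_size = 16

# hypothetical placeholders for 224x224x3 images and integer class labels;
# the actual names and dtypes in the gist may differ
images = tf.placeholder(tf.float32, [batch_size, 224, 224, 3])
labels = tf.placeholder(tf.int64, [batch_size])

# hypothetical optimizer setup; the gist may use a different optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=lr_rate)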

Find the code at GitHubGist.

Output is as below:

[output screenshot]

Assuming you are following architecture variant E from the Simonyan & Zisserman paper you linked, I found a few problems with your code (a combined sketch of the fixes follows the list):

  • Use activation='relu' for all hidden layers.

  • Max pooling should be done over a 2 x 2 window, so use pool_size=[2, 2] instead of pool_size=[3, 3] in the pooling layers.

  • Properly link up pool13 with conv13:

pool13 = tf.layers.max_pooling2d(conv13, [2, 2], 2, name='pool13')
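Putting the three fixes together, here is a minimal sketch of a convolution/pooling block using the same TF 1.x tf.layers API as the line above; the block names and filter counts are illustrative, not taken from the gist, and only two single-conv blocks are shown rather than the full variant-E stack.

import tensorflow as tf

def conv_pool_block(x, filters, conv_name, pool_name):
    # 3x3 convolution with ReLU on the hidden layer (tf.nn.relu is the callable form of 'relu')
    conv = tf.layers.conv2d(x, filters, [3, 3], padding='same',
                            activation=tf.nn.relu, name=conv_name)
    # 2x2 max pooling with stride 2, as in the VGG paper; the pool takes the conv it follows as input
    return tf.layers.max_pooling2d(conv, pool_size=[2, 2], strides=2, name=pool_name)

# illustrative usage on a 224x224x3 input batch
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
net = conv_pool_block(images, 64, 'conv1', 'pool1')
net = conv_pool_block(net, 128, 'conv2', 'pool2')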

I don't have a GPU available to test, but with sufficient iterations the loss should decrease.
