
I wrote a custom caffe layer, but during training it says "layer does not need backward computation"

I defined a new caffe layer, including new_layer.cpp, new_layer.cu, new_layer.hpp and the related params in caffe.proto. When I train the model, it says:

new_layer does not need backward computation

However, I did define backward_cpu and backward_gpu. I tried setting lr_mult to a non-zero value, but where should I define lr_mult for a custom layer? Apart from that, is there any other way to make my custom layer execute backward propagation?
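For context, the relevant declarations in new_layer.hpp would look roughly like the following sketch (the class name NewLayer is a placeholder; only the Backward_* signatures matter here):

```cpp
#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"

namespace caffe {

// Hypothetical custom layer; the name and member list are illustrative.
template <typename Dtype>
class NewLayer : public Layer<Dtype> {
 public:
  explicit NewLayer(const LayerParameter& param) : Layer<Dtype>(param) {}
  virtual inline const char* type() const { return "NewLayer"; }

 protected:
  virtual void Forward_cpu(const std::vector<Blob<Dtype>*>& bottom,
      const std::vector<Blob<Dtype>*>& top);
  // Backward_* receive propagate_down, which tells the layer whether the net
  // actually wants gradients w.r.t. each bottom blob.
  virtual void Backward_cpu(const std::vector<Blob<Dtype>*>& top,
      const std::vector<bool>& propagate_down,
      const std::vector<Blob<Dtype>*>& bottom);
  virtual void Backward_gpu(const std::vector<Blob<Dtype>*>& top,
      const std::vector<bool>& propagate_down,
      const std::vector<Blob<Dtype>*>& bottom);
};

}  // namespace caffe
```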

You can force caffe to backprop by setting

force_backward: true

at the beginning of your net.prototxt file. By default, caffe computes the backward pass only where it determines that gradients are required. Sometimes (especially when there are custom layers) this heuristic is not accurate. With force_backward: true, caffe computes gradients for all layers in the model (whenever possible).
Read more in the comments in caffe.proto.

Regarding lr_mult: it is part of the param section of the layer; this section is defined for all layers in caffe.proto. Thus, you only need to add this clause to your layer definition in net.prototxt:

force_backward: true   # cannot hurt...
layer {
  name: "my_layer"
  type: "MyLayerType"
  bottom: "input"
  top: "output"
  my_layer_param { ... }
  param { lr_mult: 1 }   # there you go
}
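Note that even with force_backward: true, caffe still passes a propagate_down vector to Backward_cpu / Backward_gpu, and the layer is expected to honor it. A minimal sketch of that pattern, assuming the hypothetical NewLayer above and a purely illustrative identity gradient (not your actual math):

```cpp
#include <vector>

#include "caffe/util/math_functions.hpp"
#include "new_layer.hpp"  // hypothetical header for the custom layer

namespace caffe {

template <typename Dtype>
void NewLayer<Dtype>::Backward_cpu(const std::vector<Blob<Dtype>*>& top,
    const std::vector<bool>& propagate_down,
    const std::vector<Blob<Dtype>*>& bottom) {
  // The net sets propagate_down[i] per bottom blob; force_backward makes it
  // true wherever a backward pass is possible.
  if (!propagate_down[0]) { return; }
  const Dtype* top_diff = top[0]->cpu_diff();
  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
  // Illustrative identity gradient: dL/d(bottom) = dL/d(top).
  caffe_copy(bottom[0]->count(), top_diff, bottom_diff);
}

}  // namespace caffe
```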

You can see more information here.
