
Caffe compute gradient with respect to input using custom cost function

I have a pretrained caffe model with no loss layers. I want to do the following steps:

  1. Compute the cost/grad of some layer in the net.
  2. Backpropagate to compute the gradient with respect to the input layer.
  3. Perform gradient descent repeating 1 and 2 to optimize input.

I cannot figure out how to add a loss layer to a pretrained model to do this. In other NN frameworks you can call a backward() function and pass it a cost function. Is there any way to do this in caffe?

You can create a custom layer in caffe for your cost function, reference that layer in the .prototxt file, and then finetune the pretrained model with your new cost function.
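A minimal sketch of how such a layer might be wired into the .prototxt, assuming the custom cost is written as a pycaffe Python layer. The bottom blob name `fc7`, the module name `my_cost_layer`, and the class name `MyCostLayer` are all placeholders for your own model and implementation:

```
# Appended to the pretrained model's .prototxt (names are hypothetical).
layer {
  name: "my_cost"
  type: "Python"
  bottom: "fc7"               # the layer output the cost is computed on
  top: "loss"
  loss_weight: 1              # makes caffe treat this layer as a loss
  python_param {
    module: "my_cost_layer"   # my_cost_layer.py must be on PYTHONPATH
    layer: "MyCostLayer"      # class implementing setup/reshape/forward/backward
  }
}
```

The `MyCostLayer` class computes the cost in `forward()` and writes the gradient with respect to its bottom blob in `backward()`; from there caffe's normal backpropagation takes over.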

Finetuning is done with a command line of the following form:

./build/tools/caffe train --solver theAboveMentioned.prototxt --weights thePreTrainedWeightsFile

More on caffe finetuning can be found in the official caffe fine-tuning example.
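Note that finetuning updates the *weights*; the question asks about optimizing the *input*. In pycaffe that loop is: call `net.forward()`, write the gradient of your cost into `net.blobs[layer].diff`, call `net.backward()` (with `force_backward: true` in the prototxt so gradients propagate all the way to the data blob), read `net.blobs['data'].diff`, and take a gradient step on the input. A caffe-independent numpy sketch of those three steps, with a toy fixed linear layer standing in for the pretrained net and a hypothetical squared-error cost on its output:

```python
import numpy as np

# Toy stand-in for a pretrained net: one frozen linear layer y = W x.
# In pycaffe this forward pass would be net.forward().
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))

def forward(x):
    return W @ x

# Step 1: a custom cost on the layer's output -- here 0.5*||y - t||^2
# for an arbitrary target activation t (a hypothetical choice).
t = np.ones(4)

def cost_and_grad(y):
    diff = y - t
    return 0.5 * float(diff @ diff), diff   # cost, dC/dy

# Steps 2-3: backpropagate the cost gradient to the input,
# then do gradient descent on the input itself (weights stay fixed).
x = np.zeros(8)
lr = 0.01
for _ in range(2000):
    y = forward(x)
    c, dy = cost_and_grad(y)   # step 1: cost/grad at the chosen layer
    dx = W.T @ dy              # step 2: chain rule back to the input
    x -= lr * dx               # step 3: descend on the input

print(cost_and_grad(forward(x))[0])   # cost after optimizing the input
```

In the real pycaffe version, `dx` is what `net.blobs['data'].diff` holds after `net.backward()`, and `x` is the content of `net.blobs['data'].data` that you update in place between iterations.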
