
Trying to understand custom loss layer in caffe

I have seen that one can define a custom loss layer, for example EuclideanLoss, in Caffe like this:

import caffe
import numpy as np


class EuclideanLossLayer(caffe.Layer):
    """
    Compute the Euclidean Loss in the same manner as the C++ EuclideanLossLayer
    to demonstrate the class interface for developing layers in Python.
    """

    def setup(self, bottom, top):
        # check input pair
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # check input dimensions match
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        # difference is shape of inputs
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        # loss output is scalar
        top[0].reshape(1)

    def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            if i == 0:
                sign = 1
            else:
                sign = -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num

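For reference, a minimal numpy-only restatement of what forward and backward compute (just a sketch mirroring the layer above, not Caffe API code):

import numpy as np

def euclidean_loss_forward(x1, x2):
    # L = 1/(2N) * sum((x1 - x2)^2), where N is the batch size
    diff = x1 - x2
    n = x1.shape[0]
    return np.sum(diff ** 2) / n / 2., diff

def euclidean_loss_backward(diff, n):
    # dL/dx1 = +diff / N (sign = 1), dL/dx2 = -diff / N (sign = -1)
    return diff / n, -diff / n
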
However, I have a few questions regarding that code:

If I want to customise this layer and change the computation of the loss in this line:

top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

Let's say to:

channelAxis = 1  # the channel axis of an (N, C, H, W) blob
self.diff[...] = np.sum(bottom[0].data, axis=channelAxis) - np.sum(bottom[1].data, axis=channelAxis)
top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

How do I have to change the backward function? For EuclideanLoss it is:

bottom[i].diff[...] = sign * self.diff / bottom[i].num

How would it have to look for the loss described above?

What is the sign for?

Although it can be a very educational exercise to implement the loss you are after as a "Python" layer, you can get the same loss using existing layers. All you need is to add a "Reduction" layer for each of your blobs before feeding them to the regular "EuclideanLoss" layer:

layer {
  type: "Reduction"
  name: "rx1"
  bottom: "x1"
  top: "rx1"
  reduction_param { axis: 1 operation: SUM }
} 
layer {
  type: "Reduction"
  name: "rx2"
  bottom: "x2"
  top: "rx2"
  reduction_param { axis: 1 operation: SUM }
} 
layer {
  type: "EuclideanLoss"
  name: "loss"
  bottom: "rx1"
  bottom: "rx2"
  top: "loss"
}
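
In plain numpy, this stack of layers computes roughly the following (a sketch only; Caffe's "Reduction" layer with axis: 1 and operation: SUM collapses all trailing axes of each example into one scalar):

import numpy as np

def reduced_euclidean_loss(x1, x2):
    # "Reduction" with axis: 1, operation: SUM -> one scalar per example
    rx1 = x1.reshape(x1.shape[0], -1).sum(axis=1)
    rx2 = x2.reshape(x2.shape[0], -1).sum(axis=1)
    # "EuclideanLoss" on the reduced blobs
    n = x1.shape[0]
    return np.sum((rx1 - rx2) ** 2) / n / 2.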

Update:
Based on your comment, if you only want to sum over the channel dimension and leave all other dimensions unchanged, you can use a fixed 1x1 convolution (as you suggested):

layer {
  type: "Convolution"
  name: "rx1"
  bottom: "x1"
  top: "rx1"
  param { lr_mult: 0 decay_mult: 0 } # make this layer *fixed*
  convolution_param {
    num_output: 1
    kernel_size: 1
    bias_term: false  # no need for bias
    weight_filler { type: "constant" value: 1 } # sum over channels
  }
}
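
In numpy terms, such a fixed 1x1 convolution with all-ones weights and no bias amounts to summing over the channel axis while keeping the spatial dimensions (again only a sketch of the equivalent computation):

import numpy as np

def channel_sum(x):
    # x has shape (N, C, H, W); a 1x1 conv with num_output: 1,
    # constant weights of 1 and no bias gives the per-pixel sum over channels
    return x.sum(axis=1, keepdims=True)  # shape (N, 1, H, W)

The same fixed layer would be applied to x2, with both outputs then fed to "EuclideanLoss" as in the first snippet.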
