Trying to understand custom loss layer in caffe

I have seen that one can define a custom loss layer, for example EuclideanLoss, in caffe like this:

import caffe
import numpy as np


class EuclideanLossLayer(caffe.Layer):
    """
    Compute the Euclidean Loss in the same manner as the C++ EuclideanLossLayer
    to demonstrate the class interface for developing layers in Python.
    """

    def setup(self, bottom, top):
        # check input pair
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # check input dimensions match
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        # difference is shape of inputs
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        # loss output is scalar
        top[0].reshape(1)

    def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            if i == 0:
                sign = 1
            else:
                sign = -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num
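
For reference, a layer like this is hooked into a net through Caffe's "Python" layer type. A minimal sketch, assuming the class above is saved as pyloss.py on the PYTHONPATH, and with hypothetical bottom names pred and label:

layer {
  type: "Python"
  name: "loss"
  bottom: "pred"
  bottom: "label"
  top: "loss"
  python_param {
    module: "pyloss"            # pyloss.py must be importable
    layer: "EuclideanLossLayer" # class name inside the module
  }
  loss_weight: 1  # mark the top as a loss so gradients are propagated
}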

However, I have a few questions regarding that code:

If I want to customise this layer and change the loss computation in this line:

top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

to, let's say:

channelAxis = 1  # channel axis in Caffe's N x C x H x W layout
self.diff[...] = np.sum(bottom[0].data, axis=channelAxis) - np.sum(bottom[1].data, axis=channelAxis)
top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

How would I have to change the backward function? For the Euclidean loss it is:

bottom[i].diff[...] = sign * self.diff / bottom[i].num

How does it have to look for the loss I described?

And what is sign for?
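
For reference, a minimal sketch of how reshape and backward could look for this channel-summed loss (my own sketch, not an authoritative answer; it assumes 4-D N x C x H x W blobs, and note that self.diff must now be allocated without the channel axis, so reshape changes too):

def reshape(self, bottom, top):
    if bottom[0].count != bottom[1].count:
        raise Exception("Inputs must have the same dimension.")
    # diff now holds one residual per (n, h, w); the channel axis is summed away
    shape = list(bottom[0].data.shape)
    del shape[1]
    self.diff = np.zeros(shape, dtype=np.float32)
    top[0].reshape(1)

def backward(self, top, propagate_down, bottom):
    for i in range(2):
        if not propagate_down[i]:
            continue
        # sign is +1 for bottom[0] and -1 for bottom[1]: the residual is
        # sum_c(x1) - sum_c(x2), so its derivative w.r.t. x2 is negative
        sign = 1 if i == 0 else -1
        # every channel contributes equally to the channel sum, so the
        # per-pixel gradient is broadcast back across the channel axis
        grad = np.expand_dims(self.diff, axis=1)           # N x 1 x H x W
        bottom[i].diff[...] = sign * grad / bottom[i].num  # broadcasts to N x C x H x W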

While implementing the loss you are after as a "Python" layer can be a very educational exercise, you can get the same loss using existing layers. All you need is to add a "Reduction" layer for each of your blobs before calling the regular "EuclideanLoss" layer:

layer {
  type: "Reduction"
  name: "rx1"
  bottom: "x1"
  top: "rx1"
  reduction_param { axis: 1 operation: SUM }
} 
layer {
  type: "Reduction"
  name: "rx2"
  bottom: "x2"
  top: "rx2"
  reduction_param { axis: 1 operation: SUM }
} 
layer {
  type: "EuclideanLoss"
  name: "loss"
  bottom: "rx1"
  bottom: "rx2"
  top: "loss"
}

Update:
Per your comment, if you only want to sum over the channel dimension and keep all other dimensions intact, you can use a fixed 1x1 convolution (as you suggested) instead: "Reduction" with axis: 1 collapses everything from axis 1 onward (C*H*W) into a single value per sample, whereas the fixed convolution below sums only the channels. Apply it to each blob and feed the outputs into the same "EuclideanLoss" layer as before:

layer {
  type: "Convolution"
  name: "rx1"
  bottom: "x1"
  top: "rx1"
  param { lr_mult: 0 decay_mult: 0 } # make this layer *fixed*
  convolution_param {
    num_output: 1
    kernel_size: 1
    bias_term: false  # no need for bias
    weight_filler: { type: "constant" value: 1 } # sum
  }
}
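
As a quick numpy sanity check (my own sketch, not part of the answer): a 1x1 convolution with all-one weights and no bias is exactly a sum over the channel axis, so the reduced blobs reproduce the channel-summed loss from the question:

import numpy as np

N, C, H, W = 2, 3, 4, 4
x1 = np.random.randn(N, C, H, W).astype(np.float32)
x2 = np.random.randn(N, C, H, W).astype(np.float32)

# explicit 1x1 convolution, num_output=1, all-one weights, no bias:
# out[n, 0, h, w] = sum_c w[0, c] * x[n, c, h, w] = sum_c x[n, c, h, w]
w = np.ones((1, C), dtype=np.float32)
rx1 = np.einsum('oc,nchw->nohw', w, x1)
rx2 = np.einsum('oc,nchw->nohw', w, x2)
assert np.allclose(rx1, x1.sum(axis=1, keepdims=True))

# "EuclideanLoss" on the reduced blobs equals the channel-summed loss
loss = np.sum((rx1 - rx2) ** 2) / N / 2.
print(loss)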
