
Reproducing Caffe's EuclideanLoss in Tensorflow

I am trying to reproduce the EuclideanLoss from Caffe in Tensorflow. I found a function called tf.nn.l2_loss which, according to the documentation, computes the following:

output = sum(t ** 2) / 2
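
For example, on a small constant tensor this is easy to verify (a quick sketch of my own, not taken from the Tensorflow docs):

import tensorflow as tf

t = tf.constant([1., 2., 3.], tf.float32)
loss = tf.nn.l2_loss(t)  # sum of squares, halved: (1 + 4 + 9) / 2

sess = tf.InteractiveSession()
print(loss.eval())  # 7.0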

When looking at the EuclideanLoss in the Python version of caffe it says:

def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

In the original documentation it says:

E = 1/(2N) * sum_n || y_hat_n - y_n ||^2

To me this is exactly the same computation. However, my loss values for the same net are around 3000 in Tensorflow and roughly 300 in Caffe. So where is the difference?

tf.nn.l2_loss does not take the batch size into account when calculating the loss. To get the same value as Caffe, you should divide by the batch size. The easiest way to do so is to use the mean (sum / n):

import tensorflow as tf

y_pred = tf.constant([1, 2, 3, 4], tf.float32)
y_real = tf.constant([1, 2, 4, 5], tf.float32)

# Mean squared error over all elements, halved to match the 1/2 factor in the loss
mse_loss = tf.reduce_mean(tf.square(y_pred - y_real)) / 2.

sess = tf.InteractiveSession()
print(mse_loss.eval())  # (0 + 0 + 1 + 1) / 4 / 2 = 0.25
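
Alternatively, to reproduce Caffe's EuclideanLoss exactly (sum over all elements, divided by the batch size and by 2, rather than the mean over elements), you can keep tf.nn.l2_loss and divide by the batch size yourself. A minimal sketch, assuming a hypothetical batch of two 2-element samples:

import tensorflow as tf

# Hypothetical batch: 2 samples, 2 values each (batch size N = 2).
y_pred = tf.constant([[1., 2.], [3., 4.]], tf.float32)
y_real = tf.constant([[1., 2.], [4., 5.]], tf.float32)

batch_size = tf.cast(tf.shape(y_pred)[0], tf.float32)

# tf.nn.l2_loss computes sum(t ** 2) / 2 over all elements, so dividing
# by the batch size matches Caffe's sum(diff ** 2) / num / 2.
euclidean_loss = tf.nn.l2_loss(y_pred - y_real) / batch_size

sess = tf.InteractiveSession()
print(euclidean_loss.eval())  # (1 + 1) / 2 / 2 = 0.5

Note that reduce_mean divides by the total number of elements, while Caffe divides only by the batch size, so the two only coincide when each sample is a single value.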
