
Convolutional Neural Network Loss

While calculating the loss function, can I manually calculate the loss like

Loss = tf.reduce_mean(tf.square(np.array(Prediction) - np.array(Y)))

and then optimize this loss using the Adam optimizer?

No, actually you need to use a tensor (e.g. a Variable) for Loss, not a numpy.array (np.array(Prediction)).

This is because TensorFlow evaluates these tensors in its own engine.
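A minimal sketch of what that looks like, assuming a TF 1.x-style graph; X and the dense layer standing in for the CNN output are illustrative, not from the question:

import tensorflow as tf

# Illustrative placeholders; in the question, Prediction is the
# network's output tensor and Y holds the targets.
X = tf.placeholder(tf.float32, shape=[None, 4])
Y = tf.placeholder(tf.float32, shape=[None, 1])
Prediction = tf.layers.dense(X, 1)  # stand-in for the CNN's output

# The loss stays a tensor end to end (no np.array() conversion),
# so TensorFlow can build the graph and differentiate through it.
Loss = tf.reduce_mean(tf.square(Prediction - Y))
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(Loss)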

No. TensorFlow loss functions typically accept tensors as input and also output a tensor, so np.array() wouldn't work.

In the case of CNNs, you'd generally come across loss functions like cross-entropy, softmax cross-entropy, sigmoid cross-entropy, etc. These are already built into the tf.losses module, so you can use them directly. The loss function that you're trying to apply looks like a mean-squared loss, which is built into tf.losses as well: tf.losses.mean_squared_error.
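For instance, a short sketch of the built-in helper; the placeholders here are stand-ins for the real label and prediction tensors:

import tensorflow as tf

Y = tf.placeholder(tf.float32, shape=[None, 1])           # targets
Prediction = tf.placeholder(tf.float32, shape=[None, 1])  # stand-in for the network's output

# Built-in mean-squared error; equivalent to
# tf.reduce_mean(tf.square(Prediction - Y)).
Loss = tf.losses.mean_squared_error(labels=Y, predictions=Prediction)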

Having said that, I've also implemented a few loss functions like cross-entropy using a hand-coded formula such as -tf.reduce_mean(tf.reduce_sum(targets * logProb)). This works equally well, as long as the inputs targets and logProb are computed as tensors and not as numpy arrays.
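To make that concrete, here is a sketch under the assumption that logProb is the log-softmax of the network's logits and targets is a one-hot label tensor; the placeholder shapes are illustrative:

import tensorflow as tf

logits = tf.placeholder(tf.float32, shape=[None, 10])   # raw network outputs
targets = tf.placeholder(tf.float32, shape=[None, 10])  # one-hot labels

# log-softmax keeps everything as tensors, so gradients flow.
logProb = tf.nn.log_softmax(logits)

# The formula from the answer; note that reduce_sum with no axis sums
# over the whole batch, giving the total cross-entropy (pass axis=1 to
# reduce_sum for a per-example sum averaged by reduce_mean instead).
Loss = -tf.reduce_mean(tf.reduce_sum(targets * logProb))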
