
How can I calculate cross-entropy on a sigmoid neural network binary outcome?

I'm currently building a neural network from scratch where we want to identify, based on two input variables (X_1 and X_2), what the output will be (0 or 1). I have 2 hidden layers with sigmoid activation on all neurons, but I get stuck when calculating the cross-entropy. Suppose in the output layer I have the predictions [0.50, 0.57] but the true output is 0, so [1, 0]. How do I calculate the cross-entropy over this binary output example? Does anyone have any suggestions/tips?

Here is a function that I wrote and use to calculate the cross-entropy given a list of predictions and a list of true labels.

from math import log
# calculate the average cross-entropy of predictions and true labels
def cross_entropy(y, p):
    # y is the list of true labels (each 0 or 1)
    # p is the list of predicted probabilities of the label being 1
    eps = 1e-12  # clamp predictions away from 0 and 1 to avoid log(0)
    m = len(y)
    sum_vals = 0.0
    for i in range(m):
        p_i = min(max(p[i], eps), 1 - eps)
        # first term applies when the label is 1, second when it is 0
        sum_vals += float(y[i]) * log(p_i) + (1 - float(y[i])) * log(1 - p_i)
    return -sum_vals / m

Here the values in the list of labels y are each either 0 or 1, and the probabilistic predictions from the network are in the list p.
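As a rough sanity check, here is the same per-example formula applied directly to the numbers from the question, treating 0.50 and 0.57 as the predicted probabilities for labels 1 and 0 respectively (that interpretation of the question's one-hot example is an assumption):

```python
from math import log

# y holds the true labels, p the predicted probabilities of label 1
y = [1, 0]
p = [0.50, 0.57]

# average binary cross-entropy, same formula as in the loop above
loss = -sum(yi * log(pi) + (1 - yi) * log(1 - pi)
            for yi, pi in zip(y, p)) / len(y)
print(loss)  # ≈ 0.7686
```

The first example contributes -log(0.50) ≈ 0.693 and the second -log(1 - 0.57) ≈ 0.844, so the average comes out to about 0.769.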

