
How can I calculate cross-entropy on a sigmoid neural network binary outcome?

I'm currently building a neural network from scratch that, given two input variables (X_1 and X_2), predicts a binary output (0 or 1). I have 2 hidden layers with sigmoid activation on all neurons, but I get stuck when calculating the cross-entropy. Suppose the output layer produces the predictions [0.50, 0.57] while the true output is 0, i.e. one-hot encoded as [1, 0]. How do I calculate the cross-entropy for this binary example? Does anyone have any suggestions/tips?

Here is a function that I wrote and use to calculate the cross-entropy given a list of predictions and a list of true labels.
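In formula form, it computes the average binary cross-entropy over m examples, where y_i is the i-th true label and p_i is the corresponding predicted probability of class 1:

$$L = -\frac{1}{m}\sum_{i=1}^{m}\Bigl[\,y_i \log p_i + (1 - y_i)\log(1 - p_i)\,\Bigr]$$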

from math import log
# calculate the average cross-entropy of predictions and true labels
def cross_entropy(y, p):
    # y is the list of true labels, each either 0 or 1
    # p is the list of predicted probabilities that the label is 1
    m = len(y)
    sum_vals = 0.0
    for i in range(m):
        # the first term contributes when the label is 1, the second when it is 0
        sum_vals += float(y[i]) * log(p[i]) + (1 - float(y[i])) * log(1 - p[i])
    R = -sum_vals / m
    return R

Here the values in the list of labels y are each either 0 or 1, and the probabilistic predictions from the network are in the list p.
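For example, with hypothetical values just to illustrate the call, a small batch of three labels and predictions would look like this:

y = [1, 0, 1]
p = [0.9, 0.2, 0.7]
print(cross_entropy(y, p))  # prints roughly 0.228

Note that each p[i] has to be strictly between 0 and 1; a prediction of exactly 0 or 1 would make log raise a math domain error.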
