
How can I convert the sigmoid activation function outputs to 0s and 1s?

I have the following array :

array([8.1837177e-05, 1.0788739e-03, 4.4837892e-03, 3.4919381e-04, 7.6085329e-05, 7.6562166e-05, 5.3864717e-04, 5.4001808e-04, 3.3849746e-02, 2.9903650e-04], dtype=float32)

I want to convert it to this:

array([0, 0, 0, 0, 0, 0, 0, 0, 1, 0], dtype=float32)

I need to find the maximum value in each row, replace it with 1, and replace the other nine values in that row with 0.

I need this to be done for a 2D array (a series of rows that look like the one in the example).

Use np.where in conjunction with max:

a = np.array([8.1837177e-05, 1.0788739e-03, 4.4837892e-03, 3.4919381e-04, 7.6085329e-05, 7.6562166e-05, 5.3864717e-04, 5.4001808e-04,  3.3849746e-02, 2.9903650e-04])

np.where(a == a.max(), 1, 0)

Output:

array([0, 0, 0, 0, 0, 0, 0, 0, 1, 0])

In the 2D case, we take the maximum of each row:

np.where(a == a.max(axis=1)[:, np.newaxis], 1, 0)
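For instance, on a made-up 2-row batch (the values here are just illustrative, not the asker's data):

```python
import numpy as np

# hypothetical batch of two rows of sigmoid outputs
a = np.array([[0.1, 0.7, 0.2],
              [0.9, 0.05, 0.05]])

# compare each element to its row's maximum; broadcasting the
# (2, 1) column of row maxima against the (2, 3) array
one_hot = np.where(a == a.max(axis=1)[:, np.newaxis], 1, 0)
print(one_hot)
# [[0 1 0]
#  [1 0 0]]
```

One caveat: if a row contains its maximum value more than once, every occurrence becomes 1, since the comparison is elementwise.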

That said, I feel like keras should have something built in to do this for you...
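As an aside, one alternative sketch (not from the answer above) that sidesteps the duplicate-maximum issue is to index an identity matrix with argmax, which always picks exactly one column per row:

```python
import numpy as np

# same hypothetical batch as before
a = np.array([[0.1, 0.7, 0.2],
              [0.9, 0.05, 0.05]])

# row i of the result is row argmax(a[i]) of the identity matrix,
# i.e. a one-hot vector with exactly one 1 per row
one_hot = np.eye(a.shape[1], dtype=int)[a.argmax(axis=1)]
print(one_hot)
# [[0 1 0]
#  [1 0 0]]
```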

You can use a list comprehension like so:

x = [5,6,7,8,9]
y = [1 if num == max(x) else 0 for num in x]

This method takes two lines, but it avoids comparing every array element with the max and works well in 2D. I don't know that it will really be faster (certainly not asymptotically), but two lines beat writing a Python for loop over the rows, and the readability is arguably better than np.where.

import numpy as np

# here's your example input
# note - the input must be 2D even if there's just one row
# it's easy to adapt this to the 1D case, but you'll be working with 2D arrays for this anyway
class_probs = np.array([[
    8.1837177e-05, 1.0788739e-03, 4.4837892e-03, 3.4919381e-04, 7.6085329e-05,
    7.6562166e-05, 5.3864717e-04, 5.4001808e-04, 3.3849746e-02, 2.9903650e-04,
]])
pred_classes = np.zeros_like(class_probs)
pred_classes[range(len(class_probs)), class_probs.argmax(-1)] = 1
print(pred_classes) # [[0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]]

# and here's showing the same code works for multiple rows
class_probs = np.random.rand(100, 10)
pred_classes = np.zeros_like(class_probs)
pred_classes[range(len(class_probs)), class_probs.argmax(-1)] = 1
pred_classes
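A quick sanity check on the approach above: because argmax returns exactly one index per row, every output row should contain exactly one 1, which we can assert directly:

```python
import numpy as np

class_probs = np.random.rand(100, 10)
pred_classes = np.zeros_like(class_probs)
pred_classes[range(len(class_probs)), class_probs.argmax(-1)] = 1

# each row is one-hot: exactly one 1, everything else 0
assert (pred_classes.sum(axis=1) == 1).all()
assert set(np.unique(pred_classes)) <= {0.0, 1.0}
```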

(This isn't your actual question, but did you mean to use the sigmoid activation function rather than softmax? The output you're getting here isn't a single distribution over the 10 possible classes (you can see that it's not even normalized). Rather, you have 10 distributions, one per class (so the probability that the input is class 0 is 8.1837177e-05, and the probability of it not being class 0 is 1 - 8.1837177e-05). This makes sense when doing multi-label classification (where more than one label could apply), but then you wouldn't want to find the class with the highest probability; you'd predict all classes with probability above a threshold (e.g. 0.5).)
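The multi-label thresholding mentioned above is a one-liner in NumPy; with a hypothetical batch and a 0.5 threshold:

```python
import numpy as np

# hypothetical per-class sigmoid probabilities for two inputs
probs = np.array([[0.9, 0.2, 0.7],
                  [0.1, 0.6, 0.4]])

# predict every label whose probability exceeds the threshold;
# rows may contain zero, one, or several 1s
labels = (probs > 0.5).astype(int)
print(labels)
# [[1 0 1]
#  [0 1 0]]
```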

x = np.array([1, 2, 3, 4])


x = np.where(x == max(x), 1, 0) # will replace max with 1 and the others with 0

this will create:

array([0, 0, 0, 1])

For a 2D array, you can do the following:

x = np.array([[0, 3, 4, 5],
              [1, 2, 3, 1],
              [6, 9, 1, 2]])

x = np.array([np.where(l == max(l), 1, 0) for l in x])

this will create:

array([[0, 0, 0, 1],
       [0, 0, 1, 0],
       [0, 1, 0, 0]])
