ReLU performing worse than sigmoid?
I used sigmoid on all layers and the output and got a final error rate of 0.00012, but when I use ReLU, which is supposedly better in theory, I get the worst possible results. Can anyone explain why this happens? I am using a very simple 2-layer implementation that is available on a hundred sites, but I am still giving it below:
import numpy as np

#test
#avg(nonlin(np.dot(nonlin(np.dot([0,0,1],syn0)),syn1)))
#returns list >> [predicted_output, confidence]

def nonlin(x, deriv=False):  # Sigmoid
    if(deriv==True):
        return x*(1-x)
    return 1/(1+np.exp(-x))

def relu(x, deriv=False):  # RELU
    if (deriv == True):
        for i in range(0, len(x)):
            for k in range(len(x[i])):
                if x[i][k] > 0:
                    x[i][k] = 1
                else:
                    x[i][k] = 0
        return x
    for i in range(0, len(x)):
        for k in range(0, len(x[i])):
            if x[i][k] > 0:
                pass  # do nothing since it would be effectively replacing x with x
            else:
                x[i][k] = 0
    return x
X = np.array([[0,0,1],
              [0,0,0],
              [0,1,1],
              [1,0,1],
              [1,0,0],
              [0,1,0]])

y = np.array([[0],[1],[0],[0],[1],[1]])

np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1

def avg(i):
    if i > 0.5:
        confidence = i
        return [1, float(confidence)]
    else:
        confidence = 1.0 - float(i)
        return [0, confidence]
for j in xrange(500000):
    # Feed forward through layers 0, 1, and 2
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    #print 'this is',l2,'\n'

    # how much did we miss the target value?
    l2_error = y - l2
    #print l2_error,'\n'

    if (j % 100000) == 0:
        print "Error:" + str(np.mean(np.abs(l2_error)))
        print syn1

    # in what direction is the target value?
    # were we really sure? if so, don't change too much.
    l2_delta = l2_error*nonlin(l2, deriv=True)

    # how much did each l1 value contribute to the l2 error (according to the weights)?
    l1_error = l2_delta.dot(syn1.T)

    # in what direction is the target l1?
    # were we really sure? if so, don't change too much.
    l1_delta = l1_error * nonlin(l1, deriv=True)

    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)

print "Final Error:" + str(np.mean(np.abs(l2_error)))

def p(l):
    return avg(nonlin(np.dot(nonlin(np.dot(l, syn0)), syn1)))
So p(x) is the prediction function after training, where x is a 1 x 3 matrix of input values.
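For example, assuming training has finished and syn0/syn1 hold the trained weights, a call might look like this (the two rows are just examples taken from X):

# hypothetical calls after training; each returns [predicted_output, confidence]
print p([0, 0, 1])   # this row was labelled 0 in y
print p([1, 0, 0])   # this row was labelled 1 in y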
Why better in theory? ReLU has been shown to work better in most applications, but that does not mean it is universally better. Your example is very simple, and the inputs are scaled to [0, 1], just like the outputs. That is exactly where I would expect a sigmoid to perform well. In practice you rarely see sigmoids in hidden layers because of the vanishing gradient problem and some other issues with large networks, but that is hardly a problem for you.
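To make the vanishing-gradient point concrete, a rough back-of-envelope sketch: the sigmoid derivative s(x)*(1-s(x)) never exceeds 0.25, so stacking many sigmoid layers multiplies the gradient by many factors below 0.25, while ReLU units that stay positive contribute a factor of 1:

# illustrative sketch only, not part of the original code
sigmoid_deriv_max = 0.25        # maximum of s(x)*(1-s(x)), reached at x = 0
print sigmoid_deriv_max ** 20   # gradient factor through ~20 sigmoid layers, roughly 9e-13
print 1.0 ** 20                 # the same factor for ReLU units that remain active: 1.0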
Also, in case the ReLU derivative ever gets used, you are missing an "else" in the code, so your derivative simply gets overwritten.

Just as a refresher, here is the definition of ReLU:

f(x) = max(0, x)

...which means it can blow your activations up to infinity. You want to avoid ReLU on the last (output) layer.
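A minimal sketch of that advice applied to the loop above (assuming the same variable names; relu in the hidden layer, sigmoid kept on the output so l2 stays in [0, 1]):

# forward pass: ReLU hidden layer, sigmoid output layer
l1 = relu(np.dot(l0, syn0))
l2 = nonlin(np.dot(l1, syn1))

# backward pass: match each layer's derivative to its activation;
# pass a copy to relu because its deriv branch modifies the argument in place
l2_delta = (y - l2) * nonlin(l2, deriv=True)
l1_delta = l2_delta.dot(syn1.T) * relu(l1.copy(), deriv=True)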
On a side note, whenever possible you should take advantage of vectorized operations:
def relu(x, deriv=False):  # RELU
    if (deriv == True):
        mask = x > 0
        x[mask] = 1
        x[~mask] = 0
        return x
    else:  # HERE YOU WERE MISSING "ELSE"
        return np.maximum(0, x)
And yes, this is much faster than the loop-based if/else you were doing.
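If you want to check the speed difference on your own machine, here is a rough timing sketch (the 1000 x 4 array and the repetition count are arbitrary choices for illustration):

import timeit
import numpy as np

a = np.random.randn(1000, 4)

def loop_relu(x):  # the nested-loop version from the question
    for i in range(len(x)):
        for k in range(len(x[i])):
            if x[i][k] < 0:
                x[i][k] = 0
    return x

print "loops:     ", timeit.timeit(lambda: loop_relu(a.copy()), number=100)
print "vectorized:", timeit.timeit(lambda: np.maximum(0, a), number=100)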