
Neural network XOR gate classification

I have written a simple neural network that should be able to learn the XOR gate. I think I have the math right, but the loss does not decrease and stays around 0.6. Can anyone help me figure out why?

import numpy as np
import matplotlib.pyplot as plt

train_X = np.array([[0,0],[0,1],[1,0],[1,1]]).T
train_Y = np.array([[0,1,1,0]])
test_X = np.array([[0,0],[0,1],[1,0],[1,1]]).T
test_Y = np.array([[0,1,1,0]])

learning_rate = 0.1
S = 5

def sigmoid(z):
    return 1/(1+np.exp(-z))

def sigmoid_derivative(z):
    return sigmoid(z)*(1-sigmoid(z))

S0, S1, S2 = 2, 5, 1
m = 4

w1 = np.random.randn(S1, S0) * 0.01
b1 = np.zeros((S1, 1))
w2 = np.random.randn(S2, S1) * 0.01
b2 = np.zeros((S2, 1))

for i in range(1000000):
    # forward pass
    Z1 = np.dot(w1, train_X) + b1
    A1 = sigmoid(Z1)
    Z2 = np.dot(w2, A1) + b2
    A2 = sigmoid(Z2)

    # binary cross-entropy loss
    J = np.sum(-train_Y * np.log(A2) + (train_Y-1) * np.log(1-A2)) / m

    # backpropagation
    dZ2 = A2 - train_Y
    dW2 = np.dot(dZ2, A1.T) / m
    dB2 = np.sum(dZ2, axis = 1, keepdims = True) / m
    dZ1 = np.dot(w2.T, dZ2) * sigmoid_derivative(Z1)
    dW1 = np.dot(dZ1, train_X.T) / m
    dB1 = np.sum(dZ1, axis = 1, keepdims = True) / m

    # gradient descent update
    w1 = w1 - dW1 * 0.03
    w2 = w2 - dW2 * 0.03
    b1 = b1 - dB1 * 0.03
    b2 = b2 - dB2 * 0.03

    print(J)

I think your dZ2 is incorrect, because you are not multiplying it by the sigmoid derivative.
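For reference, a minimal sketch of the change being suggested (the same line also appears in the modified code further down):

# the suggested change: include the derivative of the output sigmoid in dZ2
dZ2 = (A2 - train_Y) * sigmoid_derivative(Z2)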

For the XOR problem, if you check the output, the 1s are slightly above 0.5 and the 0s slightly below it. I believe this is because the search has reached a plateau, so progress is extremely slow. I tried RMSProp, which converged to almost 0 very quickly. I also tried a pseudo-second-order algorithm, RProp, which converged almost immediately (I used iRProp-). The plot below shows the error for the RMSProp run.

[plot: training error per iteration for the RMSProp run]

Also, the final output of the network is now

[[1.67096234e-06 9.99999419e-01 9.99994158e-01 6.87836337e-06]]

which, rounded, gives

array([[0., 1., 1., 0.]])

However, I strongly recommend performing gradient checking to make sure that the analytical gradients match the numerically computed ones. See also Andrew Ng's course lecture on gradient checking.
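As a rough sketch of such a gradient check (assuming the same variable names as the code above; forward_loss and numerical_grad are illustrative helpers, not part of the original code), the analytic dW2 can be compared against a central-difference estimate:

import numpy as np

def forward_loss(w1, b1, w2, b2, X, Y):
    """Forward pass and cross-entropy loss for the two-layer sigmoid network above."""
    A1 = 1 / (1 + np.exp(-(np.dot(w1, X) + b1)))
    A2 = 1 / (1 + np.exp(-(np.dot(w2, A1) + b2)))
    return np.sum(-Y * np.log(A2) - (1 - Y) * np.log(1 - A2)) / X.shape[1]

def numerical_grad(param, loss_fn, eps=1e-7):
    """Central-difference estimate of d(loss)/d(param), one entry at a time."""
    grad = np.zeros_like(param)
    it = np.nditer(param, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = param[idx]
        param[idx] = orig + eps
        loss_plus = loss_fn()
        param[idx] = orig - eps
        loss_minus = loss_fn()
        param[idx] = orig                      # restore the original value
        grad[idx] = (loss_plus - loss_minus) / (2 * eps)
        it.iternext()
    return grad

# compare the analytic dW2 from backprop with the numerical estimate
num_dW2 = numerical_grad(w2, lambda: forward_loss(w1, b1, w2, b2, train_X, train_Y))
rel_err = (np.linalg.norm(num_dW2 - dW2)
           / (np.linalg.norm(num_dW2) + np.linalg.norm(dW2)))
print("relative error for dW2:", rel_err)

A relative error on the order of 1e-7 usually means the analytic gradient is correct; errors around 1e-2 or larger point to a bug in the backpropagation.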

I am adding my modified code with the RMSProp implementation below.

#!/usr/bin/python3

import numpy as np
import matplotlib.pyplot as plt

train_X = np.array([[0,0],[0,1],[1,0],[1,1]]).T
train_Y = np.array([[0,1,1,0]])
test_X = np.array([[0,0],[0,1],[1,0],[1,1]]).T
test_Y = np.array([[0,1,1,0]])

learning_rate = 0.1
S = 5

def sigmoid(z):
    return 1/(1+np.exp(-z))

def sigmoid_derivative(z):
    return sigmoid(z)*(1-sigmoid(z))

S0, S1, S2 = 2, 5, 1
m = 4

w1 = np.random.randn(S1, S0) * 0.01
b1 = np.zeros((S1, 1))
w2 = np.random.randn(S2, S1) * 0.01
b2 = np.zeros((S2, 1))

# RMSProp variables
dWsqsum1 = np.zeros_like(w1)
dWsqsum2 = np.zeros_like(w2)
dBsqsum1 = np.zeros_like(b1)
dBsqsum2 = np.zeros_like(b2)
alpha = 0.9
lr = 0.01

err_vec = []

for i in range(20000):
    Z1 = np.dot(w1, train_X) + b1
    A1 = sigmoid(Z1)
    Z2 = np.dot(w2, A1) + b2
    A2 = sigmoid(Z2)

    J = np.sum(-train_Y * np.log(A2) + (train_Y-1) * np.log(1-A2)) / m

    dZ2 = (A2 - train_Y) * sigmoid_derivative(Z2)
    dW2 = np.dot(dZ2, A1.T) / m
    dB2 = np.sum(dZ2, axis = 1, keepdims = True) / m
    dZ1 = np.dot(w2.T, dZ2) * sigmoid_derivative(Z1)
    dW1 = np.dot(dZ1, train_X.T) / m
    dB1 = np.sum(dZ1, axis = 1, keepdims = True) / m

    # RMSProp: keep a running average of the squared gradients
    dWsqsum1 = alpha * dWsqsum1 + (1 - learning_rate) * np.square(dW1)
    dWsqsum2 = alpha * dWsqsum2 + (1 - learning_rate) * np.square(dW2)
    dBsqsum1 = alpha * dBsqsum1 + (1 - learning_rate) * np.square(dB1)
    dBsqsum2 = alpha * dBsqsum2 + (1 - learning_rate) * np.square(dB2)

    # scale each step by the root of the running average (plus a small epsilon)
    w1 = w1 - (lr * dW1 / (np.sqrt(dWsqsum1) + 10e-10))
    w2 = w2 - (lr * dW2 / (np.sqrt(dWsqsum2) + 10e-10))
    b1 = b1 - (lr * dB1 / (np.sqrt(dBsqsum1) + 10e-10))
    b2 = b2 - (lr * dB2 / (np.sqrt(dBsqsum2) + 10e-10))

    print(J)
    err_vec.append(J)


# forward pass with the trained weights
Z1 = np.dot(w1, train_X) + b1
A1 = sigmoid(Z1)
Z2 = np.dot(w2, A1) + b2
A2 = sigmoid(Z2)

print("\n", A2)

plt.plot(np.array(err_vec))
plt.show()
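For completeness, a minimal sketch of the iRProp- update rule mentioned above (the function name, hyper-parameters, and usage lines here are illustrative, not taken from the original run):

import numpy as np

def irprop_minus_update(param, grad, prev_grad, step,
                        eta_plus=1.2, eta_minus=0.5,
                        step_min=1e-6, step_max=50.0):
    """One iRProp- step for a single parameter array (modifies param/step in place)."""
    sign_change = grad * prev_grad
    # gradient kept its sign: grow that weight's step size
    step[sign_change > 0] = np.minimum(step[sign_change > 0] * eta_plus, step_max)
    # gradient changed sign: shrink the step size and zero the gradient (the "minus" variant)
    step[sign_change < 0] = np.maximum(step[sign_change < 0] * eta_minus, step_min)
    grad = np.where(sign_change < 0, 0.0, grad)
    # move each weight by its own step size, in the direction opposite the gradient
    param -= np.sign(grad) * step
    prev_grad[...] = grad
    return param, prev_grad, step

# usage sketch (hypothetical state, initialised once before the training loop):
# step_w2, prev_dW2 = np.full_like(w2, 0.1), np.zeros_like(w2)
# ...inside the loop, after computing dW2:
# w2, prev_dW2, step_w2 = irprop_minus_update(w2, dW2, prev_dW2, step_w2)

Because the update uses only the sign of the gradient together with a per-weight step size, it is largely insensitive to the plateau that slows plain gradient descent here.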
