Gradient descent with Newton's method using the Hessian matrix
I am implementing gradient descent for regression using Newton's method, as explained in section 8.3 of Machine Learning: A Probabilistic Perspective (Murphy). I am working with two-dimensional data in this implementation, and I am using the following notation.
x = input data points (m x 2)
y = labelled outputs (m) corresponding to the input data
H = Hessian matrix, defined as H = (2/m) * x^T x
The gradient descent (Newton) update is theta := theta - H^-1 * g(theta), where g is the gradient of the loss function, which is defined as J(theta) = (1/m) * sum_i (x_i . theta - y_i)^2
In my case theta is a 2 x 1 array and H is a 2 x 2 matrix.
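For reference, the Hessian and gradient of this quadratic loss can be written out directly with NumPy. This is a minimal sketch under the definitions above (the function name `newton_step` is illustrative, not from the book):

```python
import numpy as np

def newton_step(x, y, theta):
    """One Newton update for the least-squares loss J = (1/m) * sum((x @ theta - y)**2)."""
    m = x.shape[0]
    grad = 2.0 * x.T @ (x @ theta - y) / m  # gradient of J at theta
    H = 2.0 * x.T @ x / m                   # Hessian of J (constant for a quadratic loss)
    # theta - H^-1 * grad, solved as a linear system instead of forming H^-1 explicitly
    return theta - np.linalg.solve(H, grad)
```

Because the loss is quadratic, a single Newton step from any starting point lands exactly on the least-squares minimiser, which gives a convenient way to check an implementation.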
Here is my Python implementation. However, it is not working: the cost increases on each iteration.
def loss(x, y, theta):
    m, n = np.shape(x)
    cost_list = []
    for i in range(n):
        x_0 = x[:, i].reshape((m, 1))
        predicted = np.dot(x_0, theta[i])
        error = predicted - y
        cost = np.sum(error ** 2) / m
        cost_list.append(cost)
    cost_list = np.array(cost_list).reshape((2, 1))
    return cost_list
def NewtonMethod(x, y, theta, maxIterations):
    m, n = np.shape(x)
    xTrans = x.transpose()
    H = 2 * np.dot(xTrans, x) / m
    Hinv = np.linalg.inv(H)
    thetaPrev = np.zeros_like(theta)
    best_iter = maxIterations
    for i in range(maxIterations):
        cost = loss(x, y, theta)
        theta = theta - np.dot(Hinv, cost)
        if np.allclose(theta, thetaPrev, rtol=0.001, atol=0.001):
            break
        else:
            thetaPrev = theta
            best_iter = i
    return theta
Here are the sample values I used:
import numpy as np
x = np.array([[-1.7, -1.5], [-1.0, -0.3], [1.7, 1.5], [-1.2, -0.7], [0.6, 0.1]])
y = np.array([ 0.3 , 0.07, -0.2, 0.07, 0.03 ])
theta = np.zeros(2)
NewtonMethod(x,y,theta,100)
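For comparison, the value a correct Newton implementation should converge to on this sample data can be obtained with the closed-form least-squares solver `np.linalg.lstsq` (a sanity-check sketch, assuming the least-squares loss defined above is the target):

```python
import numpy as np

x = np.array([[-1.7, -1.5], [-1.0, -0.3], [1.7, 1.5], [-1.2, -0.7], [0.6, 0.1]])
y = np.array([0.3, 0.07, -0.2, 0.07, 0.03])

# Closed-form minimiser of (1/m) * sum((x @ theta - y)**2)
theta_ref, *_ = np.linalg.lstsq(x, y, rcond=None)
print(theta_ref)
```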
Need help / suggestions to fix this problem.
Thanks