Gradient Descent - Difference between theta as a list and as a numpy array
I've implemented a gradient descent algorithm that produces different results depending on whether my theta is a Python list or a NumPy array. When theta is a Python list my program works fine, but with theta = np.zeros((2, 1)) something goes wrong and my theta grows very fast.
num_iter = 1500
alpha = 0.01
theta = [0, 0]
#theta = np.zeros((2, 1), dtype=np.float64)
print(theta)

def gradient_descent(x, y, theta, alpha, iteration):
    m = y.size
    temp = np.zeros_like(theta, np.float64)
    for i in range(iteration):
        h = x @ theta
        temp[0] = (alpha/m)*(np.sum(h - y))
        temp[1] = (alpha/m)*(np.sum((h - y)*x[:,1]))
        theta[0] -= temp[0]
        theta[1] -= temp[1]
        print("theta0 {}, theta1 {}, Cost {}".format(theta[0], theta[1], compute_cost(x, y, theta)))
    return theta

theta = gradient_descent(X, y, theta, alpha, num_iter)
Output when theta is a numpy array:
theta0 [5.663961], theta1 [63.36898425], Cost 15846739.108595487
theta0 [-495.73201075], theta1 [-4010.76967073], Cost 65114528414.94523
theta0 [31736.05800912], theta1 [259011.3427287], Cost 271418872442062.44
.
.
.
theta0 [nan], theta1 [nan], Cost nan
theta0 [nan], theta1 [nan], Cost nan
theta0 [nan], theta1 [nan], Cost nan
Output when theta is a list:
theta0 0.05839135051546392, theta1 0.6532884974555672, Cost 6.737190464870008
theta0 0.06289175271039384, theta1 0.7700097825599365, Cost 5.9315935686049555
.
.
.
theta0 -3.6298120050247746, theta1 1.166314185951815, Cost 4.483411453374869
theta0 -3.6302914394043593, theta1 1.166362350335582, Cost 4.483388256587725
Your two thetas have different shapes: theta = [0, 0] behaves like a 1-D array of shape (2,), but theta = np.zeros((2, 1)) is a column vector of shape (2, 1). So if x has shape (m, 2), then x @ theta has shape (m,) in the first case but (m, 1) in the second. With y of shape (m,), the column-vector version makes h - y broadcast to an (m, m) matrix, so np.sum(h - y) adds up m*m entries instead of m residuals, and the updates explode.
For example,
t1 = [0,0]
t2 = np.zeros((2,1))
t3 = np.zeros((2,))
x = np.arange(6).reshape(3,2)
x @ t1
# array([0, 0, 0])
x @ t2
# array([[0.],
# [0.],
# [0.]])
x @ t3
# array([0., 0., 0.])
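The (2, 1) shape is what makes the original loop blow up: h = x @ theta is a column vector, y is 1-D, and their difference broadcasts. A small sketch of that mismatch (the shapes here are illustrative, matching the question's setup):

```python
import numpy as np

m = 3
h = np.zeros((m, 1))            # what x @ np.zeros((2, 1)) produces: a column vector
y = np.arange(m, dtype=float)   # 1-D targets of shape (m,)

diff = h - y                    # (m, 1) - (m,) broadcasts to (m, m)
print(diff.shape)               # (3, 3)
print(np.sum(diff))             # sums m*m entries instead of m residuals
```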
Changing to theta = np.zeros((2,)) is (I think) a quick fix.
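Putting it together, here is a self-contained sketch of the corrected loop with a flat (2,) theta, so x @ theta, y, and the residual all share shape (m,). The training data below is made up purely for illustration (true intercept 2, true slope 3), and the question's compute_cost is omitted:

```python
import numpy as np

# Illustrative data: y = 2 + 3*x plus a little noise.
rng = np.random.default_rng(0)
m = 50
X = np.column_stack([np.ones(m), rng.uniform(0, 10, size=m)])
y = 2.0 + 3.0 * X[:, 1] + rng.normal(0.0, 0.1, size=m)

alpha, num_iter = 0.01, 1500
theta = np.zeros(2)                 # shape (2,), not (2, 1)

for _ in range(num_iter):
    h = X @ theta                   # shape (m,), matches y
    theta[0] -= (alpha / m) * np.sum(h - y)
    theta[1] -= (alpha / m) * np.sum((h - y) * X[:, 1])

print(theta)  # approaches [2, 3] instead of diverging
```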