Python, Deep Learning, gradient descent method example
I am studying the gradient descent method with *Deep Learning from Scratch*. In the book's example, there is some code that is hard to understand. This is the code:
import numpy as np

def gradient_descent(f, init_x, lr=0.01, step_num=100):
    x = init_x
    x_hist = []
    for i in range(step_num):
        x_hist.append(x)  # plot with x_hist
        grad = numerical_gradient(f, x)
        x -= lr * grad
    return x, x_hist

def function_2(x):
    return x[0]**2 + x[1]**2

init_x = np.array([-3.0, 4.0])
x, x_hist = gradient_descent(function_2, init_x, lr=0.1, step_num=100)
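(For context, `numerical_gradient` is a helper defined earlier in the book. Roughly, it computes a central-difference numerical gradient; a minimal sketch under that assumption, not necessarily the book's exact code:)

```python
import numpy as np

def numerical_gradient(f, x):
    """Central-difference gradient: df/dx_i ~ (f(x+h) - f(x-h)) / (2h)."""
    h = 1e-4
    grad = np.zeros_like(x)
    for i in range(x.size):
        tmp = x[i]
        x[i] = tmp + h
        fxh1 = f(x)      # f(x + h) in the i-th coordinate
        x[i] = tmp - h
        fxh2 = f(x)      # f(x - h) in the i-th coordinate
        grad[i] = (fxh1 - fxh2) / (2 * h)
        x[i] = tmp       # restore the original value
    return grad

def function_2(x):
    return x[0]**2 + x[1]**2

print(numerical_gradient(function_2, np.array([3.0, 4.0])))  # ~[6. 8.]
```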
I'm trying to plot x_hist to see how x decreases. But when I print x_hist, it comes out like this:
x_hist
[array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10]),
array([-6.11110793e-10, 8.14814391e-10])]
I can fix this problem if I change x_hist.append(x) to x_hist.append(x.copy()). Unfortunately, I don't know why this makes a difference. Can anyone tell me the difference between the two? (Sorry for my English.)
Your list x_hist stores references to x, not copies of its values. Because x -= lr * grad updates the array in place, every entry of x_hist ends up pointing to the same array object, so after the loop they all show the final value. Appending x.copy() instead stores an independent snapshot of the values at each step, which is why that correction works.
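A minimal sketch of the difference (variable names here are just for illustration):

```python
import numpy as np

# Appending the array itself stores a reference to one shared object.
x = np.array([-3.0, 4.0])
hist = []
hist.append(x)          # hist[0] and x are the SAME array
x -= 0.1 * x            # in-place update mutates that shared array
print(hist[0])          # reflects the update: ~[-2.7  3.6]

# Appending a copy stores an independent snapshot.
y = np.array([-3.0, 4.0])
hist2 = []
hist2.append(y.copy())  # hist2[0] is a separate array
y -= 0.1 * y
print(hist2[0])         # unchanged: [-3.  4.]
```

This also explains why writing x = x - lr * grad (instead of x -= lr * grad) would sidestep the problem: plain subtraction creates a new array on every step, so each appended reference points to a distinct object.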