How can I visualise this gradient descent algorithm?

How can I visually display this gradient descent algorithm (e.g. as a graph)?

import numpy
import matplotlib.pyplot as plt

# Sigmoid activation applied to the sum of products (sop).
def sigmoid(sop):
    return 1.0 / (1 + numpy.exp(-1 * sop))

# Squared error between the prediction and the target.
def error(predicted, target):
    return numpy.power(predicted - target, 2)

# Derivative of the error with respect to the prediction.
def error_predicted_deriv(predicted, target):
    return 2 * (predicted - target)

# Derivative of the sigmoid activation with respect to its input.
def activation_sop_deriv(sop):
    return sigmoid(sop) * (1.0 - sigmoid(sop))

# Derivative of the sum of products with respect to the weight.
def sop_w_deriv(x):
    return x

# One gradient descent step on the weight.
def update_w(w, grad, learning_rate):
    return w - learning_rate * grad

x = 0.1
target = 0.3
learning_rate = 0.01
w = numpy.random.rand()
print("Initial W : ", w)

iterations = 10000

for k in range(iterations):
    # Forward Pass
    y = w * x
    predicted = sigmoid(y)
    err = error(predicted, target)

    # Backward Pass
    g1 = error_predicted_deriv(predicted, target)

    g2 = activation_sop_deriv(predicted)

    g3 = sop_w_deriv(x)

    grad = g3 * g2 * g1
    # print(predicted)

    w = update_w(w, grad, learning_rate)

I tried making a very simple plot with matplotlib but couldn't get the line to actually display (the graph initialised properly, but the line didn't appear).

Here's what I did:

plt.plot(iterations, predicted)
plt.ylabel("Prediction")
plt.xlabel("Iteration Number")
plt.show()

I tried doing a search, but none of the resources I found applied to this particular formulation of gradient descent.

Both iterations and predicted are scalar values in your code, which is why you can't generate the line chart. You would need to store their values in two arrays in order to plot them:

K = 10000

iterations = numpy.arange(K)
predicted = numpy.zeros(K)

for k in range(K):
    # Forward Pass
    y = w * x
    predicted[k] = sigmoid(y)
    err = error(predicted[k], target)

    # Backward Pass
    g1 = error_predicted_deriv(predicted[k], target)
    g2 = activation_sop_deriv(predicted[k])
    g3 = sop_w_deriv(x)

    grad = g3 * g2 * g1

    # print(predicted[k])

    w = update_w(w, grad, learning_rate)
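
With iterations and predicted stored as arrays, the plotting code from your question then works unchanged:

plt.plot(iterations, predicted)
plt.ylabel("Prediction")
plt.xlabel("Iteration Number")
plt.show()

If you also want to see how the error evolves, the same pattern applies: store each err value in an array (e.g. errors = numpy.zeros(K), filled inside the loop) and plot it against iterations. Note that the errors array is an illustrative addition, not part of your original code.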
