
Implementing stochastic gradient descent

I am trying to implement a basic version of stochastic gradient descent for multiple linear regression, with the L2 norm (squared error) as the loss function.
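
For reference, the per-sample update I am trying to implement (one gradient step on the squared error of a single sample) is theta <- theta + learning_rate * (y_i - x_i · theta) * x_i, where x_i is the i-th row of the design matrix.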

The result can be seen in this picture:

(plot: the generated data points with the fitted regression line)

It's pretty far off the ideal regression line, but I don't really understand why that's the case. I double-checked all array dimensions and they all seem to fit.

Below is my source code. If anyone can spot my error or give me a hint, I would appreciate it.

import numpy as np
import matplotlib.pyplot as plt

def SGD(x, y, learning_rate):
    theta = np.array([[0], [0]])          # parameters: [intercept, slope]

    # one single pass over the data, one update per sample
    for i in range(N):
        xi = x[i].reshape(1, -1)          # current sample as a row vector, shape (1, 2)
        y_pre = xi @ theta                # prediction for this sample

        # gradient step on the squared error of this sample
        theta = theta + learning_rate * (y[i] - y_pre[0][0]) * xi.T

    print(theta)

    return theta
    

N = 100
x = np.linspace(-2, 2, N)                      # N evenly spaced inputs in [-2, 2]
y = 4*x + 5 + np.random.uniform(-1, 1, N)      # noisy targets around the line y = 4x + 5

X = np.array([x**0, x**1]).T                   # design matrix: bias column and x column, shape (N, 2)

plt.scatter(x, y, s=6)

th = SGD(X, y, 0.1)

y_reg = np.matmul(X, th)                       # fitted values of the learned line
print(y_reg)
print(x)
plt.plot(x, y_reg)

plt.show()

Edit: Another solution was to shuffle the measurements with x = np.random.permutation(x).
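
For what it's worth, a minimal sketch of that shuffling step, permuting a shared index array so the rows of X and the entries of y stay paired (this assumes the X, y and SGD from the script above):

perm = np.random.permutation(len(y))     # random ordering of the sample indices
X_shuffled = X[perm]                     # rows of the design matrix, reordered
y_shuffled = y[perm]                     # targets reordered with the same permutation

th = SGD(X_shuffled, y_shuffled, 0.1)    # single-pass SGD, now over shuffled samples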

To illustrate my comment:

def SGD(x, y, n, learning_rate):
    theta = np.array([[0], [0]])

    # the original does exactly one pass over the data; do n passes instead
    for _ in range(n):
        for i in range(len(x)):
            xi = x[i].reshape(1, -1)      # current sample as a row vector
            y_pre = xi @ theta            # prediction for this sample

            # gradient step on the squared error of this sample
            theta = theta + learning_rate * (y[i] - y_pre[0][0]) * xi.T

    print(theta)

    return theta

SGD(X, y, 10, 0.01) yields the correct result.

(plot: result after 10 iterations)
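
For completeness, a sketch of how the adjusted function plugs back into the plotting code from the question (same x, y and X as above; parameters taken from the call shown):

th = SGD(X, y, 10, 0.01)      # 10 passes over the data, learning rate 0.01

y_reg = np.matmul(X, th)      # fitted values of the learned line
plt.scatter(x, y, s=6)        # noisy measurements
plt.plot(x, y_reg)            # learned regression line
plt.show()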
