
Implementing Stochastic Gradient Descent in Python

I've been trying to implement stochastic gradient descent as part of a recommendation system following these equations:

[image: the SGD update equations for the factorization model]
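(The original image is not reproduced here. Reconstructing from the code below, the intended updates appear to be the standard regularized matrix-factorization SGD rules; the notation and the pairing of the learning rates with P and Q are an assumption, not the original figure:)

$$e_{xi} = r_{xi} - q_i^\top p_x$$

$$q_i \leftarrow q_i + \mu_Q\,\bigl(2\,e_{xi}\,p_x - 2\,\lambda_1\,q_i\bigr), \qquad p_x \leftarrow p_x + \mu_P\,\bigl(2\,e_{xi}\,q_i - 2\,\lambda_2\,p_x\bigr)$$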

I have:

import numpy as np

for step in range(max_iter):
    e = 0
    for x in range(len(R)):
        for i in range(len(R[x])):
            if R[x][i] > 0:
                # twice the error on the observed rating R[x][i]
                exi = 2 * (R[x][i] - np.dot(Q[:,i], P[x,:]))
                qi, px = Q[:,i], P[x,:]

                # regularized SGD steps for the item and user factors
                qi += _mu_2 * (exi * px - (2 * _lambda_1 * qi))
                px += _mu_1 * (exi * qi - (2 * _lambda_2 * px))

                Q[:,i], P[x,:] = qi, px

The output isn't quite what I expect, but I can't put my finger on the cause. Please help me identify the problem in my code.

I'd much appreciate your support.

When updating `qi` and `px`, `_mu_1` and `_mu_2` should be swapped.
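A minimal sketch of the loop with the learning rates swapped as suggested, assuming `_mu_1` belongs with the `Q` (item-factor) update and `_mu_2` with the `P` (user-factor) update. The toy ratings matrix, factor dimension, and hyperparameter values below are invented for illustration and are not from the original post:

import numpy as np

# Hypothetical toy data and hyperparameters, for illustration only.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

n_factors = 2
rng = np.random.default_rng(0)
P = rng.random((R.shape[0], n_factors))   # user factors, one row per user
Q = rng.random((n_factors, R.shape[1]))   # item factors, one column per item

_mu_1, _mu_2 = 0.01, 0.01                 # learning rates
_lambda_1, _lambda_2 = 0.02, 0.02         # regularization strengths
max_iter = 5000

for step in range(max_iter):
    for x in range(len(R)):
        for i in range(len(R[x])):
            if R[x][i] > 0:               # update only on observed ratings
                exi = 2 * (R[x][i] - np.dot(Q[:,i], P[x,:]))
                qi, px = Q[:,i], P[x,:]

                # learning rates swapped relative to the question:
                # _mu_1 now steps the item factors, _mu_2 the user factors
                qi += _mu_1 * (exi * px - (2 * _lambda_1 * qi))
                px += _mu_2 * (exi * qi - (2 * _lambda_2 * px))

print(np.round(P @ Q, 2))                 # reconstructed ratings matrix

Note that `qi` and `px` are NumPy views, so the in-place `+=` writes straight back into `Q` and `P`; the final `Q[:,i], P[x,:] = qi, px` assignment from the question is therefore not needed.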
