
In gradient descent for collaborative filtering, are x and theta updated simultaneously?

I'm taking Andrew Ng's machine learning course and I'm on chapter 16: Recommender Systems. I just finished watching the part about collaborative filtering. In it, he talks about how you can guess the parameters theta, use them to predict x, use the predicted x to learn better parameters, and so on. He also said this can be done simultaneously, and gave the gradient descent algorithm for it:

[Figure: gradient descent algorithm for collaborative filtering]
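For reference, the update rules shown on that slide are usually written as below. This is a reconstruction in LaTeX from the standard course notation (alpha is the learning rate, lambda the regularization parameter, and r(i,j)=1 when user j has rated movie i), so the exact notation in the slide may differ:

```latex
% Simultaneous gradient descent updates for collaborative filtering
% (reconstruction of the standard rules; notation may differ from the slide).
% For every movie i, user j, and feature k:
\begin{aligned}
x_k^{(i)} &:= x_k^{(i)} - \alpha\Big(\sum_{j:\,r(i,j)=1}\big((\theta^{(j)})^{T}x^{(i)} - y^{(i,j)}\big)\,\theta_k^{(j)} + \lambda\, x_k^{(i)}\Big)\\
\theta_k^{(j)} &:= \theta_k^{(j)} - \alpha\Big(\sum_{i:\,r(i,j)=1}\big((\theta^{(j)})^{T}x^{(i)} - y^{(i,j)}\big)\,x_k^{(i)} + \lambda\, \theta_k^{(j)}\Big)
\end{aligned}
```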

I want to ask whether x and theta are updated simultaneously. For example, in each iteration: after performing a single gradient descent step on x, do I recalculate the squared error sum using the new values of x, then perform a gradient descent step on theta, and repeat until convergence? Or do I perform a single gradient descent step on x and, using the same squared error sum, also perform a gradient descent step on theta?

You can do it in one simultaneous update. You could even summarize it in a single formula that updates x and theta at the same time:

[Image: combined update formula for x and theta]
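To make the distinction concrete, here is a minimal NumPy sketch of one such simultaneous step (not taken from the course; the function name, argument names, and hyperparameter values are illustrative). The key point is that both gradients are computed from the same error term before either X or Theta is overwritten:

```python
import numpy as np

def collab_filtering_step(X, Theta, Y, R, alpha=0.01, lam=0.1):
    """One simultaneous gradient-descent step for collaborative filtering.

    A minimal sketch (not the course's code): X is (num_movies, num_features),
    Theta is (num_users, num_features), Y is (num_movies, num_users) ratings,
    and R is a 0/1 mask marking which ratings exist.
    """
    # Prediction error on the rated entries only, computed once from the
    # *current* X and Theta before either is modified.
    E = (X @ Theta.T - Y) * R            # (num_movies, num_users)

    # Gradients of the joint regularized cost; both reuse the same E.
    X_grad = E @ Theta + lam * X
    Theta_grad = E.T @ X + lam * Theta

    # Simultaneous update: neither gradient sees the other parameter's new value.
    X_new = X - alpha * X_grad
    Theta_new = Theta - alpha * Theta_grad
    return X_new, Theta_new
```

Because the error E is computed once per iteration and reused for both gradients, this corresponds to the second option in the question: x and theta are updated together from the same squared error sum, and the error is only recomputed on the next pass.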
