
In gradient descent for collaborative filtering, are x and theta updated simultaneously?

I'm taking Andrew Ng's machine learning course and I'm on chapter 16: Recommender Systems. I just finished watching the part about collaborative filtering. In it, he talks about how you can guess the parameters theta, use them to predict x, then use the predicted x to learn better parameters, and so on. He also said it can be done simultaneously and gave the gradient descent algorithm for it:

[Image: the collaborative filtering gradient descent update rules from the lecture]
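(Transcribing the slide from memory, so the notation may be slightly off, the update rules look roughly like this:)

$$
x_k^{(i)} := x_k^{(i)} - \alpha \Big( \sum_{j:\, r(i,j)=1} \big( (\theta^{(j)})^T x^{(i)} - y^{(i,j)} \big)\, \theta_k^{(j)} + \lambda x_k^{(i)} \Big)
$$

$$
\theta_k^{(j)} := \theta_k^{(j)} - \alpha \Big( \sum_{i:\, r(i,j)=1} \big( (\theta^{(j)})^T x^{(i)} - y^{(i,j)} \big)\, x_k^{(i)} + \lambda \theta_k^{(j)} \Big)
$$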

I want to ask whether x and theta are updated simultaneously. E.g., in each iteration: after performing a single gradient descent step on x, do I recalculate the squared error sum using the new values of x, then perform a gradient descent step on theta, and repeat until convergence? Or do I perform a single gradient descent step on x and, using the same squared error sum, also perform a gradient descent step on theta?

You can do it in one simultaneous update. You could even summarize it in one formula that updates x and theta at the same time.
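Here is a minimal NumPy sketch of what one such simultaneous iteration could look like (the names X, Theta, Y, R, alpha, lam and the helper cf_gradient_step are mine, not from the course). The key point is that both gradients are computed from the same error term before either set of parameters is changed:

```python
import numpy as np

def cf_gradient_step(X, Theta, Y, R, alpha=0.01, lam=0.1):
    """One simultaneous gradient-descent step for collaborative filtering.

    X     : (num_items, num_features)  item feature vectors x
    Theta : (num_users, num_features)  user parameter vectors theta
    Y     : (num_items, num_users)     ratings matrix
    R     : (num_items, num_users)     1 where a rating exists, else 0
    """
    # Prediction error computed ONCE from the current X and Theta,
    # restricted to the entries that actually have ratings.
    E = (X @ Theta.T - Y) * R

    # Gradients for both parameter sets use that same error term.
    X_grad = E @ Theta + lam * X
    Theta_grad = E.T @ X + lam * Theta

    # Both updates are applied together -- neither gradient sees the
    # other's already-updated values within this iteration.
    X_new = X - alpha * X_grad
    Theta_new = Theta - alpha * Theta_grad
    return X_new, Theta_new

# Example usage with random data (shapes only for illustration):
rng = np.random.default_rng(0)
X, Theta = rng.normal(size=(5, 3)), rng.normal(size=(4, 3))
Y = rng.integers(1, 6, size=(5, 4)).astype(float)
R = rng.integers(0, 2, size=(5, 4)).astype(float)
for _ in range(100):
    X, Theta = cf_gradient_step(X, Theta, Y, R)
```

If you instead recomputed E between the two updates, you would get the alternating scheme described earlier in the lecture; the simultaneous version simply treats x and theta as one combined parameter vector of a single cost function.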
