Gradient descent not updating theta values
Gradient descent seems to fail when using the vectorized version of the gradient as shown below:
theta = theta - (alpha/m * (X * theta-y)' * X)';
The theta values are not updated, so no matter what the initial theta is, that is still its value after running gradient descent:
example1:
m = 1
X = [1]
y = [0]
theta = 2
theta = theta - (alpha/m .* (X .* theta-y)' * X)'
theta =
2.0000
example2:
m = 1
X = [1;1;1]
y = [1;0;1]
theta = [1;2;3]
theta = theta - (alpha/m .* (X .* theta-y)' * X)'
theta =
1.0000
2.0000
3.0000
Is theta = theta - (alpha/m * (X * theta-y)' * X)';
the correct vectorized implementation of gradient descent?
theta = theta - (alpha/m * (X * theta-y)' * X)';
is indeed a correct vectorized implementation of gradient descent.
You simply forgot to set the learning rate alpha.
Set alpha = 0.01 and your code becomes:
m = 1 # number of training examples
X = [1;1;1]
y = [1;0;1]
theta = [1;2;3]
alpha = 0.01
theta = theta - (alpha/m .* (X .* theta-y)' * X)'
theta =
0.96000
1.96000
2.96000
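For readers working outside Octave, here is a minimal NumPy sketch of the same vectorized update rule, theta = theta - (alpha/m) * X' * (X*theta - y). The data below is a hypothetical single-feature example, not the exact arrays from the answer (whose element-wise `.*` form assumes a different shape convention); it only illustrates that the step does nothing useful until alpha is set.

```python
import numpy as np

# Hypothetical single-feature data for the vectorized update:
#   theta <- theta - (alpha/m) * X' * (X*theta - y)
m = 3                                 # number of training examples
X = np.array([[1.0], [1.0], [1.0]])   # m x n design matrix (n = 1)
y = np.array([[1.0], [0.0], [1.0]])   # m x 1 targets
theta = np.array([[2.0]])             # n x 1 parameters
alpha = 0.01                          # learning rate; without it the step is undefined

grad = X.T @ (X @ theta - y) / m      # n x 1 gradient of the squared-error cost
theta = theta - alpha * grad          # theta moves away from 2.0 once alpha is set
print(theta)
```

With alpha set to 0.01, theta changes on every iteration; if alpha were 0 (or the multiplication silently dropped), theta would stay at its initial value, which is exactly the symptom described in the question.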