Gradient descent derivation for orthogonal linear regression from scratch in Numpy Python


I wanted to derive gradient descent from scratch using the error function for orthogonal linear regression, so that I could write Python code in NumPy. However, I am unsure about the error function for this model:

y = W0 + W1 * X

Can anyone help me derive this? Thanks in advance.

Let's say you have y = m*x + p, with (m, p) being the parameters you are tuning.

When you feed an observation sample X' through your model, you obtain a vector of predictions Y'. Then you can compute your loss, say the mean squared error, as e = 1 / n * sum_i (Yi - Yi')², where Y holds the y-coordinates associated with the x-coordinates in X'.
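In NumPy this loss is direct to write down. A minimal sketch, assuming x and y are 1-D arrays of the same length holding the sample's x- and y-coordinates, and m and p are the current parameter estimates:

```python
import numpy as np

def mse_loss(x, y, m, p):
    """Mean squared error of the line y = m*x + p on the sample (x, y)."""
    y_pred = m * x + p              # predictions Y'
    return np.mean((y - y_pred) ** 2)
```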

Now, we want to differentiate e with respect to the parameters:

  • de / dm = -2 / n * sum_i ( Xi' * ( Yi - p - m * Xi' ) )
  • de / dp = -2 / n * sum_i ( Yi - p - m * Xi' )

Then you minimize e by stepping against this gradient:

  • m <- m - lr * de / dm
  • p <- p - lr * de / dp

With lr being your learning rate.
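Putting the two gradient formulas and the update rule together, one gradient descent step could be sketched as below; gradient_step is just an illustrative name, and the x, y arrays are the same as in the loss sketch above:

```python
def gradient_step(x, y, m, p, lr):
    """One gradient descent update of (m, p) for the mean squared error."""
    n = len(x)
    r = y - p - m * x                     # residuals Yi - p - m*Xi'
    de_dm = -2.0 / n * np.sum(x * r)      # de / dm
    de_dp = -2.0 / n * np.sum(r)          # de / dp
    return m - lr * de_dm, p - lr * de_dp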

Do the math on your own though, just in case I made an error.

EDIT: I've implemented it quickly using those formulas and the math seems ok (at least it converges to the solution quickly).
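A quick check along those lines might look as follows; this is an illustrative reconstruction with made-up data and hyperparameters, not the answerer's actual code, and it reuses the gradient_step sketch above (so it assumes numpy is imported and gradient_step is defined as shown earlier):

```python
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.05, 200)   # true m = 3.0, p = 0.5

m, p = 0.0, 0.0
for _ in range(2000):
    m, p = gradient_step(x, y, m, p, lr=0.1)

print(m, p)   # should end up close to 3.0 and 0.5
```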
