
Linear Regression with Python - gradient descent error

I have been trying to implement my own Linear Regression from scratch in Python, but I have been facing an issue for the last few days.

import pandas as pd
import numpy as np
from sklearn.datasets import load_boston
import matplotlib.pyplot as plt

Your cost function is wrong; it should be:

 cost = 1/(2*m) * np.sum(np.power(error,2)) 

Also, try initializing your weights as random values between 0 and 1, and scale your inputs to the 0-1 range.
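
For reference, a minimal sketch of the corrected cost together with the weight initialization and 0-1 scaling suggested above; the names X, y, W, and m are illustrative toy data, not the asker's actual variables:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3)) * 50                              # toy features on an arbitrary scale
y = X @ np.array([1.5, -2.0, 0.5]) + 0.3                   # toy targets
m = X.shape[0]

X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # scale each column to the 0-1 range
W = rng.random(X.shape[1])                                 # random weights in [0, 1)

error = X @ W - y
cost = 1/(2*m) * np.sum(np.power(error, 2))                # cost with the 1/(2*m) factor
print(cost)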

I had the same problem, and I solved it by standardizing the x values.
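
A minimal sketch of what standardizing the x values can look like; X here is a hypothetical feature matrix, and z-score standardization is one common choice:

import numpy as np

X = np.random.default_rng(1).random((100, 3)) * 50   # hypothetical features on an arbitrary scale
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)        # each column: zero mean, unit variance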

I think you are making a mistake in the gradient descent algorithm. When updating the values of the "W" vector, it should be:

W = W - (learning_rate/m) * derivate.sum()
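
A minimal sketch of one vectorized reading of this update, where the per-sample derivatives are summed per feature with X.T @ error; the shapes (X of shape (m, n), y of shape (m,), W of shape (n,)) and the toy data are assumptions, not the asker's actual code:

import numpy as np

rng = np.random.default_rng(2)
X = rng.random((100, 3))                 # toy features already in the 0-1 range
y = X @ np.array([1.0, 2.0, 3.0])        # toy targets
W = rng.random(3)
m = X.shape[0]
learning_rate = 0.1                      # a moderate rate works here because X is scaled

error = X @ W - y                        # prediction error, shape (m,)
derivate = X.T @ error                   # per-feature sum of the per-sample derivatives
W = W - (learning_rate/m) * derivate     # the update described above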

The learning rate is too large. I tried learning_rate = 0.000001, and it converges normally.
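
A quick way to check this claim is to run the same descent loop with a large and a small learning rate and compare the final cost; the toy data, the loop, and the helper final_cost below are hypothetical, with deliberately unscaled features so the effect is visible:

import numpy as np

rng = np.random.default_rng(3)
X = rng.random((100, 2)) * 1000          # deliberately unscaled features
y = X @ np.array([0.5, -1.0])
m = X.shape[0]

def final_cost(learning_rate, steps=200):
    W = np.zeros(X.shape[1])
    for _ in range(steps):
        error = X @ W - y
        W = W - (learning_rate/m) * (X.T @ error)
    return 1/(2*m) * np.sum((X @ W - y) ** 2)

print(final_cost(0.01))        # diverges: the cost blows up to inf/nan
print(final_cost(0.000001))    # converges: the cost decreases toward zero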
