
Error in Ridge Regression Gradient Descent (Python)

I am currently trying to write code for ridge regression using gradient descent.

My code goes like this:

def gd_ridge(X, Y, beta, n_iter, learning_rate, lam):
    m = X.shape[0]

    past_costs = []
    past_betas = [beta]

    for i in range(n_iter):
        pred = np.dot(X, beta)
        err = pred - Y
        cost = cost_reg(pred, Y, lam)
        past_costs.append(cost)
        # gradient step: data term plus L2 penalty, both scaled by 1/m
        beta = beta - learning_rate * (np.dot(err, X) + lam * beta) / m
        past_betas.append(beta)

    return beta, past_costs, past_betas

where lam is the regularisation hyperparameter (lambda is a reserved keyword in Python, so it can't be used as a parameter name).

But I always end up with an error on the beta update:

can't multiply sequence by non-int of type 'float'

Can anyone help me with this? I've tried different equations and I end up with the same error.

Very likely you're not passing in numpy arrays, but lists instead. Pass in numpy arrays, or do beta = np.array(beta) inside your function.
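A minimal end-to-end sketch of that fix: coerce the inputs with np.asarray at the top of the function so list inputs no longer raise the TypeError. Note that cost_reg is not shown in the question, so the version below is an assumed reconstruction of a regularised squared-error cost:

```python
import numpy as np

# Assumed reconstruction of the asker's helper (not shown in the question):
# mean squared error plus an L2 penalty on beta.
def cost_reg(pred, Y, lam, beta, m):
    return (np.sum((pred - Y) ** 2) + lam * np.sum(beta ** 2)) / (2 * m)

def gd_ridge(X, Y, beta, n_iter, learning_rate, lam):
    # Coerce inputs to float arrays so elementwise arithmetic like
    # learning_rate * beta works even when plain lists are passed in.
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    beta = np.asarray(beta, dtype=float)

    m = X.shape[0]
    past_costs, past_betas = [], [beta]

    for _ in range(n_iter):
        pred = X @ beta
        err = pred - Y
        past_costs.append(cost_reg(pred, Y, lam, beta, m))
        # Ridge gradient step: data term plus L2 penalty, both scaled by 1/m.
        beta = beta - learning_rate * (X.T @ err + lam * beta) / m
        past_betas.append(beta)

    return beta, past_costs, past_betas

# Plain Python lists would trigger the original TypeError without the coercion.
X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]]  # first column: intercept
Y = [2.0, 3.0, 4.0, 5.0]
beta, costs, _ = gd_ridge(X, Y, [0.0, 0.0], n_iter=1000, learning_rate=0.1, lam=0.1)
print(beta, costs[0], costs[-1])
```

With the small penalty used here, the cost drops steadily from its starting value and beta ends up close to the unregularised fit [1, 1].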
