Using Python and NumPy to compute the gradient of the regularized loss function

I have the following formula:

[image: formula for the gradient of the regularized loss function]

That I'm trying to use in a function to compute the gradient of the regularized loss function. I have dataSet, which is an array of [(x(1), t(1)), ..., (x(n), t(n))], with n = 15 training points.

Here's what I have so far, keeping in mind that the gradient of the loss is a vector here.

def gradDescent(alpha, t, w, Z):
    returned = 2 * alpha * w  # regularization term
    y = []
    i = 0
    while i < len(dataSet):  # dataSet is the global list of (x, t) pairs
        y.append(dataSet[i][0] * w[i])
        i += 1
    return returned - (2 * np.sum(np.subtract(t, y)) * Z)

The issue is that w always has length M + 1, whereas dataSet (and hence t) has length 15. This results in an out-of-bounds multiplication. Am I applying the formula wrong? Any help?

I believe you are messing up your indexing on the data set array. Also make sure your data is actually defined as a NumPy array and not a list: lists index like list[i][j], while NumPy arrays can also index like array[i, j].

So I would run your data object through:

import numpy as np
dataSet = np.asarray(dataSet)
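
As a quick sanity check of the indexing difference, here is a minimal sketch with made-up values (not your actual data):

import numpy as np

pairs = [(0.0, 1.0), (0.5, 2.0)]  # a plain Python list of (x, t) tuples
print(pairs[0][1])                # lists need chained indexing -> 1.0
# print(pairs[0, 1])              # TypeError: list indices can't be tuples
arr = np.asarray(pairs)
print(arr[0, 1])                  # NumPy arrays accept comma indexing -> 1.0
print(arr[0][1])                  # chained indexing also works on arrays -> 1.0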

Then replace your while loop with this while loop:

while i < len(dataSet):
    y.append(dataSet[i, 0] * w[i])
    i += 1
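
Beyond the indexing, the length mismatch you describe (w has length M + 1 while dataSet has 15 rows) suggests each prediction should use all of w rather than one w[i] per data point. Since the formula image isn't visible here, this is only a sketch under the assumption that y(x, w) is a degree-M polynomial and that Z stands for the corresponding basis terms:

import numpy as np

def grad_regularized_loss(alpha, w, dataSet):
    # Gradient of sum_n (t_n - y(x_n, w))**2 + alpha * ||w||**2,
    # assuming y(x, w) = sum_j w[j] * x**j (a degree-M polynomial).
    dataSet = np.asarray(dataSet, dtype=float)
    x, t = dataSet[:, 0], dataSet[:, 1]
    X = np.vander(x, len(w), increasing=True)   # X[n, j] = x_n ** j
    residuals = t - X @ w                       # one residual per data point
    return 2 * alpha * w - 2 * X.T @ residuals  # length M + 1, same as w

Here X.T @ residuals has length M + 1, matching w, so the two lengths (M + 1 and 15) are never mixed in a single index.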
