
Gradient Descent using NumPy - Machine Learning

I'm trying to implement a gradient descent algorithm using NumPy. I want to avoid any loops in the algorithm, so I'm using matrices and numpy.dot to do the calculations. I'm sure of the math, but whenever I try to run this function I get an error message:

def grad(feature_matrix, output, initial_weights, step_size, tolerance):
    converged = False
    w = np.array(initial_weights) # make sure it's a numpy array
    X=feature_matrix
    y=output
    s=step_size
    t=0 
    RSS=0
    J=[]
    while not converged:
        y_h=np.dot(X,w)
        e=y-_yh
        w=w+s*2*np.dot(np.transpose(X),e)
        gradient_magnitude=sqrt(np.dot(np.transpose(X),e)
        RSS=np.dot(e,e)
        J.append(RSS)
        t=t+1
        if gradient_magnitude < tolerance:
            converged = True
    return(weights,J,t) 

I always get this error:

    File "<ipython-input-14-db210106141b>", line 15
    RSS=np.dot(e,e)
      ^
SyntaxError: invalid syntax

If I delete the RSS=np.dot(e,e) line and try to run the function again, I get:

    File "<ipython-input-15-b0b1a5aebd0c>", line 16
    J.append(RSS)
    ^
SyntaxError: invalid syntax

It seems to be something with the function structure. It might be something obvious that I'm missing, but I've been looking at this function for three days, comparing it with other examples, and I just can't find the error.

Please help!

The actual error often sits before the point where Python reports that a syntax error has occurred, so the line in the traceback is not necessarily the line at fault. In this case, a closing parenthesis, ) , is missing on this line:

gradient_magnitude=sqrt(np.dot(np.transpose(X),e)
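Once that parenthesis is restored, a few further bugs will surface when the function actually runs: `_yh` is a typo for `y_h`, `sqrt` is used without being imported (and is applied to a vector where the gradient's norm is intended), and the `return` statement references `weights`, which is never defined (it should be `w`). Here is a corrected sketch that keeps your variable names; using the norm of the gradient as the stopping criterion is the usual convention, and the sign handling below is equivalent to your `w = w + s*2*X'e` update:

```python
import numpy as np

def grad(feature_matrix, output, initial_weights, step_size, tolerance):
    converged = False
    w = np.array(initial_weights, dtype=float)  # make sure it's a float numpy array
    X = feature_matrix
    y = output
    s = step_size
    t = 0
    J = []
    while not converged:
        y_h = np.dot(X, w)                    # predictions
        e = y - y_h                           # errors ("_yh" was a typo for "y_h")
        g = -2 * np.dot(np.transpose(X), e)   # gradient of RSS with respect to w
        w = w - s * g                         # descent step (same update as the original)
        # the missing ")" was here; also take the norm of the gradient vector,
        # not the square root of a vector
        gradient_magnitude = np.sqrt(np.dot(g, g))
        RSS = np.dot(e, e)
        J.append(RSS)
        t = t + 1
        if gradient_magnitude < tolerance:
            converged = True
    return w, J, t   # the original returned the undefined name "weights"
```

Note that the step size still has to be small enough for the iteration to converge; with a step size that is too large, the weights diverge instead, and the loop never terminates.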
