
regression with stochastic gradient descent algorithm

I am studying regression with the book Machine Learning in Action, and I came across the code below:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # logistic function

def stocGradAscent0(dataMatrix, classLabels):
    m, n = np.shape(dataMatrix)
    alpha = 0.01                       # learning rate (step size)
    weights = np.ones(n)               # initialize weights to all ones
    for i in range(m):                 # one pass over the m training samples
        h = sigmoid(sum(dataMatrix[i] * weights))          # predicted probability
        error = classLabels[i] - h                         # label minus prediction
        weights = weights + alpha * error * dataMatrix[i]  # gradient ascent step
    return weights
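
For what it's worth, a minimal call looks like this (the toy data and labels here are made up for illustration, not from the book):

X = np.array([[1.0, 0.5, 1.2],
              [1.0, 1.5, 0.3],    # first column acts as a constant bias term
              [1.0, 0.2, 2.0],
              [1.0, 2.2, 0.1]])
y = [0, 1, 0, 1]                  # class labels, one per row of X
w = stocGradAscent0(X, y)
print(w)                          # learned weight vector, one weight per column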

You can probably guess what the code does, but I don't understand it. I have read the book several times and searched related material like Wikipedia and Google, but I can't figure out where the exponential function comes from, or why it yields weights that minimize the differences. Why do we get proper weights by applying the exponential function to the sum of X*weights? It seems to be something like OLS. Anyway, we then get a result like the one below: (image not shown)

Thanks!

It's just the basics of logistic regression (linear regression passed through a sigmoid). In the for loop it computes the hypothesis and the error:

Z = β₀ + β₁X, where β₁ and X are vectors

hθ(x) = sigmoid(Z)

i.e. hθ(x) = 1 / (1 + e^(−(β₀ + β₁X)))
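
To connect this to the code: the line weights + alpha * error * dataMatrix[i] is the gradient ascent step on the log-likelihood of logistic regression. A sketch of the standard derivation (not from the book), in the same notation:

ℓ(θ) = y·log hθ(x) + (1 − y)·log(1 − hθ(x))

∂ℓ/∂θⱼ = (y − hθ(x))·xⱼ        (using σ'(z) = σ(z)·(1 − σ(z)))

θⱼ ← θⱼ + α·(y − hθ(x))·xⱼ

which is exactly error = classLabels[i] - h followed by the weights update.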

Then it updates the weights. Normally it's better to run the loop for a high number of iterations, like 1000 passes over the data; here it only runs m times, and m is probably small.
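
A minimal sketch of that, assuming numpy and the sigmoid defined above (the function name, numIter, and the decaying alpha schedule are my own choices, in the spirit of the book's improved version):

def stocGradAscentMultiPass(dataMatrix, classLabels, numIter=1000):
    m, n = np.shape(dataMatrix)
    weights = np.ones(n)
    for j in range(numIter):                     # many passes, not just one
        for i in np.random.permutation(m):       # visit samples in random order
            alpha = 4.0 / (1.0 + j + i) + 0.01   # decaying step size, never zero
            h = sigmoid(np.sum(dataMatrix[i] * weights))
            error = classLabels[i] - h
            weights = weights + alpha * error * dataMatrix[i]
    return weights

Randomizing the visiting order and decaying alpha both reduce the oscillation you get from cycling over the samples in a fixed order.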

I want to explain more, but I can't explain it better than this guy here

Happy learning!!
