
Implementing subgradient and stochastic descent in Python

I want to implement subgradient and stochastic descent using a cost function, calculate the number of iterations it takes to find a perfect classifier for the data, and also find the weights (w) and bias (b). The dataset is four-dimensional.

This is my cost function: [image: cost function]
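Since the image did not survive, here is a plausible reconstruction of the cost from the code below (a perceptron-style criterion over the M samples); treat it as an assumption, not the original formula:

J(w, b) = \sum_{m=1}^{M} \max\bigl(0,\; -Y_m (w \cdot X_m + b)\bigr)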

I have taken the derivative of the cost function; here it is: [image: subgradient of the cost function]
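Again reconstructing from the code (an assumption), the subgradient would be, writing [·] for the indicator function:

\partial J / \partial w = -\sum_{m=1}^{M} Y_m X_m \, [\, -Y_m (w \cdot X_m + b) \ge 0 \,]

\partial J / \partial b = -\sum_{m=1}^{M} Y_m \, [\, -Y_m (w \cdot X_m + b) \ge 0 \,]

Note the leading minus sign; it matters for the descent update w ← w − learn_rate · ∂J/∂w below.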

When I run my code I get a lot of errors. Can someone please help?

Here is my code in Python:

import numpy as np

learn_rate = 1
w = np.zeros((4,1))
b = 0
M = 1000

data = '/Users/labuew/Desktop/dataset.data'

#calculating the gradient

def cal_grad_w(data, w, b):
    for i in range (M):
        sample = data[i,:]
        Ym = sample[-1]
        Xm = sample[0:4]
        if -Ym[i]*(w*Xm+b) >= 0:
            tmp = 1.0
        else:
            tmp = 0
        value = Ym[i]*Xm*tmp
        sum = sum +value
    return sum
def cal_grad_b(data, w, b):
    for i in range (M):
        sample = data[i,:]
        Ym = sample[-1]
        Xm = sample[0:4]
        if -Ym*(w*Xm+b) >= 0:
            tmp = 1.0
        else:
            tmp = 0
        value = Ym[i]*x*tmp
        sum = sum +value
    return sum

if __name__ == '__main__':
    counter = 0
    while 1:
        counter +=1
        dw = cal_grad_w(data, w, b)
        db = cal_grad_b(data, w, b)
        if dw == 0 and db == 0:
            break
        w = w - learn_rate*dw
        b = b - learn_rate *dw
    print(counter,w,b)

Are you missing the numpy load function?

data = np.load('/Users/labuew/Desktop/dataset.data')

It looks like you're doing the numerics on the path string itself.
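As a side note, np.load expects NumPy's binary .npy/.npz format; if dataset.data is a plain-text file, np.loadtxt would be the usual route. A minimal sketch, assuming whitespace-separated rows of four features followed by a label:

import numpy as np

# assumed plain-text layout: 4 feature columns + 1 label column per row
data = np.loadtxt('/Users/labuew/Desktop/dataset.data')
print(data.shape)  # expecting (1000, 5) under this assumption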

Also:

Ym = sample[-1]
Xm = sample[0:4]

Also, 4 dimensions implies that Ym = Xm[3]? Or is your data rank 2, with the second axis having length 5? [0:4] includes the fourth dimension, i.e.

z = [1,2,3,4]
z[0:4] = [1,2,3,4]
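That is, if a row really had only four entries, Xm = sample[0:4] would include the label; with five entries per row (the assumption used below), features and label separate cleanly:

import numpy as np

row4 = np.array([1., 2., 3., 4.])       # 4 entries: the label would be row4[3]
print(row4[0:4])                        # [1. 2. 3. 4.] -- the slice includes it

row5 = np.array([1., 2., 3., 4., -1.])  # assumed: 4 features + 1 label
print(row5[0:4], row5[-1])              # [1. 2. 3. 4.] -1.0 -- clean split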

This would be my best guess. I'm taking a few educated guesses about your data format.

import numpy as np

learn_rate = 1
w = np.zeros((1,4))
b = 0
M = 1000

#Possible format
#data = np.load('/Users/labuew/Desktop/dataset.data')

#Assumed format
data = np.ones((1000,5))

#calculating the gradient

def cal_grad_w(data, w, b):
    total = np.zeros(4)  # accumulator; don't shadow the built-in sum
    for i in range(M):
        sample = data[i,:]
        Ym = sample[-1]
        Xm = sample[0:4]
        # indicator: 1 if the sample is misclassified (or on the boundary)
        if -1*Ym*(np.matmul(w, Xm.reshape(4,1)) + b) >= 0:
            tmp = 1.0
        else:
            tmp = 0
        total = total + Ym*Xm*tmp
    # the cost gradient w.r.t. w is the negative of this sum
    return -total.reshape(1,4)

def cal_grad_b(data, w, b):
    total = 0.0
    for i in range(M):
        sample = data[i,:]
        Ym = sample[-1]
        Xm = sample[0:4]
        if -1*Ym*(np.matmul(w, Xm.reshape(4,1)) + b) >= 0:
            tmp = 1.0
        else:
            tmp = 0
        total = total + Ym*tmp
    # likewise, negate to get the gradient w.r.t. b
    return -total

if __name__ == '__main__':
    counter = 0
    while 1:
        counter += 1
        dw = cal_grad_w(data, w, b)
        db = cal_grad_b(data, w, b)
        # stop once every sample is classified correctly (zero subgradient)
        if not dw.any() and db == 0:
            break
        w = w - learn_rate*dw
        b = b - learn_rate*db
        print([counter, w, b])

I put in dummy data because I don't know the format. Note also that the gradients are returned with a minus sign so that w = w - learn_rate*dw actually descends the cost, and the stopping test uses dw.any() because dw is an array.
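If you want to sanity-check convergence on something less trivial than all-ones, you could swap in randomly generated, linearly separable data; a minimal sketch (w_true and the offset are made-up values, not from the question):

import numpy as np

# hypothetical separable dataset: labels come from a known hyperplane
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])        # assumed "true" weights
y = np.where(X @ w_true + 0.5 > 0, 1.0, -1.0)   # +/-1 labels
data = np.hstack([X, y.reshape(-1, 1)])         # same (1000, 5) layout as above

With separable data the loop should terminate, though it may take many iterations.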
