
gradient descent using python numpy matrix class

I'm trying to implement the univariate gradient descent algorithm in Python. I have tried a bunch of different approaches and nothing works. What follows is one example of what I've tried. What am I doing wrong? Thanks in advance!

from numpy import *

class LinearRegression:

  def __init__(self,data_file):
    self.raw_data_ref = data_file
    self.theta = matrix([[0],[0]])
    self.iterations = 1500
    self.alpha = 0.001


  def format_data(self):
    data = loadtxt(self.raw_data_ref, delimiter = ',')
    dataMatrix = matrix(data)
    x = dataMatrix[:,0]
    y = dataMatrix[:,1]
    m = y.shape[0]
    vec = mat(ones((m,1)))
    x = concatenate((vec,x),axis = 1)
    return [x, y, m]


  def computeCost(self, x, y, m):
    predictions = x*self.theta
    squaredErrorsMat = power((predictions-y),2)
    sse = squaredErrorsMat.sum(axis = 0)
    cost = sse/(2*m)
    return cost


  def descendGradient(self, x, y, m):
      for i in range(self.iterations):

          predictions = x*self.theta
          errors = predictions - y
          sumDeriv1 = (multiply(errors,x[:,0])).sum(axis = 0)
          sumDeriv2 = (multiply(errors,x[:,1])).sum(axis = 0)

          print self.computeCost(x,y,m)

          tempTheta = self.theta
          tempTheta[0] = self.theta[0] - self.alpha*(1/m)*sumDeriv1
          tempTheta[1] = self.theta[1] - self.alpha*(1/m)*sumDeriv2

          self.theta[0] = tempTheta[0]
          self.theta[1] = tempTheta[1]


      return self.theta



regressor = LinearRegression('ex1data1.txt')
output = regressor.format_data()
regressor.descendGradient(output[0],output[1],output[2])
print regressor.theta 

A little update; I previously tried to do it in a more "vectorized" way, like so:

def descendGradient(self, x, y, m):
  for i in range(self.iterations):

      predictions = x*self.theta
      errors = predictions - y

      sumDeriv1 = (multiply(errors,x[:,0])).sum(axis = 0)
      sumDeriv2 = (multiply(errors,x[:,1])).sum(axis = 0)

      gammaMat = concatenate((sumDeriv1,sumDeriv2),axis = 0)
      coeff = self.alpha*(1.0/m)
      updateMatrix = gammaMat*coeff
      print updateMatrix, gammaMat


      jcost  = self.computeCost(x,y,m)
      print jcost
      tempTheta = self.theta
      tempTheta = self.theta - updateMatrix
      self.theta = tempTheta

  return self.theta

This resulted in a theta of [[-0.86221218], [0.88827876]].

You have two problems, both related to floating point:

1. Initialize your theta matrix with floats. matrix([[0],[0]]) gets an integer dtype, so every update you assign into it is truncated back to an integer:

self.theta = matrix([[0.0],[0.0]])


2. Change the update lines, replacing (1/m) with (1.0/m). In Python 2, 1/m is integer division when m is an integer, so it evaluates to 0 and the whole update term vanishes:

tempTheta[0] = self.theta[0] - self.alpha*(1.0/m)*sumDeriv1
tempTheta[1] = self.theta[1] - self.alpha*(1.0/m)*sumDeriv2
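To see why the first fix matters, here is a minimal standalone demonstration of the integer-dtype truncation (independent of the regression code above):

```python
import numpy as np

# matrix([[0], [0]]) infers an integer dtype, so any float written
# into it is silently truncated toward zero on assignment.
theta_int = np.matrix([[0], [0]])
theta_int[0] = theta_int[0] - 0.7
print(theta_int[0, 0])    # 0 -- the -0.7 was truncated away

# Initializing with floats keeps the fractional part.
theta_float = np.matrix([[0.0], [0.0]])
theta_float[0] = theta_float[0] - 0.7
print(theta_float[0, 0])  # -0.7
```

This is exactly what happens inside the asker's loop: every gradient step smaller than 1 in magnitude is rounded back to 0, so theta never moves.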



On an unrelated note: your tempTheta variable is unnecessary.
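Putting both fixes together, here is a minimal Python 3 sketch of the vectorized update, with no tempTheta and plain NumPy arrays instead of np.matrix. Since ex1data1.txt isn't available here, it uses synthetic data as a hypothetical stand-in:

```python
import numpy as np

def descend_gradient(x, y, theta, alpha, iterations):
    """Batch gradient descent for linear regression, fully vectorized."""
    m = y.shape[0]
    for _ in range(iterations):
        errors = x @ theta - y           # (m, 1) residuals
        gradient = x.T @ errors / m      # (2, 1) partial derivatives
        theta = theta - alpha * gradient # simultaneous update, no temp copy
    return theta

# Synthetic stand-in for ex1data1.txt: y ≈ 2 + 3x plus a little noise.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 5, size=(100, 1))
x = np.hstack([np.ones((100, 1)), xs])   # prepend the column of ones
y = 2 + 3 * xs + rng.normal(0, 0.1, size=(100, 1))

theta = descend_gradient(x, y, np.zeros((2, 1)), alpha=0.05, iterations=5000)
# theta should be close to [[2], [3]]
```

Because np.zeros already yields a float dtype and Python 3's / is true division, neither of the two bugs above can occur in this form.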
