
How to calculate cost and theta with fmin_ncg

I am taking Andrew Ng's course on Coursera and I want to implement the same logic in Python. I am trying to calculate cost and theta with scipy.optimize.fmin_ncg.

Here is the code:

import numpy as np

from scipy.optimize import fmin_ncg


def sigmoid(z):
    return (1 / (1 + np.exp(-z))).reshape(-1, 1)


def compute_cost(theta, X, y):
    m = len(y)
    hypothesis = sigmoid(np.dot(X, theta))
    cost = (1 / m) * np.sum(np.dot(-y.T, (np.log(hypothesis))) - np.dot((1 - y.T), np.log(1 - hypothesis)))
    return cost


def compute_gradient(theta, X, y):
    m = len(y)
    hypothesis = sigmoid(np.dot(X, theta))
    gradient = (1 / m) * np.dot(X.T, (hypothesis - y))
    return gradient


def main():
    data = np.loadtxt("data/data1.txt", delimiter=",")  # 100, 3

    X = data[:, 0:2]
    y = data[:, 2:]
    m, n = X.shape

    initial_theta = np.zeros((n + 1, 1))
    X = np.column_stack((np.ones(m), X))
    mr = fmin_ncg(compute_cost, initial_theta, compute_gradient, args=(X, y), full_output=True)
    print(mr)

if __name__ == "__main__":
    main()
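
For context, compute_cost implements the standard (unregularized) logistic-regression cost from the course, J(θ) = -(1/m) Σᵢ [yᵢ log h(xᵢ) + (1 - yᵢ) log(1 - h(xᵢ))] with h(x) = σ(θᵀx), and compute_gradient its gradient ∇J(θ) = (1/m) Xᵀ(h - y).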

When I try to run this, I get the error and traceback shown below:

Traceback (most recent call last):
  File "/file/path/without_regression.py", line 78, in <module>
    main()
  File "/file/path/without_regression.py", line 66, in main
    mr = fmin_ncg(compute_cost, initial_theta, compute_gradient, args=(X, y), full_output=True)
  File "/usr/local/anaconda3/envs/ml/lib/python3.6/site-packages/scipy/optimize/optimize.py", line 1400, in fmin_ncg
    callback=callback, **opts)
  File "/usr/local/anaconda3/envs/ml/lib/python3.6/site-packages/scipy/optimize/optimize.py", line 1497, in _minimize_newtoncg
    dri0 = numpy.dot(ri, ri)
ValueError: shapes (3,1) and (3,1) not aligned: 1 (dim 1) != 3 (dim 0)

I don't understand this error. Maybe it's because I'm a beginner that the message doesn't tell me much.

How can I calculate cost and theta using scipy.optimize.fmin_ncg, or any other minimization technique such as scipy.optimize.minimize(...)?

As mentioned in the comments:

Without checking the docs for now: you should always use 1-d arrays here.

There is a related SO question on this. For example:

import numpy as np
a = np.random.random(size=(3,1))   # NOT TO USE!
a.shape  # (3, 1)
a.ndim   # 2
b = np.random.random(size=3)       # TO USE!
b.shape  # (3,)                    
b.ndim   # 1

This applies to your x0 (if you are not using python-lists) and to your gradient.
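
To see where the traceback comes from: inside the Newton-CG loop, SciPy evaluates numpy.dot(ri, ri) on a residual built from your gradient. With a 2-d (3, 1) gradient this is a matrix product of two (3, 1) arrays, which is exactly the reported shape mismatch; with a 1-d array it is the intended inner product. A minimal reproduction:

import numpy as np

ri = np.ones((3, 1))  # 2-d column vector, like the original gradient
# np.dot(ri, ri)      # ValueError: shapes (3,1) and (3,1) not aligned

ri = np.ones(3)       # 1-d array
np.dot(ri, ri)        # 3.0 -- the inner product SciPy expects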

A quick hack (= dropping the extra dimension in the gradient), e.g.:

gradient = (1 / m) * np.dot(X.T, (hypothesis - y)).ravel()  # .ravel()!
...      
initial_theta = np.zeros(n + 1)  # drop extra-dim

makes the code run:

Optimization terminated successfully.
         Current function value: 0.203498
         Iterations: 27
         Function evaluations: 71
         Gradient evaluations: 229
         Hessian evaluations: 0
(array([-25.13045417,   0.20598475,   0.2012217 ]), 0.2034978435366513, 71, 229, 0, 0)
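
For completeness, a minimal sketch of the scipy.optimize.minimize interface the question mentions, assuming the same compute_cost and compute_gradient with the 1-d fixes above applied; res.x and res.fun then hold theta and the final cost:

from scipy.optimize import minimize

res = minimize(compute_cost, initial_theta, args=(X, y),
               method='Newton-CG', jac=compute_gradient)
print(res.x, res.fun)  # theta and the final cost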

Extra: while debugging, I also checked the gradient calculation itself against numerical differentiation (recommended!), and it looks good at your x0:

from scipy.optimize import check_grad as cg
print(cg(compute_cost, compute_gradient, initial_theta, X, y))
# 1.24034933954e-05
