
How to make sure that solution is global minimum while using python scipy.optimize.minimize

I am implementing logistic regression in Python. To find theta, I am trying to decide which algorithm always guarantees the global optimum, without any concern about the initial value of the parameter theta.

import numpy as np
import scipy.optimize as op

def Sigmoid(z):
    return 1 / (1 + np.exp(-z))

def Gradient(theta, x, y):
    # Gradient of the logistic-regression cost with respect to theta
    m, n = x.shape
    theta = theta.reshape((n, 1))
    y = y.reshape((m, 1))
    sigmoid_x_theta = Sigmoid(x.dot(theta))
    grad = (x.T).dot(sigmoid_x_theta - y) / m
    return grad.flatten()

def CostFunc(theta, x, y):
    # Negative log-likelihood (cross-entropy) cost
    m, n = x.shape
    theta = theta.reshape((n, 1))
    y = y.reshape((m, 1))
    term1 = np.log(Sigmoid(x.dot(theta))).reshape((m, 1))
    term2 = np.log(1 - Sigmoid(x.dot(theta))).reshape((m, 1))
    term = y * term1 + (1 - y) * term2
    J = -np.sum(term) / m
    return J

data = np.loadtxt('ex2data1.txt', delimiter=',')

# m training samples and n attributes
m, n = data.shape
X = data[:, 0:n-1]
y = data[:, n-1:]
X = np.concatenate((np.ones((m, 1)), X), axis=1)  # prepend an intercept column
initial_theta = np.zeros((n, 1))  # n from data.shape equals X's column count here
m, n = X.shape

Result = op.minimize(fun=CostFunc,
                     x0=initial_theta,
                     args=(X, y),
                     method='TNC',
                     jac=Gradient)
theta = Result.x

where the contents of ex2data1.txt are:

34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
60.18259938620976,86.30855209546826,1
79.0327360507101,75.3443764369103,1
45.08327747668339,56.3163717815305,0
61.10666453684766,96.51142588489624,1
75.02474556738889,46.55401354116538,1
76.09878670226257,87.42056971926803,1
84.43281996120035,43.53339331072109,1
95.86155507093572,38.22527805795094,0
75.01365838958247,30.60326323428011,0
82.30705337399482,76.48196330235604,1
69.36458875970939,97.71869196188608,1
39.53833914367223,76.03681085115882,0
53.9710521485623,89.20735013750205,1
69.07014406283025,52.74046973016765,1
67.94685547711617,46.67857410673128,0
70.66150955499435,92.92713789364831,1
76.97878372747498,47.57596364975532,1
67.37202754570876,42.83843832029179,0
89.67677575072079,65.79936592745237,1
50.534788289883,48.85581152764205,0
34.21206097786789,44.20952859866288,0
77.9240914545704,68.9723599933059,1
62.27101367004632,69.95445795447587,1
80.1901807509566,44.82162893218353,1
93.114388797442,38.80067033713209,0
61.83020602312595,50.25610789244621,0
38.78580379679423,64.99568095539578,0
61.379289447425,72.80788731317097,1
85.40451939411645,57.05198397627122,1
52.10797973193984,63.12762376881715,0
52.04540476831827,69.43286012045222,1
40.23689373545111,71.16774802184875,0
54.63510555424817,52.21388588061123,0
33.91550010906887,98.86943574220611,0
64.17698887494485,80.90806058670817,1
74.78925295941542,41.57341522824434,0
34.1836400264419,75.2377203360134,0
83.90239366249155,56.30804621605327,1
51.54772026906181,46.85629026349976,0
94.44336776917852,65.56892160559052,1
82.36875375713919,40.61825515970618,0
51.04775177128865,45.82270145776001,0
62.22267576120188,52.06099194836679,0
77.19303492601364,70.45820000180959,1
97.77159928000232,86.7278223300282,1
62.07306379667647,96.76882412413983,1
91.56497449807442,88.69629254546599,1
79.94481794066932,74.16311935043758,1
99.2725269292572,60.99903099844988,1
90.54671411399852,43.39060180650027,1
34.52451385320009,60.39634245837173,0
50.2864961189907,49.80453881323059,0
49.58667721632031,59.80895099453265,0
97.64563396007767,68.86157272420604,1
32.57720016809309,95.59854761387875,0
74.24869136721598,69.82457122657193,1
71.79646205863379,78.45356224515052,1
75.3956114656803,85.75993667331619,1
35.28611281526193,47.02051394723416,0
56.25381749711624,39.26147251058019,0
30.05882244669796,49.59297386723685,0
44.66826172480893,66.45008614558913,0
66.56089447242954,41.09209807936973,0
40.45755098375164,97.53518548909936,1
49.07256321908844,51.88321182073966,0
80.27957401466998,92.11606081344084,1
66.74671856944039,60.99139402740988,1
32.72283304060323,43.30717306430063,0
64.0393204150601,78.03168802018232,1
72.34649422579923,96.22759296761404,1
60.45788573918959,73.09499809758037,1
58.84095621726802,75.85844831279042,1
99.82785779692128,72.36925193383885,1
47.26426910848174,88.47586499559782,1
50.45815980285988,75.80985952982456,1
60.45555629271532,42.50840943572217,0
82.22666157785568,42.71987853716458,0
88.9138964166533,69.80378889835472,1
94.83450672430196,45.69430680250754,1
67.31925746917527,66.58935317747915,1
57.23870631569862,59.51428198012956,1
80.36675600171273,90.96014789746954,1
68.46852178591112,85.59430710452014,1
42.0754545384731,78.84478600148043,0
75.47770200533905,90.42453899753964,1
78.63542434898018,96.64742716885644,1
52.34800398794107,60.76950525602592,0
94.09433112516793,77.15910509073893,1
90.44855097096364,87.50879176484702,1
55.48216114069585,35.57070347228866,0
74.49269241843041,84.84513684930135,1
89.84580670720979,45.35828361091658,1
83.48916274498238,48.38028579728175,1
42.2617008099817,87.10385094025457,1
99.31500880510394,68.77540947206617,1
55.34001756003703,64.9319380069486,1
74.77589300092767,89.52981289513276,1

The above code gives theta = Result.x as [-25.87282405 0.21193078 0.20722013], which is the global minimum when initial_theta = np.zeros((n,1)). But when initial_theta = np.ones((n,1)), it produces an error. So in this case the result depends on the initial value of the parameter theta. Is there any way to automate this so that the problem is avoided?

I also tried using the 'BFGS' method instead of 'TNC' in the minimize call, as shown below, and then I got a RuntimeWarning.

initial_theta = np.zeros((n, 1))
result = op.minimize(fun=CostFunc,
                     x0=initial_theta,
                     args=(X, y),
                     method='BFGS',
                     jac=Gradient)
optimal_theta = result.x
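
(As an aside: the RuntimeWarning most likely comes from overflow in np.exp(-z) and from np.log(0) inside CostFunc once x.dot(theta) gets large in magnitude. A numerically stable rewrite of the cost, sketched below using np.logaddexp, avoids both; this variant is an illustration, not part of the original exercise code.)

import numpy as np

def CostFuncStable(theta, x, y):
    # Same cost as CostFunc, but neither exp() overflow nor log(0)
    # can occur: np.logaddexp(0, t) evaluates log(1 + e**t) stably, and
    #   log(sigmoid(z))     == -logaddexp(0, -z)
    #   log(1 - sigmoid(z)) == -logaddexp(0,  z)
    m = x.shape[0]
    z = x.dot(theta.reshape(-1))
    y = y.reshape(-1)
    term = -y * np.logaddexp(0, -z) - (1 - y) * np.logaddexp(0, z)
    return -np.sum(term) / m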

I called the above function several times with different initial values of initial_theta, and found that BFGS converged to a local minimum most of the time. When I used

initial_theta = np.array([-25, 0.2, 0.2])

it converged to the global minimum. So TNC appears to do better than BFGS here: with the same initial_theta in both cases, TNC converged to the global minimum while BFGS converged to a local minimum. So

  1. Is this true in all cases, or does it depend on the particular problem?
  2. Which is better, BFGS or TNC?
  3. What is the difference between scipy.optimize.fmin_bfgs and scipy.optimize.minimize with method='BFGS', or are they the same?

Any help or insight would be appreciated. Thanks.

No practical algorithm can guarantee finding the global optimum. However, there are heuristics, such as DIRECT (see e.g. here), that work very well in practice within given bounds. These can be used to find a good initialization for an algorithm that then finds the local optimum near that initialization and works much more efficiently.
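
As a minimal sketch of that multi-start idea (the helper name multistart_minimize, the restart count, and the sampling range are illustrative assumptions, not part of the original answer):

import numpy as np
import scipy.optimize as op

def multistart_minimize(fun, jac, args, n_starts=20, scale=1.0, seed=0):
    # Run a local optimizer from several random starting points and
    # keep the solution with the lowest cost.
    rng = np.random.default_rng(seed)   # requires NumPy >= 1.17
    n = args[0].shape[1]                # one parameter per column of X
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(-scale, scale, size=n)
        res = op.minimize(fun, x0, args=args, method='TNC', jac=jac)
        if best is None or res.fun < best.fun:
            best = res
    return best

# Usage with the question's CostFunc, Gradient, X, y:
# result = multistart_minimize(CostFunc, Gradient, (X, y))
# theta = result.x

For a convex problem like this one, all restarts should agree; disagreement between restarts is itself a useful sign that something (e.g. the gradient) is wrong.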

  1. However, logistic regression is a convex optimization problem. That means the objective (error) function has only one minimum, i.e. a local minimum is always the global minimum. Therefore you can use any local optimizer (gradient descent, L-BFGS, conjugate gradient, etc.). The only problem is that you cannot compute the minimum directly, because of the nonlinear logistic function. Without that logistic function there is a similar problem called linear regression, for which the global minimum of the error function can be computed in closed form without any sophisticated optimization algorithm (a one-line sketch follows this list).

  2. A comparison of optimizers for logistic regression can be found in Fabian Pedregosa's blog. My first guess is that there is an error in your gradient computation. Perhaps you should compare it against a numerical approximation of the gradient using scipy.optimize.check_grad (a sketch also follows this list).

  3. scipy.optimize.minimize with method='BFGS' calls the same BFGS code as scipy.optimize.fmin_bfgs.
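
To make point 1 concrete, for plain linear regression the global minimizer of the squared error is available in closed form (the normal equations), so no iterative optimizer is needed; a one-line sketch, assuming the X and y from the question:

import numpy as np

# Least-squares solution of X @ theta ~= y; lstsq solves the normal
# equations in a numerically safer way than inverting X.T @ X.
theta_lin, *_ = np.linalg.lstsq(X, y, rcond=None)

And for point 2, scipy.optimize.check_grad compares an analytic gradient against a finite-difference approximation of the cost and returns the 2-norm of the difference; a value near zero indicates the gradient code is consistent with CostFunc:

import numpy as np
import scipy.optimize as op

theta0 = np.zeros(X.shape[1])
# check_grad returns ||analytic_gradient - numerical_gradient||_2 at theta0
print(op.check_grad(CostFunc, Gradient, theta0, X, y))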

Efficient, general-purpose algorithms cannot do this. You never really know what a cost function looks like at inputs you haven't tried. Maybe there is a miraculous trench running through the plateau that you skipped over. The cost function might even begin with if arg1 == secret: return -1e100. Who can say? If you really do need a global minimum, you either have to exploit extra knowledge about the cost function, or you have to try every possible input.
