Python's scipy.optimize.minimize with SLSQP fails with "Positive directional derivative for linesearch"

I have a least-squares minimization problem subject to inequality constraints that I am trying to solve with scipy.optimize.minimize. For inequality constraints there seem to be two options: COBYLA and SLSQP.

I tried SLSQP first since it allows explicit partial derivatives of the function to be minimized to be supplied. Depending on the scaling of the problem, it fails with the error:

Positive directional derivative for linesearch    (Exit mode 8)

whenever interval or, more generally, inequality constraints are imposed.

This has been observed before, e.g. here. Manually scaling the function to be minimized (and the associated partial derivatives) seems to work around the problem, but I cannot achieve the same effect by changing ftol in the options. (A sketch of that scaling workaround follows the example below.)

Overall, this makes me question the robustness of the routine. Here is a simplified example:

import numpy as np
import scipy.optimize as sp_optimize

def cost(x, A, y):

    e = y - A.dot(x)
    rss = np.sum(e ** 2)

    return rss

def cost_deriv(x, A, y):

    e = y - A.dot(x)
    deriv0 = -2 * e.dot(A[:,0])
    deriv1 = -2 * e.dot(A[:,1])

    deriv = np.array([deriv0, deriv1])

    return deriv


A = np.ones((10,2)); A[:,0] = np.linspace(-5,5, 10)
x_true = np.array([2, 2/20])
y = A.dot(x_true)
x_guess = x_true / 2

prm_bounds = ((0, 3), (0,1))

cons_SLSQP = ({'type': 'ineq', 'fun' : lambda x: np.array([x[0] - x[1]]),
               'jac' : lambda x: np.array([1.0, -1.0])})

# works correctly
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv, bounds=prm_bounds, method='SLSQP', constraints=cons_SLSQP, options={'disp': True})
print(min_res_SLSQP)

# fails
A = 100 * A
y = A.dot(x_true)
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv, bounds=prm_bounds, method='SLSQP', constraints=cons_SLSQP, options={'disp': True})
print(min_res_SLSQP)

# works if bounds and inequality constraints removed
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv,
method='SLSQP', options={'disp': True})
print(min_res_SLSQP)
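
As a minimal sketch of the manual-scaling workaround mentioned earlier (my own illustration, not part of the original question, and not verified to help in every case): dividing the objective and its gradient by the objective value at the starting point brings the badly scaled case back towards order one.

# Hypothetical sketch: rescale the objective and its gradient by their
# magnitude at the initial guess, then rerun the failing SLSQP call above.
scale = cost(x_guess, A, y)  # objective value at the start point (large after A = 100 * A)

def cost_scaled(x, A, y):
    return cost(x, A, y) / scale

def cost_deriv_scaled(x, A, y):
    return cost_deriv(x, A, y) / scale

min_res_scaled = sp_optimize.minimize(cost_scaled, x_guess, args=(A, y),
                                      jac=cost_deriv_scaled, bounds=prm_bounds,
                                      method='SLSQP', constraints=cons_SLSQP,
                                      options={'disp': True})
print(min_res_scaled)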

How should this be set up to avoid the failure? More generally, can similar problems arise with COBYLA? Is COBYLA a better choice for this type of inequality-constrained least-squares optimization problem?

Using a square root in the cost function was found to improve performance. However, for a nonlinear re-parametrization of the problem (simpler, but closer to what I need to do in practice), it fails again. Details are below:

import numpy as np
import scipy.optimize as sp_optimize


def cost(x, y, g):

    e = ((y - x[1]) / x[0]) - g

    rss = np.sqrt(np.sum(e ** 2))

    return rss


def cost_deriv(x, y, g):

    e = ((y - x[1]) / x[0]) - g

    factor = 0.5 / np.sqrt(e.dot(e))
    deriv0 = -2 * factor * e.dot(y - x[1]) / (x[0]**2)
    deriv1 = -2 * factor * np.sum(e) / x[0]

    deriv = np.array([deriv0, deriv1])

    return deriv


x_true = np.array([1/300, .1])
N = 20
t = 20 * np.arange(N)
g = 100 * np.cos(2 * np.pi * 1e-3 * (t - t[-1] / 2))
y = g * x_true[0] + x_true[1]

x_guess = x_true / 2
prm_bounds = ((1e-4, 1e-2), (0, .4))

# check derivatives
delta = 1e-9
C0 = cost(x_guess, y, g)
C1 = cost(x_guess + np.array([delta, 0]), y, g)
approx_deriv0 = (C1 - C0) / delta
C1 = cost(x_guess + np.array([0, delta]), y, g)
approx_deriv1 = (C1 - C0) / delta
approx_deriv = np.array([approx_deriv0, approx_deriv1])
deriv = cost_deriv(x_guess, y, g)

# fails
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(y, g), jac=cost_deriv,
bounds=prm_bounds, method='SLSQP', options={'disp': True})
print(min_res_SLSQP)

Instead of minimizing np.sum(e ** 2), minimize sqrt(np.sum(e ** 2)), or better (in terms of computation): np.linalg.norm(e)!

This modification:

  • does not change the solution in regard to x
  • will need post-processing if the original objective is needed (probably not)
  • is much more robust

With this change, all cases work, even with numerical differentiation (I was too lazy to modify the gradient, which would need to reflect this!).
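
For concreteness, a minimal sketch of this change for the first (linear) example could look as follows; the matching analytic gradient is included, with a small guard against division by zero near an exact fit (the guard is my own addition, not part of the original answer):

import numpy as np

def cost_norm(x, A, y):
    # minimize ||y - A x||_2 instead of the sum of squares
    e = y - A.dot(x)
    return np.linalg.norm(e)

def cost_norm_deriv(x, A, y):
    # gradient of ||e||_2 with respect to x is -A^T e / ||e||_2
    e = y - A.dot(x)
    nrm = max(np.linalg.norm(e), 1e-12)  # guard against an exact fit
    return -A.T.dot(e) / nrm

These would simply replace cost and cost_deriv in the minimize calls from the question.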

Example output (the number of func-evals hints at num-diff):

Optimization terminated successfully.    (Exit mode 0)
            Current function value: 3.815547437029837e-06
            Iterations: 16
            Function evaluations: 88
            Gradient evaluations: 16
     fun: 3.815547437029837e-06
     jac: array([-6.09663382, -2.48862544])
 message: 'Optimization terminated successfully.'
    nfev: 88
     nit: 16
    njev: 16
  status: 0
 success: True
       x: array([ 2.00000037,  0.10000018])
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 0.0002354577991007501
            Iterations: 23
            Function evaluations: 114
            Gradient evaluations: 23
     fun: 0.0002354577991007501
     jac: array([ 435.97259208,  288.7483819 ])
 message: 'Optimization terminated successfully.'
    nfev: 114
     nit: 23
    njev: 23
  status: 0
 success: True
       x: array([ 1.99999977,  0.10000014])
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 0.0003392807206384532
            Iterations: 21
            Function evaluations: 112
            Gradient evaluations: 21
     fun: 0.0003392807206384532
     jac: array([ 996.57340243,   51.19298764])
 message: 'Optimization terminated successfully.'
    nfev: 112
     nit: 21
    njev: 21
  status: 0
 success: True
       x: array([ 2.00000008,  0.10000104])

While SLSQP may have some problems, given its broad application spectrum it is still one of the most tested and most robust codes available!

I would also expect SLSQP to do much better here than COBYLA, as the latter is based heavily on linearizations. (But take this just as a guess; it is easy to try given the minimal interface!)
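
Since trying it is cheap, a hedged sketch (my own, not verified) of running COBYLA on the badly scaled example from the question could look like this, reusing cost, x_guess, A and y from that example. COBYLA is derivative-free, so no jac is passed, and the bounds are rewritten as inequality constraints because the bounds argument is not supported for this method in older SciPy versions:

# Hypothetical sketch: try COBYLA on the same (scaled) problem.
# COBYLA only handles 'ineq' constraints, so the bounds are expressed as such.
cons_COBYLA = [
    {'type': 'ineq', 'fun': lambda x: x[0] - x[1]},  # x[0] >= x[1]
    {'type': 'ineq', 'fun': lambda x: x[0]},         # x[0] >= 0
    {'type': 'ineq', 'fun': lambda x: 3 - x[0]},     # x[0] <= 3
    {'type': 'ineq', 'fun': lambda x: x[1]},         # x[1] >= 0
    {'type': 'ineq', 'fun': lambda x: 1 - x[1]},     # x[1] <= 1
]
min_res_COBYLA = sp_optimize.minimize(cost, x_guess, args=(A, y), method='COBYLA',
                                      constraints=cons_COBYLA, options={'disp': True})
print(min_res_COBYLA)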

Alternative

In general, an interior-point based solver for convex quadratic programming would be the best approach here, but for that you would need to go beyond scipy. (Or maybe an SOCP solver would do better; I am not sure.)

cvxpy brings a nice modelling system and a good open-source solver (ECOS; technically a conic solver, hence more general and somewhat less robust, but it should beat SLSQP).

Using cvxpy and ECOS, this looks like:

import numpy as np
import cvxpy as cvx

""" Problem data """
A = np.ones((10,2)); A[:,0] = np.linspace(-5,5, 10)
x_true = np.array([2, 2/20])
y = A.dot(x_true)
x_guess = x_true / 2

prm_bounds = ((0, 3), (0,1))

# problematic case
A = 100 * A
y = A.dot(x_true)

""" Solve """
x = cvx.Variable(len(x_true))
constraints = [x[0] >= x[1]]
for ind, (lb, ub) in enumerate(prm_bounds):  # inefficient -> a matrix-based expression would be better!
    constraints.append(x[ind] >= lb)
    constraints.append(x[ind] <= ub)

objective = cvx.Minimize(cvx.norm(A*x - y))
problem = cvx.Problem(objective, constraints)
problem.solve(solver=cvx.ECOS, verbose=False)
print(problem.status)
print(problem.value)
print(x.value.T)

# optimal
# -6.67593652593801e-10
# [[ 2.   0.1]]
