
Scipy constrained minimization does not respect constraint

I apologize if the question is straightforward. I looked for an answer but did not find one that solves my problem. I have a very simple optimization problem: I need to maximize an expected value, which I do by minimizing its negative (in a second phase the objective function will become more complicated):

    def EV(q, P):
        return (-1)*np.sum(100 * q * (2*P - 1))

q is a 12-dimensional vector whose elements need to be between 0 and 1 and whose elements must sum to 1. So I proceed to set the bounds and constraints:

    cons = {'type': 'eq', 'fun': lambda q: np.sum(q) - 1}
    bds = [(0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1)]
    P = np.array([ 0.32510069,  0.96284943,  0.33966465,  0.61696874,  0.77368336,
                   0.10127222,  0.47836665,  0.87537657,  0.2086234 ,  0.52468426,
                   0.31931169,  0.86424427])

Then I call scipy.optimize.minimize:

    X0 = np.array([0.5,0,0,0,0,0,0,0,0,0,0.4,0])
    qstar = scipy.optimize.minimize(fun = EV, x0 = X0, args = (P), method = 'L-BFGS-B', bounds = bds, constraints = cons)

However, when I print the solution qstar I get the following:

    fun: -323.56132559388169
    hess_inv: <12x12 LbfgsInvHessProduct with dtype=float64>
    jac: array([ 34.97985972, -92.56988847,  32.06706651, -23.39374987,
   -54.7366767 ,  79.74555274,   4.32666525, -75.0753145 ,
    58.27532163,  -4.93685093,  36.13766353, -72.84884873])
    message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
    nfev: 26
    nit: 1
    status: 0
    success: True
    x: array([ 0.,  1.,  0.,  1.,  1.,  0.,  0.,  1.,  0.,  1.,  0.,  1.])

Why doesn't the solution satisfy the equality constraint? Is it, perhaps, because of the convergence message? Any help is very much appreciated.

Change the solver method to SLSQP. As mentioned in the comments, of the methods available to scipy.optimize.minimize, only SLSQP and COBYLA support the constraints argument (newer SciPy releases also add trust-constr); L-BFGS-B handles bounds only and ignores constraints, which is why your equality constraint was never enforced. SLSQP (Sequential Least Squares Programming) handles both the bounds and the equality constraint.

Note that COBYLA only supports inequality constraints.
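
If COBYLA had to be used anyway, one common workaround is to approximate the equality constraint with a pair of opposing inequalities. The snippet below is only a sketch to illustrate that idea (the name cons_cobyla is not from the original post), and COBYLA will typically satisfy the pair only to within its tolerance, so SLSQP remains the natural choice here:

    # Sketch only: the equality sum(q) == 1 rewritten as two inequalities,
    # plus the box bounds expressed as (vector-valued) inequality constraints,
    # since COBYLA does not use the `bounds` argument in older SciPy versions.
    cons_cobyla = [
        {'type': 'ineq', 'fun': lambda q: np.sum(q) - 1},  # sum(q) >= 1
        {'type': 'ineq', 'fun': lambda q: 1 - np.sum(q)},  # sum(q) <= 1
        {'type': 'ineq', 'fun': lambda q: q},               # q >= 0 element-wise
        {'type': 'ineq', 'fun': lambda q: 1 - q},           # q <= 1 element-wise
    ]

With SLSQP, on the other hand, the original script only needs the method argument changed: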

    import numpy as np
    import scipy.optimize

    # Negative expected value: minimizing this maximizes the expected value.
    def EV(q, P):
        return (-1)*np.sum(100 * q * (2*P - 1))

    # Equality constraint: the weights in q must sum to 1.
    cons = {'type': 'eq', 'fun': lambda q: np.sum(q) - 1}
    # Each element of q must lie in [0, 1].
    bds = [(0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1)]
    P = np.array([ 0.32510069,  0.96284943,  0.33966465,  0.61696874,  0.77368336,
                   0.10127222,  0.47836665,  0.87537657,  0.2086234 ,  0.52468426,
                   0.31931169,  0.86424427])

    X0 = np.array([0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.4, 0])
    # SLSQP enforces both the bounds and the equality constraint.
    qstar = scipy.optimize.minimize(fun=EV, x0=X0, args=(P,), method='SLSQP', bounds=bds, constraints=cons)
    print(qstar)

This gives me the following output:

    fun: -92.56988588438836
    jac: array([ 34.97986126, -92.56988621,  32.06707001, -23.39374828,
                -54.7366724 ,  79.74555588,   4.32666969, -75.07531452,
                 58.27532005,  -4.93685246,  36.13766193, -72.84885406])
    message: 'Optimization terminated successfully.'
    nfev: 28
    nit: 2
    njev: 2
    status: 0
    success: True
    x: array([  2.07808604e-10,   1.00000000e+00,   1.95365391e-10,
                0.00000000e+00,   0.00000000e+00,   4.37596612e-10,
                5.51522994e-11,   0.00000000e+00,   3.28030922e-10,
                8.07265366e-12,   2.14253171e-10,   0.00000000e+00])
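
As a quick sanity check (not part of the original answer, assuming qstar and P from the script above): the sum of the returned vector is now essentially 1, and, because the objective is linear in q, all of the weight ends up on the entry where 2*P - 1 is largest:

    print(np.sum(qstar.x))     # ~1.0, so the equality constraint is respected
    print(np.argmax(2*P - 1))  # 1, the index that receives all of the weight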
