
Python - Minimization of a function with iterative bounds

I'm trying to minimize a function of N parameters (e.g. x[1], x[2], x[3], ..., x[N]) where the bounds for the minimization depend on the minimized parameters themselves. For instance, assume that all values of x can vary between 0 and 1 in such a way that summing them all gives 1; then I have the following inequalities for the bounds:

0 <= x[1] <= 1
x[1] <= x[2] <= 1 - x[1]
x[2] <= x[3] <= 1-x[1]-x[2]
...
x[N-1] <= x[N] <= 1-x[1]-x[2]-x[3]-...-x[N-1]

Does anyone have an idea on how I can construct such an algorithm in Python? Or maybe I can adapt an existing method from SciPy, for example? Thank you in advance!

As a rule of thumb: as soon as your bounds depend on the optimization variables, they are inequality constraints, not bounds. Using 0-based indices, your inequalities can be written as

# left-hand sides
        -x[0] <= 0
x[i] - x[i+1] <= 0 for all i = 0, ..., n-2

# right-hand sides
sum(x[i], i = 0, ..., j) - 1 <= 0 for all j = 0, ..., n-1

Both can be expressed by a simple matrix-vector product:

import numpy as np

# N is the number of optimization variables
D_lhs = np.diag(np.ones(N-1), k=-1) - np.diag(np.ones(N))
D_rhs = np.tril(np.ones(N))

def lhs(x):
    return D_lhs @ x

def rhs(x):
    return D_rhs @ x - np.ones(x.size)
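As a quick sanity check (a sketch with a made-up N = 4 and a point chosen to satisfy the original inequalities), both products should come out entirely non-positive for a feasible x:

```python
import numpy as np

N = 4
D_lhs = np.diag(np.ones(N - 1), k=-1) - np.diag(np.ones(N))
D_rhs = np.tril(np.ones(N))

# feasible point: 0 <= 0.1 <= 0.2 <= 0.3 <= 0.4,
# and every partial sum (0.1, 0.3, 0.6, 1.0) stays <= 1
x = np.array([0.1, 0.2, 0.3, 0.4])

lhs_vals = D_lhs @ x               # monotonicity constraints
rhs_vals = D_rhs @ x - np.ones(N)  # partial-sum constraints

print(lhs_vals)  # all entries <= 0
print(rhs_vals)  # all entries <= 0
```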

As a result, you can use scipy.optimize.minimize to minimize your objective function subject to lhs(x) <= 0 and rhs(x) <= 0 like this:

from scipy.optimize import minimize

# minimize expects each inequality constraint in the form con(x) >= 0,
# so lhs(x) <= 0 is equivalent to -1.0*lhs(x) >= 0
con1 = {'type': 'ineq', 'fun': lambda x: -1.0*lhs(x)}
con2 = {'type': 'ineq', 'fun': lambda x: -1.0*rhs(x)}

result = minimize(your_obj_fun, x0=initial_guess, constraints=(con1, con2))
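Putting it all together, here is a minimal runnable sketch. The quadratic objective, N = 3, and the starting point are made up purely for illustration; `your_obj_fun` would be whatever function you actually want to minimize:

```python
import numpy as np
from scipy.optimize import minimize

N = 3
D_lhs = np.diag(np.ones(N - 1), k=-1) - np.diag(np.ones(N))
D_rhs = np.tril(np.ones(N))

# minimize expects con(x) >= 0, so negate both constraint functions
con1 = {'type': 'ineq', 'fun': lambda x: -1.0 * (D_lhs @ x)}
con2 = {'type': 'ineq', 'fun': lambda x: -1.0 * (D_rhs @ x - np.ones(N))}

# hypothetical objective: pull each x[i] towards 0.5
def obj(x):
    return np.sum((x - 0.5) ** 2)

x0 = np.array([0.1, 0.2, 0.3])  # feasible starting guess
result = minimize(obj, x0=x0, constraints=(con1, con2))

# the active constraint x[0]+x[1]+x[2] <= 1 pushes the
# optimum to x = [1/3, 1/3, 1/3]
print(result.x)
```

With constraints given as dictionaries like this, minimize defaults to the SLSQP method, which handles general inequality constraints.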
