Weight Middle Range for use of scipy.optimize.linprog (Simplex algorithm)

The basic problem is given these constraints (this is a simple example, my use case follows the same format but can introduce more variables):

x + y = C1
C2 <= x <= C3
C4 <= y <= C5

(where CX are constants, and x and y are variables), solve.

Easy enough with scipy.optimize.linprog. I'm able to get a technically correct result.

Example. Using these specific constraints:

x + y = 50
5 <= x <= 65
10 <= y <= 40

We get:

>>> import scipy.optimize
>>> res = scipy.optimize.linprog([1, 1], bounds=[(5, 65), (10, 40)], A_eq=[[1, 1]], b_eq=[50])
>>> res.x
array([ 40.,  10.])

which is technically correct: the equality constraint pins the objective x + y to 50, so every feasible point is optimal and the solver simply returns a vertex of the feasible region. Notice that the second variable is at its minimum. But for business reasons we would prefer something more "fair", where each variable gets to go above its minimum if possible, something along the lines of (but not necessarily exactly):

array([ 25.,  25.])

So I've been thinking about weighting the midpoints. How can I use the scipy.optimize.linprog API (or another scipy optimization API) to modify the objective function so that it gives higher priority to values closer to the midpoint of each variable's range?

Please treat this answer as a draft to give an idea of how to approach this kind of problem. It is certainly not the only way to do it, and not necessarily the most efficient algorithm for this class of problem.


The problem here is that you probably cannot express your idea as a linear objective function. You need some kind of distance measure, which has to be at least quadratic if the objective is to stay smooth.

The following code adds the squared L2 norm of the x vector as a penalty. This helps in this case because the squared L2 norm is quadratic in its components and therefore prefers all components being moderately small over one large and one small.

from scipy.optimize import fmin_slsqp

# equality constraints as h(x) = 0
def h(x):
    return x[0] + x[1] - 50

# inequality constraints as g(x) >= 0
def g(x):
    return [
        x[0] - 5,
        -x[0] + 65,
        x[1] - 10,
        -x[1] + 40,
    ]

# target functions
f1 = lambda x: x[0] + x[1]              # original linear objective
f2 = lambda x: x[0] + x[1] + sum(x**2)  # plus squared L2 penalty
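To see why the quadratic penalty favors the balanced split, compare the penalty term for two feasible points with the same sum of 50 (a quick check, not part of the original answer):

```python
# Both points satisfy x0 + x1 = 50, but the squared L2 norm differs:
# the unbalanced split pays a much larger penalty than the balanced one.
print(40**2 + 10**2)  # unbalanced: 1700
print(25**2 + 25**2)  # balanced:   1250
```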

The results:

x = fmin_slsqp(f1, (5,45), f_eqcons=h, f_ieqcons=g)
print(x)

outputs:

Optimization terminated successfully.    (Exit mode 0)
            Current function value: 50.0
            Iterations: 1
            Function evaluations: 5
            Gradient evaluations: 1
[ 10.  40.]

And the penalized version:

x = fmin_slsqp(f2, (5,45), f_eqcons=h, f_ieqcons=g)
print(x)

prints

Optimization terminated successfully.    (Exit mode 0)
            Current function value: 1300.0
            Iterations: 3
            Function evaluations: 12
            Gradient evaluations: 3
[ 25.  25.]
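A natural refinement, not part of the original answer, is to penalize the squared distance to each variable's midpoint instead of the raw magnitude, which matches the question's "weight the midpoints" idea even when the ranges are asymmetric. A sketch using scipy.optimize.minimize with the SLSQP method, assuming the midpoints 35 and 25 of the example ranges [5, 65] and [10, 40]:

```python
import numpy as np
from scipy.optimize import minimize

# Midpoints of the example ranges [5, 65] and [10, 40].
mid = np.array([35.0, 25.0])

# Quadratic objective: squared distance to the midpoints.
def f(x):
    return np.sum((x - mid) ** 2)

res = minimize(
    f,
    x0=[5.0, 45.0],
    method="SLSQP",
    bounds=[(5, 65), (10, 40)],
    constraints=[{"type": "eq", "fun": lambda x: x[0] + x[1] - 50}],
)
print(res.x)  # approximately [30. 20.]
```

Here the equality constraint forces a total shortfall of 10 below the midpoints, which the quadratic objective splits evenly (5 from each), giving [30, 20] rather than [25, 25]. Which notion of "fair" is right is a business decision; this variant keeps each variable an equal distance from its own midpoint.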
