
Python constrained linear optimization

Hello, I'm trying to solve a linear system of equations with two side constraints. The first, that the result should sum to 1, is successfully implemented, but I also need each solution component to be non-negative. Does anyone know how to add this constraint? Thanks.

import numpy as np
import numpy.linalg as LA
import scipy.optimize as optimize

A = np.array([[.5, .3, .2], [.4, 6, .3], [.2, .3, .5]])
b = np.array([0, 0, 0])
x = LA.solve(A, b)

def f(x):
    y = np.dot(A, x) - b
    return np.dot(y, y)

cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1},
        {'type': 'eq', 'fun': lambda x: x >= 0})
res = optimize.minimize(f, [0, 0, 0], method='SLSQP', constraints=cons,
                        options={'disp': False})
xbest = res['x']

print(xbest)

I am assuming that this is the system of equations you are trying to solve:

.5x1 + .3x2 + .2x3 = 0
.4x1 + 6x2 + .3x3 = 0
.2x1 + .3x2 + .5x3 = 0
x1 + x2 + x3 = 1
x1, x2, x3 >= 0

This can be solved easily using scipy.optimize.linprog. Since you do not have an objective function, the coefficients of the objective function can simply be [0., 0., 0.].

from scipy.optimize import linprog

# Zero objective: we only care about feasibility.
# The sum-to-1 condition is appended as a fourth equality row,
# and bounds=(0, None) enforces non-negativity on every variable.
print(linprog(c=[0., 0., 0.],
    A_eq=[[.5, .3, .2], [.4, 6, .3], [.2, .3, .5], [1., 1., 1.]],
    b_eq=[0., 0., 0., 1.],
    bounds=(0, None)))

This should answer your question. Note, however, that no feasible solution exists for your system: with non-negative coefficients and non-negative x, the three homogeneous equations force x = 0, which contradicts x1 + x2 + x3 = 1, so linprog will report the problem as infeasible. You can find more information about scipy.optimize.linprog here: http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.linprog.html
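If you prefer to stay with the least-squares formulation and scipy.optimize.minimize from the question, note that non-negativity is not an 'eq' constraint; it can be passed through the bounds argument (or as an 'ineq' constraint). Below is a minimal sketch along those lines, not part of the original answer, assuming the goal is the non-negative, sum-to-1 vector that minimizes ||Ax - b||^2:

import numpy as np
import scipy.optimize as optimize

A = np.array([[.5, .3, .2], [.4, 6, .3], [.2, .3, .5]])
b = np.array([0., 0., 0.])

def f(x):
    # squared residual ||Ax - b||^2
    y = np.dot(A, x) - b
    return np.dot(y, y)

# Equality constraint: components sum to 1.
# Non-negativity goes into `bounds`; an equivalent alternative would be
# an inequality constraint {'type': 'ineq', 'fun': lambda x: x}.
cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1},)
bnds = [(0, None)] * 3

res = optimize.minimize(f, [1./3, 1./3, 1./3], method='SLSQP',
                        constraints=cons, bounds=bnds,
                        options={'disp': False})
print(res.x)

Because the exact system has no feasible solution, this returns the non-negative, sum-to-1 vector closest to solving it in the least-squares sense rather than an exact solution.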
