
Find optimal vector that minimizes function

I am trying to find a vector that minimizes the residual sum of squares when a matrix is multiplied by it.

I know of scipy's optimize package (which has a minimize function). However, there is an extra constraint for my code. The sum of all entries of w (see function below) must equal 1, and no entry of w can be less than 0. Is there a package that does this for me? If not, how can I do this?

The function I am trying to minimize over w:

import numpy as np

def w_rss(w,x0,x1):
    # residual sum of squares of x1 - x0.w
    predictions = np.dot(x0,w)
    errors = x1 - predictions
    rss = np.dot(errors.transpose(),errors).item(0)
    return rss

X0 = np.array([[3,4,5,3],
               [1,2,2,4],
               [6,5,3,7],
               [1,0,5,2]])  

X1 = np.array([[4],
               [2],
               [4],
               [2]]) 

W = np.array([[.0],
              [.5],
              [.5],
              [.0]])

print w_rss(W,X0,X1)

So far this is my best attempt at looping through possible values of w, but it's not working properly.

def get_w(x0,x1):
    J = x0.shape[1]
    W0 = np.matrix([[1.0/J]*J]).transpose()
    rss0 = w_rss(W0,x0,x1)
    loop = range(J)
    for i in loop:
        W1 = W0
        rss1 = rss0
        while rss0 == rss1:
            den = len(loop)-1
            W1[i][0] += 0.01
            for j in loop:
                if i == j:
                    continue
                W1[j][0] -= 0.01/den
                if W1[j][0] <= 0:
                    loop.remove(j)
            rss1 = w_rss(W1,x0,x1)
            if rss1 < rss0:
                #print W1
                W0 = W1
                rss0 = rss1
            print '--'
            print rss0
            print W0

    return W0,rss0

The SLSQP code in scipy can do this. You can use scipy.optimize.minimize with method='SLSQP', or you can use the function fmin_slsqp directly. In the following, I use fmin_slsqp.

The scipy solvers generally pass a one-dimensional array to the objective function, so to be consistent, I'll change W and X1 to be 1-d arrays, and I'll write the objective function (now called w_rss1) to expect a 1-d argument w.

The condition that all the elements of w must be between 0 and 1 is specified with the bounds argument, and the condition that they must sum to 1 is specified with the f_eqcons argument. The constraint function returns np.sum(w) - 1, so it is 0 exactly when the elements sum to 1.

Here's the code:

import numpy as np
from scipy.optimize import fmin_slsqp


def w_rss1(w, x0, x1):
    predictions = np.dot(x0, w)
    errors = x1 - predictions
    rss = (errors**2).sum()
    return rss


def sum1constraint(w, x0, x1):
    return np.sum(w) - 1


X0 = np.array([[3,4,5,3],
               [1,2,2,4],
               [6,5,3,7],
               [1,0,5,2]])  

X1 = np.array([4, 2, 4, 2]) 

W = np.array([.0, .5, .5, .0])

result = fmin_slsqp(w_rss1, W, f_eqcons=sum1constraint, bounds=[(0.0, 1.0)]*len(W),
                    args=(X0, X1), disp=False, full_output=True)
Wopt, fW, its, imode, smode = result

if imode != 0:
    print("Optimization failed: " + smode)
else:
    print(Wopt)

When I run this, the output is

[ 0.05172414  0.55172414  0.39655172  0.        ]
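
As mentioned at the start, the same problem can also be solved through the scipy.optimize.minimize interface with method='SLSQP'. There the equality constraint is passed as a constraints dictionary rather than f_eqcons, and the bounds argument works the same way. A minimal sketch using the same data as above:

```python
import numpy as np
from scipy.optimize import minimize

def w_rss1(w, x0, x1):
    # Residual sum of squares of x1 - x0.w for a 1-d weight vector w.
    errors = x1 - np.dot(x0, w)
    return (errors**2).sum()

X0 = np.array([[3, 4, 5, 3],
               [1, 2, 2, 4],
               [6, 5, 3, 7],
               [1, 0, 5, 2]])
X1 = np.array([4, 2, 4, 2])
W = np.array([0.0, 0.5, 0.5, 0.0])   # initial guess; already sums to 1

# Equality constraint: the solver drives this function to 0,
# i.e. it forces np.sum(w) == 1.
cons = [{'type': 'eq', 'fun': lambda w: np.sum(w) - 1}]

res = minimize(w_rss1, W, args=(X0, X1), method='SLSQP',
               bounds=[(0.0, 1.0)] * len(W), constraints=cons)
print(res.x)
```

This should converge to the same weights as the fmin_slsqp call; res.x holds the solution and res.success reports whether the solver converged.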
