
Optimization with "add to 1" constraint in optimization variable

I am trying to formulate an optimization problem using scipy's minimize function. However, I am running into the following problem that I can't work around:

I want X = [x1, x2, x3, x4, x5] to minimize a cost function F(X). The entries of X, however, are percentages that must add to 1, i.e., np.sum(X) = 1.

The problem is: if I use, for instance, the "SLSQP" method with some initial values (such as X0 = [0.2, 0.2, 0.2, 0.2, 0.2]), it will perturb each value individually to find a descent direction. For example, the algorithm will try X0 -> [0.21, 0.2, 0.2, 0.2, 0.2]. But that cannot happen, because np.sum(X) = 1 is a requirement for me to compute the objective function.

Using constraints does not help either! I could add a constraint np.sum(X) = 1. However, the minimize algorithm only checks the constraint after computing the objective function.

Does anyone have an idea on how to deal with such a problem?

Thanks a lot!

The NLP solvers I typically use will not ask for function and gradient evaluations when X is outside its bounds. That means we can protect things like sqrt(x) and log(x) by adding appropriate lower bounds.

Constraints, however, cannot be assumed to hold when functions and gradients are evaluated.

If your function assumes sum(X) = 1, you can call F(X/sum(X)) instead. The only complication is when sum(X) = 0. In that case, return a large value for F so that the solver will move away from it.
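This normalization idea can be sketched as a wrapper around the objective. The quadratic F and its target vector below are made up purely for illustration; any F that assumes sum(X) == 1 works the same way:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective that is only valid when sum(X) == 1.
def F(X):
    target = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
    return np.sum((np.asarray(X) - target) ** 2)

def safe_F(X):
    s = np.sum(X)
    if s <= 0:
        # Normalization is undefined here; return a large penalty
        # so the solver moves away from this region.
        return 1e10
    return F(X / s)   # evaluate F on the normalized vector

# The equality constraint is still stated, but safe_F no longer
# breaks when the solver evaluates points that violate it slightly.
res = minimize(safe_F, x0=[0.2] * 5, method='SLSQP',
               bounds=[(0.0, 1.0)] * 5,
               constraints={'type': 'eq',
                            'fun': lambda X: np.sum(X) - 1.0})
```

With this wrapper, trial points such as [0.21, 0.2, 0.2, 0.2, 0.2] are rescaled onto the simplex before F sees them, so intermediate infeasible evaluations no longer matter.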

The must-sum-to-1 constraint effectively reduces the number of variables you need to optimize. If you know the values of all but one of your variables, the remaining value is implied by the constraint: it must be exactly 1 - sum(other_vars).

Instead of optimizing five variables X = [x1, x2, x3, x4, x5], you can optimize four variables and compute the fifth: X = [x1, x2, x3, x4, 1 - (x1 + x2 + x3 + x4)].

This formulation of the optimization problem also avoids imprecision that can arise when all entries of X are very small.

If the variables are percentages, you will want to specify lower and upper bounds of 0 and 1, respectively. Furthermore, you will need to constrain x1+x2+x3+x4 <= 1 to avoid a negative x5. (This inequality constraint is much easier for optimizers to handle than the original equality constraint.)

from scipy.optimize import minimize

def cost_function(X):
    x1, x2, x3, x4 = X
    x5 = 1.0 - (x1 + x2 + x3 + x4)   # fifth value implied by the constraint
    return F((x1, x2, x3, x4, x5))

x0 = [0.2, 0.2, 0.2, 0.2]  # initial guess: all values equal
minimize(cost_function, x0,
         bounds=[(0.0, 1.0)]*4,
         constraints=dict(type='ineq', fun=lambda X: 1.0 - X.sum()))
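End to end, the reduced-variable approach looks like this. The quadratic F and its target vector are made up for illustration; substitute your real objective:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective over all five percentages.
def F(X):
    target = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
    return np.sum((np.asarray(X) - target) ** 2)

def cost_function(X):
    # X holds only four free variables; the fifth is implied.
    x5 = 1.0 - np.sum(X)
    return F(np.append(X, x5))

res = minimize(cost_function, x0=[0.2] * 4,
               bounds=[(0.0, 1.0)] * 4,
               constraints={'type': 'ineq',
                            'fun': lambda X: 1.0 - np.sum(X)})

# Reconstruct the full five-element solution; it sums to 1 by construction.
full_x = np.append(res.x, 1.0 - np.sum(res.x))
```

Note that the sum-to-1 property now holds at every evaluation of F, not just at the converged solution, which is exactly what the original question required.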
