
PuLP: Minimizing the standard deviation of decision variables

In an optimization problem developed in PuLP I use the following objective function:

objective = p.lpSum(vec[r] for r in range(0,len(vec)))

All variables are non-negative integers, hence the sum over the vector gives the total number of units for my problem. Now I am struggling with the fact that PuLP only returns one of the many optimal solutions, and I would like to narrow down the solution space to favour the solution set with the smallest standard deviation of the decision variables. E.g. say vec is a vector whose elements must sum to 18 and the returned solution is 6 and 12. Then 7/11, 8/10 and 9/9 are equally feasible solutions, and I would like PuLP to arrive at 9/9. The objective

objective = p.lpSum(vec[r]*vec[r] for r in range(0,len(vec)))

would obviously create a cost function that would help here, but alas, it is non-linear and PuLP throws an error. Can anyone point me to a potential solution?

Instead of minimizing the standard deviation (which is inherently non-linear), you could minimize the range or bandwidth. Along the lines of:

 minimize maxv-minv
 maxv >= vec[r]   for all r
 minv <= vec[r]   for all r
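A minimal PuLP sketch of this idea, assuming a toy model with two non-negative integer variables that must sum to 18 (the names n_vars and total_units and the sum constraint are illustrative stand-ins for your actual model, not taken from the question):

    import pulp as p

    # Illustrative data: two decision variables that together must supply 18 units.
    n_vars = 2
    total_units = 18

    prob = p.LpProblem("min_range", p.LpMinimize)

    # Non-negative integer decision variables, as in the question.
    vec = [p.LpVariable(f"x{r}", lowBound=0, cat="Integer") for r in range(n_vars)]

    # Auxiliary variables bounding the largest and smallest element of vec.
    maxv = p.LpVariable("maxv", lowBound=0)
    minv = p.LpVariable("minv", lowBound=0)

    # Objective: minimize the spread (range) instead of the standard deviation.
    prob += maxv - minv

    # maxv must be at least every element, minv at most every element.
    for r in range(n_vars):
        prob += maxv >= vec[r]
        prob += minv <= vec[r]

    # Illustrative feasibility constraint: the units must sum to total_units.
    prob += p.lpSum(vec) == total_units

    prob.solve()
    print([int(v.value()) for v in vec])  # expected: [9, 9]

If the total number of units is not fixed by a constraint but is itself part of the objective, one common approach is to combine the two terms with weights, e.g. prob += p.lpSum(vec) + w*(maxv - minv) for some small weight w, so the original objective still dominates and the range term only breaks ties.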

