
Penalize Python PuLP Optimization function as decision variable diverges from a specific constant

I'm working on a blending problem similar to the example provided in the PuLP documentation. However, I'd like to penalize the objective to the extent that one of my ingredients differs from some known constant. In this way, I'd like to force the solver to prefer manipulating the other decision variables before resorting to changing my key one from the set constant value (which I'm also using as a starting value in a warm start). I've tried modifying the objective function as shown below, but it doesn't seem to work. I realize I need to add some additional constraints to essentially take the absolute value of the difference between the beef percentage and the baseline, but I'm not sure that's the problem. When I print out the problem, the baseline isn't kept in the same term as the beef percentage.

PENALTY = -0.5
KEY_INGREDIENT = 'BEEF'
KEY_INGREDIENT_BASELINE = 20
prob += lpSum([costs[i] * ingredient_vars[i] for i in Ingredients] + [PENALTY * (ingredient_vars[KEY_INGREDIENT] - KEY_INGREDIENT_BASELINE)])

Your approach makes sense, but in the objective you need the absolute value of the difference. To linearize the absolute value, introduce a continuous variable z and add the following two constraints. Together they force z >= |difference|, and since the penalty makes a larger z more expensive, the solver drives z down to exactly that absolute value:

z = LpVariable("z", lowBound=0)  # continuous by default
prob += z >= ingredient_vars[KEY_INGREDIENT] - KEY_INGREDIENT_BASELINE
prob += z >= -(ingredient_vars[KEY_INGREDIENT] - KEY_INGREDIENT_BASELINE)

Furthermore, let PENALTY be a positive value (with PENALTY = -0.5, a large difference would actually improve the objective, i.e. deviation would be rewarded rather than penalized).

Finally, modify the objective:

prob += lpSum([costs[i] * ingredient_vars[i] for i in Ingredients]) + PENALTY * z
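
For reference, here is how the pieces fit together in a minimal runnable sketch. It assumes the data and variable names of the Whiskas blending example from the PuLP documentation (Ingredients, costs, ingredient_vars) and omits the nutrition constraints for brevity; the problem name and the z variable name are arbitrary choices for illustration.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum

# Data as in the Whiskas example from the PuLP docs
Ingredients = ['CHICKEN', 'BEEF', 'MUTTON', 'RICE', 'WHEAT', 'GEL']
costs = {'CHICKEN': 0.013, 'BEEF': 0.008, 'MUTTON': 0.010,
         'RICE': 0.002, 'WHEAT': 0.005, 'GEL': 0.001}

PENALTY = 0.5                    # positive: deviating from the baseline costs
KEY_INGREDIENT = 'BEEF'
KEY_INGREDIENT_BASELINE = 20

prob = LpProblem("WhiskasWithPenalty", LpMinimize)
ingredient_vars = LpVariable.dicts("Ingr", Ingredients, lowBound=0)

# Auxiliary variable: z >= |beef% - baseline| via the two linear constraints;
# the objective then pushes z down onto the larger right-hand side
z = LpVariable("KeyIngredientDeviation", lowBound=0)
prob += z >= ingredient_vars[KEY_INGREDIENT] - KEY_INGREDIENT_BASELINE
prob += z >= -(ingredient_vars[KEY_INGREDIENT] - KEY_INGREDIENT_BASELINE)

# Objective: total ingredient cost plus the penalized deviation
prob += lpSum([costs[i] * ingredient_vars[i] for i in Ingredients]) + PENALTY * z

# Percentages must sum to 100, as in the original example
prob += lpSum([ingredient_vars[i] for i in Ingredients]) == 100

prob.solve()
print(ingredient_vars[KEY_INGREDIENT].value(), z.value())

With a PENALTY this large relative to the cost coefficients, the solver keeps BEEF at the 20% baseline and fills the rest with the cheapest ingredient; shrinking PENALTY toward zero lets cost savings outweigh the deviation penalty. For the warm start you mentioned, recent PuLP versions also support ingredient_vars[KEY_INGREDIENT].setInitialValue(KEY_INGREDIENT_BASELINE) together with a solver invoked with warmStart=True.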
