
Numerical issue with SCIP: same problem, different optimal values

I'm having the following numerical issue while using the SCIP solver as a callable library via PySCIPOpt:

Two equivalent and almost identical models for the same problem yield different optimal values and optimal solutions, with the optimal values differing by a relative amount on the order of 1e-6.

An independent piece of software verified that the solutions yielded by both models are feasible for the original problem and that their true objective values agree with the optimal values reported by SCIP for each model. Before this instance appeared, a number of instances of the same problem had been solved, with both models always agreeing on their optimal solutions and values.

Is it possible to modify the numerical precision that SCIP uses for comparing the values of primal solutions against each other and against dual bounds? Which parameters should be modified to overcome this numerical difficulty?

What I tried

I've tried the following things and the problem persisted:

  1. Turning presolving off with model.setPresolve(SCIP_PARAMSETTING.OFF).
  2. Setting model.setParam('numerics/feastol', epsilon) with different values of epsilon (both calls are sketched below).
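
For reference, here is a minimal sketch of those two calls in PySCIPOpt, together with a few related numerics parameters that exist in SCIP 6.x. The concrete tolerance values are illustrative; adjusting them is an experiment, not a guaranteed fix.

    from pyscipopt import Model, SCIP_PARAMSETTING

    model = Model("example")  # placeholder; build the real variables/constraints here

    # 1. Turn presolving off completely.
    model.setPresolve(SCIP_PARAMSETTING.OFF)

    # 2. Tighten the primal feasibility tolerance (SCIP's default is 1e-6).
    model.setParam('numerics/feastol', 1e-9)

    # Related tolerances that govern how SCIP compares floating-point values:
    model.setParam('numerics/epsilon', 1e-12)     # absolute comparison tolerance
    model.setParam('numerics/sumepsilon', 1e-9)   # tolerance for comparing sums
    model.setParam('numerics/dualfeastol', 1e-9)  # dual feasibility tolerance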

Since the feasible sets and objective functions should agree (see the description below), I've also checked that the actual coefficients of the two objective functions are identical, by calling model.getObjective() on each model and comparing the coefficient of every variable appearing in the objective for exact equality.
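
A sketch of that coefficient check, assuming the two PySCIPOpt models are available as model_a and model_b and that corresponding variables share the same name (only linear objective terms are compared):

    def objective_coeffs(model):
        """Map variable name -> coefficient for a linear objective."""
        coeffs = {}
        for term, coef in model.getObjective().terms.items():
            if len(term) == 1:  # skip the constant (empty) term and any products
                coeffs[term[0].name] = coef
        return coeffs

    coeffs_a = objective_coeffs(model_a)
    coeffs_b = objective_coeffs(model_b)
    for name in sorted(set(coeffs_a) | set(coeffs_b)):
        a, b = coeffs_a.get(name, 0.0), coeffs_b.get(name, 0.0)
        if a != b:
            print(f"mismatch on {name}: {a} vs {b}")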

The only thing that seemed to help was to add some noise (multiplying by numbers of the form 1+eps with small eps) to the coefficients in the objective function of the model yielding the worse solution. This makes SCIP yield the same (and the better) solution for both models, provided eps is within a certain range.
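
For completeness, this is roughly how the perturbation was applied; the names coeffs (objective-coefficient dict) and X (variable dict of the affected model) are hypothetical, and the magnitude is illustrative only:

    import random
    from pyscipopt import quicksum

    eps_scale = 1e-7  # illustrative magnitude; the useful range was found by trial
    perturbed = {i: c * (1.0 + random.uniform(-eps_scale, eps_scale))
                 for i, c in coeffs.items()}
    model.setObjective(quicksum(perturbed[i] * X[i] for i in X), "minimize")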

Just in case, this is what I get with scip -v in the terminal:

SCIP version 6.0.2 [precision: 8 byte] [memory: block] [mode: optimized] [LP solver: SoPlex 4.0.2] [GitHash: e639a0059d]

Description of the models

Model (I) has approximately 30K binary variables, say X[i] for i in some index set I. It's feasible and not a hard MIP.

Model (II) has the same variables as model (I) plus ~100 continuous variables, say Y[j] for j in some index set J. Model (II) also has some constraints of the form X[i_1] + X[i_2] + ... + X[i_n] <= Y[j].

Both objective functions agree and depend only on the X[i] variables; the sense is minimization. Note that the variables Y[j] are essentially free in model (II), since they are continuous and do not appear in the objective function. Obviously, there is no point in including the Y[j] variables, but the optimal values shouldn't be different.

Model (II) is the one yielding the better (i.e., smaller) value.
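
A toy PySCIPOpt reconstruction of the two formulations, just to make the structure concrete; the index sets, coefficients, and names below are made up, and the real instance has ~30K binaries and ~100 Y-variables:

    from pyscipopt import Model, quicksum

    I = range(6)                       # indices of the binary X-variables
    J = {0: [0, 1, 2], 1: [3, 4, 5]}   # for each j, the X-indices in its constraint
    c = {i: 1.0 + 0.1 * i for i in I}  # objective coefficients (on X only)

    def build(with_y):
        m = Model("model_II" if with_y else "model_I")
        X = {i: m.addVar(vtype="B", name=f"X_{i}") for i in I}
        if with_y:
            # Y modelled here as fully free (lb=None/ub=None mean +-infinity);
            # the real formulation may bound them.
            Y = {j: m.addVar(vtype="C", lb=None, ub=None, name=f"Y_{j}") for j in J}
            for j, idxs in J.items():
                m.addCons(quicksum(X[i] for i in idxs) <= Y[j])
        m.setObjective(quicksum(c[i] * X[i] for i in I), "minimize")
        return m

    m1, m2 = build(False), build(True)
    m1.optimize()
    m2.optimize()
    print(m1.getObjVal(), m2.getObjVal())  # expected to agree up to tolerances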

Answer

Sorry for the late answer. In general, it can definitely happen that any MIP solver reports different optimal solutions for formulations that are slightly perturbed (even if they are mathematically the same), due to the use of floating-point arithmetic.

Your problem does seem a bit weird, though. You say the Y-variables are free, i.e., they do not have any variable bounds attached to them? If that is the case, I would be very surprised if they don't get presolved away.
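
If it helps, one way to verify this on the PySCIPOpt side is to inspect the original bounds of the Y-variables directly; this sketch assumes the Y-variables can be recognised by a name prefix:

    inf = model.infinity()
    for v in model.getVars():
        if v.name.startswith("Y"):  # hypothetical naming convention
            lb, ub = v.getLbOriginal(), v.getUbOriginal()
            status = "free" if (lb <= -inf and ub >= inf) else "bounded"
            print(v.name, v.vtype(), lb, ub, status)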

If you are still interested in this problem, could you provide your instance files to me and I will look at them?
