I am trying to solve a quadratic program in R with the osqp package, but the results look wrong. The code to reproduce the example is attached below (the objective function is being minimized):
library(osqp)
library(Matrix)

P <- Matrix(c(1, 0, 0,
              0, 1, 0,
              0, 0, 1), 3, 3, sparse = TRUE)
q <- c(0, 1, 1)
A <- Matrix(c(1, 1, 0,
              1, 0, 1,
              1, 0, 1), 3, 3, sparse = TRUE)
l <- c(1.2, 0, 0.7)
u <- c(2, 0.8, 0.7)

settings <- osqpSettings(alpha = 1.0, eps_abs = 1.0e-05, eps_rel = 1.0e-05)
model <- osqp(P, q, A, l, u, settings)
res <- model$Solve()
The solver output reports an optimal value of 0.7. However, if I plug the returned solution back into the objective function, I get 1.018556:
# this should be the final objective value after solving
0.5 * t(res$x) %*% P %*% res$x + t(c(0, 1, 1)) %*% res$x
# 1.018556

# the solution
res$x
# 0.6261887 0.3500000 0.3500000
More importantly, this is not the optimal solution: just by eyeballing, I can find a feasible point with a smaller objective value, namely (0.5, 0.1, 0.6). I am not sure what I did wrong.
Thank you for any advice in advance!
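For reference, the comparison of the two points can be checked with plain NumPy, independently of osqp (a standalone sketch using the problem data above):

```python
import numpy as np

# problem data from the question
P = np.eye(3)
q = np.array([0.0, 1.0, 1.0])

def objective(x):
    """Evaluate 0.5 * x'Px + q'x."""
    x = np.asarray(x, dtype=float)
    return float(0.5 * x @ P @ x + q @ x)

print(objective([0.6261887, 0.35, 0.35]))  # solver's solution: ~1.018556
print(objective([0.5, 0.1, 0.6]))          # eyeballed point:    1.01
```

Both points satisfy the constraints, yet the eyeballed one has the smaller objective, so the returned point cannot be optimal.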
I tried to get OSQP (in Python) to reproduce your result. I first suspected alpha, but it seems to make no difference. The other possibility is the number of iterations: it might not have converged within 25 iterations, but in that case I would not expect an objective of exactly 0.70000.
import numpy as np
import osqp
from scipy.sparse import csr_matrix

P = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]]).T
q = np.array([0., 1., 1.])[None, :]
A = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [1., 0., 1.]]).T
l = np.array([1.2, 0., 0.7])[:, None]
u = np.array([2., 0.8, 0.7])[:, None]

m = osqp.OSQP()
m.setup(P=csr_matrix(P), A=csr_matrix(A), l=l, u=u, q=q.T,
        alpha=1.0, eps_abs=1.0e-05, eps_rel=1.0e-05, max_iter=25)
results = m.solve()
This gives x = [0.45492868, 0.35002252, 0.35002252], which is already close to the optimal solution [0.5, 0.35, 0.35]. One thing I noticed is that q'x = 0.7.
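As a sanity check that [0.5, 0.35, 0.35] really is the optimum, here is a sketch using a generic constrained solver from scipy instead of osqp (my addition, not part of the original experiment):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# same problem data, written row-major (equivalent to the transposed arrays above)
P = np.eye(3)
q = np.array([0.0, 1.0, 1.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
l = np.array([1.2, 0.0, 0.7])
u = np.array([2.0, 0.8, 0.7])

# minimize 0.5 * x'Px + q'x subject to l <= Ax <= u
res = minimize(lambda x: 0.5 * x @ P @ x + q @ x,
               x0=np.array([0.4, 0.3, 0.4]),
               constraints=LinearConstraint(A, l, u))
print(res.x)    # ~[0.5, 0.35, 0.35]
print(res.fun)  # ~0.9475
```

With x2 + x3 pinned to 0.7, the quadratic term is minimized by x2 = x3 = 0.35, and the first constraint then forces x1 >= 0.5, so the optimum is (0.5, 0.35, 0.35) with objective 0.9475.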
Looking closer, I see that the last row of your A is q, and the corresponding constraint specifies that 0.7 <= x(2) + x(3) <= 0.7, i.e. any feasible solution of your problem has q'x = 0.7. Since the solver reports an objective of exactly 0.7 = q'x, we can conclude that the quadratic term 0.5 * x'Px is being treated as zero internally.
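To spell that out with the solution your R run returned (a quick check, not part of the solver run):

```python
import numpy as np

q = np.array([0.0, 1.0, 1.0])
x = np.array([0.62618873, 0.35, 0.35])  # solution returned by osqp in R

# the linear term alone already accounts for the reported objective
print(q @ x)  # 0.7 -- matches the reported "optimal objective: 0.7000"
```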
The solution you showed is very close to the one I obtain if I set the P matrix to zero:
m.setup(P=csr_matrix(P) * 0, A=csr_matrix(A), l=l, u=u, q=q.T,
        alpha=1.0, eps_abs=1.0e-05, eps_rel=1.0e-05, max_iter=100)
results = m.solve()
results.x
-----------------------------------------------------------------
OSQP v0.6.2 - Operator Splitting QP Solver
(c) Bartolomeo Stellato, Goran Banjac
University of Oxford - Stanford University 2021
-----------------------------------------------------------------
problem: variables n = 3, constraints m = 3
nnz(P) + nnz(A) = 9
settings: linear system solver = qdldl,
eps_abs = 1.0e-05, eps_rel = 1.0e-05,
eps_prim_inf = 1.0e-04, eps_dual_inf = 1.0e-04,
rho = 1.00e-01 (adaptive),
sigma = 1.00e-06, alpha = 1.00, max_iter = 25
check_termination: on (interval 25),
scaling: on, scaled_termination: off
warm start: on, polish: off, time_limit: off
iter objective pri res dua res rho time
1 -9.9950e-03 1.20e+00 7.01e+01 1.00e-01 5.74e-05s
25 7.0000e-01 2.66e-15 6.94e-18 1.00e-01 1.07e-03s
status: solved
number of iterations: 25
optimal objective: 0.7000
run time: 1.79e-03s
optimal rho estimate: 1.70e+00
array([0.62618873, 0.35 , 0.35 ])