
docplex.cp.model is slower than the exhaustive search

I am working on a combinatorial optimisation problem and realised that CPLEX is taking a significant amount of time to run. Here is a toy example:

[Image in the original post: the model formulation, i.e. maximize sum_i beta_i / (1 + u_i / sum_j I_j * u_ij) subject to sum_j I_j <= k, with each I_j in {0, 1}]

I am using the Python API for docplex:

import numpy as np
from docplex.cp.model import CpoModel

N = 5000   # rows
S = 10     # candidate columns
k = 2      # at most k columns may be selected

# Random problem data: u_i and beta are (N, 1), u_ij is (N, S)
u_i = np.random.rand(N)[:, np.newaxis]
u_ij = np.random.rand(N * S).reshape(N, S)
beta = np.random.rand(N)[:, np.newaxis]

m = CpoModel(name='model')
R = range(S)

# One binary indicator variable I[j] per column
I = m.binary_var_dict(R)

# Select at most k of the S columns
m.add_constraint(m.sum(I[j] for j in R) <= k)

# Revenue of row i: beta_i / (1 + u_i / sum_j I[j] * u_ij)
total_rev = m.sum(beta[i, 0] / (1 + u_i[i, 0] / sum(I[j] * u_ij[i, j] for j in R))
                  for i in range(N))
m.maximize(total_rev)

sol = m.solve(agent='local')
sol.print_solution()

# Print the indices of the selected columns
for i in R:
    if sol[I[i]] == 1:
        print('i : ' + str(i))

Part of the output is as follows:

Model constraints: 1, variables: integer: 10, interval: 0, sequence: 0
Solve status: Optimal
Search status: SearchCompleted, stop cause: SearchHasNotBeenStopped
Solve time: 76.14 sec
-------------------------------------------------------------------------------
Objective values: (1665.58,), bounds: (1665.74,), gaps: (9.27007e-05,)
Variables:
   + 10 anonymous variables
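
(Aside: the status, time, objective, bound and gap figures shown above can also be read programmatically from the solve result. The accessors below are from docplex.cp.solution.CpoSolveResult; verify them against your installed docplex version.)

print('status    :', sol.get_solve_status())
print('time      :', sol.get_solve_time(), 'sec')
print('objective :', sol.get_objective_values())
print('bounds    :', sol.get_objective_bounds())
print('gaps      :', sol.get_objective_gaps())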

I tried the same with an exhaustive search:

import numpy as np
import pandas as pd
from itertools import combinations
import time

# Reuses S, k, u_i, u_ij and beta from the snippet above

start = time.time()

results = []
for K_i in range(1, k + 1):                      # subset sizes 1..k
    for comb_i in combinations(range(S), K_i):   # every subset of that size
        # Indicator column vector for this subset
        I = np.zeros((S, 1), dtype=int)
        I[list(comb_i), 0] = 1

        u_j = np.matmul(u_ij, I)                 # (N, 1): sum_j I_j * u_ij per row
        total_rev = np.sum(beta / (1 + u_i / u_j))
        results.append({'comb_i': comb_i, 'total_rev': total_rev})

end = time.time()
time_elapsed = end - start
print('time_elapsed : ', str(time_elapsed))

results = pd.DataFrame(results)
opt_results = results[results['total_rev'] == results['total_rev'].max()]
print(opt_results)

Output:

time_elapsed :  0.012971639633178711
    comb_i    total_rev
23  (1, 6)  1665.581329

As you can see, CPLEX is roughly 6,000 times slower than the exhaustive search (76.14 s vs. 0.013 s), even though both find the same optimum of 1665.58 and the brute force only has to evaluate C(10,1) + C(10,2) = 55 subsets. Is there a way to speed up the CPLEX run?

For this particular problem:

sol=m.solve(agent='local', SearchType='DepthFirst', Workers=1)

should help out a lot.

If you change

sol=m.solve(agent='local')

to

sol=m.solve(agent='local',SearchType="DepthFirst")

you'll get the optimal solution faster.
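
As a rough illustration of the difference (absolute timings vary with machine and CP Optimizer version), the two search types can be timed side by side on the model above. SearchType, Workers and LogVerbosity are standard CP Optimizer parameters; get_objective_values() is the docplex.cp result accessor:

import time

# Compare restart search (what the default 'Auto' typically resolves to)
# against depth-first search on the same model
for st in ('Restart', 'DepthFirst'):
    t0 = time.time()
    s = m.solve(agent='local', SearchType=st, Workers=1, LogVerbosity='Quiet')
    print(st, ':', round(time.time() - t0, 2), 'sec, objective =', s.get_objective_values()[0])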

NB:

Proving optimality can sometimes take a long time with CP Optimizer.
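
If proving optimality is the bottleneck, a common workaround is to cap the search or loosen the tolerance and accept the incumbent. A minimal sketch, assuming the standard CP Optimizer parameters TimeLimit (in seconds) and RelativeOptimalityTolerance:

# Stop after 10 seconds, or as soon as the relative gap drops below 0.1%
sol = m.solve(agent='local', SearchType='DepthFirst', Workers=1,
              TimeLimit=10, RelativeOptimalityTolerance=0.001)
print(sol.get_solve_status(), ':', sol.get_objective_values())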
