
Multiprocessing with Gurobi

I am trying to use multiprocessing for my optimization problem. I have a number of jobs (10, 20, 30) that need to be assigned to machines.

I am only including a rough outline of the optimization code, because it is far too long:

def solve(jobs):
    def load_data(jobs):
        df = pd.read_csv(path_jobs)

    def create_model():
        model = pg.Model(env)

        # sets ...
        # variables ...
        # constraints ...
        # objective ...

        model.optimize()

        print(results of optimization)
        export_results = results.to_csv

    results = create_model()
     

This is the code for the multiprocessing:

import multiprocessing as mp

if __name__ == '__main__':
    with mp.Pool() as pool:
        jobs = [10, 20, 30]
        result = pool.map(solve, jobs)
        print(result)

When executing the model with multiprocessing, it stops after `model.optimize()`, so it neither prints nor exports the results. Additionally, I get this error message:

MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x7f7bd8fdb890>'. Reason: 'TypeError("can't pickle PyCapsule objects")'

I would be grateful for any ideas. The optimization problem itself works perfectly.
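A note on the error above: the `can't pickle PyCapsule objects` message typically means the worker tried to send something wrapping a native C handle (such as a Gurobi model or environment object, or an exception traceback referencing one) back through the pool's pickle channel. A minimal sketch of the usual workaround, not from the original thread: build the Gurobi objects entirely inside the worker and return only plain Python data. The `2.5 * jobs` objective here is a hypothetical placeholder standing in for `model.ObjVal`.

```python
import multiprocessing as mp

def solve(jobs):
    # Create the environment and model inside the worker process, never in
    # the parent: Gurobi handles wrap C pointers (PyCapsule) and cannot
    # cross process boundaries via pickle.
    #   env = pg.Env(); model = pg.Model(env); ...; model.optimize()
    objective = 2.5 * jobs  # placeholder for model.ObjVal

    # Return only picklable Python data (floats, dicts, lists),
    # never the model or environment object itself.
    return {"jobs": jobs, "objective": objective}

if __name__ == "__main__":
    with mp.Pool() as pool:
        results = pool.map(solve, [10, 20, 30])
    print(results)
```

Writing the CSV export inside the worker (as the outline above already does) also avoids shipping results through pickle at all; then `solve` can return just a status flag.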

> Try with this type of method:

from multiprocessing import Pool

def cube(x):
    return x * x * x


def main():
    threadObj = Pool(5)
    arr = [1, 2, 3, 4]
    outPut = []
    for loop in threadObj.map(cube, arr):
        outPut.append(loop)
    threadObj.close()
    print(outPut)


# The guard is required so that worker processes do not re-run main()
# when they import this module.
if __name__ == '__main__':
    main()

