
Python multiprocessing using nested objects

I'm writing an optimization algorithm which uses several different initial conditions to increase the chance of finding the global optimum. I'm trying to make the code run faster by using the multiprocessing library and running the optimizations in different processes.

This is basically how my code works now:

from multiprocessing import Process, Queue
from SupportCostModel.SupportStructure import SupportStructure, SupportType

# Method the processes will execute
def optimizeAlgoritm(optimizeObject, qOut):

    optimizeObject.Optimize()
    qOut.put(optimizeObject)

# Method the main thread will execute
def getOptimumalObject(n):

    for i in range(n):

        # Create a new process with a new nested object that should be optimized
        p = Process(target = optimizeAlgoritm, args = (SupportStructure(SupportType.Monopile), qOut))
        processes.append(p)
        p.daemon = True
        p.start()

# Part the main thread is running        
if __name__ == '__main__':

    qOut = Queue()
    processes = []

    # Run the code on 6 processes
    getOptimumalObject(6)

    for i in range(len(processes)):
        processes[i].join()

    # Get the best optimized object and print the resulting value
    minimum = 1000000000000000000000000.

    while not qOut.empty():

        optimizeObject = qOut.get()

        if optimizeObject.GetTotalMass() < minimum:

            bestObject = optimizeObject
            minimum = optimizeObject.GetTotalMass()

    print(bestObject.GetTotalMass())

This code works as long as I use no more than 4 processes. If I run more than 4, say 6 as in the example, two processes get stuck at the end and the program never stops running, because the main thread is still blocked at processes[i].join(). I think those two processes have a problem with the qOut.put() call in optimizeAlgoritm. When I remove the qOut.put(), the code exits with the error that bestObject doesn't exist, as expected. The strange thing, however, is that if I print something (for example the object's minimum) after the qOut.put(), it does get printed, but the process stays alive while using 0% of my CPU. This forces the main program to stay alive as well.
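
For reference, the multiprocessing documentation warns that a process which has put items on a Queue will not terminate until all buffered items have been flushed to the underlying pipe, so joining the workers before draining the queue can deadlock. Below is a minimal sketch of the drain-before-join pattern; the square worker is a placeholder and not part of the original code:

from multiprocessing import Process, Queue

# Placeholder worker (not part of the original code): put one result on the queue
def square(x, qOut):
    qOut.put(x * x)

if __name__ == '__main__':

    qOut = Queue()
    processes = []

    for i in range(6):
        p = Process(target = square, args = (i, qOut))
        processes.append(p)
        p.start()

    # Drain the queue BEFORE joining: a child that has put data on a Queue only
    # exits once that data has been flushed to the underlying pipe, so joining
    # first can deadlock when the pipe buffer fills up
    results = [qOut.get() for _ in processes]

    for p in processes:
        p.join()

    print(results)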

I'm quite new to multiprocessing, and I read that OOP and multiprocessing don't always work well hand in hand. Am I using the wrong approach here? It's a bit frustrating, because it almost works, but not for more than 4 processes.

Thanks in advance!

Update: I got it to work by using pipes to send my objects!

This is the code I used:

from multiprocessing import Process, Pipe
from SupportCostModel.SupportStructure import SupportStructure, SupportType
import random

# Method the processes will execute
def optimizeAlgoritm(optimizeObject, conn):

    optimizeObject.Optimize()

    # Send the optimized object
    conn.send(optimizeObject)

# Method the main thread will execute
def getOptimumalObject(n):

    connections = []

    for i in range(n):

        # Create a pipe for each of the processes that is started
        parent_conn, child_conn = Pipe()

        # Save the parent connections
        connections.append(parent_conn)

        # Create objects that needs to by optimized using different initial conditions
        if i == 0:
            structure = SupportStructure(SupportType.Monopile)
        else:
            structure = SupportStructure(SupportType.Monopile)
            # randrange requires integer bounds and returns an integer in [start, stop)
            structure.properties.D_mp = random.randrange(4, 10)
            structure.properties.Dtrat_tower = random.randrange(90, 120)
            structure.properties.Dtrat_mud = random.randrange(60, 100)
            structure.properties.Dtrat_mp = random.randrange(60, 100)
            structure.UpdateAll()

        # Create a new process with a new nested object that should be optimized
        p = Process(target = optimizeAlgoritm, args = (structure, child_conn))
        processes.append(p)
        p.daemon = True
        p.start()

    # Receive the optimized objects
    for i in range(n):
        optimizedObjects.append(connections[i].recv())

# Part the main thread is running        
if __name__ == '__main__':

    processes = []
    optimizedObjects = []

    # Run the code on 6 processes
    getOptimumalObject(6)

    for i in range(len(processes)):
        processes[i].join()

    # Get the best optimized object and print the resulting value
    minimum = 1000000000000000000000000.

    for i in range(len(optimizedObjects)):

        optimizeObject = optimizedObjects[i]

        if optimizeObject.GetTotalMass() < minimum:

            bestObject = optimizeObject
            minimum = optimizeObject.GetTotalMass()

    print(bestObject.GetTotalMass())
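
Note that the connections[i].recv() calls happen before join(): the parent drains each pipe first, so a child is never left blocked trying to send its result while the main process waits in join(). As an aside, the same fan-out/collect pattern can be written more compactly with a Pool; this is only a sketch that assumes SupportStructure instances pickle cleanly, omits the randomized initial conditions from the code above, and uses optimizeStructure as a renamed placeholder worker:

from multiprocessing import Pool
from SupportCostModel.SupportStructure import SupportStructure, SupportType

# Worker: optimize one structure and return it to the parent process
def optimizeStructure(structure):
    structure.Optimize()
    return structure

if __name__ == '__main__':

    # Build the structures to optimize (randomized initial conditions omitted here)
    structures = [SupportStructure(SupportType.Monopile) for _ in range(6)]

    # map() distributes the objects over the worker processes and collects the
    # results, assuming SupportStructure instances can be pickled
    with Pool(processes = 6) as pool:
        optimizedObjects = pool.map(optimizeStructure, structures)

    bestObject = min(optimizedObjects, key = lambda s: s.GetTotalMass())
    print(bestObject.GetTotalMass())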
