Python multiprocessing using nested objects
I am writing an optimization algorithm that runs from several different initial conditions to increase the chance of finding the global optimum. I am trying to make the code run faster by using the multiprocessing library and running the optimizations in separate processes.
This is basically how my code works right now:
from multiprocessing import Process, Queue
from SupportCostModel.SupportStructure import SupportStructure, SupportType

# Method the processes will execute
def optimizeAlgoritm(optimizeObject, qOut):
    optimizeObject.Optimize()
    qOut.put(optimizeObject)

# Method the main thread will execute
def getOptimumalObject(n):
    for i in range(n):
        # Create a new process with a new nested object that should be optimized
        p = Process(target=optimizeAlgoritm, args=(SupportStructure(SupportType.Monopile), qOut))
        processes.append(p)
        p.daemon = True
        p.start()

# Part the main thread is running
if __name__ == '__main__':
    qOut = Queue()
    processes = []
    # Run the code on 6 processes
    getOptimumalObject(6)
    for i in range(len(processes)):
        processes[i].join()
    # Get the best optimized object and print the resulting value
    minimum = 1000000000000000000000000.
    while not qOut.empty():
        optimizeObject = qOut.get()
        if optimizeObject.GetTotalMass() < minimum:
            bestObject = optimizeObject
            minimum = optimizeObject.GetTotalMass()
    print(bestObject.GetTotalMass())
This code runs fine as long as I use at most 4 processes. If I run more than 4, say 6 as in the example, two of the processes get stuck at the end of their work and the program never stops, because the main thread keeps waiting in processes[i].join(). I think those two processes hang inside qOut.put(). When I remove qOut.put(), the code exits with the error that bestObject does not exist, as expected. Strangely, though, if I print, for example, the object's minimum right after qOut.put(), it does print, but the process then stays alive at 0% CPU. This keeps the main program alive as well.
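The hang described above matches a documented multiprocessing pitfall: a child that has put data on a Queue does not exit until its feeder thread has flushed that data into the underlying OS pipe, so joining the children before draining the queue can deadlock once the pickled payloads exceed the pipe buffer (which is roughly why it breaks past 4 processes). A minimal sketch of the drain-before-join pattern, using a stand-in worker since SupportStructure is not shown here:

```python
from multiprocessing import Process, Queue

def _worker(x, q):
    # Stand-in for optimizeObject.Optimize(); puts a sizeable
    # payload on the queue, like a pickled optimized object would be.
    q.put([x] * 50000)

def run_workers(n):
    q = Queue()
    procs = [Process(target=_worker, args=(i, q)) for i in range(n)]
    for p in procs:
        p.start()
    # Drain the queue BEFORE joining: a child cannot exit until its
    # queue buffer is flushed, so calling join() first can deadlock.
    results = [q.get() for _ in procs]
    for p in procs:
        p.join()
    return results

if __name__ == '__main__':
    print(len(run_workers(6)))
```

Pulling exactly one result per process with q.get() also avoids the race in `while not qOut.empty()`, which can return True before a slow child has put its result.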
I am very new to multiprocessing, and I gather that OOP and multiprocessing do not always play well together. Am I using the wrong approach here? It is a bit frustrating because it almost works, just not with more than 4 processes.
Thanks in advance!
I got it working by using a Pipe to manage my objects!
Here is the code I used:
from multiprocessing import Process, Pipe
from SupportCostModel.SupportStructure import SupportStructure, SupportType
import random

# Method the processes will execute
def optimizeAlgoritm(optimizeObject, conn):
    optimizeObject.Optimize()
    # Send the optimized object
    conn.send(optimizeObject)

# Method the main thread will execute
def getOptimumalObject(n):
    connections = []
    for i in range(n):
        # Create a pipe for each of the processes that is started
        parent_conn, child_conn = Pipe()
        # Save the parent connections
        connections.append(parent_conn)
        # Create objects that need to be optimized using different initial conditions
        if i == 0:
            structure = SupportStructure(SupportType.Monopile)
        else:
            structure = SupportStructure(SupportType.Monopile)
            # random.uniform draws a float from the range; randrange with
            # float arguments is deprecated and rejected in newer Python
            structure.properties.D_mp = random.uniform(4., 10.)
            structure.properties.Dtrat_tower = random.uniform(90., 120.)
            structure.properties.Dtrat_mud = random.uniform(60., 100.)
            structure.properties.Dtrat_mp = random.uniform(60., 100.)
            structure.UpdateAll()
        # Create a new process with a new nested object that should be optimized
        p = Process(target=optimizeAlgoritm, args=(structure, child_conn))
        processes.append(p)
        p.daemon = True
        p.start()
    # Receive the optimized objects BEFORE joining, so each child can
    # flush its result and exit
    for i in range(n):
        optimizedObjects.append(connections[i].recv())

# Part the main thread is running
if __name__ == '__main__':
    processes = []
    optimizedObjects = []
    # Run the code on 6 processes
    getOptimumalObject(6)
    for i in range(len(processes)):
        processes[i].join()
    # Get the best optimized object and print the resulting value
    minimum = 1000000000000000000000000.
    for i in range(len(optimizedObjects)):
        optimizeObject = optimizedObjects[i]
        if optimizeObject.GetTotalMass() < minimum:
            bestObject = optimizeObject
            minimum = optimizeObject.GetTotalMass()
    print(bestObject.GetTotalMass())
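Since every worker runs the same function and the results are picklable, multiprocessing.Pool could replace the manual Process/Pipe bookkeeping entirely: it starts the workers, collects the return values, and joins for you. A sketch of that alternative, where `fake_optimize` is a hypothetical stand-in for building a SupportStructure from a seeded initial condition and calling Optimize():

```python
from multiprocessing import Pool
import random

def fake_optimize(seed):
    # Hypothetical stand-in for SupportStructure(...).Optimize();
    # returns a pretend "total mass" for this initial condition.
    rng = random.Random(seed)
    return rng.uniform(100.0, 200.0)

def best_mass(n):
    # Pool handles process startup, result transfer and joining;
    # map() blocks until every worker has returned its result.
    with Pool(processes=n) as pool:
        masses = pool.map(fake_optimize, range(n))
    return min(masses)

if __name__ == '__main__':
    print(best_mass(6))
```

The worker's return value travels back through the pool's own result queue, so none of the drain-before-join ordering concerns from the question apply.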