Pipes are getting stuck -- no other solution on Stack Overflow working
(UPDATED) I am building a module to distribute agent-based models. The idea is to split the model over multiple processes; when an agent reaches a boundary, it is passed to the processor handling that region. I can get the processes set up and running without communication, but I cannot get the data to pass through the pipes and update the model segment on the other processor.
I have tried the solutions on Stack Overflow and built a simple version of the model. As soon as I put a model object into the pipe, the model hangs (it works with Python standard data types). The simple version just passes agents back and forth.
from pathos.multiprocessing import ProcessPool
from pathos.helpers import mp
import copy

class TestAgent:
    "Agent Class -- Schedule iterates through each agent and \
executes step function"
    def __init__(self, unique_id, model):
        self.unique_id = unique_id
        self.model = model
        self.type = "agent"

    def step(self):
        pass
        #print(' ', self.unique_id, "I have stepped")

class TestModel:
    "Model Class iterates through schedule and executes step function for \
each agent"
    def __init__(self):
        self.schedule = []
        self.pipe = None
        self.process = None
        for i in range(1000):
            a = TestAgent(i, self)
            self.schedule.append(a)

    def step(self):
        for a in self.schedule:
            a.step()

if __name__ == '__main__':
    pool = ProcessPool(nodes=2)
    # create instance of model
    test_model = TestModel()
    # create copies of model to be run on 2 processors
    test1 = copy.deepcopy(test_model)
    # clear schedule
    test1.schedule = []
    # put in only half the schedule
    for i in range(0, 500):
        test1.schedule.append(test_model.schedule[i])
    # give process tracker number
    test1.process = 1
    # repeat for other processor
    test2 = copy.deepcopy(test_model)
    test2.schedule = []
    for i in range(500, 1000):
        test2.schedule.append(test_model.schedule[i])
    test2.process = 2
    # create pipe
    end1, end2 = mp.Pipe()

    # main run function for each process
    def run(model, pipe):
        for i in range(5):
            print(model.process)  #, [a.unique_id for a in model.schedule])
            model.step()  # IT HANGS AFTER THE INITIAL STEP
            print("send")
            pipe.send(model.schedule)
            print("closed")
            sched = pipe.recv()
            print("received")
            model.schedule = sched

    pool.map(run, [test1, test2], [end1, end2])
The agents should switch processors and execute their print functions. (My next problem will be synchronizing the processors so they stay in step with each other, but one thing at a time.)
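The hang itself can be reproduced in isolation, without the model code. A minimal sketch (the 16 MB payload size is arbitrary, chosen only to guarantee the OS pipe buffer fills): a `Pipe.send()` blocks once the buffer is full and nothing is reading the other end, which is exactly the situation when both processes call `send()` with a large payload before either reaches `recv()`.

```python
import threading
from multiprocessing import Pipe

end1, end2 = Pipe()

# A payload far larger than any OS pipe buffer (size is arbitrary).
payload = b"x" * (16 * 1024 * 1024)

# Run send() in a thread so we can observe it blocking from outside.
sender = threading.Thread(target=end1.send, args=(payload,), daemon=True)
sender.start()
sender.join(timeout=2)
blocked = sender.is_alive()
print("sender blocked:", blocked)  # the send is stuck: buffer full, no reader

end2.recv()            # draining the other end unblocks the send
sender.join(timeout=10)
print("sender blocked:", sender.is_alive())
```

With two processes mirroring each other, neither ever reaches `recv()`, so both stay stuck in `send()` forever.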
I got it to work. I was exceeding the pipe buffer limit in Python (8192 bytes), so each process blocked in send() while the other process was also blocked in its own send(), and neither ever reached recv(). This is particularly likely if each agent holds a reference to the model as an attribute, since pickling one agent then serializes the entire model. A working version of the above code, which passes the agents one at a time, is below. It uses Pympler to get the size of all the agents.
from pathos.multiprocessing import ProcessPool
from pathos.helpers import mp
import copy

# do a blocking map on the chosen function
class TestAgent:
    "Agent Class -- Schedule iterates through each agent and \
executes step function"
    def __init__(self, unique_id, model):
        self.unique_id = unique_id
        self.type = "agent"  # note: no back-reference to the model is kept

    def step(self):
        pass

class TestModel:
    "Model Class iterates through schedule and executes step function for \
each agent"
    def __init__(self):
        from pympler import asizeof
        self.schedule = []
        self.pipe = None
        self.process = None
        self.size = asizeof.asizeof
        for i in range(1000):
            a = TestAgent(i, self)
            self.schedule.append(a)

    def step(self):
        for a in self.schedule:
            a.step()

if __name__ == '__main__':
    pool = ProcessPool(nodes=2)
    # create instance of model
    test_model = TestModel()
    # create copies of model to be run on 2 processors
    test1 = copy.deepcopy(test_model)
    # clear schedule
    test1.schedule = []
    # put in only half the schedule
    for i in range(0, 500):
        test1.schedule.append(test_model.schedule[i])
    # give process tracker number
    test1.process = 1
    # repeat for other processor
    test2 = copy.deepcopy(test_model)
    test2.schedule = []
    for i in range(500, 1000):
        test2.schedule.append(test_model.schedule[i])
    test2.process = 2
    # create pipe
    end1, end2 = mp.Pipe()

    # main run function for each process
    def run(model, pipe):
        for i in range(5):
            agents = []
            print(model.process, model.size(model.schedule))
            model.step()
            # pass the agents one at a time so the pipe buffer never fills
            for agent in model.schedule[:]:
                model.schedule.remove(agent)
                pipe.send(agent)
                agent = pipe.recv()
                agents.append(agent)
            print(model.process, "all agents received")
            for agent in agents:
                model.schedule.append(agent)
            print(model.process, len(model.schedule))

    pool.map(run, [test1, test2], [end1, end2])
Mike McKerns and Thomas Moreau -- thanks for the help; you put me on the right path.
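The size blow-up from the model back-reference can also be seen directly with pickle (which is what gets pushed through the pipe). A minimal sketch with illustrative class names, not the code above: one agent type keeps `self.model`, the other does not, and pickling a single agent of the first kind drags the whole object graph along.

```python
import pickle

class Model:
    def __init__(self, agent_cls, n=1000):
        self.schedule = [agent_cls(i, self) for i in range(n)]

class AgentWithRef:
    def __init__(self, uid, model):
        self.unique_id = uid
        self.model = model   # back-reference: pickling serializes everything

class AgentNoRef:
    def __init__(self, uid, model):
        self.unique_id = uid  # no back-reference kept

heavy = len(pickle.dumps(Model(AgentWithRef).schedule[0]))
light = len(pickle.dumps(Model(AgentNoRef).schedule[0]))
print(heavy, light)  # heavy is orders of magnitude larger
```

Dropping the back-reference (as the working TestAgent above does) is what keeps each per-agent message small enough that the alternating send/recv loop never fills the buffer.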