Memoization, Classes, and Multiprocessing in Python
I'm trying to do some calculations using the multiprocessing module in Python 2.7.2. My code looks like this:
```python
from multiprocessing import Pool
import sys
sys.setrecursionlimit(10000)

partitions = []

class Partitions:
    parts = {}  # My goal is to use this dict to speed
                # up calculations in every process that
                # uses it, without having to build it up
                # from nothing each time

    def __init__(self):
        pass

    def p1(self, k, n):
        if (k, n) in Partitions.parts:
            return Partitions.parts[(k, n)]
        if k > n:
            return 0
        if k == n:
            return 1
        Partitions.parts[(k, n)] = self.p1(k + 1, n) + self.p1(k, n - k)
        return Partitions.parts[(k, n)]

    def P(self, n):
        result = 0
        for k in xrange(1, n / 2 + 1):
            result += self.p1(k, n - k)
        return 1 + result

p = Partitions()

def log(results):
    if results:
        partitions.extend(results)
    return None

def partWorker(start, stop):
    ps = []
    for n in xrange(start, stop):
        ps.append(((1, n), p.P(n)))
    return ps

def main():
    pool = Pool()
    step = 150
    for i in xrange(0, 301, step):
        pool.apply_async(partWorker, (i, i + step), callback=log)
    pool.close()
    pool.join()
    return None

if __name__ == "__main__":
    main()
```
I'm new to this; I basically copied the structure of the main code on this page: python prime crunching: processing pool is slower? Can I have the processes running on each core all consult the same dictionary to help with their calculations? The way it behaves now, each process creates its own dictionary, and it eats RAM like crazy.
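This per-process copying can be demonstrated with a minimal sketch (Python 3 syntax; the `fill` worker and its squaring step are placeholders, not part of the original code). Writes a worker makes to a module-level dict stay inside that worker's process and never reach the parent or the other workers:

```python
from multiprocessing import Pool
import os

cache = {}  # module-level dict: each worker process gets its own private copy

def fill(n):
    cache[n] = n * n                 # visible only inside this worker process
    return os.getpid(), len(cache)   # report which process ran and its cache size

if __name__ == "__main__":
    pool = Pool(2)
    results = pool.map(fill, range(8))
    pool.close()
    pool.join()
    # The parent's cache is still empty: the workers' writes never came back.
    print(len(cache))
```

Each worker's reported cache size only counts the entries that worker itself inserted, which is why the memoization table in the question is rebuilt (and RAM consumed) once per process.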
I'm not sure if this is what you're looking for... but take a look at multiprocessing.Manager ( http://docs.python.org/library/multiprocessing.html#sharing-state-between-processes ). A manager lets you share a dictionary between processes.
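A minimal sketch of the Manager approach, adapting the partition code from the question (Python 3 syntax, so `range` and `//` replace `xrange` and `/`; rewriting `p1`/`P` as module-level functions that take the shared dict as a parameter is just one way to hand the proxy to the workers):

```python
from multiprocessing import Manager, Pool

def p1(parts, k, n):
    # number of partitions of n into parts of size >= k, memoized in parts
    if (k, n) in parts:
        return parts[(k, n)]
    if k > n:
        return 0
    if k == n:
        return 1
    # two workers may race to compute the same entry; that is harmless here
    parts[(k, n)] = p1(parts, k + 1, n) + p1(parts, k, n - k)
    return parts[(k, n)]

def P(parts, n):
    # total number of partitions of n
    return 1 + sum(p1(parts, k, n - k) for k in range(1, n // 2 + 1))

def partWorker(parts, start, stop):
    return [((1, n), P(parts, n)) for n in range(start, stop)]

if __name__ == "__main__":
    manager = Manager()
    parts = manager.dict()  # one dict, shared by every worker via a proxy
    pool = Pool()
    jobs = [pool.apply_async(partWorker, (parts, i, i + 5))
            for i in range(0, 15, 5)]
    results = [pair for job in jobs for pair in job.get()]
    pool.close()
    pool.join()
```

Be aware of the trade-off: every lookup or store through the manager proxy is an inter-process round trip, so the shared cache saves memory but each access is far slower than a plain in-process dict.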