Python Threads are not Improving Speed
To speed up some list-processing logic, I wrote a decorator that 1) intercepts the incoming function call, 2) takes its input list and breaks it into multiple pieces, 3) passes those pieces to the original function on separate threads, and 4) combines the outputs and returns the result.

I thought this was a pretty neat idea, until I coded it up and saw no change in speed! Even though I can see multiple cores busy in htop, the multi-threaded version is actually slower than the single-threaded version.

Does this have to do with the infamous CPython GIL?

Thanks!
from threading import Thread
import numpy as np
import time

# breaks a list into n roughly equal chunks
def split(a, n):
    k, m = len(a) / n, len(a) % n
    return (a[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in xrange(n))

THREAD_NUM = 8

def parallel_compute(fn):
    class Worker(Thread):
        def __init__(self, *args):
            Thread.__init__(self)
            self.result = None
            self.args = args

        def run(self):
            self.result = fn(*self.args)

    def new_compute(*args, **kwargs):
        threads = [Worker(args[0], args[1], args[2], x) for x in split(args[3], THREAD_NUM)]
        for x in threads: x.start()
        for x in threads: x.join()
        final_res = []
        for x in threads: final_res.extend(x.result)
        return final_res
    return new_compute

# some function that does a lot of computation
def f(x): return np.abs(np.tan(np.cos(np.sqrt(x**2))))

class Foo:
    @parallel_compute
    def compute(self, bla, blah, input_list):
        return map(f, input_list)

inp = [i for i in range(40*1000*100)]
#inp = [1,2,3,4,5,6,7]

if __name__ == "__main__":
    o = Foo()
    start = time.time()
    res = o.compute(None, None, inp)
    end = time.time()
    print 'parallel', end - start
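A minimal, self-contained way to see the same symptom (Python 3 syntax here, with hypothetical helper names `busy` and `run_in_threads`, not the original code): CPU-bound pure-Python work does not speed up when split across threads, because only one thread can hold the GIL and execute bytecode at a time.

```python
# Sketch: the same total amount of pure-Python CPU work, run once on
# one thread and once split across four threads. With CPython's GIL
# the threaded wall time is typically no better, and often worse.
import time
from threading import Thread

def busy(n):
    # pure-Python loop: holds the GIL for its entire run
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_in_threads(n_threads, n):
    results = [None] * n_threads
    def worker(idx):
        results[idx] = busy(n)
    threads = [Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

if __name__ == "__main__":
    N = 2_000_000

    start = time.time()
    busy(4 * N)                 # all the work on one thread
    t_single = time.time() - start

    start = time.time()
    run_in_threads(4, N)        # same work split over 4 threads
    t_threads = time.time() - start

    print('single  %.2fs' % t_single)
    print('threads %.2fs' % t_threads)
```

With NumPy in the picture it can even be slower than this sketch suggests, since each `f(x)` call on a scalar pays NumPy's per-call overhead on top of the GIL contention.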
The single-threaded version:
import time, fast_one, numpy as np

class SlowFoo:
    def compute(self, bla, blah, input_list):
        return map(fast_one.f, input_list)

if __name__ == "__main__":
    o = SlowFoo()
    start = time.time()
    res = np.array(o.compute(None, None, fast_one.inp))
    end = time.time()
    print 'single', end - start
And here is the multiprocessing version, which gives "PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed":
import pathos.multiprocessing as mp
import numpy as np, dill
import time

def split(a, n):
    k, m = len(a) / n, len(a) % n
    return (a[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in xrange(n))

def f(x): return np.abs(np.tan(np.cos(np.sqrt(x**2))))

def compute(input_list):
    return map(f, input_list)

D = 2; pool = mp.Pool(D)

def parallel_compute(fn):
    def new_compute(*args, **kwargs):
        inp = []
        for x in split(args[0], D): inp.append(x)
        outputs_async = pool.map_async(fn, inp)
        outputs = outputs_async.get()
        outputs = [y for x in outputs for y in x]
        return outputs
    return new_compute

compute = parallel_compute(compute)
inp = [i for i in range(40*1000)]

if __name__ == "__main__":
    start = time.time()
    res = compute(inp)
    end = time.time()
    print 'parallel', end - start
    print len(res)
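For reference, the same idea can be expressed with the stdlib multiprocessing module (a sketch in Python 3 syntax, with my own helper names, not the original code). The PicklingError above comes from shipping a nested/decorated function to the worker processes: standard pickle can only serialize a function by its module-level name, so keeping the worker function at top level and undecorated avoids the error.

```python
# Sketch: chunk the input, farm the chunks out to a process pool,
# and flatten the per-chunk results. Everything the workers receive
# (the `compute` function, the chunks) is picklable because `compute`
# lives at module top level.
import multiprocessing as mp

def f(x):
    return x * x  # stand-in for the expensive per-element function

def compute(chunk):
    # top-level, undecorated: picklable by name for the worker processes
    return [f(x) for x in chunk]

def split(a, n):
    # same chunking scheme as above, written with divmod
    k, m = divmod(len(a), n)
    return [a[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(n)]

def parallel_compute(data, workers=4):
    with mp.Pool(workers) as pool:
        parts = pool.map(compute, split(data, workers))
    return [y for part in parts for y in part]

if __name__ == "__main__":
    res = parallel_compute(list(range(1000)))
    print(len(res))
```

Unlike threads, these worker processes each have their own interpreter and GIL, so CPU-bound chunks genuinely run in parallel (at the cost of pickling the chunks across process boundaries).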
Yes, the GIL is a problem here whenever your threads are doing CPU-bound work implemented in Python (rather than via C extensions that can release the GIL before and after marshalling/demarshalling data to and from Python structures).
I would suggest using a multiprocessing model, a Python implementation that doesn't have a GIL (IronPython, Jython, etc.), or a different language entirely (if you're doing performance-sensitive work, there is no shortage of languages nearly as fluid as Python but with considerably better runtime performance).

Alternatively, you can redesign the code and launch all the parallel work in subprocesses: you need worker threads that start subprocesses for the computation, and those subprocesses can run truly in parallel.
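In this particular case there is also a GIL-free route that needs no threads or processes at all: the per-element function is pure NumPy math, so it can be applied to a single NumPy array instead of being `map`-ped over a Python list. The elementwise loop then runs inside NumPy's C code with one Python-level call (a sketch in Python 3 syntax, with hypothetical names `f_scalar`/`f_vectorized`):

```python
import numpy as np

def f_scalar(x):
    # per-element version, as in the question
    return np.abs(np.tan(np.cos(np.sqrt(x ** 2))))

def f_vectorized(arr):
    # identical expression applied to a whole array: one call,
    # with the elementwise loop running in NumPy's C ufuncs
    return np.abs(np.tan(np.cos(np.sqrt(arr ** 2))))

inp = np.arange(10000, dtype=np.float64)
out = f_vectorized(inp)

# same math as mapping f_scalar element by element,
# but far fewer Python-level function calls
assert np.allclose(out[:5], [f_scalar(x) for x in inp[:5]])
```

For large inputs this usually beats both the threaded and the multiprocessing versions, since it avoids per-element Python call overhead and inter-process pickling entirely.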