
Quick evaluation of many functions at the same point in Python

Problem: I need a very fast way in Python 3 to evaluate many (thousands of) functions at the same argument. In a sense, I need the opposite of NumPy's broadcasting, which allows one function to be evaluated quickly at multiple points.

My solution: At the moment I store my functions in a list and iterate over it with a classic for loop, evaluating each function individually. This, however, is much too slow.

Examples, ideas and links to packages are very welcome.

Edit: People have asked what the functions look like: 1. They are computational in nature; no I/O. 2. They only involve the usual algebraic operations (+, -, *, / and **) plus an indicator function, so no trigonometric or other special functions.
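Since the functions are purely algebraic, one way to avoid the Python-level loop entirely (a sketch under that assumption, not something from the original post) is to encode the whole family of functions as data, e.g. as one coefficient matrix if they happen to be polynomials, and evaluate them all with a single NumPy matrix-vector product:

```python
import numpy as np

# Hypothetical setup: suppose each function is a cubic polynomial.
# Store the coefficients of all 1000 functions in one (1000, 4) array,
# highest degree first, as np.polyval expects.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((1000, 4))

x = 2.5  # the common argument

# Powers of x: [x**3, x**2, x**1, x**0]
powers = x ** np.arange(3, -1, -1)

# One matrix-vector product evaluates all 1000 functions at once.
values = coeffs @ powers
```

This replaces thousands of Python function calls with one vectorized operation, which is typically orders of magnitude faster; the catch is that every function must fit the shared parametric form.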

If your functions are I/O bound (meaning they spend most of their time waiting for some I/O operation to complete), then using multiple threads may be a fair solution.
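For the I/O-bound case, a minimal sketch using the standard library's `concurrent.futures.ThreadPoolExecutor` (the functions here are trivial placeholders standing in for I/O work):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for I/O-bound functions (e.g. network requests).
def f1(x): return x + 1
def f2(x): return x * 2
def f3(x): return x ** 2

funcs = [f1, f2, f3]
arg = 10

with ThreadPoolExecutor(max_workers=4) as executor:
    # Submit every function with the same argument; each runs in a thread.
    futures = [executor.submit(f, arg) for f in funcs]
    results = [fut.result() for fut in futures]

print(results)  # [11, 20, 100]
```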

If your functions are CPU bound (meaning they spend most of their time doing actual computational work), then multiple threads will not help you, unless you are using a Python implementation that does not have a global interpreter lock.

What you can do here is use multiple Python processes. The easiest solution is the multiprocessing module. Here is an example:

#!/usr/bin/env python3
from multiprocessing import Pool
from functools import reduce

def a(x):
    return reduce(lambda memo, i: memo + i, x)

def b(x):
    return reduce(lambda memo, i: memo - i, x)

def c(x):
    return reduce(lambda memo, i: memo + i**2, x)

my_funcs = [a, b, c]

if __name__ == "__main__":
    # create a process pool of 4 worker processes
    with Pool(4) as pool:
        async_results = []
        for f in my_funcs:
            # the second parameter to apply_async is a tuple of
            # arguments to pass to the function
            async_results.append(pool.apply_async(f, (range(1, 1000000),)))
        results = [async_result.get() for async_result in async_results]
    print(results)

This method allows you to utilize all your CPU power in parallel: just pick a pool size that matches the number of CPUs in your environment. The limitation of this approach is that all your functions must be picklable.
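The picklability limitation rules out, for example, lambdas: multiprocessing sends functions to worker processes by pickling them, and pickle serializes a function by its qualified name, which a lambda does not have. A quick way to check (a small illustrative sketch):

```python
import pickle

square = lambda x: x ** 2  # anonymous, so pickle cannot look it up by name

try:
    pickle.dumps(square)
    picklable = True
except Exception:
    picklable = False

print("lambda picklable?", picklable)  # lambda picklable? False
```

The same applies to functions defined inside another function; stick to module-level `def` functions when using multiprocessing.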

Evaluate them using threading by running them in multiple threads, as long as they do not have resource conflicts.

http://www.tutorialspoint.com/python/python_multithreading.htm
