
how to keep track of asynchronous results returned from a multiprocessing pool

I am trying to add multiprocessing to some code which features functions that I cannot modify. I want to submit these functions as jobs to a multiprocessing pool asynchronously. I am doing something much like the code shown here. However, I am not sure how to keep track of results. How can I know to which applied function a returned result corresponds?

The important points to emphasise are that I cannot modify the existing functions (other things rely on them remaining as they are) and that results can be returned in an order different to the order in which the function jobs are applied to the pool.

Thanks for any thoughts on this!

EDIT: Some attempt code is below:

import multiprocessing
from multiprocessing import Pool
import os
import signal
import time
import inspect

def multiply(multiplicand1=0, multiplicand2=0):
    return multiplicand1*multiplicand2

def workFunctionTest(**kwargs):
    time.sleep(3)
    return kwargs

def printHR(object):
    """
    This function prints a specified object in a human readable way.
    """
    # dictionary
    if isinstance(object, dict):
        for key, value in sorted(object.items()):
            print u'{a1}: {a2}'.format(a1=key, a2=value)
    # list or tuple
    elif isinstance(object, list) or isinstance(object, tuple):
        for element in object:
            print element
    # other
    else:
        print object

class Job(object):
    def __init__(
        self,
        workFunction=workFunctionTest,
        workFunctionKeywordArguments={'testString': "hello world"},
        workFunctionTimeout=1,
        naturalLanguageString=None,
        classInstance=None,
        resultGetter=None,
        result=None
        ):
        self.workFunction=workFunction
        self.workFunctionKeywordArguments=workFunctionKeywordArguments
        self.workFunctionTimeout=workFunctionTimeout
        self.naturalLanguageString=naturalLanguageString
        self.classInstance=self.__class__.__name__
        self.resultGetter=resultGetter
        self.result=result
    def description(self):
        descriptionString=""
        for key, value in sorted(vars(self).items()):
            descriptionString+=str("{a1}:{a2} ".format(a1=key, a2=value))
        return descriptionString
    def printout(self):
        """
        This method prints a dictionary of all data attributes.
        """
        printHR(vars(self))

class JobGroup(object):
    """
    This class acts as a container for jobs. The data attribute jobs is a list of job objects.
    """
    def __init__(
        self,
        jobs=None,
        naturalLanguageString="null",
        classInstance=None,
        results=None
        ):
        self.jobs=jobs
        self.naturalLanguageString=naturalLanguageString
        self.classInstance=self.__class__.__name__
        self.results=results
    def description(self):
        descriptionString=""
        for key, value in sorted(vars(self).items()):
            descriptionString+=str("{a1}:{a2} ".format(a1=key, a2=value))
        return descriptionString
    def printout(self):
        """
        This method prints a dictionary of all data attributes.
        """
        printHR(vars(self))

def initialise_processes():
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def execute(
        jobObject=None,
        numberOfProcesses=multiprocessing.cpu_count()
        ):
    # Determine the current function name.
    functionName=str(inspect.stack()[0][3])
    def collateResults(result):
        """
        This is a process pool callback function which collates a list of results returned.
        (Currently unused; apply_async is called below without a callback.)
        """
        # Determine the caller function name.
        functionName=str(inspect.stack()[1][3])
        print("{a1}: result: {a2}".format(a1=functionName, a2=result))
        results.append(result)
    def getResults(job):
        # Determine the current function name.
        functionName=str(inspect.stack()[0][3])
        while True:
            try:
                result=job.resultGetter.get(job.workFunctionTimeout)
                break
            except multiprocessing.TimeoutError:
                print("{a1}: subprocess timeout for job".format(a1=functionName, a2=job.description()))
        #job.result=result
        return result
    # Create a process pool.
    pool1 = multiprocessing.Pool(numberOfProcesses, initialise_processes)
    print("{a1}: pool {a2} of {a3} processes created".format(a1=functionName, a2=str(pool1), a3=str(numberOfProcesses)))
    # Unpack the input job object and submit it to the process pool.
    print("{a1}: unpacking and applying job object {a2} to pool...".format(a1=functionName, a2=jobObject))
    if isinstance(jobObject, Job):
        # If the input job object is a job, apply it to the pool with its associated timeout specification.
        # Return a list of results.
        job=jobObject
        print("{a1}: job submitted to pool: {a2}".format(a1=functionName, a2=job.description()))
        # Apply the job to the pool, saving the object pool.ApplyResult to the job object.
        job.resultGetter=pool1.apply_async(
                func=job.workFunction,
                kwds=job.workFunctionKeywordArguments
        )
        # Get results.
        # Acquire the job result with respect to the specified job timeout and apply this result to the job data attribute result.
        print("{a1}: getting results for job...".format(a1=functionName))
        job.result=getResults(job)
        print("{a1}: job completed: {a2}".format(a1=functionName, a2=job.description()))
        print("{a1}: job result: {a2}".format(a1=functionName, a2=job.result))
        # Terminate the pool, then return the job result from execute.
        pool1.terminate()
        pool1.join()
        return job.result
    elif isinstance(jobObject, JobGroup):
        # If the input job object is a job group, cycle through each job and apply it to the pool with its associated timeout specification.
        for job in jobObject.jobs:
            print("{a1}: job submitted to pool: {a2}".format(a1=functionName, a2=job.description()))
            # Apply the job to the pool, saving the object pool.ApplyResult to the job object.
            job.resultGetter=pool1.apply_async(
                    func=job.workFunction,
                    kwds=job.workFunctionKeywordArguments
            )
        # Get results.
        # Cycle through each job and append the result for the job to a list of results.
        results=[]
        for job in jobObject.jobs:
            # Acquire the job result with respect to the specified job timeout and apply this result to the job data attribute result.
            print("{a1}: getting results for job...".format(a1=functionName))
            job.result=getResults(job)
            print("{a1}: job completed: {a2}".format(a1=functionName, a2=job.description()))
            #print("{a1}: job result: {a2}".format(a1=functionName, a2=job.result))
            # Collate the results.
            results.append(job.result)
        # Apply the list of results to the job group data attribute results.
        jobObject.results=results
        print("{a1}: job group results: {a2}".format(a1=functionName, a2=jobObject.results))
        # Terminate the pool, then return the job group result list from execute.
        pool1.terminate()
        pool1.join()
        return jobObject.results
    else:
        # invalid input object
        print("{a1}: invalid job object {a2}".format(a1=functionName, a2=jobObject))

def main():
    print('-'*80)
    print("MULTIPROCESSING SYSTEM DEMONSTRATION\n")

    # Create a job.
    print("# creating a job...\n")
    job1=Job(
            workFunction=workFunctionTest,
            workFunctionKeywordArguments={'testString': "hello world"},
            workFunctionTimeout=4
    )
    print("- printout of new job object:")
    job1.printout()
    print("\n- printout of new job object in logging format:")
    print job1.description()

    # Create another job.
    print("\n# creating another job...\n")
    job2=Job(
            workFunction=multiply,
            workFunctionKeywordArguments={'multiplicand1': 2, 'multiplicand2': 3},
            workFunctionTimeout=6
    )
    print("- printout of new job object:")
    job2.printout()
    print("\n- printout of new job object in logging format:")
    print job2.description()

    # Create a JobGroup object.
    print("\n# creating a job group (of jobs 1 and 2)...\n")
    jobGroup1=JobGroup(
            jobs=[job1, job2],
    )
    print("- printout of new job group object:")
    jobGroup1.printout()
    print("\n- printout of new job group object in logging format:")
    print jobGroup1.description()

    # Submit the job group.
    print("\nready to submit job group")
    response=raw_input("\nPress Enter to continue...\n")
    execute(jobGroup1)

    response=raw_input("\nNote the results printed above. Press Enter to continue the demonstration.\n")

    # Demonstrate timeout.
    print("\n # creating a new job in order to demonstrate timeout functionality...\n")
    job3=Job(
            workFunction=workFunctionTest,
            workFunctionKeywordArguments={'testString': "hello world"},
            workFunctionTimeout=1
    )
    print("- printout of new job object:")
    job3.printout()
    print("\n- printout of new job object in logging format:")
    print job3.description()
    print("\nNote the timeout specification of only 1 second.")

    # Submit the job.
    print("\nready to submit job")
    response=raw_input("\nPress Enter to continue...\n")
    execute(job3)

    response=raw_input("\nNote the recognition of timeouts printed above. This concludes the demonstration.")
    print('-'*80)

if __name__ == '__main__':
    main()

EDIT: This question has been placed [on hold] for the following stated reason:

"Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist"

This question is not requesting code; it is requesting thoughts, general guidance. A minimal understanding of the problem under consideration is demonstrated (note the correct use of the terms "multiprocessing", "pool" and "asynchronously" and note the reference to prior code). Regarding attempted solutions, I acknowledge that attempted efforts at solutions would have been beneficial. I have added such code now. I hope that I have addressed the concerns raised that led to the [on hold] status.

Without seeing actual code, I can only answer in generalities. But there are two general solutions.

First, instead of using a callback and ignoring the AsyncResult objects, store them in some kind of collection. Then you can just use that collection. For example, if you want to be able to look up the results for a function using that function as a key, just create a dict keyed by the functions:

import multiprocessing as mp

def in_parallel(funcs):
    results = {}
    pool = mp.Pool()
    for func in funcs:
        # Keep each AsyncResult, keyed by the function that produced it.
        results[func] = pool.apply_async(func)
    pool.close()
    pool.join()
    # All jobs have finished; collect each result under its function.
    return {func: result.get() for func, result in results.items()}
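
For instance, here is a hypothetical usage sketch; the worker functions must be defined at module level so the pool can pickle them:

def job_a():
    return "result of a"

def job_b():
    return "result of b"

if __name__ == '__main__':
    results = in_parallel([job_a, job_b])
    # Look up each result by the function that produced it,
    # regardless of the order in which the jobs finished.
    print(results[job_a])
    print(results[job_b])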

Alternatively, you can change the callback function to store the results in your collection by key. For example:

def in_parallel(funcs):
    results = {}
    pool = mp.Pool()
    for func in funcs:
        # Bind the current func as a default argument so that each
        # callback stores its result under the correct key.
        def callback(result, func=func):
            results[func] = result
        pool.apply_async(func, callback=callback)
    pool.close()
    pool.join()
    return results
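
Note the func=func default argument in the callback: because of Python's late binding, a callback that simply closed over the loop variable could fire after func had moved on and store its result under the wrong key; binding the current value as a default argument avoids that.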

I'm using the function itself as a key. But if you want to use the index instead, that's just as easy; any value you have can be used as a key.
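
For example, here is a minimal sketch of the same idea keyed by submission index instead (a hypothetical in_parallel_indexed variant of the function above):

def in_parallel_indexed(funcs):
    results = {}
    pool = mp.Pool()
    for i, func in enumerate(funcs):
        # Key each AsyncResult by the order of submission.
        results[i] = pool.apply_async(func)
    pool.close()
    pool.join()
    return {i: result.get() for i, result in results.items()}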


Meanwhile, the example you linked is really just calling the same function on a bunch of arguments, waiting for all of them to finish, and leaving the results in some iterable in arbitrary order. That's exactly what imap_unordered does, but a lot more simply. You could replace the whole complicated thing from the linked code with this:

pool = mp.Pool()
# foo_pool is the work function from the example linked in the question.
results = list(pool.imap_unordered(foo_pool, range(10)))
pool.close()
pool.join()

And then, if you want the results in their original order instead of in arbitrary order, you can just switch to imap or map instead. So:

pool = mp.Pool()
results = pool.map(foo_pool, range(10))
pool.close()
pool.join()

If you need something similar but too complicated to fit into the map paradigm, concurrent.futures will probably make your life easier than multiprocessing. If you're on Python 2.x, you will have to install the backport. But then you can do things that are much harder to do with AsyncResults or callbacks (or map), like composing a whole bunch of futures into one big future. See the examples in the linked docs.
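
For instance, a rough sketch of the same bookkeeping with concurrent.futures, mapping each future back to the input that produced it (the square worker here is a stand-in):

import concurrent.futures

def square(x):
    return x * x

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor() as executor:
        # Map each Future back to the argument it was submitted with.
        futures = {executor.submit(square, x): x for x in range(10)}
        for future in concurrent.futures.as_completed(futures):
            print(futures[future], future.result())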


One last note:

The important points to emphasise are that I cannot modify the existing functions…

If you can't modify a function, you can always wrap it. For example, let's say I have a function that returns the square of a number, but I'm trying to build a dict mapping numbers to their squares asynchronously, so I need to have the original number as part of the result as well. That's easy:

def number_and_square(x):
    # Wrap the unmodifiable square function so the input is returned with the result.
    return x, square(x)

And now, I can just apply_async(number_and_square) instead of just square, and get the results I want.
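
As a sketch, building the number-to-square mapping asynchronously then looks something like this (again assuming the hypothetical square function):

pool = mp.Pool()
async_results = [pool.apply_async(number_and_square, (x,)) for x in range(10)]
pool.close()
pool.join()
# Each result is an (x, square(x)) pair, so dict() rebuilds the mapping
# regardless of completion order.
squares = dict(r.get() for r in async_results)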

I didn't do that in the examples above because in the first case I stored the key into the collection from the calling side, and in the second I bound it into the callback function. But binding the key into a wrapper around the function is just as easy as either of those, and can be appropriate when neither of them is.
