
Multiprocessing Running Slower than a Single Process

I'm attempting to use multiprocessing to run many simulations across multiple processes; however, as far as I can tell, the code I have written only uses one of the processes.

Updated

I've gotten all the processes to work (I think) thanks to @PaulBecotte; however, the multiprocessing version seems to run significantly slower than its non-multiprocessing counterpart.

For instance, not including the function and class declarations/implementations and imports, I have:

def monty_hall_sim(num_trial, player_type='AlwaysSwitchPlayer'):
    if player_type == 'NeverSwitchPlayer':
        player = NeverSwitchPlayer('Never Switch Player')
    else:
        player = AlwaysSwitchPlayer('Always Switch Player')

    return (MontyHallGame().play_game(player) for trial in xrange(num_trial))

def do_work(in_queue, out_queue):
    while True:
        try:
            f, args = in_queue.get()
            ret = f(*args)
            for result in ret:
                out_queue.put(result)
        except:
            break

def main():
    logging.getLogger().setLevel(logging.ERROR)

    always_switch_input_queue = multiprocessing.Queue()
    always_switch_output_queue = multiprocessing.Queue()

    total_sims = 20
    num_processes = 5
    process_sims = total_sims/num_processes

    with Timer(timer_name='Always Switch Timer'):
        for i in xrange(num_processes):
            always_switch_input_queue.put((monty_hall_sim, (process_sims, 'AlwaysSwitchPlayer')))

        procs = [multiprocessing.Process(target=do_work, args=(always_switch_input_queue, always_switch_output_queue)) for i in range(num_processes)]

        for proc in procs:
            proc.start()

        always_switch_res = []
        while len(always_switch_res) != total_sims:
            always_switch_res.append(always_switch_output_queue.get())

        always_switch_success = float(always_switch_res.count(True))/float(len(always_switch_res))

    print '\tLength of Always Switch Result List: {alw_sw_len}'.format(alw_sw_len=len(always_switch_res))
    print '\tThe success average of switching doors was: {alw_sw_prob}'.format(alw_sw_prob=always_switch_success)

which yields:

    Time Elapsed: 1.32399988174 seconds
    Length: 20
    The success average: 0.6

However, I am attempting to use this for total_sims = 10,000,000 over num_processes = 5, and doing so has taken significantly longer than using 1 process (1 process returned in ~3 minutes). The non-multiprocessing counterpart I'm comparing it to is:

def main():
    logging.getLogger().setLevel(logging.ERROR)

    with Timer(timer_name='Always Switch Monty Hall Timer'):
        always_switch_res = [MontyHallGame().play_game(AlwaysSwitchPlayer('Monty Hall')) for x in xrange(10000000)]

        always_switch_success = float(always_switch_res.count(True))/float(len(always_switch_res))

    print '\n\tThe success average of not switching doors was: {not_switching}' \
          '\n\tThe success average of switching doors was: {switching}'.format(not_switching=never_switch_success,
                                                                               switching=always_switch_success)

EDIT- you changed some stuff, let me try and explain a bit better.

Each message you put into the input queue will cause the monty_hall_sim function to get called and send num_trial messages to the output queue.

So your original implementation was right- to get 20 output messages, send in 5 input messages.

However, your function is slightly wrong.

for trial in xrange(num_trial):
    res = MontyHallGame().play_game(player)
    yield res

This will turn the function into a generator that will provide a new value on each next() call- great! The problem is here:

while True:
    try:
        f, args = in_queue.get(timeout=1)
        ret = f(*args)
        out_queue.put(ret.next())
    except:
        break

Here, on each pass through the loop you create a NEW generator with a NEW message. The old one is thrown away. So each input message only adds a single output message to the queue before you throw it away and get another one. The correct way to write this is:

while True:
    try:
        f, args = in_queue.get(timeout=1)
        ret = f(*args)
        for result in ret:
            out_queue.put(result)
    except:
        break

Doing it this way will continue to yield output messages from the generator until it finishes (after yielding 4 messages in this case).
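The draining behavior can be seen without any multiprocessing at all. Below is a minimal single-process sketch using queue.Queue (Python 3 syntax), with a hypothetical sim generator standing in for monty_hall_sim:

```python
import queue

def sim(num_trial):
    # hypothetical stand-in for monty_hall_sim: yields one result per trial
    for _ in range(num_trial):
        yield True

in_q = queue.Queue()
out_q = queue.Queue()
in_q.put((sim, (4,)))

while True:
    try:
        f, args = in_q.get(timeout=0.1)
        ret = f(*args)
        for result in ret:      # drain the whole generator...
            out_q.put(result)   # ...putting each yielded value
    except queue.Empty:
        break

results = []
while not out_q.empty():
    results.append(out_q.get())
print(len(results))  # all 4 yielded values reached the output queue
```

Calling next() once per message, as in the broken version, would leave 3 of the 4 values stranded in a generator that gets garbage-collected.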

You can try importing "process" under certain if statements.

I was able to get my code to run significantly faster by changing monty_hall_sim's return to a list comprehension, having do_work add the lists to the output queue, and then extending the results list of main with the lists returned by the output queue. This made it run in ~13 seconds.
