Python, using threading with multiprocessing

Can someone explain why threading doesn't work inside a multiprocessing.Process?

I've attached an example to explain my problem.

I have a process that runs every second and writes to a file. When I run it from the shell, it works as expected.

stat_collect.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from threading import Timer
from os import path
from datetime import datetime

STAT_DATETIME_FMT = '%Y-%m-%d %H:%M:%S'


def collect_statistics():
    my_file = 'test.file'
    if not path.exists(my_file):
        with open(my_file, 'w') as fp:
            fp.write(datetime.now().strftime(STAT_DATETIME_FMT) + '\n')
    else:
        with open(my_file, 'a') as fp:
            fp.write(datetime.now().strftime(STAT_DATETIME_FMT) + '\n')

    # re-schedule collect_statistics to run again in one second
    Timer(1, collect_statistics).start()


if __name__ == '__main__':
    collect_statistics()

When I try to run it from another script (so it works in the background):

#!/usr/bin/env python

from multiprocessing import Process
from stat_collect import collect_statistics  # logger sc

if __name__ == '__main__':
    # This doesn't work
    p = Process(target=collect_statistics)
    p.start()

    while True:
        pass

The collect_statistics function runs only once. But if I use Thread(target=collect_statistics).start() instead, it works just as it does when I run it from the shell. Why does this happen?
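
For reference, the threading variant I mean is roughly this (the same calling script, with Thread swapped in for Process):

#!/usr/bin/env python

from threading import Thread
from stat_collect import collect_statistics

if __name__ == '__main__':
    # This works: collect_statistics keeps re-scheduling its Timer
    # inside this same process
    Thread(target=collect_statistics).start()

    while True:
        pass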

Here is what is going on:

  1. You start your process.
  2. collect_statistics runs.
  3. The Timer is started.
  4. The function called in the process (collect_statistics) has now finished, so the process exits, killing the timer with it (a small demo of this is sketched below).
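
To see this in isolation, here is a minimal sketch (not part of the original scripts; fire_once and announce are made-up names) that schedules a Timer from inside a Process and then returns immediately; the child exits before the timer ever fires:

#!/usr/bin/env python

from multiprocessing import Process
from threading import Timer
import time


def announce():
    print("timer fired")


def fire_once():
    # schedule the timer, then return immediately;
    # nothing keeps the child process alive after that
    Timer(1, announce).start()


if __name__ == '__main__':
    p = Process(target=fire_once)
    p.start()
    time.sleep(2)
    # expected: the child is already dead and "timer fired" was never printed
    print("child alive? %s" % p.is_alive())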

Here is how to fix it:

stat_collect.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from threading import Timer
from os import path
from datetime import datetime
import time

STAT_DATETIME_FMT = '%Y-%m-%d %H:%M:%S'


def collect_statistics():
    while True:
        my_file = 'test.file'
        if not path.exists(my_file):
            with open(my_file, 'w') as fp:
                fp.write(datetime.now().strftime(STAT_DATETIME_FMT) + '\n')
        else:
            with open(my_file, 'a') as fp:
                fp.write(datetime.now().strftime(STAT_DATETIME_FMT) + '\n')

        time.sleep(1)


if __name__ == '__main__':
    collect_statistics()

And for the calling script:

#!/usr/bin/env python

from multiprocessing import Process
from stat_collect import collect_statistics  # logger sc

if __name__ == '__main__':
    # This now works: the collector loops forever inside the child process
    p = Process(target=collect_statistics)
    p.start()
    p.join()  # wait until the process is over, i.e. forever

p.join() is just there to replace your infinite while loop, which burns a lot of resources for nothing.
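
If the parent script actually has other work to do instead of blocking forever, one option (a sketch, not part of the original answer; the sleep is just a placeholder for real work) is to run the collector as a daemon process and stop it explicitly:

#!/usr/bin/env python

from multiprocessing import Process
from stat_collect import collect_statistics
import time

if __name__ == '__main__':
    p = Process(target=collect_statistics)
    p.daemon = True  # the child is killed automatically if the parent exits
    p.start()

    time.sleep(10)   # placeholder for the parent's real work

    p.terminate()    # stop the collector explicitly...
    p.join()         # ...and reap the child process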
