
Preventing multiprocessing “runaway bugs”

The OS will not like it if you use multiprocessing and accidentally end up creating processes without limit.

Is there any simple solution that prevents this from happening (say, by limiting the total number of processes, either in Python or in the OS)?

I use Windows, and it behaves really badly (requires a hard reboot) when I make a mistake like that. So I'd love it if there's some code that I can wrap around / add to my application to prevent this from happening.

What you can do is create a short 'trip-wire' module and import it alongside multiprocessing. The trip-wire module will raise an exception if it detects a multiprocessing infinite loop.

Mine looks like this:

# mp_guard.py
"""Tracks invocation by setting an environment variable. Child processes
inherit the parent's environment, so if that variable already exists when
this module is imported, a process-spawning loop is in progress."""

import os

class Brick(Exception):
    def __init__(self):
        Exception.__init__(self, "Your machine just narrowly avoided becoming"
                                 " a brick!")

# If the guard variable was inherited from a parent process, this import
# is happening inside a spawned child -- abort before it can spawn more.
if 'MP_GUARD' in os.environ:
    raise Brick

os.environ['MP_GUARD'] = 'active'

And in the main .py file:

import mp_guard

On Linux, you can use the setrlimit(2) syscall (with RLIMIT_NPROC) to limit the number of processes (e.g. to avoid fork bombs). This syscall is exposed through the bash ulimit (or zsh limit) builtin. Python has bindings for this syscall in the standard resource module.
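A minimal sketch of doing this from Python via the standard `resource` module (Linux/Unix only); the cap of 2048 is an arbitrary example value, and note that on Linux threads also count toward RLIMIT_NPROC:

```python
import resource

# Query the current per-user process limits (soft, hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)

# Cap the *soft* limit. Once it is hit, fork() fails with EAGAIN, so a
# runaway spawning loop raises OSError instead of freezing the machine.
# The soft limit can be raised again later, up to `hard`.
new_soft = 2048 if hard == resource.RLIM_INFINITY else min(2048, hard)
resource.setrlimit(resource.RLIMIT_NPROC, (new_soft, hard))
```

A process that exceeds the limit gets a catchable exception from `multiprocessing` / `os.fork`, which is exactly the "trip-wire" behaviour the question asks for.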

I have no idea if something similar exists under Windows.

On Windows you can create "a job". I'm not an expert in Python, so I don't know whether there are any bindings for creating Windows jobs. The Windows API function is CreateJobObject. A job object is (to some extent) the equivalent of a Unix "process group". You can apply certain limits both to the job object as a whole and to each process separately (e.g. the maximum number of active processes in the job). You could create a job object, assign a process-count limit to it, and then assign your own process to the job object. What you may be looking for is CreateJobObject, SetInformationJobObject + JOBOBJECT_BASIC_LIMIT_INFORMATION + JOB_OBJECT_LIMIT_ACTIVE_PROCESS. Again: this is the Windows API; I'm not sure whether there are Python 'bindings' for these functions.
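For what it's worth, the pywin32 package does expose these functions. Below is a hedged sketch of the approach described above; it assumes pywin32 is installed (`pip install pywin32`) and runs only on Windows, so the body is guarded by a platform check:

```python
import sys

def limit_active_processes(max_procs):
    """Place the current process in a Windows Job Object capped at
    `max_procs` simultaneously active processes. Requires pywin32
    (an assumption -- not part of the standard library)."""
    import win32api
    import win32job

    # Create an anonymous job object.
    job = win32job.CreateJobObject(None, "")

    # Fetch the current basic limit info, set the active-process cap,
    # and write it back (SetInformationJobObject + the limit flag).
    info = win32job.QueryInformationJobObject(
        job, win32job.JobObjectBasicLimitInformation)
    info["LimitFlags"] = win32job.JOB_OBJECT_LIMIT_ACTIVE_PROCESS
    info["ActiveProcessLimit"] = max_procs
    win32job.SetInformationJobObject(
        job, win32job.JobObjectBasicLimitInformation, info)

    # Put ourselves in the job; children we spawn join it too, and any
    # process created beyond the cap fails to start.
    win32job.AssignProcessToJobObject(job, win32api.GetCurrentProcess())
    return job  # keep a reference so the job handle stays alive

if sys.platform == "win32":
    _job = limit_active_processes(32)
```

With this in place, a runaway multiprocessing loop hits the job's process cap instead of exhausting the machine, so no hard reboot is needed.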
