How would you limit file size created by a python subprocess?

I run a subprocess from python like this (not my script):

import contextlib
import subprocess
import sys

with contextlib.redirect_stdout(log_file):
    # ....
    processResult = subprocess.run(args,
                                   stdout=sys.stdout,
                                   stderr=sys.stderr,
                                   timeout=3600)

and sometimes the process goes crazy (due to an intermittent bug) and dumps so many errors into the stdout/logfile that it grows to 40Gb and fills up the disk space.

What would be the best way to protect against that? Being a python newbie, I have 2 ideas:

  • piping the subprocess into something like head that aborts it if the output grows beyond a limit (not sure if this is possible with subprocess.run, or whether I have to go the low-level Popen way)

  • finding or creating some handy IO wrapper class IOLimiter which would throw an error after a given size (couldn't find anything like this in the stdlib and not even sure where to look for it); a rough sketch of what I have in mind is shown right after this list
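
To make the second idea concrete, here is roughly what I have in mind (just a sketch: IOLimiter, run.log and the 1 GiB cap are placeholders of mine, and args is the same argument list as above). Since the child writes to a real file descriptor, such a wrapper can't simply be passed as stdout= to subprocess.run; instead the parent would read the child's pipe and copy it into the log file, killing the child once the budget is exceeded:

import subprocess

class IOLimiter:
    """Copy a binary stream into a sink, raising once max_bytes is exceeded."""
    def __init__(self, sink, max_bytes):
        self.sink = sink
        self.max_bytes = max_bytes
        self.written = 0

    def copy(self, source, chunk_size=65536):
        for chunk in iter(lambda: source.read(chunk_size), b""):
            self.written += len(chunk)
            if self.written > self.max_bytes:
                raise RuntimeError("output limit exceeded")
            self.sink.write(chunk)

with open("run.log", "wb") as log_file:
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    limiter = IOLimiter(log_file, max_bytes=1 * 1024**3)  # 1 GiB cap
    try:
        limiter.copy(proc.stdout)
    except RuntimeError:
        proc.kill()   # stop the runaway child instead of filling the disk
    finally:
        proc.wait()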

I suspect there would be some smarter/cleaner way?

I recently had this problem myself. I did it with the Popen method, setting PYTHONUNBUFFERED=1:

import os
import subprocess
import time

# PYTHONUNBUFFERED=1 keeps the child (if it is a Python program) from
# block-buffering its output, so lines show up as soon as they are printed.
test_proc = subprocess.Popen(
    my_command,
    universal_newlines=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    env={**os.environ, "PYTHONUNBUFFERED": "1"},
)

print(time.time(), "START")
# Iterate over the lines of output produced
for out_data in iter(test_proc.stdout.readline, ""):
    # Check whatever I need on each line (e.g. count the bytes seen so far
    # and call test_proc.kill() once they pass a limit).
    pass