
How would you limit the size of a file created by a Python subprocess?

I run a subprocess from Python like this (the script is not mine):

  with contextlib.redirect_stdout(log_file):
    # ....
    processResult = subprocess.run(args,
                    stdout=sys.stdout,
                    stderr=sys.stderr,
                    timeout=3600)

and sometimes the process goes crazy (due to an intermittent bug) and dumps so many errors into stdout/the log file that it grows to 40 GB and fills up the disk.

What would be the best way to protect against that? Being a Python newbie, I have two ideas:

  • piping the subprocess output into something like head, which would abort it once the output grows beyond a limit (I'm not sure whether this is possible with subprocess.run or whether I have to go the lower-level Popen way; see the sketch below)

  • finding or creating some handy IO wrapper class, say IOLimiter, which would throw an error once a given size is exceeded (I couldn't find anything like this in the stdlib and am not even sure where to look for it)

I suspect there would be some smarter/cleaner way?
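
For reference, here is a minimal sketch of the first idea, assuming a head binary that supports the -c option; args and log_file are the objects from the snippet above, and MAX_BYTES is a made-up cap:

  import subprocess

  MAX_BYTES = 100 * 1024 * 1024  # made-up cap of 100 MB

  # Let `head -c` cap what reaches the log file; once head has passed
  # MAX_BYTES through, it exits, and the child is killed by SIGPIPE on
  # its next write.
  child = subprocess.Popen(args, stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT)
  limiter = subprocess.Popen(["head", "-c", str(MAX_BYTES)],
                             stdin=child.stdout, stdout=log_file)
  child.stdout.close()  # let the pipe close fully when head exits
  limiter.wait()
  child.wait(timeout=3600)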

I recently had this problem myself. I did it with the Popen approach, setting PYTHONUNBUFFERED=1 and reading the output line by line:

import os
import subprocess
import time

test_proc = subprocess.Popen(
    my_command,
    universal_newlines=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    # pass PYTHONUNBUFFERED=1 (mentioned above) so a Python child flushes
    # its output line by line instead of in large blocks
    env={**os.environ, "PYTHONUNBUFFERED": "1"},
)

print(time.time(), "START")
# Iterate over the lines of output as they are produced
for out_data in iter(test_proc.stdout.readline, ""):
    # Check whatever I need.
    ...
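
The body of that loop is where the size check goes. As a sketch of one way to fill it in (continuing from test_proc above; MAX_BYTES is a made-up cap and log_file stands for a file object like the one in the question):

MAX_BYTES = 100 * 1024 * 1024  # made-up cap of 100 MB
written = 0
for out_data in iter(test_proc.stdout.readline, ""):
    log_file.write(out_data)   # keep logging the output
    written += len(out_data)   # characters written, roughly bytes for ASCII logs
    if written > MAX_BYTES:    # limit reached: stop the runaway process
        test_proc.kill()
        break
test_proc.wait()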
