
Python 2 Subprocess: Cannot get output from readline

I have the following C application

#include <stdio.h>

int main(void)
{
    printf("hello world\n");
    /* Go into an infinite loop here. */
    while(1);

    return 0;
}

And I have the following python code.

import subprocess
import time
import pprint


def run():
    command = ["./myapplication"]
    process = subprocess.Popen(command, stdout=subprocess.PIPE)
    try:
        while process.poll() is None:
            # HELP: This call blocks...
            for i in process.stdout.readline():
                print(i)

    finally:
        if process.poll() is None:
            process.kill()


if __name__ == "__main__":
    run()

When I run the Python code, stdout.readline (or even stdout.read) blocks.

If I run the application using subprocess.call(program) then I can see "hello world" in stdout.

How can I read input from stdout with the example I have provided?

Note: I would not want to modify my C program. I have tried this on both Python 2.7.17 and Python 3.7.5 under Ubuntu 19.10 and I get the same behaviour. Adding bufsize=0 did not help me.

The easiest way is to flush the buffer in the C program:

...
printf("hello world\n");
fflush(stdout);
while(1);
...

If you don't want to change the C program, you can manipulate the libc buffering behaviour from outside by using stdbuf to call your program (Linux). The syntax is stdbuf -o0 yourapplication for unbuffered output and stdbuf -oL yourapplication for line buffering. Therefore, in your Python code, use

...
command = ["/usr/bin/stdbuf","-oL","pathtomyapplication"]
process = subprocess.Popen(command, stdout=subprocess.PIPE)
...

or

...
command = ["/usr/bin/stdbuf","-o0","pathtomyapplication"]
process = subprocess.Popen(command, stdout=subprocess.PIPE)
...

Applications built using the C standard I/O library (built with #include <stdio.h>) buffer input and output. The stdio library can tell, via isatty, that it is writing to a pipe rather than a TTY, so it chooses block buffering instead of line buffering. Data is flushed only when the buffer fills, and "hello world\n" does not fill the buffer, so it is never flushed.

One way around this is shown in Timo Hartmann's answer, using the stdbuf utility. This uses an LD_PRELOAD trick to swap in its own libstdbuf.so. In many cases that is a fine solution, but LD_PRELOAD is kind of a hack and does not work in some cases, so it may not be a general solution.

Maybe you want to do this directly in Python, and there are stdlib options to help here: you can connect stdout to a pseudo-tty (docs py2, docs py3) instead of a pipe. Since myapplication then sees a TTY, stdio enables line buffering, meaning that any newline character flushes the buffer.

from __future__ import print_function
from subprocess import Popen
import errno
import os
import pty

mfd, sfd = pty.openpty()
proc = Popen(["/tmp/myapplication"], stdout=sfd)
os.close(sfd)  # close our copy of the slave end so we can detect EOF
while True:
    try:
        output = os.read(mfd, 1000)
    except OSError as e:
        if e.errno != errno.EIO:
            raise
        break  # on Linux, EIO on the master means the slave was closed
    else:
        if not output:
            break
        print(output)
proc.wait()

Note that we are reading bytes from the output now, so we cannot necessarily decode them right away!

See Processing the output of a subprocess with Python in realtime for a blog post cleaning up this idea. There are also existing third-party libraries to do this stuff, see ptyprocess .
