
Python script which runs multiple shell commands and waits for the result

I'm trying to write a Python wrapper for building some software. I need to automate building it hundreds of times with different configurations, which means I can't just run autogen.sh ; ./configure ; make ; make install by hand. Some of the configurations I use require running a script which conditionally sets up some environment variables. What I want is to be able to do something like this:

import subprocess
import sys

command = './autogen.sh'
ret = subprocess.call(command.split())
if ret != 0:
  sys.exit(ret)

command = './script.sh ; ./configure <configure-flags>'
ret = subprocess.call(command.split())
if ret != 0:
  sys.exit(ret)

command = 'make'
ret = subprocess.call(command.split())
if ret != 0:
  sys.exit(ret)

command = 'make install'
ret = subprocess.call(command.split())
if ret != 0:
  sys.exit(ret)

The problem I'm running into is that the environment variables set in script.sh are not preserved for configure. I saw a partial solution in "Sending multiple commands to a bash shell which must share an environment", but that involves flushing the commands to stdin and polling for a result, which won't really work with a long-running makefile (mine takes about 10-20 minutes), and it also doesn't give you the return value, which I need in order to know whether the build succeeded.

Does anyone know a better way to do this?

If you have a script that sets variables you want to access afterwards, you must source it (similar to what other languages call "include").

Instead of

command = './script.sh ; ./configure <configure-flags>'
ret = subprocess.call(command.split())

you can do

command = ["bash", "-c", "source script.sh; ./configure"]
subprocess.call(command)
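
Combined with the return-code checks from the question, that step might look like this (a sketch; <configure-flags> again stands in for whatever flags your configuration needs):

import subprocess
import sys

# script.sh and configure run in the same child shell, so the
# variables exported by script.sh are visible to configure.
command = ["bash", "-c", "source script.sh; ./configure <configure-flags>"]
ret = subprocess.call(command)
if ret != 0:
    sys.exit(ret)

The return value of subprocess.call is the exit status of the bash process, which is the exit status of the last command in the string, so a failing configure still aborts the build.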

The basic problem here is that environment variables are copied only "downward" (from parent to child), never "upward" (child to parent). Your Python script is a parent. It runs a shell command, which is therefore a child; the shell runs more commands, which are children of the shell (and therefore grandchildren of the Python process).

To make environment variables persist, you'll need to import them upwards somehow. Exactly how is up to you. A common technique (used in shell scripts as well as in Python) is to have the exporter print the values it wants set, then have the shell or Python process read that output and do the setting. (I see that's what the post you linked to does.)

For instance, a child process might print:

CONFIG_PATH=/path/to/config/file

(or the same with export added) and then the outer shell would simply eval this. This implies a great deal of trust: what if the child process prints rm -rf /, for instance? One can apply rules (regular-expression matching, for instance) to the output before executing it, or even parse it manually (or automatically) and not execute the result until after a verification step.
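
In Python, one way to apply this technique without eval-ing arbitrary output is to have the child shell dump its whole environment and have the parent parse it back. A sketch, assuming GNU env's -0 option (NUL separators keep values containing newlines from breaking the parse):

import os
import subprocess

# Source the script in a child shell, then dump the resulting
# environment NUL-separated.
out = subprocess.check_output(
    ["bash", "-c", "source script.sh >/dev/null 2>&1; env -0"])

# Copy the child's environment back into this (parent) process.
for entry in out.split(b"\0"):
    if entry:
        key, _, value = entry.partition(b"=")
        os.environ[key.decode()] = value.decode()

# Later subprocess.call(...) invocations inherit the imported variables.

Since only KEY=VALUE pairs are parsed and assigned, nothing in the child's output is ever executed.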

Another method is to write the configuration to a file, and have the parent read the file. This is pretty much the same technique, but using a file as the communication channel instead of fiddling with stdin and stdout. It has a few additional issues (naming the file, and knowing when to read it).
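
A sketch of that file-based variant (the temp file and the use of env's default KEY=VALUE-per-line output are assumptions; values containing newlines would need the NUL-separated form shown above):

import os
import subprocess
import tempfile

# Create a temp file for the child shell to write its environment into.
fd, env_file = tempfile.mkstemp()
os.close(fd)

# Child sources the script, then dumps its variables to the file.
subprocess.call(["bash", "-c", "source script.sh; env > " + env_file])

# Parent reads the file back and applies the settings.
with open(env_file) as f:
    for line in f:
        key, sep, value = line.rstrip("\n").partition("=")
        if sep:
            os.environ[key] = value
os.unlink(env_file)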

(There are, of course, many build and/or test frameworks written in Python. I'm not going to recommend any specific ones as I don't have that much experience with them.)
