
Running the same program on multiple cores with different variable values

I have a program that I want to create N instances of, where the only thing that varies is some hyperparameter $\beta$.

I know I could do this with a bash script: call the program N times, each with a different value of $\beta$, and send each one to the background so the next one can start:

#!/bin/bash

nohup python3 test.py 1 >> res.txt &
nohup python3 test.py 2 >> res.txt &
nohup python3 test.py 3 >> res.txt &
nohup python3 test.py 4 >> res.txt &
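The same four launches can be written as a loop (a sketch: `test.py` and the values 1 to 4 are taken from the script above, and here each run gets its own output file to avoid interleaved writes):

```shell
#!/bin/bash
# One background instance per beta value; test.py is the script from the question.
for beta in 1 2 3 4; do
    nohup python3 test.py "$beta" >> "res_${beta}.txt" &
done
wait  # block until every background instance has finished
```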

Maybe I can also do this directly in Python, in a cleaner manner. My question is: from your experience, what is the cleanest way of achieving this? Feel free to ask about any detail I might have missed.

For running multiple things in parallel, the first tool that comes to mind is GNU Parallel.

So for your example, a dry-run gives this:

parallel --dry-run 'nohup python prog.py {} &' ::: {1..4}

Sample Output

nohup python prog.py 3 &
nohup python prog.py 2 &
nohup python prog.py 1 &
nohup python prog.py 4 &

In general, you don't want multiple parallel processes writing to the same file: their output interleaves and makes a mess. So I would name each output file after the parameter:

parallel --dry-run 'nohup python prog.py {}  > res{}.txt &' ::: {1..4}

You are looking for the subprocess module.

subprocess.run([program, arg1, arg2, ...])

An example:

import subprocess

subprocess.run(["ls", "-l"])

Also check how to call a subprocess and get its output.
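Tying this back to the question, here is a minimal sketch that starts all N instances with `subprocess.Popen` (which, unlike `subprocess.run`, does not block) and then collects each one's output. A `python -c` one-liner stands in for the real `test.py`:

```python
import subprocess
import sys

betas = [1, 2, 3, 4]  # the hyperparameter values from the question

# Start every instance first; Popen returns immediately, so they run in parallel.
procs = [
    subprocess.Popen(
        [sys.executable, "-c", "import sys; print('beta =', sys.argv[1])", str(b)],
        stdout=subprocess.PIPE,
        text=True,
    )
    for b in betas
]

# communicate() waits for each process to finish and returns its captured stdout.
results = [p.communicate()[0].strip() for p in procs]
print(results)  # → ['beta = 1', 'beta = 2', 'beta = 3', 'beta = 4']
```

In a real run you would replace the `-c` one-liner with `["python3", "test.py", str(b)]` and, as noted in the other answer, point each process's stdout at its own file rather than a shared one.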
