
R jobs on SLURM running on a single node only

Despite specifying the job name, partition, and node on which the job should run, R still runs on compute node 01 and is never placed on another node. The script is below; any help is appreciated:

#!/bin/bash
#SBATCH --job-name=10/0.30
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --partition=debug
#SBATCH --exclude=compute[23,31-33,40]
#SBATCH --nodelist=compute[07]

echo "program started"  

cd /home/qwe/10/0.30

sbatch /home/R-3.3.1/bin/R CMD BATCH --no-save --no-restore test_dcd.R test_dcd.out 

Running squeue to list the running jobs shows:

         12169      qwe        R      qwe  R       7:08      1 compute01
         12172      qwe        R      qwe  R       5:03      1 compute01
         12175      qwe        R      qwe  R       3:26      1 compute01
         12177      qwe        R      qwe  R       0:02      1 compute01

You have to pass your script to sbatch as a parameter, not call sbatch inside the script.

So instead of running:

sbatch /home1/ASP/R-3.3.1/bin/R...

you should run:

sbatch myscript.sh

Also, if you want a job to use multiple CPUs, you should use --cpus-per-task=16 instead of --ntasks-per-node; --ntasks and --ntasks-per-node are meant for MPI applications, where each task is a separate process. For more details about the options, check the sbatch manpage.
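Putting both corrections together, the submission script might look like the sketch below. Paths, file names, and the node list are taken from the question; the job name is illustrative, and whether compute07 is the right node is for you to decide:

```shell
#!/bin/bash
#SBATCH --job-name=test_dcd
#SBATCH --nodes=1
#SBATCH --cpus-per-task=16        # 16 CPUs for one task, instead of --ntasks-per-node
#SBATCH --partition=debug
#SBATCH --exclude=compute[23,31-33,40]
#SBATCH --nodelist=compute[07]

echo "program started"

cd /home/qwe/10/0.30

# Run R directly here; this whole script is what gets passed to sbatch,
# so there is no nested sbatch call inside it.
/home/R-3.3.1/bin/R CMD BATCH --no-save --no-restore test_dcd.R test_dcd.out
```

You would then submit it with `sbatch myscript.sh`, and SLURM takes care of placing the job on the requested node.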
