R jobs on SLURM running on a single node only
Despite specifying the job name, partition, and node on which the job should run, R still runs on compute node 01 and is never placed on any other node. I am presenting the script below; any help is appreciated:
#!/bin/bash
#SBATCH --job-name=10/0.30
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --partition=debug
#SBATCH --exclude=compute[23,31-33,40]
#SBATCH --nodelist=compute[07]
echo "program started"
cd /home/qwe/10/0.30
sbatch /home/R-3.3.1/bin/R CMD BATCH --no-save --no-restore test_dcd.R test_dcd.out
On running squeue to get the list of running jobs:
12169 qwe R qwe R 7:08 1 compute01
12172 qwe R qwe R 5:03 1 compute01
12175 qwe R qwe R 3:26 1 compute01
12177 qwe R qwe R 0:02 1 compute01
You have to run sbatch passing the script as a parameter, not call sbatch from inside the script. So instead of running:
sbatch /home1/ASP/R-3.3.1/bin/R...
you should run:
sbatch myscript.sh
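Applied to the script in the question, a corrected batch file might look like the sketch below. The R path, working directory, and partition are taken from the question; the job name is changed to avoid the `/` character, which can cause problems if it is used in output file names:

```shell
#!/bin/bash
#SBATCH --job-name=R_10_0.30
#SBATCH --nodes=1
#SBATCH --cpus-per-task=16
#SBATCH --partition=debug
#SBATCH --nodelist=compute[07]

echo "program started"
cd /home/qwe/10/0.30
# Run R directly here; do NOT call sbatch inside the batch script.
/home/R-3.3.1/bin/R CMD BATCH --no-save --no-restore test_dcd.R test_dcd.out
```

You would then submit it once with `sbatch myscript.sh`, and SLURM allocates the requested node before the script body runs.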
Also, if you want to use multiple CPUs in a single job, you should use --cpus-per-task=16 instead of --ntasks-per-node. The --ntasks and --ntasks-per-node options are meant for MPI applications. For more details about the options, check the sbatch manpage.
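To illustrate the distinction, here are two hypothetical directive fragments (program names are placeholders, not from the question): a task is a separate process, while --cpus-per-task allocates cores to one process.

```shell
# Multithreaded program (e.g. OpenMP, or R with parallel workers):
# one task, 16 CPU cores on the same node.
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
./threaded_program

# MPI program: 16 independent tasks (processes), launched via srun,
# which may be spread across nodes.
#SBATCH --ntasks=16
srun ./mpi_program
```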