mpirun without options runs a program on one process only
If no value is provided for the number of copies to execute (i.e., neither the "-np" option nor its synonyms are provided on the command line), Open MPI will automatically execute a copy of the program on each process slot (see below for a description of a "process slot").
So I would expect

mpirun program

to run eight copies of the program (actually a simple hello world), since I have an Intel® Core™ i7-2630QM CPU @ 2.00GHz × 8, but it doesn't: it simply runs a single process.
If you do not specify the number of processes to be used, mpirun tries to obtain them from the (specified or) default host file. From the corresponding section of the man page you linked:
If the hostfile does not provide slots information, a default of 1 is assumed.
Since you did not modify this file (I assume), mpirun will use one slot only.
On my machine, the default host file is located in
/etc/openmpi-x86_64/openmpi-default-hostfile
i7-2630QM is a 4-core CPU with two hardware threads per core. With computationally intensive programs, you are better off starting four MPI processes instead of eight.
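On Linux you can confirm the core/thread layout before choosing the process count. This is a small sketch assuming the standard `nproc` and `lscpu` utilities are available:

```shell
# Logical CPUs (hardware threads) seen by the OS -- 8 on an i7-2630QM
nproc
# Physical layout: cores per socket and threads per core
lscpu | grep -E 'Thread\(s\) per core|Core\(s\) per socket'
```

If "Thread(s) per core" is 2, half of the logical CPUs are hyperthreads rather than full cores.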
Simply use

mpiexec -n 4 ...

as you do not need a hostfile for starting processes on the same node where mpiexec is executed.
Hostfiles are used when launching MPI processes on remote nodes. If you really need to create one, the following should do it:
hostname slots=4 max_slots=8
(replace hostname with the host name of the machine)
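A line like the one above can be generated for the current machine in one command. This is a sketch; "myhostfile" is an arbitrary file name chosen for illustration:

```shell
# Write a one-line hostfile for this machine: 4 slots, oversubscribable to 8.
# "myhostfile" is an arbitrary name; use any path you like.
echo "$(hostname) slots=4 max_slots=8" > myhostfile
cat myhostfile
```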
Run the program as
mpiexec -hostfile name_of_hostfile ...
max_slots=8 allows you to oversubscribe the node with up to eight MPI processes if your MPI program can make use of the hyperthreading. You can also set the environment variable OMPI_MCA_orte_default_hostfile to the full path of the hostfile instead of explicitly passing it each and every time as a parameter to mpiexec.
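Setting the variable can look like this (a sketch; the path is a placeholder for wherever you saved your hostfile):

```shell
# Point Open MPI at your hostfile once per shell session; after this,
# a plain "mpiexec ./program" will use it without an explicit -hostfile flag.
export OMPI_MCA_orte_default_hostfile="$HOME/myhostfile"   # placeholder path
echo "$OMPI_MCA_orte_default_hostfile"
```

Put the export in your shell startup file if you want it to persist across sessions.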
If you happen to be using a distributed resource manager like Torque, LSF, SGE, etc., then, if properly compiled, Open MPI integrates with the environment and builds a host and slot list from the reservation automatically.