MPI C: send a matrix line by line to all child processes (MPI_COMM_SPAWN)
I have a parent process and a matrix, and I want to create a child process for each row and send that child the corresponding row to process.
Parent process code:
int tag = 0;
MPI_Status status;

int random(int n) {
    return rand() % n;
}

float** generate_matrix(int n, int m) {
    int i, j;
    float **x;
    x = (float **) malloc(m * sizeof(float));
    for (i = 0; i < m; i++) {
        x[i] = (float *) malloc(n * sizeof(float));
    }
    for (i = 0; i < m; i++) {
        for (j = 0; j < n; j++) {
            x[i][j] = random(100);
        }
    }
    return x;
}

int main(int argc, char** argv) {
    int my_rank;
    int num_procs;
    MPI_Comm workercomm;
    int n = 4, m = 5;
    float **matrix = generate_matrix(n, m);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    MPI_Comm_spawn("C:/Users/colegnou/workspace/worker/Debug/worker.exe",
                   MPI_ARGV_NULL, m,
                   MPI_INFO_NULL, 0, MPI_COMM_SELF, &workercomm, MPI_ERRCODES_IGNORE);

    for (int i = 0; i < m; i++) {
        MPI_Bcast(matrix[i], n, MPI_FLOAT, MPI_ROOT, workercomm);
    }

    MPI_Finalize();
    return 0;
}
And the worker code:
int tag = 0;
MPI_Status status;

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);

    int myid;
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    int n = 4;
    float *vector = (float *) malloc(n * sizeof(float));

    if (parent != MPI_COMM_NULL) {
        MPI_Bcast(vector, n, MPI_FLOAT, MPI_ROOT, parent);
    }

    printf("%d ->", myid);
    for (int i = 0; i < n; i++) {
        printf("%f ", vector[i]);
    }
    printf("\n");

    MPI_Comm_free(&parent);
    free(vector);
    MPI_Finalize();
    return 0;
}
I expected every child process to print its corresponding row of the matrix, but instead the output is:
4 ->0.000000 0.000000 0.000000 0.000000
1 ->0.000000 0.000000 0.000000 0.000000
3 ->0.000000 0.000000 0.000000 0.000000
0 ->0.000000 0.000000 0.000000 0.000000
2 ->0.000000 0.000000 0.000000 0.000000
Thanks!!
In the worker code, you should use root=0 instead of MPI_ROOT. Feel free to re-read the definition of MPI_Bcast() when an inter-communicator is used: https://www.open-mpi.org/doc/v2.1/man3/MPI_Bcast.3.php
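A sketch of the corrected receive side (assuming the parent keeps its loop of m broadcasts as in the question, and that the worker sets up a buffer myrow for its own row; m here must match the parent's value):

```c
// In the worker: on an inter-communicator, the receivers name the root by its
// rank in the REMOTE (parent) group, so root is 0 here. MPI_ROOT is only used
// by the sending side.
if (parent != MPI_COMM_NULL) {
    int m = 5;  // must match the parent's m
    for (int i = 0; i < m; i++) {
        // Every worker participates in every broadcast (it is a collective),
        // so receive all m rows and keep only the one addressed to this rank.
        MPI_Bcast(vector, n, MPI_FLOAT, 0, parent);
        if (i == myid)
            memcpy(myrow, vector, n * sizeof(float));  // needs <string.h>
    }
}
```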
Note the allocation of the matrix is incorrect: you should malloc(m * sizeof(float *)) instead.
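For illustration, here is the corrected allocation in plain C (alloc_matrix is a hypothetical name, standing in for the question's generate_matrix):

```c
#include <stdlib.h>

/* Allocate an m-by-n matrix as an array of m row pointers.
   The outer malloc must reserve space for m POINTERS, i.e.
   m * sizeof(float *), not m * sizeof(float). */
float **alloc_matrix(int n, int m) {
    float **x = malloc(m * sizeof(float *));   /* row pointers */
    for (int i = 0; i < m; i++)
        x[i] = malloc(n * sizeof(float));      /* one row of n floats */
    return x;
}
```

On platforms where sizeof(float) == sizeof(float *) the original bug can go unnoticed, but on a 64-bit build the undersized pointer array is heap corruption waiting to happen.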
You should also perform m broadcasts in the worker, unless MPI_Scatter() is what you are looking for (and in that case, you should allocate a contiguous 2D matrix).
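A contiguous 2D matrix, which MPI_Scatter() needs in order to hand one row to each worker in a single call, could be allocated like this (alloc_contiguous_matrix is a hypothetical helper name):

```c
#include <stdlib.h>

/* Allocate an m-by-n matrix whose elements sit in ONE contiguous block,
   so the whole thing can be passed to MPI_Scatter() as m*n floats.
   Row pointers are laid over the block so x[i][j] indexing still works. */
float **alloc_contiguous_matrix(int n, int m) {
    float *data = malloc((size_t)m * n * sizeof(float));  /* one flat block */
    float **x = malloc(m * sizeof(float *));
    for (int i = 0; i < m; i++)
        x[i] = data + (size_t)i * n;   /* row i starts at flat offset i*n */
    return x;
}
```

With this layout, x[0] points at the flat block, so the parent can pass it as the send buffer of MPI_Scatter(); freeing it is free(x[0]) followed by free(x).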