
Error when trying to use MPI_Recv: signal segmentation fault

:)

I have a puzzling problem with an MPI program. The idea is: each slave process sends data to the master in order to compute the Mandelbrot fractal.

At first, each slave sent a single point, and it worked. Then they sent a whole line, and it worked too!

But now I am trying to make them send a block of lines (let's say 5 lines, so a submatrix).

My idea is to flatten these five lines into a single array. The master receives the first "new" line, but not the others O_o. I'm confused.

For the others (>1) I get: signal segmentation fault, signal code: address not mapped, failing at address

Please help me! I have been searching for the cause for a long time :(

PS: I'm French (so that's why my English is bad).

// the whole table to be used in a master
//int table[NX*NY];
//int count = 0;
if (rank == 0) {
    int res;
    int line[MAXY+MAXY+1];
    int block[5*(MAXY+MAXY+1)];
    int count = 0;
    /* Begin User Program - the master */
    //MPI_Recv(&line, MAXY+MAXY+1, MPI_INT, MPI_ANY_SOURCE, DATATAG, MPI_COMM_WORLD, &status);
    MPI_Recv(&block, 5*(MAXY+MAXY+1), MPI_INT, MPI_ANY_SOURCE, DATATAG, MPI_COMM_WORLD, &status);
    printf("sizeof of datablock received is = %d \n", sizeof(block)/sizeof(block[0]));
    recvd = status.MPI_SOURCE;
    printf("i have received blockdata from %d \n", recvd);
    /* fill in the grid */
    for (i = -MAXX; i <= MAXX; i++) {
        for (j = -MAXY; j <= MAXY; j++) {
            cases[i + MAXX][j + MAXY] = block[count % (MAXY+MAXY+1)];
            //printf("I filled block[count], not credible\n");
            count++;
        }
    }
    dump_ppm("mandel.ppm", cases);
    printf("Done.\n");
} else {
    /* We are one of the child processes */
    /* for the block; let's suppose each child sends 5 rows */
    double x, y;
    int i, j, res, rc, rank, count;
    //int line[MAXY + MAXY + 1];
    int block[5*(MAXY+MAXY+1)];
    count = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = -MAXX; i <= MAXX; i++) {
        for (j = -MAXY; j <= MAXY; j++) {
            x = 2 * i / (double)MAXX;
            y = 1.5 * j / (double)MAXY;
            res = mandel(x, y);
            //line[j+MAXY] = res;
            block[count] = res;
            if (count % (5*(MAXY+MAXY+1)) == 0) {
                // we send every five rows
                MPI_Send(&block, 5*(MAXY+MAXY+1), MPI_INT, 0, DATATAG, MPI_COMM_WORLD);
                printf("me slave %d, have sent datablock to master\n", rank);
                printf("sizeof of datablock sent is = %d\n", sizeof(block)/sizeof(block[0]));
            }
            count++;
        }
        //MPI_Send(&line, MAXY+MAXY+1, MPI_INT, 0, DATATAG, MPI_COMM_WORLD);
    }
}
MPI_Finalize();
return 0;
}

The function MPI_Recv() needs the address of the buffer where the data will be received, and the same goes for MPI_Send(). Since int block[5*(MAXY+MAXY+1)] is an array, the expression block decays to a pointer to its first item, &block[0]: this is the address that is required. On the other hand, &block is a pointer to the array as a whole, with type int (*)[5*(MAXY+MAXY+1)]. It happens to hold the same address as &block[0], but its type is different, so block (or equivalently &block[0]) is the idiomatic argument to pass.
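To make the type difference concrete, here is a minimal standalone sketch; the array size 10 is a hypothetical value chosen just to keep it short:

#include <stdio.h>

int main(void) {
    int block[10];          /* hypothetical fixed size, for illustration       */
    int *p = block;         /* decays to &block[0]: type int *                 */
    int (*q)[10] = &block;  /* pointer to the whole array: type int (*)[10]    */
    /* Both hold the same raw address, but the types differ:                   */
    /* p + 1 advances by sizeof(int); q + 1 advances by sizeof(block).         */
    printf("%p %p\n", (void *)p, (void *)q);
    printf("%zu %zu\n", sizeof(*p), sizeof(*q));  /* e.g. 4 vs 40 */
    return 0;
}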

Hence, could you try:

int block[5*(MAXY+MAXY+1)];
...
MPI_Send(block, 5*(MAXY+MAXY+1), MPI_INT, 0, DATATAG, MPI_COMM_WORLD);
...
MPI_Recv(block, 5*(MAXY+MAXY+1), MPI_INT, MPI_ANY_SOURCE, DATATAG, MPI_COMM_WORLD, &status);

Which is equivalent to:

int block[5*(MAXY+MAXY+1)];
...
MPI_Send(&block[0], 5*(MAXY+MAXY+1), MPI_INT, 0, DATATAG, MPI_COMM_WORLD);
...
MPI_Recv(&block[0], 5*(MAXY+MAXY+1), MPI_INT, MPI_ANY_SOURCE, DATATAG, MPI_COMM_WORLD, &status);

What if you want to send a single integer int a? The address of a (&a) can be provided to MPI_Send(), as done in many examples devoted to MPI_Send():

int a=42;
MPI_Send(&a,1, MPI_INT, 0, DATATAG, MPI_COMM_WORLD);

Lastly, make sure that MPI_Send() is called exactly as many times as MPI_Recv(). Indeed, in the code you posted, MPI_Recv() is called only once by the root process, while each non-root process sends a message to the root. Consequently, the program will work for 2 processes, but it is likely to fail if more processes are used, or if a single process is used.
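As one possible way to balance the two sides, the master could post one MPI_Recv per expected block. Here is a minimal sketch assuming each slave sends exactly nblocks blocks; nblocks is a hypothetical name, not a value from the posted code:

int nprocs;
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
/* nblocks is an assumption: the number of 5-row blocks each slave sends */
int expected = (nprocs - 1) * nblocks;
for (int k = 0; k < expected; k++) {
    MPI_Recv(block, 5*(MAXY+MAXY+1), MPI_INT, MPI_ANY_SOURCE,
             DATATAG, MPI_COMM_WORLD, &status);
    /* identify which rows this block covers, e.g. from status.MPI_SOURCE
       or a per-block tag, before copying it into cases[] */
}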
