
MPI_ERR_BUFFER: invalid buffer pointer

What is the most common reason for this error

 MPI_ERR_BUFFER: invalid buffer pointer

which results from MPI_Bsend() and MPI_Recv() calls? The program works fine when the number of parallel processes is small (<14), but when I increase the number of processes I get this error.

To expand on my previous comment:

Buffering in MPI can occur in various situations. Messages can be buffered internally by the MPI library in order to hide the network latency (usually only done for small messages up to an implementation-dependent size), or buffering can be enforced by the user with either of the buffered send operations, MPI_Bsend() and MPI_Ibsend(). User buffering differs from the internal one, though:

  • first, messages sent by MPI_Bsend() or by MPI_Ibsend() are always buffered, which is not the case with internally buffered messages. The latter may or may not be buffered, depending on their size and the availability of internal buffer space;
  • second, because of the "always buffer" aspect, if no space is left in the user-attached buffer, an MPI_ERR_BUFFER error occurs, as the sketch after this list illustrates.
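
To make the user-buffering mechanism concrete, here is a minimal sketch (not the code from the question) of attaching a buffer, sending through it with MPI_Bsend(), and detaching it; BUFSIZE and the message parameters are arbitrary illustration values:

#include <mpi.h>
#include <stdlib.h>

#define BUFSIZE (1024 * 1024)   /* made-up buffer size */

void buffered_send_sketch(int dest, int tag)
{
    int payload[100] = {0};   /* data to be sent */

    /* All subsequent MPI_Bsend() calls draw from this buffer */
    MPI_Buffer_attach(malloc(BUFSIZE), BUFSIZE);

    /* Fails with MPI_ERR_BUFFER if the attached buffer has no room left */
    MPI_Bsend(payload, 100, MPI_INT, dest, tag, MPI_COMM_WORLD);

    /* Detaching blocks until all buffered messages have been delivered */
    void *buf;
    int size;
    MPI_Buffer_detach(&buf, &size);
    free(buf);
}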

Sent messages occupy buffer space until they have actually been received by the destination process. Since MPI does not provide any built-in mechanism to confirm the reception of a message, one has to devise another way to do it, e.g. by sending back a confirmation message from the destination process to the source one.
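
One possible shape of such a confirmation, sketched under the assumption that both sides agree on a dedicated acknowledgement tag (ACK_TAG is a made-up name):

#include <mpi.h>

#define ACK_TAG 999   /* made-up tag reserved for acknowledgements */

/* Sender side: the message is known to be out of the attached buffer
   only once the acknowledgement has come back */
void send_with_ack(const int *data, int count, int dest, int tag)
{
    int ack;
    MPI_Bsend(data, count, MPI_INT, dest, tag, MPI_COMM_WORLD);
    MPI_Recv(&ack, 1, MPI_INT, dest, ACK_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

/* Receiver side: acknowledge as soon as the message has been received */
void recv_with_ack(int *data, int count, int source, int tag)
{
    int ack = 1;
    MPI_Recv(data, count, MPI_INT, source, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Send(&ack, 1, MPI_INT, source, ACK_TAG, MPI_COMM_WORLD);
}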

For that reason one has to treat all messages that were not explicitly confirmed as still being in transit and allocate enough memory in the buffer to hold them. Usually this means that the buffer should be at least as large as the total amount of data that you are willing to transfer plus the message envelope overhead, which is equal to number_of_sends * MPI_BSEND_OVERHEAD. This can put a lot of memory pressure on large MPI jobs. One has to keep that in mind and adjust the buffer space accordingly when the number of processes changes.
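
A sketch of the corresponding sizing computation, assuming nsends messages of count MPI_INT elements each (both names are illustration parameters, not values from the question):

#include <mpi.h>
#include <stdlib.h>

void attach_buffer_for(int nsends, int count)
{
    int pack_size;

    /* Upper bound on the packed size of one message payload */
    MPI_Pack_size(count, MPI_INT, MPI_COMM_WORLD, &pack_size);

    /* Total data plus one envelope overhead per buffered send */
    int bufsize = nsends * (pack_size + MPI_BSEND_OVERHEAD);

    MPI_Buffer_attach(malloc(bufsize), bufsize);
}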

Note that buffered send is provided merely as a convenience. It could be readily implemented as a combination of memory duplication and a non-blocking send operation, i.e. buffered send frees you from writing code like:

int data[count];   /* count elements to be sent */
int *shadow_data;
MPI_Request req;

...
<populate data>
...
/* Duplicate the data so that it can be reused right away */
shadow_data = (int *)malloc(sizeof(data));
memcpy(shadow_data, data, sizeof(data));
MPI_Isend(shadow_data, count, MPI_INT, destination, tag, MPI_COMM_WORLD, &req);
...
<reuse data as it is not used by MPI>
...
/* Release the copy only once the send has completed */
MPI_Wait(&req, MPI_STATUS_IGNORE);
free(shadow_data);

If memory is scarce then you should resort to non-blocking sends only.
