
MPI_SEND - send structure as vector

Considering this structure:

struct Book {
 int id;
 std::string title;
};

And this vector:

std::vector<Book> books;

How can I use MPI_Send in order to send the elements of the vector books?

I have been trying to find a way to do this the entire day, but without any results.

If title were a char[N] array of fixed size N, you could create a new datatype and use it in MPI_Send. Unfortunately, this approach won't work with std::string as a data member. You can, however, send a std::vector<Book> element by element.

For example:

std::vector<Book> books;
// ...
const unsigned long long size = books.size();
MPI_Send(&size, 1, MPI_UNSIGNED_LONG_LONG, ...);
for (const auto& book : books) {
    MPI_Send(&book.id, 1, MPI_INT, ...);
    const unsigned long long len = book.title.length(); 
    MPI_Send(&len, 1, MPI_UNSIGNED_LONG_LONG, ...);
    MPI_Send(book.title.data(), len, MPI_CHAR, ...);
}

and, on the receiving side:

std::vector<Book> books;
unsigned long long size;
MPI_Recv(&size, 1, MPI_UNSIGNED_LONG_LONG, ...);
books.resize(size);
for (auto& book : books) {    
    MPI_Recv(&book.id, 1, MPI_INT, ...);
    unsigned long long len;
    MPI_Recv(&len, 1, MPI_UNSIGNED_LONG_LONG, ...);
    std::vector<char> str(len);
    MPI_Recv(str.data(), len, MPI_CHAR, ...);
    book.title.assign(str.begin(), str.end());
}
// ...
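Sending each id, length, and title as a separate message works, but many small messages carry latency overhead. An alternative is to pack the whole vector into one contiguous byte buffer and send it with a single MPI_Send of MPI_BYTE. A minimal sketch of such a round-trip (the helper names pack_books and unpack_books are illustrative, not part of any MPI API, and this layout assumes the same endianness on both ranks):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

struct Book {
  int id;
  std::string title;
};

// Serialize books into one buffer: [count][id, len, chars]...
std::vector<char> pack_books(const std::vector<Book>& books) {
  std::vector<char> buf;
  auto append = [&buf](const void* p, std::size_t n) {
    const char* c = static_cast<const char*>(p);
    buf.insert(buf.end(), c, c + n);
  };
  const std::uint64_t count = books.size();
  append(&count, sizeof count);
  for (const auto& b : books) {
    append(&b.id, sizeof b.id);
    const std::uint64_t len = b.title.size();
    append(&len, sizeof len);
    append(b.title.data(), len);
  }
  return buf;
}

// Reverse of pack_books: rebuild the vector from the byte buffer.
std::vector<Book> unpack_books(const std::vector<char>& buf) {
  std::size_t pos = 0;
  auto read = [&](void* p, std::size_t n) {
    std::memcpy(p, buf.data() + pos, n);
    pos += n;
  };
  std::uint64_t count;
  read(&count, sizeof count);
  std::vector<Book> books(count);
  for (auto& b : books) {
    read(&b.id, sizeof b.id);
    std::uint64_t len;
    read(&len, sizeof len);
    b.title.assign(buf.data() + pos, len);
    pos += len;
  }
  return books;
}
```

The sender would then issue one MPI_Send(buf.data(), buf.size(), MPI_CHAR, ...), and the receiver one MPI_Probe/MPI_Get_count pair followed by a single MPI_Recv before calling unpack_books.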

Method 1

One way to do this is to give title a constant length. You can then build an MPI datatype around your struct, like so:

#include "mpi.h"
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

const int MAX_TITLE_LENGTH = 256;

struct Book {
  int id;
  char title[MAX_TITLE_LENGTH];
};

int main(int argc, char *argv[]){
  MPI_Init(&argc, &argv);

  std::vector<Book> books(343);

  MPI_Datatype BookType;
  MPI_Datatype type[2] = { MPI_INT, MPI_CHAR };
  int blocklen[2] = { 1, MAX_TITLE_LENGTH };

  MPI_Aint disp[2];
  disp[0] = offsetof(Book, id);
  disp[1] = offsetof(Book, title);
  MPI_Type_create_struct(2, blocklen, disp, type, &BookType);
  MPI_Type_commit(&BookType);

  int myrank;
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

  if (myrank == 0) {
    books[3].id = 4;
    MPI_Send(books.data(), 343, BookType, 1, 123, MPI_COMM_WORLD);
  } else if (myrank == 1) {
    MPI_Status status;
    MPI_Recv(books.data(), 343, BookType, 0, 123, MPI_COMM_WORLD, &status);
    std::cout<<books[3].id<<std::endl;
  }
  MPI_Finalize();
  return 0;
}
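One caveat with struct datatypes: MPI does not know about any padding the compiler inserts, so the member displacements and the extent of one element should be taken from the compiler rather than hand-computed. A small sketch of the relevant checks (the names disp_id, disp_title, and needs_resize are illustrative; if needs_resize were true, the committed type would have to be wrapped with MPI_Type_create_resized so arrays of Book are strided correctly):

```cpp
#include <cstddef>  // offsetof, std::size_t

const int MAX_TITLE_LENGTH = 256;

struct Book {
  int id;
  char title[MAX_TITLE_LENGTH];
};

// True member displacements, including any compiler-inserted padding.
constexpr std::size_t disp_id    = offsetof(Book, id);     // first member: 0
constexpr std::size_t disp_title = offsetof(Book, title);

// sizeof(Book) is the extent MPI must use between consecutive array
// elements; trailing padding would make it larger than the last member's
// displacement plus its size, signalling that MPI_Type_create_resized
// is needed.
constexpr bool needs_resize =
    sizeof(Book) != disp_title + MAX_TITLE_LENGTH * sizeof(char);
```

These values would be fed into MPI_Type_create_struct in place of hard-coded displacements such as sizeof(int).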

Method 2

MPI is best used for quickly exchanging numbers across grids of known size, but it can also work as a handy communication layer. To do so, we can use the Cereal library to serialize arbitrary C++ objects and then send the serialized representations using MPI, as follows. This is slower than using MPI as designed, because there are more intermediate copies, but it gives us the full flexibility of C++.

#include "mpi.h"
#include <cereal/types/vector.hpp>
#include <cereal/types/string.hpp>
#include <cereal/archives/binary.hpp>
#include <sstream>
#include <string>

struct Book {
  int id;
  std::string title;
  template <class Archive>
  void serialize( Archive & ar ) { ar(id,title); }
};

template<class T>
int MPI_Send(const T &data, int dest, int tag, MPI_Comm comm){
  std::stringstream ss;
  { //Needed for RAII in Cereal
    cereal::BinaryOutputArchive archive( ss );
    archive( data );
  }
  const auto serialized = ss.str();
  return MPI_Send(serialized.data(), serialized.size(), MPI_CHAR, dest, tag, comm);
}

template<class T>
int MPI_Recv(T &data, int source, int tag, MPI_Comm comm, MPI_Status *status){
  //Get number of bytes in incoming message
  MPI_Probe(source, tag, comm, status);
  int num_incoming;
  MPI_Get_count(status, MPI_CHAR, &num_incoming);

  //Allocate a buffer of appropriate size
  std::vector<char> incoming(num_incoming);

  //Receive the data
  auto ret = MPI_Recv(incoming.data(), num_incoming, MPI_CHAR, source, tag, comm, status);
  std::stringstream ss;
  ss.write(incoming.data(), num_incoming);

  //Unpack the data
  {
    cereal::BinaryInputArchive archive(ss);
    archive(data);
  }

  return ret;
}

int main(int argc, char **argv){
  MPI_Init(&argc, &argv);

  std::vector<Book> books(343);

  int myrank;
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

  if (myrank == 0) {
    books[3].id    = 4;
    books[3].title = "Hello, world!";

    MPI_Send(books, 1, 123, MPI_COMM_WORLD);

  } else if (myrank == 1){
    MPI_Status status;
    MPI_Recv(books, 0, 123, MPI_COMM_WORLD, &status);
    std::cout<<books[3].id<<" "<<books[3].title<<std::endl;
  }

  MPI_Finalize();

  return 0;
}
