
Reduction (sum) along arbitrary axes of a multidimensional array

I want to perform a sum reduction along arbitrary axes of a multidimensional array which may have an arbitrary number of dimensions (e.g. axis 5 of a 10-dimensional array). The array is stored in row-major format, i.e. as a flat vector together with the strides along each axis.

I know how to perform this reduction using nested loops (see the example below), but doing this results in a hard-coded axis (the reduction is along axis 1 below) and a hard-coded number of dimensions (4 below). How can I generalize this without using the nested loops?


#include <iostream>
#include <vector>

int main()
{
  // shape, stride & data of the matrix

  size_t shape  [] = { 2, 3, 4, 5};
  size_t strides[] = {60,20, 5, 1};

  std::vector<double> data(2*3*4*5);

  for ( size_t i = 0 ; i < data.size() ; ++i ) data[i] = 1.;

  // shape, stride & data (zero-initialized) of the reduced matrix

  size_t rshape  [] = { 2, 4, 5};
  size_t rstrides[] = {20, 5, 1};

  std::vector<double> rdata(2*4*5, 0.0);

  // compute reduction

  for ( size_t a = 0 ; a < shape[0] ; ++a )
    for ( size_t c = 0 ; c < shape[2] ; ++c )
      for ( size_t d = 0 ; d < shape[3] ; ++d )
        for ( size_t b = 0 ; b < shape[1] ; ++b )
          rdata[ a*rstrides[0]                 + c*rstrides[1] + d*rstrides[2] ] += \
          data [ a*strides [0] + b*strides [1] + c*strides [2] + d*strides [3] ];

  // print resulting reduced matrix

  for ( size_t a = 0 ; a < rshape[0] ; ++a )
    for ( size_t b = 0 ; b < rshape[1] ; ++b )
      for ( size_t c = 0 ; c < rshape[2] ; ++c )
        std::cout << "(" << a << "," << b << "," << c << ") " << \
        rdata[ a*rstrides[0] + b*rstrides[1] + c*rstrides[2] ] << std::endl;

  return 0;
}

Note: I want to avoid 'decompressing' and 'compressing' a counter. By this I mean that, in pseudo-code, I could do the following (a concrete sketch of this approach follows after the pseudo-code):

for ( size_t i = 0 ; i < data.size() ; ++i ) 
{
  i -> {a,b,c,d}

  discard "b" (axis 1) -> {a,c,d}

  rdata(a,c,d) += data(a,b,c,d)
}
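
For completeness, here is a minimal sketch of what this 'decompress / discard / recompress' approach could look like for an arbitrary axis; the function and variable names are mine, and this is exactly the per-element index juggling I would like to avoid:

#include <cstddef>
#include <vector>

// Sketch only: reduce a row-major array over "axis" by decomposing every
// flat index into a multi-index, dropping the entry for "axis", and
// recomposing a flat index into the reduced array.
std::vector<double> reduce_by_unravelling(const std::vector<double>& data,
                                          const std::vector<std::size_t>& shape,
                                          std::size_t axis)
{
  std::size_t rsize = 1;
  for (std::size_t d = 0; d < shape.size(); ++d)
    if (d != axis) rsize *= shape[d];

  std::vector<double> rdata(rsize, 0.0);

  for (std::size_t i = 0; i < data.size(); ++i)
  {
    std::size_t rest = i, ri = 0, rstride = 1;

    // walk the axes from last to first: 'decompress' i, skip 'axis',
    // 'compress' the remaining indices into the reduced flat index ri
    for (std::size_t d = shape.size(); d-- > 0; )
    {
      std::size_t idx_d = rest % shape[d];
      rest /= shape[d];

      if (d != axis)
      {
        ri      += idx_d * rstride;
        rstride *= shape[d];
      }
    }

    rdata[ri] += data[i];
  }

  return rdata;
}

With the shape {2,3,4,5} and axis 1 from the example above, this produces the same 2x4x5 result as the nested loops.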

I don't know how efficient this code is, but I am fairly confident it is correct.

Here is what's going on:

A little on adjusted_strides:

For axis_count = 4, adjusted_strides has size 5, where (a sketch for an arbitrary axis_count follows after this listing):

 adjusted_strides[0] = shape[0]*shape[1]*shape[2]*shape[3];
 adjusted_strides[1] = shape[1]*shape[2]*shape[3];
 adjusted_strides[2] = shape[2]*shape[3];
 adjusted_strides[3] = shape[3];
 adjusted_strides[4] = 1;
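
As a side note, adjusted_strides can be built from the shape alone for any number of dimensions. A minimal sketch (my own formulation, assuming a row-major layout and the shape and axis_count variables from the code below):

  std::vector<size_t> adjusted_strides(axis_count + 1, 1);

  // adjusted_strides[d] = shape[d] * shape[d+1] * ... * shape[axis_count-1]
  for (size_t d = axis_count; d-- > 0; )
    adjusted_strides[d] = adjusted_strides[d + 1] * shape[d];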

Let's take the example where the number of dimensions is 4 and the shape of the multidimensional array (A) is n0, n1, n2, n3.

When we need to transform this array into another multidimensional array (B) of shape n0, n2, n3 (compressing axis = 1, 0-based), we proceed as follows:

For each index of A we try to find its position in B. Let A[i][j][k][l] be any element in A. Its position in flat_A will be A[i*n1*n2*n3 + j*n2*n3 + k*n3 + l]:

idx = i*n1*n2*n3 + j*n2*n3 + k*n3 + l;

In the compressed array B, this element will be a part of (i.e. added to) B[i][k][l]. In flat_B the index is new_idx = i*n2*n3 + k*n3 + l;

How do we form new_idx from idx ?

  1. All the axes before the compressed axis have the shape of the compressed axis as a part of their product. In our example we have to remove axis 1, so every axis before it (only one here: the 0th axis, represented by i) has n1 as a part of its product ( i*n1*n2*n3 ).

  2. All the axes after the compressed axis remain unaffected.

  3. Finally, we need to do two things:

    1. Isolate the indices of the axes before the index of the axis to be compressed and remove the shape of this axis:

      Integer division: idx / (n1*n2*n3) ( == idx / adjusted_strides[1] ).

      We are left with just i, which can be readjusted according to the new shape (by multiplying with n2*n3): we get

      i*n2*n3 ( == i * adjusted_strides[2] ).

    2. We isolate the axes after the compressed axis, which are unaffected by its shape:

      idx % (n2*n3) ( == idx % adjusted_strides[2] ),

      which gives us k*n3 + l.

    3. Adding the results of steps 1 and 2 gives:

      computed_idx = i*n2*n3 + k*n3 + l;

      which is the same as new_idx. So, our transformation was correct :). (A quick numeric check follows after this list.)
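
As a quick numeric check, take the question's shape n0,n1,n2,n3 = 2,3,4,5 (so adjusted_strides = {120, 60, 20, 5, 1}) and the element i,j,k,l = 1,2,3,4:

  idx = 1*60 + 2*20 + 3*5 + 4 = 119
  idx/60*20 + idx%20 = 1*20 + 19 = 39
  new_idx = 1*20 + 3*5 + 4 = 39

Both routes give 39, as expected.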

Code:

Note: ni refers to new_idx.

  size_t cmp_axis = 1, axis_count = sizeof shape / sizeof *shape;
  std::vector<size_t> adjusted_strides;

  // adjusted_strides is basically the same as strides,
  // the only difference being that its first element is the
  // total number of elements in the n-dimensional array.
  //
  // The only reason to introduce this array is
  // so that I don't have to write any if-elses.
  adjusted_strides.push_back(shape[0]*strides[0]);
  adjusted_strides.insert(adjusted_strides.end(), strides, strides + axis_count);

  for(size_t i = 0; i < data.size(); ++i) {
    size_t ni = i / adjusted_strides[cmp_axis] * adjusted_strides[cmp_axis+1]
              + i % adjusted_strides[cmp_axis+1];
    rdata[ni] += data[i];
  }

Output (axis = 1)

(0,0,0) 3
(0,0,1) 3
(0,0,2) 3
(0,0,3) 3
(0,0,4) 3
(0,1,0) 3
(0,1,1) 3
(0,1,2) 3
(0,1,3) 3
(0,1,4) 3
(0,2,0) 3
(0,2,1) 3
(0,2,2) 3
(0,2,3) 3
(0,2,4) 3
(0,3,0) 3
(0,3,1) 3
(0,3,2) 3
...

Tested here.

For further reading, refer to this.
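
If you want this as a reusable function for an arbitrary number of dimensions and an arbitrary axis, a minimal sketch based on the formula above could look as follows (the function name reduce_sum_axis and the construction of adjusted_strides from the shape are my own choices, not part of the snippet above):

#include <cstddef>
#include <iostream>
#include <vector>

// Sketch: sum-reduce a row-major n-dimensional array over one axis using the
// integer-division / modulo index transformation explained above.
std::vector<double> reduce_sum_axis(const std::vector<double>& data,
                                    const std::vector<std::size_t>& shape,
                                    std::size_t cmp_axis)
{
  // adjusted_strides[d] = shape[d] * ... * shape[n-1], with a trailing 1
  std::vector<std::size_t> adjusted_strides(shape.size() + 1, 1);
  for (std::size_t d = shape.size(); d-- > 0; )
    adjusted_strides[d] = adjusted_strides[d + 1] * shape[d];

  // the reduced array holds all elements except those along cmp_axis
  std::vector<double> rdata(adjusted_strides[0] / shape[cmp_axis], 0.0);

  for (std::size_t i = 0; i < data.size(); ++i)
  {
    std::size_t ni = i / adjusted_strides[cmp_axis] * adjusted_strides[cmp_axis + 1]
                   + i % adjusted_strides[cmp_axis + 1];
    rdata[ni] += data[i];
  }

  return rdata;
}

int main()
{
  std::vector<std::size_t> shape = {2, 3, 4, 5};
  std::vector<double> data(2 * 3 * 4 * 5, 1.0);

  std::vector<double> rdata = reduce_sum_axis(data, shape, 1);

  // 40 elements, each equal to 3 (the size of the reduced axis)
  std::cout << rdata.size() << " elements, rdata[0] = " << rdata[0] << std::endl;

  return 0;
}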

I think this should work:

#include <iostream>
#include <vector>

int main()
{
  // shape, stride & data of the matrix
  size_t shape  [] = {  2, 3, 4, 5};
  size_t strides[] = {60, 20, 5, 1};
  std::vector<double> data(2 * 3 * 4 * 5);

  // shape, strides & data (zero-initialized) of the reduced matrix;
  // written for axis = 1; for another axis, adjust rshape, rstrides
  // and the size of rdata to the reduced shape
  size_t rshape  [] = { 2, 4, 5};
  size_t rstrides[] = {20, 5, 1};
  std::vector<double> rdata(2 * 4 * 5, 0.0);

  const unsigned int NDIM = 4;
  unsigned int axis = 1;

  for (size_t i = 0 ; i < data.size() ; ++i) data[i] = 1;

  // How many elements to advance after each reduction
  size_t step_axis = strides[NDIM - 1];
  if (axis == NDIM - 1)
  {
      step_axis = strides[NDIM - 2];
  }
  // Position of the first element of the current reduction
  size_t offset_base = 0;
  size_t offset = 0;
  size_t s = 0;
  for (auto &v : rdata)
  {
      // Current reduced element
      size_t offset_i = offset;
      for (unsigned int i = 0; i < shape[axis]; i++)
      {
          // Reduce
          v += *(data.data() + offset_i);
          // Advance to next element
          offset_i += strides[axis];
      }
      s = (s + 1) % strides[axis];
      if (s == 0)
      {
          offset_base += strides[axis - 1];
          offset = offset_base;
      }
      else
      {
          offset += step_axis;
      }
  }

  // Print
  for ( size_t a = 0 ; a < rshape[0] ; ++a )
    for ( size_t b = 0 ; b < rshape[1] ; ++b )
      for ( size_t c = 0 ; c < rshape[2] ; ++c )
        std::cout << "(" << a << "," << b << "," << c << ") " << \
        rdata[ a*rstrides[0] + b*rstrides[1] + c*rstrides[2] ] << std::endl;

  return 0;
}

Output:

(0,0,0) 3
(0,0,1) 3
(0,0,2) 3
(0,0,3) 3
(0,0,4) 3
(0,1,0) 3
(0,1,1) 3
(0,1,2) 3
(0,1,3) 3
(0,1,4) 3
(0,2,0) 3
(0,2,1) 3
(0,2,2) 3
// ...

Setting axis = 3 yields:

(0,0,0) 5
(0,0,1) 5
(0,0,2) 5
(0,0,3) 5
(0,0,4) 5
(0,1,0) 5
(0,1,1) 5
(0,1,2) 5
(0,1,3) 5
(0,1,4) 5
(0,2,0) 5
(0,2,1) 5
(0,2,2) 5
(0,2,3) 5
// ...
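
Not part of the answer above, but note that when you change axis, the reduced shape, the reduced strides and the size of rdata have to change with it. They can be derived from shape instead of being hard-coded; a minimal sketch, assuming the variables shape, NDIM and axis from the code above:

  std::vector<size_t> rshape;
  for (unsigned int d = 0; d < NDIM; ++d)
    if (d != axis) rshape.push_back(shape[d]);

  // row-major strides of the reduced array
  std::vector<size_t> rstrides(rshape.size(), 1);
  for (size_t d = rshape.size() - 1; d-- > 0; )
    rstrides[d] = rstrides[d + 1] * rshape[d + 1];

  std::vector<double> rdata(rshape[0] * rstrides[0], 0.0);

For axis = 3 this gives rshape = {2,3,4}, rstrides = {12,4,1} and an rdata of 24 elements.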
