Using CUDA Thrust with arrays instead of vectors for inclusive_scan

I have code that was given to me by @ms:

#include <thrust/device_vector.h>
#include <thrust/scan.h>
#include <thrust/scatter.h>
#include <thrust/copy.h>
#include <thrust/functional.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/iterator/counting_iterator.h>
#include <iostream>
#include <iterator>

struct omit_negative : public thrust::unary_function<int, int>
{
  __host__ __device__
  int operator()(int value)
  {
    if (value<0)
    {
      value = 0;
    }
    return value;
  }
};

int main()
{
  int array[] = {2,1,-1,3,-1,2};
  const int array_size = sizeof(array)/sizeof(array[0]);
  thrust::device_vector<int> d_array(array, array + array_size);
  thrust::device_vector<int> d_result(array_size);

  std::cout << "input data" << std::endl;
  thrust::copy(d_array.begin(), d_array.end(), std::ostream_iterator<int>(std::cout, " "));

  thrust::inclusive_scan(thrust::make_transform_iterator(d_array.begin(), omit_negative()),
                         thrust::make_transform_iterator(d_array.end(),   omit_negative()),
                         d_result.begin());

  std::cout << std::endl << "after inclusive_scan" << std::endl;
  thrust::copy(d_result.begin(), d_result.end(), std::ostream_iterator<int>(std::cout, " "));

  using namespace thrust::placeholders;
  thrust::scatter_if(d_array.begin(),
                     d_array.end(),
                     thrust::make_counting_iterator(0),
                     d_array.begin(),
                     d_result.begin(),
                     _1<0
                    );

  std::cout << std::endl << "after scatter_if" << std::endl;
  thrust::copy(d_result.begin(), d_result.end(), std::ostream_iterator<int>(std::cout, " "));
  std::cout << std::endl;
}
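
Tracing through by hand: omit_negative maps the input to non-negative values, the inclusive scan accumulates them, and scatter_if then writes the original -1 values back at their indices (2 and 4):

input:           2 1 -1 3 -1 2
after transform: 2 1  0 3  0 2
after scan:      2 3  3 6  6 8
after scatter:   2 3 -1 6 -1 8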

It refers to a previous question.

I didn't know about Thrust before, but now I think I'll drop the idea of writing my own code and use Thrust instead. I modified my algorithm: instead of -1 the array now contains 0s, so the make_transform_iterator step is no longer necessary. Also, your example creates the array on the host, but I actually have the array already prepared on the device, and I'd like to use it directly (instead of vectors) to avoid allocating redundant memory and copying data (that costs time, and minimal time cost is my goal). I'm not sure how to use raw arrays instead of vectors. Here is what I've written:

int* dev_l_set = 0; 
cudaMalloc((void**)&dev_l_set, actualVerticesRowCount * sizeof(int)); 

...prepare array in kernel... 

thrust::device_vector<int> d_result(actualVerticesRowCount); 

thrust::inclusive_scan(dev_l_set, dev_l_set + actualVerticesRowCount, dev_l_set); 

using namespace thrust::placeholders; 
thrust::scatter_if(dev_l_set, dev_l_set + actualVerticesRowCount, thrust::make_counting_iterator(0), dev_l_set, d_result.begin(), _1 <= 0); 
cudaFree(dev_l_set); 

dev_l_set = thrust::raw_pointer_cast(d_result.data());

I can't cast from a device_vector to int*, but I'd like to store the result of the scan in the original dev_l_set array. It would also be great to do this in place. Is it necessary to use d_result in scatter_if?
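
For reference, a minimal sketch of converting between a raw device pointer and Thrust (the function name sketch, the shorthand n for actualVerticesRowCount, and the variable raw are illustrative): wrapping the raw pointer with thrust::device_pointer_cast makes Thrust run the algorithm on the device (a bare pointer would be treated as host memory), while thrust::raw_pointer_cast goes the other way, although the device_vector still owns and will free that storage.

#include <thrust/device_vector.h>
#include <thrust/device_ptr.h>
#include <thrust/scan.h>

void sketch(int* dev_l_set, int n) // dev_l_set was allocated with cudaMalloc
{
  // raw pointer -> Thrust: wrap it so the algorithm is dispatched to the device
  thrust::device_ptr<int> dev_ptr = thrust::device_pointer_cast(dev_l_set);
  thrust::inclusive_scan(dev_ptr, dev_ptr + n, dev_ptr);

  // Thrust -> raw pointer: the pointer is valid only while d_result is alive,
  // because the vector frees its storage in its destructor
  thrust::device_vector<int> d_result(n);
  int* raw = thrust::raw_pointer_cast(d_result.data());
  (void)raw;
}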

Actual input (stored in an int* on the device side), for example:

dev_l_set[0] = 0
dev_l_set[1] = 2
dev_l_set[2] = 0
dev_l_set[3] = 3
dev_l_set[4] = 0
dev_l_set[5] = 1

Desired output for the above input:

dev_l_set[0] = 0
dev_l_set[1] = 2
dev_l_set[2] = 0
dev_l_set[3] = 5
dev_l_set[4] = 0
dev_l_set[5] = 6

dev_l_set should hold the input, the scan should run in place, and at the end dev_l_set should hold the output.
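
Working through the example above:

input:          0 2 0 3 0 1
inclusive scan: 0 2 2 5 5 6
zeros restored: 0 2 0 5 0 6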

It could be something like this:

int* dev_l_set = 0; 
cudaMalloc((void**)&dev_l_set, actualVerticesRowCount * sizeof(int)); 

...prepare array in kernel... (see input data) 

thrust::inclusive_scan(dev_l_set, dev_l_set + actualVerticesRowCount, dev_l_set); 

using namespace thrust::placeholders; 
thrust::scatter_if(dev_l_set, dev_l_set + actualVerticesRowCount, thrust::make_counting_iterator(0), dev_l_set, dev_l_set, _1 <= 0); 

My CUDA version (the minimum the app has to work with) is 5.5 (Tesla M2070), and unfortunately I can't use C++11.

You can do both the inclusive scan and the scatter step in place, without an additional result vector.

The following example uses the data from a raw device pointer directly, without a thrust::device_vector. After the inclusive scan, the previously 0 elements are restored. This works because the inputs are non-negative: the running sum stays flat exactly where an input element was 0, so every position whose scanned value equals its predecessor must originally have held a 0.

As @JaredHoberock pointed out, one should not rely on code residing in thrust::detail. I have therefore edited my answer and copied part of the code from thrust::detail::head_flags directly into this example.

#include <thrust/scan.h>
#include <thrust/scatter.h>
#include <thrust/copy.h>
#include <thrust/device_ptr.h>
#include <thrust/iterator/constant_iterator.h>

#include <iostream>
#include <iterator>


// the following code is copied from <thrust/detail/range/head_flags.h>
#include <thrust/detail/config.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/tuple.h>
#include <thrust/functional.h>


template<typename RandomAccessIterator,
         typename BinaryPredicate = thrust::equal_to<typename thrust::iterator_value<RandomAccessIterator>::type>,
         typename ValueType = bool,
         typename IndexType = typename thrust::iterator_difference<RandomAccessIterator>::type>
  class head_flags
{

  public:
    struct head_flag_functor
    {
      BinaryPredicate binary_pred; // this must be the first member for performance reasons
      IndexType n;

      typedef ValueType result_type;

      __host__ __device__
      head_flag_functor(IndexType n)
        : binary_pred(), n(n)
      {}

      __host__ __device__
      head_flag_functor(IndexType n, BinaryPredicate binary_pred)
        : binary_pred(binary_pred), n(n)
      {}

      template<typename Tuple>
      __host__ __device__ __thrust_forceinline__
      result_type operator()(const Tuple &t)
      {
        const IndexType i = thrust::get<0>(t);

        // note that we do not dereference the tuple's 2nd element when i <= 0
        // and therefore do not dereference a bad location at the boundary
        return (i == 0 || !binary_pred(thrust::get<1>(t), thrust::get<2>(t)));
      }
    };

    typedef thrust::counting_iterator<IndexType> counting_iterator;

  public:
    typedef thrust::transform_iterator<
      head_flag_functor,
      thrust::zip_iterator<thrust::tuple<counting_iterator,RandomAccessIterator,RandomAccessIterator> >
    > iterator;

    __host__ __device__
    head_flags(RandomAccessIterator first, RandomAccessIterator last)
      : m_begin(thrust::make_transform_iterator(thrust::make_zip_iterator(thrust::make_tuple(thrust::counting_iterator<IndexType>(0), first, first - 1)),
                                                head_flag_functor(last - first))),
        m_end(m_begin + (last - first))
    {}

    __host__ __device__
    head_flags(RandomAccessIterator first, RandomAccessIterator last, BinaryPredicate binary_pred)
      : m_begin(thrust::make_transform_iterator(thrust::make_zip_iterator(thrust::make_tuple(thrust::counting_iterator<IndexType>(0), first, first - 1)),
                                                head_flag_functor(last - first, binary_pred))),
        m_end(m_begin + (last - first))
    {}

    __host__ __device__
    iterator begin() const
    {
      return m_begin;
    }

    __host__ __device__
    iterator end() const
    {
      return m_end;
    }

    template<typename OtherIndex>
    __host__ __device__
    typename iterator::reference operator[](OtherIndex i)
    {
      return *(begin() + i);
    }

  private:
    iterator m_begin, m_end;
};

template<typename RandomAccessIterator>
__host__ __device__
head_flags<RandomAccessIterator>
  make_head_flags(RandomAccessIterator first, RandomAccessIterator last)
{
  return head_flags<RandomAccessIterator>(first, last);
}


int main()
{
    // copy data to device, this will be produced by your kernel
    int array[] = {0,2,0,3,0,1};
    const int array_size = sizeof(array)/sizeof(array[0]);
    int* dev_l_set;
    cudaMalloc((void**)&dev_l_set, array_size * sizeof(int));
    cudaMemcpy(dev_l_set, array, array_size * sizeof(int), cudaMemcpyHostToDevice);

    // wrap raw pointer in a thrust::device_ptr so thrust knows that this memory is located on the GPU
    thrust::device_ptr<int> dev_ptr = thrust::device_pointer_cast(dev_l_set);
    thrust::inclusive_scan(dev_ptr,
                           dev_ptr + array_size,
                           dev_ptr);

    // copy result back to host for printing
    cudaMemcpy(array, dev_l_set, array_size * sizeof(int), cudaMemcpyDeviceToHost);
    std::cout << "after inclusive_scan" << std::endl;
    thrust::copy(array, array+array_size, std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;

    using namespace thrust::placeholders;
    thrust::scatter_if(thrust::make_constant_iterator(0),
                       thrust::make_constant_iterator(0) + array_size,
                       thrust::make_counting_iterator(0),
                       make_head_flags(dev_ptr, dev_ptr + array_size).begin(),
                       dev_ptr,
                       !_1);

    // copy result back to host for printing
    cudaMemcpy(array, dev_l_set, array_size * sizeof(int), cudaMemcpyDeviceToHost);
    std::cout << "after scatter_if" << std::endl;
    thrust::copy(array, array+array_size, std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;

    // release the device memory allocated with cudaMalloc
    cudaFree(dev_l_set);
}

Output:

after inclusive_scan
0 2 2 5 5 6 
after scatter_if
0 2 0 5 0 6 
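
To illustrate the restore step on this data: a head flag is 1 wherever a scanned value differs from its predecessor (or at index 0), and scatter_if writes a 0 wherever the flag is 0:

scanned:    0 2 2 5 5 6
head flags: 1 1 0 1 0 1
restored:   0 2 0 5 0 6   (0 written where the flag is 0)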
