
Scaling the rows of a matrix with CUDA

In some computations on the GPU, I need to scale the rows in a matrix so that all the elements in a given row sum to 1.

| a_1,1  a_1,2  ...  a_1,N |      | alpha_1*a_1,1  alpha_1*a_1,2  ...  alpha_1*a_1,N |
| a_2,1  a_2,2  ...  a_2,N |  =>  | alpha_2*a_2,1  alpha_2*a_2,2  ...  alpha_2*a_2,N |
|   .      .           .   |      |       .               .                   .      |
| a_N,1  a_N,2  ...  a_N,N |      | alpha_N*a_N,1  alpha_N*a_N,2  ...  alpha_N*a_N,N |

where

alpha_i = 1.0 / (a_i,1 + a_i,2 + ... + a_i,N)

I need the vector of alpha's and the scaled matrix, and I would like to do this in as few BLAS calls as possible. The code is going to run on NVIDIA CUDA hardware. Does anyone know a smart way to do this?

cuBLAS 5.0 introduced a BLAS-like routine called cublas<t>dgmm, which multiplies a matrix by a diagonal matrix (represented by a vector).

There is a left option (which scales the rows) and a right option (which scales the columns).

Please refer to the cuBLAS 5.0 documentation for details.

So for your problem, you need to create a vector containing all the alpha values on the GPU and call cublas<t>dgmm with the left option.
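A minimal sketch of that call, assuming a column-major m x n matrix d_A and a device vector d_alpha of length m that already holds the alpha values (the names d_A, d_alpha, m and n are placeholders, not from the question):

// --- C = diag(d_alpha) * A, written back in place (the cuBLAS documentation
//     allows C == A when ldc == lda)
cublasSdgmm(handle, CUBLAS_SIDE_LEFT, m, n,
            d_A, m,           // --- Input matrix and its leading dimension
            d_alpha, 1,       // --- Scaling vector and its stride
            d_A, m);          // --- Output matrix, here equal to the input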

I would like to update the answers above with an example considering the use of CUDA Thrust's thrust::transform and of cuBLAS's cublas<t>dgmm. I am skipping the calculation of the scaling factors alpha's, since this has already been dealt with at

Reduce matrix rows with CUDA

and

Reduce matrix columns with CUDA

Below is a complete example:

#include <cstdio>
#include <iostream>

#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/reduce.h>
#include <thrust/random.h>
#include <thrust/sort.h>
#include <thrust/unique.h>
#include <thrust/equal.h>

#include <cublas_v2.h>

#include "Utilities.cuh"
#include "TimingGPU.cuh"

/**************************************************************/
/* CONVERT LINEAR INDEX TO ROW INDEX - NEEDED FOR APPROACH #1 */
/**************************************************************/
template <typename T>
struct linear_index_to_row_index : public thrust::unary_function<T,T> {

    T Ncols; // --- Number of columns

    __host__ __device__ linear_index_to_row_index(T Ncols) : Ncols(Ncols) {}

    __host__ __device__ T operator()(T i) { return i / Ncols; }
};

/***********************/
/* RECIPROCAL OPERATOR */
/***********************/
struct Inv: public thrust::unary_function<float, float>
{
    __host__ __device__ float operator()(float x)
    {
        return 1.0f / x;
    }
};

/********/
/* MAIN */
/********/
int main()
{
    /**************************/
    /* SETTING UP THE PROBLEM */
    /**************************/

    const int Nrows = 10;           // --- Number of rows
    const int Ncols =  3;           // --- Number of columns  

    // --- Random uniform integer distribution between 0 and 100
    thrust::default_random_engine rng;
    thrust::uniform_int_distribution<int> dist1(0, 100);

    // --- Random uniform integer distribution between 1 and 4
    thrust::uniform_int_distribution<int> dist2(1, 4);

    // --- Matrix allocation and initialization
    thrust::device_vector<float> d_matrix(Nrows * Ncols);
    for (size_t i = 0; i < d_matrix.size(); i++) d_matrix[i] = (float)dist1(rng);

    // --- Normalization vector allocation and initialization
    thrust::device_vector<float> d_normalization(Nrows);
    for (size_t i = 0; i < d_normalization.size(); i++) d_normalization[i] = (float)dist2(rng);

    printf("\n\nOriginal matrix\n");
    for(int i = 0; i < Nrows; i++) {
        std::cout << "[ ";
        for(int j = 0; j < Ncols; j++)
            std::cout << d_matrix[i * Ncols + j] << " ";
        std::cout << "]\n";
    }

    printf("\n\nNormlization vector\n");
    for(int i = 0; i < Nrows; i++) std::cout << d_normalization[i] << "\n";

    TimingGPU timerGPU;

    /*********************************/
    /* ROW NORMALIZATION WITH THRUST */
    /*********************************/

    thrust::device_vector<float> d_matrix2(d_matrix);

    timerGPU.StartCounter();
    // --- Divide each element by the normalization value of its row: the
    //     permutation iterator maps the linear element index i to the row
    //     index i / Ncols and gathers the corresponding normalization value
    thrust::transform(d_matrix2.begin(), d_matrix2.end(),
                      thrust::make_permutation_iterator(
                                d_normalization.begin(),
                                thrust::make_transform_iterator(thrust::make_counting_iterator(0), linear_index_to_row_index<int>(Ncols))),
                      d_matrix2.begin(),
                      thrust::divides<float>());
    std::cout << "Timing - Thrust = " << timerGPU.GetCounter() << "\n";

    printf("\n\nNormalized matrix - Thrust case\n");
    for(int i = 0; i < Nrows; i++) {
        std::cout << "[ ";
        for(int j = 0; j < Ncols; j++)
            std::cout << d_matrix2[i * Ncols + j] << " ";
        std::cout << "]\n";
    }

    /*********************************/
    /* ROW NORMALIZATION WITH CUBLAS */
    /*********************************/
    d_matrix2 = d_matrix;

    cublasHandle_t handle;
    cublasSafeCall(cublasCreate(&handle));

    timerGPU.StartCounter();
    // --- Replace the normalization values with their reciprocals, so that the
    //     multiplication by a diagonal matrix below performs the division
    thrust::transform(d_normalization.begin(), d_normalization.end(), d_normalization.begin(), Inv());
    // --- cuBLAS sees the row-major Nrows x Ncols matrix as a column-major
    //     Ncols x Nrows matrix, so scaling its columns (CUBLAS_SIDE_RIGHT)
    //     scales the rows of the original matrix
    cublasSafeCall(cublasSdgmm(handle, CUBLAS_SIDE_RIGHT, Ncols, Nrows, thrust::raw_pointer_cast(&d_matrix2[0]), Ncols,
                   thrust::raw_pointer_cast(&d_normalization[0]), 1, thrust::raw_pointer_cast(&d_matrix2[0]), Ncols));
    std::cout << "Timing - cuBLAS = " << timerGPU.GetCounter() << "\n";

    printf("\n\nNormalized matrix - cuBLAS case\n");
    for(int i = 0; i < Nrows; i++) {
        std::cout << "[ ";
        for(int j = 0; j < Ncols; j++)
            std::cout << d_matrix2[i * Ncols + j] << " ";
        std::cout << "]\n";
    }

    return 0;
}

The Utilities.cu and Utilities.cuh files are maintained here and are omitted. The TimingGPU.cu and TimingGPU.cuh files are maintained here and are omitted as well.

I have tested the above code on a Kepler K20c, and these are the results:

Matrix size      Thrust      cuBLAS
2500 x 1250      0.20 ms     0.25 ms
5000 x 2500      0.77 ms     0.83 ms

In the cuBLAS timing, I am excluding the cublasCreate time. Even so, the CUDA Thrust version seems to be more convenient.

If you use BLAS gemv with a vector of ones, the result will be a vector of the reciprocals of the scaling factors (1/alpha) you need. That is the easy part.
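As a hedged sketch of that step, assuming a column-major m x n matrix d_A, a device vector d_ones of length n filled with 1.0f, and an output device vector d_rsums of length m (all placeholder names):

const float one = 1.0f, zero = 0.0f;
// --- d_rsums = A * ones, i.e. the vector of row sums, whose entries are the
//     reciprocals 1/alpha_i of the scaling factors
cublasSgemv(handle, CUBLAS_OP_N, m, n,
            &one, d_A, m,          // --- A and its leading dimension
            d_ones, 1,             // --- x: the vector of ones
            &zero, d_rsums, 1);    // --- y: receives the row sums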

Applying the factors row-wise is a bit harder, because standard BLAS doesn't have anything like a Hadamard-product operator you could use. Also, because you mention BLAS, I presume you are using column-major storage for your matrices, which is not so straightforward for row-wise operations. The really slow way to do it would be to call BLAS scal on each row with a pitch, but that would require one BLAS call per row, and the pitched memory access would kill performance because of its effect on coalescing and L1 cache coherency. A sketch of that anti-pattern follows, just to make it concrete.
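For illustration only, the per-row approach might look like the following, assuming a column-major m x n matrix d_A with leading dimension lda and host-side factors h_scale[i] = alpha_i (placeholder names):

// --- One cublasSscal call per row; the increment lda makes every call hop
//     across columns, exactly the uncoalesced access pattern described above
for (int row = 0; row < m; ++row)
    cublasSscal(handle, n, &h_scale[row], d_A + row, lda);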

My suggestion would be to use your own kernel for the second operation. It doesn't have to be all that complex, perhaps only something like this:

template<typename T>
__global__ void rowscale(T * X, const int M, const int N, const int LDA,
                         const T * ralpha)
{
    // --- Threads within a block stride through the rows of a column;
    //     blocks stride through the columns
    for(int row = threadIdx.x; row < M; row += blockDim.x) {
        const T rscale = T(1) / ralpha[row];
        for(int col = blockIdx.x; col < N; col += gridDim.x)
            X[row + col * LDA] *= rscale;   // --- Column-major element access
    }
}

That just has a bunch of blocks stepping through the columns while the threads within each block step through the rows, scaling as they go. It should work for any size of column-major matrix. Memory access should be coalesced, since consecutive threads touch consecutive rows within a column, but depending on how worried you are about performance there are a number of optimizations you could try. It at least gives a general idea of what to do; a possible launch is sketched below.
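A possible launch, assuming d_A is a column-major M x N float matrix with leading dimension LDA and d_rsums is a device vector of row sums (the 1/alpha values the kernel divides by); the names and the grid and block sizes are illustrative, not tuned:

rowscale<float><<<128, 256>>>(d_A, M, N, LDA, d_rsums);
cudaDeviceSynchronize();   // --- Wait for the kernel before using the result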
