
Why does padding the shared memory array by one column increase the speed of the kernel by 40%?

Why is this matrix transpose kernel faster when the shared memory array is padded by one column?

I found the kernel at PyCuda/Examples/MatrixTranspose.

Source

import pycuda.gpuarray as gpuarray
import pycuda.autoinit
import pycuda.driver
from pycuda.compiler import SourceModule
import numpy

block_size = 16    

def _get_transpose_kernel(offset):
    mod = SourceModule("""
    #define BLOCK_SIZE %(block_size)d
    #define A_BLOCK_STRIDE (BLOCK_SIZE * a_width)
    #define A_T_BLOCK_STRIDE (BLOCK_SIZE * a_height)

    __global__ void transpose(float *A_t, float *A, int a_width, int a_height)
    {
        // Base indices in A and A_t
        int base_idx_a   = blockIdx.x * BLOCK_SIZE +
                           blockIdx.y * A_BLOCK_STRIDE;
        int base_idx_a_t = blockIdx.y * BLOCK_SIZE +
                           blockIdx.x * A_T_BLOCK_STRIDE;

        // Global indices in A and A_t
        int glob_idx_a   = base_idx_a + threadIdx.x + a_width * threadIdx.y;
        int glob_idx_a_t = base_idx_a_t + threadIdx.x + a_height * threadIdx.y;

        /** why does the +1 offset make the kernel faster? **/
        __shared__ float A_shared[BLOCK_SIZE][BLOCK_SIZE+%(offset)d]; 

        // Load a tile of A into shared memory (stored untransposed)
        A_shared[threadIdx.y][threadIdx.x] = A[glob_idx_a];

        __syncthreads();

        // Read the tile back with swapped indices (each half-warp reads
        // down one column of A_shared) and write it, coalesced, to A_t
        A_t[glob_idx_a_t] = A_shared[threadIdx.x][threadIdx.y];
    }
    """% {"block_size": block_size, "offset": offset})

    kernel = mod.get_function("transpose")
    kernel.prepare("PPii", block=(block_size, block_size, 1))
    return kernel 


def transpose(tgt, src, offset):
    krnl = _get_transpose_kernel(offset)
    w, h = src.shape
    assert tgt.shape == (h, w)
    assert w % block_size == 0
    assert h % block_size == 0
    krnl.prepared_call((w / block_size, h / block_size), tgt.gpudata, src.gpudata, w, h)


def run_benchmark():
    from pycuda.curandom import rand
    print pycuda.autoinit.device.name()
    print "time\tGB/s\tsize\toffset\t"
    for offset in [0,1]:
        for size in [2048,2112]:

            source = rand((size, size), dtype=numpy.float32)
            target = gpuarray.empty((size, size), dtype=source.dtype)

            start = pycuda.driver.Event()
            stop = pycuda.driver.Event()

            warmup = 2
            for i in range(warmup):
                transpose(target, source, offset)

            pycuda.driver.Context.synchronize()
            start.record()

            count = 10
            for i in range(count):
                transpose(target, source, offset)

            stop.record()
            stop.synchronize()

            elapsed_seconds = stop.time_since(start) * 1e-3
            # count transposes, each reading and writing source.nbytes bytes
            mem_bw = source.nbytes / elapsed_seconds * 2 * count / 1024 / 1024 / 1024

            print "%6.4fs\t%6.4f\t%i\t%i" %(elapsed_seconds,mem_bw,size,offset)


run_benchmark()

Output

Quadro FX 580
time    GB/s    size    offset  
0.0802s 3.8949  2048    0
0.0829s 4.0105  2112    0
0.0651s 4.7984  2048    1
0.0595s 5.5816  2112    1

Code adopted

Answer

The answer is shared memory bank conflicts. The CUDA hardware you are using arranges shared memory into 16 banks, and shared memory is sequentially "striped" across those 16 banks. If two threads try to access the same bank simultaneously, a conflict occurs and the accesses must be serialized. This is what you are seeing here. By extending the stride of the shared memory array by 1, you ensure that the same column indices in successive rows of the shared array fall in different banks, which eliminates most of the possible conflicts.
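The effect is easy to see with a small host-side sketch. This is not from the original post; it only assumes the compute 1.x rule that a 32-bit word at linear index i lives in bank i % 16, and that a half-warp reading A_shared[threadIdx.x][threadIdx.y] walks down one column of the tile:

NUM_BANKS = 16
BLOCK_SIZE = 16

for offset in [0, 1]:
    stride = BLOCK_SIZE + offset          # row stride of A_shared in words
    # A_shared[row][col] sits at word index row * stride + col; a half-warp
    # reads rows 0..15 of a single column, so fix col = 0.
    banks = [(row * stride) % NUM_BANKS for row in range(BLOCK_SIZE)]
    print "offset=%d banks=%s" % (offset, banks)

# offset=0 prints banks=[0, 0, ..., 0]: all 16 reads hit bank 0 and serialize.
# offset=1 prints banks=[0, 1, ..., 15]: one read per bank, no conflict.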

This phenomenon (and an associated global memory phenomenon called partition camping) is discussed in depth in the "Optimizing Matrix Transpose in CUDA" paper which ships with the SDK matrix transpose example. It is well worth reading.
