
Python Numba Cuda slower than JIT

I am currently working on speeding up some numerical processing by offloading it to the GPU. I have some demo code below (the actual code will be more complex). I'm taking an NP array and counting how many values fall within a range.

Hardware-wise, I'm running an AMD 3600X (6 cores, 12 threads) and an RTX 2060 Super (2176 CUDA cores).

Sample code:

import time
import numpy as np
from numba import cuda
from numba import jit

width = 1024
height = 1024
size = width * height
print(f'Number of records {size}')

array_of_random = np.random.rand(size)
output_array = np.zeros(size, dtype=bool)
device_array = cuda.to_device(array_of_random)
device_output_array = cuda.device_array_like(output_array)


def count_array_standard(array, pivot_point, local_output_array):
    for i in range(array.shape[0]):
        if (pivot_point - 0.05) < array[i] < (pivot_point + 0.05):
            local_output_array[i] = True
        else:
            local_output_array[i] = False


@jit('(f8,b1[:])')
def count_array_jit(pivot_point, local_output_array):
    global array_of_random
    for i in range(len(array_of_random)):
        if (pivot_point - 0.05) < array_of_random[i] < (pivot_point + 0.05):
            local_output_array[i] = True
        else:
            local_output_array[i] = False


@cuda.jit()
def count_array_cuda(local_device_array, pivot_point, local_device_output_array):
    tx = cuda.threadIdx.x
    ty = cuda.blockIdx.x
    bw = cuda.blockDim.x
    pos = tx + ty * bw

    for i in range(pos, pos + bw):
        if i<local_device_output_array.size:
            if (pivot_point - 0.05) < local_device_array[i] < (pivot_point + 0.05):
                local_device_output_array[i] = True
            else:
                local_device_output_array[i] = False


print("")
print("Standard")
for x in range(3):
    start = time.perf_counter()
    count_array_standard(array_of_random, 0.5, output_array)
    result = np.sum(output_array)
    print(f'Run: {x} Result: {result} Time: {time.perf_counter() - start}')

print("")
print("Jit")
for x in range(3):
    start = time.perf_counter()
    count_array_jit(0.5, output_array)
    result = np.sum(output_array)
    print(f'Run: {x} Result: {result} Time: {time.perf_counter() - start}')

print("")
print("Cuda Jit")

threads_per_block = 16
blocks_per_grid = (array_of_random.size + (threads_per_block - 1)) // threads_per_block

for x in range(3):
    start = time.perf_counter()
    count_array_cuda[blocks_per_grid, threads_per_block](device_array, .5, device_output_array)
    result = np.sum(device_output_array.copy_to_host())
    print(f'Run: {x} Result: {result} Time: {time.perf_counter() - start}')

This gives me a set of results:

Number of records 1048576

Standard
Run: 0 Result: 104778 Time: 0.35327580000000003
Run: 1 Result: 104778 Time: 0.3521047999999999
Run: 2 Result: 104778 Time: 0.35452510000000004

Jit
Run: 0 Result: 104778 Time: 0.0020474000000001435
Run: 1 Result: 104778 Time: 0.001856599999999986
Run: 2 Result: 104778 Time: 0.0018399000000000054

Cuda Jit
Run: 0 Result: 104778 Time: 0.10867309999999986
Run: 1 Result: 104778 Time: 0.0023599000000000814
Run: 2 Result: 104778 Time: 0.002314700000000114

Both numba's basic jit and the cuda jit are quicker than the standard code. I did expect the initial jit run to take longer, but subsequent runs of the plain jit are quicker than cuda. I also see the best results from cuda when using around 16 threads per block, whereas I expected a higher thread count to be needed.
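
One way to sanity-check the thread-count observation is to sweep threads_per_block with a rough sketch like the one below. It reuses the kernel and device arrays defined above, warms the kernel up first, and uses cuda.synchronize() so only the kernel launch is timed; the exact numbers will of course vary by GPU.

# Rough sketch: sweep threads_per_block, reusing count_array_cuda and the
# device arrays defined above. The kernel's internal loop still covers
# blockDim.x elements per thread, so the output stays the same; only the
# launch configuration changes.
for tpb in (16, 32, 64, 128, 256):
    bpg = (array_of_random.size + tpb - 1) // tpb
    count_array_cuda[bpg, tpb](device_array, 0.5, device_output_array)  # warm-up launch
    cuda.synchronize()
    start = time.perf_counter()
    count_array_cuda[bpg, tpb](device_array, 0.5, device_output_array)
    cuda.synchronize()  # wait for the kernel before stopping the clock
    print(f'threads_per_block={tpb}: {time.perf_counter() - start:.6f}s')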

As I'm new to cuda coding, I'd like to know whether I'm missing something fundamental. Any guidance is gratefully received.

I see two issues:

  1. The amount of work you are doing per data item in the input array is too small to be interesting on the GPU.

  2. Your choice of thread organization, combined with the for-loop in the cuda.jit routine, appears to be doing redundant work, as the small illustration below shows.
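
To make that redundancy concrete, here is a tiny plain-Python illustration (just the index arithmetic, not CUDA code) of what each thread in the original kernel touches when blockDim.x is 16:

# Thread `pos` iterates over [pos, pos + blockDim.x), so adjacent threads
# overlap on all but one index, and most elements end up written 16 times.
bw = 16                              # blockDim.x in the original launch
for pos in range(4):                 # the first four global thread ids
    print('thread %d writes indices %d..%d' % (pos, pos, pos + bw - 1))
# thread 0 writes indices 0..15
# thread 1 writes indices 1..16
# thread 2 writes indices 2..17
# thread 3 writes indices 3..18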

To address item 1, you would probably need to do more work per item than just comparing it against the limits and writing the result of the comparison. Alternatively, if you are really motivated by this benchmark, you can time the kernel by itself, keeping the data movement separate, to see what the compute cost really is.

For a simple way to address item 2, I would get rid of the for-loop in the cuda.jit kernel and have each thread handle one element of the input array. Here is an example that does that (converted to python 2.x, because that was the machine setup where I had numba conveniently available):

$ cat t58.py
import time
import numpy as np
from numba import cuda
from numba import jit

width = 1024
height = 1024
size = width * height
print("Number of records")
print(size)

array_of_random = np.random.rand(size)
output_array = np.zeros(size, dtype=bool)
device_array = cuda.to_device(array_of_random)
device_output_array = cuda.device_array_like(output_array)


def count_array_standard(array, pivot_point, local_output_array):
    for i in range(array.shape[0]):
        if (pivot_point - 0.05) < array[i] < (pivot_point + 0.05):
            local_output_array[i] = True
        else:
            local_output_array[i] = False


@jit('(f8,b1[:])')
def count_array_jit(pivot_point, local_output_array):
    global array_of_random
    for i in range(len(array_of_random)):
        if (pivot_point - 0.05) < array_of_random[i] < (pivot_point + 0.05):
            local_output_array[i] = True
        else:
            local_output_array[i] = False


@cuda.jit()
def count_array_cuda(local_device_array, pivot_point, local_device_output_array):
    tx = cuda.threadIdx.x
    ty = cuda.blockIdx.x
    bw = cuda.blockDim.x
    i = tx + ty * bw
    if i<local_device_output_array.size:
        if (pivot_point - 0.05) < local_device_array[i] < (pivot_point + 0.05):
            local_device_output_array[i] = True
        else:
            local_device_output_array[i] = False


print("")
print("Standard")
for x in range(3):
    start = time.clock()
    count_array_standard(array_of_random, 0.5, output_array)
    result = np.sum(output_array)
    print(x)
    print(result)
    print(time.clock() - start)

print("")
print("Jit")
for x in range(3):
    start = time.clock()
    count_array_jit(0.5, output_array)
    result = np.sum(output_array)
    print(x)
    print(result)
    print(time.clock() - start)

print("")
print("Cuda Jit")

threads_per_block = 128
blocks_per_grid = (array_of_random.size + (threads_per_block - 1)) // threads_per_block

for x in range(3):

    start = time.clock()
    count_array_cuda[blocks_per_grid, threads_per_block](device_array, .5, device_output_array)
    cuda.synchronize()
    stop = time.clock()
    result = np.sum(device_output_array.copy_to_host())
    print(x)
    print(result)
    print(stop - start)
$ python t58.py
Number of records
1048576

Standard
0
104891
0.53704
1
104891
0.528287
2
104891
0.515948

Jit
0
104891
0.002993
1
104891
0.002635
2
104891
0.002595

Cuda Jit
0
104891
0.146518
1
104891
0.000832
2
104891
0.000813
$
