
Why is numpy's einsum slower than numpy's built-in functions?

I usually get good performance out of numpy's einsum function (and I like its syntax). @Ophion's answer to this question shows that - for the cases tested - einsum consistently outperforms the "built-in" functions (sometimes by a little, sometimes by a lot). But I just encountered a case where einsum is much slower. Consider the following equivalent functions:

(M, K) = (1000000, 20)
C = np.random.rand(K, K)
X = np.random.rand(M, K)

def func_dot(C, X):
    Y = X.dot(C)
    return np.sum(Y * X, axis=1)

def func_einsum(C, X):
    return np.einsum('ik,km,im->i', X, C, X)

def func_einsum2(C, X):
    # Like func_einsum but break it into two steps.
    A = np.einsum('ik,km', X, C)
    return np.einsum('ik,ik->i', A, X)
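
For reference, a quick check (with the arrays above) that the three really are equivalent:

y1 = func_dot(C, X)
y2 = func_einsum(C, X)
y3 = func_einsum2(C, X)
# allclose rather than array_equal: the summation orders differ,
# so the results match only up to floating-point rounding.
assert np.allclose(y1, y2) and np.allclose(y1, y3)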

I expected func_einsum to run fastest, but that is not what I see. Running on a quad-core CPU with hyperthreading, numpy version 1.9.0.dev-7ae0206, linked against OpenBLAS for multithreading, I get the following results:

In [2]: %time y1 = func_dot(C, X)
CPU times: user 320 ms, sys: 312 ms, total: 632 ms
Wall time: 209 ms
In [3]: %time y2 = func_einsum(C, X)
CPU times: user 844 ms, sys: 0 ns, total: 844 ms
Wall time: 842 ms
In [4]: %time y3 = func_einsum2(C, X)
CPU times: user 292 ms, sys: 44 ms, total: 336 ms
Wall time: 334 ms

When I increase K to 200, the differences are more extreme:

In [2]: %time y1= func_dot(C, X)
CPU times: user 4.5 s, sys: 1.02 s, total: 5.52 s
Wall time: 2.3 s
In [3]: %time y2= func_einsum(C, X)
CPU times: user 1min 16s, sys: 44 ms, total: 1min 16s
Wall time: 1min 16s
In [4]: %time y3 = func_einsum2(C, X)
CPU times: user 15.3 s, sys: 312 ms, total: 15.6 s
Wall time: 15.6 s

Can someone explain why einsum is so much slower here?

In case it matters, here is my numpy config:

In [6]: np.show_config()
lapack_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    language = f77
atlas_threads_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('ATLAS_WITHOUT_LAPACK', None)]
    language = c
    include_dirs = ['/usr/local/include']
blas_opt_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('ATLAS_INFO', '"\\"None\\""')]
    language = c
    include_dirs = ['/usr/local/include']
atlas_blas_threads_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('ATLAS_INFO', '"\\"None\\""')]
    language = c
    include_dirs = ['/usr/local/include']
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('ATLAS_WITHOUT_LAPACK', None)]
    language = f77
    include_dirs = ['/usr/local/include']
lapack_mkl_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
mkl_info:
  NOT AVAILABLE

You can have the best of both worlds:

def func_dot_einsum(C, X):
    Y = X.dot(C)
    return np.einsum('ij,ij->i', Y, X)

On my system:

In [7]: %timeit func_dot(C, X)
10 loops, best of 3: 31.1 ms per loop

In [8]: %timeit func_einsum(C, X)
10 loops, best of 3: 105 ms per loop

In [9]: %timeit func_einsum2(C, X)
10 loops, best of 3: 43.5 ms per loop

In [10]: %timeit func_dot_einsum(C, X)
10 loops, best of 3: 21 ms per loop

np.dot uses BLAS, MKL, or whatever library you have available. So the call to np.dot is almost certainly multithreaded. np.einsum has its own loops, so it doesn't use any of those optimizations, apart from its own use of SIMD to speed things up over a vanilla C implementation.
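
One way to test the threading explanation (my sketch, not part of the original answer) is to pin OpenBLAS to a single thread before numpy loads and re-time the bare matrix product; the dot/einsum gap should shrink to einsum's raw loop overhead:

import os
# Must be set before numpy is imported; OPENBLAS_NUM_THREADS is the
# OpenBLAS knob (MKL builds use MKL_NUM_THREADS instead).
os.environ['OPENBLAS_NUM_THREADS'] = '1'

import numpy as np
from timeit import timeit

M, K = 1000000, 20
C = np.random.rand(K, K)
X = np.random.rand(M, K)

# Bare matrix product: np.dot dispatches to BLAS, einsum runs its own C loops.
print(timeit(lambda: X.dot(C), number=10))
print(timeit(lambda: np.einsum('ik,km->im', X, C), number=10))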


Then there's the multi-input einsum call that runs much slower... The numpy source for einsum is very complex, and I don't fully understand it. So be advised that what follows is speculation at best, but here's what I think is going on...

When you run something like np.einsum('ij,ij->i', a, b), the benefit over doing np.sum(a*b, axis=1) comes from avoiding having to instantiate the intermediate array with all the products and looping over it twice. So at a low level what goes on is something like:

for i in range(I):
    out[i] = 0
    for j in range(J):
        out[i] += a[i, j] * b[i, j]
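
For concreteness, the fused call and the two-pass version agree exactly up to floating-point rounding:

import numpy as np

a = np.random.rand(100, 100)
b = np.random.rand(100, 100)
# Same values; einsum just never materializes the intermediate a * b.
assert np.allclose(np.einsum('ij,ij->i', a, b), np.sum(a * b, axis=1))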

Now say you are after something like:

np.einsum('ij,jk,ik->i', a, b, c)

You could do the same operation as:

np.sum(a[:, :, None] * b[None, :, :] * c[:, None, :], axis=(1, 2))

And what I think einsum does is run this last code without having to instantiate the huge intermediate arrays, which certainly makes things faster:

In [29]: a, b, c = np.random.rand(3, 100, 100)

In [30]: %timeit np.einsum('ij,jk,ik->i', a, b, c)
100 loops, best of 3: 2.41 ms per loop

In [31]: %timeit np.sum(a[:, :, None] * b[None, :, :] * c[:, None, :], axis=(1, 2))
100 loops, best of 3: 12.3 ms per loop

But if you look at it carefully, getting rid of intermediate storage can be a terrible thing. This is what I think einsum is doing at a low level:

for i in range(I):
    out[i] = 0
    for j in range(J):
        for k in range(K):
            out[i] += a[i, j] * b[j, k] * c[i, k]

But you are repeating a ton of operations! If you instead did:

for i in range(I):
    out[i] = 0
    for j in range(J):
        temp = 0
        for k in range(K):
            temp += b[j, k] * c[i, k]
        out[i] += a[i, j] * temp

you would be doing I * J * (K-1) fewer multiplications (and I * J extra additions), and would save yourself a ton of time. My guess is that einsum is not smart enough to optimize things at this level. There is a hint in the source code that it only optimizes operations with 1 or 2 operands, not 3. In any case, automating this for general inputs seems like anything but simple...
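
To make the operation-count argument concrete, here is a sketch of both loop orders compiled with numba (numba is my assumption here, not something the original answer uses; any compiled language would do):

import numba
import numpy as np

@numba.njit
def prod_sum_naive(a, b, c):
    # out[i] = sum over j,k of a[i,j] * b[j,k] * c[i,k]:
    # 2 * I * J * K multiplications.
    I, J = a.shape
    K = b.shape[1]
    out = np.zeros(I)
    for i in range(I):
        for j in range(J):
            for k in range(K):
                out[i] += a[i, j] * b[j, k] * c[i, k]
    return out

@numba.njit
def prod_sum_factored(a, b, c):
    # Hoist a[i,j] out of the k loop: I * J * K + I * J multiplications.
    I, J = a.shape
    K = b.shape[1]
    out = np.zeros(I)
    for i in range(I):
        for j in range(J):
            temp = 0.0
            for k in range(K):
                temp += b[j, k] * c[i, k]
            out[i] += a[i, j] * temp
    return out

Both agree with np.einsum('ij,jk,ik->i', a, b, c), and the factored version is the same rearrangement that func_einsum2 achieves by splitting the contraction in two. For what it's worth, later NumPy releases (1.12+) added an optimize keyword to einsum that performs this kind of factoring automatically.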

einsum has a specialized case for '2 operands, ndim=2'. In this case there are 3 operands and a total of 3 dimensions, so it has to use the general nditer.

While trying to understand how the subscript string is parsed, I wrote a pure-Python einsum simulator: https://github.com/hpaulj/numpy-einsum/blob/master/einsum_py.py

The (stripped-down) einsum and sum-of-products functions are:

def myeinsum(subscripts, *ops, **kwargs):
    # Drop-in replacement for np.einsum (more or less)
    <parse subscript strings>
    <prepare op_axes>
    x = sum_of_prod(ops, op_axes, **kwargs)
    return x

def sum_of_prod(ops, op_axes, ...):
    ...
    it = np.nditer(ops, flags=flags, op_flags=op_flags, op_axes=op_axes)
    it.operands[nop][...] = 0  # zero the output operand before accumulating
    it.reset()
    for (x, y, z, w) in it:
        w[...] += x * y * z
    return it.operands[nop]

Running myeinsum('ik,km,im->i', X, C, X, debug=True) with (M,K) = (10,5) produces debug output like:

{'max_label': 109, 
 'min_label': 105, 
 'nop': 3, 
 'shapes': [(10, 5), (5, 5), (10, 5)], 
 ....}
 ...
iter labels: [105, 107, 109],'ikm'

op_axes [[0, 1, -1], [-1, 0, 1], [0, -1, 1], [0, -1, -1]]
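
Wiring those op_axes into nditer directly gives a minimal self-contained version of the sum-of-products step (elementwise, hence slow, but it shows the mechanics; the reduce_ok flag is required because the output is smaller than the iteration space):

import numpy as np

M, K = 10, 5
X = np.random.rand(M, K)
C = np.random.rand(K, K)
out = np.zeros(M)

# Map each operand's axes onto the iterator's (i, k, m) label space;
# -1 marks an axis the operand does not have (broadcast/reduction).
it = np.nditer([X, C, X, out],
               flags=['reduce_ok'],
               op_flags=[['readonly']] * 3 + [['readwrite']],
               op_axes=[[0, 1, -1],    # X: ik
                        [-1, 0, 1],    # C: km
                        [0, -1, 1],    # X: im
                        [0, -1, -1]])  # out: i
for x, c, y, w in it:
    w[...] += x * c * y

assert np.allclose(out, np.einsum('ik,km,im->i', X, C, X))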

If you wrote such a sum-of-products function in cython, you should get something close to the generalized einsum.

With the full (M,K), this simulated einsum is 6-7x slower.


Some timings building on the other answers:

In [84]: timeit np.dot(X,C)
1 loops, best of 3: 781 ms per loop

In [85]: timeit np.einsum('ik,km->im',X,C)
1 loops, best of 3: 1.28 s per loop

In [86]: timeit np.einsum('im,im->i',A,X)
10 loops, best of 3: 163 ms per loop

This 'im,im->i' step is substantially faster than the other. The sum dimension, m, is only 20. I suspect einsum is treating this as a special case.

In [87]: timeit np.einsum('im,im->i',np.dot(X,C),X)
1 loops, best of 3: 950 ms per loop

In [88]: timeit np.einsum('im,im->i',np.einsum('ik,km->im',X,C),X)
1 loops, best of 3: 1.45 s per loop

The times for these composite calculations are simply sums of the corresponding pieces.
