
Python `expm` of an `(N,M,M)` matrix

Let A be an (N,M,M) matrix (with N very large). I would like to compute scipy.linalg.expm(A[n,:,:]) for each n in range(N). I can of course just use a for loop, but I was wondering if there is some trick to do this in a better way (something like np.einsum).

I have the same question for other operations, like inverting matrices (inverting was solved in the comments).
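(For reference, the batched inversion mentioned above needs no special trick: np.linalg.inv already broadcasts over leading dimensions. A minimal sketch:)

import numpy as np

A = np.random.rand(1000, 10, 10)

# np.linalg.inv operates on the last two axes, so a whole stack of
# matrices is inverted in a single vectorized call
invA = np.linalg.inv(A)  # shape (1000, 10, 10)

# spot-check one slice against the per-matrix result
assert np.allclose(invA[0], np.linalg.inv(A[0]))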

Depending on the size and structure of your matrices, you can do better than a loop.

Assuming your matrices can be diagonalized as A = V D V^(-1) (where D has the eigenvalues on its diagonal and V contains the corresponding eigenvectors as columns), you can compute the matrix exponential as

exp(A) = V exp(D) V^(-1)

where exp(D) simply contains exp(lambda) on its diagonal for each eigenvalue lambda. If the matrix A is furthermore normal, the matrix V is unitary, and thus its inverse can be computed by simply taking its adjoint. The identity itself is easy to prove from the power series definition of the exponential function:
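exp(A) = sum_k A^k/k! = sum_k V D^k V^(-1)/k! = V (sum_k D^k/k!) V^(-1) = V exp(D) V^(-1),

using that (V D V^(-1))^k = V D^k V^(-1) for every power k.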

The good news is that numpy.linalg.eig and numpy.linalg.inv both work with stacked matrices just fine:

import numpy as np
import scipy.linalg

A = np.random.rand(1000,10,10)

def loopy_expm(A):
    # baseline: apply scipy's expm to each (M, M) slice in turn
    expmA = np.zeros_like(A)
    for n in range(A.shape[0]):
        expmA[n, ...] = scipy.linalg.expm(A[n, ...])
    return expmA

def eigy_expm(A):
    # diagonalize all slices at once, then assemble V exp(D) V^(-1)
    # with a single batched triple product
    vals, vects = np.linalg.eig(A)
    return np.einsum('...ik, ...k, ...kj -> ...ij',
                     vects, np.exp(vals), np.linalg.inv(vects))

Note that there's probably some room for optimization in specifying the order of operations in the call to einsum, but I didn't investigate that. Also note that for general real input, np.linalg.eig returns complex eigenvalues and eigenvectors, so eigy_expm returns a complex array whose imaginary parts are mere roundoff; take its .real part if you need a real result.
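One low-effort option (a sketch I haven't benchmarked) is to let einsum search for a contraction path itself via its optimize flag, available in recent numpy versions:

def eigy_expm_opt(A):
    vals, vects = np.linalg.eig(A)
    # optimize=True lets einsum choose the pairwise contraction order
    # instead of evaluating the operands strictly left to right
    return np.einsum('...ik, ...k, ...kj -> ...ij',
                     vects, np.exp(vals), np.linalg.inv(vects),
                     optimize=True)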

Testing the above for the random array:

In [59]: np.allclose(loopy_expm(A),eigy_expm(A))
Out[59]: True

In [60]: %timeit loopy_expm(A)
824 ms ± 55.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [61]: %timeit eigy_expm(A)
138 ms ± 992 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

That's already nice. If you're lucky enough that your matrices are all normal (say, because they are real symmetric), you can skip the explicit matrix inversion:

A = np.random.rand(1000, 10, 10)
A = (A + A.transpose(0, 2, 1)) / 2   # symmetrize each (M, M) slice

def eigy_expm_normal(A):
    vals, vects = np.linalg.eig(A)
    # for a normal matrix V is unitary, so V^(-1) = V^H: conjugate
    # the eigenvectors and swap i/j in the last factor's index pattern
    return np.einsum('...ik, ...k, ...jk -> ...ij',
                     vects, np.exp(vals), vects.conj())

Note the symmetric definition of the input matrix and the transpose inside the einsum pattern. Results:

In [80]: np.allclose(loopy_expm(A),eigy_expm_normal(A))
Out[80]: True

In [79]: %timeit loopy_expm(A)
878 ms ± 89.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [80]: %timeit eigy_expm_normal(A)
55.8 ms ± 868 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

That is a 15-fold speedup for the above example shapes.
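For real symmetric (or complex Hermitian) input, np.linalg.eigh is worth a try as well: it also accepts stacked matrices, exploits the symmetry, and guarantees orthonormal eigenvectors even for degenerate eigenvalues, which eig does not. A sketch (not benchmarked here):

def eigy_expm_hermitian(A):
    # eigh returns real eigenvalues and an orthonormal eigenbasis,
    # so the adjoint trick from eigy_expm_normal applies exactly
    vals, vects = np.linalg.eigh(A)
    return np.einsum('...ik, ...k, ...jk -> ...ij',
                     vects, np.exp(vals), vects.conj())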


It should be noted, though, that scipy.linalg.expm uses a Padé approximation according to the documentation. This might imply that for ill-conditioned matrices the eigendecomposition approach can yield different results than scipy.linalg.expm. I'm not familiar with the internals of that function, but I expect it to be safer for pathological inputs.
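If that worries you, one possible compromise (a sketch; the helper name and conditioning cutoff are my own, not an established recipe) is to vectorize the well-conditioned slices and fall back to scipy.linalg.expm for the rest:

import numpy as np
import scipy.linalg

def expm_with_fallback(A, cond_cutoff=1e8):
    # hypothetical helper: use the eigendecomposition where the
    # eigenvector matrix is well conditioned, scipy's Pade-based
    # expm otherwise; assumes every slice is at least numerically
    # diagonalizable (otherwise np.linalg.inv may raise)
    vals, vects = np.linalg.eig(A)
    out = np.einsum('...ik, ...k, ...kj -> ...ij',
                    vects, np.exp(vals), np.linalg.inv(vects))
    bad = np.linalg.cond(vects) > cond_cutoff  # cond broadcasts too
    for n in np.where(bad)[0]:
        out[n] = scipy.linalg.expm(A[n])
    return out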
