How to implement Cholesky decomposition efficiently using NumPy?
I want to implement an efficient realization of Cholesky decomposition. The naive code looks like:
```python
import numpy as np

def cholesky(A):
    n = A.shape[0]
    L = np.zeros_like(A)
    for i in range(n):
        for j in range(i + 1):
            s = 0.0
            for k in range(j):
                s += L[i][k] * L[j][k]
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                L[i][j] = (1.0 / L[j][j]) * (A[i][j] - s)
    return L
```
I wonder if there is a way to make it more efficient, e.g. by vectorizing it?
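For reference, NumPy already ships a fully vectorized, LAPACK-backed routine, `numpy.linalg.cholesky`, which is the natural baseline to check any hand-written version against. A quick sanity check might look like this (the random SPD construction is my own choice for illustration):

```python
import numpy as np

# Build a random symmetric positive-definite matrix: A = B @ B.T + n*I
# (adding n*I keeps the matrix well-conditioned so the factorization is stable).
rng = np.random.default_rng(0)
n = 100
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)

# NumPy's built-in, fully vectorized Cholesky factorization.
L = np.linalg.cholesky(A)

# L is lower triangular and L @ L.T reconstructs A.
assert np.allclose(L, np.tril(L))
assert np.allclose(L @ L.T, A)
```

Unless you need the factorization as a learning exercise or inside a larger compiled kernel, calling the library routine will almost always be the fastest option.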
This is not vectorized, but for a matrix of size 100 it is ~1000x faster:
```python
import numba as nb
import numpy as np

@nb.njit('float64[:, :](float64[:, :])')
def cholesky_numba(A):
    n = A.shape[0]
    L = np.zeros_like(A)
    for i in range(n):
        for j in range(i + 1):
            s = 0.0
            for k in range(j):
                s += L[i][k] * L[j][k]
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                L[i][j] = (1.0 / L[j][j]) * (A[i][j] - s)
    return L
```
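If you want to stay in pure NumPy, a middle ground is to vectorize only the innermost `k` loop with a dot product, leaving two Python-level loops instead of three. This is a sketch, not a tuned implementation, and the function name `cholesky_rowdot` is mine:

```python
import numpy as np

def cholesky_rowdot(A):
    # Same algorithm as the naive version, but the innermost k-loop
    # is replaced by a vectorized dot product over row prefixes.
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            # s = sum_k L[i,k] * L[j,k] for k < j, done in one dot product.
            s = L[i, :j] @ L[j, :j]
            if i == j:
                L[i, j] = np.sqrt(A[i, i] - s)
            else:
                L[i, j] = (A[i, j] - s) / L[j, j]
    return L

# Sanity check against numpy.linalg.cholesky on a random SPD matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
assert np.allclose(cholesky_rowdot(A), np.linalg.cholesky(A))
```

This cuts the Python interpreter overhead from O(n³) loop iterations to O(n²), but for large matrices the numba version or `np.linalg.cholesky` will still be much faster.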