I want to implement an efficient version of the Cholesky decomposition. The naive code looks like this:
import numpy as np

def cholesky(A):
    n = A.shape[0]
    L = np.zeros_like(A)
    for i in range(n):
        for j in range(i + 1):
            # partial dot product of rows i and j computed so far
            s = 0.0
            for k in range(j):
                s += L[i][k] * L[j][k]
            if i == j:
                # diagonal entry
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                # off-diagonal entry
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
I wonder if there is a way to make it more efficient, e.g. by vectorizing it?
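For reference, a minimal way to sanity-check the result against np.linalg.cholesky (this assumes A is symmetric positive definite, which the decomposition requires):

import numpy as np

# Random symmetric positive definite test matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((100, 100))
A = B @ B.T + 100.0 * np.eye(100)

L = cholesky(A)
assert np.allclose(L @ L.T, A)                 # L is a valid Cholesky factor
assert np.allclose(L, np.linalg.cholesky(A))   # matches NumPy's reference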
This is not vectorized, but for a matrix of size 100 it is ~1000x faster:
import numpy as np
import numba as nb

@nb.njit('float64[:, :](float64[:, :])')
def cholesky_numba(A):
    n = A.shape[0]
    L = np.zeros_like(A)
    for i in range(n):
        for j in range(i + 1):
            s = 0.0
            for k in range(j):
                s += L[i][k] * L[j][k]
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
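A rough benchmark sketch (this assumes the pure-Python cholesky from the question is also defined; exact timings depend on the machine, and np.linalg.cholesky is included for comparison):

import numpy as np
import timeit

rng = np.random.default_rng(0)
B = rng.standard_normal((100, 100))
A = B @ B.T + 100.0 * np.eye(100)   # symmetric positive definite

# The jitted version should match NumPy's factorization
assert np.allclose(cholesky_numba(A), np.linalg.cholesky(A))

print(timeit.timeit(lambda: cholesky(A), number=10))            # pure Python loops
print(timeit.timeit(lambda: cholesky_numba(A), number=10))      # numba-jitted
print(timeit.timeit(lambda: np.linalg.cholesky(A), number=10))  # LAPACK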