
Calculating the Entropy of a NxN matrix in python

I have an NxN matrix whose elements all lie in [-1, 1]. I can calculate Shannon entropy manually, but I want something like von Neumann entropy. Is there any built-in function in NumPy/SciPy? A manual method will also do. The matrix is usually of size 100x100. Something like this:

[[-0.244608   -0.71395497 -0.36534627]
 [-0.44626849 -0.82385746 -0.74654582]
 [ 0.38240205 -0.58970239  0.67858516]]

Thank You.

What about just finding the eigenvalues? Untested pseudo-code:

import numpy as np
from numpy import linalg as LA

def entropy(M):
    # eigenvalues only; the eigenvectors are not needed
    e = LA.eigvals(M)
    # note: np.log is undefined (nan/complex) for non-positive eigenvalues
    t = e * np.log(e)
    return -np.sum(t)

UPDATE

Looking at a companion site, this answer might be of interest to you:

https://cs.stackexchange.com/questions/56261/computing-von-neumann-entropy-efficiently

UPDATE

If you don't want to go via eigenvalues/polynomials, then you could compute the log of the matrix (everything else is trivial) by first bringing it into Jordan normal form via a Jordan decomposition. In Python this can be done with SymPy, http://docs.sympy.org/0.7.1/modules/matrices.html#sympy.matrices.matrices.Matrix.jordan_form ; see Compute Jordan normal form of matrix in Python / NumPy for details.

Then log(M) can be computed from the Jordan form using a theorem of Gantmacher (1959); see this paper https://www.ams.org/journals/proc/1966-017-05/S0002-9939-1966-0202740-6/S0002-9939-1966-0202740-6.pdf for a simplified explanation, especially eqns 3.4-3.8.
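As a minimal sketch of the SymPy route (the 2x2 matrix here is a hypothetical example, not your data):

```python
import sympy as sp

# Hypothetical defective matrix: one eigenvalue (5), one eigenvector,
# so the Jordan form has an off-diagonal 1
M = sp.Matrix([[5, 4],
               [0, 5]])

# jordan_form returns P and J with M = P * J * P**-1
P, J = M.jordan_form()

# sanity check: the decomposition reproduces M exactly
assert sp.simplify(P * J * P.inv() - M) == sp.zeros(2, 2)
print(J)
```

Note that the Jordan form is numerically unstable for floating-point matrices, so for a 100x100 numeric matrix a direct matrix logarithm (scipy.linalg.logm) is usually the more practical route.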

But I bet you a donut the Jordan normal form of your matrix will be complex.

According to Nielsen & Chuang in "Quantum Computation and Quantum Information", you can define von Neumann entropy in one of two ways: either as (the negative of) the trace of the matrix times its own (matrix) logarithm, or in terms of the eigenvalues. The above examples all take logarithms base e, but you need base 2, which requires a change of base in the computation. Here are two functions in Python, one for each version of the definition of von Neumann entropy (of a density operator, say):

For the trace version

def von_neumann_entropy(rho):
    import numpy as np
    from scipy import linalg as la
    rho = np.asarray(rho)
    # logm(rho) / log(2) is the base-2 matrix logarithm
    R = rho @ (la.logm(rho) / np.log(2))
    return -np.trace(R)

For the eigenvalue version

def vn_eig_entropy(rho):
    import numpy as np
    from scipy import linalg as la
    EV = la.eigvals(rho)

    # Drop (numerically) zero eigenvalues so that log2 is defined;
    # by convention 0*log(0) = 0. A density matrix is Hermitian,
    # so its eigenvalues are real.
    EV = np.real(EV[np.abs(EV) > 1e-12])

    return -np.sum(EV * np.log2(EV))

These will return the same value, so it does not matter which you use. Just feed one of these functions a square matrix using something like

rho = np.array([[5/6, 1/6],
                [1/6, 1/6]])

Obviously any square matrix will work, not just a 2x2; this is just to give you an example. If your matrix has zero eigenvalues, the convention is to set the 0*log(0) terms equal to zero; this is taken care of by the second function, vn_eig_entropy. All density matrices are "non-negative definite" (positive semi-definite), so this is the only issue with eigenvalues you should run into. I know this response is a bit late, but maybe it will help someone else.
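As a quick sanity check on the example rho above, here is the eigenvalue route written out directly with plain NumPy (np.linalg.eigvalsh is used on the assumption that rho is Hermitian, as any density matrix is):

```python
import numpy as np

# Density matrix from the example above
rho = np.array([[5/6, 1/6],
                [1/6, 1/6]])

# S = -sum(lambda * log2(lambda)) over the nonzero eigenvalues
ev = np.linalg.eigvalsh(rho)   # real eigenvalues of a Hermitian matrix
ev = ev[ev > 1e-12]            # drop zero eigenvalues (0*log(0) -> 0)
S = -np.sum(ev * np.log2(ev))
print(S)                       # approximately 0.55 bits
```

Both functions above should agree with this value to numerical precision.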
