I have two arrays:
import numpy as np
a = np.array(['1','2','3'])
b = np.array(['3','4','1','5'])
I want to calculate their joint entropy. I've found some material that implements it like this:
import numpy as np
import itertools
from functools import reduce

def entropy(*X):
    # P(classes) is estimated as the fraction of positions where every
    # array matches its class; sum -p * log2(p) over all combinations.
    return sum(-p * np.log2(p) if p > 0 else 0
               for p in (np.mean(reduce(np.logical_and,
                                        (predictions == c for predictions, c in zip(X, classes))))
                         for classes in itertools.product(*[set(x) for x in X])))
It seems to work fine when len(a) == len(b), but it ends with an error when len(a) != len(b).
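The failure is presumably the elementwise AND inside reduce: the boolean masks it combines have lengths len(a) and len(b), and NumPy cannot broadcast them. A minimal reproduction of that mismatch:

import numpy as np
from functools import reduce

a = np.array(['1', '2', '3'])
b = np.array(['3', '4', '1', '5'])

# entropy() builds one mask per input array; with unequal lengths the
# elementwise AND fails:
reduce(np.logical_and, (a == '1', b == '3'))
# ValueError: operands could not be broadcast together with shapes (3,) (4,)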
UPD: Arrays a and b were created from this example of the raw input:
b:3 p1:1 p2:6 p5:7
b:4 p1:2 p7:2
b:1 p3:4 p5:8
b:5 p1:3 p4:4
where array a was created from the p1 values. So not every line contains every pK, but every line has the b property. I need to calculate the mutual information I(b, pK) for each pK.
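For illustration, a minimal sketch of how such lines could be parsed into equal-length (b, pK) pairs (the parsing helper and variable names below are my own, not part of the original pipeline):

from collections import defaultdict

lines = [
    "b:3 p1:1 p2:6 p5:7",
    "b:4 p1:2 p7:2",
    "b:1 p3:4 p5:8",
    "b:5 p1:3 p4:4",
]

# For each pK, collect (b, pK) value pairs only from lines where pK occurs,
# so both sides of a pair always have the same length.
pairs = defaultdict(list)
for line in lines:
    fields = dict(tok.split(':') for tok in line.split())
    for key, value in fields.items():
        if key != 'b':
            pairs[key].append((fields['b'], value))

print(pairs['p1'])  # [('3', '1'), ('4', '2'), ('5', '3')]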
Assuming you are talking about the joint Shannon entropy, the formula is straightforward:

H(X,Y) = -Σ P(x,y) log2 P(x,y)

where the sum runs over all value pairs (x, y).
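As a quick numeric illustration of that formula (the 2x2 joint table below is made up):

import numpy as np

# A made-up joint distribution P(x, y); with real data, zero-probability
# cells would need to be masked out before taking the log.
pxy = np.array([[0.25, 0.25],
                [0.25, 0.25]])
h_xy = -np.sum(pxy * np.log2(pxy))
print(h_xy)  # 2.0 bits for the uniform 2x2 table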
The problem with this, looking at what you've done so far, is that you lack P(x,y), i.e. the joint probability of the two variables occurring together. It looks like a and b are the individual probabilities for events a and b respectively.
You have other problems with your posted code (mentioned in the comments):

- a = ["1","2"] is not the same as a = [1,2]: one contains strings, the other numbers.
- You are missing P(x,y).

Here is an idea:
import numpy as np
from scipy import stats

a = np.array(['1', '2', '3', '0'])
b = np.array(['3', '4', '1', '5'])

# Convert the string elements to numbers first.
aa = [int(x) for x in a]
bb = [int(x) for x in b]

# Note: with two arguments, scipy.stats.entropy normalizes both inputs
# and returns their relative entropy S = sum(pk * log(pk / qk)).
je = stats.entropy(aa, bb)
print("joint entropy : ", je)
output: 0.9083449242695364
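One caveat worth flagging: because stats.entropy with two arguments computes the relative entropy (KL divergence) of the normalized vectors, the value above is not H(a, b). Since the stated goal is the mutual information I(b, pK), here is a sketch using the identity I(X;Y) = H(X) + H(Y) - H(X,Y) on paired samples (the mutual_information helper and the sample values are my own illustration):

import numpy as np

def mutual_information(x, y):
    # Estimate I(X;Y) in bits from two equal-length sample sequences.
    xs, ys = sorted(set(x)), sorted(set(y))
    joint = np.zeros((len(xs), len(ys)))
    for xv, yv in zip(x, y):
        joint[xs.index(xv), ys.index(yv)] += 1   # co-occurrence counts
    joint /= joint.sum()                          # empirical P(x, y)

    def h(p):
        p = p[p > 0]                              # drop zero cells before log
        return -np.sum(p * np.log2(p))

    # I(X;Y) = H(X) + H(Y) - H(X,Y), marginals taken from the joint table
    return h(joint.sum(axis=1)) + h(joint.sum(axis=0)) - h(joint.ravel())

# Paired samples for p1 taken from the example input above.
b_vals  = ['3', '4', '5']
p1_vals = ['1', '2', '3']
print(mutual_information(b_vals, p1_vals))        # log2(3) ≈ 1.585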