
Entropy of t-distribution in scipy: How to input degrees of freedom to digamma and beta functions?

The closed-form analytical solution for the entropy of a variable X that follows the t-distribution, derived here, is

H(X) = \frac{\nu+1}{2}\left[\psi\left(\frac{\nu+1}{2}\right) - \psi\left(\frac{\nu}{2}\right)\right] + \ln\left[\sqrt{\nu}\, B\left(\frac{\nu}{2}, \frac{1}{2}\right)\right]

where \psi is the digamma function and B is the Beta function.

Seeing that Python has the functions scipy.special.digamma and scipy.special.beta, how can the above formula be implemented in code?

What confuses me is that the functions just mentioned do not take the degrees of freedom parameter nu (v) as an input, according to the documentation. A running example would help.

By its definition, the entropy is defined by Shannon as:

H(X) = -\int_{-\infty}^{\infty} f(x)\,\ln f(x)\,dx

Now if you apply this formula to the Student-t distribution, you will notice that its density already contains the degrees of freedom parameter (v):

f(x) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\left(\frac{\nu}{2}\right)}\left(1 + \frac{x^2}{\nu}\right)^{-\frac{\nu+1}{2}}

As a result of the integration, both the Beta and digamma functions appear in the result. If you can carry out the calculation (honestly, I couldn't), you will find that they take v as an input purely as a consequence of the integration; it is not part of their definition.

v varies between 1 (Cauchy distribution) and infinity (the normal distribution).
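As a rough numerical check (my own sketch, not part of the original post), you can evaluate that Shannon integral for the Student-t density with scipy.integrate.quad and scipy.stats.t.pdf and see that the result depends on v:

import numpy as np
from scipy import integrate, stats

def entropy_by_integration(v):
    # H(X) = -integral of f(x)*ln f(x) dx, evaluated numerically for the Student-t pdf
    def integrand(x):
        p = stats.t.pdf(x, df=v)
        return -p * np.log(p) if p > 0 else 0.0  # guard against underflow in the far tails
    value, _ = integrate.quad(integrand, -np.inf, np.inf)
    return value

for v in (1, 3, 30):
    print('v =', v, 'entropy =', entropy_by_integration(v))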

To simplify the calculations, I used the code below:

import numpy as np
import scipy.special as sc

# H(X) = (v+1)/2 * [digamma((v+1)/2) - digamma(v/2)] + ln(sqrt(v) * B(v/2, 1/2))
v = float(input('Degree of freedom '))
v1 = (1+v)/2
v2 = v/2
Entropy_of_Variable_X = v1*(sc.digamma(v1)-sc.digamma(v2))+np.log(np.sqrt(v)*sc.beta(v2,0.5))
print('Entropy of the variable X, of degree of freedom equal to:', v, 'is', Entropy_of_Variable_X)
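As a quick cross-check (not in the original answer), scipy.stats.t also exposes an entropy() method, and its value should agree with the closed-form expression above:

import numpy as np
import scipy.special as sc
from scipy import stats

v = 5.0
v1, v2 = (1+v)/2, v/2
closed_form = v1*(sc.digamma(v1)-sc.digamma(v2)) + np.log(np.sqrt(v)*sc.beta(v2, 0.5))
print(closed_form)               # closed-form entropy
print(stats.t(df=v).entropy())   # built-in value, should match up to numerical precision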

You can pass it a NumPy array instead of a scalar to calculate the entropy for multiple distributions, as sketched below.
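For instance, a minimal sketch of evaluating the same expression for several degrees of freedom at once; digamma, beta, and the NumPy functions all broadcast over arrays:

import numpy as np
import scipy.special as sc

dofs = np.array([1.0, 2.0, 5.0, 30.0, 100.0])  # example degrees of freedom
v1, v2 = (1+dofs)/2, dofs/2
entropies = v1*(sc.digamma(v1)-sc.digamma(v2)) + np.log(np.sqrt(dofs)*sc.beta(v2, 0.5))
for v, h in zip(dofs, entropies):
    print('dof =', v, 'entropy =', h)
# As dof grows, the entropy approaches that of the standard normal, 0.5*ln(2*pi*e) ~ 1.4189
print(0.5*np.log(2*np.pi*np.e))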

You can also compute the differential entropy of the multivariate Student t-distribution, where dim is the dimension, dof is the degrees of freedom, and cmtx is the covariance matrix.

import numpy as np
import scipy.special as sc

def compute_true_entropy(dim=1, dof=3, std=False):
    cmtx = np.identity(dim)  # covariance matrix (identity here)
    B0 = 0.5*np.log(np.linalg.det(cmtx))  # 0.5 * ln|Sigma|
    B1 = sc.gamma((dim+dof)/2)/((sc.gamma(dof/2))*((np.pi*dof)**(dim/2)))  # normalizing constant of the pdf
    B2 = ((dof+dim)/2)*(sc.digamma((dof+dim)/2) - sc.digamma((dof)/2))
    entropy = B0 - np.log(B1) + B2
    return entropy
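A short usage sketch (my addition): for dim=1 the multivariate formula should reduce to the univariate closed form above, which gives a convenient consistency check:

import numpy as np
import scipy.special as sc

# e.g. a 3-dimensional t-distribution with 5 degrees of freedom
print(compute_true_entropy(dim=3, dof=5))

# for dim=1 this should match the univariate closed form
v = 5.0
v1, v2 = (1+v)/2, v/2
univariate = v1*(sc.digamma(v1)-sc.digamma(v2)) + np.log(np.sqrt(v)*sc.beta(v2, 0.5))
print(compute_true_entropy(dim=1, dof=5), univariate)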
