
normalizing a list of very small double numbers (likelihoods)

I am writing an algorithm where, given a model, I compute likelihoods for a list of datasets and then need to normalize each of the likelihoods into a probability. So something like [0.00043, 0.00004, 0.00321] might be converted to something like [0.2, 0.03, 0.77]. My problem is that the likelihoods I am working with are extremely small (for instance, in log space the values are like -269647.432, -231444.981, etc.). In my C++ code, when I try to add two of them (by taking their exponent) I get an answer of "Inf". I tried to add them in log space (summation/subtraction of logs), but again stumbled upon the same problem.
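To illustrate the range problem, here is a minimal sketch with the values above (the variable names are just for illustration, not my actual code):

#include <cmath>
#include <cstdio>

int main() {
    double logL1 = -269647.432, logL2 = -231444.981;

    // exp() of log values this large in magnitude leaves the range of double:
    std::printf("%g\n", std::exp(logL1));   // underflows to 0
    std::printf("%g\n", std::exp(-logL1));  // overflows to inf if a sign gets flipped

    // so a naive normalization collapses to 0/0:
    double p1 = std::exp(logL1) / (std::exp(logL1) + std::exp(logL2));
    std::printf("%g\n", p1);                // nan
    return 0;
}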

Can anybody share his/her expert opinion on this?

Thanks

Assuming the likelihoods have been calculated correctly, you could divide each of them by the largest likelihood. That can be done in logarithm form by subtracting the largest log-likelihood from each log-likelihood.

You can then convert out of logarithm space. The largest will be 1.0, because its normalized log is 0; the smaller ones will each lie between 0 and 1.0, expressed as a fraction of the largest. Dividing by their sum (which is now safe to compute) turns them into probabilities that add up to 1.

This is standard procedure. Numerically stable Matlab code:

LL = [ . . . ];   % vector of log-likelihoods
M = max(LL);      % largest log-likelihood
LL = LL - M;      % shift so the maximum is 0
L = exp(LL);      % safe to exponentiate: results lie in (0, 1]
L = L ./ sum(L);  % normalize so the probabilities sum to 1
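Since the question mentions C++, a sketch of the same approach there might look like this (assuming the log-likelihoods are in a std::vector<double>; the function name is illustrative):

#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Normalize a vector of log-likelihoods into probabilities using the
// same max-subtraction trick as above.
std::vector<double> normalize_log_likelihoods(const std::vector<double>& logL) {
    std::vector<double> p(logL.size());
    const double m = *std::max_element(logL.begin(), logL.end());
    // Shift so the largest exponent is 0; exp() then stays in (0, 1] and cannot overflow.
    std::transform(logL.begin(), logL.end(), p.begin(),
                   [m](double x) { return std::exp(x - m); });
    const double total = std::accumulate(p.begin(), p.end(), 0.0);
    for (double& v : p) v /= total;  // scale so the probabilities sum to 1
    return p;
}

Values that are very far below the maximum will still underflow to 0 after the shift, but that is usually fine, since they would contribute essentially nothing to the normalized probabilities anyway.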
