
How can the perplexity of a language model be between 0 and 1?

In TensorFlow, I'm getting outputs like 0.602129 or 0.663941. It appears that values closer to 0 imply a better model, but perplexity is supposed to be calculated as 2^loss, which would imply that the loss is negative. This doesn't make any sense.

This does not make a lot of sense to me. Perplexity is calculated as 2^entropy, and entropy is non-negative, so perplexity is always at least 1. Your results, which are < 1, cannot be valid perplexities.
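As a quick sanity check of that bound, here is a minimal sketch that computes perplexity from per-token probabilities; the probabilities below are made-up illustrative values, not output from any real model:

```python
import math

# Hypothetical probabilities a model assigns to each token of a test sequence.
probs = [0.2, 0.5, 0.1, 0.4]

# Cross-entropy in bits: average negative log2-probability per token.
# Each term -log2(p) is >= 0 because p <= 1, so the average is >= 0.
entropy = -sum(math.log2(p) for p in probs) / len(probs)

# Perplexity = 2^entropy. Since entropy >= 0, perplexity >= 1.
perplexity = 2 ** entropy
print(perplexity)
```

Even a perfect model that assigns probability 1.0 to every token reaches entropy 0 and perplexity exactly 1; no probability assignment can push perplexity below that.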

I would suggest you take a look at how your model calculates the perplexity, because I suspect there might be an error.
