How to compute perplexity using KenLM?
Let's say we build a model on this:
$ wget https://gist.githubusercontent.com/alvations/1c1b388456dc3760ffb487ce950712ac/raw/86cdf7de279a2b9bceeb3adb481e42691d12fbba/something.txt
$ lmplz -o 5 < something.txt > something.arpa
Using the perplexity formula (https://web.stanford.edu/class/cs124/lec/languagemodeling.pdf), I sum the inverse log probabilities to get the inner term and then take the nth root, but the perplexity number comes out unusually small:
>>> import math
>>> import kenlm
>>> m = kenlm.Model('something.arpa')
# Sentence seen in data.
>>> s = 'The development of a forward-looking and comprehensive European migration policy,'
>>> list(m.full_scores(s))
[(-0.8502398729324341, 2, False), (-3.0185394287109375, 3, False), (-0.3004383146762848, 4, False), (-1.0249041318893433, 5, False), (-0.6545327305793762, 5, False), (-0.29304179549217224, 5, False), (-0.4497605562210083, 5, False), (-0.49850910902023315, 5, False), (-0.3856896460056305, 5, False), (-0.3572353720664978, 5, False), (-1.7523181438446045, 1, False)]
>>> n = len(s.split())
>>> sum_inv_logs = -1 * sum(score for score, _, _ in m.full_scores(s))
>>> math.pow(sum_inv_logs, 1.0/n)
1.2536033936438895
Trying again with a sentence not found in the data:
# Sentence not seen in data.
>>> s = 'The European developement of a forward-looking and comphrensive society is doh.'
>>> sum_inv_logs = -1 * sum(score for score, _, _ in m.full_scores(s))
>>> sum_inv_logs
35.59524390101433
>>> n = len(s.split())
>>> math.pow(sum_inv_logs, 1.0/n)
1.383679905428275
And trying again with completely out-of-domain data:
>>> s = """On the evening of 5 May 2017, just before the French Presidential Election on 7 May, it was reported that nine gigabytes of Macron's campaign emails had been anonymously posted to Pastebin, a document-sharing site. In a statement on the same evening, Macron's political movement, En Marche!, said: "The En Marche! Movement has been the victim of a massive and co-ordinated hack this evening which has given rise to the diffusion on social media of various internal information"""
>>> sum_inv_logs = -1 * sum(score for score, _, _ in m.full_scores(s))
>>> sum_inv_logs
282.61719834804535
>>> n = len(list(m.full_scores(s)))
>>> n
79
>>> math.pow(sum_inv_logs, 1.0/n)
1.0740582373271952
Although longer sentences are expected to have lower perplexity, it's strange that the differences are less than 1.0 and in the range of decimals.
Is the above the right way to compute perplexity with KenLM? If not, does anyone know how to compute perplexity with the KenLM Python API?
See https://github.com/kpu/kenlm/blob/master/python/kenlm.pyx#L182
import kenlm

model = kenlm.Model("something.arpa")
per = model.perplexity("your text sentence")
print(per)
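For reference, the method at that line boils down to the following (paraphrased from the Cython source linked above; treat this as a sketch, not the authoritative definition):

import kenlm

model = kenlm.Model("something.arpa")
sentence = "your text sentence"

# score() returns the total log10 probability of the sentence;
# the +1 accounts for the implicit </s> token, which is scored
# but not part of the whitespace-split word count.
words = len(sentence.split()) + 1
print(10.0 ** (-model.score(sentence) / words))  # same value as model.perplexity(sentence)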
The perplexity formula is:

PP(W) = P(w_1 w_2 ... w_N)^{-1/N}
But that's using the raw probabilities, so in code:
import math
import numpy as np
import kenlm

m = kenlm.Model('something.arpa')
s = 'The development of a forward-looking and comprehensive European migration policy,'

# The scores are log probabilities in base 10, so convert each back to a
# probability and take the product of the inverse probabilities:
product_inv_prob = np.prod([math.pow(10.0, -score) for score, _, _ in m.full_scores(s)])
n = len(list(m.full_scores(s)))
perplexity = math.pow(product_inv_prob, 1.0/n)
Or directly using the log (base 10) probabilities:
sum_inv_logprob = -1 * sum(score for score, _, _ in m.full_scores(s))
n = len(list(m.full_scores(s)))
perplexity = math.pow(10.0, sum_inv_logprob / n)
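Assuming full_scores() is left at its defaults (which, if I read the API correctly, also score the implicit </s> token, so n here is the word count plus one), this manual value should agree with the built-in method:

# Sanity check against the built-in method (up to floating-point noise):
assert abs(perplexity - m.perplexity(s)) < 1e-6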
Source: https://www.mail-archive.com/moses-support@mit.edu/msg15341.html
Just a comment on alvas' answer: the line
sum_inv_logprob = sum(score for score, _, _ in m.full_scores(s))
should actually be:
sum_inv_logprob = -1.0 * sum(score for score, _, _ in m.full_scores(s))
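The negation matters because full_scores() yields log10 probabilities, which are negative; without it, 10 is raised to a negative power and the "perplexity" falls below 1. A minimal numeric sketch with made-up scores:

import math

scores = [-0.85, -3.02, -0.30]          # hypothetical log10 probabilities
sum_inv_logprob = -1.0 * sum(scores)    # 4.17, positive
print(math.pow(10.0, sum_inv_logprob / len(scores)))  # ~24.5, a plausible perplexity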
You can simply use:
import kenlm

m = kenlm.Model('something.arpa')
ppl = m.perplexity('something')