
Implementation of Unlikelihood Training loss in PyTorch

I am trying to implement the Unlikelihood Training loss proposed in the research paper Neural Text Generation with Unlikelihood Training. This loss is an updated version of the negative log-likelihood loss (NLLLoss).

The main idea of this loss is that it penalizes unwanted tokens during the training process.
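For reference, the image in the original post presumably showed the paper's token-level unlikelihood objective, which for step $t$, a set $\mathcal{C}^{t}$ of unwanted candidate tokens, and weight $\alpha$ is:

$$\mathcal{L}^{t}_{\mathrm{UL}} = -\alpha \sum_{c \in \mathcal{C}^{t}} \log\left(1 - p_\theta(c \mid x_{<t})\right) - \log p_\theta(x_t \mid x_{<t})$$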

This is my code:

import torch

def NLLLoss(logs, targets, c, alpha=0.1):
    # logs: (N, V) log-probabilities, targets: (N,) gold token ids,
    # c: (N,) ids of tokens that should be discouraged
    out = torch.zeros_like(targets, dtype=torch.float)
    for i in range(len(targets)):
        # out[i] = logs[i][targets[i]]  # The original NLLLoss implementation
        out[i] = alpha * (1 - logs[i][c[i]]) * logs[i][targets[i]]
    return -out.sum() / len(out)

The commented line is the original NLLLoss implementation. This code runs fine, but I was wondering: is this implementation correct?

No, log(1 - x) does not equal 1 - log(x). I think what you need is here.
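To make the fix concrete, here is a minimal sketch, assuming logs holds log-probabilities (e.g. from log_softmax) and c holds one unwanted token id per position. The function name unlikelihood_nll_loss and the single-candidate-per-position setup are illustrative assumptions; the paper sums the unlikelihood term over a whole candidate set C^t.

import torch

def unlikelihood_nll_loss(logs, targets, c, alpha=0.1):
    # logs: (N, V) log-probabilities, targets: (N,) gold token ids,
    # c: (N,) unwanted token ids (hypothetical single-candidate setup)
    out = torch.zeros_like(targets, dtype=torch.float)
    for i in range(len(targets)):
        # likelihood term: log p(target)
        likelihood = logs[i][targets[i]]
        # unlikelihood term: log(1 - p(c)), with p(c) = exp(log p(c));
        # clamping keeps the log finite when p(c) approaches 1
        p_c = torch.exp(logs[i][c[i]])
        unlikelihood = torch.log(torch.clamp(1.0 - p_c, min=1e-7))
        out[i] = likelihood + alpha * unlikelihood
    return -out.sum() / len(out)

Note that the question's expression alpha * (1 - logs[i][c[i]]) * logs[i][targets[i]] multiplies the two terms and applies 1 - x directly to a log-probability, whereas the paper adds a log(1 - p) penalty to the usual NLL term.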
