
Have I implemented self-attention correctly in PyTorch?

This is my attempt at implementing self-attention using PyTorch. Have I done anything wrong, or could it be improved somehow?

import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, embedding_dim):
        super(SelfAttention, self).__init__()

        self.keys = nn.Linear(embedding_dim, embedding_dim)
        self.queries = nn.Linear(embedding_dim, embedding_dim)
        self.values = nn.Linear(embedding_dim, embedding_dim)

    
    def forward(self, x):
        keys = self.keys(x)
        queries = self.queries(x)
        values = self.values(x)
        
        scores_prime = torch.matmul(queries.T, keys)
        scores = nn.functional.softmax(scores_prime)

        context_vectors = torch.matmul(values, scores)

        return context_vectors

My test vector ran through without error, but I can't be sure I didn't make a mistake.

To better test your implementation, I suggest you use a different dimension for the queries and keys. When all three projections share the same dimension, the shapes happen to line up even when the matmuls are wrong, so the mistake goes unnoticed at runtime. I think you swapped the roles of the queries and keys.
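
For illustration, here is a minimal sketch of that suggestion, assuming a 2-D input of shape (seq_len, embedding_dim); the separate qk_dim parameter and the class name are introduced here for the example. It follows the standard scaled dot-product formulation, so the attention weights are computed over sequence positions; with queries.T in the matmul, as in the original code, the shapes would no longer match and the mistake would surface immediately.

import torch
import torch.nn as nn

class SelfAttentionSketch(nn.Module):
    def __init__(self, embedding_dim, qk_dim):
        super().__init__()
        # Projecting queries/keys to a different dimension than the values
        # makes shape mistakes fail loudly instead of passing silently.
        self.queries = nn.Linear(embedding_dim, qk_dim)
        self.keys = nn.Linear(embedding_dim, qk_dim)
        self.values = nn.Linear(embedding_dim, embedding_dim)
        self.qk_dim = qk_dim

    def forward(self, x):
        # x: (seq_len, embedding_dim)
        queries = self.queries(x)                       # (seq_len, qk_dim)
        keys = self.keys(x)                             # (seq_len, qk_dim)
        values = self.values(x)                         # (seq_len, embedding_dim)

        # Scaled dot-product attention: one weight per pair of positions.
        scores = queries @ keys.T / self.qk_dim ** 0.5  # (seq_len, seq_len)
        weights = torch.softmax(scores, dim=-1)

        return weights @ values                         # (seq_len, embedding_dim)

x = torch.randn(5, 16)                                  # 5 tokens, embedding_dim = 16
attn = SelfAttentionSketch(embedding_dim=16, qk_dim=8)
print(attn(x).shape)                                    # torch.Size([5, 16])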
