
Dimension out of range when applying l2 normalization in Pytorch

I'm getting a runtime error:

RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

and can't figure out how to fix it.

The error appears to refer to the line:

i_enc = F.normalize(input=i_batch, p=2, dim=1, eps=1e-12)  # (batch, K, feat_dim)

I'm trying to encode image features (batch x 36 x 2048) by applying an L2 norm. Below is the full code for the section.

def forward(self, q_batch, i_batch):

    # batch size = 512
    # q -> 512(batch)x14(length)
    # i -> 512(batch)x36(K)x2048(f_dim)
    # one-hot -> glove
    emb = self.embed(q_batch)
    output, hn = self.gru(emb.permute(1, 0, 2))  
    q_enc = hn.view(-1,self.h_dim)

    # image encoding with l2 norm
    i_enc = F.normalize(input=i_batch, p=2, dim=1, eps=1e-12)  # (batch, K, feat_dim)


    q_enc_copy = q_enc.repeat(1, self.K).view(-1, self.K, self.h_dim)

    q_i_concat = torch.cat((i_enc, q_enc_copy), -1)
    q_i_concat = self.non_linear(q_i_concat, self.td_W, self.td_W2)  # 512 x 36 x 512
    i_attention = self.att_w(q_i_concat)  #512x36x1
    i_attention = F.softmax(i_attention.squeeze(),1)
    #weighted sum
    i_enc = torch.bmm(i_attention.unsqueeze(1), i_enc).squeeze()  # (batch, feat_dim)

    # element-wise multiplication
    q = self.non_linear(q_enc, self.q_W, self.q_W2)
    i = self.non_linear(i_enc, self.i_W, self.i_W2)
    h = torch.mul(q, i)  # (batch, hid_dim)

    # output classifier
    # BCE with logitsloss
    score = self.c_Wo(self.non_linear(h, self.c_W, self.c_W2))

    return score

I would appreciate any help. Thanks

I would suggest checking the shape of i_batch (e.g. print(i_batch.shape)), as I suspect i_batch has only 1 dimension (e.g. of shape [N]).

This would explain why PyTorch is complaining that you can normalize only over dimension #0, while you are asking for the operation to be done over dimension #1 (cf. dim=1).
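
As a minimal sketch of that check (the 512 x 36 x 2048 shape is taken from the comments in your code; the flat tensor is just a hypothetical example of what might actually be arriving), you can reproduce the error with a 1-D input and see that the same call succeeds once the input has the expected three dimensions:

import torch
import torch.nn.functional as F

# If i_batch somehow arrives as a 1-D tensor, dim=1 does not exist
# (only dim 0 / -1 are valid), which produces exactly this error:
flat = torch.randn(2048)
try:
    F.normalize(flat, p=2, dim=1, eps=1e-12)
except (RuntimeError, IndexError) as e:
    print(e)  # Dimension out of range (expected to be in range of [-1, 0], but got 1)

# With the 3-D shape described in the comments (batch, K, feat_dim),
# the same call runs fine and keeps the shape:
i_batch = torch.randn(512, 36, 2048)
i_enc = F.normalize(i_batch, p=2, dim=1, eps=1e-12)
print(i_enc.shape)  # torch.Size([512, 36, 2048])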
