
How to get weights for custom loss function in pytorch?

I have a model in PyTorch and would like to add L1 regularization inside the loss_function. But I don't want to pass the weights to loss_function() - is there a better way of doing this? For details see loss_function() below.

import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, inp_size, hid_size):
        super(AutoEncoder, self).__init__()

        self.lambd = 1.

        # Encoder
        self.e1 = nn.Linear(inp_size, hid_size)

        # Decoder
        self.d1 = nn.Linear(hid_size, inp_size)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        encode = self.e1(x)
        decode = self.sigmoid(self.d1(encode))
        return decode

    def loss_function(self, recon_x, x):
        l2_loss = nn.MSELoss()

        # Here I would like to compute the L1 regularization of the weight parameters
        loss = l2_loss(recon_x, x) + self.lambd * (l1_loss(self.e1) + l1_loss(self.d1))
        return loss

I think something like this could work:

We define a loss function that takes a layer as input. An nn.Parameter is itself a torch Tensor, so layer.weight can be passed to torch.norm directly; avoid using .data here, since that detaches the weights from the autograd graph and the penalty would then contribute no gradients during training. We then compute the norm of the layer's weights with p=1 (L1).

import torch

def l1_loss(layer):
    return torch.norm(layer.weight, p=1)

lin1 = nn.Linear(8, 64)
l = l1_loss(lin1)
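
To tie this back to the question: loss_function is a method of the model, so the layers are already reachable through self and nothing extra needs to be passed in. A minimal sketch, assuming the AutoEncoder class and the l1_loss helper above (the training snippet with Adam and the x batch is only illustrative):

def loss_function(self, recon_x, x):
    l2_loss = nn.MSELoss()

    # Reconstruction term plus an L1 penalty on the encoder/decoder weights.
    # self.e1 and self.d1 are the layers defined in __init__, accessed via self.
    reg = l1_loss(self.e1) + l1_loss(self.d1)
    return l2_loss(recon_x, x) + self.lambd * reg

model = AutoEncoder(inp_size=8, hid_size=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

recon = model(x)                      # x: a batch of shape (batch, 8)
loss = model.loss_function(recon, x)  # MSE + lambd * L1(weights)
loss.backward()                       # the L1 term produces gradients for e1 and d1
optimizer.step()

Because the L1 term is built from the weights themselves (not detached copies), it is penalized by the optimizer exactly like the reconstruction term.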

