
How to pad a tensor with zeros in PyTorch?

A really simple question here, but I can't seem to figure this one out. As part of preparing for my Deep Learning final, I'm trying to solve questions from previous exams. I need to write a method similar to Conv2d that supports stride and padding. My current code is:

import torch
import torch.nn as nn

class MyConv2d(nn.Module):
    def __init__(self, in_channels=1, out_channels=1, kernel_size=(1,1), stride=1, padding=0):
        super().__init__()  # required before registering nn.Parameter fields
        # Set input as fields
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.p = kernel_size[0]
        self.q = kernel_size[1]
        self.stride = stride
        self.padding = padding
        self.kern = nn.Parameter(torch.rand((out_channels, in_channels, kernel_size[0], kernel_size[1])))
        self.bias = nn.Parameter(torch.rand((out_channels, 1)))

    def forward(self, X):
        X_clone = X.clone()
        # output spatial size: floor((size + 2*padding - kernel) / stride) + 1
        h = (X_clone.size(2) + 2 * self.padding - self.p) // self.stride + 1
        w = (X_clone.size(3) + 2 * self.padding - self.q) // self.stride + 1
        result = torch.empty(X_clone.size(0), self.out_channels, h, w)

Test case:

batch_size = 3
H, W = 6,6
in_channels = 3
out_channels = 1
kernel_size = (2,2)
stride = 2
padding= 1
X = torch.rand(batch_size, in_channels, H, W)
conv = MyConv2d(in_channels=in_channels, out_channels=out_channels,kernel_size=kernel_size, stride=stride, padding=padding)
res = conv.forward(X)
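For reference, the shape formula in forward gives h = w = (6 + 2·1 − 2) // 2 + 1 = 4 for these parameters, which a quick check against PyTorch's built-in nn.Conv2d confirms:

```python
import torch
import torch.nn as nn

# With H = W = 6, padding = 1, a 2x2 kernel and stride 2:
# (6 + 2*1 - 2) // 2 + 1 = 4 in each spatial dimension.
X = torch.rand(3, 3, 6, 6)
ref = nn.Conv2d(3, 1, kernel_size=(2, 2), stride=2, padding=1)
out = ref(X)
assert out.shape == (3, 1, 4, 4)
```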

The forward method should give the same result as Conv2d. The next step in forward is padding with zeros, but I can't seem to find an easy way to pad X_clone with zeros. How can it be done?

EDIT: Forgot to mention that, as part of the question, I'm not allowed to use any other methods under nn.

I can't think of any fancier method than creating a new tensor and copying the original one into it.

This padding function could be helpful:

def zero_padding(input_tensor, pad_size: int = 1):
  h, w = input_tensor.shape  # assuming no batch and channel dimension
  pad_tensor = torch.zeros([pad_size*2 + h, pad_size*2 + w])
  pad_tensor[pad_size:pad_size+h, pad_size:pad_size+w] = input_tensor
  return pad_tensor

You can expand this function to work with inputs containing batch and channel dimensions, arbitrary padding, etc.
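Putting that padding idea together with the sliding-window accumulation, the rest of forward could be sketched as a standalone helper (a hypothetical naive_conv2d; a sketch assuming cross-correlation semantics like nn.Conv2d, not the only way to structure the exam solution):

```python
import torch

def naive_conv2d(x, kern, bias, stride=1, padding=0):
    """Naive Conv2d: zero-pad the input, then slide the kernel with the given stride."""
    n, c_in, h_in, w_in = x.shape
    c_out, _, p, q = kern.shape
    # Zero-pad the two spatial dimensions by copying x into a larger zero tensor.
    xp = torch.zeros(n, c_in, h_in + 2 * padding, w_in + 2 * padding, dtype=x.dtype)
    xp[:, :, padding:padding + h_in, padding:padding + w_in] = x
    # Output spatial size: floor((size + 2*padding - kernel) / stride) + 1
    h = (h_in + 2 * padding - p) // stride + 1
    w = (w_in + 2 * padding - q) // stride + 1
    out = torch.empty(n, c_out, h, w, dtype=x.dtype)
    for i in range(h):
        for j in range(w):
            patch = xp[:, :, i * stride:i * stride + p, j * stride:j * stride + q]
            # (n, 1, c_in, p, q) * (c_out, c_in, p, q) broadcasts, then
            # summing over c_in, p, q leaves one value per output channel.
            out[:, :, i, j] = (patch.unsqueeze(1) * kern).sum(dim=(2, 3, 4))
    return out + bias.view(1, c_out, 1, 1)
```

Sharing weights with an actual nn.Conv2d is a convenient way to check that the sketch reproduces the built-in result.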
