With PyTorch, how is my Conv1d dimension reducing when I have padding?

My conv module is:
return torch.nn.Sequential(
    torch.nn.Conv1d(
        in_channels=in_channels,
        out_channels=in_channels,
        kernel_size=2,
        stride=1,
        dilation=1,
        padding=1
    ),
    torch.nn.ReLU(),
    torch.nn.Conv1d(
        in_channels=in_channels,
        out_channels=in_channels,
        kernel_size=2,
        stride=1,
        dilation=2,
        padding=1
    ),
    torch.nn.ReLU(),
    torch.nn.Conv1d(
        in_channels=in_channels,
        out_channels=in_channels,
        kernel_size=2,
        stride=1,
        dilation=4,
        padding=1
    ),
    torch.nn.ReLU()
)
And in forward, I have:
down_out = self.downscale_time_conv(inputs)
inputs has a .size of torch.Size([8, 161, 24]). I'd expect down_out to have the same size, but instead it has torch.Size([8, 161, 23]).

Where did that last element go?
The answer can be found in the PyTorch documentation online (here). For every operation, the output shape is expressed in terms of the input parameters. For Conv1d, the output length is:

L_out = floor((L_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)
For each Conv1d:
- L1 = 25 → int((24 + 2*1 - 1*(2 - 1) - 1) / 1 + 1)
- L2 = 25 → int((25 + 2*1 - 2*(2 - 1) - 1) / 1 + 1)
- L3 = 23 → int((25 + 2*1 - 4*(2 - 1) - 1) / 1 + 1)
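The three steps above can be traced with a quick pure-Python sketch of that length formula (the helper name conv1d_out_len is mine, not a PyTorch function):

```python
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    # Output-length formula from the Conv1d docs; // gives the floor
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

l = 24
for dilation in (1, 2, 4):
    l = conv1d_out_len(l, kernel_size=2, padding=1, dilation=dilation)
    print(l)  # 25, then 25, then 23
```

The length first grows (padding adds 2, the kernel only consumes 1), then shrinks once the dilation outgrows the padding.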
Do not forget that Lin is the previous layer's output size.
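As a side note, if the goal was to keep the length unchanged: with stride 1, each conv consumes dilation * (kernel_size - 1) positions, so that much total padding preserves the input length. Since Conv1d's padding argument pads both sides equally, an even kernel needs the padding applied asymmetrically (e.g. via torch.nn.functional.pad before the conv). A sketch of just the arithmetic, with a hypothetical helper that counts padding as a single total:

```python
def out_len_with_total_pad(l_in, kernel_size, total_padding, stride=1, dilation=1):
    # Same Conv1d length formula, but with padding counted as one total
    # (as if applied asymmetrically before the conv)
    return (l_in + total_padding - dilation * (kernel_size - 1) - 1) // stride + 1

l = 24
for dilation in (1, 2, 4):
    # total padding = dilation * (kernel_size - 1) keeps the length constant
    l = out_len_with_total_pad(l, kernel_size=2, total_padding=dilation, dilation=dilation)
    print(l)  # stays 24 at every layer
```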