Pytorch matrix size issue does not multiply

This may have been answered before, so I would be happy about any links. I am new to PyTorch and do not understand why my Conv2d pipeline is failing with

mat1 and mat2 shapes cannot be multiplied (64x49 and 3136x512)
        self.net = nn.Sequential(
            nn.Conv2d(in_channels=c, out_channels=32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=2),
            nn.ReLU(),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(3136, 512),
            nn.ReLU(),
            nn.Linear(512, output_size)
        )

with input shape 1x84x84.

I did the calculation, and this is how the size breaks down over the different steps with the kernel and stride settings per layer.

84 ->  K:8 , S:4 => 20
20 -> K:3 , S:2 => 9
9 -> K:3 , S:1 => 7

7^2 * 64 => 3136  for the flattened layer
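
For reference, a quick sketch of that arithmetic, using the standard output-size formula floor((n - k) / s) + 1 for a convolution with no padding and no dilation:

# Verify the per-layer spatial sizes and the flattened feature count.
def conv_out(n, k, s):
    return (n - k) // s + 1

n = 84
for k, s in [(8, 4), (3, 2), (3, 1)]:
    n = conv_out(n, k, s)
    print(n)          # 20, 9, 7

print(n * n * 64)     # 7 * 7 * 64 = 3136, matching nn.Linear(3136, 512)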

I am not sure where the 64x49 is coming from.

I have tried your model and your calculation is totally correct. The problem lies in your input. If your input shape is 1x84x84, a 3D tensor, you should actually pass a 4D tensor, where the first dimension represents the batch size. You may want to read more about batching, which is widely used to speed up computation.
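
For illustration, here is a minimal sketch of where the 64x49 in the error comes from (c=1 is assumed just for the demo; recent PyTorch versions also accept an unbatched 3D input to Conv2d, which appears to be what happened here): with a 3D input the conv stack yields a (64, 7, 7) tensor, and nn.Flatten(), which starts flattening at dim 1, only collapses the last two dimensions into 49, so the first Linear layer receives a 64x49 matrix instead of Nx3136.

import torch
import torch.nn as nn

# Same conv stack as in the question, with c=1 assumed.
convs = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
)

x3d = torch.rand(1, 84, 84)       # 3D input: no batch dimension
feats = convs(x3d)
print(feats.shape)                # torch.Size([64, 7, 7])
print(nn.Flatten()(feats).shape)  # torch.Size([64, 49]) -> the 64x49 in the error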

If you just want to test on a single sample, you can simply add a dimension, e.g. x = x[None, :], to make it a 4D tensor. This is a quick fix for your problem.
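
A minimal usage sketch of that fix (c=1 and output_size=4 are placeholder assumptions):

import torch
import torch.nn as nn

# Add a leading batch dimension so the single sample becomes a 4D
# tensor of shape (batch, channels, height, width).
net = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(3136, 512), nn.ReLU(),
    nn.Linear(512, 4),
)

x = torch.rand(1, 84, 84)   # single sample, shape (1, 84, 84)
x = x[None, :]              # shape (1, 1, 84, 84); x.unsqueeze(0) is equivalent
print(net(x).shape)         # torch.Size([1, 4]) -- no size mismatch anymore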
