
What is wrong with the following implementation of Conv1d?

I am trying to implement a Conv1d layer with Batch Normalization but I keep getting the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-32-ef6e122ea50c> in <module>()
----> 1 test()
      2 for epoch in range(1, n_epochs + 1):
      3   train(epoch)
      4   test()

7 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    258                             _single(0), self.dilation, self.groups)
    259         return F.conv1d(input, weight, bias, self.stride,
--> 260                         self.padding, self.dilation, self.groups)
    261 
    262     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Expected 3-dimensional input for 3-dimensional weight [25, 40, 5], but got 2-dimensional input of size [32, 40] instead

The data is passed in batches of 32 using the DataLoader class, and it has 40 features and 10 labels. Here is my model:

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        #self.flatten=nn.Flatten()
        self.net_stack=nn.Sequential(
            nn.Conv1d(in_channels=40, out_channels=25, kernel_size=5, stride=2), #applying batch norm
            nn.ReLU(),
            nn.BatchNorm1d(25, affine=True),
            nn.Conv1d(in_channels=25, out_channels=20, kernel_size=5, stride=2), #applying batch norm
            nn.ReLU(),
            nn.BatchNorm1d(20, affine=True),
            nn.Linear(20, 10),
            nn.Softmax(dim=1))

    def forward(self,x):
        #x=torch.reshape(x, (1,-1))
        result=self.net_stack(x)
        return result

I have tried the fixes given in other answers, like unsqueezing the input tensor, but none of the models in those questions uses Conv1d with BatchNorm1d, so I am not able to narrow the problem down to the layer that must be causing the error. I have just started using PyTorch and was able to implement a simple linear NN model, but I am facing this error while using a convolutional NN on the same data.

You need to add a batch dimension to your input (and also change the number of input channels).

A conv1d layer accepts inputs of shape [B, C, L], where B is the batch size, C is the number of channels and L is the width/length of your input. Also, your conv1d layer expects 40 input channels:

nn.Conv1d(in_channels=40, out_channels=25, kernel_size=5, stride=2)

Hence, your input tensor x must have shape [B, 40, L], while right now it has shape [32, 40].
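
For example, a minimal shape check (the tensors below are illustrative, not taken from the question's data) shows that the first conv layer accepts a [B, 40, L] input but rejects a 2-dimensional [32, 40] one:

import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=40, out_channels=25, kernel_size=5, stride=2)

ok = torch.randn(32, 40, 100)   # [B=32, C=40, L=100]
print(conv(ok).shape)           # torch.Size([32, 25, 48])

bad = torch.randn(32, 40)       # 2-D input, like the one in the error message
conv(bad)                       # raises a RuntimeError (exact message depends on the PyTorch version)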

Try:

def forward(self,x):
    result=self.net_stack(x[None])
    return result

You will get another error complaining about a dimensions mismatch, suggesting you need to change the number of input channels to 40.
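
As a rough sketch of why (assuming the batches really have shape [32, 40] as described): x[None] prepends a dimension of size 1, so the conv layer reads the result as batch size 1 with 32 channels of length 40 rather than 40 channels:

x = torch.randn(32, 40)     # one batch as described in the question
print(x[None].shape)        # torch.Size([1, 32, 40]) -> B=1, C=32, L=40
# conv(x[None])             # RuntimeError: the layer was built with in_channels=40, but this input has 32 channels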

