
How to fix "ValueError: Expected input batch_size (1) to match target batch_size (4)."?

I'm training a PyTorch neural network on Google Colab to classify sign language alphabets, 29 classes in total.

We've been trying to fix the code by changing various parameters, but it still doesn't work.

    transform = transforms.Compose([

        #gray scale
        transforms.Grayscale(),

        #resize
        transforms.Resize((128,128)),

        #converting to tensor
        transforms.ToTensor(),

        #normalize
        transforms.Normalize( (0.1307,), (0.3081,)),
    ])

    data_dir = 'data/train/asl_alphabet_train'

    #dataset
    full_dataset = datasets.ImageFolder(root=data_dir, transform=transform)

    #train & test 
    train_size = int(0.8 * len(full_dataset))
    test_size = len(full_dataset) - train_size

    #splitting
    train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size])

    trainloader = torch.utils.data.DataLoader(train_dataset , batch_size = 4, shuffle = True )
    testloader = torch.utils.data.DataLoader(test_dataset , batch_size = 4, shuffle = False )

    #neural net architecture (printed model repr; see the sketch at the end of the question)
    Net(
      (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv3): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (fc1): Linear(in_features=32768, out_features=128, bias=True)
      (fc2): Linear(in_features=128, out_features=29, bias=True)
      (dropout): Dropout(p=0.5)
    )

    loss_fn = nn.CrossEntropyLoss()
    #optimizer
    opt = optim.SGD(model.parameters(), lr=0.01)
    def train(model, train_loader, optimizer, loss_fn, epoch, device):
        #telling pytorch that training mode is on
        model.train()
        loss_epoch_arr = []

        #epochs
        for e in range(epoch):

            # batch index, (data, target)
            for batch_idx, (data, target) in enumerate(train_loader):

                #moving to GPU
                #data, target = data.to(device), target.to(device)

                #making gradients zero
                optimizer.zero_grad()

                #generating output
                output = model(data)

                #calculating loss
                loss = loss_fn(output, target)

                #backward propagation
                loss.backward()

                #stepping optimizer
                optimizer.step()

                #printing progress every 10 batches
                if batch_idx % 10 == 0:
                    print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                        e, batch_idx * len(data), len(train_loader.dataset),
                        100. * batch_idx / len(train_loader), loss.item()))


                #de-allocating memory
                del data,target,output
                #torch.cuda.empty_cache()

            #appending values
            loss_epoch_arr.append(loss.item())

        #plotting loss
        plt.plot(loss_epoch_arr)
        plt.show()

    train(model, trainloader , opt, loss_fn, 10, device)

ValueError: Expected input batch_size (1) to match target batch_size (4).

We're beginners with PyTorch and are trying to figure out what the problem is.
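
Editor's note: the Net class definition isn't included in the question, only its printed repr. A minimal sketch consistent with those layers might look like the following; the forward pass, in particular the pooling layout and how x is flattened before fc1, is an assumption, and that flattening step is exactly where this kind of batch-size error usually comes from:

    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            # layers copied from the printed architecture above
            self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
            self.conv3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
            self.fc1 = nn.Linear(32768, 128)
            self.fc2 = nn.Linear(128, 29)
            self.dropout = nn.Dropout(p=0.5)

        def forward(self, x):
            # assumed layout: conv -> relu -> 2x2 max pool, three times
            x = F.max_pool2d(F.relu(self.conv1(x)), 2)
            x = F.max_pool2d(F.relu(self.conv2(x)), 2)
            x = F.max_pool2d(F.relu(self.conv3(x)), 2)
            # the usual suspect: if 32768 is not equal to channels*height*width
            # of x at this point, view() merges samples into fewer rows and the
            # batch size of the output no longer matches the targets
            x = x.view(-1, 32768)
            x = self.dropout(F.relu(self.fc1(x)))
            return self.fc2(x)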

The most likely cause of this error is the value of in_features passed to nn.Linear. You haven't provided your full code for this part.

One way to check is to add the following line to your forward function (before the x.view call):

    print('x_shape:',x.shape)

The result will be of the form [a, b, c, d]. The in_features value should be equal to b*c*d.
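
Concretely, a minimal sketch of a corrected forward (the real one isn't posted, so the conv/pool layout and layer names here are assumptions taken from the printed repr) prints the shape and then flattens while keeping the batch dimension:

    # inside the Net class
    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = F.max_pool2d(F.relu(self.conv3(x)), 2)

        # prints a shape of the form [a, b, c, d]; fc1 needs in_features = b*c*d
        print('x_shape:', x.shape)

        # flatten everything except the batch dimension so the batch size can
        # never change silently; x.view(-1, 32768) would merge samples instead
        x = x.view(x.size(0), -1)

        x = self.dropout(F.relu(self.fc1(x)))
        return self.fc2(x)

If the print shows, for example, [4, 128, 8, 8], then fc1 should be declared as nn.Linear(128 * 8 * 8, 128). Flattening with x.view(x.size(0), -1) keeps the batch dimension fixed, so a wrong in_features then shows up as a clear shape error inside fc1 rather than as this confusing batch-size mismatch.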
