
RuntimeError: 1D target tensor expected, multi-target not supported Pytorch

I recently shifted to PyTorch from Keras and I am still trying to understand how all this works. Below is the code I have implemented to classify the MNIST dataset using a simple MLP. Just like I used to do in Keras, I have flattened each 28x28 image into a vector of 784, and I have also created a one-hot representation of my labels. In the model I was hoping that, given a vector of 784, the model would output a one-hot vector of probabilities, but as soon as my code reaches the loss computation I get the following error:

RuntimeError: 1D target tensor expected, multi-target not supported

Below is my code:

import numpy as np
import matplotlib.pyplot as plt
import torch
import time
from torch import nn, optim
from keras.datasets import mnist
from torch.utils.data import Dataset, DataLoader

RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)


# ----------------------------------------------------

class MnistDataset(Dataset):

    def __init__(self, data_size=0):

        (x, y), (_, _) = mnist.load_data()

        x = [i.flatten() for i in x]
        x = np.array(x, dtype=np.float32)

        if data_size < 0 or data_size > len(y):
            assert ("Data size should be between 0 to number of files in the dataset")

        if data_size == 0:
            data_size = len(y)

        self.data_size = data_size

        # picking 'data_size' random samples
        self.x = x[:data_size]
        self.y = y[:data_size]

        # scaling between 0-1
        self.x = (self.x / 255)

        # Creating one-hot representation of target
        y_encoded = []
        for label in y:
            encoded = np.zeros(10)
            encoded[label] = 1
            y_encoded.append(encoded)

        self.y = np.array(y_encoded)

    def __len__(self):
        return self.data_size

    def __getitem__(self, index):

        x_sample = self.x[index]
        label = self.y[index]

        return x_sample, label


# ----------------------------------------------------

num_train_samples = 10000
num_test_samples = 2000

# Each generator returns a single
# sample & its label on each iteration.
mnist_train = MnistDataset(data_size=num_train_samples)
mnist_test = MnistDataset(data_size=num_test_samples)

# Each generator returns a batch of samples on each iteration.
train_loader = DataLoader(mnist_train, batch_size=128, shuffle=True)  # 79 batches
test_loader = DataLoader(mnist_test, batch_size=128, shuffle=True)  # 16 batches


# ----------------------------------------------------

# Defining the Model Architecture

class MLP(nn.Module):

    def __init__(self):
        super().__init__()

        self.fc1 = nn.Linear(28 * 28, 100)
        self.act1 = nn.ReLU()
        self.fc2 = nn.Linear(100, 50)
        self.act2 = nn.ReLU()
        self.fc3 = nn.Linear(50, 10)
        self.act3 = nn.Sigmoid()

    def forward(self, x):
        x = self.act1(self.fc1(x))
        x = self.act2(self.fc2(x))
        output = self.act3(self.fc3(x))

        return output


# ----------------------------------------------------

model = MLP()

# Defining optimizer and loss function
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# ----------------------------------------------------

# Training the model

epochs = 10

print("Training Started...")

for epoch in range(epochs):
    for batch_index, (inputs, targets) in enumerate(train_loader):

        optimizer.zero_grad()  # Zero the gradients
        outputs = model(inputs)  # Forward pass
        loss = criterion(outputs, targets)  # Compute the Loss
        loss.backward()  # Compute the Gradients
        optimizer.step()  # Update the parameters

        # Evaluating the model
        total = 0
        correct = 0
        with torch.no_grad():
            for batch_idx, (inputs, targets) in enumerate(test_loader):
                outputs = model(inputs)
                _, predicted = torch.max(outputs.data, 1)
                total += targets.size(0)
                correct += predicted.eq(targets.data).cpu().sum()
            print('Epoch : {} Test Acc : {}'.format(epoch, (100. * correct / total)))

print("Training Completed Sucessfully")

# ----------------------------------------------------

I also read some other posts related to the same problem, and most of them said that for CrossEntropyLoss the target has to be a single number, which totally goes over my head. Can someone please explain a solution? Thank you.

For nn.CrossEntropyLoss you don't need a one-hot representation of the label; you just need to pass the prediction's logits, whose shape is (batch_size, n_class), and a target vector of shape (batch_size,) containing the class indices.

So just pass in the label index vector y instead of the one-hot vector.
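For instance, a minimal shape check (a standalone sketch, not taken from the post above):

import torch
from torch import nn

criterion = nn.CrossEntropyLoss()

logits = torch.randn(4, 10)           # (batch_size, n_class) raw scores from the model
targets = torch.tensor([3, 7, 0, 9])  # (batch_size,) integer class indices, dtype torch.int64

loss = criterion(logits, targets)     # works: 1D target of class indices
print(loss.item())

one_hot = torch.nn.functional.one_hot(targets, num_classes=10)
# criterion(logits, one_hot)  # passing a 2D one-hot target like this is what triggers
#                             # "1D target tensor expected, multi-target not supported"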

Fixed version of your code:

class MnistDataset(Dataset):

    def __init__(self, data_size=0):

        (x, y), (_, _) = mnist.load_data()

        x = [i.flatten() for i in x]
        x = np.array(x, dtype=np.float32)

        if data_size < 0 or data_size > len(y):
            assert ("Data size should be between 0 to number of files in the dataset")

        if data_size == 0:
            data_size = len(y)

        self.data_size = data_size

        # picking 'data_size' random samples
        self.x = x[:data_size]
        self.y = y[:data_size]

        # scaling between 0-1
        self.x = (self.x / 255)

        self.y = self.y.astype(np.int64)  # <-- keep integer class labels (no one-hot); Long dtype for CrossEntropyLoss

    def __len__(self):
        return self.data_size

    def __getitem__(self, index):

        x_sample = self.x[index]
        label = self.y[index]

        return x_sample, label

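With that change, a quick sanity check on one batch would look like this (a sketch, reusing the DataLoader setup from the question):

mnist_train = MnistDataset(data_size=10000)
train_loader = DataLoader(mnist_train, batch_size=128, shuffle=True)

inputs, targets = next(iter(train_loader))
print(inputs.shape, targets.shape)  # torch.Size([128, 784]) torch.Size([128])
print(targets.dtype)                # torch.int64, which is what CrossEntropyLoss expects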
Take a look at the PyTorch documentation for more detail: https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html
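One related point, since the loss expects logits: nn.CrossEntropyLoss already applies log-softmax internally, so the final Sigmoid in the question's MLP can simply be dropped and the raw output of fc3 passed to the loss. A sketch of that adjustment (my own suggestion, not part of the original fix):

from torch import nn

class MLP(nn.Module):

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 100)
        self.act1 = nn.ReLU()
        self.fc2 = nn.Linear(100, 50)
        self.act2 = nn.ReLU()
        self.fc3 = nn.Linear(50, 10)  # no Sigmoid on the output layer

    def forward(self, x):
        x = self.act1(self.fc1(x))
        x = self.act2(self.fc2(x))
        return self.fc3(x)  # raw logits; CrossEntropyLoss applies log-softmax itself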
