
Char RNN classification with batch size

I'm replicating this example for classification with a PyTorch char-RNN.

for iter in range(1, n_iters + 1):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    output, loss = train(category_tensor, line_tensor)
    current_loss += loss

I see that in every iteration only one example is taken, at random. I would like each epoch to go through the whole dataset with a specific batch size of examples. I could adjust the code to do this myself, but I was wondering whether such flags already exist.

Thank you

If you construct a Dataset class by inheriting from the PyTorch Dataset class and then feed it into the PyTorch DataLoader class, you can set the batch_size parameter to determine how many examples you get out in each iteration of your training loop.
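
To illustrate the general pattern, here is a minimal sketch of a Dataset fed into a DataLoader; ToyDataset and its dummy data are made-up placeholders for illustration, not part of the name-classification code below.

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Wraps any indexable collection of examples."""
    def __init__(self, items):
        self.items = items

    def __len__(self):
        return len(self.items)      # lets the DataLoader know the dataset size

    def __getitem__(self, idx):
        return self.items[idx]      # returns one example per index

loader = DataLoader(ToyDataset(list(range(10))), batch_size=4, shuffle=True)
for batch in loader:
    print(batch)                    # tensors of up to 4 examples, e.g. tensor([7, 2, 9, 5])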

I have followed the same tutorial as you. I can show you how I have used the PyTorch classes above to get the data in batches.

# load data into a DataFrame using the findFiles function as in the tutorial
import pandas as pd
from pathlib import Path

files = findFiles('data/names/*.txt') # load the files as in the tutorial into a dataframe
df_names = pd.concat([
    pd.read_table(f, names=["names"], header=None)
      .assign(lang=Path(f).stem)
    for f in files]).reset_index(drop=True)
print(df_names.head())
print(df_names.head())

# output: 
#      names      lang
# 0      Abe  Japanese
# 1  Abukara  Japanese
# 2   Adachi  Japanese
# 3     Aida  Japanese
# 4   Aihara  Japanese

# Make train and test data 
from sklearn.model_selection import train_test_split
X_train, X_dev, y_train, y_dev = train_test_split(df_names.names, df_names.lang,
                                                   train_size = 0.8)
df_train = pd.concat([X_train, y_train], axis=1)
df_val = pd.concat([X_dev, y_dev], axis=1)

Now I construct a custom Dataset class for the dataframe(s) above by inheriting from the PyTorch Dataset class.

import torch
from torch.utils.data import Dataset, DataLoader

class NameDatasetReader(Dataset):
    def __init__(self, df: pd.DataFrame):
        self.df = df.reset_index(drop=True) # reset index so positional lookups work after the split
        # encode each language as an integer so the labels can be collected into a tensor later
        self.lang_to_idx = {lang: i for i, lang in enumerate(sorted(self.df.lang.unique()))}

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx: int):
        row = self.df.iloc[idx]            # gets a row from the df
        input_name = list(row.names)       # turns name into a list of chars
        len_name = len(input_name)         # length of name (used to pad packed sequence)
        label = self.lang_to_idx[row.lang] # integer-encoded target
        return input_name, len_name, label

train_dat = NameDatasetReader(df_train) # make dataset from dataframe with training data

Now, the thing is that when you want to work with batches and sequences, you need the sequences to be of equal length within each batch. That is why I also return the length of the extracted name in the __getitem__() function above. It is used in the function that modifies the training examples in each batch.

This is called a collate function (passed to the DataLoader as collate_fn), and in this example it modifies each batch of your training data so that the sequences in a given batch are of equal length.

# Dictionary of all letters (as in the original tutorial,
#  I have just also inserted an entry for the padding token)
# all_letters comes from the tutorial: string.ascii_letters + " .,;'"
all_letters_dict = dict(zip(all_letters, range(1, len(all_letters) + 1)))
all_letters_dict['<PAD>'] = 0

# function to turn a name into a tensor
def line_to_tensor(line):
    """turns a name into a tensor of one hot encoded vectors"""
    tensor = torch.zeros(len(line),
                         len(all_letters_dict)) # (name_len x vocab_size) - <PAD> is part of vocab
    for li, letter in enumerate(line):
        tensor[li][all_letters_dict[letter]] = 1
    return tensor

from typing import List, Tuple

def collate_batch_lstm(input_data: List[Tuple]) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    """
    Combines multiple name samples into a single batch
    :param input_data: The combined input_ids, seq_lens, and labels for the batch
    :return: A tuple of tensors (input_ids, seq_lens, labels)
    """
    
    # loops over batch input and extracts vals 
    names = [i[0] for i in input_data] 
    seq_names_len = [i[1] for i in input_data] 
    labels = [i[2] for i in input_data] 

    max_length = max(seq_names_len) # longest sequence aka. name 

    # Pad all of the input samples to the max length 
    names = [(name + ["<PAD>"] * (max_length - len(name))) for name in names]  
    
    input_ids = [line_to_tensor(name) for name in names] # turn each list of chars into a tensor with one hot vecs
    
    # Make sure each sample is max_length long
    assert (all(len(i) == max_length for i in input_ids))
    return torch.stack(input_ids), torch.tensor(seq_names_len), torch.tensor(labels) 

Now I can construct a dataloader by passing the dataset object from above, the collate_batch_lstm() function, and a given batch_size to the DataLoader class.

train_dat_loader = DataLoader(train_dat, batch_size = 4, collate_fn = collate_batch_lstm)

You can now iterate over train_dat_loader, which returns a training batch with 4 names in each iteration.

Consider a given batch from train_dat_loader:

seq_tensor, seq_lengths, labels = next(iter(train_dat_loader))
print(seq_tensor.shape, seq_lengths.shape, labels.shape)
print(seq_tensor)
print(seq_lengths)
print(labels)
# output: 
# torch.Size([4, 11, 59]) torch.Size([4]) torch.Size([4])
# tensor([[[0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          ...,
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.]],

#         [[0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          ...,
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.]],

#         [[0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          ...,
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.]],

#         [[0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          [0., 0., 0.,  ..., 0., 0., 0.],
#          ...,
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.],
#          [1., 0., 0.,  ..., 0., 0., 0.]]])
# tensor([11,  3,  8,  7])
# tensor([14,  1, 14,  2])

It gives us a tensor of size (4 x 11 x 59): 4 because we specified a batch size of 4; 11 is the length of the longest name in the given batch (all other names have been padded with zeros so they are of equal length); and 59 is the number of characters in our vocabulary.

The next thing is to incorporate this into your training routine and use a packing routine to avoid doing redundant calculations on the zeros that you have padded your data with :)
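
To give an idea of that last step, here is a hedged sketch of what the packing could look like, assuming a hypothetical nn.LSTM (the rnn module and its hidden size are illustrative and not part of the original answer):

import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# hypothetical LSTM for illustration; input size matches the one-hot vocabulary
rnn = nn.LSTM(input_size=len(all_letters_dict), hidden_size=64, batch_first=True)

seq_tensor, seq_lengths, labels = next(iter(train_dat_loader))

# pack so the LSTM skips the padded time steps
packed = pack_padded_sequence(seq_tensor, seq_lengths,
                              batch_first=True, enforce_sorted=False)
packed_out, (h_n, c_n) = rnn(packed)

# unpack if you need the per-time-step outputs again
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)

# h_n[-1] is the final hidden state of each name (shape: batch x hidden_size);
# you could feed it to a linear layer over the language labels
print(h_n[-1].shape)  # e.g. torch.Size([4, 64])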
