PyTorch: Different training accuracies using same random seed

I am trying to evaluate my model on the whole training set after each epoch. This is what I did:

torch.manual_seed(1)
model = ConvNet(num_classes=num_classes)
cost_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)  

def compute_accuracy(model, data_loader):
    correct_pred, num_examples = 0, 0
    for features, targets in data_loader:
        logits = model(features)
        predicted_labels = torch.argmax(logits, 1)
        num_examples += targets.size(0)
        correct_pred += (predicted_labels == targets).sum()
    return correct_pred.float()/num_examples * 100

for epoch in range(num_epochs):
    model = model.train()
    for features, targets in train_loader:
        logits = model(features)
        cost = cost_fn(logits, targets)
        optimizer.zero_grad()
        cost.backward()
        optimizer.step()

    model = model.eval()
    print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
          epoch+1, num_epochs, 
          compute_accuracy(model, train_loader)))

The output was convincing:

Epoch: 001/005 training accuracy: 89.08%
Epoch: 002/005 training accuracy: 90.41%
Epoch: 003/005 training accuracy: 91.70%
Epoch: 004/005 training accuracy: 92.31%
Epoch: 005/005 training accuracy: 92.95%

But then I added another line at the end of the training loop, to also evaluate the model on the whole test set after each epoch:

for epoch in range(num_epochs):
    model = model.train()
    for features, targets in train_loader:
        logits = model(features)
        cost = cost_fn(logits, targets)
        optimizer.zero_grad()
        cost.backward()
        optimizer.step()

    model = model.eval()
    print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
          epoch+1, num_epochs, 
          compute_accuracy(model, train_loader)))
    print('\t\t testing accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))

But the training accuracies started to change:

Epoch: 001/005 training accuracy: 89.08%
         testing accuracy: 87.66%
Epoch: 002/005 training accuracy: 90.42%
         testing accuracy: 89.04%
Epoch: 003/005 training accuracy: 91.84%
         testing accuracy: 90.01%
Epoch: 004/005 training accuracy: 91.86%
         testing accuracy: 89.83%
Epoch: 005/005 training accuracy: 92.45%
         testing accuracy: 90.32%

Am I doing something wrong? I expected the training accuracies to remain the same because the manual seed is 1 in both cases. Is this expected behavior?

Setting the random seed does not stop the model from learning or pin down every result; a seed is only the starting point of a pseudo-random sequence. In your case the training data is shuffled using numbers drawn from that sequence (seeded with 1). If the extra evaluation pass over the test set consumes any numbers from the same global generator (for example because the test loader also shuffles), then from the second epoch onward the training data is shuffled in a different order, so the training accuracies come out slightly different.
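
A minimal sketch of the mechanism and one way around it, assuming your train_loader was created with shuffle=True and the default global generator (the toy dataset and loader names below are made up for illustration, not taken from your code):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset just to make the sketch self-contained.
features = torch.randn(8, 3)
targets = torch.arange(8)
dataset = TensorDataset(features, targets)

# With the default global RNG, a shuffling DataLoader draws a fresh seed from
# that RNG at the start of every epoch, so any extra random draw in between
# (such as an evaluation pass over another shuffled loader) changes the next
# epoch's shuffle order.
torch.manual_seed(1)
loader = DataLoader(dataset, batch_size=4, shuffle=True)
print([t.tolist() for _, t in loader])   # epoch 1 order
print([t.tolist() for _, t in loader])   # epoch 2 order

torch.manual_seed(1)
loader = DataLoader(dataset, batch_size=4, shuffle=True)
print([t.tolist() for _, t in loader])   # epoch 1 order (same as above)
_ = torch.rand(1)                        # stand-in for the extra test-set pass
print([t.tolist() for _, t in loader])   # epoch 2 order now differs

# One fix: give the training loader its own generator so evaluation passes
# (or any other random ops) cannot disturb its shuffle order.
g = torch.Generator()
g.manual_seed(1)
train_loader = DataLoader(dataset, batch_size=4, shuffle=True, generator=g)

With a dedicated generator for the training loader, the epoch-to-epoch shuffle order depends only on that generator, so adding or removing the test-set evaluation should leave the training accuracies unchanged.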
