Validation accuracy and loss is the same after each epoch
My validation accuracy is identical after every epoch, and I'm not sure what I'm doing wrong. I've added my CNN network and my training function below; the CNN is initialized once. The training function itself works perfectly well: the loss decreases and the accuracy increases with every epoch. I also wrote a test function with the same structure as my validation function, and the same thing happens there. My train/val split is 40000/10000, and I'm using CIFAR-10.
Here is my code:
```python
# Make train function (simple at first)
def train_network(model, optimizer, train_loader, num_epochs=10):
    total_epochs = notebook.tqdm(range(num_epochs))
    model.train()
    for epoch in total_epochs:
        train_acc = 0.0
        running_loss = 0.0
        for i, (x_train, y_train) in enumerate(train_loader):
            x_train, y_train = x_train.to(device), y_train.to(device)
            y_pred = model(x_train)
            loss = criterion(y_pred, y_train)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            running_loss += loss.item()
            train_acc += accuracy(y_pred, y_train)
        running_loss /= len(train_loader)
        train_acc /= len(train_loader)
        print('Evaluation Loss: %.3f | Evaluation Accuracy: %.3f' % (running_loss, train_acc))
```
```python
@torch.no_grad()
def validate_network(model, optimizer, val_loader, num_epochs=10):
    model.eval()
    total_epochs = notebook.tqdm(range(num_epochs))
    for epoch in total_epochs:
        accu = 0.0
        running_loss = 0.0
        for i, (x_val, y_val) in enumerate(val_loader):
            x_val, y_val = x_val.to(device), y_val.to(device)
            val_pred = model(x_val)
            loss = criterion(val_pred, y_val)
            running_loss += loss.item()
            accu += accuracy(val_pred, y_val)
        running_loss /= len(val_loader)
        accu /= len(val_loader)
        print('Val Loss: %.3f | Val Accuracy: %.3f' % (running_loss, accu))
```
OUTPUT:

```
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
```
So I guess my question is: how do I get validation accuracy and loss values that actually represent each epoch?
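For reference, the `accuracy` helper used above isn't shown in the post; a minimal sketch of what such a helper typically computes for classification logits (this exact implementation is an assumption, not taken from the question):

```python
import torch

def accuracy(y_pred, y_true):
    # Mean fraction of samples whose argmax class matches the label
    return (y_pred.argmax(dim=1) == y_true).float().mean().item()

# Toy batch: 4 samples, 2 classes
logits = torch.tensor([[2.0, 0.1], [0.2, 1.5], [0.3, 0.9], [1.2, 0.4]])
labels = torch.tensor([0, 1, 0, 0])
acc = accuracy(logits, labels)  # 3 of 4 argmax predictions correct -> 0.75
```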
What is happening here is that you run a loop for `number_of_epochs` and simply evaluate the same, unchanged network multiple times. I suggest calling the validation function during training, at the end of each epoch, to measure how much that epoch improved the model's performance. The training function would then look something like this:
```python
def train_network(model, optimizer, train_loader, val_loader, num_epochs=10):
    total_epochs = notebook.tqdm(range(num_epochs))
    for epoch in total_epochs:
        # Re-enable training mode each epoch, since validate_network
        # switched the model to eval mode at the end of the previous one
        model.train()
        train_acc = 0.0
        running_loss = 0.0
        for i, (x_train, y_train) in enumerate(train_loader):
            x_train, y_train = x_train.to(device), y_train.to(device)
            y_pred = model(x_train)
            loss = criterion(y_pred, y_train)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            running_loss += loss.item()
            train_acc += accuracy(y_pred, y_train)
        running_loss /= len(train_loader)
        train_acc /= len(train_loader)
        print('Train Loss: %.3f | Train Accuracy: %.3f' % (running_loss, train_acc))
        # Validate the freshly updated weights once per epoch
        validate_network(model, optimizer, val_loader, num_epochs=1)
```
Note that I added the validation loader as an input and call the validation function at the end of every epoch, with its number of epochs set to 1. A small further improvement would be to remove the epoch loop from the validation function entirely.
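Put together, the pattern looks like this end to end. The sketch below is a minimal, self-contained stand-in (tiny linear model, synthetic data, made-up sizes — none of it from the question) just to show one validation pass per epoch, run against that epoch's updated weights:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Synthetic stand-in for the CIFAR-10 loaders
x = torch.randn(64, 8)
y = torch.randint(0, 2, (64,))
train_loader = DataLoader(TensorDataset(x[:48], y[:48]), batch_size=16)
val_loader = DataLoader(TensorDataset(x[48:], y[48:]), batch_size=16)

model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

@torch.no_grad()
def validate(model, loader):
    model.eval()
    total = sum(criterion(model(xb), yb).item() for xb, yb in loader)
    return total / len(loader)

val_history = []
for epoch in range(3):
    model.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        criterion(model(xb), yb).backward()
        optimizer.step()
    # Validate the *updated* weights once per epoch
    val_history.append(validate(model, val_loader))
```

Because validation runs after each epoch's updates rather than in its own detached loop, `val_history` holds one value per epoch and those values track the training progress instead of repeating the same number.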