Accuracy of same validation dataset differs between last epoch and after fit

Validation accuracy and loss is the same after each epoch
My validation accuracy is identical after every epoch, and I am not sure what I'm doing wrong. I have added my CNN network and my training function below; the CNN is initialized only once. The training function works fine: the loss decreases and the accuracy increases each epoch. I also wrote a test function with the same structure as my validation function, and the same thing happens there. My train/val split is 40000/10000, and I am using CIFAR-10.
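For reference, a 40000/10000 split like the one described can be produced with `torch.utils.data.random_split`. This is a minimal, self-contained sketch; a dummy `TensorDataset` stands in for `torchvision.datasets.CIFAR10`, which is not loaded here:

```python
import torch
from torch.utils.data import TensorDataset, random_split, DataLoader

# Dummy stand-in for the 50000-sample CIFAR-10 training set;
# real code would use torchvision.datasets.CIFAR10 instead.
full_train = TensorDataset(torch.arange(50000, dtype=torch.float32).unsqueeze(1),
                           torch.randint(0, 10, (50000,)))

# Fixed seed so the split is reproducible across runs.
train_set, val_set = random_split(full_train, [40000, 10000],
                                  generator=torch.Generator().manual_seed(0))
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = DataLoader(val_set, batch_size=128, shuffle=False)
```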
Below is my code:
```python
# Make train function (simple at first)
def train_network(model, optimizer, train_loader, num_epochs=10):
    total_epochs = notebook.tqdm(range(num_epochs))
    model.train()
    for epoch in total_epochs:
        train_acc = 0.0
        running_loss = 0.0
        for i, (x_train, y_train) in enumerate(train_loader):
            x_train, y_train = x_train.to(device), y_train.to(device)
            y_pred = model(x_train)
            loss = criterion(y_pred, y_train)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            running_loss += loss.item()
            train_acc += accuracy(y_pred, y_train)
        running_loss /= len(train_loader)
        train_acc /= len(train_loader)
        print('Evaluation Loss: %.3f | Evaluation Accuracy: %.3f' % (running_loss, train_acc))
```
```python
@torch.no_grad()
def validate_network(model, optimizer, val_loader, num_epochs=10):
    model.eval()
    total_epochs = notebook.tqdm(range(num_epochs))
    for epoch in total_epochs:
        accu = 0.0
        running_loss = 0.0
        for i, (x_val, y_val) in enumerate(val_loader):
            x_val, y_val = x_val.to(device), y_val.to(device)
            val_pred = model(x_val)
            loss = criterion(val_pred, y_val)
            running_loss += loss.item()
            accu += accuracy(val_pred, y_val)
        running_loss /= len(val_loader)
        accu /= len(val_loader)
        print('Val Loss: %.3f | Val Accuracy: %.3f' % (running_loss, accu))
```
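The snippets above rely on an `accuracy` helper and a global `criterion` that are not shown in the post. A minimal version of the helper, assuming the model outputs raw class logits, could look like:

```python
import torch

def accuracy(y_pred, y_true):
    """Top-1 accuracy: fraction of samples whose argmax logit matches the label."""
    return (y_pred.argmax(dim=1) == y_true).float().mean().item()
```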
OUTPUT:

```
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
```
So I guess my question is: how do I get a validation accuracy and loss per epoch that actually reflect the model's progress?
What is happening here is that you run a loop for `num_epochs` iterations and simply evaluate the same unchanged network over and over, so the numbers are identical by construction. I suggest calling the validation function during training, at the end of each epoch, to measure how that epoch improved the model's performance. That means the training function should look something like:
```python
def train_network(model, optimizer, train_loader, val_loader, num_epochs=10):
    total_epochs = notebook.tqdm(range(num_epochs))
    for epoch in total_epochs:
        # Re-enable training mode each epoch, since validate_network
        # switches the model to eval mode at the end of the previous one.
        model.train()
        train_acc = 0.0
        running_loss = 0.0
        for i, (x_train, y_train) in enumerate(train_loader):
            x_train, y_train = x_train.to(device), y_train.to(device)
            y_pred = model(x_train)
            loss = criterion(y_pred, y_train)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            running_loss += loss.item()
            train_acc += accuracy(y_pred, y_train)
        running_loss /= len(train_loader)
        train_acc /= len(train_loader)
        print('Train Loss: %.3f | Train Accuracy: %.3f' % (running_loss, train_acc))
        validate_network(model, optimizer, val_loader, num_epochs=1)
```
Note that I added the validation loader as an input and call the validation function at the end of each epoch, with the validation epoch count set to 1. A small further improvement is to remove the epoch loop from the validation function entirely.
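With the epoch loop removed, the validation function reduces to a single pass over the validation set. A self-contained sketch (note the signature here is an assumption: `criterion` and `device` are passed in rather than taken from globals, the unused `optimizer` is dropped, and the metrics are returned so they can be logged):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def validate_network(model, val_loader, criterion, device="cpu"):
    """Single pass over the validation set; returns mean loss and accuracy."""
    model.eval()
    running_loss, accu = 0.0, 0.0
    for x_val, y_val in val_loader:
        x_val, y_val = x_val.to(device), y_val.to(device)
        val_pred = model(x_val)
        running_loss += criterion(val_pred, y_val).item()
        # Top-1 accuracy for this batch.
        accu += (val_pred.argmax(dim=1) == y_val).float().mean().item()
    running_loss /= len(val_loader)
    accu /= len(val_loader)
    print('Val Loss: %.3f | Val Accuracy: %.3f' % (running_loss, accu))
    return running_loss, accu
```

Called once per epoch from the training loop, this reports a fresh loss/accuracy that changes as the weights change.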