
How to run one batch in pytorch?

I am new to AI and Python, and I am trying to train on just a single batch in order to overfit it. I found this code: iter(train_loader).next()

But I am not sure where to implement it in my code. Even if I did, how can I check after each iteration to make sure that I am training on the same batch?

train_loader = torch.utils.data.DataLoader(
    dataset_train,
    batch_size=48,
    shuffle=True,
    num_workers=2
)

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(128*128*3,10)
)


nepochs = 3
statsrec = np.zeros((3,nepochs))

loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)


for epoch in range(nepochs):  # loop over the dataset multiple times

    running_loss = 0.0
    n = 0
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        
        # Zero the parameter gradients
        optimizer.zero_grad()

        # Forward, backward, and update parameters
        outputs = net(inputs)
        loss = loss_fn(outputs, labels)
        loss.backward()
        optimizer.step()
    
        # accumulate loss
        running_loss += loss.item()
        n += 1
    
    ltrn = running_loss/n
    ltst, atst = stats(train_loader, net)
    statsrec[:,epoch] = (ltrn, ltst, atst)
    print(f"epoch: {epoch} training loss: {ltrn: .3f}  test loss: {ltst: .3f} test accuracy: {atst: .1%}")

Please give me a hint.

If you want to train on a single batch, remove the loop over the dataloader:

for i, data in enumerate(train_loader, 0):
    inputs, labels = data

and simply get the first element of the train_loader iterator before looping over the epochs:

inputs, labels = next(iter(train_loader))
for epoch in range(nepochs):
    optimizer.zero_grad() 
    outputs = net(inputs)
    loss = loss_fn(outputs, labels)
    loss.backward()
    optimizer.step()
    # ...
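
To address the second part of the question (making sure the same batch is trained on each iteration): because the batch is fetched from the iterator once, before the epoch loop, the inputs and labels tensors never change between epochs. As a minimal sketch (not part of the original answer, reusing the variables inputs, labels, net, loss_fn, optimizer, and nepochs defined above), you can record a simple fingerprint of the batch and assert that it stays constant; the training loss should also drop towards zero as the network overfits the single batch:

# Hypothetical sanity check, assuming the variables from the snippets above.
batch_fingerprint = inputs.sum().item()  # cheap checksum of the fixed batch

for epoch in range(nepochs):
    # The tensors were fetched once, outside the loop, so this should never fail.
    assert inputs.sum().item() == batch_fingerprint

    optimizer.zero_grad()
    outputs = net(inputs)
    loss = loss_fn(outputs, labels)
    loss.backward()
    optimizer.step()

    # On a single batch the loss should steadily approach zero.
    print(f"epoch {epoch}: loss {loss.item():.4f}")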
