
if not (target.size() == input.size()): AttributeError: 'collections.OrderedDict' object has no attribute 'size' - I'm getting this error

I am trying to do semantic segmentation in PyTorch with the DeepLab v3 architecture using transfer learning, and this is the error I get. I am using the ISIC 2017 skin lesion dataset, with the images and labels resized to 160 x 240. Can someone help me fix this?
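The question does not include the data-loading code, but resizing the images and masks to a common 160 x 240 size could look roughly like the sketch below (assuming a reasonably recent torchvision). The ISICDataset class, the file layout, and the mask naming scheme are assumptions made for illustration; nearest-neighbour interpolation keeps the mask values binary.

import os
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

class ISICDataset(Dataset):
    """Hypothetical ISIC 2017 dataset: RGB images and binary lesion masks, resized to 160 x 240."""
    def __init__(self, image_dir, mask_dir, size=(160, 240)):
        self.image_dir, self.mask_dir, self.size = image_dir, mask_dir, size
        self.names = sorted(f for f in os.listdir(image_dir) if f.endswith(".jpg"))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        # assumed mask naming: ISIC_xxxxxxx.jpg -> ISIC_xxxxxxx_segmentation.png
        mask = Image.open(os.path.join(self.mask_dir, name.replace(".jpg", "_segmentation.png"))).convert("L")
        image = TF.to_tensor(TF.resize(image, self.size))  # bilinear resize, then a [3, 160, 240] float tensor
        mask = TF.resize(mask, self.size, interpolation=InterpolationMode.NEAREST)
        mask = (TF.to_tensor(mask) > 0.5).float().squeeze(0)  # [160, 240]; train_fn unsqueezes to [N, 1, 160, 240]
        return image, mask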

Main file

train function

def train_fn(loader, model, optimizer, loss_fn, scaler):
    loop = tqdm(loader)

    for batch_idx, (data, targets) in enumerate(loop):
        data = data.to(device=DEVICE).float()
        targets = targets.float().unsqueeze(1).to(device=DEVICE)

        # forward
        with torch.cuda.amp.autocast():
            predictions = model(data)
            loss = loss_fn(predictions, targets)

        # backward
        optimizer.zero_grad()
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()

        # update tqdm loop
        loop.set_postfix(loss=loss.item())

It is called with:

model = DeepLabv3().to(DEVICE)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
scaler = torch.cuda.amp.GradScaler()
for epoch in range(NUM_EPOCH):
    train_fn(train_loader, model, optimizer, loss_fn, scaler)
    # save model
    checkpoint = {
        "state_dict": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }
    save_checkpoint(checkpoint)

    #check accuracy
    check_accuracy(test_loader, model, device=DEVICE)

    # print some examples to a folder
    save_predictions_as_imgs(
        test_loader, model, folder="saved_images/", device=DEVICE
    )


def DeepLabv3(outputchannels=1):
    model = models.segmentation.deeplabv3_resnet101(pretrained=True,
                                                    progress=True)
    model.classifier = DeepLabHead(2048, outputchannels)
    # Set the model in training mode
    model.train()
    #print(model)
    return model

DeepLabv3()
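Note that torchvision's segmentation models return a dictionary from their forward pass rather than a plain tensor, which is what causes the error below. A quick sanity check on the helper above (a sketch; eval mode avoids batch-norm complaints about a single-sample batch):

import torch

model = DeepLabv3().eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 160, 240))  # dummy batch
print(type(out))         # <class 'collections.OrderedDict'>
print(list(out.keys()))  # contains 'out' (plus 'aux' when the auxiliary classifier is enabled)
print(out['out'].shape)  # torch.Size([1, 1, 160, 240]) -- same spatial size as the input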

Error

    File "main.py", line 94, in <module>
    train_fn(train_loader, model, optimizer, loss_fn, scaler)
  File "main.py", line 75, in train_fn
    loss= loss_fn(predictions, targets)
  File "C:\Users\anush\anaconda3\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\anush\anaconda3\envs\torch\lib\site-packages\torch\nn\modules\loss.py", line 707, in forward
    reduction=self.reduction)
  File "C:\Users\anush\anaconda3\envs\torch\lib\site-packages\torch\nn\functional.py", line 2979, in binary_cross_entropy_with_logits
    if not (target.size() == input.size()):
AttributeError: 'collections.OrderedDict' object has no attribute 'size'

I ran into the same problem with DeepLab today. The root cause is that the output of DeepLab is a collections.OrderedDict, not a tensor, so it cannot be passed to the loss function as if it were one. Its structure looks like this:

OrderedDict([('out', tensor([[[[-1.7589, -1.7589, -1.7589,  ..., -1.3775, -1.3775, -1.3775],
          [-1.7589, -1.7589, -1.7589,  ..., -1.3775, -1.3775, -1.3775],
          [-1.7589, -1.7589, -1.7589,  ..., -1.3775, -1.3775, -1.3775],
          ...,
          [-1.9924, -1.9924, -1.9924,  ..., -2.2682, -2.2682, -2.2682],
          [-1.9924, -1.9924, -1.9924,  ..., -2.2682, -2.2682, -2.2682],
          [-1.9924, -1.9924, -1.9924,  ..., -2.2682, -2.2682, -2.2682]]],


        [[[-1.8675, -1.8675, -1.8675,  ..., -2.0556, -2.0556, -2.0556],
          [-1.8675, -1.8675, -1.8675,  ..., -2.0556, -2.0556, -2.0556],
          [-1.8675, -1.8675, -1.8675,  ..., -2.0556, -2.0556, -2.0556],
          ...,
          [-2.1846, -2.1846, -2.1846,  ..., -2.0779, -2.0779, -2.0779],
          [-2.1846, -2.1846, -2.1846,  ..., -2.0779, -2.0779, -2.0779],
          [-2.1846, -2.1846, -2.1846,  ..., -2.0779, -2.0779, -2.0779]]],


        [[[-1.9245, -1.9245, -1.9245,  ..., -1.9551, -1.9551, -1.9551],
          [-1.9245, -1.9245, -1.9245,  ..., -1.9551, -1.9551, -1.9551],
          [-1.9245, -1.9245, -1.9245,  ..., -1.9551, -1.9551, -1.9551],
          ...,
          [-2.1327, -2.1327, -2.1327,  ..., -2.1104, -2.1104, -2.1104],
          [-2.1327, -2.1327, -2.1327,  ..., -2.1104, -2.1104, -2.1104],
          [-2.1327, -2.1327, -2.1327,  ..., -2.1104, -2.1104, -2.1104]]],


        [[[-1.8399, -1.8399, -1.8399,  ..., -1.6801, -1.6801, -1.6801],
          [-1.8399, -1.8399, -1.8399,  ..., -1.6801, -1.6801, -1.6801],
          [-1.8399, -1.8399, -1.8399,  ..., -1.6801, -1.6801, -1.6801],
          ...,
          [-1.9659, -1.9659, -1.9659,  ..., -1.8788, -1.8788, -1.8788],
          [-1.9659, -1.9659, -1.9659,  ..., -1.8788, -1.8788, -1.8788],
          [-1.9659, -1.9659, -1.9659,  ..., -1.8788, -1.8788, -1.8788]]]],
       device='cuda:0', grad_fn=<UpsampleBilinear2DBackward1>))])

As you can see, the output tensor sits inside this OrderedDict under the 'out' key.

So all you need to do is change

loss = loss_fn(predictions, targets)

to

loss = loss_fn(predictions['out'], targets)
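With that change, the forward pass inside train_fn becomes (only the relevant lines shown):

with torch.cuda.amp.autocast():
    output = model(data)                  # OrderedDict from torchvision's DeepLabV3
    predictions = output['out']           # raw logits, shape [N, 1, H, W]
    loss = loss_fn(predictions, targets)  # BCEWithLogitsLoss expects logits and a same-shaped target

If check_accuracy and save_predictions_as_imgs also call model(x) directly, they will need the same ['out'] indexing.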
