
RuntimeError: The size of tensor a (128) must match the size of tensor b (256) at non-singleton dimension 3

I need your help training my model on images of dimension 256x256. I changed the image size, but I got this error that I couldn't solve:

File "train_128.py", line 149, in main
g_img_rec_loss = torch.abs(img_rec - imgs).mean()
RuntimeError: The size of tensor a (128) must match the size of tensor b (256) at non-singleton dimension 3

The source code is: https://github.com/biswassanket/synth_doc_generation

thanks in advance

The error is happening because your variable img_rec has size (batch_size, 3, 128, 128), while the other variable imgs has size (batch_size, 3, 256, 256).
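To see that this is purely a shape mismatch, here is a standalone sketch (not from the repository, just dummy tensors with the shapes above) that reproduces the same RuntimeError:

import torch

# Dummy tensors with the shapes from the traceback: a 128x128 reconstruction
# subtracted from a 256x256 target raises the same error as train_128.py line 149.
imgs = torch.randn(4, 3, 256, 256)     # what a 256x256 DataLoader batch looks like
img_rec = torch.randn(4, 3, 128, 128)  # what an unmodified 128x128 generator produces

try:
    g_img_rec_loss = torch.abs(img_rec - imgs).mean()
except RuntimeError as e:
    print(e)  # The size of tensor a (128) must match the size of tensor b (256) ...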

If you are passing 256x256 images from your DataLoader, you also need to make sure you are generating 256x256 images, which you are currently not doing.

In the line of code where the output is produced, you are getting a 128x128 image, so you either need to change the parameters in netG or resize its output.
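Here is a minimal sketch of the resize option (not the repository's code; imgs and img_rec stand for the variables in train_128.py and are created as dummy tensors here): upsample the generator output to the target size with torch.nn.functional.interpolate before computing the L1 reconstruction loss.

import torch
import torch.nn.functional as F

imgs = torch.randn(4, 3, 256, 256)     # 256x256 batch from the DataLoader
img_rec = torch.randn(4, 3, 128, 128)  # 128x128 output of the unmodified netG

# Upsample the reconstruction to the target spatial size, then take the L1 loss.
img_rec_up = F.interpolate(img_rec, size=imgs.shape[-2:], mode="bilinear", align_corners=False)
g_img_rec_loss = torch.abs(img_rec_up - imgs).mean()
print(img_rec_up.shape)       # torch.Size([4, 3, 256, 256])
print(g_img_rec_loss.item())  # a scalar loss value

The cleaner alternative is to change netG so that it actually generates 256x256 images (for example, an extra upsampling stage in generator_128.py), since interpolating a 128x128 output only stretches it and does not add detail.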

Hope this helps

SarthakJain: I checked netG, which stands for the generator model: https://github.com/biswassanket/synth_doc_generation/blob/main/layout2im/models/generator_128.py

Could you please guide me on which modification I should make to fix this error?

thanks in advance.

Thank you for your question. I face the same problem. SarthakJain, could you please share with us how to resize the output to solve this problem?

Any help is appreciated for a beginner like me.
