
How to fix error with pytorch conv2d function?

I am trying to use the conv2d function on these two tensors:

import numpy as np
import torch

Z = np.random.choice([0, 1], size=(100, 100))
Z = torch.from_numpy(Z).type(torch.FloatTensor)

print(Z)

tensor([[0., 0., 1.,  ..., 1., 0., 0.],
        [1., 0., 1.,  ..., 1., 1., 1.],
        [0., 0., 0.,  ..., 0., 1., 1.],
        ...,
        [1., 0., 1.,  ..., 1., 1., 1.],
        [1., 0., 1.,  ..., 0., 0., 0.],
        [0., 1., 1.,  ..., 1., 0., 0.]])

and

filters = torch.tensor(np.array([[1,1,1],
                        [1,0,1],
                        [1,1,1]]), dtype=torch.float32)

print(filters)

tensor([[1., 1., 1.],
        [1., 0., 1.],
        [1., 1., 1.]])

But when I try to call torch.nn.functional.conv2d(Z, filters), I get this error:

RuntimeError: weight should have at least three dimensions

I really don't understand what the problem is here. How can I fix it?

The arguments to torch.nn.functional.conv2d(input, weight) should be 4-dimensional: input of shape (minibatch, in_channels, iH, iW) and weight of shape (out_channels, in_channels/groups, kH, kW).

You can use unsqueeze() to add singleton batch and channel dimensions, so the sizes become input: (1, 1, 100, 100) and weight: (1, 1, 3, 3):

torch.nn.functional.conv2d(Z.unsqueeze(0).unsqueeze(0), filters.unsqueeze(0).unsqueeze(0))
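
For reference, a minimal end-to-end sketch (reusing the variable names from the question). With a 3x3 kernel and no padding, the output of a (1, 1, 100, 100) input is (1, 1, 98, 98):

import numpy as np
import torch
import torch.nn.functional as F

Z = torch.from_numpy(np.random.choice([0, 1], size=(100, 100))).float()
filters = torch.tensor([[1., 1., 1.],
                        [1., 0., 1.],
                        [1., 1., 1.]])

# add singleton batch and channel dimensions: (100, 100) -> (1, 1, 100, 100)
out = F.conv2d(Z.unsqueeze(0).unsqueeze(0), filters.unsqueeze(0).unsqueeze(0))
print(out.shape)  # torch.Size([1, 1, 98, 98])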
