
Perceptron on multi-dimensional tensor

I'm trying to use a perceptron to reduce a tensor of size [1, 24, 768] to another tensor of size [1, 1, 768]. The only way I could find was to first reshape the input tensor to [1, 1, 24*768] and then pass it through a linear layer. I'm wondering if there is a more elegant way to do this transformation, other than using RNNs, which I do not want to use. Are there other methods in general for the transformation I want to make? Here is my code for the above operation:

import torch
import torch.nn as nn

lin = nn.Linear(24*768, 768)

# x is of shape [1, 24, 768]
x = x.view(1, 1, -1)  # reshape to [1, 1, 24*768]
out = lin(x)          # out is of shape [1, 1, 768]
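
For reference, a quick sanity check of the shapes above (a minimal sketch; torch.rand here is just a stand-in for the real input):

x = torch.rand(1, 24, 768)       # dummy input with the stated shape
out = lin(x.view(1, 1, -1))      # nn.Linear acts on the last dimension
print(out.shape)                 # torch.Size([1, 1, 768])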

If the manual reshaping is what's bothering you, you could use an nn.Flatten layer to do it:

>>> m = nn.Sequential(
...    nn.Flatten(),
...    nn.Linear(24*768, 768))

>>> x = torch.rand(1, 24, 768)

>>> m(x).shape
torch.Size([1, 768])

If you really want the extra dimension, you can unsqueeze the tensor on axis=1:

>>> m(x).unsqueeze(1).shape
torch.Size([1, 1, 768])
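
Alternatively, if you'd rather keep everything inside a single module rather than unsqueezing afterwards, the same idea can be expressed with an nn.Unflatten at the end (a sketch of this variant, not part of the original answer):

>>> m = nn.Sequential(
...    nn.Flatten(),
...    nn.Linear(24*768, 768),
...    nn.Unflatten(1, (1, 768)))

>>> m(x).shape
torch.Size([1, 1, 768])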

