
How to pad a tensor

How would I pad this tensor by appending the element 100 at the end of each row?

a = tensor([[ 101,  103],
            [ 101, 1045,  223],
            [ 101,  777,  665,  889],
            [ 101,  888]])

So the result would be:

b = tensor([[ 101,  103,  100,  100],
            [ 101, 1045,  223,  100],
            [ 101,  777,  665,  889],
            [ 101,  888,  100,  100]])

I know the function is torch.nn.functional.pad(), but I could not find any simple example for a tensor like this, which is presumably a 2-D tensor.

That was surprising, because this is one of the most typical padding cases.
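For context, torch.nn.functional.pad works on one tensor at a time, so the kind of example I was hoping for would presumably pad each variable-length row on the right and then stack the results. A minimal sketch of that idea, assuming the rows are kept as separate 1-D tensors (a ragged 2-D tensor cannot be built directly):

import torch
import torch.nn.functional as F

rows = [torch.tensor(r) for r in ([101, 103],
                                  [101, 1045, 223],
                                  [101, 777, 665, 889],
                                  [101, 888])]

max_len = max(r.size(0) for r in rows)
# F.pad takes (left, right) amounts for the last dimension; pad each row
# on the right up to the longest row, filling with 100, then stack.
b = torch.stack([F.pad(r, (0, max_len - r.size(0)), value=100) for r in rows])
print(b)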

Similar to the NumPy case (see Convert Python sequence to NumPy array, filling missing values), you could adjust the size of your sublists using itertools.zip_longest.

import torch
from itertools import zip_longest
tensor_lists = [
    [ 101,  103],
    [ 101, 1045, 223],
    [ 101,  777, 665 , 889],
    [ 101,  888]
]
fillvalue = 100
padded_list = list(zip(*zip_longest(*tensor_lists, fillvalue=fillvalue)))
b = torch.tensor(padded_list)  # convert the padded rows to a tensor and use it

Here, zip_longest fills in the missing values, and the outer zip transposes the result back. You could of course also create the tensor first and then transpose.
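A minimal sketch of that transpose-first variant:

import torch
from itertools import zip_longest

tensor_lists = [[101, 103], [101, 1045, 223], [101, 777, 665, 889], [101, 888]]

# zip_longest yields the columns of the padded result, so build that
# (transposed) tensor first, then transpose it back.
b = torch.tensor(list(zip_longest(*tensor_lists, fillvalue=100))).T
print(b)  # the same 4x4 padded tensor as above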

You can use torch.nested.to_padded_tensor (docs):

import torch
a = [
    [101, 103],
    [101, 1045, 223],
    [101, 777, 665, 889],
    [101, 888]
]

a = torch.nested.nested_tensor(list(map(torch.tensor, a)))

torch.nested.to_padded_tensor(a, 100)
tensor([[ 101,  103,  100,  100],
        [ 101, 1045,  223,  100],
        [ 101,  777,  665,  889],
        [ 101,  888,  100,  100]])
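For completeness, torch.nn.utils.rnn.pad_sequence also covers this ragged-rows-to-padded-matrix case; a minimal sketch:

import torch
from torch.nn.utils.rnn import pad_sequence

rows = [torch.tensor(r) for r in ([101, 103],
                                  [101, 1045, 223],
                                  [101, 777, 665, 889],
                                  [101, 888])]

# pad_sequence right-pads every sequence to the length of the longest one;
# batch_first=True keeps one row per original sequence.
b = pad_sequence(rows, batch_first=True, padding_value=100)
print(b)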
