
pytorch tensors cat on dim=0 not working for me

I have a problem with `cat` in PyTorch. I want to concatenate tensors on dim=0; for example, I want something like this:

>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
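Note that `cat` along dim=0 stacks rows only when the inputs are 2-D; for 1-D tensors there is only a single axis, so concatenation just extends its length. A minimal sketch of the difference:

```python
import torch

x2d = torch.randn(2, 3)                  # 2-D: dim=0 is the row axis
print(torch.cat((x2d, x2d), 0).shape)    # rows stack: (4, 3)

x1d = torch.randn(3)                     # 1-D: only one axis exists
print(torch.cat((x1d, x1d), 0).shape)    # the single axis grows: (6,)
```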

but when I try to do it in my program, I get this:

def create_batches_to_device(train_df, test_df, device, batch_size=2):
    train_tensor = torch.tensor([])
    for i in range(batch_size):
        rand_2_strs = train_df.sample(2)
        tmp_tensor = torch.tensor([rand_2_strs.get('Sigma').iloc[0], rand_2_strs.get('Sigma').iloc[1],
                                   rand_2_strs.get('mu').iloc[0], rand_2_strs.get('mu').iloc[1],
                                   rand_2_strs.get('th').iloc[0], rand_2_strs.get('th').iloc[1],
                                   np.log(weighted_mse(np.array(rand_2_strs.get('Decay').iloc[0]), np.array(rand_2_strs.get('Decay').iloc[1]), t)[0])])
        print("it is tmp tensor")
        print(tmp_tensor)
        train_tensor = torch.cat((train_tensor, tmp_tensor), dim=0)
        print("this is after cat")
        print(train_tensor)

create_batches_to_device(train_data, test_data, device)

I get this result:

it is tmp tensor
tensor([ 0.3244, -0.6401, -0.7959,  0.9019,  0.1468, -1.7093, -6.4419],
       dtype=torch.float64)
this is after cat
tensor([ 0.3244, -0.6401, -0.7959,  0.9019,  0.1468, -1.7093, -6.4419],
       dtype=torch.float64)
it is tmp tensor
tensor([ 1.2923, -0.3088, -0.1275,  0.6417, -1.3383,  1.4020, 28.9065],
       dtype=torch.float64)
this is after cat
tensor([ 0.3244, -0.6401, -0.7959,  0.9019,  0.1468, -1.7093, -6.4419,  1.2923,
        -0.3088, -0.1275,  0.6417, -1.3383,  1.4020, 28.9065],
       dtype=torch.float64)

And the result is the same whether I pass dim=0 or dim=-1; both variants produce identical output. Here is the example (note dim=-1):

def create_batches_to_device(train_df, test_df, device, batch_size=2):
    train_tensor = torch.tensor([])
    for i in range(batch_size):
        rand_2_strs = train_df.sample(2)
        tmp_tensor = torch.tensor([rand_2_strs.get('Sigma').iloc[0], rand_2_strs.get('Sigma').iloc[1],
                                   rand_2_strs.get('mu').iloc[0], rand_2_strs.get('mu').iloc[1],
                                   rand_2_strs.get('th').iloc[0], rand_2_strs.get('th').iloc[1],
                                   np.log(weighted_mse(np.array(rand_2_strs.get('Decay').iloc[0]), np.array(rand_2_strs.get('Decay').iloc[1]), t)[0])])
        print("it is tmp tensor")
        print(tmp_tensor)
        train_tensor = torch.cat((train_tensor, tmp_tensor), dim=-1)
        print("this is after cat")
        print(train_tensor)

create_batches_to_device(train_data, test_data, device)

and the result is the same:

it is tmp tensor
tensor([  1.0183,   0.2162,   0.4987,  -0.0165,   0.2094,   0.9425, -14.4564],
       dtype=torch.float64)
this is after cat
tensor([  1.0183,   0.2162,   0.4987,  -0.0165,   0.2094,   0.9425, -14.4564],
       dtype=torch.float64)
it is tmp tensor
tensor([ 0.2389, -1.0108, -0.2350,  0.7105, -0.9200,  0.3282,  7.5456],
       dtype=torch.float64)
this is after cat
tensor([  1.0183,   0.2162,   0.4987,  -0.0165,   0.2094,   0.9425, -14.4564,
          0.2389,  -1.0108,  -0.2350,   0.7105,  -0.9200,   0.3282,   7.5456],
       dtype=torch.float64)
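That dim=0 and dim=-1 behave identically here is expected: for a 1-D tensor, dim=-1 is just dim=0 counted from the end, so both refer to the same (only) axis. A short check:

```python
import torch

a = torch.arange(3.0)
b = torch.arange(3.0)

# For 1-D inputs, dim=-1 and dim=0 name the same axis,
# so both calls produce the same concatenated tensor.
same = torch.equal(torch.cat((a, b), dim=0), torch.cat((a, b), dim=-1))
print(same)                              # True
print(torch.cat((a, b), dim=0).shape)    # (6,)
```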

The problem was that tmp_tensor had shape [7], so it could only be concatenated along its single dimension. The solution was to add one new line, `tmp_tensor = torch.unsqueeze(tmp_tensor, 0)`; now tmp_tensor has shape [1, 7] and I can use `torch.cat` without a problem:

def create_batches_to_device(train_df, test_df, device, batch_size=3):
    train_tensor = torch.tensor([])
    for i in range(batch_size):
        rand_2_strs = train_df.sample(2)
        tmp_tensor = torch.tensor([rand_2_strs.get('Sigma').iloc[0], rand_2_strs.get('Sigma').iloc[1],
                                   rand_2_strs.get('mu').iloc[0], rand_2_strs.get('mu').iloc[1],
                                   rand_2_strs.get('th').iloc[0], rand_2_strs.get('th').iloc[1],
                                   np.log(weighted_mse(np.array(rand_2_strs.get('Decay').iloc[0]), np.array(rand_2_strs.get('Decay').iloc[1]), t)[0])])
        print("it is tmp tensor")
        tmp_tensor = torch.unsqueeze(tmp_tensor, 0)  # shape [7] -> [1, 7]
        print(tmp_tensor.shape)
        train_tensor = torch.cat((train_tensor, tmp_tensor), dim=0)
        print("this is after cat")
        print(train_tensor)

create_batches_to_device(train_data, test_data, device)

and the result is:

it is tmp tensor
torch.Size([1, 7])
this is after cat
tensor([[ 0.9207, -0.9658,  0.0492,  1.6959,  0.4620, -0.2433, -6.4764]],
       dtype=torch.float64)
it is tmp tensor
torch.Size([1, 7])
this is after cat
tensor([[ 0.9207, -0.9658,  0.0492,  1.6959,  0.4620, -0.2433, -6.4764],
        [-0.5921, -0.1198,  0.6192, -0.0977, -0.1704,  1.2384,  9.4497]],
       dtype=torch.float64)
it is tmp tensor
torch.Size([1, 7])
this is after cat
tensor([[ 0.9207, -0.9658,  0.0492,  1.6959,  0.4620, -0.2433, -6.4764],
        [-0.5921, -0.1198,  0.6192, -0.0977, -0.1704,  1.2384,  9.4497],
        [ 0.3839, -0.3153,  0.6467, -0.9995, -0.7415, -0.5487, -6.5500]],
       dtype=torch.float64)
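As a side note (not part of the original answer): `torch.stack` adds the new leading dimension for you, so the explicit `unsqueeze` can be dropped if the 1-D tensors are collected in a list first. A minimal sketch with random stand-in data in place of the DataFrame sampling:

```python
import torch

# Stand-ins for each per-iteration tmp_tensor of shape [7]
rows = [torch.randn(7) for _ in range(3)]

# stack inserts a new dim=0, equivalent to unsqueeze(0) + cat on each row
batch = torch.stack(rows, dim=0)
print(batch.shape)  # (3, 7)
```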
