
Cuda version for cat in Pytorch?

I am trying to build a CNN architecture for word embeddings. I am trying to concatenate two tensors with torch.cat, but it throws this error:

     22         print(z1.size())
---> 23         zcat = torch.cat(lz, dim = 2)
     24         print("zcat",zcat.size())
     25         zcat2=zcat.reshape([batch_size, 1, 100, 3000])

RuntimeError: Expected object of backend CUDA but got backend CPU for sequence element 1 in sequence argument at position #1 'tensors'

The architecture is attached for reference:

    def __init__(self,vocab_size,embedding_dim,pad_idx):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size,embedding_dim,padding_idx = pad_idx)
        self.convs = nn.ModuleList([nn.Conv2d(in_channels = 1,out_channels = 50,kernel_size = (1,fs)) for fs in (3,4,5)])
        self.conv2 = nn.Conv2d(in_channels = 50,out_channels = 100,kernel_size = (1,2))
        self.fc1 = nn.Linear(100000,150) #Change this 
        self.fc2 = nn.Linear(150,1)
        self.dropout = nn.Dropout(0.5)
    def forward(self,text):
        print("text",text.size())
        embedded = self.embedding(text.T)
        embedded = embedded.permute(0, 2,1)
        print("embedded",embedded.size())
        x=embedded.size(2)
        y=3000-x
        print(y,"hello")
        batch_size=embedded.size(0)
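        # z is a numpy array, so torch.from_numpy below yields a CPU tensor
        # even when the rest of the model runs on the GPU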
        z=np.zeros((batch_size,100,y))
        z1=torch.from_numpy(z).float()
        lz=[embedded,z1]
        print(z1.size())
        zcat = torch.cat(lz, dim = 2)
        print("zcat",zcat.size())
        zcat2=zcat.reshape([batch_size, 1, 100, 3000])
        print("zcat2",zcat2.size())
#         embedded = embedded.reshape([embedded.shape[0],1,])
        print(embedded.size(),"embedding")
        conved = [F.relu(conv(embedded)) for conv in self.convs]
        pooled = [F.max_pool2d(conv,(1,2)) for conv in conved]
        print("Pool")
        for pl in pooled:
            print(pl.size())
        cat = torch.cat(pooled,dim = 3)
        print("cat",cat.size())
        conved2 = F.relu(self.conv2(cat))
        print("conved2",conved2.size())
        pooled2 = F.max_pool2d(conved2,(1,2))
        print(pooled2.size(),"pooled2")
        return 0
#         return pooled2 

Am I doing something wrong? Any help is appreciated. Thank you!

Got it. Just create the tensor directly with: torch.zeros(batch_size, 100, y, dtype=embedded.dtype, device=embedded.device)
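
For context, here is a minimal sketch of how that fix slots into the forward pass, assuming `embedded` is the output of the embedding layer with shape (batch_size, 100, x); the helper name `pad_to_width` and the target width of 3000 are illustrative, taken from the snippet above:

    import torch

    def pad_to_width(embedded, target_width=3000):
        """Right-pad the last dimension of `embedded` with zeros up to target_width."""
        batch_size, channels, x = embedded.size()
        y = target_width - x
        # Build the padding tensor on the same device and with the same dtype as
        # `embedded`, instead of going through numpy (torch.from_numpy is CPU-only).
        z1 = torch.zeros(batch_size, channels, y,
                         dtype=embedded.dtype, device=embedded.device)
        # Both tensors now live on the same backend, so cat no longer raises.
        return torch.cat([embedded, z1], dim=2)

When the model has been moved to the GPU with `.to(device)`, `embedded` is a CUDA tensor and the zeros tensor follows it automatically. An equivalent one-liner for this kind of right padding is `torch.nn.functional.pad(embedded, (0, y))`, which likewise keeps the result on the same device as its input.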
