
RuntimeError: expected scalar type Double but found Float

I am working with a GCNN. My input data is float64, but whenever I run my code this error appears. I tried converting all the tensors to double tensors, but it did not work. My data starts out as numpy arrays, which I then convert to PyTorch tensors.

Here is my data. I convert the numpy arrays to tensors, then wrap the tensors in a geometric Data object to run the GCNN.

e_index1 = torch.tensor(edge_index)
x1 = torch.tensor(x)
y1 = torch.tensor(y)

print(x.dtype)
print(y.dtype)
print(edge_index.dtype)

from torch_geometric.data import Data
data = Data(x=x1, edge_index=e_index1, y=y1)

Output:

float64
float64
int64

Here is my GCNN class and the rest of the code.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(data.num_node_features, 16)
        self.conv2 = GCNConv(16, data.num_node_features)

    def forward(self, data):
        x, edge_index = data.x, data.edge_index

        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)

        return F.log_softmax(x, dim=1)

device = torch.device('cpu')
model = GCN().to(device)
data = data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

model.train()
for epoch in range(10):
    optimizer.zero_grad()
    out = model(data)
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

錯誤日志

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-148-e816c251670b> in <module>
      7 for epoch in range(10):
      8     optimizer.zero_grad()
----> 9     out = model(data)
     10     loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
     11     loss.backward()

5 frames
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-147-c1bfee724570> in forward(self, data)
     13         x, edge_index = data.x.type(torch.DoubleTensor), data.edge_index
     14 
---> 15         x = self.conv1(x, edge_index)
     16         x = F.relu(x)
     17         x = F.dropout(x, training=self.training)

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.8/dist-packages/torch_geometric/nn/conv/gcn_conv.py in forward(self, x, edge_index, edge_weight)
    193                     edge_index = cache
    194 
--> 195         x = self.lin(x)
    196 
    197         # propagate_type: (x: Tensor, edge_weight: OptTensor)

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.8/dist-packages/torch_geometric/nn/dense/linear.py in forward(self, x)
    134             x (Tensor): The features.
    135         """
--> 136         return F.linear(x, self.weight, self.bias)
    137 
    138     @torch.no_grad()

RuntimeError: expected scalar type Double but found Float

I also tried the solutions given in other Stack Overflow posts, but they did not work. The same error keeps appearing.

You can convert all of the model's parameters to double with model.double(). Since your input data is double, this should give you a compatible model. Keep in mind that double precision is usually slower than single precision because of its higher precision.
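As a minimal sketch of the mismatch and the model.double() fix, using a plain nn.Linear in place of GCNConv (the error in the traceback is raised by the same underlying F.linear call):

```python
import torch
import torch.nn as nn

# A float64 input fed into a default layer reproduces the problem:
# nn.Linear weights are float32 by default, so layer(x) would raise
# "RuntimeError: expected scalar type Double but found Float".
x = torch.randn(4, 8, dtype=torch.float64)
layer = nn.Linear(8, 2)

# Cast the layer's parameters to double so they match the input.
layer.double()
out = layer(x)
print(out.dtype)  # torch.float64
```

The converse also works and is usually faster: keep the model in float32 and cast the input instead, e.g. layer(x.float()), or create the tensors as float32 up front with torch.tensor(x, dtype=torch.float32).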

