
In PyTorch, when transferring to GPU, I get an error "is on CPU, but expected to be on GPU"

Error example: "Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU". I was stuck on the tutorial for classification:

https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html

Note: The code is for regression.

Code is below:

class Net(nn.Module):
    def __init__(self, num_features, size_hidden_layer, n_hidden_layer):
        super(Net, self).__init__()
        self.size_hidden_layer = size_hidden_layer
        self.n_hidden_layer = n_hidden_layer
        self.hidden_layers = list()  # plain Python list: these layers are not registered as submodules
        self.hidden_layers.append(nn.Linear(num_features, size_hidden_layer))
        for _ in range(n_hidden_layer-1):
            self.hidden_layers.append(nn.Linear(size_hidden_layer, size_hidden_layer))
        self.last_layer = nn.Linear(size_hidden_layer, 1)

    def forward(self, x):
        for i in range(self.n_hidden_layer):
            x = torch.relu(self.hidden_layers[i](x))
        return self.last_layer(x)
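
For context, the error appears when the model is moved to the GPU and then called on GPU tensors. A minimal sketch of that usage (the feature and batch sizes here are illustrative, not from the original post):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = Net(num_features=10, size_hidden_layer=32, n_hidden_layer=3)
net.to(device)  # moves only registered submodules; the layers kept in the plain list stay on the CPU
x = torch.randn(8, 10, device=device)
out = net(x)    # raises "Tensor for 'out' is on CPU ... expected them to be on GPU"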

What the tutorial section does not mention is that layers stored in a plain Python list are not registered as submodules, so their parameters are not moved when the model is transferred to the GPU; they have to be wrapped in a module container to be picked up. For example, look at __init__ below, where the hidden layers are wrapped in nn.Sequential.

class Net(nn.Module):
    def __init__(self, num_features, size_hidden_layer, n_hidden_layer):
        super(Net, self).__init__()
        self.size_hidden_layer = size_hidden_layer
        self.n_hidden_layer = n_hidden_layer
        hidden_layers = list()
        hidden_layers.append(nn.Linear(num_features, size_hidden_layer))
        for _ in range(n_hidden_layer-1):
            hidden_layers.append(nn.Linear(size_hidden_layer, size_hidden_layer))
        self.hidden_layers = nn.Sequential(*hidden_layers)  # registers the layers so .to(device) moves their parameters
        self.last_layer = nn.Linear(size_hidden_layer, 1)

    def forward(self, x):
        for i in range(self.n_hidden_layer):
            x = torch.relu(self.hidden_layers[i](x))
        return self.last_layer(x)
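
With the hidden layers registered through nn.Sequential, .to(device) now moves all of their parameters along with the rest of the model. A minimal sketch of the same usage as above (sizes are again illustrative):

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = Net(num_features=10, size_hidden_layer=32, n_hidden_layer=3).to(device)
out = net(torch.randn(8, 10, device=device))  # hidden layers are now on the GPU as well

nn.ModuleList works the same way if you prefer to keep indexing the layers manually instead of relying on nn.Sequential.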
