
PyTorch: Is there a way to store model in CPU ram, but run all operations on the GPU for large models?

From what I see, most people seem to initialize an entire model and send the whole thing to the GPU. But I have a neural net model that is too big to fit entirely on my GPU. Is it possible to keep the model stored in CPU RAM, but run all the operations on the GPU?

I do not believe this is possible. However, one easy workaround is to split your model into sections that will fit into GPU memory along with your batch input.

  1. Send the first section of the model to the GPU and compute its outputs.
  2. Release that section from GPU memory, then send the next section of the model to the GPU.
  3. Feed the outputs from step 1 into the next section and save its outputs.

Repeat steps 1 through 3 until you reach your model's final output.
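The steps above can be sketched as follows. This is a minimal illustration, not the asker's actual model: the layer sizes and the three-way split are made up, and in practice you would partition your own network so that each section (plus a batch) fits in GPU memory.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A model kept on the CPU, partitioned into sections (sizes are illustrative).
sections = [
    nn.Sequential(nn.Linear(128, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 10)),
]

def forward_in_sections(x, sections, device):
    x = x.to(device)
    for section in sections:
        section.to(device)           # steps 1-2: move this section onto the GPU
        with torch.no_grad():
            x = section(x)           # step 3: feed the previous outputs forward
        section.to("cpu")            # release GPU memory before the next section
        if device.type == "cuda":
            torch.cuda.empty_cache()
    return x.cpu()

batch = torch.randn(4, 128)
out = forward_in_sections(batch, sections, device)
print(out.shape)  # torch.Size([4, 10])
```

Note that moving sections back and forth on every batch adds host-to-device transfer overhead, so this trades speed for the ability to run a model larger than GPU memory; training would additionally require keeping gradients and optimizer state consistent across the moves.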
