My CNN Model uses too much memory on my GPU. How can I host some Tensors in my CPU memory?
I'm training a CNN model on an NVIDIA RTX 2080. The model has grown bigger and bigger, and I now have memory issues on the card. I read some papers on this subject, and it seems possible with TensorFlow to host some nodes in CPU memory during training and retrieve them into GPU memory later when needed (as in http://learningsys.org/nips17/assets/papers/paper_18.pdf ).
Any ideas/docs/examples?
Thanks!
Without any code it's difficult to help. Generally, you can take a look at the documentation.
For example, with:
with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
you can create tensors (or variables) explicitly on the CPU.
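To make this concrete, here is a minimal sketch (names and shapes are illustrative, not from your model) showing how a large variable pinned to host memory can still feed an op that TensorFlow places on the GPU when one is visible; the variable's data is copied to the compute device only when the op needs it:

```python
import tensorflow as tf

# Pin a large parameter tensor to host (CPU) RAM.
with tf.device('/CPU:0'):
    w = tf.Variable(tf.random.normal([1024, 1024]), name='w')

# This op is not pinned, so TensorFlow places it on the GPU if one
# is available, copying `w` into device memory for the computation.
x = tf.random.normal([8, 1024])
y = tf.matmul(x, w)

# The variable itself stays resident in host memory:
print(w.device)  # ends with "device:CPU:0"
print(y.shape)   # (8, 1024)
```

Note that only the pinned tensor lives in CPU RAM; every op that consumes it pays a host-to-device copy, so this trades GPU memory for PCIe bandwidth and is best reserved for tensors that are large but infrequently accessed.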