Why does importing numpy add 1 GB of virtual memory on Linux?
I have to run Python in a resource-constrained environment with only a few GB of virtual memory. Worse yet, as part of the application design I have to fork children from my main process, each of which receives a copy-on-write allocation of that same amount of virtual memory on fork. The result is that after forking only 1-2 children, the process group hits the ceiling and shuts everything down.

Finally, I am not able to remove numpy as a dependency; it is a strict requirement.
Any advice on how I can bring this initial memory allocation down?

Details:
Red Hat Enterprise Linux Server release 6.9 (Santiago)
Python 3.6.2
numpy>=1.13.3

Bare interpreter:
import os
os.system('cat "/proc/{}/status"'.format(os.getpid()))
# ... VmRSS: 7300 kB
# ... VmData: 4348 kB
# ... VmSize: 129160 kB
import numpy
os.system('cat "/proc/{}/status"'.format(os.getpid()))
# ... VmRSS: 21020 kB
# ... VmData: 1003220 kB
# ... VmSize: 1247088 kB
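The same counters can also be read programmatically instead of shelling out to `cat`, which makes it easy to snapshot memory before and after the import. A minimal Linux-only sketch (it parses `/proc/<pid>/status`; the `vm_stats` helper name is my own):

```python
import os

def vm_stats(pid="self"):
    """Return VmRSS/VmData/VmSize in kB from /proc/<pid>/status (Linux)."""
    stats = {}
    with open("/proc/{}/status".format(pid)) as f:
        for line in f:
            if line.startswith(("VmRSS", "VmData", "VmSize")):
                key, value = line.split(":")
                stats[key] = int(value.split()[0])  # value looks like "  1234 kB"
    return stats

before = vm_stats()
# import numpy          # uncomment to measure the jump
# after = vm_stats()
print(before)
```

Comparing `before` and `after` isolates exactly how much `import numpy` adds to each counter.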
Thank you, skullgoblet1089, for raising this question on SO and at https://github.com/numpy/numpy/issues/10455 , and for answering it. Citing your 2018-01-24 post:

Reducing threads with export OMP_NUM_THREADS=4 will bring down VM allocation.
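The variable must be set before numpy is first imported, because the OpenMP/BLAS thread pools (whose per-thread stack reservations account for much of the VmSize jump) are sized when the shared library loads. A minimal in-process sketch; OMP_NUM_THREADS comes from the cited post, while OPENBLAS_NUM_THREADS and MKL_NUM_THREADS are assumptions about which BLAS backend numpy was built against:

```python
import os

# Cap the thread pools BEFORE numpy is first imported; the pools, and
# their per-thread stack reservations, are created at library load time.
os.environ["OMP_NUM_THREADS"] = "4"        # variable from the cited post
os.environ["OPENBLAS_NUM_THREADS"] = "4"   # assumption: OpenBLAS backend
os.environ["MKL_NUM_THREADS"] = "4"        # assumption: MKL backend

# Only now:
# import numpy
```

Exporting the variable in the shell before launching Python (export OMP_NUM_THREADS=4) has the same effect and avoids any ordering concerns inside the script.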