
Why does importing numpy add 1 GB of virtual memory on Linux?

I have to run Python in a resource-constrained environment with only a few GB of virtual memory available. Worse yet, the application design requires forking children from my main process, and each child inherits a copy-on-write mapping of the parent's full virtual address space on fork (a minimal sketch of this fork pattern follows the list below). The result is that after forking only 1-2 children, the process group hits the ceiling and everything is shut down. Finally, I am not able to remove numpy as a dependency; it is a strict requirement.

Any advice on how I can bring this initial memory allocation down?

e.g.:

  1. Can I change the default amount of memory numpy reserves on import?
  2. Can I disable whatever is being reserved up front and force python / numpy to allocate memory dynamically instead?
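
For illustration, here is a minimal, Linux-only sketch of the fork pattern described above (the report helper is my own naming, not part of the application). Each child reports essentially the same VmSize as the already-inflated parent, which is why only a couple of forks exhaust the budget:

import os

def report(label):
    # Read the Vm* counters for the current process from /proc.
    with open('/proc/{}/status'.format(os.getpid())) as f:
        vm = {k: v.strip() for k, v in
              (line.split(':', 1) for line in f) if k.startswith('Vm')}
    print('{}: VmSize={} VmData={}'.format(label, vm['VmSize'], vm['VmData']))

import numpy  # inflates the parent's VmSize/VmData before any fork happens

report('parent')

for i in range(2):
    pid = os.fork()
    if pid == 0:
        # The child shares the parent's pages copy-on-write, so its
        # VmSize mirrors the parent's inflated figure.
        report('child {}'.format(i))
        os._exit(0)
    os.waitpid(pid, 0)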


Details:

Red Hat Enterprise Linux Server release 6.9 (Santiago)
Python 3.6.2
numpy>=1.13.3

Bare Interpreter:

import os
os.system('cat "/proc/{}/status"'.format(os.getpid()))

# ... VmRSS: 7300 kB
# ... VmData: 4348 kB
# ... VmSize: 129160 kB

import numpy
os.system('cat "/proc/{}/status"'.format(os.getpid()))

# ... VmRSS: 21020 kB
# ... VmData: 1003220 kB
# ... VmSize: 1247088 kB  

Thanks to skullgoblet1089 for raising this question on SO and at https://github.com/numpy/numpy/issues/10455, and for answering it. Citing the 2018-01-24 post:

Reducing threads with export OMP_NUM_THREADS=4 will bring down VM allocation.
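
Most of that VmSize appears to come from thread-related reservations made by numpy's BLAS backend (commonly OpenBLAS) when the library is loaded: it sets up per-thread scratch buffers and stacks for a worker pool sized to the machine's core count, so capping the thread count before the import shrinks the reservation. A minimal sketch follows; which variable actually takes effect depends on how your numpy was built (OpenBLAS, MKL, plain OpenMP), so setting the common ones defensively is a pragmatic approach.

import os

# Cap the BLAS / OpenMP thread pools *before* numpy is imported -- the
# backing libraries only read these variables once, at load time.
os.environ.setdefault('OMP_NUM_THREADS', '1')
os.environ.setdefault('OPENBLAS_NUM_THREADS', '1')
os.environ.setdefault('MKL_NUM_THREADS', '1')

import numpy  # imported deliberately after the environment is set

# Confirm the effect the same way as in the bare-interpreter test above.
os.system('grep -E "VmSize|VmData|VmRSS" /proc/{}/status'.format(os.getpid()))

The same applies to forked children: because they inherit the parent's address space, shrinking the parent's reservation before numpy is imported shrinks every child's as well.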
