numpy won't overcommit memory even when vm.overcommit_memory=1
I am running into a numpy error, numpy.core._exceptions.MemoryError, in my code. I have plenty of available memory on my machine, so this shouldn't be a problem. (This is on a Raspberry Pi armv7l, 4GB.)
$ free
              total        used        free      shared  buff/cache   available
Mem:        3748172       87636     3384520        8620      276016     3528836
Swap:       1048572           0     1048572
I found this post, which suggested that I should allow overcommit_memory in the kernel, and so I did:
$ cat /proc/sys/vm/overcommit_memory
1
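(For reference, a minimal sketch, not part of the original post, that reads the same setting from within Python on a Linux system:)
# Minimal sketch: read the kernel overcommit policy from /proc (Linux only).
# 0 = heuristic overcommit, 1 = always overcommit, 2 = never overcommit.
with open("/proc/sys/vm/overcommit_memory") as f:
    mode = int(f.read().strip())
print("vm.overcommit_memory =", mode)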
Now when I try to run this example:
import numpy as np
arrays = [np.empty((18, 602, 640), dtype=np.float32) for i in range(200)]
I get the same error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
numpy.core._exceptions.MemoryError: Unable to allocate 26.5 MiB for an array with shape (18, 602, 640) and data type float32
Why is Python (or numpy) behaving this way, and how can I get it to work?
EDIT: Answers to questions in the replies:
This is a 32-bit system (armv7l):
>>> sys.maxsize
2147483647
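(As a quick check, a sketch not from the original post: the pointer size also shows whether the interpreter itself is a 32-bit build.)
import struct
import sys

# Pointer size in bytes: 4 on a 32-bit build, 8 on a 64-bit build.
print("pointer size:", struct.calcsize("P"), "bytes")
# sys.maxsize is 2**31 - 1 on 32-bit builds, 2**63 - 1 on 64-bit builds.
print("sys.maxsize:", sys.maxsize)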
I printed the approximate amount allocated (according to the error message, each iteration should add 26.5 MiB) at the point where the example fails:
import numpy as np

def allocate_arr(i):
    # each array is 18 * 602 * 640 * 4 bytes, roughly 26.5 MiB
    print(i, i * 26.5)
    return np.empty((18, 602, 640), dtype=np.float32)

arrays = [allocate_arr(i) for i in range(0, 200)]
The output shows that this fails at around 3 GB of RAM allocated:
1 26.5
2 53.0
3 79.5
...
111 2941.5
112 2968.0
113 2994.5
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
File "<stdin>", line 3, in allocate_arr
numpy.core._exceptions.MemoryError: Unable to allocate 26.5 MiB for an array with shape (18, 602, 640) and data type float32
Is 3GB the limit? Is there a way to increase it? Also, isn't this the point of overcommitting?
By default, 32-bit Linux has a 3:1 user/kernel split. That is, of the 4 GB one can address with a 32-bit unsigned integer, 3 GB is reserved for user space and 1 GB for kernel space. Thus, any single process can use at most 3 GB of memory. The vm.overcommit setting is unrelated to this; it is about using more virtual memory than there is physical memory backing it.
There used to be so-called 4G/4G support in the Linux kernel (not sure those patches were ever mainlined?), allowing the full 4 GB address space to be used by the user-space process and another 4 GB by the kernel, at the cost of worse performance (a TLB flush at every syscall?). But AFAIU these features have bitrotted, since everyone interested in using lots of memory moved to 64-bit systems a long time ago.
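That 3 GB ceiling lines up with the numbers above; a rough back-of-the-envelope check (a sketch, not part of the original answer):
# Estimate how many 26.5 MiB arrays fit into a ~3 GiB user address space.
array_mib = 18 * 602 * 640 * 4 / 2**20   # ~26.5 MiB per float32 array
user_space_mib = 3 * 1024                # ~3 GiB usable by a single process
print(user_space_mib / array_mib)        # ~116, close to the failure at ~113
# The interpreter, libraries, and stack occupy the remaining address space.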
Others have experienced similar issues in the past. Does the issue persist even on a 64-bit OS? It's possible that the issue is related to the fact that you are using a 32-bit system. On a 32-bit system, the maximum amount of addressable memory for any given process is 4GB. It is possible that the OS is reserving some of the address space for the kernel (1GB), which could explain why you are hitting the limit at around 3GB.