
Why does numpy crash instantly upon an array insertion?

I've come across some weird behaviour in a project of mine. Specifically, when this code is run:

import numpy as np 

coefficientMatrix = np.zeros([12500, 43750])
coefficientMatrix[229, 798] = 1.0942131827

my Python process crashes:

[screenshot of the error message]

What can be wrong here?

System specs (in case that's relevant here): Windows 7 x64, 8 GB of RAM, Python 2.7 32-bit, numpy 1.9.2.

The reason you get a MemoryError when you assign to an element of coefficientMatrix, rather than when you create the array with np.zeros, is that most OSs (including Windows 7) use lazy memory allocation.

When you instantiate the array using np.zeros, Windows only allocates virtual memory address space rather than physical RAM. However, when you actually try to write to that chunk of memory, the OS needs to find enough physical memory to back the array. If it fails to do so, you get a MemoryError.
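The two-phase behaviour can be illustrated at a harmless scale; a minimal sketch (array dimensions shrunk from the question's so it runs anywhere):

```python
import numpy as np

# np.zeros initially only reserves address space; the OS commits
# physical pages lazily, when the memory is actually written to.
arr = np.zeros([1000, 1000])   # cheap: pages not yet backed by RAM

# .nbytes reports the logical size of the buffer, regardless of
# how many pages have actually been committed so far.
print(arr.nbytes)              # 1000 * 1000 * 8 bytes

# The first write to an element forces the OS to back that page with
# physical memory -- in the question's full-size case, this is the
# point where the MemoryError surfaces.
arr[229, 798] = 1.0942131827
print(arr[229, 798])
```

At the question's original dimensions the write, not the allocation, is where the commit fails.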

Since your Python process is 32-bit, it can address at most 4 GB of memory (and in practice often less). A 12500x43750 array of float64 values takes up 4.375 GB of memory, so you simply can't have an array that big in 32-bit Python.
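The arithmetic behind that 4.375 GB figure, plus one hedged mitigation (dropping to float32, assuming half precision is acceptable for the coefficients; moving to 64-bit Python is the robust fix):

```python
import numpy as np

rows, cols = 12500, 43750

# float64 footprint: 8 bytes per element.
bytes_f64 = rows * cols * np.dtype(np.float64).itemsize
print(bytes_f64 / 1e9)   # 4.375 GB -- past what 32-bit can address

# float32 halves the footprint, though ~2.19 GB may still be too
# close to the usable limit of a 32-bit process.
bytes_f32 = rows * cols * np.dtype(np.float32).itemsize
print(bytes_f32 / 1e9)
```

If the matrix is mostly zeros anyway, a sparse representation (e.g. from scipy.sparse) would shrink the footprint far more than any dtype change.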
