
Java throwing out of memory exception before it's really out of memory?

I wish to make a large int array that very nearly fills all of the memory available to the JVM. Take this code, for instance:

    final int numBuffers = (int) ((runtime.freeMemory() - 200000L) / (BUFFER_SIZE));
    System.out.println(runtime.freeMemory());
    System.out.println(numBuffers*(BUFFER_SIZE/4)*4);
    buffers = new int[numBuffers*(BUFFER_SIZE / 4)];

When run with a heap size of 10M, this throws an OutOfMemoryError, despite the output from the printlns being:

9487176
9273344

I realise the array is going to have some overheads, but not 200k, surely? Why does Java fail to allocate memory for something it claims to have enough space for? I have to set the constant being subtracted to something around 4M before Java will run this (by which point the printlns look more like: 9487176 5472256).

Even more bewilderingly, if I replace buffers with a 2D array:

    buffers = new int[numBuffers][BUFFER_SIZE / 4];

Then it runs without complaint using the 200k subtraction shown above, even though the number of integers being stored is the same in both arrays (and wouldn't the overhead of a 2D array be larger than that of a 1D array, since it has to store all those references to the inner arrays?).

Any ideas?

The VM will divide the heap memory into different areas (mainly for the garbage collector), so you will run out of memory when you attempt to allocate a single object of nearly the entire heap size.
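
For example, on a typical HotSpot JVM you can list the heap pools the collector works with (the pool names you see, such as Eden or Old Gen, depend on the collector in use); a minimal sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;

    public class HeapPools {
        public static void main(String[] args) {
            // Print each heap pool and its maximum size; the heap is split across
            // these pools, so no single allocation can claim the whole -Xmx value.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP) {
                    System.out.println(pool.getName() + ": max = "
                            + pool.getUsage().getMax() + " bytes");
                }
            }
        }
    }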

Also, some memory will already have been used up by the JRE. 200k is nothing with today's memory sizes, and a 10M heap is almost unrealistically small for most applications.
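
You can get a rough sense of how much has already been claimed with the standard Runtime methods (a simple sketch, nothing JVM-specific assumed):

    public class HeapSnapshot {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // Memory already claimed out of the current heap by the JRE and your code.
            long used = rt.totalMemory() - rt.freeMemory();
            System.out.println("max   = " + rt.maxMemory());
            System.out.println("total = " + rt.totalMemory());
            System.out.println("free  = " + rt.freeMemory());
            System.out.println("used  = " + used);
        }
    }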

The actual overhead of an array is relatively small; on a 32-bit VM it's 12 bytes IIRC (plus whatever gets wasted if the size is not a multiple of the minimal allocation granularity, which is AFAIK 8 bytes). So in the worst case you have something like 19 bytes of overhead per array.
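
To illustrate that arithmetic, here is a small sketch that estimates the footprint of an int[] under the assumed layout (12-byte header, 8-byte alignment; both figures vary by VM and settings):

    public class ArrayOverhead {
        // Assumed layout for a 32-bit HotSpot VM; illustrative only.
        static final long HEADER_BYTES = 12;
        static final long ALIGNMENT = 8;

        // Estimated heap footprint of new int[elements].
        static long estimatedSize(long elements) {
            long raw = HEADER_BYTES + 4 * elements;                  // 4 bytes per int
            return ((raw + ALIGNMENT - 1) / ALIGNMENT) * ALIGNMENT;  // round up to alignment
        }

        public static void main(String[] args) {
            // Worst-case overhead is header + (alignment - 1) = 19 bytes per array.
            System.out.println(estimatedSize(1_000_000)); // prints 4000016
        }
    }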

Note that Java has no true 2D (multi-dimensional) arrays; it implements them internally as arrays of arrays.

In the 2D case, you are allocating more, smaller objects. The memory manager is objecting to the single large object taking up most of the heap. Exactly why it objects is a detail of the garbage collection scheme: it can probably move the smaller objects between generations, whereas the heap cannot accommodate moving the single large object around.
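
A small sketch of the contrast described above (run with something like -Xmx10m; BUFFER_SIZE is an illustrative value, and the exact point at which each version fails depends on the JVM and collector):

    public class FlatVsNested {
        static final int BUFFER_SIZE = 8192;                 // illustrative buffer size in bytes
        static final int INTS_PER_BUFFER = BUFFER_SIZE / 4;

        public static void main(String[] args) {
            Runtime runtime = Runtime.getRuntime();
            int numBuffers = (int) ((runtime.freeMemory() - 200000L) / BUFFER_SIZE);

            try {
                // One contiguous object of almost the whole heap.
                int[] flat = new int[numBuffers * INTS_PER_BUFFER];
                System.out.println("flat allocation succeeded: " + flat.length + " ints");
            } catch (OutOfMemoryError e) {
                System.out.println("flat allocation failed: " + e);
            }

            try {
                // Many smaller objects holding roughly the same number of ints.
                int[][] nested = new int[numBuffers][INTS_PER_BUFFER];
                System.out.println("nested allocation succeeded: " + nested.length + " buffers");
            } catch (OutOfMemoryError e) {
                System.out.println("nested allocation failed: " + e);
            }
        }
    }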

This might be due to memory fragmentation and the JVM's inability to allocate an array of that size given the current heap.

Imagine your heap is 10 x long:

xxxxxxxxxx 

Then, you allocate an object 0 somewhere. This makes your heap look like:

xxxxxxx0xx

Now, you can no longer allocate an object 10 x long. You cannot even allocate one 8 x long, despite the fact that 9 x of memory are available.

An array of arrays does not suffer from the same problem, because the inner arrays do not need to be contiguous with one another.

EDIT: Please note that the above is a very simplified view of the problem. When it needs space in the heap, Java's garbage collector will try to collect as much memory as it can and, if really necessary, try to compact the heap. However, some objects might not be movable or collectible, creating heap fragmentation and putting you in the situation above.

There are also many other factors to consider, including: memory leaks either in the VM (not very likely) or your application (also not likely for a simple scenario), the unreliability of Runtime.freeMemory() (the GC might run right after the call and the amount of free memory could change), implementation details of each particular JVM, and so on.

The point is, as a rule of thumb, don't expect the full amount reported by Runtime.freeMemory() to be available to your application.
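
One defensive pattern, shown here as a hypothetical sketch rather than a recipe from the answers above, is to treat freeMemory() as a hint, leave explicit headroom, and back off to smaller allocations when OutOfMemoryError is thrown:

    public class DefensiveAllocation {
        // Tries to allocate an int[] close to the requested size, backing off
        // when the JVM refuses. The 0.8 factors are arbitrary, illustrative choices.
        static int[] allocateLargestPossible(long requestedInts) {
            long target = requestedInts;
            while (target > 0) {
                try {
                    // Stay below the typical VM limit on array length.
                    return new int[(int) Math.min(target, Integer.MAX_VALUE - 8)];
                } catch (OutOfMemoryError e) {
                    target = (long) (target * 0.8);   // back off and retry smaller
                }
            }
            throw new OutOfMemoryError("could not allocate any buffer");
        }

        public static void main(String[] args) {
            Runtime runtime = Runtime.getRuntime();
            // Treat freeMemory() as a hint only; ask for 80% of it, in ints.
            long hint = (long) (runtime.freeMemory() * 0.8) / 4;
            int[] buffer = allocateLargestPossible(hint);
            System.out.println("allocated " + buffer.length + " ints");
        }
    }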
