
Go beyond Integer.MAX_VALUE constraints in Java

Setting aside the heap's capacity, are there ways to go beyond Integer.MAX_VALUE constraints in Java?

Examples are:

  1. Collections limit themselves to Integer.MAX_VALUE.
  2. StringBuilder / StringBuffer limit themselves to Integer.MAX_VALUE.

If you have a huge Collection you're going to hit all sorts of practical limits before you ever have 2^31 - 1 items in it. A Collection with a million items in it is going to be pretty unwieldy, let alone one with more than a thousand times that many.

Similarly, a StringBuilder can build a String that's 2 GB in size before it hits the MAX_VALUE limit, which is more than adequate for any practical purpose.

If you truly think that you might be hitting these limits, your application should be storing your data in a different way, probably in a database.

With a long? Works for me.

Edit: Ah, clarification of the question. Cool. My new and improved answer:

With a paging algorithm.

Coincidentally, somewhat recently for another question (Binary search in a sorted (memory-mapped ?) file in java), I whipped up a paging algorithm to get around the int parameters in the java.nio.MappedByteBuffer API.
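
The code from that answer is not reproduced here, but the general shape of such a paging scheme can be sketched as follows. The PagedFileReader name, the 1 GB page size and the single-byte accessor are illustrative assumptions, not the exact implementation from that answer; the idea is simply to map the file in page-sized chunks and translate a long position into a (page, offset) pair of ints.

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: read a file larger than 2 GB through fixed-size mapped pages.
public class PagedFileReader implements AutoCloseable {
    private static final long PAGE_SIZE = 1L << 30; // 1 GB pages (assumed size)

    private final FileChannel channel;
    private final MappedByteBuffer[] pages;

    public PagedFileReader(Path file) throws IOException {
        channel = FileChannel.open(file, StandardOpenOption.READ);
        long size = channel.size();
        int pageCount = (int) ((size + PAGE_SIZE - 1) / PAGE_SIZE);
        pages = new MappedByteBuffer[pageCount];
        for (int i = 0; i < pageCount; i++) {
            long start = i * PAGE_SIZE;
            long length = Math.min(PAGE_SIZE, size - start);
            pages[i] = channel.map(FileChannel.MapMode.READ_ONLY, start, length);
        }
    }

    // Translate a long position into a page index and an int offset within that page.
    public byte get(long position) {
        int page = (int) (position / PAGE_SIZE);
        int offset = (int) (position % PAGE_SIZE);
        return pages[page].get(offset);
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}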

You can create your own collections which have a long size(), based on the source code for those collections. To have larger arrays of Objects, for example, you can have an array of arrays (and stitch these together).

This approach will allow almost 2^62 elements.
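
As a rough illustration of the array-of-arrays idea, here is a minimal sketch of a byte store indexable with a long. The BigByteArray name and the 2^30-element chunk size are assumptions for the example; a real implementation would add bounds checks and variants for other element types.

// Sketch: a long-indexable byte store built from an array of byte arrays.
public class BigByteArray {
    private static final int CHUNK_SIZE = 1 << 30; // elements per inner array (assumed)

    private final byte[][] chunks;
    private final long length;

    public BigByteArray(long length) {
        this.length = length;
        int chunkCount = (int) ((length + CHUNK_SIZE - 1) / CHUNK_SIZE);
        chunks = new byte[chunkCount][];
        long remaining = length;
        for (int i = 0; i < chunkCount; i++) {
            chunks[i] = new byte[(int) Math.min(CHUNK_SIZE, remaining)];
            remaining -= CHUNK_SIZE;
        }
    }

    // Split the long index into an outer (chunk) index and an inner int offset.
    public byte get(long index) {
        return chunks[(int) (index / CHUNK_SIZE)][(int) (index % CHUNK_SIZE)];
    }

    public void set(long index, byte value) {
        chunks[(int) (index / CHUNK_SIZE)][(int) (index % CHUNK_SIZE)] = value;
    }

    public long length() {
        return length;
    }
}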

Array indexes are limited by Integer.MAX_VALUE, not the physical size of the array.

Therefore the maximum size of an array is linked to the size of its element type:

byte = 1 byte => max  2 GB data
char = 2 byte => max  4 GB data
int  = 4 byte => max  8 GB data
long = 8 byte => max 16 GB data

Dictionaries are a different story, because they often use techniques like buckets or a tree-shaped internal data layout. Therefore these "limits" usually don't apply, or you will need even more data to reach the limit.

In short: Integer.MAX_VALUE is not really a limit, because you need lots of memory to actually reach it. If you should ever reach this limit, you might want to think about improving your algorithm and/or data layout :)

Yes, use the BigInteger class.
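
BigInteger addresses numeric values rather than collection sizes, but for completeness, a minimal illustration of holding a value beyond Integer.MAX_VALUE:

import java.math.BigInteger;

public class BeyondMaxValue {
    public static void main(String[] args) {
        // BigInteger has no fixed upper bound, unlike int or long.
        BigInteger big = BigInteger.valueOf(Integer.MAX_VALUE).multiply(BigInteger.TEN);
        System.out.println(big); // prints 21474836470
    }
}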

A memory upgrade is necessary. :)
