
Why is HashSet in Java taking so much memory?

I'm loading a 1GB ASCII text file with about 38 million rows into a HashSet. Using Java 11, the process takes about 8GB of memory.

import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.stream.Stream;

HashSet<String> addresses = new HashSet<>(38741847);
try (Stream<String> lines = Files.lines(Paths.get("test.txt"), Charset.defaultCharset())) {
    lines.forEach(addresses::add);
}
System.out.println(addresses.size());
Thread.sleep(100000); // keep the process alive so memory usage can be inspected

Why is Java taking so much memory?

For comparison, I implemented the same thing in Python, and it takes only 4GB of memory.

import time

s = set()
with open("test.txt") as file:
    for line in file:
        s.add(line)
print(len(s))
time.sleep(1000)

A HashSet has a load factor, which defaults to 0.75. That means the backing table is reallocated (and every element rehashed) once the set is 75% full. On top of that, each entry costs a String object plus an internal HashMap.Node, so there is a fixed per-entry overhead beyond the character data itself. If your hash set should hold 38741847 elements, you have to initialize it with a capacity of 38741847 / 0.75 = 51,655,796, or set a higher load factor:

new HashSet<>(38741847, 1); // load factor 1 (100%)
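Alternatively, keep the default load factor and request a large enough capacity up front; a minimal sketch of that arithmetic (HashSet rounds the requested capacity up to the next power of two internally):

// Capacity large enough that the table never resizes while filling it
// with the expected number of elements at the default load factor 0.75.
int capacity = (int) Math.ceil(38741847 / 0.75); // 51,655,796
HashSet<String> addresses = new HashSet<>(capacity);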

In the meantime I found the answer here, where I also discovered a few alternative hash set implementations that are part of the trove4j and hppc libraries. I tested them with the same code.

trove4j took only 5.5GB

THashSet<String> s = new THashSet<>(38741847, 1); // same capacity and load factor 1 as above
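For reference, a self-contained version of that test; this assumes the trove4j jar (e.g. the net.sf.trove4j:trove4j artifact) is on the classpath, and that THashSet lives in gnu.trove.set.hash, as it does in trove4j 3.x:

import gnu.trove.set.hash.THashSet;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class TroveTest {
    public static void main(String[] args) throws Exception {
        // Same capacity and load factor 1 as in the HashSet experiment.
        THashSet<String> s = new THashSet<>(38741847, 1);
        try (Stream<String> lines = Files.lines(Paths.get("test.txt"), Charset.defaultCharset())) {
            lines.forEach(s::add);
        }
        System.out.println(s.size());
        Thread.sleep(100000); // keep the process alive to inspect memory
    }
}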

hppc took only 5GB

ObjectIdentityHashSet<String> s2 = new ObjectIdentityHashSet<>(38741847, 0.99); // expectedElements, loadFactor
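One caveat on that last result: ObjectIdentityHashSet compares elements by reference (==) rather than by equals(), so two distinct String objects containing the same characters are both kept rather than deduplicated. For deduplicating lines by value, HPPC's ObjectHashSet is the closer equivalent of HashSet; a minimal sketch, assuming HPPC's two-argument (expectedElements, loadFactor) constructor:

import com.carrotsearch.hppc.ObjectHashSet;

// Value-equality variant: deduplicates equal strings the way HashSet does.
ObjectHashSet<String> s3 = new ObjectHashSet<>(38741847, 0.99);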
