
std::unordered_map very high memory usage

Yesterday I tried to use std::unordered_map, and this code surprised me with how much memory it used.

#include <list>
#include <string>
#include <unordered_map>
using namespace std;

typedef list<string> entityId_list;
struct tile_content {
   char cost;
   entityId_list entities;
};
unordered_map<int, tile_content> hash_map;

for (size_t i = 0; i < 19200; i++) {
   tile_content t;
   t.cost = 1;
   hash_map[i] = t;  // was "map[i]" -- the map is named hash_map
}

All of this code was compiled with MS VS2010 in debug mode. What I saw in my task manager was about 1200 KB for the "clean" process, but after filling hash_map it used 8124 KB of memory. Is this normal behavior for unordered_map? Why is so much memory used?

The unordered_map structure is designed to hold large numbers of objects in a way that makes insertions, deletions, lookups, and order-agnostic traversal efficient. It's not meant to be memory-efficient for small data sets. To avoid the penalties associated with resizing, it allocates many hash-chain heads when it's first created.

That's roughly 6 MB for ~20k objects, or about 300 bytes per object. Given that the hash table may well be sized to have several times more buckets than current entries, that each bucket may itself be a pointer to a list or vector of colliding objects, that each heap allocation involved in all of that has probably been rounded up to the nearest power of two, and that you've got debug mode on (which may generate some extra bloat), it all sounds about right to me.

Anyway, you're not going to get sympathy for the memory or CPU efficiency of anything in a debug build ;-P. Microsoft can inject any slop they like there, and the user has no reasonable expectation of performance. If you find it's bad in an optimised build, then you've got something to talk about.

More generally, how it scales with size() is what matters most, but it's entirely legitimate to wonder how a program would fare with a huge number of relatively small unordered maps. It's worth noting that below a certain size(), a brute-force search in a vector, a binary search in a sorted vector, or a binary tree may out-perform an unordered map, while also being more memory-efficient.

This doesn't necessarily mean that the hash map itself uses that much memory; it means the process has requested that much memory from the OS.

That memory is then used to satisfy malloc/new requests from the program. Some (perhaps most, I'm not sure) memory allocators request more memory from the OS than is needed at that point in time, for efficiency.

To find out how much memory is actually used by the unordered_map, I would use a memory profiler like perftools.

