
What is the Big O performance of maps in golang?

The "Map types" section of the go language specification describes the interface and general usage of map types and the "Go maps in action" post on The Go Blog casually mentions hash tables and "fast lookups, adds, and deletes".

The current runtime/map.go source code describes its implementation as a hash table (which is typically amortized O(1)); however, I don't see any guarantee of performance characteristics (such as Big O performance) in the language specification or other materials.

Does the Go language make any performance guarantees (e.g. constant-time insertion/lookup/deletion) for map types, or only interface guarantees? (Compare to the Java language, where interfaces and implementations are clearly separate.)

The language reference doesn't make explicit guarantees about the performance of maps. There's an implicit expectation that maps perform like you expect hash-tables to perform. I don't see how a performance guarantee would avoid being either vaguely specified or inaccurate.

Big-O complexity is a poor way to describe run-times for maps: in practice the actual clock time is what matters, not the asymptotic complexity. Theoretically, maps with keys from finite domains (such as ints) are trivially O(1) in space and time, while maps with keys from infinite domains (such as strings) require hashing, and the specifics of equality testing have to be included in the cost. That makes inserts and lookups best case O(log N) per operation (O(N log N) to build a table of N entries), since the keys have to be at least O(log N) in size on average for a hash table to hold N distinct entries. Unless the specification gets these details right it's going to be inaccurate, and the benefit of getting it right isn't clearly worth it.
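To illustrate that hidden key-size factor, here is a minimal benchmark sketch (the key lengths, file name and package name are my own assumptions, not anything from the spec) comparing lookups with a short and a long string key; hashing and the equality check have to touch every byte of the key:

```go
// map_keylen_bench_test.go (hypothetical file name); run with: go test -bench=.
package mapperf

import (
	"strings"
	"testing"
)

// benchLookup repeatedly looks up a single string key of the given length.
func benchLookup(b *testing.B, keyLen int) {
	key := strings.Repeat("x", keyLen)
	m := map[string]int{key: 1}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = m[key] // hashing + equality comparison read all keyLen bytes
	}
}

func BenchmarkLookupShortKey(b *testing.B) { benchLookup(b, 8) }
func BenchmarkLookupLongKey(b *testing.B)  { benchLookup(b, 8192) }
```

The per-operation time should grow roughly with the key length, which is exactly the factor the plain "O(1)" shorthand leaves out.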

Providing guarantees about actual run-time rather than complexity would also be difficult: there's a wide range of target machines, as well as the confounding effects of caching and garbage collection.

To be precise, hash table performance is O(1 + n/k) including collision resolution, where n/k is the load factor. The Go spec places no restriction on the number of keys a map may hold, so maps need dynamic resizing with partial rehashing to keep the load factor bounded as they grow or shrink. This means near-O(1) lookups are easily achieved in a map of constant size, but cannot be guaranteed even in theory for massive insertions or deletions: such operations require reallocation, partial rehashing, and possibly garbage collection to keep the load factor and memory usage reasonable.
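As a concrete illustration of that resizing cost, here is a minimal sketch (the element count and key type are arbitrary assumptions) of pre-sizing a map with a capacity hint so that a large batch of inserts avoids most of the incremental growth and rehashing described above:

```go
package main

import "fmt"

func main() {
	const n = 1_000_000

	// Without a hint the map starts small and is grown (with incremental
	// rehashing) several times as the load factor is exceeded.
	grown := make(map[int]int)
	for i := 0; i < n; i++ {
		grown[i] = i
	}

	// With a size hint the runtime allocates enough buckets up front,
	// so these inserts trigger little or no growth work.
	presized := make(map[int]int, n)
	for i := 0; i < n; i++ {
		presized[i] = i
	}

	fmt.Println(len(grown), len(presized))
}
```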

From what I can see, the Go Programming Language Specification does not make any performance guarantees for map types. Hash tables in general are amortized O(1) on average for inserting, looking up and deleting data; we can assume that this also holds for Go's maps.

If you are worried about map performance, you can always benchmark your code under different loads and decide whether to use Go's maps or some other data structure.
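For example, a table-driven benchmark along these lines (the sizes, file name and package name are placeholders of my choosing) lets you compare insert and lookup cost at different loads:

```go
// map_size_bench_test.go (hypothetical file name); run with: go test -bench=.
package mapperf

import (
	"strconv"
	"testing"
)

// BenchmarkMapInsert builds a map of each size from scratch per iteration.
func BenchmarkMapInsert(b *testing.B) {
	for _, size := range []int{1_000, 100_000, 1_000_000} {
		b.Run(strconv.Itoa(size), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				m := make(map[int]int)
				for k := 0; k < size; k++ {
					m[k] = k
				}
			}
		})
	}
}

// BenchmarkMapLookup measures lookups against a pre-filled map of each size.
func BenchmarkMapLookup(b *testing.B) {
	for _, size := range []int{1_000, 100_000, 1_000_000} {
		m := make(map[int]int, size)
		for k := 0; k < size; k++ {
			m[k] = k
		}
		b.Run(strconv.Itoa(size), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				_ = m[i%size]
			}
		})
	}
}
```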
