
System.Collections.Generic.Dictionary = Ultimate performance?

I'm writing a C# target for Haxe, and I've been studying the performance of Haxe's std library so we can provide the best performance possible through its cross-platform code.

One very good example is the hash table code. I was a little reluctant to use .NET's Dictionary, as it seems bulky (the key/value entry structs can take up a lot of memory due to alignment, on top of the extra bookkeeping each entry carries), and since the std library has no notion of an object hash, I really thought I could squeeze out a little performance by never calling GetHashCode and inlining the hash computation instead.

Also it's clear that the Dictionary implementation uses a linked list to deal with collisions, which is far from ideal.
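
For reference, this is roughly the entry layout as it appears in the published reference source (a simplified, illustrative sketch; field sizes assume a 64-bit process):

    // Simplified from the .NET Framework reference source: Dictionary keeps one
    // flat Entry[] array, and collisions are chained through the `next` field,
    // which is an index into that same array rather than a heap-allocated node.
    struct Entry<TKey, TValue>
    {
        public int hashCode;   // cached hash (lower 31 bits); -1 marks a freed slot
        public int next;       // index of the next entry in the same bucket chain; -1 ends it
        public TKey key;
        public TValue value;
    }

    // buckets[hash % buckets.Length] holds the index of the first entry of a chain.
    // For an int key and a reference value on 64-bit that is 4 + 4 + 4 (+ padding) + 8,
    // roughly 24 bytes per entry once alignment is accounted for.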

So we started to implement our own solution, beginning with IntHash (the Dictionary counterpart for int keys). We first implemented Hopscotch hashing, but it didn't turn out very well; in hindsight it was fairly obvious that it wouldn't handle huge hash tables well, since the neighborhood size H is normally a machine word, and the larger the table gets relative to H, the poorer the performance.
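
For context, the lookup side of what we tried looked roughly like this (an illustrative sketch rather than our actual code; insertion and the displacement logic, which is where big tables hurt, are left out):

    // Simplified hopscotch lookup. Every home bucket keeps a 32-bit "hop info"
    // bitmap; bit i means "an entry hashing to this bucket is stored i slots away".
    // A lookup therefore never probes more than H = 32 slots, all adjacent in memory.
    sealed class HopscotchIntHash
    {
        int[] keys;
        int[] values;
        uint[] hopInfo;   // one 32-bit neighborhood bitmap per bucket
        int mask;         // capacity - 1, capacity is a power of two

        public HopscotchIntHash(int capacityPowerOfTwo)
        {
            keys = new int[capacityPowerOfTwo];
            values = new int[capacityPowerOfTwo];
            hopInfo = new uint[capacityPowerOfTwo];
            mask = capacityPowerOfTwo - 1;
        }

        public bool TryGet(int key, out int value)
        {
            int home = key & mask;             // inlined hash for int keys
            for (uint hops = hopInfo[home]; hops != 0; hops &= hops - 1)
            {
                int slot = (home + LowestBitIndex(hops)) & mask;
                if (keys[slot] == key)
                {
                    value = values[slot];
                    return true;
                }
            }
            value = 0;
            return false;
        }

        static int LowestBitIndex(uint v)
        {
            int n = 0;
            while ((v & 1) == 0) { v >>= 1; n++; }
            return n;
        }
    }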

We then moved on to a khash-inspired algorithm. That one had much more potential: its benchmarks are impressive, and it handles collisions within the same array. It also had some great properties, like resizing without needing twice the memory we otherwise would.
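
The core layout we borrowed is roughly this (again an illustrative sketch, not the real code; insertion, deletion and the in-place rehash are omitted):

    // khash-inspired layout: keys and values live in flat arrays, and a packed
    // flags array (2 bits per slot: "empty" and "deleted") replaces per-entry
    // objects. Collisions are resolved by probing within the same arrays, so the
    // only per-slot overhead beyond key and value is those 2 bits.
    sealed class KIntHash
    {
        readonly int[] keys;
        readonly int[] vals;
        readonly uint[] flags;   // 2 bits per slot, 16 slots packed into each uint
        readonly int mask;       // capacity - 1, capacity is a power of two

        public KIntHash(int capacityPowerOfTwo)
        {
            keys = new int[capacityPowerOfTwo];
            vals = new int[capacityPowerOfTwo];
            flags = new uint[(capacityPowerOfTwo >> 4) + 1];
            for (int i = 0; i < flags.Length; i++) flags[i] = 0xAAAAAAAAu;  // every slot starts "empty"
            mask = capacityPowerOfTwo - 1;
        }

        bool IsEmpty(int i)   => ((flags[i >> 4] >> ((i & 15) << 1)) & 2u) != 0;
        bool IsDeleted(int i) => ((flags[i >> 4] >> ((i & 15) << 1)) & 1u) != 0;

        public bool TryGet(int key, out int value)
        {
            int i = key & mask;          // inlined hash, no GetHashCode call
            int step = 0;
            while (!IsEmpty(i))
            {
                if (!IsDeleted(i) && keys[i] == key)
                {
                    value = vals[i];
                    return true;
                }
                i = (i + ++step) & mask; // triangular probing reaches every slot of a power-of-two table
            }
            value = 0;
            return false;
        }
    }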

The benchmarks were disappointing. Of course, memory usage was much lower in our implementation than in Dictionary's. But I was hoping to get a nice performance boost as well, and unfortunately that was not the case. It wasn't too far behind - less than an order of magnitude - but for both sets and gets, .NET's implementation still performed better.
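
For what it's worth, the numbers came from straightforward micro-benchmarks along these lines (an illustrative sketch; our own tables were driven through the same loops):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    // Minimal sketch: time N inserts and N successful lookups against
    // Dictionary<int,int>. Keys are scrambled so they aren't simply sequential.
    static class HashBench
    {
        static void Main()
        {
            const int N = 5_000_000;
            var dict = new Dictionary<int, int>(N);       // pre-sized so growth doesn't dominate

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < N; i++)
            {
                int key = unchecked(i * (int)0x9E3779B1); // odd multiplier keeps keys distinct
                dict[key] = i;
            }
            Console.WriteLine($"set: {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            long sum = 0;
            for (int i = 0; i < N; i++)
            {
                int key = unchecked(i * (int)0x9E3779B1);
                sum += dict[key];                         // keep the result so nothing is optimized away
            }
            Console.WriteLine($"get: {sw.ElapsedMilliseconds} ms (checksum {sum})");
        }
    }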

So my question is: is that the best we have for C#? I tried looking for custom solutions, and it seems there are almost none. There is the C5 generic collection library, but the code is so cluttered that I did not even test it, and I could not find any benchmarks for it either.

So... is that it? Should I just wrap Dictionary<>?

I've found that the .NET Dictionary performs well, if not exceptionally well, in most situations. It's a good general purpose implementation. The problem I most often run into is the 2-gigabyte limit. On a 64-bit system, you can't add more than about 89.5 million items to a dictionary (when the key is an integer or a reference, and the value is a reference). Dictionary overhead appears to be 24 bytes per item.

That limit makes itself known in a very odd way. The Dictionary grows by doubling: when it gets full, it increases its capacity to the next prime number that is at least double the current size. Because of that, the dictionary grows to about 47 million items and then throws an exception, because when it tries to double (to about 94 million) the memory allocation fails (due to the 2-gigabyte limit). I get around the problem by pre-allocating the Dictionary (i.e. calling the constructor that lets you specify the capacity). That also speeds up populating the dictionary, because it never has to grow, which would entail allocating a new array and re-hashing everything.
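
Concretely, the workaround looks like this (an illustrative sketch; it assumes a 64-bit process with enough physical memory for roughly 2 GB of entries):

    using System;
    using System.Collections.Generic;

    static class PreallocatedDictionaryDemo
    {
        static void Main()
        {
            // Pre-sizing reserves the buckets/entries arrays once, so the dictionary
            // never doubles. Growth is what normally fails near the ceiling: doubling
            // from ~47M to ~94M entries would need a single array larger than 2 GB.
            const int ExpectedItems = 80_000_000;   // under the ~89.5M ceiling quoted above
            var big = new Dictionary<long, object>(ExpectedItems);

            for (long i = 0; i < ExpectedItems; i++)
                big[i] = null;                      // no intermediate rehashing ever happens

            Console.WriteLine(big.Count);
        }
    }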

What makes you say that Dictionary uses a linked list for collision resolution? I'm pretty sure it uses open addressing, but I don't know how it does the probes. I guess if it does linear probing, then the effect is similar to what you'd get with a linked list.

We wrote our own BigDictionary class to get past the 2-gigabyte limit and found that a straightforward open addressing scheme with linear probing gives reasonably good performance. It's not as fast as Dictionary, but it can handle hundreds of millions of items (billions, if I had the memory).
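
Stripped of resizing, deletion, and whatever you do to keep individual arrays under the 2-gigabyte cap, the core of an open addressing table with linear probing looks roughly like this (an illustrative sketch, not the actual BigDictionary code):

    // One keys array, one values array, one occupancy array. Probing simply walks
    // forward until it finds the key or an empty slot. Fixed capacity, no deletion.
    sealed class LinearProbeMap
    {
        readonly long[] keys;
        readonly long[] values;
        readonly bool[] used;
        readonly int mask;                 // capacity - 1, capacity is a power of two

        public LinearProbeMap(int capacityPowerOfTwo)
        {
            keys = new long[capacityPowerOfTwo];
            values = new long[capacityPowerOfTwo];
            used = new bool[capacityPowerOfTwo];
            mask = capacityPowerOfTwo - 1;
        }

        static int Hash(long key) => (int)(key ^ (key >> 32)) & int.MaxValue;

        public void Set(long key, long value)
        {
            int i = Hash(key) & mask;
            while (used[i] && keys[i] != key)
                i = (i + 1) & mask;        // linear probe; assumes the table never fills completely
            keys[i] = key;
            values[i] = value;
            used[i] = true;
        }

        public bool TryGet(long key, out long value)
        {
            int i = Hash(key) & mask;
            while (used[i])
            {
                if (keys[i] == key) { value = values[i]; return true; }
                i = (i + 1) & mask;
            }
            value = 0;
            return false;
        }
    }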

That said, you should be able to write a faster task-specific hash table that outperforms the .NET Dictionary in some situations. But for a general purpose hash table I think you'll be hard pressed to do better than what the BCL provides.

There are many things to consider in designing a "better" hash table. One of the reasons the custom approaches you tried were slower than, or no better than, the .NET Dictionary is that a hash table's performance is very often dependent on:

  • The data being hashed
  • The performance of the hash function
  • The load factor of the table
  • The number of collisions vs non-collisions
  • The algorithm for collision resolution
  • The amount of data in the table and how it's stored (by pointer/reference or directly within the buckets)
  • The access patterns to the data
  • The number of insertions/deletions vs retrievals
  • The need for resizing in a closed hashing/open addressing implementation
  • and many other factors...

With so many things to tweak and tune, it is difficult, without a significant amount of effort, to come up with a generally high-performance (in both time and space) hash table. That is why, if you are going to create a custom hash table instead of using one built into a standard library (such as .NET's), be ready to spend countless hours, and be aware that your finely tuned implementation may only be tuned for the specific type and amount of data you are hashing.

Therefore, no, the .NET Dictionary is not the ultimate hash table for any specific purpose. But, given the frequency of dictionary use, I am sure that the Microsoft BCL (Base Class Library) team performed a huge amount of profiling to choose the approach they chose for the general case.


 