
Dictionary with two hash functions in C#?

I've got a huge (>>10m) list of entries. Each entry offers two hash functions:

  • Cheap: computes quickly, but its distribution is terrible (it may put 99% of items into 1% of the hash space)
  • Expensive: takes a lot of time to compute, but its distribution is much better

An ordinary Dictionary lets me use only one of these hash functions. I'd like a Dictionary that uses the cheap hash function first, and checks the expensive one on collisions.

Using a dictionary inside a dictionary seemed like a good idea for this. I currently use, basically, this monstrosity:

Dictionary<int, Dictionary<int, List<Foo>>>;

I improved this design so that the expensive hash is computed only when two items actually share the same cheap hash.

It fits perfectly and does a flawless job for me, but it looks like something that should have died 65 million years ago.

To my knowledge, this functionality is not included in the basic framework. I am about to write a DoubleHashedDictionary class, but I wanted to hear your opinions first.

As for my specific case:
First hash function: number of files in a file system directory (fast)
Second hash function: sum of the sizes of those files (slow)
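For illustration, the nested-dictionary approach could be wrapped up like this (a sketch with hypothetical names; it computes the expensive hash eagerly on insert, leaving out the question's refinement of deferring it until a cheap-hash bucket receives a second entry):

```csharp
using System;
using System.Collections.Generic;

// Outer dictionary keyed by the cheap hash, inner one by the expensive
// hash; the final List<T> still needs an equality check by the caller,
// exactly as in Dictionary<int, Dictionary<int, List<Foo>>>.
class TwoLevelIndex<T>
{
    private readonly Func<T, int> _cheap;
    private readonly Func<T, int> _expensive;
    private readonly Dictionary<int, Dictionary<int, List<T>>> _buckets = new();

    public TwoLevelIndex(Func<T, int> cheap, Func<T, int> expensive)
    {
        _cheap = cheap;
        _expensive = expensive;
    }

    public void Add(T item)
    {
        int c = _cheap(item);
        if (!_buckets.TryGetValue(c, out var inner))
            _buckets[c] = inner = new Dictionary<int, List<T>>();
        int e = _expensive(item);
        if (!inner.TryGetValue(e, out var list))
            inner[e] = list = new List<T>();
        list.Add(item);
    }

    // Returns the candidates sharing both hashes with the probe.
    public IReadOnlyList<T> Find(T probe)
    {
        if (_buckets.TryGetValue(_cheap(probe), out var inner) &&
            inner.TryGetValue(_expensive(probe), out var list))
            return list;
        return Array.Empty<T>();
    }
}
```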


In your case, you are technically using a combined hash function (A|B), not double hashing. However, depending on how huge your "huge" list of entries is and on the characteristics of your data, consider the following:

  • A 20% full hash table with a poor distribution can have more than an 80% chance of collision. That means your expected hashing cost per operation is roughly (0.8 × expensive + 0.2 × cheap), plus the cost of the lookups themselves. So if your table is more than 20% full, the (A|B) scheme may not be worth it.

  • Coming up with a perfect hash function is possible, but constructing one is O(n³), which makes it impractical at this scale.

  • If performance is supremely important, you can build a hash table specifically tuned for your data by testing various hash functions against your key set.
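One quick way to act on that last point is to histogram a candidate hash function over a sample of real keys before committing to it. A sketch, with made-up sample data and a stand-in hash function:

```csharp
using System;
using System.Linq;

// Hypothetical inputs: replace sampleKeys with real key data and
// CheapHash with the candidate hash function under test.
string[] sampleKeys = { "a", "bb", "ccc", "dd", "e", "ffff" };
Func<string, int> CheapHash = s => s.Length;   // stand-in hash

int buckets = 1024;
var counts = new int[buckets];
foreach (var key in sampleKeys)
    counts[(CheapHash(key) & 0x7FFFFFFF) % buckets]++;

// The closer the fullest bucket is to the mean, the more even the spread.
double mean = (double)sampleKeys.Length / buckets;
Console.WriteLine($"max bucket = {counts.Max()}, mean = {mean:F4}");
```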

Have you taken a look at the Power Collections or C5 Collections libraries? The Power Collections library hasn't had much action recently, but the C5 stuff seems to be fairly up to date.

I'm not sure if either library has what you need, but they're pretty useful and they're open source, so either may provide a decent base implementation for you to extend with your desired functionality.

You're basically talking about a hash table of hash tables, each using a different GetHashCode implementation. While that's possible, I think you'd want to consider seriously whether you'll actually get a performance improvement over just using one or the other.

Will a substantial number of objects actually be located via the quick-hash mechanism alone, without having to resort to the more expensive one to narrow things down further? If you can't locate a significant number purely from the first calculation, you save nothing by doing it in two steps (not knowing the data, it's hard to predict whether this is the case).

If a significant number can be located in one step, you'll probably have to do a fair bit of tuning to work out how many records to store in each outer hash location before falling back to an inner "expensive" hashtable lookup. Under certain circumstances I can see how you'd get a performance gain from this (those circumstances would be few and far between, but they aren't inconceivable).

Edit

I just saw your amendment to the question: you plan to do both lookups regardless. I doubt you'll get any performance benefit from this that you couldn't get just by configuring the main hash table a bit better. Have you tried using a single dictionary with an appropriate capacity passed to the constructor, and perhaps an XOR of the two hash codes as your hash code?
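A sketch of that suggestion, with a hypothetical DirKey value type standing in for the question's directory data:

```csharp
using System;
using System.Collections.Generic;

// Sizing the dictionary up front avoids repeated rehashing; pass your
// expected entry count (here >10M in the question, a small number below).
var map = new Dictionary<DirKey, string>(capacity: 1024);
map[new DirKey(3, 1024)] = "some directory";
Console.WriteLine(map[new DirKey(3, 1024)]);

// Hypothetical key type: the two hash inputs folded into one value type.
readonly struct DirKey : IEquatable<DirKey>
{
    public readonly int FileCount;    // cheap hash input
    public readonly long TotalSize;   // expensive hash input

    public DirKey(int fileCount, long totalSize)
    {
        FileCount = fileCount;
        TotalSize = totalSize;
    }

    // The answer's suggestion: XOR the two hash codes together.
    public override int GetHashCode() => FileCount ^ TotalSize.GetHashCode();
    public bool Equals(DirKey other) =>
        FileCount == other.FileCount && TotalSize == other.TotalSize;
    public override bool Equals(object obj) => obj is DirKey k && Equals(k);
}
```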

First off, I think you're on the right path in implementing your own hashtable, if what you're describing is truly what you need. But as a critic, I'd like to ask a few questions:

Have you considered using something more unique for each entry?

I am assuming that each entry is file system directory information. Have you considered using the full path as the key, perhaps prefixed with the computer name or IP address?

On the other hand, if you're using the number of files as the hash key, are those directories ever going to change? Because if the hash key/result changes, you will never be able to find the entry again.

While on this topic, if the directory content/size is never going to change, can you store that value somewhere to save the time to actually calculate that?
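That caching idea might look like this (hypothetical names; the expensive sum-of-file-sizes is computed once per directory and reused afterwards):

```csharp
using System;

class DirEntry
{
    public string Path { get; }
    private long? _totalSize;               // null until first computed

    public DirEntry(string path) => Path = path;

    // computeSize is the expensive directory scan; it runs at most once.
    public long TotalSize(Func<string, long> computeSize)
    {
        _totalSize ??= computeSize(Path);   // memoized on first call
        return _totalSize.Value;
    }
}
```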

Just my two cents.
