
How to check if Dictionary already has a key 'x'?

I am trying to implement a simple algorithm using C#'s Dictionary:

My 'outer' dictionary looks like this: Dictionary<paramID, Dictionary<string, object>> [where paramID is simply an identifier which holds 2 strings].

If key 'x' is already in the dictionary, add a specific entry to that record's inner dictionary; if it doesn't exist, add a new entry to the outer Dictionary and then add the entry to its inner dictionary.

Somehow, when I use TryGetValue it always returns false, so it always creates new entries in the outer Dictionary - which produces duplicates.

My code looks more or less like this:

    Dictionary<string, object> tempDict = new Dictionary<string, object>();

    if(outerDict.TryGetValue(new paramID(xKey, xValue), out tempDict))
    {
     tempDict.Add(newKey, newValue);
    } 

The block inside the if is never executed, even when that specific entry is present in the outer Dictionary.

Am I missing something? (If you want, I can post screenshots from the debugger - or anything else you need.)

If you haven't overridden Equals and GetHashCode on your paramID type, and it's a class rather than a struct, then the default meaning of equality will be in effect, and each paramID will only be equal to itself.

You likely want something like:

public class ParamID : IEquatable<ParamID> // IEquatable makes this faster
{
  private readonly string _first; //not necessary, but immutability of keys prevents other possible bugs
  private readonly string _second;
  public ParamID(string first, string second)
  {
    _first = first;
    _second = second;
  }
  public bool Equals(ParamID other)
  {
    //change for case-insensitive, culture-aware, etc.
    return other != null && _first == other._first && _second == other._second;
  }
  public override bool Equals(object other)
  {
    return Equals(other as ParamID);
  }
  public override int GetHashCode()
  {
    //change for case-insensitive, culture-aware, etc.
    int fHash = _first.GetHashCode();
    return ((fHash << 16) | (fHash >> 16)) ^ _second.GetHashCode();
  }
}
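With equality and hashing defined like that, a quick sketch of the lookup from the question (the literal values here are just placeholders):

var outerDict = new Dictionary<ParamID, Dictionary<string, object>>();
outerDict.Add(new ParamID("xKey", "xValue"), new Dictionary<string, object>());

Dictionary<string, object> tempDict;
if(outerDict.TryGetValue(new ParamID("xKey", "xValue"), out tempDict))
{
  //found: the two distinct ParamID instances now compare equal
  tempDict.Add("newKey", "newValue");
}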

For the requested explanation, I'm going to do a different version of ParamID where the string comparison is case-insensitive and ordinal rather than culture-based (a form that would be appropriate for some computer-readable codes, e.g. matching keywords in a case-insensitive computer language or case-insensitive identifiers like language tags, but not for something human-readable, e.g. it will not realise that "SS" is a case-insensitive match to "ß"). This version also considers {"A", "B"} to match {"B", "A"} - that is, it doesn't care which way around the strings are. By doing a different version with different rules, it should be possible to touch on a few of the design considerations that come into play.

Let's start with our class containing just the two fields that are its state:

public class ParamID
{
  private readonly string _first; //not necessary, but immutability of keys prevents other possible bugs
  private readonly string _second;
  public ParamID(string first, string second)
  {
    _first = first;
    _second = second;
  }
}

At this point if we do the following:

ParamID x = new ParamID("a", "b");
ParamID y = new ParamID("a", "b");
ParamID z = x;
bool a = x == y;//a is false
bool b = z == x;//b is true

That's because, by default, a reference type is only equal to itself. Why? Well, firstly, sometimes that's just what we want; and secondly, it isn't always clear what else we might want without the programmer defining how equality works.

Note also that if ParamID were a struct, it would have equality defined much like what you wanted. However, the implementation would be rather inefficient, and also buggy if it contained a decimal, so either way it's always a good idea to implement equality explicitly.
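For example, a bare struct version (hypothetical, purely to illustrate the point) already compares field-by-field, but does so through ValueType.Equals, which can involve reflection and boxing:

public struct ParamIDStruct
{
  public string First;
  public string Second;
}

var p = new ParamIDStruct { First = "a", Second = "b" };
var q = new ParamIDStruct { First = "a", Second = "b" };
Console.WriteLine(p.Equals(q)); //True, but via the slow default field-by-field comparison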

The first thing we are going to do to give this a different concept of equality is to implement IEquatable<ParamID>. This is not strictly necessary (and didn't exist until .NET 2.0), but:

  1. It will be more efficient in a lot of use cases, including when used as the key to a Dictionary<TKey, TValue>.
  2. It's easy to do the next step with this as a starting point.

Now, there are four rules we must follow when we implement an equality concept:

  1. An object must still be always equal to itself.
  2. If X == Y and X != Z, then later if the state of none of those objects has changed, X == Y and X != Z still.
  3. If X == Y and Y == Z, then X == Z.
  4. If X == Y and Y != Z then X != Z.

Most of the time, you'll end up following all these rules without even thinking about it; you just have to check them if you're being particularly strange and clever in your implementation. Rule 1 is also something that we can take advantage of to get a performance boost in some cases:

public class ParamID : IEquatable<ParamID>
{
  private readonly string _first; //not necessary, but immutability of keys prevents other possible bugs
  private readonly string _second;
  public ParamID(string first, string second)
  {
    _first = first;
    _second = second;
  }
  public bool Equals(ParamID other)
  {
    if(other == null)
      return false;
    if(ReferenceEquals(this, other))
      return true;
    //ordinal, case-insensitive comparison, matching the rules described above
    //(and the OrdinalIgnoreCase comparer used in GetHashCode below)
    if(string.Equals(_first, other._first, StringComparison.OrdinalIgnoreCase) && string.Equals(_second, other._second, StringComparison.OrdinalIgnoreCase))
      return true;
    //also a match if the two strings are the same but the other way around
    return string.Equals(_first, other._second, StringComparison.OrdinalIgnoreCase) && string.Equals(_second, other._first, StringComparison.OrdinalIgnoreCase);
  }
}

The first thing we've done is see if we're being compared with equality to null. We almost always want to return false in such cases (not always, but the exceptions are very, very rare and if you don't know for sure you're dealing with such an exception, you almost certainly are not), and certainly we don't want to throw a NullReferenceException.

The next thing we do is to see if the object is being compared with itself. This is purely an optimisation. In this case, it's probably a waste of time, but it can be very useful with more complicated equality tests, so it's worth pointing out this trick here. This takes advantage of the rule that identity entails equality, that is, any object is equal to itself (Ayn Rand seemed to think this was somehow profound).

Finally, having dealt with these two special cases, we get to the actual rule for equality. As I said above, my example considers two objects equal if they have the same two strings, in either order, for case-insensitive ordinal comparisons, so I've a bit of code to work that out.

(Note that the order in which we compare component parts can have a performance impact. Not in this case, but with a class that contains both an int and a string we would compare the ints first, because that is faster and we will hence perhaps find an answer of false before we even look at the strings.)
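A quick sketch of what this Equals gives us (the values are made up):

var a = new ParamID("abc", "DEF");
var b = new ParamID("def", "ABC");

Console.WriteLine(a.Equals(a));    //True: short-circuits on ReferenceEquals
Console.WriteLine(a.Equals(null)); //False: null is never equal
Console.WriteLine(a.Equals(b));    //True: case is ignored and so is the order of the two strings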

Now at this point we've a good basis for overriding the Equals method defined in object :

public override bool Equals(object other)
{
  return Equals(other as ParamID);
}

Since as will return a ParamID reference if other is a ParamID and null for anything else (including if null was what we were passed in the first place), and since we already handle comparison with null, we're all set.

Try to compile at this point and you will get a warning that you have overridden Equals but not GetHashCode (the same is true if you'd done it the other way around).

GetHashCode is used by the dictionary (and other hash-based collections like Hashtable and HashSet<T>) to decide where to place the key internally. It will take the hashcode, re-hash it down to a smaller value in a way that is its own business, and use it to place the object in its internal store.

Because of this, it's clear why the following is a bad idea were ParamID not readonly on all fields:

ParamID x = new ParamID("a", "b");
dict.Add(x, 33);
x.First = "c";//x will now likely never be found in dict because its hashcode doesn't match its position!

This means the following rules apply to hash-codes:

  1. Two objects considered equal, must have the same hashcode. (This is a hard rule, you will have bugs if you break it).
  2. While we can't guarantee uniqueness, the more spread out the returned results, the better. (Soft rule, you will have better performance the better you do at it).
  3. (Well, 2½.) While not a strict rule, if we take such a complicated approach to point 2 above that it takes forever to return a result, the nett effect will be worse than if we had a poorer-quality hash. So we want to try to be reasonably quick too if we can.

Despite the last point, it's rarely worth memoising the results. Hash-based collections will normally memoise the value themselves, so it's a waste to do so in the object.

For the first implementation, because our approach to equality depended upon the default approach to equality of the strings, we could use string's default hashcode. For my different version I'll use another approach that we'll explore more later:

public override int GetHashCode()
{
  return StringComparer.OrdinalIgnoreCase.GetHashCode(_first) ^ StringComparer.OrdinalIgnoreCase.GetHashCode(_second);
}
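A quick check that the hard rule above holds (equal objects must produce equal hashcodes), again with made-up values:

var a = new ParamID("Foo", "bar");
var b = new ParamID("BAR", "foo");

Console.WriteLine(a.Equals(b));                        //True under the order- and case-insensitive rules
Console.WriteLine(a.GetHashCode() == b.GetHashCode()); //True: XOR is symmetric and the comparer ignores case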

Let's compare this to the first version. In both cases we get hashcodes of the component parts. If the values were integers, chars or bytes we would have worked with the values themselves, but here we build on the work done in implementing the same logic for those parts. In the first version we use the GetHashCode of string itself, but since "a" has a different hashcode to "A" that won't work here, so we use a class that produces a hashcode ignoring that difference.

The other big difference between the two is that in the first case we mix the bits up more with ((fHash << 16) | (fHash >> 16)). The reason for this is to avoid duplicate hashes. We can't produce a perfect hashcode where every different object has a different value, because there are only 4294967296 possible hashcode values, but many more possible values for ParamID (including null, which is treated as having a hashcode of 0). (There are cases where perfect hashes are possible, but they bring in different concerns than here.) Because of this imperfection we have to think not only about which values are possible, but which are likely. Generally, shifting bits like we've done in the first version avoids common values having the same hash; in the first version we don't want {"A", "B"} to hash the same as {"B", "A"}.

It's an interesting experiment to produce a deliberately poor GetHashCode that always returns 0. It'll work, but instead of being close to O(1), dictionary operations will be O(n), and poor as O(n) goes at that!
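For instance (a deliberately bad sketch, don't actually use this):

public override int GetHashCode()
{
  //legal (equal objects still hash equally) but every key collides,
  //so dictionary lookups degrade from roughly O(1) to O(n)
  return 0;
}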

The second version doesn't do that bit-mixing, because it has different rules: for it we actually want to consider values that are the same but for being switched around as equal, and hence give them the same hashcode.

The other big difference is the use of StringComparer.OrdinalIgnoreCase. This is an instance of StringComparer which, among other interfaces, implements IEqualityComparer<string> and IEqualityComparer. There are two interesting things about the IEqualityComparer<T> and IEqualityComparer interfaces.

The first is that hash-based collections (such as Dictionary) all use them; it's just that unless they are passed an instance of one in their constructor, they will use EqualityComparer<T>.Default, which calls into the Equals and GetHashCode methods we've described above.
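For example, string already ships with several such comparers, so a dictionary can be made case-insensitive without touching string's own Equals or GetHashCode:

var lookup = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
lookup.Add("Colour", 1);
Console.WriteLine(lookup.ContainsKey("COLOUR")); //True: the comparer, not the key type, decides equality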

The other is that it allows us to ignore the Equals and GetHashCode mentioned above, and provide them from another class. There are three advantages to this:

  1. We can use them in cases (string is a classic case) where there is more than one likely definition of "equals".

  2. We can ignore the definition provided by the class' author, and provide our own.

  3. We can use them to avoid a particular attack. This attack is based on being in a situation where input you provide will be hashed by the code you are attacking. You pick the input so as to deliberately provide objects that are different but hash the same. This means that the poor performance we talked about avoiding earlier is hit, and it can be so bad that it becomes a denial-of-service attack. By providing different IEqualityComparer implementations with a random element in the hash code (but the same for every call on a given instance of the comparer) we can vary the algorithm enough each time to thwart the attack. The use for this is rare (it has to be something that will hash based purely on outside input that is large enough for the poor performance to really hurt), but vital when it comes up.

Finally, if we override Equals we may or may not want to override == and != too. It can be useful to keep them referring to identity only (there are times when that is what we care most about), but it can also be useful to have them follow the other semantics ("abc" == "ab" + "c" is an example of such an override).
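If we do want == to follow the same semantics, a minimal sketch (added inside ParamID) is to delegate to the Equals defined above:

public static bool operator ==(ParamID x, ParamID y)
{
  if(ReferenceEquals(x, y))
    return true;  //covers both-null and same-instance
  if((object)x == null)
    return false; //only x is null; the cast avoids calling this operator recursively
  return x.Equals(y);
}
public static bool operator !=(ParamID x, ParamID y)
{
  return !(x == y);
}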

In summary:

The default equality of reference objects is identity (equal only to itself).

The default equality of value types is a simple comparison of all fields (but poor in performance).

We can change the concept of equality for our classes in either case, but this MUST involve both Equals and GetHashCode*

We can also provide yet another concept of equality from outside the class, with an IEqualityComparer<T>.

Dictionary, HashSet, ConcurrentDictionary, etc. all depend on this.

Hashcodes represent a mapping from all values of an object to a 32-bit number.

Hashcodes must be the same for objects we consider equal.

Hashcodes must be spread well.

*Incidentally, anonymous classes have a simple comparison like that of value types, but better performance, which matches almost any case in which we might care about the hash code of an anonymous type.

Most likely, paramID does not implement equality comparison correctly.

It should be implementing IEquatable<paramID> and that means especially that the GetHashCode implementation must adhere to the requirements (see "Notes to implementers").

As for keys in dictionaries, MSDN says:

As long as an object is used as a key in the Dictionary(Of TKey, TValue), it must not change in any way that affects its hash value. Every key in a Dictionary(Of TKey, TValue) must be unique according to the dictionary's equality comparer. A key cannot be Nothing, but a value can be, if the value type TValue is a reference type.

Dictionary(Of TKey, TValue) requires an equality implementation to determine whether keys are equal. You can specify an implementation of the IEqualityComparer(Of T) generic interface by using a constructor that accepts a comparer parameter; if you do not specify an implementation, the default generic equality comparer EqualityComparer(Of T).Default is used. If type TKey implements the System.IEquatable(Of T) generic interface, the default equality comparer uses that implementation.

Since you don't show the paramID type I cannot go into more detail.

As an aside: that's a lot of keys and values getting tangled in there. There's a dictionary inside a dictionary, and the keys of the outer dictionary aggregate some kind of value as well. Perhaps this arrangement can be advantageously simplified? What exactly are you trying to achieve?

Use the Dictionary.ContainsKey method.

And so:

Dictionary<string, object> tempDict = new Dictionary<string, object>();

paramID searchKey = new paramID(xKey, xValue);
if(outerDict.ContainsKey(searchKey))
{  
   outerDict.TryGetValue(searchKey, out tempDict);
   tempDict.Add(newKey, newValue);
} 
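Note that ContainsKey followed by TryGetValue looks the key up twice. Once equality is implemented correctly, a single TryGetValue covers the whole add-or-create pattern from the question (a sketch using the question's names):

Dictionary<string, object> innerDict;
paramID searchKey = new paramID(xKey, xValue);

if(!outerDict.TryGetValue(searchKey, out innerDict))
{
   //not there yet: create the inner dictionary and register it under this key
   innerDict = new Dictionary<string, object>();
   outerDict.Add(searchKey, innerDict);
}

innerDict.Add(newKey, newValue);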

Also don't forget to override the Equals and GetHashCode methods in order to correctly compare two paramIDs:

class paramID
{
    // rest of things

    public override bool Equals(object obj)
    {
         paramID p = obj as paramID;
         if(p == null) return false; // also covers obj being null or of a different type

         // how do you determine if two paramIDs are the same?
         if(p.key == this.key) return true;
         return false;
    }

    public override int GetHashCode()
    {
         return this.key.GetHashCode();
    }
}
