
Difference between C# interface and Haskell Type Class

I know that there is a similar question here, but I would like to see an example which clearly shows what you cannot do with an interface but can do with a type class.

For comparison I'll give you an example code:

class Eq a where 
    (==) :: a -> a -> Bool
instance Eq Integer where 
    x == y  =  x `integerEq` y

C# code:

interface Eq<T> { bool Equal(T elem); }
public class Integer : Eq<int>
{
    private readonly int _elem;
    public Integer(int elem) { _elem = elem; }
    public bool Equal(int elem)
    {
        return _elem == elem;
    }
}

Please correct my example if I have misunderstood.

Typeclasses are resolved based on a type, while interface dispatch happens against an explicit receiver object. Type class arguments are implicitly provided to a function while objects in C# are provided explicitly. As an example, you could write the following Haskell function which uses the Read class:

readLine :: Read a => IO a
readLine = fmap read getLine

which you can then use as:

readLine :: IO Int
readLine :: IO Bool

and have the appropriate read instance provided by the compiler.
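To make the return-type dispatch concrete, here is a standalone sketch using the standard read; the instance is selected purely from the type annotation at the use site:

```haskell
main :: IO ()
main = do
  print (read "42" :: Int)    -- the Read Int instance is chosen
  print (read "True" :: Bool) -- the Read Bool instance is chosen
```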

You could try to emulate the Read class in C# with an interface, e.g.

public interface Read<T>
{
    T Read(string s);
}

but then the implementation of ReadLine would need a parameter for the Read<T> 'instance' you want:

public static T ReadLine<T>(Read<T> r)
{
    return r.Read(Console.ReadLine());
}
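This explicit-parameter style can be written in Haskell too: a typeclass is roughly a record of functions (a "dictionary") that the compiler passes implicitly, and the C# version simply passes it by hand. A minimal sketch, where ReadDict and intRead are illustrative names, not standard library:

```haskell
-- A 'Read' dictionary passed explicitly, mirroring the C# interface.
newtype ReadDict a = ReadDict { runRead :: String -> a }

readWith :: ReadDict a -> String -> a
readWith (ReadDict f) = f

-- the "instance" we must construct and pass around ourselves
intRead :: ReadDict Int
intRead = ReadDict read

main :: IO ()
main = print (readWith intRead "42" + 1)  -- prints 43
```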

The Eq typeclass requires both arguments have the same type, whereas your Eq interface does not since the first argument is implicitly the type of the receiver. You could for example have:

public class String : Eq<int>
{
    public bool Equal(int e) { return false; }
}

which you cannot represent using Eq. Interfaces hide the type of the receiver, and hence the type of one of the arguments, which can cause problems. Imagine you have a typeclass and an interface for an immutable heap data structure:

class Heap h where
  merge :: Ord a => h a -> h a -> h a

public interface Heap<T>
{
    Heap<T> Merge(Heap<T> other);
}

Merging two binary heaps can be done in O(n), merging two binomial heaps in O(log n), and for Fibonacci heaps it is O(1). Implementors of the Heap interface do not know the concrete type of the other heap, so they are forced either to use a sub-optimal algorithm or to use dynamic type checks to discover it. In contrast, types implementing the Heap typeclass do know the representation.
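As a sketch of the typeclass side, here is a pairing-style heap (illustrative, not a production implementation) whose merge pattern-matches the concrete representation of both arguments and so runs in O(1):

```haskell
-- A minimal pairing-style heap; merge sees the concrete
-- representation of *both* arguments.
data PHeap a = Empty | Node a [PHeap a]

class Heap h where
  merge :: Ord a => h a -> h a -> h a

instance Heap PHeap where
  merge Empty h = h
  merge h Empty = h
  merge h1@(Node x hs1) h2@(Node y hs2)
    | x <= y    = Node x (h2 : hs1)  -- O(1): just attach the other heap
    | otherwise = Node y (h1 : hs2)

root :: PHeap a -> Maybe a
root Empty      = Nothing
root (Node x _) = Just x

main :: IO ()
main = print (root (merge (Node 3 []) (Node (1 :: Int) [])))  -- Just 1
```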

A C# interface defines a set of methods that must be implemented. A Haskell type class defines a set of methods that must be implemented (and possibly a set of default implementations for some of the methods). So there are a lot of similarities there.

(I guess an important difference is that in C#, an interface is a type, whereas Haskell regards types and type classes as strictly separate things.)

The key difference is that in C#, when you define a type (i.e., write a class), you define exactly which interfaces it implements, and this is frozen for all time. In Haskell, you can add new interfaces to an existing type at any time.

For example, if I write a new SerializeToXml interface in C#, I cannot then make double or String implement that interface. But in Haskell, I can define my new SerializeToXml type class and then make all the standard, built-in types implement it (Bool, Double, Int, ...).
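A sketch of such retroactive instances; the class name and the XML format here are made up for illustration:

```haskell
-- A new class whose instances are added after the fact to
-- built-in types we did not define ourselves.
class SerializeToXml a where
  toXml :: a -> String

instance SerializeToXml Bool where
  toXml b = "<bool>" ++ show b ++ "</bool>"

instance SerializeToXml Double where
  toXml d = "<double>" ++ show d ++ "</double>"

main :: IO ()
main = putStrLn (toXml True ++ toXml (1.5 :: Double))
```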

The other thing is how polymorphism works in Haskell. In an OO language, you dispatch on the type of the object the method is being invoked on. In Haskell, the type a method is implemented for can appear anywhere in its type signature. Most notably, read dispatches on the return type you want, something you usually can't do at all in OO languages, not even with function overloading.

Also, in C# it's kind of hard to say "these two arguments must have the same type". Then again, OO is predicated on the Liskov substitution principle; two classes that both descend from Customer should be interchangeable, so why would you want to constrain two Customer objects to both be the same type of customer?

Come to think of it, OO languages do method lookup at run-time, whereas Haskell does method lookup at compile-time. This isn't immediately obvious, but Haskell polymorphism actually works more like C++ templates than like the usual OO polymorphism. (But that's not especially to do with type classes; it's just how Haskell does polymorphism as such.)

Others have already provided excellent answers.

I only want to add a practical example about their differences. Suppose we want to model a "vector space" typeclass/interface, which contains the common operations of 2D, 3D, etc. vectors.

In Haskell:

class Vector a where
   scale :: a -> Double -> a
   add :: a -> a -> a

data Vec2D = V2 Double Double
instance Vector Vec2D where
   scale (V2 x y) s = V2 (s*x) (s*y)
   add (V2 x1 y1) (V2 x2 y2) = V2 (x1+x2) (y1+y2)

-- the same for Vec3D

In C#, we might try the following wrong approach (I hope I get the syntax right)

interface IVector {
   IVector scale(double s);
   IVector add(IVector v);
}
class Vec2D : IVector {
   double x,y;
   // constructor omitted
   IVector scale(double s) { 
     return new Vec2D(s*x, s*y);
   }
   IVector add(IVector v) { 
     return new Vec2D(x+v.x, y+v.y);
   }
}

We have two issues here.

First, scale returns only an IVector , a supertype of the actual Vec2D . This is bad, because scaling does not preserve the type information.

Second, add is ill-typed! We can't use v.x, since v is an arbitrary IVector which might not have the x field.

Indeed, the interface itself is wrong: the add method promises that any vector must be summable with any other vector, so we must be able to sum 2D and 3D vectors, which is nonsense.

The usual solution is to switch to F-bounded quantification AKA CRTP or whatever it's being called these days:

interface IVector<T> {
   T scale(double s);
   T add(T v);
}
class Vec2D : IVector<Vec2D> {
   double x,y;
   // constructor omitted
   public Vec2D scale(double s) { 
     return new Vec2D(s*x, s*y);
   }
   public Vec2D add(Vec2D v) { 
     return new Vec2D(x+v.x, y+v.y);
   }
}

The first time a programmer meets this, they are usually puzzled by the seemingly "recursive" line Vec2D : IVector<Vec2D> . I surely was :) Then we get used to this and accept it as an idiomatic solution.

Type classes arguably have a nicer solution here.
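For comparison, here is the typeclass version as a self-contained program, showing that scale and add preserve the concrete Vec2D type with no "recursive" bound needed:

```haskell
class Vector a where
  scale :: a -> Double -> a
  add   :: a -> a -> a

data Vec2D = V2 Double Double deriving Show

instance Vector Vec2D where
  scale (V2 x y) s = V2 (s*x) (s*y)
  add (V2 x1 y1) (V2 x2 y2) = V2 (x1+x2) (y1+y2)

main :: IO ()
main = print (add (scale (V2 1 2) 3) (V2 0 1))  -- V2 3.0 7.0
```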

After studying this issue for a long time, I arrived at a simple way of explaining it. At least to me it is clear.

Imagine we have a method with a signature like this:

public static T[] Sort<T>(T[] array, IComparator<T> comparator) 
{
    ...
}

And an implementation of IComparator<T>:

public class IntegerComparator : IComparator<int> { }

Then we can write code like this:

var sortedIntegers = Sort(integers, new IntegerComparator());

We can improve this code. First we create a Dictionary<Type, IComparator> and fill it:

var comparators = new Dictionary<Type, IComparator>() 
{
    [typeof(int)]    = new IntegerComparator(),
    [typeof(string)] = new StringComparator() 
};

We redesign the IComparator interface so that we can write the code above:

public interface IComparator {}
public interface IComparator<T> : IComparator {}

And after this, let's redesign the Sort method signature:

public class SortController
{
    public T[] Sort<T>(T[] array, [Injectable]IComparator<T> comparator = null) 
    {
        ...
    }
}

As you can see, we are going to inject the IComparator<T> and write code like this:

new SortController().Sort<int>(integers, (IComparator<int>)comparators[typeof(int)]);

As you have already guessed, this code will not work for other types until we write an implementation and add it to the Dictionary<Type, IComparator>.

Note that we will only see the exception at runtime.

Now imagine that this work were done for us by the compiler during the build, and that it raised an error if it could not find a comparator for the corresponding type.

To help the compiler with this, we could add a new keyword instead of using the attribute. Our Sort method would look like this:

public static T[] Sort(T[] array, implicit IComparator<T> comparator) 
{
    ...
}

And the code of a concrete Comparator:

public class IntegerComparator : IComparator<int> implicit { }

Note that we use the keyword 'implicit'; with it the compiler can do the routine work we wrote above, and the error is raised at compile time:

var sortedIntegers = Sort(integers);

// this gives us compile-time error
// because we don't have implementation of IComparator<string> 
var sortedStrings = Sort(strings); 
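This hypothetical implicit machinery is essentially what Haskell's compiler already does with type classes: the instance dictionary is supplied at each call site, and a missing instance is a compile-time error. A minimal sketch:

```haskell
import Data.List (sortBy)

-- The class plays the role of IComparator<T>; the compiler passes
-- the instance (the "dictionary") implicitly at each call site.
class Comparator a where
  cmp :: a -> a -> Ordering

instance Comparator Int where
  cmp = compare

sort' :: Comparator a => [a] -> [a]
sort' = sortBy cmp

main :: IO ()
main = print (sort' [3, 1, 2 :: Int])  -- [1,2,3]
-- sort' "abc" would fail to compile: no Comparator Char instance
```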

This style of implementation is exactly what is called a Type Class.

I hope I have understood this correctly and explained it clearly.

PS: The code does not pretend to work.
