
Is Last a free monoid?

Free monoids are often regarded as "list monoids". Yet I am interested in other possible structures which might give us free monoids.

Firstly, let us go over the definition of free monoids. I have never quite understood how it is possible to define a free monoid as a structure which abides by the monoid laws and nothing else. How do we prove that something abides by no rules but those stated above? Or is this just an intuition?

Anyway, we are going to speak of functors. If some monoid is free, we get it from a free functor. It is obvious that a list comes in quite handy here:

free :: Set -> Mon
free a = ([a], (++), [])

Yet one might come up with several others. For example, here is one for Last from Data.Monoid:

freeLast :: Set -> Mon
freeLast a = (Last a, (<>) :: Last a -> Last a -> Last a, Last Nothing) 

So, does this functor make Last a free monoid? More generally, if there is a law-abiding instance for Monoid (T a), is T a free monoid?

Here's one way to understand a free monoid: If somebody gives you a value, how much can you deduce about how it was created? Consider an additive monoid of natural numbers. I give you a 7 and ask you how I got it. I could have added 4+3, or 3+4, or 2+5, etc. There are many possibilities. This information was lost. If, on the other hand, I give you a list [4, 3], you know it was created from the singletons [4] and [3]. Except that maybe there was a unit [] involved. Maybe it was [4]<>[3]<>[] or [4]<>[]<>[]<>[3]. But it definitely wasn't [3]<>[4].

With a longer list, [1, 2, 3], you have additional options: ([1]<>[2]) <> [3], or [1] <> ([2]<>[3]), plus all possible insertions of the empty list. So the information you lose follows the unit laws and associativity, but nothing else. A free monoid value remembers how it was created, modulo unit laws and associativity.
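The distinction above can be checked directly: a sum forgets the order of its parts, while a list only forgets units and association. A minimal sketch using the standard Sum wrapper (helper names are mine):

```haskell
import Data.Monoid (Sum (..))

-- The sum 7 forgets whether it came from 4+3 or 3+4...
sumsCollapse :: Bool
sumsCollapse = Sum 4 <> Sum 3 == Sum 3 <> (Sum 4 :: Sum Int)   -- True

-- ...but the list [4, 3] remembers the order of the generators:
listsRemember :: Bool
listsRemember = [4] <> [3] == [3] <> ([4] :: [Int])            -- False

-- Units and association remain invisible, as the unit and
-- associativity laws require:
unitsInvisible :: Bool
unitsInvisible = ([4] <> []) <> [3] == [4] <> ([] <> ([3] :: [Int]))  -- True
```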

For the sake of example, let's take the non-negative integers, i.e. 0, 1, 2, .... How many monoids can we make?

Define mempty = 0 and (<>) = (+). You can easily prove that this is a monoid.

Define mempty = 1 and (<>) = (*). Again, this is a monoid (prove it, it is easy).
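For reference, these two monoids ship with Data.Monoid as the Sum and Product newtype wrappers; a minimal sketch:

```haskell
import Data.Monoid (Product (..), Sum (..))

-- Additive monoid: mempty = 0, (<>) = (+)
additive :: Sum Int
additive = Sum 4 <> Sum 3 <> mempty              -- getSum = 7

-- Multiplicative monoid: mempty = 1, (<>) = (*)
multiplicative :: Product Int
multiplicative = Product 4 <> Product 3 <> mempty  -- getProduct = 12
```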

The two monoids defined above are called the additive and multiplicative monoids over the natural numbers. They differ in structure: for example, the element 0 in the multiplicative monoid behaves totally differently from any element of the additive monoid. Hence there is something intrinsic to the natural numbers that makes these monoids different (hold this assertion until the next paragraph).

There exists a third monoid we can create; let's call it the concatenation monoid.

Define mempty = no-action and (<>) = glue one integer beside the other.

As an example, 3 <> mempty = 3 and 3 <> 2 = 32. Notice that the fact that the elements are natural numbers is not relevant here. If instead of the naturals we take the rationals, or whatever symbols you like, the monoid would be exactly the same thing (* read footnote). Hence there is nothing intrinsic to the underlying set that makes this monoid different from the others. That's why the monoid is free: it doesn't depend on the arithmetic rules of the naturals, nor on any rule aside from the monoid ones.

And this is the only way to build a monoid freely, not depending on the inner rules of the underlying set. Of course, concatenation is expressed as lists in Haskell.
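A quick sketch of the 3 <> 2 = 32 example, with "gluing" rendered as concatenation of singleton lists (helper names are mine):

```haskell
-- Gluing 3 beside 2 becomes concatenating the singletons [3] and [2].
glued :: [Int]
glued = [3] <> mempty <> [2]        -- [3,2], i.e. "32" read as glued digits

-- The same monoid over arbitrary symbols: only the symbols change.
gluedSymbols :: String
gluedSymbols = "a" <> mempty <> "b"   -- "ab"
```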

  • Note: The only important bit is that they share the same number of elements. For example, the free monoid with 3 elements a, b and c would be any arbitrary concatenation of those three, but you can choose whatever symbols you like: 1, 2, 3 or α, β, γ, ... and the monoid would be the very same thing.

Here is another law that Last satisfies:

forall (t :: Type) (x, y :: t).
  Last (Just x) <> Last (Just y) === Last (Just y)

Since it satisfies an additional law, it cannot be the free monoid.
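This extra law is easy to observe directly, assuming the standard Last from Data.Monoid (helper names are mine):

```haskell
import Data.Monoid (Last (..))

-- The left operand is discarded, something no monoid law forces:
extraLaw :: Bool
extraLaw = Last (Just 1) <> Last (Just 2) == (Last (Just 2) :: Last Int)  -- True

-- The list monoid satisfies no such law; both operands survive:
listKeepsBoth :: Bool
listKeepsBoth = [1] <> [2] == ([2] :: [Int])   -- False
```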

Firstly, let us go over the definition of free monoids. I have never quite understood how it is possible to define a free monoid as a structure which abides by the monoid laws and nothing else. How do we prove that something abides by no rules but those stated above? Or is this just an intuition?

Let me illustrate the purpose of free monoids.

If I tell you there is a monoid with some elements a, b, c, what can you deduce from that?

  • We can find more elements of that monoid by writing expressions involving the generators a , b , c and the monoid operations (+) and 0 (aka. (<>) and mempty ). (cf. Definition 1, in the second half of this answer.)
  • We can use the monoid laws to prove that some expressions denote the same element: we can prove equations such as ((a + 0) + b) = (a + b) . (Definition 2.) In fact, equations we can prove with just that knowledge are equations which hold in any monoid, for any values a , b , c . (Theorem 1.)

What about equations we can't prove from just the monoid laws? For example, we can't prove (a + b) = (b + a). But we can't prove its negation either, (a + b) /= (b + a), if we only know the monoid laws. What does that mean? It turns out that the equation holds in some monoids (e.g., commutative monoids) but not in others: for example, pick a monoid where x + y = y for almost all x and y (this is the Last monoid in Haskell); if we choose distinct a and b, then (a + b) /= (b + a).

But that was just one example. What can we say in general about equations that we cannot prove from just the monoid laws? The free monoid offers a definitive answer, in fact, a universal counterexample: unprovable equations are false in the free monoid (generated by a , b , c ). In other words, we can prove an equation e = f using just the monoid laws if and only if it is true in the free monoid (emphasis on "if"). (Theorem 2.) This corresponds to the intuition that the free monoid "only abides by the monoid laws and nothing else".
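The counterexample can be made concrete in the free monoid of lists over three distinct generators (a small sketch; type and helper names are mine):

```haskell
data Gen = A | B | C deriving (Eq, Show)

-- The unprovable equation a + b ~ b + a is false in the free monoid:
commutes :: Bool
commutes = [A] <> [B] == [B] <> [A]                    -- False

-- A provable equation, (a + 0) + b ~ a + b, is of course true there:
provableHolds :: Bool
provableHolds = ([A] <> mempty) <> [B] == [A] <> [B]   -- True
```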

So, does this functor make Last a free monoid? More generally, if there is a law-abiding instance for Monoid (T a), is T a free monoid?

The Last monoid is not free because it makes more equations true than you can actually prove purely from the monoid laws. See the other answer:

forall (t :: Type) (x, y :: t).
  Last (Just x) <> Last (Just y) === Last (Just y)

Here's a sketch of how to formalize the above.

Definition 1. The set of monoidal expressions generated by (some atomic symbols) A , B , C is defined by the grammar:

e ::=
  | A | B | C   -- generators
  | e + e       -- binary operation (<>)
  | 0           -- identity (mempty)

Given any "suitable monoid", that is to say, a monoid (M, (+), 0) with some chosen elements a , b , c in M (which don't have to be distinct), an expression e denotes an element eval e in M .

Definition 2. An equation is a pair of expressions, written e ~ f . The set of provable equations is the smallest set of equations ("smallest" when ordered by inclusion) satisfying the following:

  1. It includes the monoid laws: (e + 0) ~ e , (0 + e) ~ e , ((e + f) + g) ~ (e + (f + g)) are provable.
  2. It is an equivalence relation (viewing a set of tuples as a relation): for example, for reflexivity, e ~ e is provable.
  3. It is a congruence relation: if e ~ f is provable then (g + e) ~ (g + f) and (e + g) ~ (f + g) are provable.

(The idea of that definition is that the assertion " e ~ f is provable" holds if and only if it can be deduced by "applying" those rules. "Smallest set" is a conventional method to formalize that.)

The definition of "provable equations" may seem arbitrary. Are those the right rules to define "provability"? Why these three rules in particular? Notably, the congruence rule may not be obvious in a first attempt at giving such a definition. This is the point of the following theorems, soundness and completeness. Add a (non-redundant) rule, and we lose soundness. Remove a rule, and we lose completeness.

Theorem 1. (Soundness) If e ~ f is provable, then eval e = eval f in any "suitable monoid" M .

Theorem 2. (Completeness) If e ~ f is not provable, then their denotations differ in F , eval e /= eval f , where F is the free monoid generated by A , B , C .

(Soundness is much easier to prove than completeness. Exercises for the reader.)

This completeness theorem is a characterization of the free monoid: any other monoid F which keeps the statement of the theorem true is isomorphic to the free monoid (technically, this requires both completeness and an assumption that the denotation function eval : Expr -> M is surjective). That is why we may say "the free monoid" instead of "the monoid of lists"; that practice is most accurate in contexts where the representation does not matter ("up to isomorphism").

In fact, completeness is trivial if you define "the free monoid" as the quotient of monoidal expressions by the equivalence relation " _ ~ _ is provable". The hard work actually resides in a separate proof, that this monoid is isomorphic to the monoid of lists.
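The quotient-by-provability view can be sketched by normalizing expressions into lists; two expressions are provably equal exactly when their normal forms coincide (the names Expr and norm are mine, mirroring Definition 1):

```haskell
data Gen = A | B | C deriving (Eq, Show)

data Expr
  = GenA | GenB | GenC
  | Expr :+: Expr
  | Zero

-- Normal form: interpret each generator as a singleton list.
-- e ~ f is provable iff norm e == norm f, which is one way to see
-- that the quotient of expressions is isomorphic to the list monoid.
norm :: Expr -> [Gen]
norm GenA      = [A]
norm GenB      = [B]
norm GenC      = [C]
norm (e :+: f) = norm e ++ norm f
norm Zero      = []
```

For example, norm ((GenA :+: Zero) :+: GenB) equals norm (GenA :+: (Zero :+: GenB)), as the unit and associativity laws demand, while norm (GenA :+: GenB) differs from norm (GenB :+: GenA).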
