60

What's the difference between recursion and corecursion?

On Wikipedia, there is little information and no clear code explaining these terms.

What are some very simple examples explaining these terms?

How is corecursion the dual of recursion?

Are there any classic corecursive algorithms?

maple_shaft
user167908
  • See the answer to SO http://stackoverflow.com/questions/10138735/whats-the-difference-between-recursion-and-corecursion (sorry, couldn't stop myself) – High Performance Mark Apr 13 '12 at 10:02
  • @HighPerformanceMark, it doesn't explain what corecursion is; we need another question. – Abyx Apr 13 '12 at 10:03
  • But seriously, what's wrong with the Wikipedia explanation of these terms? – High Performance Mark Apr 13 '12 at 10:11
  • @HighPerformanceMark Where on Wikipedia is the term corecursion explained correctly? – Apr 13 '12 at 10:12
  • And nobody has answered about classic corecursion algorithms (not on Wikipedia either), or a binary-tree representation of corecursion, or whether tail corecursion exists. – Apr 13 '12 at 10:14
  • The corecursion explanation on Wikipedia is awful. I doubt it makes sense to anyone who doesn't already know what corecursion is. – Marcin Apr 13 '12 at 10:45
  • Where is it clear on Wikipedia? Only ready-made Haskell/Lisp code, which is not clear for beginners! Not one simple imperative example! And there is no explanation of its duality with recursion, only the words "it's dual" and that's all! Even in the talk section there was a question about the duality! – user167908 Apr 13 '12 at 10:59
  • @user1131997: At http://en.wikipedia.org/wiki/Corecursion there is also a Python example. – Giorgio Apr 13 '12 at 11:28
  • @High Performance Mark: I clicked on the link three times thinking there was a mistake before I understood the pun. LOL – Giorgio Apr 13 '12 at 17:44
  • @HighPerformanceMark there's no base case so I ended up on a STACK OVERFLOW by following your link lol – Daniel Jan 27 '21 at 23:30

3 Answers

29

There are a number of good ways of looking at this. The easiest for me is to think about the relation between "inductive" and "coinductive" definitions.

An inductive definition of a set goes like this.

The set "Nat" is defined as the smallest set such that "Zero" is in Nat, and if n is in Nat "Succ n" is in Nat.

This corresponds to the following OCaml:

type nat = Zero | Succ of nat

One thing to note about this definition is that a number

omega = Succ(omega)

is NOT a member of this set. Why? Assume that it was; now consider the set N that has all the same elements as Nat except omega. Clearly Zero is in N, and if y is in N then Succ(y) is in N (if y is not omega, then Succ(y) is not omega either, since Succ(y) = omega = Succ(omega) would force y = omega). But N is smaller than Nat, which contradicts Nat being the smallest such set. So omega is not in Nat.
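As an aside (a sketch of mine, not part of the original argument): in a lazy language like Haskell, the analogous declaration does admit such a value, a first hint that Haskell's data types actually behave like the codata discussed further down.

data Nat = Zero | Succ Nat

-- in Haskell this typechecks and omega is a legal (infinite) value,
-- because Haskell is lazy; in the inductive set above it is excluded
omega :: Nat
omega = Succ omega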

Or, perhaps more useful for a computer scientist:

Given some set "a", the set "List of a" is defined as the smallest set such that "Nil" is in List of a, and if xs is in List of a and x is in a, then "Cons x xs" is in List of a.

This corresponds to something like

type 'a list = Nil | Cons of 'a * 'a list

The operative word here is "smallest". If we didn't say "smallest" we would not have any way of telling if the set Nat contained a banana!

Again,

zeros = Cons(Zero,zeros)

is not a valid definition for a list of nats, just like omega was not a valid Nat.

Defining data inductively like this allows us to define functions that work on it using recursion:

let rec plus a b = match a with
                   | Zero    -> b
                   | Succ(c) -> let r = plus c b in Succ(r)

We can then prove facts about this, like "plus a Zero = a", using induction (specifically, structural induction).

Our proof proceeds by structural induction on a.
For the base case, let a be Zero. Then

plus Zero Zero = match Zero with
                 | Zero    -> Zero
                 | Succ(c) -> let r = plus c Zero in Succ(r)

so we know plus Zero Zero = Zero.

For the inductive case, let a be a nat and assume the inductive hypothesis that plus a Zero = a. We now show that plus (Succ(a)) Zero = Succ(a):

plus (Succ(a)) Zero = match Succ(a) with
                      | Zero    -> Zero
                      | Succ(c) -> let r = plus c Zero in Succ(r)
                    = let r = plus a Zero in Succ(r)
                    = let r = a in Succ(r)    (* by the inductive hypothesis *)
                    = Succ(a)

Thus, by induction, plus a Zero = a for all a in nat.

We can of course prove more interesting things, but this is the general idea.

So far we have dealt with inductively defined data, which we got by taking the smallest set. Now we want to work with coinductively defined codata, which we get by taking the largest set.

So

Let a be a set. The set "Stream of a" is defined as the largest set such that each x in Stream of a consists of an ordered pair (head, tail), where head is in a and tail is in Stream of a.

In Haskell we would express this as

data Stream a = Stream a (Stream a) --"data" not "newtype"

Actually, in Haskell we normally use the built-in lists, which can be either an ordered pair or an empty list.

data [a] = [] | a:[a]

Banana is not a member of this type either, since it is neither an ordered pair nor the empty list. But now we can say

ones = 1:ones

and this is a perfectly valid definition. What's more, we can perform co-recursion on this co-data. Actually, it is possible for a function to be both co-recursive and recursive. While recursion is defined by the function having a domain consisting of data, co-recursion just means it has a co-domain (also called the range) that is co-data. Primitive recursion means always "calling oneself" on smaller data until reaching some smallest data. Primitive co-recursion always "calls itself" on data greater than or equal to what you had before.

ones = 1:ones

is primitively co-recursive. The function map (kind of like "foreach" in imperative languages) is both primitively recursive (sort of) and primitively co-recursive:

map :: (a -> b) -> [a] -> [b]
map f []     = []
map f (x:xs) = (f x):map f xs

The same goes for the function zipWith, which takes a function and a pair of lists and combines them using that function:

zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith f (a:as) (b:bs) = (f a b):zipWith f as bs
zipWith _ _ _           = [] --base case

The classic example in functional languages is the Fibonacci sequence:

fib 0 = 0
fib 1 = 1
fib n = (fib (n-1)) + (fib (n-2))

which is primitively recursive, but can be expressed more elegantly as an infinite list:

fibs = 0:1:zipWith (+) fibs (tail fibs)
fib' n = fibs !! n -- (!!) is Haskell syntax for indexing

An interesting example of induction/coinduction is proving that these two definitions compute the same thing. This is left as an exercise for the reader.
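As a quick empirical check (a sketch of mine, not a proof; the definitions above are restated so it runs standalone):

fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib' :: Int -> Integer
fib' n = fibs !! n

-- compare the two definitions on the first 20 indices;
-- True here is evidence, not a proof
checkFib :: Bool
checkFib = map fib [0..19] == map fib' [0..19]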

Philip JF
  • @user1131997 Thanks. I'm planning on translating some of the code into Java, stay tuned – Philip JF Apr 14 '12 at 21:29
  • @PhilipJF: I feel stupid, but I don't see why "...Clearly Zero is in N, and if y is in N, Succ(y) is in N...". What happens if y is something satisfying Succ(y) = omega? (Because you do not use any property of Zero and Succ, I can replace Succ = square root and Zero = 2.) – Ta Thanh Dinh Jun 13 '14 at 14:20
  • ... and then I see omega = 1. – Ta Thanh Dinh Jun 13 '14 at 14:26
  • The goal is to show omega is not in nat. We do this by contradiction. If omega were in nat, then the set N = nat - {omega} would satisfy the laws, because nat satisfies the laws: if y is in N, then 1. y is not omega and 2. y is in nat. From 2 we know Succ(y) is in nat, and by 1, since y is not omega, Succ(y) is not omega. Thus Succ(y) is in N. N also includes Zero. But N is smaller than nat. This is a contradiction. Thus, nat does not include omega. – Philip JF Jun 14 '14 at 04:03
  • This is all a bit of a lie since OCaml has [value recursion](http://caml.inria.fr/pub/docs/manual-ocaml/extn.html#s%3aletrecvalues). I really should have used SML, which is the only "mainstream"-ish language that supports inductive reasoning. – Philip JF Jun 14 '14 at 04:06
  • @PhilipJF: thanks a lot, I really appreciate your response. But I still do not see why "...and y is not omega then Succ(y) is not omega..."? I understand that if y is omega then Succ(y) is omega, but that does not give "if y is not omega then Succ(y) is not omega"; it gives "if Succ(y) is not omega then y is not omega" instead. – Ta Thanh Dinh Jun 14 '14 at 21:55
  • @tathanhdinh Because omega is defined to be Succ(omega) so if Succ(y) = omega then Succ(y) = Succ(omega) and y = omega. The last step is admittedly a bit sketchy without fleshing out your foundations and formalizing the definition. – Philip JF Jun 15 '14 at 00:42
  • It doesn't really matter whether Nat contains bananas or not: [we don't know and we don't care](http://research.microsoft.com/en-us/um/people/lamport/tla/book.html). What we *do* care about is the natural numbers it contains. This is the axiomatic viewpoint. – 0 _ Apr 24 '16 at 22:45
  • @IoannisFilippidis we do care, at least in set theory, in that we want a specific induction principle to hold that only mentions Succ and Zero. We care how nats are used, not just how they are made. Of course, "contains" is a fuzzy concept--for example, the category of pointed sets has a "natural number object" which is a set with an element which is neither zero nor a successor. But in the usual settings the notion of "smallest" is indeed what you want. Even for old school PA if you want to be computable you only have standard models (Tennenbaum's theorem) and no bananas – Philip JF Apr 26 '16 at 05:31
  • "co-domain (also called the range)" Nitpick: the co-domain of `\x -> if x then 'T' else 'F'` is all possible `Char`s, but the range is just `{'T', 'F'}`. – Joseph Sible-Reinstate Monica Sep 29 '22 at 06:44
13

Basically, corecursion is recursion in accumulator style, building its result on the way forward from the starting case, whereas regular recursion builds its result on the way back from the base case.

(Speaking Haskell now.) That's why foldr (with a strict combining function) expresses recursion, while foldl' (with a strict combining function), scanl, until, iterate, unfoldr, etc. express corecursion. Corecursion is everywhere. foldr with a non-strict combining function expresses tail recursion modulo cons.

And Haskell's guarded recursion is just like tail recursion modulo cons.
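A small illustration of the distinction (my sketch, using summing and doubling as the workloads):

import Data.List (foldl')

-- recursion: with a strict combining function, foldr must reach the
-- base case before any addition can complete
sumR :: [Integer] -> Integer
sumR = foldr (+) 0

-- corecursion: foldl' builds its result in an accumulator on the way
-- forward from the starting case
sumC :: [Integer] -> Integer
sumC = foldl' (+) 0

-- guarded recursion (tail recursion modulo cons): each step emits a
-- cons cell immediately, so the output can be consumed lazily
doubles :: [Integer] -> [Integer]
doubles = foldr (\x r -> 2*x : r) []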

This is recursion:

fib n | n==0 = 0
      | n==1 = 1
      | n>1  = fib (n-1) + fib (n-2)

fib n = snd $ g n
  where
    g n | n==0 = (1,0)
        | n>0  = let { (b,a) = g (n-1) } in (b+a,b)

fib n = snd $ foldr (\_ (b,a) -> (b+a,b)) (1,0) [n,n-1..1]

(read $ as "of"). This is corecursion:

fib n = g n (0,1) 0 where
  g n (a,b) i | i==n      = a 
              | otherwise = g n (b,a+b) (i+1)

fib n = fst.snd $ until ((==n).fst) (\(i,(a,b)) -> (i+1,(b,a+b))) (0,(0,1))
      = fst $ foldl (\(a,b) _ -> (b,a+b)) (0,1) [1..n]
      = fst $ last $ scanl (\(a,b) _ -> (b,a+b)) (0,1) [1..n]
      = fst (fibs!!n)  where  fibs = scanl (\(a,b) _ -> (b,a+b)) (0,1) [1..]
      = fst (fibs!!n)  where  fibs = iterate (\(a,b) -> (b,a+b)) (0,1)
      = (fibs!!n)  where  fibs = unfoldr (\(a,b) -> Just (a, (b,a+b))) (0,1)
      = (fibs!!n)  where  fibs = 0:1:map (\(a,b)->a+b) (zip fibs $ tail fibs)
      = (fibs!!n)  where  fibs = 0:1:zipWith (+) fibs (tail fibs)
      = (fibs!!n)  where  fibs = 0:scanl (+) 1 fibs
      = .....

Folds: http://en.wikipedia.org/wiki/Fold_(higher-order_function)

Will Ness
5

Check this post on Vitomir Kovanovic's blog. I found it to the point:

Lazy evaluation is one very nice feature found in programming languages with functional programming capabilities, such as Lisp, Haskell, Python, etc. It means that the evaluation of a variable's value is delayed until the actual usage of that variable.

It means that, for example, if you wanted to create a list of a million elements with something like (defn x (range 1000000)), it is not actually created; it is just specified. When you really use that variable for the first time, for instance when you want the 10th element of that list, the interpreter creates only the first 10 elements. So the first run of (take 10 x) actually creates these elements, and all subsequent calls to the same function work with already existing elements.

This is very useful because you can create infinite lists without out-of-memory errors. The list will be only as large as what you have requested. Of course, if your program works with large data collections, it can hit the memory limit in the usage of these infinite lists.
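In Haskell, the language used in the other answers, the behaviour described above can be sketched like this (my illustration, not from the quoted blog):

-- an infinite list is merely specified, not built
naturals :: [Integer]
naturals = [1..]

-- forcing ten elements materialises only those ten:
-- firstTen == [1,2,3,4,5,6,7,8,9,10]
firstTen :: [Integer]
firstTen = take 10 naturals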

On the other hand, corecursion is dual to recursion. What does this mean? Well, just like recursive functions, which are expressed in terms of themselves, corecursive variables are expressed in terms of themselves.

This is best shown with an example.

Let's say we want a list of all prime numbers...
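The quote is truncated here, but the classic corecursive definition it is heading toward looks something like the following trial-division sieve (my sketch in Haskell, not necessarily the blog's exact code):

-- an infinite, corecursively defined list of primes: each step emits
-- one prime and corecursively sieves the rest of the infinite input,
-- so the (p:xs) pattern always matches
primes :: [Integer]
primes = sieve [2..]
  where
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- usage: take 10 primes == [2,3,5,7,11,13,17,19,23,29]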

Max