

Y combinator in Scheme - momo-reina
http://jao.io/blog/2014/08/06/spj-s-y-combinator-in-scheme

======
jarcane
I like this explanation, because it focuses in its latter portion on the one
thing that actually made Y click for me: expanding it.

I think that's why it's so short: the Y combinator makes perfect sense if you
explain it in the context of expansion rather than recursion. The best
demonstration of how combinators work isn't a bunch of algebraic derivations
and explanations relative to tail recursion; it's seeing how they expand to
resolve the given function. For me, it was stepping through a factorial
function in Racket.

In the end I wound up including it in the Heresy standard library.
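
The expansion view can be sketched like so (in Haskell rather than Racket, and using the lazy fixed point `fix f = f (fix f)` rather than Y proper; the names here are mine, not from Heresy):

```haskell
-- The lazy fixed point: each expansion of `fix f` unfolds one layer of f.
fix :: (a -> a) -> a
fix f = f (fix f)

-- Factorial without naming itself: the recursive call is the `rec` parameter.
fact :: Integer -> Integer
fact = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))

-- Expansion of fact 3:
--   fix f 3  =  f (fix f) 3        =  3 * fix f 2
--            =  3 * (2 * fix f 1)  =  3 * (2 * (1 * fix f 0))
--            =  3 * (2 * (1 * 1))  =  6
```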

~~~
viksit
That was exactly my problem too - I've tried in a complementary post to talk
about some practical applications of the YCombinator.

[http://www.viksit.com/tags/clojure/practical-applications-y-combinator-clojure](http://www.viksit.com/tags/clojure/practical-applications-y-combinator-clojure)

~~~
jarcane
That's pretty nice. I ultimately only included Y in Heresy as an academic
exercise (I see the main purpose of it, if any, as being a playground for
learning FP and Lisp concepts); on Racket at least, even with its very good
tail-call optimization, it wasn't really performant enough by itself to be
useful.

------
klibertp
I recently found this talk:
[http://www.confreaks.com/videos/1287-rubyconf2012-y-not-adventures-in-functional-programming](http://www.confreaks.com/videos/1287-rubyconf2012-y-not-adventures-in-functional-programming)
which I think is good introductory material: it not only derives Y, but
starts by introducing all the tools needed for the derivation and explains
the rationale for each tool at every step. Nice, though rather long.

Personally I found "The Little Schemer" explanation the first that made it
click for me. Its unique style of making you do the exercises even if you
don't do them (it's hard to explain, just go and read the book) gives you an
impression that you really understand what is happening.

------
dionidium
There are some good summaries here, too:

[http://stackoverflow.com/questions/93526/what-is-a-y-combinator/6714066](http://stackoverflow.com/questions/93526/what-is-a-y-combinator/6714066)

Full disclosure: I ended up answering there as well, but I won't say which one
is mine.

------
stevenspasbo
I really liked this article:
[http://mvanier.livejournal.com/2897.html](http://mvanier.livejournal.com/2897.html)

I have been working through SICP and I ended up taking a weekend break from it
to really understand everything in the article.

------
pash
To understand the theory of recursive functions you must understand fixed
points, but I think there's a simpler way to understand the magic of the Y
combinator.

Here's how it works. Say we have a recursive function like _map_ :

    
    
        map f [] = []  -- We'll ignore the base case since there's no recursion here
        map f (x:xs) = f x : map f xs  -- This is the interesting part
    

We want to write this more primitively, without explicit recursion. How can we
do that? Well, if it's possible at all, we obviously need to get rid of the
_map_ on the right hand side of the second case.

How? Well, let's abstract over the recursive call, replacing it with a
function that we'll add as parameter to our definition:

    
    
        map' _ f [] = []  -- Still boring
        map' g f (x:xs) = f x : g g f xs  -- Pay attention for later!
    

(Read _map'_ as "map-prime", i.e., a variant of the definition of _map_.)

All we've done so far is replace _map'_ where it should be on the right-hand
side of its own definition with _g_, and then add _g_ as a parameter of the
function. We end up with a double _g_ on the right-hand side because of that
added parameter.

OK, now what? Well, we need somehow to make _g_ be _map'_, the thing we're
defining, which means we need somehow to pass our definition of _map'_ to
itself, so that (in the second case) we'd end up with something that evaluates
to:

    
    
        map' map' f (x:xs) = f x : map' map' f xs
    

Then the right-hand side would obviously be right; it's what we were shooting
for at the start (assuming we can get _map' map'_ to equal _map_, which is
what we're trying to do). The left-hand side looks a little strange [0], but
it's just saying that we need the first parameter of our definition to be the
thing we're defining.

So how do we make this happen? Well, what is _map' map' f (x:xs)_? It's _map'_
applied to itself, then to some other arguments we need. So the key, it seems,
is to figure out how to apply _map'_ to itself. Well, in the lambda calculus
that's pretty easy [1]:

    
    
        why f x y = f f x y
    

That is, _why_ is just

    
    
        lambda f. lambda x. lambda y. ((f f) x) y
    

OK, so _why_ takes a function and applies it to itself, and then applies the
result to the arguments _x_ and _y_. Great. That's what we needed. Now we can
take our definition of _map'_ (the one marked _Pay attention for later!_
above) and do this:

    
    
        map = why map'
    

And I've re-used the name _map_ there because we can easily show that this
definition is equivalent to our original definition of _map_ by showing that
it reduces to the same expression:

    
    
        map f (x:xs) = why map' f (x:xs)
          == (by def. of `why`)
        map' map' f (x:xs)
          == (by def. of `map'`)
        f x : map' map' f xs
    

Which, if you keep expanding the expression _map' map' f xs_, you will see is
indeed equal to the definition of _map_.

Voilà. So that wasn't too hard. It's basically two steps: (1) abstract over
the recursive call, then (2) figure out how to pass the function you're
defining to itself. If you look at the Y combinator, you'll see that that's
exactly what's going on there.
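
If you want to actually run this derivation, here's a sketch in Haskell (the names `Self`, `mapP`, and `myMap` are mine, not from the post). Since the self-application _g g_ is ill-typed directly (see note 0 below), the function-passed-to-itself is wrapped in a recursive newtype:

```haskell
-- Direct self-application is ill-typed in Haskell, so wrap the
-- "function passed to itself" in a newtype.
newtype Self a = Self (Self a -> a)

-- map' from the derivation: the recursive call is abstracted into g,
-- which is unwrapped and applied to itself on the right-hand side.
mapP :: Self ((a -> b) -> [a] -> [b]) -> (a -> b) -> [a] -> [b]
mapP _ _ [] = []
mapP g@(Self h) f (x:xs) = f x : h g f xs

-- Tie the knot: pass map' to itself, as in `map = why map'`.
myMap :: (a -> b) -> [a] -> [b]
myMap = mapP (Self mapP)
```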

If this didn't make a lot of sense to you, it's probably because I wrote it on
my iPad at 4:30 in the morning. It really is just those two steps: abstract
over the recursive call, then self-apply. ...

To get deeper into the recursive mindset (if you're not there yet) and to
build up deliberately to the Y combinator, take a look at one of my favorite
books on functional programming, _The Little Schemer_ [2]. (And watch out for
the jelly stains!)

\------

0\. And it's ill-typed, but never mind that. It's not possible to define the Y
combinator (or any other fixed-point combinator) in the simply typed lambda
calculus, so just imagine that my Haskell-style syntax is untyped, like the
basic lambda calculus.

1\. Although, again, there's no way to give _why_ a good type in the simply
typed lambda calculus.

2\. [http://www.amazon.com/The-Little-Schemer-4th-Edition/dp/0262560992](http://www.amazon.com/The-Little-Schemer-4th-Edition/dp/0262560992)

~~~
amelius
Regarding your note "0", could anybody here explain how the Y combinator can
be properly typed in a more advanced type system? Is there a (practical)
language that would allow it to be typed?

Also, would the Y combinator be a useful abstraction-aid in a compiler (for a
typed language), or does it exist merely as a curiosity?

~~~
klibertp
Take a look at these examples:

    
    
        http://rosettacode.org/wiki/Y_combinator#Haskell
        http://rosettacode.org/wiki/Y_combinator#OCaml
        http://rosettacode.org/wiki/Y_combinator#Scala
        http://rosettacode.org/wiki/Y_combinator#Standard_ML
        http://rosettacode.org/wiki/Y_combinator#Swift
    

The Mu/Roll trick seems to be the default way of typing Y, while some
languages (like OCaml) provide other type-system features, such as recursive
types or polymorphic variants.
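
As a concrete sketch of the Mu/Roll trick in Haskell (the names `Mu`, `Roll`, `unroll`, and `y` follow the style of those Rosetta Code examples; this relies on laziness):

```haskell
-- A recursive newtype makes self-application typeable, which the
-- simply typed lambda calculus forbids.
newtype Mu a = Roll { unroll :: Mu a -> a }

-- A fixed-point combinator typed via Mu: the untyped
-- (\x -> f (x x)) (\x -> f (x x)), with Roll/unroll marking
-- the points where the recursive type is folded and unfolded.
y :: (a -> a) -> a
y f = (\x -> f (unroll x x)) (Roll (\x -> f (unroll x x)))

-- Factorial defined without explicit recursion.
fact :: Integer -> Integer
fact = y (\rec n -> if n <= 0 then 1 else n * rec (n - 1))
```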

