
Lambda Calculus - jacquesm
http://safalra.com/science/lambda-calculus/
======
Darmani
This page looks like it gives a very nice introduction to "programming" in the
pure-untyped lambda calculus. But reading this, one would have no idea that a
"meatier" side to lambda calculus existed. Just understanding the Church-
Rosser theorem and its consequences, for example, would give a far deeper
understanding than anything shown here. One thing that saddens me is that it
shows construction of the combinators used to compute partial-recursive
functions, but makes no discussion of the significance of this (the partial-
recursive functions are an important computational class, and this is
essentially a proof that they're lambda computable. Throw in a 6th type of
combinator, and you've just proved lambda calculus capable of emulating one of
the major models of computation).

It's certainly cool, but there's only so much to be gained from knowing that
you can implement the factorial function with
(λf.(λx.f(xx))(λx.f(xx)))(λfn.((λn.n(λu.λxy.y)(λxy.x))n)(λfx.fx)((λmnf.m(nf))n(f((λnfx.n(λgh.h(gf))(λu.x)(λu.u))n)))).

~~~
bitdiddle
I also think this article is lacking. Something could also be said about
domain[1] theory and categorical models, particularly the latter as they
relate to abstract machines, e.g. Curien's thesis work on combinators and
functional programming. I'm not saying one can explore these in depth, but at
least throw them out there as further reading, to help provide a roadmap for
the "meatier" side.

One could also cite a few programming constructs, e.g. (let (... and show how
Lisp is essentially syntactically sugared lambda calculus.
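To make that concrete, here is a sketch of the desugaring (the variable
names are my own):

```javascript
// Scheme:  (let ((x 5)) (* x x))  desugars to  ((lambda (x) (* x x)) 5)
// The same trick in JavaScript: "let x = 5 in x*x" becomes an
// immediately-applied function
const letExpr = (function (x) { return x * x; })(5);  // evaluates to 25
```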

I should also add Carl Gunter's excellent "Semantics of Programming Languages:
Structures and Techniques", as a good intro to this area.

[1] <http://en.wikipedia.org/wiki/Domain_theory>

------
jacquesm
I posted this because I'm trying to get my head wrapped around this subject
and it is one of the most concise and clear explanations that I've found so
far.

I want to get a more complete understanding of functional programming, and
that seems to start with understanding the lambda calculus.

~~~
camccann
Understanding modern functional programming via pure lambda calculus is
somewhat like understanding C++ via raw assembly. Which is to say, you'll have
a great low-level understanding to build on but a long, long road ahead.

It _is_ a lot of fun, though. I once spent a delightful afternoon puzzling out
how to subtract Church numerals knowing only that it was possible to do so,
having previously worked out the far simpler addition and multiplication
functions. Highly recommended!
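For anyone who wants to try the same puzzle, here is a sketch of the
standard Church-numeral encodings, transcribed into JavaScript. All names
are mine; `PRED` is the usual Kleene encoding that subtraction is built
from:

```javascript
// A Church numeral n is a function that applies f to x, n times
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const ADD  = m => n => f => x => m(f)(n(f)(x));
const MUL  = m => n => f => m(n(f));
// Predecessor -- the famously tricky one:
const PRED = n => f => x => n(g => h => h(g(f)))(u => x)(u => u);
const SUB  = m => n => n(PRED)(m);     // m - n, floored at zero
const toInt = n => n(k => k + 1)(0);   // decode for inspection

const TWO   = SUCC(SUCC(ZERO));
const THREE = SUCC(TWO);
toInt(ADD(TWO)(THREE));   // 5
toInt(SUB(THREE)(TWO));   // 1
```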

~~~
jacquesm
I figure you can't really say you understand a programming language until you
can re-implement it from the ground up.

So that's my intermediate goal.

Spot on about learning C/C++ through the assembler route, by the way; if
there is one excellent way to really grok those, it is via the -S flag.

~~~
fogus
Implementing a language is complicated enough that by doing so you fully learn
the implemented language AND the language used for implementation. Win-win!

------
clemesha
This page: <http://safalra.com/science/lambda-calculus/recursion/> contains
exactly the phrase "Y combinator", which I found exciting. Still trying to
understand many other bits of the article.

~~~
jacquesm
Recursion in pure functional programming has me stumped. I can't see how in
practice it would ever 'end' the recursion, in other words, if I try to
envision the recursion I see an endless loop in there.

In practice something needs to decide to end the recursion.

Can somebody shed some light on this ?

~~~
procrastitron
The Y combinator only works in languages with either a lazy or a call-by-name
evaluation strategy. If you run it in a language with an eager evaluation
strategy then it actually would result in an infinite loop. For those
languages you would have to use the Z combinator or an equivalent. The
Z combinator is just the Y combinator with the argument wrapped in a thunk to
prevent premature evaluation.
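A sketch of the difference in JavaScript, which is eager (the factorial is
just for illustration):

```javascript
// Y = λf.(λx.f (x x)) (λx.f (x x)) diverges under eager evaluation,
// because x(x) is evaluated before f ever gets a chance to stop.
// Z eta-expands the self-application into a thunk-like function:
const Z = f => (x => f(v => x(x)(v)))
              (x => f(v => x(x)(v)));

const fact = Z(self => n => n <= 0 ? 1 : n * self(n - 1));
fact(5)  // 120
```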

~~~
jacquesm
Apologies for ignorant questions, but I don't understand how lazy or call-by-
name evaluation would solve that.

After all lazy is just another way of saying 'deferred', and call-by-name is a
way to identify what it is that you are calling, in either case you'd sooner
or later have to do the evaluation and then you're back to that loop again.

So, what am I missing ?

~~~
procrastitron
Lazy doesn't just mean deferred; an unnecessary computation might never be
performed at all. Take for example a function where the recursive argument is
never called:

sample = (\this->(\x->x))

Then, (Y sample) can reduce as follows:

1: (Y sample)

2: (sample (Y sample))

3: ((\this->(\x->x)) (Y sample))

4: (\x->x)

Now, if your evaluation strategy required evaluating arguments to final form
before applying an operator to them, then line 2 would expand forever:

2: (sample (Y sample))

3: (sample (sample (Y sample)))

4: (sample (sample (sample (Y sample))))

... And so on.

However, neither call-by-need (lazy) nor call-by-name evaluation strategies
require evaluating an argument before applying some operator to it. Thus, in
languages that support either of those evaluation strategies, the operator can
be evaluated to the point where it throws away the argument, and the argument
need never be evaluated.

This is a contrived example, but real-world recursion works for the same
reason; at some point in the evaluation of the function, it no longer needs to
be applied recursively, and once it reaches that base case, the recursive
function can be discarded.
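One way to see this from an eager language is to simulate call-by-name by
passing the recursive argument as a thunk; then the `sample` example above
really does discard the recursion. A sketch in JavaScript (names mine):

```javascript
// Call-by-name simulated with thunks: `self` arrives unevaluated
const Y = f => (x => f(() => x(x)))
              (x => f(() => x(x)));

const sample = self => x => x;   // never forces `self`
Y(sample)(42);                   // 42 -- the recursion is thrown away

// Real recursion: self() is only forced on the non-base branch
const fact = Y(self => n => n === 0 ? 1 : n * self()(n - 1));
fact(3);                         // 6
```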

~~~
jacquesm
Thank you!

Ok, I think I see a glimmer of understanding here, it doesn't quite 'click'
yet but it is a lot clearer now.

I get the 'never needed' bit, that's like in many other languages where you
say for instance

    
    
    if x and y then

If x = false, then y is never evaluated. Not quite the same, but it's a case
where you can optimize out the evaluation of something because its result
will be discarded anyway.

The last part of your explanation still has me going around in circles about
what makes the decision to discard; it can't be the function itself, so it
has to be some criterion outside the function causing that. For instance,
maybe if work is no longer done, or if some arbitrary precision level has
been reached.

Interesting stuff this!

In 'pure' functional programming you'd not have an 'if' statement to make the
decision with, or is that allowed to be used to end recursion ? I just can't
see that working using functions alone.

~~~
btilly
Why wouldn't a pure functional language allow you to have an if statement?
Haskell certainly has one!

The key is that given the same input you need to produce the same output every
time. Which is great as long as you're not doing I/O. (Haskell addresses that
with monads - even though it looks like you're giving the same input, you're
really giving a different input each time!)

~~~
jacquesm
Ok, so that dilutes the 'pure' of the functional part. I know that in practice
functional programs have an if to force termination, but the lambda calculus
deals with abstracts. Anything evaluating lambda expressions then has to deal
with that or you'd have to expand the syntax of the calculus to contain a
termination function which 'magically' decides that enough is enough.

According to the definition all functions are unary functions, an if would
have to be a binary function wouldn't it ?

~~~
btilly
With currying, you can easily make functions of n variables out of functions
of one variable. To avoid syntactic confusion, here is a comparison in
JavaScript

    
    
      // Call this with foo(x, y)
      function foo (x, y) {
          return x*y;
      }
    
      // Call this with foo(x)(y)
      function foo (x) { return function (y) {
          return x*y;
      }}
    

The key thing to notice is that you can get from one form to the other with a
straight text substitution. So restricting yourself to functions of one
variable involves no loss of generality.

Going back to the lambda calculus, you can do Boolean logic there as well. See
<http://safalra.com/science/lambda-calculus/boolean-logic/> for details. In
fact I believe that the if construct in Haskell is just syntactic sugar around
that. So it isn't that there is a decision made within the function, but
rather that you call a function which turns out to be one of two different
functions.
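Concretely, here is a sketch in JavaScript of the Church booleans that page
describes. In an eager language the two branches have to be thunked, or both
would be evaluated before the choice is made:

```javascript
// TRUE selects its first argument, FALSE its second
const TRUE  = a => b => a;
const FALSE = a => b => b;
// "if c then t else e" is just c applied to the two branches;
// the thunks keep the untaken branch from running
const IF = c => t => e => c(t)(e)();

IF(TRUE)(() => "then branch")(() => "else branch");   // "then branch"
```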

~~~
jacquesm
Ah, right, yes, that's true, I forgot about the currying bit (I read up on it
earlier, then promptly forgot it).

I'm coming at this from a coder's perspective, not the mathematical side of
it, so the code example is much appreciated; that makes it much easier to
understand.

The function 'foo' returns a function that has already 'bound' 'x' to whatever
value you put in, the 'scope' then allows you to fish out the right 'x' when
you're in the inner function doing the multiplication.

So, does that mean that as far as the inner function is concerned 'x' is
'free' ?

So far when reading articles about the lambda calculus it seemed like the
'free' variables were somehow magically present in the scope of the function
without a hint about where they came from.

~~~
btilly
I could easily be wrong, but here is my understanding. Consider this snippet
of JavaScript:

    
    
      function foo (x) { return function (y) {
          return x*y;
      }}
    
      bar = foo(5);
    

Within bar, x is bound and y is free.

Does that make sense to you?

~~~
jacquesm
Ok, so now 'bar' contains a function where the 'x' is already bound to the
'5', but the 'y' is still free because it has not yet been 'bound' to any
parameter, is that what you mean ?

bar contains "function(y) { return 5*y; }"

The 'x' has been replaced by the '5' because of the function call. So
'binding' effectively pins down a function parameter and allows you to replace
its instances in 'named' form with its value to reduce the expression ?
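In code, a sketch of that substitution:

```javascript
function foo (x) { return function (y) {
    return x*y;
}}

const bar = foo(5);   // x is now pinned to 5 inside bar
bar(3);               // 15 -- effectively (function (y) { return 5*y; })(3)
```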

~~~
btilly
That is the way that I understand it. However you should be warned that I have
never made a detailed study of the lambda calculus, and so might have basic
misconceptions about it.

