
What is the contribution of lambda calculus to the theory of computation? - kocheez75
http://cstheory.stackexchange.com/questions/21705/what-is-the-contribution-of-lambda-calculus-to-the-field-of-theory-of-computatio
======
tikhonj
The λ-calculus is basically the MVP of programming languages. It allows
somebody designing a language feature or part of a type system to experiment
with that feature _in isolation_. Fast iteration for programming language
designers.

It's also great as the core of a programming language. If you can express a
feature just in terms of the λ-calculus, you can implement it almost for free.
It's just desugaring. Most of Haskell is like this: while the surface language
has become rather complex, the majority of the features boil away rather
transparently into typed lambda terms. Haskell uses a variant of System F
which it actually calls "Core" to underscore how core this concept is :).
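The "desugaring" idea can be sketched in a few lines. For instance, a `let`
binding is just an application of a λ-abstraction; here's an illustrative
sketch in Python syntax (not GHC's actual Core, just the shape of the idea):

```python
# let x = 5 in x * x   desugars to   (λx. x * x) 5
# Python's lambda plays the role of the λ-abstraction here.
square_of_five = (lambda x: x * x)(5)
print(square_of_five)  # 25
```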

Of course, as the accepted answer points out, the λ-calculus is also very
useful for logic. In some sense, it is a sort of "proof essence": the basic
building block of what a proof _is_. I really like this notion because it
gives me a concrete, first-class object to represent a proof. Seeing proofs as
programs really helped me come to terms (heh) with mathematical logic.

One surprising use for the λ-calculus outside of CS is in linguistics. In
particular, linguists use typed λ-calculi to construct formal semantics for
sentences. It's a way to build up the _meaning_ of, well, meaning. Or at least
the things we say ;). I'm not super familiar with semantics in linguistics,
but I've always thought it was very cool that they _also_ use some of the same
tools as programming language theory! Wikipedia has more about this:
[http://en.wikipedia.org/wiki/Formal_semantics_%28linguistics...](http://en.wikipedia.org/wiki/Formal_semantics_%28linguistics%29)

~~~
mafribe
Two brief comments:

(1) The λ-calculus is better thought of as the MVP of SEQUENTIAL programming
languages. It is not a good formalism for concurrency. The π-calculus is more
suitable for concurrency, and, in a real sense, subsumes λ-calculus (see R.
Milner's "Functions as Processes" and its subsequent elaborations for
details).

(2) Proofs = λ-terms works best for CONSTRUCTIVE logic. With classical logic
where not-not-A = A, it kind of works, but not really well.

~~~
DanWaterworth
(1) I don't think that's fair to say. The lambda calculus is a way of
expressing computations with little regard for how they are actually executed.

(2) True, it works better if you add continuations.

~~~
chrismonsanto
re 1) one of the primary limitations of the lambda calculus wrt concurrency is
that it does not model time--there is no way to determine how long a
computation takes. An example of a function the lambda calculus cannot define
is f(x, y) where the fastest computation of x and y wins. You'll see that
function pop up in a number of places in the Haskell world, where it is called
"amb". This function is obviously important from a pragmatic POV, but it turns
out to matter in formal work as well: see the parallel-or/full abstraction
problem for PCF[0].

My personal favorite concurrency formalism is the Chemical Abstract
Machine[1]. The γ-calculus is a neat extension that preserves the spirit of
the lambda calculus, but also permits (purely non-deterministic) concurrency.

[0]:
[http://en.wikipedia.org/wiki/Fully_abstract#Abstraction](http://en.wikipedia.org/wiki/Fully_abstract#Abstraction)

[1]:
[http://www.lix.polytechnique.fr/~fvalenci/papers/cham.pdf](http://www.lix.polytechnique.fr/~fvalenci/papers/cham.pdf)

~~~
mafribe
Most process calculi (including the CHAM) don't model time. I think it's
perfectly possible to consider computation without timing.

Adding timing is straightforward for any model of computation with an
operational semantics. This addition generally changes the semantics
dramatically (e.g. which equalities hold). There are many timed versions of
process calculi, e.g. timed CCS, timed CSP, and the timed π-calculus.

------
javajosh
The importance of _calculus_ is most easily demonstrated by applying it to
physics problems. For example, if I tell you how a ball moves in time, and
then ask you how fast it's moving after a certain number of seconds, then
calculus will help you find the answer (take the derivative of the equation of
motion and plug in the time).

What is the equivalent computer science problem that lambda calculus can help
you solve? Challenge: pretend like you're Richard Feynman and avoid jargon, if
at all possible.

EDIT: I find it quite curious that I got so badly down-voted (-3 and
counting!) for simply asking for concrete examples of applicability to actual,
concrete problems. I've always found that tools are best understood in the
context of their use. Even an abstract concept is useful to speak about in
this way - for example, complexity/Big O analysis helps us with capacity
planning, comparing algorithms, and so on. It may be that lambda calculus
helps us with, oh, decomposition of computation or something like that. But
for all the digital ink I've read about it, it's always seemed like an
academic form of name-dropping. Even the name is intimidating, right? Reminds
me of terms like "Schrödinger's equation" or "canonical ensemble" from the
good old days in physics class. But behind the intimidating names is just a
tool for solving problems - and I have yet to see anyone demonstrate this for
lambda calculus. Granted I haven't looked very hard!

It takes self-awareness to realize that you are enthralled with something
without understanding it. The litmus test for this is the umbrage taken by
someone who's asked simple question about what their high-status concept is
really used for. That's why I mentioned Feynman in particular, because of his
wonderful reputation as having knowledge that was totally grounded in reality.

~~~
nandemo
A century ago, mathematicians were busy proving theorems (for instance, about
calculus), and some of them were trying to figure out if we could start with a
mathematical/logical statement and just calculate the answer, "this is true"
or "this is false". They were not thinking about electronic computers, and
they weren't concerned about how long it would take, they just wanted to know
if you could calculate the answer at all, using an algorithm and pen and paper,
and eventually finishing with an answer. For example, if you have just
something like "10^23423 is bigger than 23454^10", there's no question about
it, you can calculate the answer by using the definitions of integer numbers,
product and exponentiation. But what about _any_ mathematical or logical
statement? That was the big question.

A re-statement of your example is that differential calculus gives you the
definitions of limit and derivative, and it helps you with real world problems
that are well modeled by differentiable functions. It also tells you that some
functions are not differentiable.

Lambda calculus starts with a definition of what is calculating, and then it
tells us that the answer to the big question is "no": some functions cannot be
calculated. But it also tells us that whatever can be calculated, can be done
with a very, very simple programming language.
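To make "very, very simple" concrete: in the λ-calculus even numbers and
arithmetic are built from nothing but single-argument functions (the Church
encoding). A sketch of that encoding in Python:

```python
# Church numerals: the number n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Convert a Church numeral back to an ordinary integer
    # by counting how many times it applies its function.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```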

~~~
javajosh
Thanks. Can you give me an example of using LC to prove a simple function can
(or can't) be calculated? Is f(x){return 1/0} an example of a program that
"can't be calculated" or is that just an error (does LC even have a concept of
"error")? What about f(x){assert(false)}? Also, f(x){f(x)} clearly won't ever
terminate; but how does LC "prove" this?

~~~
Grothendieck
All these statements/calculations/proofs are near-trivial once you've defined
an appropriate semantics for your calculus - probably the easiest route is via
a small-step operational semantics - see, e.g., _Types and Programming
Languages_.

There are many lambda calculi, ranging from the untyped lambda calculus with
strict evaluation underlying Scheme to the Calculus of Inductive Constructions
which provides an alternative foundation for mathematics to set theory and is
the basis of the Coq theorem prover which has formalized proofs of the Four-
Color and Feit-Thompson theorems.
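As a sketch of what a small-step semantics looks like in code: represent terms
as data and give them a one-step reduction function. Then the non-termination
of f(x){f(x)} shows up as a term (the famous Ω) that reduces back to itself.
A toy version in Python (capture-avoiding substitution omitted, so it only
handles examples whose variable names are distinct):

```python
# Lambda terms as tuples:
#   ('var', name) | ('lam', name, body) | ('app', f, a)

def subst(term, name, val):
    # Replace free occurrences of `name` in `term` with `val`.
    tag = term[0]
    if tag == 'var':
        return val if term[1] == name else term
    if tag == 'lam':
        return term if term[1] == name else ('lam', term[1], subst(term[2], name, val))
    return ('app', subst(term[1], name, val), subst(term[2], name, val))

def step(term):
    # One beta-reduction step (call-by-name); None means normal form.
    if term[0] == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':
            return subst(f[2], f[1], a)   # (λx. body) a → body[x := a]
        fs = step(f)
        if fs is not None:
            return ('app', fs, a)
    return None

omega = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
Omega = ('app', omega, omega)
# step(Omega) yields Omega again: the reduction sequence never ends,
# which is exactly how the semantics exhibits non-termination.
```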

~~~
javajosh
So a student asks a question about how physics helps describe the path of a
thrown ball. You answer:

"All these physics problems are near trivial once you've laid out the right
differential equations and solved for the equations of motion. Probably the
easiest route is to use the Leibniz notation to solve f=ma for various
time-independent force functions.

"There are many formulations of (classical) mechanics, ranging from ordinary
linear differential equations to the use of Lagrangians. The latter is an
alternative foundation for mechanics which proved useful in the evolution of
classical E&M (Maxwell's equations), quantum mechanics, and even statistical
thermodynamics."

The problem that I have with LC is not that I think it's useless - I have too
much respect for Alonzo Church to believe that. Heck, I've even read _Gödel,
Escher, Bach_, and enjoyed large swaths of it! What gets me is that people run
around singing the praises of the lambda calculus, but when you ask about what
it's really good for, you get more formalism.

I'm beginning to suspect that the whole thing is an intricate practical joke,
that lambda calculus is fundamentally so self-referencing it really doesn't
have anything to do with anything, other than itself, which is everything.

~~~
groovy2shoes
You can think of the lambda calculus as the first general-purpose programming
language. Logicians were using it to explore the theory of computation before
the modern computer was even invented.

But, despite being a general-purpose programming language, it's not very
practical to use as one. This is simply because the lambda calculus is so
minimal, and we have more sophisticated and featureful programming languages
to solve problems with. So while theoretically the LC can compute _anything_ ,
programmers turn to other languages instead.

So what is the LC used for, if not for writing programs?

Today, it's used as a basis for research into programming languages. PL
researchers will often take the untyped LC and extend it with some feature.
This allows that feature to be explored in isolation, and provides a rigorous,
well-understood platform for writing proofs. This is how we wound up with the
various typed lambda calculi for exploring type theory (e.g., the simply typed
LC, System F, etc.).

GHC actually reduces Haskell into a typed lambda calculus known as System FC,
also known as Core. It uses the Core representation to perform most of its
optimizations. I suspect that having a library of proofs about System F
available helped quite a bit with implementing the optimizations.

The reason students still learn and use the infinitesimal calculus is because
it's still one of the best tools we have for certain problems. The reason
students don't learn the lambda calculus is because we have better tools for
many of its applications (pick just about any other general-purpose PL). But
if you talk to students of type theory, they'll tell you that they did learn
the LC and that they use it quite a bit in their research. I think someone
already mentioned Pierce's _Types and Programming Languages_ , which is a
really good introduction to the topic, starting with the untyped lambda
calculus and gradually building upon it. If you're genuinely curious about the
stuff LC is used for, that's the place to start.

I reckon that for most programmers, the lambda calculus is nothing more than
an intellectual curiosity, but for PL researchers it's still a useful tool --
a useful formalism for exploring and demonstrating properties of programming
languages and type systems.

------
nvarsj
For anyone that wants a readable overview of the λ-calculus, I recommend
reading the second chapter in Simon Peyton Jones' book:

[http://research.microsoft.com/en-
us/um/people/simonpj/papers...](http://research.microsoft.com/en-
us/um/people/simonpj/papers/slpj-book-1987/)

------
hyp0
see also
[https://cstheory.stackexchange.com/questions/3650/historical...](https://cstheory.stackexchange.com/questions/3650/historical-
reasons-for-adoption-of-turing-machine-as-primary-model-of-
computatio/3680#3680)

It's interesting that Turing never rigorously proved that Turing Machines were
a model of computation; the argument was only an intuitive appeal. He actually
apologised for this when introducing them, in his _Entscheidungsproblem_ paper
[http://www.turingarchive.org/browse.php/...](http://www.turingarchive.org/browse.php/B/12)

Also curious is that Church wrote to him, saying that he found Turing's model
more intuitively convincing.

Intuitive appeal isn't everything of course.

~~~
mafribe
The reason why there was no rigorous proof is probably in part that the result
was not controversial at all.

------
analog31
This is probably going to sound really dumb, but I have utterly no formal
computer science background. Lambda calculus in the languages where I have
seen it (Scheme and Python) simply seems like a way to express a function as a
one-liner. Surely, I'm missing something important, but I can't figure out
what.

~~~
bcoates
That's just the lambda abstraction, not the lambda calculus.

The idea is that a lambda abstraction, in a math sense, is much less general
than a single argument function. 'sin θ' is a single argument function that's
defined by some business about the ratio of sides of a right triangle with
angle θ, but it's not a lambda abstraction because all they're allowed to do
is substitute a variable into some expression. 'f(x) = x² - x' can be directly
expressed as a lambda abstraction (f = λ x. x² - x)

Once you guarantee that the function takes the specific form, you can
manipulate it in ways you can't manipulate the general-case function, and it
turns out those manipulations give you enough power to define almost anything
else you'd want to do.

As it turns out, (IIRC) Python makes lambdas essentially opaque objects, and
doesn't let you peek into them any more than you can a general-case function.
This means you can't do any lambda-calculus on them, even simple stuff like
determining if they are exactly the same expression.
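A quick illustration of that opacity:

```python
f = lambda x: x + 1
g = lambda x: x + 1
# Structurally identical terms, but Python only compares object
# identity, so there's no way to see that they're the "same" lambda.
print(f == g)        # False
print(f(3) == g(3))  # True
```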

~~~
kd0amg
_Once you guarantee that the function takes the specific form, you can
manipulate it in ways you can't manipulate the general-case function, and it
turns out those manipulations give you enough power to define almost anything
else you'd want to do._

In λ calculus, the only way one term can inspect another is by applying it.
You don't get an intensional view of a λ abstraction. The other language
features that get built on top of this (conditionals, tuples, etc.) only rely
on functions' extensional features.
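For instance, booleans and pairs can be encoded so that "using" them is
nothing but function application. A sketch of those encodings in Python:

```python
# Church booleans: a boolean is a function choosing one of two arguments.
true  = lambda a: lambda b: a
false = lambda a: lambda b: b
if_   = lambda c: lambda t: lambda e: c(t)(e)

# Church pairs: a pair is a function that feeds both components
# to a selector; projections just pass in a boolean.
pair = lambda a: lambda b: lambda sel: sel(a)(b)
fst  = lambda p: p(true)
snd  = lambda p: p(false)

print(if_(true)("yes")("no"))             # yes
print(fst(pair(1)(2)), snd(pair(1)(2)))   # 1 2
```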

------
betterunix
This comes to mind:

[http://en.wikipedia.org/wiki/Curry-
Howard_Correspondence](http://en.wikipedia.org/wiki/Curry-
Howard_Correspondence)

