
Lisp is not based on the Lambda Calculus - danielszm
https://danielsz.github.io/blog/2019-08-05T21_14.html
======
vga805
The post quotes McCarthy:

"one of the myths concerning LISP that people think up or invent for
themselves becomes apparent, and that is that LISP is somehow a realization of
the lambda calculus, or that was the intention. The truth is that I didn't
understand the lambda calculus, really" - John McCarthy

So there are two issues here: 1) whether or not it was McCarthy's intention
to realize the Lambda Calculus in LISP, and 2) whether or not LISP is such a
realization. Or at least some kind of close realization.

The answer to 1 is clearly no. This doesn't imply an answer to 2 one way or
another.

If 2 isn't true, what explains the widespread belief? Is it really just that
he, McCarthy, borrowed some notation?

~~~
kmill
If it was a realization of a lambda calculus, then it is one with (a)
primitives, (b) strict evaluation, (c) quoted lambda terms, and (d) "dynamic"
bindings.

(a) In classic lambda calculus, _everything_ is a lambda term. McCarthy's Lisp
has primitives like lists and numbers. However, it is known that lambda
calculus is powerful enough to encode these things as lambda terms (for
example,

    
    
      null = (lambda (n c) (n))
      (cons a b) = (lambda (n c) (c a b))
    

gives a way to encode lists. The car function would be something like

    
    
      (car lst) = (lst (lambda () (error "car: not cons"))
                       (lambda (a b) a))
    

This would not work in the original Lisp because of binding issues: the
definition of cons requires the specific a and b bindings be remembered by the
returned lambda.)
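
As a minimal sketch of the same encoding in Scheme (whose lexical scoping
supplies exactly the binding behavior the original Lisp lacked; cons* and
car* are hypothetical names chosen to avoid shadowing the built-ins):

    
    
      (define null* (lambda (n c) (n)))
      (define (cons* a b) (lambda (n c) (c a b)))
      (define (car* lst)
        (lst (lambda () (error "car: not cons"))
             (lambda (a b) a)))
      (car* (cons* 1 null*))  ; => 1: the closure returned by cons*
                              ; remembers its own a and b bindings
    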

(b) Lambda calculus does not have any evaluation rules. Rather, it is like
algebra where you can try to normalize an expression if you wish, but the
point is that some lambda terms are equivalent to others based on some simple
rules that model abstract properties of function compositions. Lambda-
calculus-based programming languages choose some evaluation rule, but there is
no guarantee of convergence: there might be two programs that lambda calculus
says are formally equivalent, but one might terminate while the other might
not. Depending on how you're feeling, you might say that no PL for a computer
can ever realize the lambda calculus, but more pragmatically we can say most
languages use lambda calculus with a strict evaluation strategy.
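
For instance, here are two expressions the calculus equates but a strict
evaluator distinguishes (a sketch in Scheme):

    
    
      (define (loop) (loop))      ; a diverging computation
      (define (const-five x) 5)   ; ignores its argument
      ;; Normal-order reduction: (const-five (loop)) ==> 5, since x is unused.
      ;; Applicative-order (strict) Scheme evaluates the argument first,
      ;; so (const-five (loop)) never returns.
    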

(c) The lambda terms in lambda calculus are not inspectable objects, but more
just a sequence of symbols. Perhaps one of the innovations of McCarthy is that
lambda terms can be represented using lists, and the evaluator can be written
as a list processor (much better than Gödel numbering!). In any case, the fact
that terms have the ability to evaluate representations of terms within the
context of the eval makes things a little different. It's also not too hard to
construct a lambda evaluator in the lambda calculus[1], but you don't have the
"level collapse" of Lisp.

(d) In lambda calculus, one way to model function application is that you
immediately substitute in arguments wherever that parameter is used in the
function body. Shadowing is dealt with using a convention in PL known as
lexical scoping, and an efficient implementation uses a linked list of
environments. In the original Lisp, there was a stack of variable bindings
instead, leading to something that is now known as dynamic scoping, which
gives different results from the immediate substitution model. Pretty much
everything fun you can do with the lambda calculus depends on having lexical
scoping.
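
The classic illustration (a sketch; Scheme gives the lexical answer, and the
comments note what a dynamically scoped Lisp would give):

    
    
      (define x 'global)
      (define (show) x)     ; which x does this refer to?
      (define (test)
        (let ((x 'local))
          (show)))
      (test)  ; lexical scoping (Scheme):     => global
              ; dynamic scoping (early Lisp): => local
    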

All this said, the widespread belief about Lisp being the lambda calculus
probably comes from Scheme, which _was_ intentionally lambda calculus with a
strict evaluation model. Steele and Sussman were learning about actors for AI
research, and I think it was Sussman (a former logician) who suggested that
their planning language Schemer (truncated to Scheme) ought to have real
lambdas. At some point, they realized actors and lambdas (with mutable
environments) had the exact same implementation. This led to "Scheme: An
Interpreter for Extended Lambda Calculus" (1975) and the "Lambda the ultimate
_something_ " papers. Later, many of these ideas were backported to Lisp
during the standardization of Common Lisp.

[1]
[https://math.berkeley.edu/~kmill/blog/blog_2018_5_31_univers...](https://math.berkeley.edu/~kmill/blog/blog_2018_5_31_universality_quines.html)

~~~
paulddraper
> In classic lambda calculus, everything is a lambda term

OO says everything is an object. Even though Java has non-object primitives,
we're still gonna classify Java as OO.

> Lambda calculus does not have any evaluation rules.

> The lambda terms in lambda calculus are not inspectable objects, but more
> just a sequence of symbols.

It's not clear to me why this makes Lisp not in the family of Lambda
implementations.

> In the original Lisp, there was a stack of variable bindings instead,
> leading to something that is now known as dynamic scoping.

That's true. Every modern Lisp (Scheme, Clojure, Racket) has lexical scoping.
And Common Lisp uses lexical scoping by default.

> Later, many of these ideas were backported to Lisp during the
> standardization of Common Lisp.

Again this contributes to the notion that LISP/Scheme/Lambda Calculus were
"discovered", not that Lambda Calculus has an explicit pedigree.

~~~
pron
> this contributes to the notion that LISP/Scheme/Lambda Calculus were
> "discovered", not that Lambda Calculus has an explicit pedigree.

That notion is wrong (at least with a very high likelihood), and it's usually
stated by people who fetishize the lambda calculus but know little of its long
evolution. It's just your ordinary case (of hubris) where people aesthetically
drawn to something describe it as inevitable or even a law of nature. And I
know it's wrong in part because of the following quote:

> _We do not attach any character of uniqueness or absolute truth to any
> particular system of logic. The entities of formal logic are abstractions,
> invented because of their use in describing and systematizing facts of
> experience or observation, and their properties, determined in rough outline
> by this intended use, depend for their exact character on the arbitrary
> choice of the inventor._

This quote is by the American logician Alonzo Church (1903-1995) in his 1932
paper, _A Set of Postulates for the Foundation of Logic_ , and it appears as
an introduction to the invention Church first described in that paper: the
(untyped) lambda calculus [1].

The simpler explanation, which has the added benefit of also being true, or at
least supported by plentiful evidence, is that the lambda calculus was
invented as a step in a long line of research, tradition and aesthetics, and
so others exposed to it could have (and did) invent similar things.

If you're interested in the real history of the evolution of formal logic and
computation (and algebra) you can find the above quote, and many others, in a
300-page anthology of (mostly) primary sources that I composed about a year
and a half ago [2]. They describe the meticulous, intentional invention of
various formalisms over the centuries, as well as aesthetic concerns that have
led some to prefer one formalism over another.

[1]: Actually, in that paper, what would become the lambda calculus is
presented as the proof calculus for a logic that was later proven unsound. The
calculus itself was then extracted and used in Church's more famous 1936
paper, _An Unsolvable Problem of Elementary Number Theory_ , in an almost-
successful attempt to describe the essence of computation. That feat was
finally achieved by Turing a few months later.

[2]: [https://pron.github.io/computation-logic-
algebra](https://pron.github.io/computation-logic-algebra)

~~~
carlehewitt
BTW, the Church/Turing theory of computation is not universal for digital
computation as explained in the following article:

[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003)

~~~
pron
That proof is like disproving the conservation of energy by pointing out that
the water inside a kettle boils. Or speaking about the "Toaster-Enhanced
Turing Machine"
([https://www.scottaaronson.com/blog/?p=1121](https://www.scottaaronson.com/blog/?p=1121)).
It's easy to "disprove" Turing's thesis when you misstate it.

Turing's thesis talks about some system transforming an input to an output.
Clearly, a TM could simulate the actor itself in your proof. If it is not able
to simulate the entire actor-collaborator system, that's only because you may
have given the _collaborator_ (whatever it is that generates the messages)
super-Turing powers. You assumed that there could be something that could
issue a `stop` after an arbitrary number of `go`'s, but you haven't
established that such a mechanism could actually exist, and _that's_ where
the super-Turing computation actually hides: in a collaborator whose existence
you have not established. As you have not established the existence of the
collaborator, you have not established the existence of your actor-
collaborator system. I claim that a TM cannot simulate it simply because it
cannot exist (not as you describe it, at least).

So here's another "proof": The actor machine takes two messages, Q and
A(Bool), and it gets them alternately, always Q followed by A. Every time it
gets a Q, it increments a counter (initialized to zero) by 1 to the value N,
and emits a string corresponding to the Nth Turing machine. It then gets a
message A containing a value telling it whether the Nth TM terminates on an
empty tape, and in response it emits A's argument back. And here you have an
actor machine that decides halting!

~~~
ProfHewitt
The article referenced presents a strongly-typed proof that the halting
problem is computationally undecidable. Nevertheless, Actors can perform
computations impossible on a nondeterministic Turing Machine.

~~~
pron
If they can, you haven't shown that. The "computation" you present is not just
the actor's behavior, but the behavior of a combined actor-collaborator system
(the collaborator is whatever it is that sends the actor messages). This
system presents "super-Turing" behavior iff the collaborator is super-Turing.
You haven't shown such a collaborator, that can reliably emit a `stop` command
after an arbitrary number of `go`s, can exist. It's always possible that the
scientific consensus is wrong, but that requires proof, and the "proof" in the
paper isn't one.

~~~
ProfHewitt
There is no "collaborator" in the Actor computation that Plotkin:s proof shows
cannot be preformed by a nondeterministic Turing Machine.

~~~
pron
Yes, there is. The "computation" of the actor (really, actor-collaborator)
relies on something that can reliably emit a `stop` after an arbitrary number
of `go`s. Please show that something can actually do that, and I'll show you a
TM that can do the same. It's easy to _describe_ non-computable behaviors; the
difficulty is demonstrating that there are actual physical systems that can
carry them out.

~~~
ProfHewitt
The Actor (which cannot be implemented by a nondeterministic TM) sends the
'stop' message to itself. However, just as there can be an arbitrarily long
amount of time between two steps of a computation, there can be an arbitrarily
long amount of time for a message to be delivered.

~~~
pron
> The Actor (which cannot be implemented by a nondeterministic TM)

As you now describe it, it cannot be implemented (physically realized) at all.
Or, conversely, any physically implementable refinement of it (which will not
exhibit the entire range of behaviors) _will_ be simulatable by a TM.

There are many _abstract_ machines that cannot be implemented by TMs -- e.g.
various oracle TMs. There is nothing special, surprising or new about that.

There are many formalisms that are more or less convenient abstractions for
various kinds of systems. There is nothing special, surprising or new about
that, either. In fact, some formalisms that can describe non-computable
behaviors are commonly used to specify either software or hardware as they're
convenient (like Lamport's TLA).

But you're making a claim about the Church-Turing thesis, which, as commonly
interpreted today (as the physical Church-Turing thesis), is the claim that
any mechanism that can be implemented by a _physical mechanism_ can be
simulated by a TM. Unless you show how to build a physical system that cannot
be simulated by a TM, your claim is no refutation of the thesis; it has
nothing to do with it. Your claim that arbiters in digital circuits cannot be
simulated has not been established and is not recognized by scientific
consensus.

> However, just as there can be an arbitrarily long amount of time between two
> steps of a computation, there can be an arbitrarily long amount of time for a
> message to be delivered.

This is a completely different use of "arbitrary". In TMs, the fact that an
arbitrary amount of time can pass between steps means that any device, with
_any_ finite amount of time between steps, can produce the full range of TM
behaviors. In your actor case, to get non-computable behavior, you need to
show that the device can delay the message by _every_ amount of time. You need
to show that such a physical device can exist.

Put simply, it's one thing to propose a non-computable abstraction that's
convenient to model some systems, and another thing altogether to claim that
there are realizable physical systems that cannot be simulated by a Turing
machine. The former is useful but mundane (in fact, all of classical analysis
falls in this category); the latter has not been achieved to date.

~~~
ProfHewitt
Digital arbiters can theoretically take an arbitrary amount of time to settle
although statistically they tend to settle sooner rather than later. Also, if an
Actor sends itself a 'stop' message over the Internet via Timbuktu, it can
take an arbitrary amount of time to be received back. Also, an Actor can take
an arbitrary amount of time to process a message.

In the above models of computation, _arbitrary_ means absolutely arbitrary,
i.e., there is no a priori bound on the amount of time that it can take.

Your trouble may be with Plotkin's proof, which shows that state machine
models of nondeterministic computation are inadequate.

~~~
pron
I have no problem with the proof. It's easy to come up with non-computable
abstractions. All of calculus is one (in fact, Turing himself pointed it out
in "On Computable Numbers", and he invented and used others when he found them
useful), yet it's commonly used to model natural systems without anyone
considering it a proof against Church-Turing. So the fact that an
_abstraction_ is non-computable is unsurprising and has nothing to do with the
thesis. The Turing thesis is relevant when you're talking about a physical
realization, and you have _not_ provided any proof that you've found one
that's not Turing-computable.

Every physical object does have an a priori bound on the amount of time it can
take to do something, unless that time could possibly be infinite. The reason
is that it needs some sort of a counter, so it needs some state, and there's
only so much state storable in the universe to store a counter.

------
nils-m-holm
Lambda Calculus (LC) versus LISP is not just about lexical scoping, but also
about partial application (currying), which is cumbersome in LISP and natural
in LC. In LC (where \ = lambda)

    
    
        (\xy.x)M  ==>  \y.M
    

while in LISP

    
    
        ((lambda (x y) x) M)  ==>  undefined
    

because the lambda function expects two arguments. Of course, \xy.x is just
an abbreviation for \x.\y.x, so the LISP counterpart would really be

    
    
        ((lambda (x) (lambda (y) x)) M)  ==>  (lambda (y) M)
    

but this only proves the point that currying is natural in LC and not in
LISP, because LC provides syntactic sugar that allows one to treat
higher-order functions and functions of multiple variables in the same way.

Also, LC is not compatible with functions of a variable number of arguments,
which are common in LISP. For instance,

    
    
        (+ 1)  ==>  1
    

in most LISPs, but given PLUS == \mnfx.mf(nfx) and 1 == \fx.fx

    
    
        PLUS 1  ==>  \nfx.f(nfx) == SUCC
    

i.e., (PLUS 1) reduces to "SUCCessor", the function adding one to its
argument.

In most LISP dialects, you can pass any number of arguments to a variable-
argument function like +. So what does the syntax (F X) denote in general? The
application of a unary function to one argument or the partial application of
a binary function? Or a ternary one...?

In LC it does not matter, because multi-variable functions and higher-order
functions are the same.
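
A hand-curried style recovers that uniformity in any Lisp with lexical
closures (a sketch in Scheme; plus* and succ* are hypothetical names):

    
    
      (define plus* (lambda (m) (lambda (n) (+ m n))))  ; one argument at a time
      ((plus* 1) 2)            ; => 3: (F X Y) is always ((F X) Y)
      (define succ* (plus* 1)) ; partial application falls out for free
      (succ* 41)               ; => 42
    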

I have developed a LISPy language that uses currying instead of functions of
multiple arguments in the book Compiling Lambda Calculus
([https://www.t3x.org/clc/index.html](https://www.t3x.org/clc/index.html)).

You can download the code here:
[https://www.t3x.org/clc/lc1.html](https://www.t3x.org/clc/lc1.html).

------
jasim
If I understood you right, Lisp was not directly inspired by lambda calculus
but grew out of McCarthy's own research into recursive functions, where he
found that three primary functions can cover the whole of computation.

What I'm extrapolating from this is that McCarthy's ideas are similar in
implication to Lambda Calculus where you can define computation with just
function abstraction and application, and use Peano numbers to represent data.
Both approaches end up creating a purely functional way to write programs.

Would that be correct? I also wonder whether there is anything we can take
away from this knowledge that is applicable to programming or how we look at
it?

~~~
tudelo
What "three primary functions" are you referring to?

~~~
dualogy
Probably means the 3 irreducible primitives in LC: applications,
abstractions, and "variables" (i.e., identifiers)

~~~
tomstuart
In this context, it’s more likely the 3 basic primitive recursive functions:
constant, successor, projection.
[https://en.wikipedia.org/wiki/Primitive_recursive_function#D...](https://en.wikipedia.org/wiki/Primitive_recursive_function#Definition)
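
For reference, those three basic functions, sketched in Scheme (the names
are mine):

    
    
      (define (zero . xs) 0)               ; constant zero function
      (define (succ x) (+ x 1))            ; successor
      (define (proj i)                     ; i-th projection (1-indexed)
        (lambda xs (list-ref xs (- i 1))))
    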

~~~
cardiffspaceman
Those functions compose into programs that halt (if I read the linked Wiki
right). The LC encompasses those programs plus more, some of which cannot be
shown to halt.

------
bandrami
I'm not even sure why it gets pushed as "functional"; I mean, you can pass and
return functions, but that's really not the point of the language like it is
with ML or Haskell. It's primarily a symbolic language.

------
pwpwp
One of the newest Lisp dialects, Kernel, is pretty close to lambda calculus,
though. Like in LC, there is no implicit evaluation of arguments. A fexpr
receives the "source code" of its input expressions, similar to LC. Then it
can explicitly evaluate those it cares about.

[https://web.cs.wpi.edu/~jshutt/kernel.html](https://web.cs.wpi.edu/~jshutt/kernel.html)
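
For a flavor of it, here is a sketch based on Shutt's report ($my-if is a
made-up name; $define!, $vau, $if, and eval are Kernel's own):

    
    
      ;; An operative receives its operands unevaluated, plus the dynamic
      ;; environment env, and explicitly evaluates only what it needs.
      ($define! $my-if
        ($vau (test then else) env
          ($if (eval test env)
               (eval then env)
               (eval else env))))
    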

~~~
kd0amg
"Receiving the source code" of an argument in lambda calculus is an accident
of notation. The source code is not observable by the function it is passed
to. Confluence implies that there is no way within lambda calculus to
distinguish the result of reducing a term from the term itself.
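
In the thread's notation: for every context F,

    
    
      F ((\x.x) N)  ==>  F N
    

so no term F can tell a redex from its reduct.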

------
bjourne
Afaik, Haskell is a realization of the (typed!) lambda calculus. Lisps aren't
because they don't do lazy evaluation. The normal-order LC reduction of (\a.
a) (\c. d) (\e. f) contracts the outermost redex first, giving (\c. d) (\e.
f), while a strict Lisp evaluates arguments before applying. This might seem
like a minor detail but it means general recursion using the normal-order Y
combinator isn't actually implementable in strict Lisps (I could be wrong
though).
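
For what it's worth, the strict-evaluation workaround is the eta-expanded,
applicative-order variant of Y, usually called the Z combinator (a sketch in
Scheme):

    
    
      ;; Y = (\f. (\x. f (x x)) (\x. f (x x))) diverges under strict
      ;; evaluation because (x x) is evaluated eagerly. Eta-expanding the
      ;; self-application delays it until it is actually called:
      (define Z
        (lambda (f)
          ((lambda (x) (f (lambda (v) ((x x) v))))
           (lambda (x) (f (lambda (v) ((x x) v)))))))
      (define fact
        (Z (lambda (self)
             (lambda (n) (if (zero? n) 1 (* n (self (- n 1))))))))
      (fact 5)  ; => 120
    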

~~~
mrkeen
> general recursion using the y combinator isn't actually implementable in
> lisps

I think the 'typed' bit is key. You can't implement Y in plain old Haskell
because typing the self-application would require an infinite type (the
occurs check rejects it); you need a recursive newtype to tie the knot.
~~~
carlehewitt
There is a strongly-typed definition of Y here:

[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003)

~~~
ProfHewitt
Some enterprising hacker should research who should be credited for the
strongly-typed _recursive_ def of Y.

------
juliangamble
There are some common themes here. Let's get some precise terminology so we
can all talk about the same thing.

Some questions to ponder:

Is Lisp a term re-writing system?
[https://news.ycombinator.com/item?id=9554335](https://news.ycombinator.com/item?id=9554335)

Is lambda calculus a term rewriting system?
[https://cstheory.stackexchange.com/questions/36090/how-is-
la...](https://cstheory.stackexchange.com/questions/36090/how-is-lambda-
calculus-a-specific-type-of-term-writing-system)

Is the Mathematica language a term-rewriting system?
[https://mathematica.stackexchange.com/questions/119933/why-d...](https://mathematica.stackexchange.com/questions/119933/why-
did-the-mathematica-language-choose-term-rewriting-instead-of-the-lambda-cal)

And to round it all up: Is Lisp an evaluation system and Lambda calculus an
evaluation system? [I'll leave this one to the reader]

------
danharaj
It is difficult to believe that McCarthy did not understand he was beating the
same horse along with Church, Curry, Schoenfinkel, et al.

~~~
lonelappde
Why? Does every compiler writer know all the theoretical underpinnings and
generalizations of their work? Or do that make something that solves a problem
without investigating the entire universe around it?

~~~
danharaj
Because he was certainly aware of the literature and he was a top notch
scholar. Your follow-up questions seem to be implying something, care to spell
it out for me?

~~~
phkahler
But was he top notch back then? He's most well known for "creating" Lisp. And
I put that in quotes because he never meant for anyone to implement it on a
real machine.

~~~
ptrott2017
re: was he top notch? By 1955 he was an assistant professor of Mathematics
and known in his field. In 1956 he organised the Dartmouth conference, where
the field of Artificial Intelligence got its name. He was a peer of Claude
Shannon, Marvin Minsky and Nathaniel Rochester - so yes, he was top notch.
The YC audience knows him for Lisp - but he was known for a lot more in his
fields of research.

------
leshow
Stupid question: why is it often written "_the_ lambda calculus" and not just
"lambda calculus"?

~~~
QuercusMax
Probably for the same reason that some people refer to regular calculus as
"the calculus".

~~~
commandlinefan
You'll always sound smart if you remember that "maths" is plural and one of
them is "the calculus".

~~~
leadingthenet
Maths is the common spelling and pronunciation in standard British English,
though.

------
waitwhatwhere
Interesting parallel with stories where authors think teachers get their
writings "wrong".

Unintended metaphor and application are things.

Smacks of a cognitive bias known as functional fixedness:
[https://en.m.wikipedia.org/wiki/Functional_fixedness](https://en.m.wikipedia.org/wiki/Functional_fixedness)

A screwdriver can also be a pry bar :-)

Imo this is why looser IP laws are important. Humanity needs to be able to
rethink and find new applications of its epistemological ideas in order to
find new ideas of interest.

Too often we’re held to thinking about IP only the way the author intended.
It’s almost pushing into thought policing.

------
dogfishbar
I spent a lot of time on this. See _M-LISP: A Representation-Independent
Dialect of LISP with Reduction Semantics_ , TOPLAS, 1992; the relevant bit is
in Section 2.

It's true that J. McCarthy had only a passing familiarity with LC.
M-expression LISP, as it was originally conceived, was all about first-order
recursion schemes over S-expressions. But due to a very simple error in the
base case of an inductive definition, LISP 1.0 "featured" or "supported"
higher-order functions, à la LC.

------
namelosw
The problem is like 'is Erlang an Actor language?'. The answer is yes.

Carl Hewitt developed the Actor model based on Smalltalk in the 1970s.

Joe Armstrong created Erlang in the 1980s; he didn't know the Actor model at
all at that time. Erlang doesn't even have the concept of an Actor; it
accidentally implemented the Actor model through the elegant design of its
processes.

But when it comes to the Actor model nowadays, Erlang is basically a must-
mention language, although it was never intended as an Actor language.

~~~
ProfHewitt
The following article has a critique of Erlang as an Actor language:

[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418003)

------
tempguy9999
This is not relevant directly to the subject but perhaps someone in formal
langs can help me. I'm interested in optimisation of (necessarily) pure
functional langs. Starting with deforesting (the elimination of intermediate
structures), e.g.

    
    
      map(f, map(g, list(1, 2, 3)))
    

can be optimised trivially by a human to

    
    
      map(f.g, list(1, 2, 3))
    

(where f.g is functional composition) but I want to do this automatically, and
the first step is to play with it. I've defined stuff on paper then started
substituting, but it's slow and, being me, error-prone when done with paper
and pen.
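
To play with it mechanically, a toy bottom-up rewriter over quoted
expressions is only a few lines (a sketch in Scheme; fuse is a made-up name,
and real deforestation handles far more cases):

    
    
      ;; Rewrite (map f (map g xs)) ==> (map (compose f g) xs), bottom-up.
      (define (fuse expr)
        (if (pair? expr)
            (let ((expr (map fuse expr)))   ; rewrite subexpressions first
              (if (and (eq? (car expr) 'map)
                       (= (length expr) 3)
                       (pair? (caddr expr))
                       (eq? (car (caddr expr)) 'map))
                  (let ((f (cadr expr)) (inner (caddr expr)))
                    (list 'map (list 'compose f (cadr inner)) (caddr inner)))
                  expr))
            expr))
      (fuse '(map f (map g (list 1 2 3))))
      ;; => (map (compose f g) (list 1 2 3))
    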

Does anyone know of symbolic manipulation software for Haskell, or a similar
syntax (I'd prefer to avoid Lisp syntax if possible, but OK if nothing else),
which will allow me to do this easily and get a feel for it?

Thanks

~~~
bontaq
You could use uniplate and a small AST to play with it more; the paper has
examples of transformations.

the paper: [https://ndmitchell.com/downloads/paper-
uniform_boilerplate_a...](https://ndmitchell.com/downloads/paper-
uniform_boilerplate_and_list_processing-30_sep_2007.pdf)

small tutorial:
[https://www.cs.york.ac.uk/fp/darcs/uniplate/uniplate.htm](https://www.cs.york.ac.uk/fp/darcs/uniplate/uniplate.htm)

~~~
tempguy9999
This seems (AFAICT) a bit higher-level than what I'm after, but it's very
interesting nonetheless. I'll have a play, thanks.

------
didibus
Can we extend this to another misconception, then? That functional
programming stems from the Lambda Calculus? When in reality it might come
from Lisp, which does not come from Lambda Calculus, thus making Lisp the
root of the tree for the origin of functional programming?

~~~
cannabis_sam
Not really, Peter Landin’s ISWIM is essentially syntactic sugar over lambda
calculus, and went on to influence ML and Haskell. So there is a more direct
lineage from lambda calculus to functional programming via that route.

(Regarding the sibling comment: Landin’s paper also predates Backus’ paper by
about 10 years)

~~~
didibus
No idea how accurate it is, but Wikipedia says that ISWIM was influenced by
Lisp...

~~~
cannabis_sam
That is correct.

From Landin’s paper:

6. Relationship to LISP

ISWIM can be looked on as an attempt to deliver LISP from its eponymous
commitment to lists, its reputation for hand-to-mouth storage allocation, the
hardware dependent flavor of its pedagogy, its heavy bracketing, and its
compromises with tradition.

------
didibus
It isn't clear though if McCarthy didn't know anything about the Lambda
Calculus, or simply didn't know it well and didn't create Lisp as a concrete
realization of Lambda Calculus. He might have created Lisp for whatever other
reasons, doing his own exploration, but it's probable that in doing so he
drew on his knowledge of much pre-existing literature, which could include
some of the Lambda Calculus - hence Lisp's resemblance to it, like the use of
lambda to define functions.

Also, realistically speaking, no programming language is based on the Lambda
Calculus as is, even those that try to be.

------
peterkelly
Here's a simpler version:

Lisp has mutable variables. Lambda calculus doesn't.
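
A small demonstration (in Scheme):

    
    
      (define (make-counter)
        (let ((n 0))
          (lambda () (set! n (+ n 1)) n)))  ; set! mutates the binding
      (define tick (make-counter))
      (tick)  ; => 1
      (tick)  ; => 2: the same expression, a different value, which the
              ;       pure lambda calculus can only simulate
    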

------
deckard1
When pedantry goes wrong...?

This article repeats the claim "TL;DR Lisp is not based on the Lambda
Calculus".

But that's not actually what McCarthy said. McCarthy said:

> one of the myths concerning LISP that people think up or invent for
> themselves becomes apparent, and that is that LISP is somehow a realization
> of the lambda calculus

"Based on" and "realization of" are two different things. This kind of
exaggerated or hyperbolic pedantry strikes me as clickbait. Which is
unfortunate, because the article does contain some good content.

If you read the LISP I manual, you will see that concepts beyond the obvious
lambda notation are taken directly from _The Calculi of Lambda Conversion_ .
Notably, the distinction between forms and functions.

Clearly, we're splitting some very fine hairs here.

------
proc0
So it wasn't based on LC, but it's probably isomorphic to LC, right? This is
why it's hard for me to believe math is invented. Different people, in
separate efforts, all arrive at the same patterns with different names.

------
pankajdoharey
It is entirely possible to realise the Lambda calculus using Lisp. But
McCarthy not understanding it is surprising.

~~~
Isamu
>McCarthy not understanding it is surprising.

I think he is commenting on the subtleties of it.

I think many reading here will say they understand it or have studied it in a
course but I am not so sure everyone gets the subtle points. Myself I have
always puzzled over the difference between what programmers call LC and what
seems to be discussed by Church.

~~~
ozmaverick72
I understand that Turing and Church came up with different approaches to
describing the fundamentals of computing. You can see there is a relationship
between LC and LISP. My question is how did we get to the von Neumann
architecture and CPU instruction sets from either Church or Turing's work?

------
qubex
Anybody who holds forth on syntactical matters (lambda calculus and LISP being
two examples thereof) and commits the grammatical heresy of writing “I wasn’t
_going to go_ home” (emphasis mine) in lieu of “I wouldn’t be going home” has
just neutered themselves, in my humble opinion at least.

~~~
foldr
There's nothing grammatically wrong with "going to go home".

~~~
qubex
When I was taught English it was most definitely frowned upon and disparaged
as “at best an Americanism”. It is grating to the native British ear and has
no place in formal writing. There is no situation where it cannot be avoided
by rephrasing the sentence (usually, by no more than employing “will be
going”, but occasionally resorting to other constructs). During the IB we were
absolutely forbidden from using it and would be marked down severely.

~~~
foldr
I'm a native British English speaker and it sounds completely normal to me.
There's certainly nothing grammatically wrong with it. It's the same structure
as "going to eat" or "going to walk".

Marking you down for using an expression used by all native English speakers
is bonkers.

