
Algebra and the Lambda Calculus (1993) - espeed
https://people.csail.mit.edu/jaffer/lambda.txt
======
rbonvall
I took the liberty of converting this to LaTeX:
[https://www.overleaf.com/read/vbnkshcwhcyk](https://www.overleaf.com/read/vbnkshcwhcyk)

~~~
headsupftw
Thanks for doing this. There's an error on page 2, though: 0 = 1^2 + g - g@.
It should be 0 = 1 + g^2 - g@.

~~~
rbonvall
Good catch. Fixed.

------
emmanueloga_
I was looking at the references at the bottom and saw `6`, and I thought: "Hygienic
Macro Expansion? M.Fellinson? He must be referring to Matthias
Felleisen...". Sure enough: [1]

Doesn't mean I understood the rest of the paper... sigh :-p

1:
[https://dl.acm.org/citation.cfm?id=319859](https://dl.acm.org/citation.cfm?id=319859)

------
joe_the_user
I like the article's discussion of algebraic operations and computer
representation as well as how to implement the lambda operation. That's all
abstract algebra and its extensions.

What I'm still mystified by when confronted with these discussions is the way
the lambda calculus is considered equivalent to a Turing machine. Is it
essentially that you have these complex variable-substitution schemes which
can "encode" a Turing machine's tape and transitions, or is there something
more straightforward?

~~~
s_m_t
I guess the tape is just a linked list of bits? What confuses me about lambda
calculus, and a lot of math in general, as opposed to something like the
Turing machine, is that... how do I put this...

To actually implement Lisp or lambda calculus in real life you need "cons",
right? And some way to increment your equivalent of an "instruction
pointer". Or else you don't have any memory or information to operate on, and
you don't have any way to move on to the next step. If you are doing it on
paper, I guess the act of moving your pen to an empty space is your cons, and
moving your eyes across the page is like incrementing your instruction
pointer... In the idea of the Turing machine this is all explicitly explained,
but every time I try to read about lambda calculus it seems like they just
expect you to know it right away. This might seem really trivial to other
people but it's how I tend to think.
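
(Aside: one way to see that "cons" needs no primitive memory cell is the Church encoding of pairs, where a pair is itself just a function. A minimal sketch, using Python lambdas purely as executable notation for λ-terms:)

```python
# Church pairs: cons/car/cdr built from nothing but one-argument functions.
# The pair "remembers" its components in a closure; no mutable storage needed.
cons = lambda a: lambda b: lambda select: select(a)(b)
car  = lambda p: p(lambda a: lambda b: a)   # select the first component
cdr  = lambda p: p(lambda a: lambda b: b)   # select the second component

# A "tape" of bits is then just nested pairs, terminated by any marker.
tape = cons(1)(cons(0)(cons(1)(None)))
```

Here `car(cdr(tape))` walks one step along the tape and reads a `0`, with no instruction pointer other than function application itself.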

~~~
empath75
Lambda calculus is simple, mechanical symbol substitution. There’s no
information or memory needed outside of the initial ‘program’ other than the
substitution rules.

Read about Church encodings to see how numbers, pairs, conditionals, etc. are
encoded.
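
For instance, Church numerals encode the number n as "apply f n times", and arithmetic falls out of substitution alone. A quick transliteration (Python lambdas standing in for λ-terms):

```python
# Church numerals: n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

# Decode to a native int by applying "+1" to 0.
to_int = lambda n: n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
```

With these definitions, `to_int(add(two)(three))` evaluates to 5 and `to_int(mul(two)(three))` to 6, with no numbers or memory anywhere in the encoding itself.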

~~~
kazinator
I doubt it; substitution will blow up on self-referential tricks like the Y
combinator, where a function receives itself as an argument: any attempt to
substitute the raw lambda text for this argument leads to infinite regress (
_à la_ Droste effect).

~~~
empath75
It’s still just an infinite series of substitutions.

~~~
kazinator
That's a failure to reduce, showing that reduction by substitution is an
incomplete evaluation strategy at best. The Y combinator works under
call-by-value and other strategies; you can stick it in production code, if
you're so inclined.
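
Concretely: the naive Y = λf.(λx.f (x x)) (λx.f (x x)) diverges under call-by-value, but its eta-expanded variant (the Z combinator) delays the self-application until it is demanded. A sketch in Python, which is call-by-value:

```python
# Naive Y diverges under call-by-value:
# Y = lambda f: (lambda x: f(x(x)))(lambda x: f(x(x)))  # RecursionError if applied
# Z eta-expands the self-application x(x) into lambda v: x(x)(v),
# so it is only unfolded when the recursive call actually happens.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Anonymous recursion: factorial with no named recursive call.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

`fact(5)` returns 120; uncommenting and applying the naive `Y` instead overflows the stack, which is exactly the infinite regress described above.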

------
mesarvagya
I found a paper by Raúl Rojas which is also a good introduction to the
λ-calculus: [http://www.inf.fu-berlin.de/lehre/WS03/alpi/lambda.pdf](http://www.inf.fu-berlin.de/lehre/WS03/alpi/lambda.pdf)

~~~
espeed
The significance of this paper is not about learning the λ-calculus.

Rather, to me the significance of this paper is that it presents a novel way
of "implementing the lambda calculus in an algebraic system" and provides a
correspondence between the λ-calculus and the matrix model of computation:

    
    
      Vector and matrix valued functions can be represented
      by vectors and matrices some of whose entries are 
      lambda expressions.
    

I have been looking for a lingua franca for programming languages, a way to
unify the langs and make use of the decades of wisdom encoded into our langs'
Great Libs. Maybe the matrix model and linear algebra is the lingua franca I
seek.
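
As a concrete (and admittedly toy) reading of the quoted sentence, here is a matrix-valued function, a 2-D rotation R(t), stored as a matrix whose entries are lambda expressions, evaluated pointwise (the helper name `at` is made up for illustration):

```python
import math

# A matrix-valued function represented as a matrix of lambda expressions:
# each entry of the rotation matrix R(t) is itself a lambda in t.
R = [[lambda t: math.cos(t), lambda t: -math.sin(t)],
     [lambda t: math.sin(t), lambda t:  math.cos(t)]]

def at(M, t):
    """Evaluate a matrix of unary lambdas at the point t."""
    return [[entry(t) for entry in row] for row in M]
```

Evaluating `at(R, 0)` yields the identity matrix; the matrix of lambdas plays the role of the λ-abstraction, and `at` plays the role of application.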

~~~
carapace
Paul Halmos' work on Algebraic Logic may be relevant.

(And see also Conal Elliott's "Compiling to Categories":
[http://conal.net/papers/compiling-to-categories/](http://conal.net/papers/compiling-to-categories/))

------
ssijak
I don't know what to call this effect, but often I look to learn about
something specific and later that day the same topic pops up on the front page
of HN. Earlier today I googled "algebra and lambda calculus" and variations of
that, and lo and behold, this piece is on the front page a few hours later. Is
this some kind of advanced geek ad tracking or what? :D

~~~
Zalastax
It's the frequency illusion (also known as the Baader-Meinhof phenomenon).

[https://rationalwiki.org/wiki/Frequency_illusion](https://rationalwiki.org/wiki/Frequency_illusion)

~~~
aaachilless
Also known colloquially as 'blue car syndrome'.

[https://www.urbandictionary.com/define.php?term=Blue%20Car%2...](https://www.urbandictionary.com/define.php?term=Blue%20Car%20Syndrome)

------
emmanueloga_
I wonder if this correspondence could be used for program
transformations/optimizations?

code -tx-> polynomial -tx-> simplification -tx-> [possibly more efficient]
code
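
A toy version of that pipeline (all names below are made up for illustration): translate an arithmetic expression into a polynomial in one variable, where like terms combine automatically, then read simplified code back off the coefficients.

```python
# Toy pipeline: code -> polynomial -> simplified code.
# A polynomial in x is a dict {exponent: coefficient}.

def p_const(c): return {0: c} if c else {}
def p_x():      return {1: 1}

def p_add(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
        if out[e] == 0:            # like terms cancel: the simplification step
            del out[e]
    return out

def p_mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def emit(p):
    """Read (possibly more efficient) code back off the coefficients."""
    if not p:
        return "0"
    return " + ".join(f"{c}*x**{e}" for e, c in sorted(p.items()))

# (x + 1)*(x - 1) - x*x collapses to the constant -1.
x, one = p_x(), p_const(1)
expr = p_add(p_mul(p_add(x, one), p_add(x, p_const(-1))),
             p_mul(p_const(-1), p_mul(x, x)))
```

Here `expr` comes out as `{0: -1}`: the x² terms cancel during translation, so the emitted code is a constant instead of three multiplications.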

~~~
espeed
My burgeoning conjecture (as hinted at in a previous comment) is that all can
be unified, transformed, and optimized for modern hardware under the matrix
model of computation -- type theory, the lambda calculus, the actor model,
neural nets, graph computing -- I'm starting to see a path to where all models
are aligned across vectors of unity.

Here are some of the correspondences I've been looking at...

* Graph algos in the lang of linear algebra now realized and encoded into GraphBLAS [1]

* The three normed division algebras are unified under a complex Hilbert space [2]

* Ascent sequences and the bijections discovered between four classes of combinatorial objects [3]

* Dependent Types and Homotopy Type Theory [4]

* Bruhat–Tits buildings, symmetry, and spatial decomposition [5]

* Distributive lattices, topological encodings, and succinct representations [6]

* Zonotopes and Matroids and Minkowski Sums [7]

* Holographic associative memory and entanglement renormalization [8]
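
For a taste of the first correspondence (graph algorithms written in the language of linear algebra, the GraphBLAS idea), here is a minimal sketch: breadth-first search as repeated matrix-vector products over the boolean (or, and) semiring. Plain Python, hypothetical function names:

```python
def bool_matvec(A, v):
    """y = A v over the boolean semiring: '+' is or, '*' is and."""
    n = len(A)
    return [any(A[i][j] and v[j] for j in range(n)) for i in range(n)]

def bfs_levels(A, src):
    """BFS depth of each vertex via repeated semiring matvecs; -1 if unreachable."""
    n = len(A)
    level = [-1] * n
    frontier = [i == src for i in range(n)]
    depth = 0
    while any(frontier):
        for i in range(n):
            if frontier[i] and level[i] == -1:
                level[i] = depth
        # next frontier = neighbors of the current one, minus visited vertices
        nxt = bool_matvec(A, frontier)
        frontier = [nxt[i] and level[i] == -1 for i in range(n)]
        depth += 1
    return level
```

On the path graph 0-1-2-3 (symmetric adjacency matrix) this returns levels [0, 1, 2, 3] from source 0: each matvec is one BFS wavefront, which is essentially what GraphBLAS executes with optimized sparse kernels.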

[1] Graph Algorithms in the Language of Linear Algebra (Jeremy Kepner)
[http://www.mit.edu/~kepner/](http://www.mit.edu/~kepner/) Discussion:
[https://news.ycombinator.com/item?id=18099520](https://news.ycombinator.com/item?id=18099520)

[2] Division Algebras and Quantum Theory (John Baez)
[http://math.ucr.edu/home/baez/rch.pdf](http://math.ucr.edu/home/baez/rch.pdf)

[3] (2 + 2)-free posets, ascent sequences and pattern avoiding permutations
[pdf]
[https://www.sciencedirect.com/science/article/pii/S009731650...](https://www.sciencedirect.com/science/article/pii/S0097316509001885/pdf)

[4] Cartesian Cubical Computational Type Theory: Constructive Reasoning with
Paths and Equalities [pdf]
[https://www.cs.cmu.edu/~rwh/papers/cartesian/paper.pdf](https://www.cs.cmu.edu/~rwh/papers/cartesian/paper.pdf)

[5] Bruhat–Tits buildings and p-adic Lie groups
[https://en.wikipedia.org/wiki/Building_(mathematics)](https://en.wikipedia.org/wiki/Building_\(mathematics\))

[6] Distributive lattices and Stone-space dualities
[https://en.wikipedia.org/wiki/Distributive_lattice#Represent...](https://en.wikipedia.org/wiki/Distributive_lattice#Representation_theory)

[7] Solving Low-Dimensional Optimization Problems via Zonotope Vertex
Enumeration [video]
[https://www.youtube.com/watch?v=NH_CpMYe3tw](https://www.youtube.com/watch?v=NH_CpMYe3tw)
[https://en.wikipedia.org/wiki/Zonohedron](https://en.wikipedia.org/wiki/Zonohedron)

[8] Entanglement Renormalization (G Vidal)
[https://authors.library.caltech.edu/9242/1/VIDprl07.pdf?hovn...](https://authors.library.caltech.edu/9242/1/VIDprl07.pdf?hovno=/)
Holography
[https://en.wikipedia.org/wiki/Holographic_associative_memory](https://en.wikipedia.org/wiki/Holographic_associative_memory)

Google Scholar: [https://scholar.google.com/scholar?q=related:Yi-GtarGxh0J:sc...](https://scholar.google.com/scholar?q=related:Yi-GtarGxh0J:scholar.google.com/&scioq=entanglement+renormalization+and+holography&hl=en&as_sdt=0,44&as_vis=1)

~~~
Zalastax
Can you expand on what you mean? What is "the matrix model of computation" and
"vectors of unity"?

Which actor model are you talking about? The variants are very different, and
Hewitt's original paper is mainly cited for coming up with the name and
kicking off the field rather than for inventing a usable model.

Type theory is even vaster. Are we talking homotopy type theory? Calculus of
constructions? System F?

