
A circuit-like notation for lambda calculus (2015) - apsec112
https://csvoss.com/circuit-notation-lambda-calculus
======
csvoss
Author here. Thanks for the appreciation!

Since this piece was written, I have discovered another notation by John Tromp
which I consider to be the most breathtakingly beautiful:

\- [https://tromp.github.io/cl/diagrams.html](https://tromp.github.io/cl/diagrams.html)

Here used by Paul Crowley to describe Graham's number:

\- [https://mindsarentmagic.org/2020/02/19/a-picture-of-grahams-number/](https://mindsarentmagic.org/2020/02/19/a-picture-of-grahams-number/)

\- [https://mindsarentmagic.org/2020/02/24/some-more-numbers-as-lambda-calculus/](https://mindsarentmagic.org/2020/02/24/some-more-numbers-as-lambda-calculus/)

~~~
plutonorm
I've been thinking about ways to feed algorithms into a neural network. This
looks perfect! Just give the images to a convolutional NN.

~~~
ford_o
Why not feed it a text?

------
dvt
Noteworthy: Frege developed the first "visual" (2-dimensional) way of doing
logic with his Concept Script[1]. Linear logic also uses a similar "tree-like"
structure for proofs[2].

I was going to also mention Alligator Eggs, but that already made it in the
post :)

[1] [https://www.math.uwaterloo.ca/~snburris/htdocs/scav/frege/frege.html](https://www.math.uwaterloo.ca/~snburris/htdocs/scav/frege/frege.html)

[2] [https://plato.stanford.edu/entries/logic-linear/](https://plato.stanford.edu/entries/logic-linear/)

~~~
voidhorse
If you get a chance to read it, the _Begriffsschrift_ (literally “concept
writing”) in which Frege explains the system and uses it is a fascinating
read. Last time I checked it’s pretty hard to find a copy—mostly university
libraries have it.

~~~
dvt
Yep, it's incredibly ahead of its time (written in the late 1800s)! Had to
read parts of it for a course on (you guessed it) Lambda Calculus :)

------
foxes
I do love diagrams, and I love thinking about some problems in terms of
diagrams, e.g. Feynman diagrams, some enumerative problems, knot theory, etc.
It can be kind of deep. I think they are good for when you can do some
graphical manipulation to prove a statement, or to help gain extra insight.
Usually there is some monoidal category floating around.

Perhaps one area this could be useful is program simplification? It would be
nice if I could take a more complicated thing like a Haskell program or
function and then do some graphical manipulation to see how to rewrite it.
Kind of reminds me of pointfree too. Sometimes that can lead to interesting
insights, where you spot something more general or simpler. You are just
focusing on the functions (==diagrams) rather than the strands leading in.

~~~
saeranv
Back in architecture (as in physical buildings, not software) school, our
professors always encouraged us to represent our ideas as diagrams to clarify
our thoughts. This is a kind of design-and-thinking feedback loop popularized
by the Bauhaus.

------
jmholla
I'm surprised no one has pointed out LabVIEW[0]. It's a graphical programming
language in which it would be very easy to implement the concepts here. It is
proprietary, but worth a look if you're interested in applying this.

Full disclosure, I used to work for National Instruments, hence my familiarity
with it.

[0]: [https://en.m.wikipedia.org/wiki/LabVIEW](https://en.m.wikipedia.org/wiki/LabVIEW)

------
moonchild
The notation is presented somewhat confusingly; this definition given for +:

    \x. \y. (\s. \z. x s (y s z))

isn't different from

    \x. \y. \s. \z. x s (y s z)

and usually syntax sugar is assumed such that you can just say:

    \xysz.xs(ysz)

which is much easier to read.
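For what it's worth, the definition is easy to check by running it; here's a quick JavaScript sketch (the numeral encodings and the `toInt` helper are my own, not from the article):

```javascript
// Church numerals as curried JS arrow functions.
const zero = (s) => (z) => z;
const one  = (s) => (z) => s(z);
const two  = (s) => (z) => s(s(z));

// The "+" from the article: \x. \y. \s. \z. x s (y s z)
const plus = (x) => (y) => (s) => (z) => x(s)(y(s)(z));

// Collapse a Church numeral to an ordinary number for inspection.
const toInt = (n) => n((k) => k + 1)(0);

console.log(toInt(plus(one)(two))); // 3
```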

Personally, I find the textual, symbolic representation much easier to reason
about; my eyes just glaze over the circuit diagrams.

------
kmill
There's a graphical notation you can get from string diagrams for a "reflexive
object" (an inscrutable reference that will help if you know how to deal with
2-categories graphically already:
[https://ncatlab.org/nlab/show/reflexive+object](https://ncatlab.org/nlab/show/reflexive+object))

The main difference is that instead of a lambda abstraction box, you make the
input branch off the output. That is, you have lambda nodes with two outputs
and one input. One output is the lambda expression, one output is the
expression's argument, and the input is the value of the evaluated function.

I've been wanting to understand non-standard diagrams -- might they be
continuations or some other kind of computation?

------
openfuture
Reinventing string diagrams and other notation for symmetric monoidal
categories.

Graphicallinearalgebra.net is a good resource.

~~~
galaxyLogic
That is quite fascinating, including the fact that much of it seems to have
been invented in the 1600s. Why didn't they teach me this stuff in school?

Does diagrammatic Linear Algebra show us that in fact Lambda Calculus and
Linear Algebra are really very related?

------
louthy
Also good: To Dissect a Mockingbird [1]

1\. [http://dkeenan.com/Lambda/](http://dkeenan.com/Lambda/)

------
galaxyLogic
So how can we put this to practical use to describe (functional) programs
written in practical programming languages like, say, Lisp or JavaScript, or
even Haskell?

I assume that would require some additional notation for built-in concepts
like numbers, and for built-in named library functions?

------
galaxyLogic
Related: [https://ycombinator.chibicode.com/functional-programming-emojis](https://ycombinator.chibicode.com/functional-programming-emojis)

------
lordleft
An excellent article; I really enjoyed this visual take on the λ-calculus.
Knowing this would have made my Programming Language Theory course much
smoother back when I was a CS undergrad.

------
dang
If curious see also

2018
[https://news.ycombinator.com/item?id=16721427](https://news.ycombinator.com/item?id=16721427)

------
antidamage
Nice. Now we'll never be sure if buried wall decorations are aliens or future
us.

------
naikrovek
I got lost when the author explained zero and one.

Some folks just can't teach. Like, at all.

I write software daily, and I have for 24 years. I design my own CPUs and play
with logic gates in FPGAs all the time.

 _I understand this stuff._

The author of this article lost me when explaining zero and one...

~~~
vga805
Check out the explanation of church numerals and the successor function on the
lambda calculus wikipedia.

Church numerals are quite beautiful. We use the symbols 1, 2, 3, 4, etc., but
there's nothing intrinsic to those symbols that denotes what number they refer
to.

Church numerals, on the other hand, do encode the actual numbers. A Church
numeral is simply a function that takes two other functions as arguments,
call them a and b, and applies a to b N times, where N is the numeral you're
encoding.

So (using JS lambdas to demonstrate):

    0 = (a, b) => b       // applies a to b 0 times
    1 = (a, b) => a(b)    // applies a to b 1 time
    2 = (a, b) => a(a(b)) // and so on...
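The successor function works the same way in this style: it builds a numeral that applies a one more time than its argument does. A small sketch (the `succ` and `toInt` names are mine, not from Wikipedia):

```javascript
// Uncurried Church numerals, as in the examples above.
const zero = (a, b) => b;

// succ(n) is the numeral that applies a once more than n does.
const succ = (n) => (a, b) => a(n(a, b));

// Collapse a numeral to an ordinary number by counting applications.
const toInt = (n) => n((k) => k + 1, 0);

const one = succ(zero);
const two = succ(one);
console.log(toInt(two)); // 2
```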

~~~
somewhereoutth
Also check out Scott encoding - for numerals, each number is constructed by
embedding its predecessor (with 0 as per Church):

    0 = \f \x x
    N+1 = \f \x f N

To interrogate such a numeral, you supply a function that accepts the
predecessor numeral, and a constant for zero. Recursion is needed to effect a
fold.

On the face of it heavier than Church, but allows the recursive definition of
infinity. Lists and more complex structures can be similarly defined, again
with the possibility of recursively defined infinite components.

Then you can do fun stuff like 'taking' infinitely many elements from a
(possibly infinite) list, adding infinity to any number to get infinity, and
multiplying likewise. Division by zero naturally returns infinity by
construction, no convention necessary. 'Dropping' infinitely many elements
from an infinite list does not terminate, however - you can never reach the
first element of that result.
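The interrogation pattern described above can be sketched in JavaScript, following the JS style used earlier in the thread (the helper names `succ`, `pred`, and `toInt` are my own):

```javascript
// Scott numerals as curried JS lambdas: 0 = \f \x x,  N+1 = \f \x f N
const zero = (f) => (x) => x;
const succ = (n) => (f) => (x) => f(n);

// Interrogation: supply a handler for "has a predecessor" and a value
// for zero. Predecessor is just case analysis, no recursion needed:
const pred = (n) => n((p) => p)(zero);

// Recursion is needed for a fold, e.g. converting a numeral to an int:
const toInt = (n) => n((p) => 1 + toInt(p))(0);

const three = succ(succ(succ(zero)));
console.log(toInt(three));       // 3
console.log(toInt(pred(three))); // 2
```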

