
To Dissect a Mockingbird: A Graphical Notation for the Lambda Calculus (1996) - dpatru
http://dkeenan.com/Lambda/
======
motohagiography
I have this naive intuition that graphs are a universal encoding scheme: all
theorems of category theory can be expressed as graphs, with the implication
that there is a massive unifying leap forward in maths that will result from
expressing existing problems in terms consistent with such a representation.
Maybe it's literary sci-fi handwaving, but there must be a rules-based level
of abstraction that contains all we can conceive of and express. These
diagrams appear to be an example of it.

~~~
visarga
> I have this naive intuition that graphs are a universal encoding scheme

I have thought of the same thing in the domain of neural nets. You want to
understand an image? Build a graph of the objects and their relations. You
want to understand a text? Turn it into a graph. Need to simulate the
environment? You can represent it as a graph. Want to mine a huge database of
common facts? Represent it as a graph (a database of subject-relation-object
triplets). Even the execution of a program could be better conceptualised as
a graph.
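A minimal sketch of the triplet idea (all names here are made up for
illustration): a fact base is just a list of (subject, relation, object)
edges, and queries are pattern matches over them.

```python
# Tiny illustrative triple store: facts as (subject, relation, object)
# edges of a graph, queried by pattern matching. Names are invented.
triples = [
    ("paris", "capital_of", "france"),
    ("berlin", "capital_of", "germany"),
    ("france", "part_of", "europe"),
]

def query(s=None, r=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (r is None or t[1] == r)
            and (o is None or t[2] == o)]

# "What is the capital of France?"
assert query(r="capital_of", o="france") == [("paris", "capital_of", "france")]
```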

Graphs seem like a universal format. Graph Neural Nets have been a booming
subfield lately, ever since Thomas Kipf's paper [1], which in three years has
accumulated over 2300 citations.

The main difference between classical neural nets and GNNs is that GNNs have
permutation invariance. You can rename the nodes of the graph and still have
the same graph.

Previously, neural nets had only translation invariance (CNNs) and time
invariance (RNNs). Graphs, by virtue of their permutation invariance, would
solve the combinatorial explosion - a problem which limits those earlier
kinds of neural nets. Instead of learning all possible combinations of input
objects, you learn the pairwise relations; then you can generalise those
relations to new configurations.
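A toy sketch of what permutation invariance means in practice (illustrative
only, not any particular GNN library): one round of sum-aggregation message
passing gives the same answer no matter how the nodes are labelled.

```python
# One step of sum-aggregation message passing on a tiny directed graph.
# Renaming the nodes permutes the output identically: the computation is
# permutation invariant. Everything here is an illustrative toy.

def message_pass(features, edges):
    """New feature of each node = sum of its out-neighbours' features."""
    return {node: sum(features[dst] for (src, dst) in edges if src == node)
            for node in features}

features = {"a": 1.0, "b": 2.0, "c": 3.0}
edges = [("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")]

relabel = {"a": "c", "b": "a", "c": "b"}  # arbitrary renaming of nodes
features2 = {relabel[n]: f for n, f in features.items()}
edges2 = [(relabel[s], relabel[t]) for (s, t) in edges]

out1 = message_pass(features, edges)
out2 = message_pass(features2, edges2)

# Same result up to the same renaming: labels carry no information.
assert out2 == {relabel[n]: v for n, v in out1.items()}
```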

On a side note, the insanely popular transformer architecture (based on soft
attention) is a kind of implicit graph, where the connectivity is evaluated
based on the dot product similarity of the key and query objects.
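To make the "implicit graph" reading concrete, here's a pure-Python sketch
(illustrative, not a real transformer implementation): the softmaxed
query-key dot products form a soft adjacency matrix, one row of edge weights
per token.

```python
# Soft attention as an implicit weighted graph: edge weight i -> j is the
# softmaxed dot-product similarity of query i with key j. Toy sketch only.
import math

def attention_weights(queries, keys):
    weights = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
        m = max(scores)                      # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights.append([e / z for e in exps])  # row i = soft edges out of i
    return weights

queries = [[1.0, 0.0], [0.0, 1.0]]
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = attention_weights(queries, keys)

# Each row is a probability distribution: a soft adjacency row.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in W)
```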

[1]
[https://scholar.google.ro/scholar?hl=en&as_sdt=0%2C5&q=kipf+...](https://scholar.google.ro/scholar?hl=en&as_sdt=0%2C5&q=kipf+Semi-supervised+classification+with+graph+convolutional+networks&btnG=)

~~~
slifin
I'm definitely coming around to graph-first systems like Fulcro and Datomic,
with Datahike being a great Datomic introduction.

------
joe_the_user
Is there a relationship between the lambda calculus and Brainfuck and other
few-instruction-set languages?

Edit: well, an easy find,
[https://esolangs.org/wiki/Lambda_Calculus_to_Brainfuck](https://esolangs.org/wiki/Lambda_Calculus_to_Brainfuck)

~~~
posterboy
there's an isomorphism, because bf is np complete, as is simply typed lc.

simply speaking, bf can implement lc, and vice versa, which would prove the
claim.

Edit: the ugly bit is that IO is always an ugly hack and potentially makes the
program nondeterministic and thus impossible to prove a priori. but one can
probably prove that they are equivalently unprovable.
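As a side note on what "lc can implement computation" looks like in practice,
here is an illustrative sketch of Church numerals written as Python lambdas
(pure abstraction and application, nothing else):

```python
# Church numerals: a number n is "apply f n times". Addition is just
# composing the applications. Illustrative sketch in Python lambdas.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
assert to_int(add(two)(three)) == 5
```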

~~~
anchpop
> because bf is np complete, as is simply typed lc.

Do you mean turing complete?

------
galaxyLogic
Another approach for visualizing lambda calculus is presented in
[https://ycombinator.chibicode.com/functional-programming-emojis](https://ycombinator.chibicode.com/functional-programming-emojis).

It looks quite different and I wonder which one is better? Or are they the
same really?

~~~
tromp
Yet another is my Lambda Diagrams [1], an obsolete link for which appears at
the bottom of the article.

[1]
[https://tromp.github.io/cl/diagrams.htm](https://tromp.github.io/cl/diagrams.htm)

------
xorand
A js lambda to graphs parser and reducer
[https://mbuliga.github.io/quinegraphs/lambda2mol.html](https://mbuliga.github.io/quinegraphs/lambda2mol.html)

------
asplake
Oh wow, a reference to Laws of Form (George Spencer-Brown), takes me back!

~~~
bordercases
Definitely an underrated piece of work.

~~~
carapace
"The Markable Mark" site is a great exposition of the Laws of Form:
[http://www.markability.net/](http://www.markability.net/)

~~~
bordercases
I think I benefited less from the letter of the Law, than through the spirit
of it. My first exposure was very indirect coming from a decision analysis
textbook, that motivated the construction of decision trees from something
like the benefits of making distinctions, which were well-defined. (From that
representation it becomes simple to analyze what conditional probabilities are
relevant to your decision-making context.)

They cited GSB through Francisco Varela who proposed that making distinctions
was the fundamental operation of all cognitive thought. I found the idea
compelling (if we know the roots of thinking, could that give us the levers to
improve it?) but found Varela to be near-impenetrable.

So I picked up GSB in hopes that this thin tome could shed some light on the
matter.

My god. You start seeing distinctions, then you start seeing them everywhere.
You understand what computational types are and why the adjunctions are
ubiquitous – and important. You get a sense of why psychological time must
exist. You get why information always requires there to be an observer, and
how the combination of all perspectives will necessarily be empty - our
limitations to comprehension form the richness of our universe. You see the
freedom we have in letting some things be distinguished over others and why
people get confused about whether mathematics is discovered or invented.
Surely it is neither, or both: we let something be to find out what it is.

And the fact that you can draw a knot-theorist, a biologist, a decision
theorist, and a Taoist out from the conceptual framework is a testament to how
rich it is.

Still, eventually I'm going to have to learn how to calculate with it.

------
genezeta
Link has been updated

~~~
dang
Whoops - changed from
[http://www.cs.virginia.edu/~evans/cs655-S00/readings/mocking...](http://www.cs.virginia.edu/~evans/cs655-S00/readings/mockingbird.html).

This was an invited repost of
[https://news.ycombinator.com/item?id=890715](https://news.ycombinator.com/item?id=890715)
and I forgot to update the link from long ago.

See
[https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...](https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=by%3Adang%20repost%20invit&sort=byDate&type=comment)
for more about invited reposts. There is a list of them at
[https://news.ycombinator.com/invited](https://news.ycombinator.com/invited).

