To Dissect a Mockingbird: A Graphical Notation for the Lambda Calculus (1996) (dkeenan.com)
102 points by dpatru 58 days ago | 29 comments

I have this naive intuition that graphs are a universal encoding scheme, all theorems of category theory can be expressed as graphs, with the implication there is a massive unifying leap forward in maths that will result from expressing existing problems in terms that are consistent with such a representation. Maybe it's literary sci fi handwaving, but there must be a rules based level of abstraction that contains all we can conceive of and express. These diagrams appear to be an example of it.

> I have this naive intuition that graphs are a universal encoding scheme

I have thought of the same thing in the domain of neural nets. You want to understand an image? Build a graph of the objects and their relations. You want to understand a text? Turn it into a graph. Need to simulate the environment? You can represent it as a graph. Want to mine a huge database of common facts? Represent it as a graph (database of triplets subject-relation-object). Even the execution of a program could be better conceptualised as a graph.

Graphs seem like a universal format. Graph neural nets have been a blooming subfield lately, ever since Thomas Kipf's paper [1], which in three years has accumulated over 2300 citations.

The main difference between classical neural nets and GNNs is that GNNs have permutation invariance. You can rename the nodes of the graph and still have the same graph.

Previously, neural nets only had translation invariance (CNNs) and time invariance (RNNs). Permutation invariance would let graphs tame the combinatorial explosion that limits those earlier models: instead of learning all possible combinations of input objects, you learn the pairwise relations, and then you can generalise those relations to new configurations.
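A toy sketch (plain numpy, my own made-up example, not Kipf's actual architecture) of what this buys you. Strictly, a single graph-convolution layer is permutation *equivariant*: relabelling the nodes just permutes the output rows the same way, so a sum/mean readout at the end is fully invariant.

```python
import numpy as np

def gnn_layer(A, X, W):
    # One toy graph-convolution step: aggregate neighbour features via the
    # adjacency matrix A, then apply a learned linear map and nonlinearity.
    return np.tanh(A @ X @ W)

rng = np.random.default_rng(0)
A = rng.integers(0, 2, (4, 4)).astype(float)  # adjacency of a 4-node toy graph
X = rng.normal(size=(4, 3))                   # node features
W = rng.normal(size=(3, 3))                   # "learned" weights

P = np.eye(4)[[2, 0, 3, 1]]                   # a permutation = node relabelling

out = gnn_layer(A, X, W)
out_perm = gnn_layer(P @ A @ P.T, P @ X, W)   # same graph, nodes renamed

# Equivariance: renaming the nodes just permutes the output rows.
assert np.allclose(P @ out, out_perm)
# Invariance after a sum readout: the graph-level representation is identical.
assert np.allclose(out.sum(axis=0), out_perm.sum(axis=0))
```

A CNN gets the analogous property for translations only; here you get it for all n! relabellings for free.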

On a side note, the insanely popular transformer architecture (based on soft attention) is a kind of implicit graph, where the connectivity is evaluated based on the dot product similarity of the key and query objects.
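To make the "implicit graph" reading concrete, here is a minimal numpy sketch of soft dot-product attention (my own toy shapes, not any particular transformer implementation): the softmaxed score matrix is exactly a dense weighted adjacency matrix, with each row a probability distribution over the nodes it attends to.

```python
import numpy as np

def soft_attention(Q, K, V):
    # Edge weight i -> j is the softmax of the dot product q_i . k_j.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # 'weights' plays the role of a (dense, data-dependent) adjacency matrix.
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.normal(size=(5, 8))
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))

out, adj = soft_attention(Q, K, V)
# Each row of the implicit graph's adjacency matrix sums to 1.
assert np.allclose(adj.sum(axis=1), 1.0)
```

So a transformer is in effect a GNN on the complete graph, where the edge weights are recomputed from the data at every layer.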

[1] https://scholar.google.ro/scholar?hl=en&as_sdt=0%2C5&q=kipf+...

I'm definitely coming around to graph first systems like Fulcro and Datomic, with Datahike being a great Datomic introduction

Thanks for the link to Kipf's paper, I was not aware of it. I'm a mathematician working on graph rewrite systems, I would like to explore in the NN direction, in case you're interested contact me.

What kinds of graphs? There are a number of graphical representations that turn out to be incredibly useful in category theory, but they're far from equivalent. IIRC, https://arxiv.org/abs/1803.05316 introduces some of them.

Just objects and morphisms. The types of each are found in category theory, which abstracts up to these, but they are bounded/encompassed by the limits we know of regarding elementary ideas like cliques, cycles, paths, colouring, and various morphisms.

Compression, folding, encoding, and other information theory ideas have analogues represented as these, and the bottom up/depth first search of the problem space seems to come at it from the wrong direction.

We're well into indulging Internet crank territory here but I'm naively speculating that for every known theorem, there is a consistent object/morphism representation of it, and the rules we're getting good at finding for these relationships (graphs) will yield insights we were missing for lack of an encompassing consistent abstraction.

Graphs are a natural way to express structures: things composed of parts which are somehow related to each other, connected to each other.

What are some alternatives to graphs? Well some kind of language, say a programming language. But underlying any program written in any programming language is a graph of the parts of the program and their connections to each other. For instance a variable may be written in one section of the code and read in another, and if it is the same variable then those locations in code are part of the graph showing the data-flow of the program.
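The variable example can be made concrete with a toy dataflow pass over Python's own syntax tree (my own contrived snippet; real dataflow analysis handles scopes, branches, etc.): link each read of a name to the line of its most recent assignment, and the edges are your graph.

```python
import ast

src = """
x = 1
y = x + 2
x = y * 3
"""

defs = {}    # name -> line of its latest assignment so far
edges = []   # (definition, use) pairs: the dataflow graph
for stmt in ast.parse(src).body:
    # Link every name read in this statement to its latest definition.
    for node in ast.walk(stmt):
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
            if node.id in defs:
                edges.append((f"{node.id}:{defs[node.id]}", f"line {stmt.lineno}"))
    # Then record any assignments this statement makes.
    if isinstance(stmt, ast.Assign):
        for t in stmt.targets:
            if isinstance(t, ast.Name):
                defs[t.id] = stmt.lineno

# The program's dataflow graph: x defined on line 2 flows into line 3, etc.
assert edges == [("x:2", "line 3"), ("y:3", "line 4")]
```

The text of the program is linear, but the structure underneath it was a graph all along.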

So it is my intuition too that graphs are the fundamental representation of reality as we see it, the parts of reality and the connections and relations between those parts.

Knowledge is knowledge about containers and parts and how those relate to each other. We can describe it in natural language or perhaps in algebra, but really the picture we are expressing in any language is that of wholes and parts and their connections.

This is my intuition about it; I don't know if such a general intuition is much good in actual theory formation. Theories are graphs, I think.

I guess what I'm after here is Semantic Web, which was supposed to be able to represent all knowledge, and would be a graph.

Since there's a correspondence between proofs and programs (Curry-Howard), maybe you're just looking for a graph-based visual programming language.

But given the failure (of adoption) of dataflow programming languages I don't think there will really be that much of a jump in understanding just by visualizing stuff as graphs.

Similarly, all propositions in modal logic can be expressed as Kripke models, which correspond quite closely to graphs.
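A Kripke model really is just a directed graph plus a valuation; here is a toy sketch (my own three-world example) where "necessarily p" and "possibly p" become simple graph queries over the accessibility relation:

```python
# Worlds, an accessibility relation (directed edges), and which atoms hold where.
worlds = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w3")}
val = {"p": {"w2", "w3"}}

def box(p_worlds):
    # "Necessarily p" holds at w iff p holds in every world accessible from w
    # (vacuously true at worlds with no successors).
    return {w for w in worlds
            if all(v in p_worlds for (u, v) in R if u == w)}

def diamond(p_worlds):
    # "Possibly p" holds at w iff p holds in some world accessible from w.
    return {w for w in worlds
            if any(v in p_worlds for (u, v) in R if u == w)}

assert box(val["p"]) == {"w1", "w2", "w3"}   # w3 vacuously, having no edges out
assert diamond(val["p"]) == {"w1", "w2"}
```

Evaluating a modal formula is just walking edges, which is why the graph picture is so natural here.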

As good a reference as I could find... https://rkirsling.github.io/modallogic/

> Maybe it's literary sci fi handwaving, but there must be rules based level of abstraction that contains all we can conceive of and express.

> rules based level of abstraction

that's not literature, that's bad grammar.


Please. If only egregious crimes against the language were the bar.

I think, similarly, that there is an underlying graphical rewriting formalism for all of mathematics, category theory included. Category theory is somehow a step towards it, but we don't keep categories in our heads, only the graph reducer.

Which language expresses itself in the most intuitive way is a somewhat ill-defined problem, as far as I understand it. Humans are fuzzy, messy objects, and our languages are too.

Lambda calculus is partially fun just because of the minimal extent of its description.
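Just how minimal the description is shows in how little code a toy evaluator takes. A sketch (my own tuple encoding; naive substitution that assumes bound names are unique, i.e. no capture-avoidance) that does leftmost-outermost beta-reduction:

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', func, arg)

def subst(term, name, value):
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "lam":
        # Naive: assumes bound names are globally unique (no capture-avoidance).
        if term[1] == name:
            return term
        return ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def reduce_step(term):
    # One leftmost-outermost reduction step (not reducing under lambdas).
    if term[0] == "app":
        f, x = term[1], term[2]
        if f[0] == "lam":                       # beta-reduction
            return subst(f[2], f[1], x), True
        f2, did = reduce_step(f)
        if did:
            return ("app", f2, x), True
        x2, did = reduce_step(x)
        return ("app", f2, x2), did
    return term, False

def normalize(term):
    did = True
    while did:
        term, did = reduce_step(term)
    return term

# (\x. x) (\y. y)  reduces to  \y. y
I = ("lam", "x", ("var", "x"))
J = ("lam", "y", ("var", "y"))
assert normalize(("app", I, J)) == J
```

Three term constructors and one rewrite rule, and you have a Turing-complete language.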

Is there a relationship between the lambda calculus and Brainfuck and other few-instruction languages?

Edit: well, an easy find, https://esolangs.org/wiki/Lambda_Calculus_to_Brainfuck

I wrote a Brainfuck interpreter in (binary) Lambda Calculus [1], which was included in my 2012 IOCCC submission [2]. Writing a lambda calculus interpreter in BF would be a fun challenge.

[1] https://tromp.github.io/cl/Binary_lambda_calculus.html#Brain...

[2] http://www.ioccc.org/2012/tromp/hint.html

there's an isomorphism, because bf is np complete, as is simply typed lc.

simply speaking, bf can implement lc, and vice versa, which would prove the claim.

Edit: the ugly bit is that IO is always an ugly hack and potentially makes the program nondeterministic and thus impossible to prove a priori. but one can probably prove that they are equivalently unprovable.

> because bf is np complete, as is simply typed lc.

Do you mean Turing complete?

Another approach for visualizing lambda calculus is presented in https://ycombinator.chibicode.com/functional-programming-emo... .

It looks quite different, and I wonder which one is better? Or are they really the same?

Yet another is my Lambda Diagrams [1], an obsolete link for which appears at the bottom of the article.

[1] https://tromp.github.io/cl/diagrams.htm

A js lambda to graphs parser and reducer https://mbuliga.github.io/quinegraphs/lambda2mol.html

Oh wow, a reference to Laws of Form (George Spencer-Brown), takes me back!

Definitely an underrated piece of work.

"The Markable Mark" site is a great exposition of the Laws of Form: http://www.markability.net/

I think I benefited less from the letter of the Law, than through the spirit of it. My first exposure was very indirect coming from a decision analysis textbook, that motivated the construction of decision trees from something like the benefits of making distinctions, which were well-defined. (From that representation it becomes simple to analyze what conditional probabilities are relevant to your decision-making context.)

They cited GSB through Francisco Varela who proposed that making distinctions was the fundamental operation of all cognitive thought. I found the idea compelling (if we know the roots of thinking, could that give us the levers to improve it?) but found Varela to be near-impenetrable.

So I picked up GSB in hopes that this thin tome could shed some light on the matter.

My god. You start seeing distinctions and then you start seeing them everywhere. You understand what computational types are and why the adjunctions are ubiquitous – and important. You get a sense of why psychological time must exist. You get why information always requires there to be an observer, and how the combination of all perspectives will necessarily be empty - our limitations to comprehension form the richness of our universe. You see the freedom we have in letting some things be distinguished over others, and why people get confused about whether mathematics is discovered or invented. Surely it is neither, or both: we let something be to find out what it is.

And the fact that you can draw a knot-theorist, a biologist, a decision theorist, and a Taoist out from the conceptual framework is a testament to how rich it is.

Still, eventually I'm going to have to learn how to calculate with it.

Link has been updated

Whoops - changed from http://www.cs.virginia.edu/~evans/cs655-S00/readings/mocking....

This was an invited repost of https://news.ycombinator.com/item?id=890715 and I forgot to update the link from long ago.

See https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... for more about invited reposts. There is a list of them at https://news.ycombinator.com/invited.
