I have thought of the same thing in the domain of neural nets. You want to understand an image? Build a graph of the objects and their relations. You want to understand a text? Turn it into a graph. Need to simulate the environment? You can represent it as a graph. Want to mine a huge database of common facts? Represent it as a graph (a database of subject-relation-object triples). Even the execution of a program could be better conceptualised as a graph.
Graphs seem like a universal format. Graph Neural Nets have been a blooming subfield lately, ever since Thomas Kipf's paper, which in three years has accumulated over 2,300 citations.
The main difference between classical neural nets and GNNs is that GNNs have permutation invariance: you can relabel the nodes of a graph and still have the same graph, and the network's output respects that.
Previously, neural nets only had translation invariance (CNNs) and time invariance (RNNs). Graphs tame the combinatorial explosion that limits those earlier models precisely through permutation invariance: instead of learning every possible combination of input objects, you learn the pairwise relations, and those relations generalise to new configurations.
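To make that concrete, here is a toy numpy-only sketch (the mean-aggregation layer and all names are my own, not any particular GNN from the literature): relabelling the nodes relabels the per-node outputs the same way (equivariance), and a summed readout over nodes does not change at all (invariance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, adjacency matrix A, node feature matrix X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 3))  # shared weights, the same for every node

def gnn_layer(A, X, W):
    # Each node averages its neighbours' features, then applies shared weights.
    deg = A.sum(axis=1, keepdims=True)
    return np.tanh((A @ X / deg) @ W)

# Relabel (permute) the nodes with a permutation matrix P.
P = np.eye(4)[[2, 0, 3, 1]]

out = gnn_layer(A, X, W)
out_perm = gnn_layer(P @ A @ P.T, P @ X, W)

# Equivariance: the relabelled input gives the identically relabelled output...
assert np.allclose(P @ out, out_perm)
# ...and a sum readout over nodes is fully permutation invariant.
assert np.allclose(out.sum(axis=0), out_perm.sum(axis=0))
```

The point is that `W` never sees node identities, only features and neighbourhoods, which is why the same weights generalise to new node configurations.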
On a side note, the insanely popular transformer architecture (based on soft attention) is a kind of implicit graph, where the connectivity is computed from the dot-product similarity of the key and query vectors.
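A rough sketch of that view (shapes and names are mine): the softmaxed query-key scores act as a dense, learned "adjacency matrix" over tokens, with a soft edge distribution per token rather than fixed connectivity.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 8                      # 5 tokens, feature dimension 8
X = rng.normal(size=(n, d))
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))

Q, K = X @ Wq, X @ Wk

# Attention scores: an implicit weighted graph over the tokens,
# computed from query/key dot products rather than fixed in advance.
scores = Q @ K.T / np.sqrt(d)
adjacency = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# Each row is a soft edge distribution: it sums to 1.
assert adjacency.shape == (n, n)
assert np.allclose(adjacency.sum(axis=1), 1.0)
```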
Compression, folding, encoding, and other information-theoretic ideas have analogues represented as graphs, and the bottom-up/depth-first search of the problem space seems to come at it from the wrong direction.
We're well into indulging-Internet-crank territory here, but I'm naively speculating that for every known theorem there is a consistent object/morphism representation of it, and that the rules we're getting good at finding for these relationships (graphs) will yield insights we were missing for lack of an encompassing, consistent abstraction.
What are some alternatives to graphs? Well, some kind of language, say a programming language. But underlying any program written in any programming language is a graph of the parts of the program and their connections to each other. For instance, a variable may be written in one section of the code and read in another; if it is the same variable, then those two locations in the code are nodes in the graph describing the data flow of the program.
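As a toy illustration (using Python's stdlib `ast`; the three-line program is made up), you can recover exactly those write-to-read edges from the text of a program:

```python
import ast
from collections import defaultdict

source = "x = 1\ny = x + 2\nz = y * x\n"

tree = ast.parse(source)
writes = defaultdict(list)   # name -> line numbers where the variable is written
reads = defaultdict(list)    # name -> line numbers where the variable is read

for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            writes[node.id].append(node.lineno)
        elif isinstance(node.ctx, ast.Load):
            reads[node.id].append(node.lineno)

# One data-flow edge for every (write, read) pair of the same variable.
edges = sorted((name, w, r) for name in writes
               for w in writes[name] for r in reads.get(name, []))
print(edges)   # [('x', 1, 2), ('x', 1, 3), ('y', 2, 3)]
```

A real data-flow analysis would track scopes and reassignments, but even this crude version shows the graph hiding under the linear text.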
So it is my intuition too that graphs are the fundamental representation of reality as we see it, the parts of reality and the connections and relations between those parts.
Knowledge is knowledge about containers and parts and how those relate to each other. We can describe it in natural language or perhaps in algebra, but really the picture we are expressing in any language is that of wholes and parts and their connections.
This is my intuition about it; I don't know if such a general intuition is much good in actual theory formation. Theories are graphs, I think.
But given the failure (of adoption) of dataflow programming languages, I don't think there will really be much of a jump in understanding just from visualizing stuff as graphs.
As good a reference as I could find...
> rules based level of abstraction
that's not literature, that's bad grammar.
Lambda calculus is fun partly because of the minimal extent of its description.
Edit: well, an easy find, https://esolangs.org/wiki/Lambda_Calculus_to_Brainfuck
Simply speaking, BF can implement LC, and vice versa, which would prove the claim.
Edit: the ugly bit is that IO is always an ugly hack; it potentially makes the program nondeterministic and thus impossible to prove things about a priori. But one can probably prove that they are equivalently unprovable.
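To back up the point about LC's minimal description, here is a toy normal-order evaluator (my own encoding: terms as tagged tuples; it dodges variable capture by always renaming, and assumes terms small enough for recursion):

```python
# Toy evaluator for the untyped lambda calculus.
# Terms: ('var', name), ('lam', name, body), ('app', fn, arg).
import itertools

fresh = (f"_v{i}" for i in itertools.count())

def subst(term, name, value):
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'app':
        return ('app', subst(term[1], name, value), subst(term[2], name, value))
    bound, body = term[1], term[2]
    if bound == name:              # `name` is shadowed here; stop
        return term
    new = next(fresh)              # rename bound variable to avoid capture
    body = subst(body, bound, ('var', new))
    return ('lam', new, subst(body, name, value))

def reduce_(term):
    # Normal order: contract the leftmost-outermost redex first.
    if term[0] == 'app':
        fn = reduce_(term[1])
        if fn[0] == 'lam':
            return reduce_(subst(fn[2], fn[1], term[2]))
        return ('app', fn, reduce_(term[2]))
    if term[0] == 'lam':
        return ('lam', term[1], reduce_(term[2]))
    return term

# Church numerals: ONE = λf.λx. f x, PLUS = λm.λn.λf.λx. m f (n f x)
ONE = ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'), ('var', 'x'))))
PLUS = ('lam', 'm', ('lam', 'n', ('lam', 'f', ('lam', 'x',
    ('app', ('app', ('var', 'm'), ('var', 'f')),
            ('app', ('app', ('var', 'n'), ('var', 'f')), ('var', 'x')))))))

def church_to_int(term):
    # Count the nested applications f(f(...x)) inside λf.λx. ...
    f, x = term[1], term[2][1]
    body, n = term[2][2], 0
    while body[0] == 'app' and body[1] == ('var', f):
        body, n = body[2], n + 1
    assert body == ('var', x)
    return n

two = reduce_(('app', ('app', PLUS, ONE), ONE))
assert church_to_int(two) == 2   # 1 + 1 reduces to the Church numeral 2
```

Note how much of the language fits in a screenful; this is the minimality being praised, and pure IO-free reduction is exactly the part that stays clean.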
Do you mean Turing complete?
It looks quite different and I wonder which one is better? Or are they the same really?
They cited GSB through Francisco Varela who proposed that making distinctions was the fundamental operation of all cognitive thought. I found the idea compelling (if we know the roots of thinking, could that give us the levers to improve it?) but found Varela to be near-impenetrable.
So I picked up GSB in hopes that this thin tome could shed some light on the matter.
My god. You start seeing distinctions, and then you start seeing them everywhere. You understand what computational types are and why the adjunctions are ubiquitous, and important. You get a sense of why psychological time must exist. You get why information always requires there to be an observer, and how the combination of all perspectives will necessarily be empty: our limitations to comprehension form the richness of our universe. You see the freedom we have in letting some things be distinguished over others, and why people get confused about whether mathematics is discovered or invented. Surely it is neither, or both: we let something be to find out what it is.
And the fact that you can draw a knot-theorist, a biologist, a decision theorist, and a Taoist out from the conceptual framework is a testament to how rich it is.
Still, eventually I'm going to have to learn how to calculate with it.
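For what it's worth, the primary arithmetic itself is small enough to calculate with mechanically. A sketch (encoding Spencer-Brown's cross as a pair of parentheses is my own choice):

```python
# The primary arithmetic of Laws of Form has just two rules,
# writing the "cross" as a pair of parentheses:
#   calling:  ()() = ()   (a distinction repeated is the same distinction)
#   crossing: (()) =      (a distinction re-crossed cancels to the unmarked state)

def normalize(expr: str) -> str:
    # Rewrite until neither rule applies; each step shortens the string,
    # so this terminates.
    while True:
        reduced = expr.replace("(())", "").replace("()()", "()")
        if reduced == expr:
            return expr
        expr = reduced

# Every expression of the primary arithmetic reduces to the marked
# state "()" or the unmarked state "".
assert normalize("((()))") == "()"
assert normalize("()()()") == "()"
assert normalize("(()())") == ""
```

The calculus of the book proper (the primary algebra, with variables) needs more machinery, but the constant case really is just these two rewrites.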
This was an invited repost of https://news.ycombinator.com/item?id=890715 and I forgot to update the link from long ago.
See https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... for more about invited reposts. There is a list of them at https://news.ycombinator.com/invited.