> Parsed as a linear sequence of tokens, transformed to an AST, compiled.
So are Tree Notation programs. They are parsed as a linear sequence of tokens -- when I look at https://jtree.treenotation.org/designer/ , the inputs are 1-D sequences of characters separated by spaces and newlines. Those are parsed one at a time and built into a tree structure.
If the "2 dimensionality" refers to the fact that you have lines (Y dimension) and each line is split into words (X dimension), then this is not very new either -- this is how TCL and Unix shells view the world.
> I'm claiming that these are chasmically inferior to 2 and 3 dimensional languages.
I see no evidence of it. What can Tree Notation do that lisp/scheme can't?
> In Tree Notation the shape of your source code is immutable during parsing.
Are you talking about how characters on the screen correspond to nodes of your AST? If yes, then this seems extremely limiting. Each screen line is not that long -- 120 characters is probably a reasonable limit. What if you want a longer payload? What if the output of your program has nodes with longer payloads which no longer fit on the screen?
Or is the idea that you take a complex tree language, like s-exprs, and _restrict_ it so it is representable as punctuation-free text? I guess this could have educational value, but I fail to see how such restrictions make the language "chasmically superior".
> The AST and CST have the same shape.
Like Lisp programs?
> Tree programs are isomorphic to geometric trees.
So are Lisp programs.
> Human brains parse text in parallel, in 2+-dimensions, and rely on geometry for meaning
Are you talking about character recognition or text parsing? The former is in parallel, but the latter is not. You can test it yourself trivially -- press the Zoom button in your browser. The font size changes, all the words move around, but you can still parse the text just fine.
> Computers can to. And my bet is that they will.
They already do! All the image recognition tasks (including character recognition) look at all the letters in parallel. You might have heard about this technology, it is called "neural networks". Moreover, they made special chips which work on a huge _2-dimensional_ matrix of numbers all at the same time!
> but this will change everything. Or I could be wrong.
I am not sure what is there to change, given that every part of this has been in successful use for decades.
Today's machines impose a way of doing things. But in the future
there will be a new type of machine. In the interim we can harvest
a lot of benefits from Tree Languages by working within the constraints
created by today's register model of computing.
> on the screen
The vast majority of research and experimentation with
Tree Notation does not happen on the screen at all. Experiments
are done in higher dimensions in different mediums.
> So are Lisp programs.
No they're not. There is a proof from 2017, posted out there somewhere, showing why they are not.
The 2-D layout is not an "afterthought". It is not a "beautified" version of Lisp. The 2-D layout is an essential ingredient of Tree Notation. You can change the rules of Lisp/SExp to achieve the same 2-D layout, and what you end up with is Tree Notation! So while Tree Notation can be thought of as one way to write S-Expressions, it's a very special way that has a 2-D geometric isomorphism that Lisp does not have. And that goes on to make a huge difference.
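A rough sketch of the isomorphism being claimed (my own illustration, not the official jtree serializer): take a tree in S-expression shape, nested (head, children) tuples, and render it so each node is one line whose X offset equals its depth. The text layout then *is* the tree's geometry:

```python
def to_tree_notation(node, depth=0):
    """Render a (head, children) tuple tree as indented lines.

    Each node becomes one line; its X offset (indent) equals its
    depth, so the character grid is the geometric tree itself.
    """
    head, children = node
    lines = [" " * depth + head]
    for child in children:
        lines.extend(to_tree_notation(child, depth + 1))
    return lines

# (html (head (title)) (body)) as nested tuples
sexpr = ("html", [("head", [("title", [])]), ("body", [])])
print("\n".join(to_tree_notation(sexpr)))
```

In Lisp the same tree can be pretty-printed many ways; here the mapping from node position to (X, Y) coordinate is fixed and bijective.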
> Today's machines impose a way of doing things. But in the future there will be a new type of machine. [...] working within the constraints created by today's register model of computing.
Which "way of doing things" would that be, and how exactly is this constraining us? Sure, most physical CPUs do use registers, but a lot of languages define their own machine models which are not register-based. We have stack-based machines, vector-based machines, matrix-based machines, functional languages with tree evaluation models, declarative programming, and lots more.
> No they're not. There is a proof out there somewhere from 2017 proving why they are not.
faq> It is largely accurate to say Tree Notation is S-Expressions without parenthesis. But this makes them very different! Tree Notation gives you fewer chances to make errors, easier program concatenation and ad hoc parser writing, easier program synthesis, easier visual programming, easier code analysis, and more.
This looks like a restricted subset of lisp's s-expressions. And the explanation only talks about incremental improvements over s-expr, not radically new capabilities.
Is there a better document I have missed?
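One concrete reading of the FAQ's "fewer chances to make errors / easier program concatenation" claim (my own interpretation, not from the FAQ): every string over words, spaces and newlines is already a well-formed Tree Notation document, so concatenating or truncating documents can never produce a syntax error, whereas S-expressions can be cut into invalid text:

```python
def sexpr_parse_ok(text):
    """Minimal well-formedness check for S-expressions: balanced parens."""
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # close before open
                return False
    return depth == 0

# A truncated S-expression is a syntax error:
print(sexpr_parse_ok("(body (p World)"))   # False -- unbalanced

# The analogous Tree Notation fragment "body\n p World" has no delimiters
# to unbalance; any prefix, suffix, or concatenation of documents is still
# a parseable document.
doc = "title Hello" + "\n" + "body\n p World"
print(len(doc.splitlines()))               # 3 lines, still well-formed
```

Whether this closure property amounts to a "chasmic" advantage or an incremental one is exactly the question being debated here.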
> it's a very special way that has a 2-D geometric isomorphism ... that goes on to make a huge difference.
Are we talking about geometric isomorphism in the mathematical sense, as in graph embedding on a plane? If yes, I am very interested to see what kind of useful results one can get out of it. While graph embedding is useful for certain problems, it seems generally inapplicable to common computing tasks.
Or even better, is there a sample code which demonstrates the advantages of isomorphism? The biggest sample code I found was "Grammar"[0] and it seems to be sadly one-dimensional...
This. Imagine you didn't have to transform a program
into a thousand permutations to execute on 1D registers. Imagine
just loading the program into 2D/3D registers and having it
compute the result in a single cycle.
AFAIK no one has ever built these higher dimensional registers before
(done some searching, asked Vinod once, and a number of other cpu folks
but no one gets it), but I'm very confident (10%) they will work and be
better than quantum, but I don't have the money to fire at a rocket pace
so this will take time. I've pitched it to DARPA, and do expect them at some point to take it up.
> I searched for this proof
Sorry, I thought I made a latex version of this somewhere. I found
some old notes that I uploaded:
> Or even better, is there a sample code which demonstrates the advantages
> of isomorphism?
One of the more recent ones that starts to hint at the benefits of a 2-D
syntax and the isomorphism is this video: ("If Spreadsheets and Programming
Languages had a baby")
> Imagine just loading the program into 2D/3D registers and having it compute the result in a single cycle.
You mean like the SIMD extensions in Intel CPUs -- SSE operates on 16 bytes at a time and AVX-512 on 64, potentially doing 64 operations in a single cycle?
Or FPGAs, which can have arbitrarily wide registers -- I have seen a design where an entire video scanline, 600+ pixels, is processed in one cycle?
Or video cards, which have multiple parallel executors -- a high-end NVIDIA card can process 64x64 pixels in parallel, with each pixel getting its own little execution core.
Or maybe the famous Cray-1 machine [0], which had vector instructions, such as a single instruction a(1..1000000) = addv b(1..1000000), c(1..1000000)?
Or maybe a Connection Machine? It sounds as close to a 3-D machine as you can make it -- it even had a 3-D shape, with bits arranged in a cube [1]:
> The CM-1, depending on the configuration, has as many as 65,536 individual processors, each extremely simple, processing one bit at a time. CM-1 and its successor CM-2 take the form of a cube 1.5 meters on a side, divided equally into eight smaller cubes.
You say that "no one has ever built these higher dimensional registers" -- but as you can see, there are plenty of examples one can come up with. It is entirely possible that your idea is different from all of these, but it is not clear how. I think your website would benefit greatly from a comparison with those other computing devices, as well as more details as to what this "tree architecture" looks like -- how many ALUs, memory blocks, instruction decoders etc. you want to have and how they are interconnected.
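The Cray-style vector instruction in the examples above maps directly onto today's array hardware; in NumPy (a sketch of the idea -- the add is dispatched to vectorized SIMD kernels, not literally a single cycle) the addv example is one expression:

```python
import numpy as np

n = 1_000_000
b = np.arange(n, dtype=np.int64)   # b(1..1000000)
c = np.ones(n, dtype=np.int64)     # c(1..1000000)

# a(1..1000000) = addv b(1..1000000), c(1..1000000)
# One logical operation over a million elements, no explicit loop.
a = b + c

print(a[0], a[-1])  # 1 1000000
```

The source code is "1-D text", yet the execution model is already wide and parallel -- which is the thrust of the objection.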