Plain text files are an incredibly powerful way of "storing ASTs"; the advantages are far too numerous to list. The primary one is complete and total interoperability with every other tool that accepts plain text files.
I will bet you £100 that we won't be programming by speech and gestures in even 25 years' time, as the disadvantages are enormous.
Get a better editor - one that lets you operate on semantic units. And/or get a better programming language - one that lets you operate on code as an AST.
I think you're making good points, but please let me know when semantic editors are available for Go, Rust, JavaScript and Python.
One other advantage of directly manipulating an AST - it's very easily converted into any language runtime you want. It won't matter whether you are targeting the JVM, V8 or native code; you can do it all from the same AST. The same thing is possible with plain text code, but not quite as common.
> I think you're making good points, but please let me know when semantic editors are available for Go, Rust, JavaScript and Python.
I think there are ports of Paredit-like features for those languages in Emacs too, and all the other semantic features of Emacs itself work with them. As long as the language's major mode properly defines what is, e.g., a function or a symbol, you can use semantic navigation and editing.
> One other advantage of directly manipulating an AST - it's very easily converted into any language runtime you want. It won't matter whether you are targeting the JVM, V8 or native code; you can do it all from the same AST. The same thing is possible with plain text code, but not quite as common.
I don't think this is something that an AST gives you. An AST is just a more machine-friendly representation of what you typed in the source code. Portability between platforms depends on what bytecode/machine code gets generated from that AST. And since the AST is generated from the source anyway, as one of the first steps of compilation, if a platform can emit the right set of platform-specific instructions from the AST, it could have compiled the original source too.
And an AST doesn't solve the problem of calling platform-specific functions and libraries anyway.
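To make the disagreement concrete, here's a minimal sketch in Python (all the names are mine, and calling these "backends" is generous): one AST, two targets. Note that this rather supports the reply's point - the portability lives in the backends, not in the AST itself.

    # One tiny AST: integer literals and additions.
    from dataclasses import dataclass

    @dataclass
    class Num:
        value: int

    @dataclass
    class Add:
        left: object
        right: object

    def evaluate(node):
        """A 'native' backend: interpret the AST directly."""
        if isinstance(node, Num):
            return node.value
        return evaluate(node.left) + evaluate(node.right)

    def emit_js(node):
        """A 'V8' backend: compile the same AST to JavaScript source."""
        if isinstance(node, Num):
            return str(node.value)
        return f"({emit_js(node.left)} + {emit_js(node.right)})"

    tree = Add(Num(1), Add(Num(2), Num(3)))
    print(evaluate(tree))  # 6
    print(emit_js(tree))   # (1 + (2 + 3))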
Sure, there are many (excellent) AST-based editors. However, an AST editor that is built around a keyboard and requires you to learn to type at 160 WPM won't help tablets become good code-creation devices.
Data structures are shapes. A shape is better drawn than described in text.
My point is - there are AST-based editors and languages (e.g. Emacs with Paredit and Common Lisp), and you can see that even in that mode of "thinking" about code, you can't beat the speed, efficiency and flexibility of the keyboard.
> Data structures are shapes. A shape is better drawn than described in text.
Draw me a linked list. Tell me how much faster it is than typing:
(list 1 2 (foobar) (make-hash-table) (list "a" "b" "c") 6)
Even on a visual keyboard on a tablet, it's faster to type than to draw data structures. A flat sheet of glass may give us the (x, y) coordinates of a touched point more easily and precisely, but it sacrifices many other important things - like tactile feedback and the ability to feel shapes. With a physical keyboard, you're employing more of the features your body and mind have, and that's why it's faster than a touchscreen.
Unless you can find a completely different way of designing the UX, a tablet won't be a suitable device for creation. None of the currently existing solutions come close to beating a physical keyboard and a mouse.
> "list joe (subtle gesture) mary (subtle gesture) dave end
How will you go about drawing "joe" and "mary"? Is it faster than typing? Note that you can't always select stuff from dropdowns - you often have to create new symbols and values.
> Everyone in the room I'm in now can naturally talk at 200 words per minute.
How fast can they track back and correct a mistake made three words before? Or take the last sentence and make it a subnode of the one before that? Speech is not flexible enough for the task unless you go full AI and have software that understands what you mean.
> I gave an example of opening an existing structure and modifying it in the comment you're replying to.
Sorry, I misunderstood what you meant by "subtle gesture" there.
Anyway, in the original comment you said:
> Data structures are shapes. A shape is better drawn than described in text.
I'll grant you that speaking + gestures may not be a bad way of entering and manipulating small data structures and performing simple operations. But until we have technology that can recognize speech and gestures reliably and accurately (and tablets with OSes that don't lag and freeze for half a second at random), physical keyboards will still be much faster and much less annoying.
But I still doubt you could extend that to more complex editing and navigating tasks. Take a brief look at the things you can do in Paredit:
Consider the last three or four subsections and ask yourself how to solve them with touch, gestures and speech. Are you going to drag some kind of symbolic representation of "tree node" to move a bunch of elements into a sublevel? How about splitting a node into two at a particular point? Joining them together? Repeating this (or a more complex transformation) action 20 times in a row (that's what a decent editor has keyboard macros for)? Searching in code for a particular substring?
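To pick two of those: here's roughly what "splitting a node into two" and "joining them together" do to the tree, modelled with nested Python lists standing in for s-expressions (the helper names are mine, not Paredit's):

    def split_node(tree, i, at):
        # Split the sublist at index i into two siblings at position `at`,
        # roughly what Paredit's split does: (a b c d) -> (a b) (c d).
        node = tree[i]
        return tree[:i] + [node[:at], node[at:]] + tree[i + 1:]

    def join_nodes(tree, i):
        # The inverse: join the sublists at i and i+1 back into one node.
        return tree[:i] + [tree[i] + tree[i + 1]] + tree[i + 2:]

    expr = ["let", ["a", 1, "b", 2], "body"]
    print(split_node(expr, 1, 2))  # ['let', ['a', 1], ['b', 2], 'body']
    print(join_nodes(split_node(expr, 1, 2), 1))  # back to the original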
Sure, it can be done with the modes of input you're advocating, but I doubt it can be done in an efficient way that would still resemble normal speech and interaction. There are stories on the Internet of blind programmers using Emacs who achieve speeds comparable to sighted ones. This usually involves using voice pitch and style as a modifier, and also using short sounds for more complex operations - like "ugh" for "function" and "barph" for "public class", etc. So yeah, with enough trickery it can be done. But the question is: unless you can't use the screen and the keyboard, why do it?
> Like in a DOM? Easily: grab it and move it, just like you do it in DevTools today, except with your hands rather than a mouse.
DevTools are a bad example for this task. Using the keyboard is much faster and more convenient than the mouse. Cf. Paredit.
> But until we have technology that can recognize speech and gestures reliably and accurately (and tablets with OSes that don't lag and freeze for half a second at random)
Totally agreed. Theoretically, you should just be able to gesture a list with your hands, say "joe mary dave", and the software would know from your tone that that's three items and not one.
I don't know that much about Lisp and s-expressions aside from the fact that it can edit its own AST. That's not a way of avoiding the question, it's just my own lack of experience.
> Are you going to drag some kind of symbolic representation of "tree node" to move a bunch of elements into a sublevel?
Yes, I already think of a tree of blocks/scopes when editing code with a keyboard, visualising that seems reasonable.
> Repeating this (or a more complex transformation) action 20 times in a row (that's what a decent editor has keyboard macros for)?
Here's the kind of thing I use an AST for: finding function declarations and turning them into function expressions. I imagine that would be (something to switch modes) "find function declarations and make them function expressions". Likewise "rename all instances of 'res' to 'result'", with either tone or placement to indicate the variable names. More complex operations on the doc would be very similar to complex operations in the doc.
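The rename, at least, is an ordinary AST transformation today. A minimal sketch using Python's stdlib ast module (the example above is JavaScript; Python just keeps the snippet self-contained and runnable):

    import ast

    class RenameRes(ast.NodeTransformer):
        # The AST-level equivalent of "rename all instances of 'res' to
        # 'result'": rewrite every Name node, leaving strings and other
        # text that merely contains "res" untouched.
        def visit_Name(self, node):
            if node.id == "res":
                node.id = "result"
            return node

    src = "res = compute()\nprint(res)"
    tree = RenameRes().visit(ast.parse(src))
    print(ast.unparse(tree))
    # result = compute()
    # print(result)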
> Searching in code for a particular substring?
Easy. Have a gesture or tone that makes 'search' a word for operating on the document, not in it.
> Sure, it can be done with the modes of input you're advocating, but I doubt it can be done in an efficient way that would still resemble normal speech and interaction.
Yep, I don't think it would still resemble normal speech and interaction either, the same way reading code aloud doesn't. It would, however, be easier to learn, removing the need to type efficiently as well as the (somewhat orthogonal) unnecessary ability to create syntax errors.
> DevTools are a bad example for this task. Using the keyboard is much faster and more convenient than the mouse. Cf. Paredit.
Not sure if I'm reading you correctly here: typing DOM methods on a keyboard in DevTools is obviously slower than a single drag-and-drop operation. Doing it directly with your hands would obviously be even faster than with the mouse.
Stepping back a little: I guess some people assume speech and gestures won't get significantly better; I assume they will.
> I will bet you £100 that we won't be programming by speech and gestures in even 25 years' time, as the disadvantages are enormous.
Unless AI advances considerably. For years I've imagined myself talking to the small specialized AI living in my computer, giving it instructions that it would translate to code...
Natural language is a terrible way to specify software.
Writing software is about telling a blazingly fast, literal moron what to do. The ambiguity inherent in natural language is not a good way of telling such a thing what to do.
I suddenly envision a "The Feeling of Power" (https://en.wikipedia.org/wiki/The_Feeling_of_Power)-type scenario, where one programmer discovers that he or she can understand and create binary patterns without relying on the AI.
And if we _are_, I'ma start buying Spotify ads that just shout out "semicolon exec open bracket mail space dash s space passwords space owned at gmail dot com space less than space slash etc slash passwd close bracket semicolon" at top volume.