There is an app called Lisping which allows writing a variety of Lisps (Scheme and Clojure) on iPads. The great thing about it was that there was minimal typing - the interface made the AST very clickable, with minimal cursor movement. When I tried it, it was such a great way to edit code on an iPad. Demo video: https://www.youtube.com/watch?v=nHh00VPT7L4
>the interface made the AST very clickable, with minimal cursor movement.
Every so often the concept of visual programming comes up, where people wish that instead of editing text they could somehow directly manipulate an AST. Wouldn't it be funny if it turned out that it was Lisp they were looking for all along?
Lisp is certainly amenable to this, and structured editors for lisp have been around for a long time, but the concept is not specific to lisp except inasmuch as it's much easier to implement since the syntax is so similar to the AST.
For instance, I like to use Paredit [1] for elisp and Racket hacking, and there's a similar mode for Haskell in emacs [2].
Wow, I watched the whole video and all I can say is -- huh?
That looked totally painful, and not at all fun. Granted, much of this is due to the fact that it's a touch interface, but I'm not seeing it as very useful.
I'm a huge Lisp/Scheme/Clojure fan, but I didn't find this at all impressive.
Yeah, it's really hard to see the utility in that.
That literally takes 10x longer to enter anything than it does in a decent editor. I imagine some vim and emacs wizards can edit even faster than that.
Dumb snark incoming but it's like using a hammer to chop wood.
I think the keyboard should always stay present, and there should be no dialog appearing when he edits the symbols; there should only be a text input at the location of the symbol. Also, some icons could be present on the right for additional actions.
Coincidentally I just started working on an editor for touch screens that relies on displaying nesting like this. I also worked on a syntactic zoom which I'm sort of proud of, but don't know how practical it could be yet.
The core thesis of this work is "that text is an inappropriate way to model structured program code". I think that is wrong: text is a great way to model structured program code.
That doesn't mean that the ways we use text for programming these days couldn't be improved and augmented. For instance, the example in Figure 3 is very pretty and readable, but there is no reason this couldn't be what you see while still editing everything as text. Basically, it could be a more advanced form of syntax highlighting. This is difficult, if not impossible, to achieve with Java because of Java's syntax.
Therefore I think when approaching programming from a user interface point of view, it is crucial to also work on a better syntax for the programming language itself. Ideally, work like described in this paper should go hand in hand with work to develop better syntaxes.
IMO, the big problem with text as a representation of (imperative) code is that it only represents the program's execution flow; the visualization of the flow of data must be done in a person's mental model. The side effect of this is that a programmer must build the proper mental model anytime they want to make a change, or they risk mucking up the data. Object-oriented programming was an attempt at localizing data changes, at the cost of complicating the overall flow of the program's execution.
Representing the flow of data rather than the program is one advantage many functional languages have, at the cost of losing the visualization of the program's flow. Often this doesn't matter, but sometimes it does, and when it does, the programmer faces the same challenge in reverse: now it's the execution flow that has to be held in a mental model.
So ultimately, even the best syntax can only help you model one thing - either the flow of the data or the flow of the program - and the other flow must be modeled internally. It would be cool if both could be visualized by our tooling simultaneously.
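To make that concrete, here's a small made-up Common Lisp contrast (the function names are mine; any imperative/functional pair would do):

;; Imperative: the text spells out the control flow; the reader has to track
;; how RESULT changes over time to recover the data flow.
(defun sum-of-squares-loop (numbers)
  (let ((result 0))
    (dolist (n numbers result)
      (incf result (* n n)))))

;; Functional: the text spells out the data flow (numbers -> squares -> sum);
;; the order in which the work happens is now the implicit part.
(defun sum-of-squares-pipeline (numbers)
  (reduce #'+ (mapcar (lambda (n) (* n n)) numbers)))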
Yeah, it's a pretty good idea. I've been using lisp and expand-region[1] and it just makes this frame-based editing dream come alive.
I've just recorded a video where I refactor some code for fun. The code adds two arrays: one has strings in it, like ["1" "2" "3"], the other has numbers, like [1 2 3].
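The code being refactored is roughly in this spirit (a made-up Common Lisp sketch for concreteness, not the actual code from the video, which looks Clojure-flavored):

(mapcar (lambda (s n) (+ (parse-integer s) n))
        '("1" "2" "3")
        '(1 2 3))
;; => (2 4 6)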
Not just with a mouse. The frames can be detected automatically. Imagine a typical IDE where:
while (...) {
automatically inserts the matching `}` and moves the cursor between the braces. But in the frame-based editor, the `{` and `}` are elided from the presentation, replaced with a colored block or frame around the body of the while loop.
+++ while (condition)
+ --- if (condition)
+ - *** actions
+ ------------------
++++++++++++++++++++++
Scratch (the example in Figure 1) is definitely a mouse-driven interface, but beyond that and its visual presentation, it's more a cousin of the frame-based model than a precise instance of it.
It gives you a better (arguably?), more structural way to view code, and to edit it with both text input and the mouse.
(curmudgeon)A well-formed program is "defined" as what the compiler expects. Like literate programming, this hides from me what's being presented to the compiler, and sets me adrift from what the program actually is.(/curmudgeon)
It also reminds me vaguely of writing with FrameMaker.
> (curmudgeon)A well-formed program is "defined" as what the compiler expects. Like literate programming, this hides from me what's being presented to the compiler, and sets me adrift from what the program actually is.(/curmudgeon)
(idealist)Well, there's no reason the compiler has to accept programs only as plain text. It's actually a pretty poor representation of your program: the language has to go out of its way to parse it. Internally, your program is an AST; this editor works over frames, which are closer to the AST than they are to plain text; perhaps they could both use the same non-textual file format.(/idealist)
> (idealist)Well, there's no reason the compiler has to accept programs only as plain text. It's actually a pretty poor representation of your program: the language has to go out of its way to parse it. Internally, your program is an AST; this editor works over frames, which are closer to the AST than they are to plain text; perhaps they could both use the same non-textual file format.(/idealist)
For that you have Lisps, which hit the sweet spot by being human-writable ASTs that are trivial to parse (at least if you don't play with reader macros too much).
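Concretely, READ already hands you that tree; there is no separate parser to write:

(read-from-string "(if a (b) (or c d))")
;; => (IF A (B) (OR C D))   ; plus the index where reading stopped

The nested list you get back is, for all practical purposes, the AST.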
>this hides from me what's being presented to the compiler
Isn't that the whole point of abstractions? Like, when you write a function call, you see "foo()" but the compiler sees all the individual instructions in the function. The same with macros, templates, etc.
And on a purely visual level, the example 'jtsummers gave just seems to be an extension of code folding and other tools that let you focus on one part of the code while ignoring another.
The essence of programming is being capable of constantly walking up and down the abstraction ladder. Trying to hide behind a perfect abstraction is as problematic as the belief that abstractions are evil because they're too complicated.
Abstractions are transient. You want to use them when they help you, but it's good to be able to step down and verify what's going on.
>The essence of programming is being capable of constantly walking up and down the abstraction ladder ... it's good to be able to step down and verify what's going on.
Good, yes, desirable, often, but I wouldn't call it "essential" to the art of programming.
I once worked for a company whose data was managed by a COBOL application on a mainframe a couple hundred miles away; there was no walking down that abstraction ladder (without shelling out a five-figure support fee), so our in-house IT department had to take advantage of the fact that the 3270 terminal emulator we used to access the mainframe could be driven through a COM interface.
I'm sure there are lots of other examples of jobs that don't have the luxury of being able to step down and verify lower-level code, and I wouldn't characterize them as missing out on some "essential" aspect of programming.
It doesn't necessarily hide anything. Imagine Python augmented by this display. With C or Java you can show the curly braces but fade them into the background a bit and highlight the overall structure. I've seen similar things done for Lisp in the past, all the parens were there but deemphasised in favor of structural highlighting like this frame based editing concept.
Syntax errors can be a large impediment to learning to program, so that's one big advantage. Frame-based editing has some other advantages too, some of which you'd see in more complex IDEs, like being aware of the context of what the user is trying to do.
In the case of BlueJ this is not surprising: it's made by the same people and in fact this frame-based editor has been incorporated into BlueJ (this is mentioned briefly in the paper).
Greenfoot really was the biggest inspiration, but I'd like to build something more general purpose: pass in a grammar file and then you can use the editor for that language.
This is pretty incredible work wrapped in an unassuming name. It basically presents an entirely new way to interact with any programming language. You could make an emacs mode that worked this way, for example.
I could see this being one of the fundamental ways of overcoming Lisp bias. Lisp still isn't mainstream. Clojure was a nice attempt, but it fell short. You can find companies that use Clojure, but it's not the lingua franca of any domain (except perhaps text editors).
If you were to expose a way to write Lisp without dealing with any parens at all, it might have a chance of sparking the interest of younger programmers long enough to seduce them to Lisp's other benefits: when you write in Lisp, you're writing in the abstract syntax tree normally generated by other languages. This allows you to write macros, which transform the tree in arbitrary ways. It's trivial to write a program to analyze your entire codebase in arbitrary ways (what are the most popular function calls? what does my dependency graph look like? which functions are unreferenced?), which is normally a herculean effort in other languages.
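As a hypothetical sketch of that codebase-analysis point (the function and approach are mine, and it's deliberately naive about packages and quoted data):

(defun call-counts (file)
  ;; Read a source file as plain data and count which symbols appear in
  ;; operator position; READ is the only "parser" involved.
  (let ((counts (make-hash-table)))
    (labels ((walk (form)
               (when (consp form)
                 (when (symbolp (first form))
                   (incf (gethash (first form) counts 0)))
                 (loop for tail = (rest form) then (cdr tail)
                       while (consp tail)
                       do (walk (car tail))))))
      (with-open-file (in file)
        (loop for form = (read in nil nil)
              while form
              do (walk form))))
    counts))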
And it all comes down to syntax. `(if a (b) (or c d))` is so utterly foreign to most programmers compared to `if (a) { return b(); } else { return c || d; }` that it's nearly impossible to overcome the inertia long enough to persuade them that the tradeoff in readability is worth the power you get.
I think this work could be helpful here. When you throw a newcomer in front of a Lisp editor and say "Write a program," what goes through their minds? "How do I write an if statement?" "How do I call a function?" "How do I set a variable?" "How do I loop while a condition is true?" All of these are abstract concepts, not specific to any language, let alone Lisp. A frame-based editor could present those concepts directly, letting newcomers avoid dealing with parens altogether. Then you can introduce the idea of a macro, which is just a new way of combining existing concepts.
Regarding color: I've often wished for colored quasiquotation. `(foo `(bar `(,baz))) would be much easier to follow if your editor displayed the background color differently for each level of quasiquotation. It would also de-mystify expressions like (fn (x) `(define-macro foo (y) `(define ,y ,',x))), which can otherwise be incredibly difficult to reason about. (Thankfully nested quasiquotation is rare, but suffice to say I wish editors used color more effectively rather than stylistically.)
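For illustration, here are those levels marked with comments instead of background color (an annotation by hand, not an editor feature):

`(foo             ; quasiquotation level 1
   `(bar          ; level 2
      `(,baz)))   ; level 3; the comma pairs with this innermost backquote,
                  ; so BAZ isn't evaluated until that level eventually is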
Sadly, emacs is one of the only editors flexible enough to implement the concepts presented in this paper, and the intersection between emacs users and novice programmers is about as large as the intersection between hackers and MBAs. But you can flip it around: emacs is flexible enough. That means you can ship a custom preconfigured emacs designed specifically for Lisp hacking, set up to mimic Atom/Coda/Sublime/whatever.
>And it all comes down to syntax. `(if a (b) (or c d))` is so utterly foreign to most programmers compared to `if (a) { return b(); } else { return c || d; }`
That's a very strong claim without much evidence. Most programmers routinely deal with multiple syntaxes, many of which are weirder and less readable than lisp. A more likely explanation is that there just isn't a sufficiently useful lisp.
I'm fairly sure it's safe to say the C/Algol-derived syntax is more familiar to programmers, as the most popular and most used programming language in the world uses it.
The claim was not about popularity or even familiarity, it was that Lisp syntax is so utterly alien, it's the primary thing that stands in the way of lisp adoption.
I'm not worried but I appreciate the concern, I guess.
I look at the loader code I use in haskell w/ optparse-applicative and how much better it is than anything I've used in my damn life and I say, "Why is this considered unusable?"
Oh man, I 100% agree about color! I've worked on Snap! (which looks a lot like Scratch) and it has a feature called "zebra coloring" that's very similar. I've seen it be very effective with students.
Hacking up a friendly version of emacs does sound like an interesting project. I suspect it's an uphill battle to convince people to use it, but someone should try. :)
I keep hearing this kind of talk about lisp. I've never actually written anything in it, but if you asked me why, it would have nothing to do with parens or syntax. It has everything to do with the fact that, for most purposes, Python suits me just fine, and for almost all other purposes, I want a language that compiles to machine code.
Lisps just don't bring much to the table. Their dynamic-ness brings overhead that is not justified unless you're doing crazy macro stuff, and ad-hoc crazy macro stuff is not easy to put into code without hurting readability and comprehension.
Plenty of implementations of dialects of Lisp have compilers that produce machine code. I would even say that for Common Lisp implementations it is more common to have a compiler than to not have one. There are several great Scheme compilers too, and Racket has a JIT.
Here's a sample session with SBCL, where I define a function, which is immediately compiled, and then I request a disassembly:
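(Reconstruction of the kind of session meant; the disassembly output itself is omitted here because it varies across SBCL versions and platforms.)

(defun foo (n m)
  (declare (type fixnum m n))
  (* (1+ m) n))

(disassemble 'foo)   ; prints the native machine code SBCL compiled for FOO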
As far as overhead goes, the SHL makes room for tag bits (and notice that's only done once, not per math op), and CLC indicates that the function doesn't return multiple values. That is not significant overhead, and will absolutely destroy reference Python in speed.
Maybe it's worth pointing out that to get that code like that from SBCL it's not enough to declare that m and n are fixnums, you also need to promise the compiler that the multiplication won't overflow:
(defun foo (n m)
  ;; SAFETY 0 plus the THE FIXNUM assertion promise SBCL that the result
  ;; stays a fixnum, so it can emit untagged machine arithmetic.
  (declare (type fixnum m n) (optimize (safety 0)))
  (the fixnum (* (1+ m) n)))
The reason I didn't list it is because there's so many different ways to do it.
The optimization declaration can be made globally or per-file; declaring the type of the foo function (including parameters and return values) externally will fully cover this function without anything additional in the body; there are also helper macros to explicitly manage static types, etc.
So the type declarations don't need to be intrusive into the expressions themselves, and can be as transparent as you want. Of course, type inference also means that you don't have to specify every single type, while still getting the speed & checking benefit of fully typed code.
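A sketch of the non-intrusive variants described above (placement is illustrative; the point is that the body itself stays annotation-free):

;; Global (or per-file) optimization policy:
(declaim (optimize (speed 3) (safety 0)))

;; External type declaration covering parameters and the return value:
(declaim (ftype (function (fixnum fixnum) fixnum) foo))

;; The definition itself carries no type annotations:
(defun foo (n m)
  (* (1+ m) n))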
> Their dynamic-ness brings overhead that is not justified unless you're doing crazy macro stuff, and ad-hoc crazy macro stuff is not easy to put into code without hurting readability and comprehension.
I think that is today's major barrier to Lisp, but not in the way you think.
Most of Lisp's inventive features have crept into other languages by now, except for the in-language compile-time code transformation/generation (ie macros & homoiconicity).
The major phrase I want to point out is "crazy macro stuff". To people from other programming languages, code generation and metaprogramming are way out there concepts, only suitable for insane scenarios. People said the same thing about functional programming, yet any well-seasoned programmer today has a reasonable handle on its usefulness and advantages, and FP style is now used by programmers in many languages to contain complexity.
Lisp-style macro programming is a simple way to do metaprogramming, while in other languages it's an impedance-mismatch-riddled nightmare of separate wonky build steps. Once it is made simple, it's easy to add great conveniences, compile-time optimizations, and architectural support to your programming, with almost no-brainer effort.
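A small made-up example of that kind of no-brainer convenience (WITH-TIMING is not a standard macro, just an illustration):

(defmacro with-timing (label &body body)
  ;; Wrap any body of code and report how long it took; the macro writes the
  ;; boilerplate so every call site stays a one-liner.
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (multiple-value-prog1 (progn ,@body)
         (format t "~a took ~,3f seconds~%" ,label
                 (/ (- (get-internal-real-time) ,start)
                    internal-time-units-per-second
                    1.0))))))

;; Usage (assuming some *data* to sort):
;; (with-timing "sorting" (sort (copy-seq *data*) #'<))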
It is also much easier to human-parse Lisp code that utilizes a good per-project macro library than it is to parse other languages with boilerplate and sprawling behavioral dependencies injected all over their usage scenarios.
The catch-22 is that this style of super-easy metaprogramming is linked to code-is-data-is-code homoiconicity, and people still reject that at face value, which slows the propagation of metaprogramming (however, I think there is a continually slow-rising acceptance). Functional programming doesn't require any unique syntax, just semantics, so it was able to spread without that barrier.
Lisp has been compiled since 1960. Every major Common Lisp implementation is compiled, and some are only compiled; as in, every darn thing you type into the REPL, including (+ 2 2) is converted to native machine code and then branched to: there is no interpreter and hence the eval function compiles, too. That kind of thing was done long before "JIT" was a buzzword.
Macros make a tremendous contribution to readability and comprehension. I'm informed by 17 years of Lisp experience versus your zero.
The syntax actually enticed me to play with lisp, and I do like it a lot, but I never found strong reason to prefer using, say, Common Lisp over a more mainstream language/framework stack. I'm just starting to dig into Clojure and I think the combination of lisp syntax and strong support for, and encouragement towards, immutable data really brings a lot to the table in both reason-ability and efficiency.
Literally sets an approach that's hit its limits in stone. [And yet people complain about punctuation.]
Looking forward there will be books on programming with FBE. fbeop will be all the rage. Naturally we'll discuss the form of code. Is that bulge too far out? What is better: a plain of declarations that meets a mountain range of control loops and flow, or, is it better to "sprinkle the frame" with mini frames of plains and hills? There will be debates. Conferences will be held. Papers will be presented with analysis of old school code rendered in FBE. There will also be attempts to finally bring generics to Go, with FBE frontends with little angular < slots >. Efforts begin to teach monkeys to code.
[p.s. and where exactly is that crab in Fig. 1? Very confusing ..]