Unfortunately, the notion of editing an AST is by definition language-specific, so it's unlikely that someone could create a single structural editor and just have it work for everybody. Moreover, as they mention here for Lamdu, it often requires looking at the language slightly differently and enforcing some rules that wouldn't normally exist in the code. But with a decent foundation we can at least make writing the necessarily language-specific parts relatively straightforward.
For those new to this world and wondering why this stuff doesn't seem to exist: one of the biggest problems with projectional editors is the translation problem. How do you reconcile changes in your representation with changes in the underlying code? Can you reliably parse handwritten code into your AST representation, write it back, and so on, without any loss? What happens with handwritten styles that maybe don't fit into the projection's way of viewing the world? What if I know a better way to output the AST than you do? It's also particularly difficult to deal with change over time, since there are no true unique identifiers for bits of code.
The approach they're taking is likely the correct approach for the "future" - we should be designing languages and approaches to coincide with the ability to tool them. Even better would be to never have a handwritten format at all: the canonical representation is always the AST. And though you might be able to edit a projection of it as text, you're never at a loss for how to get back to your "good" representation. This is the world I think we ultimately need to get to, and you'll be seeing some really cool stuff from us in that vein early in 2014.
There are things we can do to have our cake (flow) and eat it (feedback) also. Code completion was one of the great boons of structured editing (introduced in Alice Pascal circa 1985), and by the late 90s we learned how to create "language aware" editors that could leverage this feature without flow disrupting structured editing. The same goes for static typing: we can, through some heavy type inference, infer semantic information responsively while the user is typing, and use that to provide responsive feedback.
I'm in the camp where the programming experience should be considered holistically. The IDE is a part of that experience, and so language design should occur concurrent with IDE design. With some smart incremental compilation magic along with language-specific rendering in the IDE, we can build programming experiences that provide the benefits of structured (and projectional) editing without the flow costs. Or at least, that is the premise of my research :)
Since the AST for such languages is s-expressions, and some people struggle with parens, this is an interesting "fun size" example of editing the AST.
The thing is, paredit is quite challenging for most people to adopt, due to what you mentioned about the "flow" they've already learned. Magnar Sveen has a great video about this at http://www.youtube.com/watch?v=D6h5dFyyUX0
Best quote: "If you think paredit is not for you, then you need to become the kind of person that paredit is for."
EDIT: To clarify, Magnar is quoting technomancy a.k.a. Phil Hagelberg.
It overrides some deletion commands to keep your code well-formed, but there are simple ways (like cutting text) to break those rules, leaving it ill-formed. (Currently, paredit won't start if it sees you're editing ill-formed text. But if it's already started, it'll keep running.)
I mention it because it's an example of structured editing of an extremely simple AST. And even people who want to use it (who want to leverage the so-called "straitjacket"), often find it quite hard to change their flow and adapt it.
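To make the "structured editing of an extremely simple AST" concrete, here is a toy sketch (in Python, not Emacs Lisp, and not paredit's actual implementation) of the invariant paredit enforces: a deletion is simply refused if it would unbalance the parens.

```python
# Toy sketch of paredit's guarded deletion: an edit is rejected if it
# would leave the s-expression text ill-formed. (Checks parens only.)

def balanced(text):
    """True if the parens in `text` are properly nested."""
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:           # closed more than we opened
                return False
    return depth == 0

def structural_delete(text, i):
    """Delete the character at index i only if the result stays
    well-formed; otherwise leave the text untouched."""
    candidate = text[:i] + text[i + 1:]
    return candidate if balanced(candidate) else text

src = "(defn add [a b] (+ a b))"
assert structural_delete(src, len(src) - 1) == src     # refuses to eat a ")"
assert structural_delete(src, 4) == "(def add [a b] (+ a b))"  # plain char ok
```

Real paredit goes much further (slurping, barfing, splicing), but the core idea is the same: the editor's primitive operations are closed over well-formed trees.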
That seems not to bode well for this being enthusiastically used generally. OTOH I suppose you could argue that a lot of people like auto-completing IDEs, and/or hate to type, so who knows.
I agree that structured and visual editing paradigms have failed. But ultimately all you're saying here is that structured approaches haven't offered enough positives to offset the cost of learning them. That may be true, but it would be good to know why.
My guess is that the data structures generated by the text of programs are just too varied and complex for any visual representation system to be of use. The programmer is interested in the final structure of their application and every intermediate one. Creating a program to represent those visually may well be harder than implementing the application itself. Multiply that by every application that could exist.
No, I'm claiming that bad flow, even after you've learned and adapted to a structured paradigm, needs to be solved directly. Even if you can provide really great feedback, the bad flow will still exist and cause inefficiencies not offset by enhanced feedback, which is aimed at an orthogonal set of problems.
Edit: a good analogy might be between agility and armor in a tank. You can obviously make the tank more survivable by improving either one, but its overall usage becomes more limited when emphasizing one feature at the expense of the other.
This is, well, false.
See, for example, the harmonia project at berkeley(http://harmonia.cs.berkeley.edu/harmonia/index.html), and ensemble before that (http://harmonia.cs.berkeley.edu/harmonia/publications/ensemb...)
This stuff has been done forever, and it's been "the next generation" forever. Things like separating structural editing from the language-specificness of an AST while still providing a sane and useful editor are paths that have been trodden before.
Not that this means people shouldn't try, but as you can see from just these two links, there has been a lot of research and effort put into this area in general. This is an area where I'd read a lot more than I'd sit down and code, because a lot of smart people have been this way before.
Case in point: Susan Graham (and maybe Tim Wagner), who led the Harmonia project (and many projects before that), is basically the go-to person in the world if you ever want to know anything about incremental analysis of programming languages (she also helped write gprof, among other things)
Harmonia was more about language-aware editing, Ensemble and Pan were more about structured editing.
> This stuff has been done forever, and it's been "the next generation" forever.
The Harmonia stuff never worked that well when I tried it. I've also read many of Graham and Wagner's papers on their tooling, and you know what....
> This is an area where i'd read a lot more than i'd sit down and code, because a lot of smart people have been this way before.
...it turns out we can do much better than what they did given per-language consideration. Once I decided not to use Harmonia for the Scala IDE, I came up with a way to incrementalize Martin's compiler fairly easily. It wasn't that hard in hindsight, but none of those fancy algorithms were really necessary at all! Instead, we just needed an incremental computation framework like Glitch to do the heavy lifting transparently, which eventually led to my work in live programming.
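The "heavy lifting done transparently" idea can be sketched in a few lines. This is a minimal illustration of dependency-tracked incremental recomputation, loosely in the spirit of systems like Glitch; it is not Glitch's actual API, and all names here are made up for the example.

```python
# Minimal incremental computation sketch: cells record which computations
# read them, and a computation only re-runs when one of its inputs changed.

class Cell:
    def __init__(self, value):
        self._value = value
        self.readers = set()            # computations that read this cell

    def get(self, reader=None):
        if reader is not None:
            self.readers.add(reader)    # track the dependency
        return self._value

    def set(self, value):
        if value != self._value:
            self._value = value
            for r in list(self.readers):
                r.invalidate()          # push dirtiness to dependents

class Computed:
    def __init__(self, fn):
        self.fn = fn
        self.dirty = True
        self.cached = None
        self.runs = 0                   # count real recomputations

    def invalidate(self):
        self.dirty = True

    def value(self):
        if self.dirty:
            self.cached = self.fn(self)
            self.runs += 1
            self.dirty = False
        return self.cached

a, b = Cell(1), Cell(2)
total = Computed(lambda me: a.get(me) + b.get(me))
assert total.value() == 3
assert total.value() == 3 and total.runs == 1   # cached, no re-run
a.set(10)
assert total.value() == 12 and total.runs == 2  # re-runs only when dirty
```

A real system layers a lot on top (cycles, rollback, fixpoints), but the appeal is exactly this: the user's code just reads and writes values, and the framework figures out what to recompute.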
So the moral of this story: don't trust everything you read from an academic conference. Often work is incomplete, not appropriate for your context, outdated, or even unworkable. Usually, it's incomplete and there is lots of room for improvement.
I don't think it can be called true or false that easily. Sure you can translate multiple languages into one AST representation (maybe embellished to allow round-tripping). But is it still the same language? I would say no. But I would go further and say that not even a language specific AST is the same language.
I think the idea that we can separate programming languages from their surface syntax and still claim they are the same languages is misguided. Yes, programming languages have semantics, but the user interface aspect is an inherent part of their identity, because it is an inherent part of what it means for humans to use language. There is no such thing as syntactic sugar.
This is what a next-generation programming environment must have. Primarily, one manipulates AST nodes, projected visually into lines of code. But, to preserve the comfort and flexibility of free-form text editing, the editor knows about non-AST text fragments, tree "holes", and other representations of intermediary editing state. (A big UI problem for sure!)
An editor that can deal with the ways in which code changes over time elegantly is what we really need. Since editing a plaintext projection discards code movement metadata, editing the AST directly is paramount. Without that metadata, the programming environment can't version control structured code automatically.
Looking forward to 2014!
To use a traditional flat editor (not a structured editor), but show inferred information around the code. I am thinking of visuals that are akin to http://explainshell.com/ but a lot more compact.
The inferred information need not be shown always; it could be shown dynamically based on the user's context (current line, current function, etc)
Genuinely asking, is this really a problem? It sounds solvable to me.
Unfortunately, we still have many in the PL community that don't think that way.
key quote: "In fact, the most powerful languages may initially have the least powerful tool support. The reason for this is that the language developer, like the language adopter, has to make a choice: whether to dedicate limited development resources towards language features, or towards tool support."
The main problem with these kinds of things is the incredibly sharp learning curve. It's more of a learning wall, really. It's a lot of effort to learn, and then it turns out that it's really hard to get in the mindset of defining new languages to solve problems, even when it's been made a lot easier than using yacc/lex/writing your own compiler backend/etc. I never really found a use for it, but I keep it filed away in the back of my mind in case one day I do.
It allows state-machines to be coded in an enhanced C like language with nice visual representations.
Not saying there's anything wrong with that, but it's worth emphasizing, imho.
Lamdu is open source.
The same is not true for C and Perl, where I use simple editors. I can think problems out on paper, or even build an abstract version of a whole complex app with just pen and paper.
The best kind of IDE is what I've seen in the embedded domain: one that assists you during work. You still have to read and internalize the documentation well, but the IDE will suggest improvements, let you probe ports and registers, and let you set breakpoints to see if things are actually going the way you think they should. Above all, you still have to learn the best practices the hard way; the IDE simply assists you along the way.
You see, I usually write in the functional style, so my code seldom has to ask arrays about their sizes, and when it does, it uses '[*]'. When I was back in for-loop land, I spoke like a for-eigner...
Indeed the fonts may not be as pretty as with Cairo etc but on the other hand there are very nice animations when edits make stuff move around.
The problem with these types of development tools is that they move your brain from thinking in human terms to thinking in a very structured way more attuned to machines.
This is a problem with functional programming in general: it is fundamentally anti-human. People don't think functionally but rather procedurally.
Full disclosure here, IDE maker so I have skin in this game :)
People really like to think by analogy and think based on relations. Functional programming makes this much simpler by giving you simple abstractions and, crucially, letting you not worry about extraneous machine details. In a functional language, even the order your code gets evaluated is below your level of abstraction.
Ultimately, functional programming lets you talk about what where imperative languages force you to talk about how. That's pro-human. It's exposing the underlying machine and computation--imperative programming, in a word--that's anti-human!
People really like to think by analogy and metaphor. Go objects!
Granted, if a human wants to describe a physical process, they will use procedural language. But computers are primarily about information, and we think about information in terms of relationships. That's exactly what FP is about: expressing computation in terms of relationships.
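The "what vs how" distinction from the comments above can be shown in miniature. This is only an illustration of the two styles, in Python rather than any particular functional language:

```python
# The same task in two styles: take the squares of the even numbers.

nums = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative: spell out *how*. Loop, test, accumulate, mutate.
squares_of_evens = []
for n in nums:
    if n % 2 == 0:
        squares_of_evens.append(n * n)

# Declarative/functional: state *what*. "The squares of the even elements."
declarative = [n * n for n in nums if n % 2 == 0]

assert squares_of_evens == declarative == [16, 4, 36]
```

In the second form, the iteration order and the accumulator are below the level of abstraction; the first form forces you to manage both by hand.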
As humans I think we need the ability to be messy and imprecise in order to be creative.
>The reason we tend to think in imperative terms is one of familiarity
That's debatable given that both approaches have been around for about the same length of time with different outcomes in terms of adoption.
The "inflexible approach to solving problems" you refer to has, as far as I can tell, nothing at all to do with "machines", except perhaps in an exceptionally abstract sense, in the form of virtual machines designed for thinking about. On the other hand, problem solving doesn't get much more flexible than raw assembly. My point stands.
I think the early adoption of imperative languages was more due to pragmatism than elegance. You had to stay close to the machine to get anything done: at first because that's all that existed (Fortran beat Lisp into existence by a year), and then for performance. Remember, Fortran was still basically a shortcut for assembly, while Lisp started as a purely mathematical abstraction, designed expressly for thinking about, that some goofball wrote an interpreter for.
This principle is still clearly in force today. We wouldn't bother with C++ and JVM languages and Unix so much if pragmatism wasn't paramount.
I'm using Fortran as my example of an early imperative language. There might have been earlier ones (maybe a version of COBOL?), but since they were imperative it doesn't materially affect my point.
If Unix had been created in LISP or ML, I believe we might be in the opposite position. Though that hinges on a LISP Unix being as successful as the C one.
There are of course many other things it could have been, but these are the first that come to my mind.
No, they didn't. They invented a syntax two millennia ago, optimized for the use cases of those ages, using the alphabets of those times. Thinking in Greek symbols is not helping anyone think clearly. It's just more confusion, because now suddenly I can't type or phrase a relevant question out loud until I get a Greek keyboard and learn the proper pronunciations of the Greek alphabet.
The notion that we should just stick to a notation and conventions optimized for a different era, a different culture, a different alphabet and a different writing tool (paper) is ridiculous. Do NASA scientists calculate speed using knots? Or power using horsepower? They do not.
Math is to computer science, what classical music is to pop music. A historical relic that stopped having economic and cultural value beyond being a mere status symbol.
>of which FP is nearly the computational manifestation.
No, it's not. One could argue the same thing for logic programming. Computation is not the manifestation of math. It's a strategy to answer a mathematical question. The original strategy was to just 'try things and explore', and when you found (guessed?) the answer, you would prove it to be sure.
Functional programming, in the religious Haskell sense, is just a term rewriter. That's not the manifestation of computation. It's just one way to specify a strategy. In the case of Haskell, which is a term rewriter with a bible full of fine print and exceptions, a very sado-masochistic one.
>But computers are primarily about information, and we think about information in terms of relationships
No, that would be Prolog. A different default strategy. Less fine print, but not the cure-all either.
>That's exactly what FP is about: expressing computation in terms of relationships.
Nope. A relationship doesn't have a computational direction. In math, all these statements express the same relationship:
double( x ) / 2 = x
double( x ) = x * 2
x * 2 = double( x )
> But computers are primarily about information, and we think about information in terms of relationships
Yes, and the biggest challenge, the pain everyone tries to lessen, is managing the coordination and standardisation of changes (mutations) to that information. Wrapping every computation with the same type of state-monad actually helps a lot with this, and has been the most popular strategy to deal with this problem in the last 20 years.
Yes, me too. And that type of functional programming is very popular and very successful. Every programmer uses it often, when they touch SQL, jQuery, LINQ, etc.
>I don't think you and I really disagree much on the big picture.
I love functional programming.
But I consider languages like Haskell and their derivatives to do a lot of harm to the reputation of functional programming. Lazy evaluation and the whole pretending that math == computation. It's borderline harmful to the development of a programmer to even be exposed to it. The last thing you want a programmer to believe is that there is some intrinsic order of execution that is magically correct and optimal, and can easily be derived. There isn't. The correct order of execution is not even objective (in a GUI one would trade throughput for lower latency, for example), so the notion that we can just skip that whole part and have the 'compiler take care of it' seems damaging to me. Languages that allow you to specify these things manually are considered ugly mutations of some kind of pure math. Sinners. That we need to return to the one true god, which is "pure" math, masquerading as a term rewriter with a bible full of fine print, and a zero tolerance on maintaining global state. Yuck.
I would argue that the "anything goes" dynamic procedural languages (like Ruby, Python) are far more anti-human, in that the human ability to reason about large masses of code scales orders of magnitude worse than in languages that provide a strong theoretic framework for reasoning about code.
The reason straitjackets lead to poor usability is that human minds are fairly diverse in the way they solve problems, and rigid constraints imposed by a tool are likely to trip up your thinking.
There is a tension between flow and feedback, but it's not clear at all that one dominates the other as you've stated.
Ideally, we could achieve flow without sacrificing semantic feedback (or vice versa). It is definitely a worthy goal.
Is it anti-human to be able to interact with the code, and explore what it actually does? A run-time type error is one that has real example data.
The assumption that we can write perfect code immediately and easily, or that a type analysis can just guide us through, doesn't hold. And it breaks down even worse when 99% of your code is interacting with systems outside the scope of the type system (database servers, client-side browsers, network connections, file systems). Dynamically typed languages are a good fit when the code is mostly glue code between multiple systems outside the scope of any type analysis.
But let's not equate functional programming with static typing.
>in languages that provide a strong theoretic framework for reasoning about code.
You act as if many of us are writing complicated algorithms. We're not. We're writing simple algorithms that deal with complex structures of information. And in the few cases where the algorithms get so complicated that you want your invariants to be formally proven, anything less than a full-blown theorem prover will be insufficient anyway.
While dynamic procedural languages have their problems, I think they work for the most part, hence their success vis-à-vis functional languages. As for the issue of scale, that's why modular constructs are added to manage large code bases.
The adoption of programming languages is more influenced by economic factors than informed by solid engineering.
We can debate this ad infinitum and probably won't come to an agreement.
There was a study a while back that showed non-programmers a series of statements of the sort:
int a = 10;
int b = 20;
a = b;
The thing is, if you'd never programmed before but you HAD basic algebra knowledge, the third line above would have broken your brain. 10 =/= 20. You can't change the value of a, it's against everything you know!
The point is, people don't think functionally or procedurally. Both of these programming styles are just learned behaviours, not basic human nature.
I agree with the argument against functional programming, though I would qualify it as against purity (doing everything functionally) rather than the use of functions in general: functions are sometimes the most natural way to do something. Haskell is good for what it is, an experiment in pure functional programming, which has taught us a lot about programming in general.
My company is building an IDE that is purely about productivity, specifically for building business web applications. Check my profile for link.
I'm not sure if it is appropriate to think about specific groups of developers in a general purpose IDE. You really can't guess how they will use your tool, which applies to PL design in general. Personally speaking, there are functional programming enthusiasts who think wildly different from the way I do, and I guess they would sort of select out given my lack of understanding of their psychology as reflected in my design.
Incidentally some time back I posted about the similarities between Google and Microsoft logos: https://news.ycombinator.com/item?id=6678308 :)
I noticed that the (single) argument to sum in most of the screenshots doesn't have a name? Is that special-cased in the language or the editor, or am I missing something bigger?
We might change a lot of our decisions later as we tune it for working with more realistic code.
You can change it while Lamdu is running for immediate effect :-)
I don't know about the first two but GPL is not a good choice.
- IDE should have an integrated compiler.
- IDE should download only the necessary libraries or functions needed by a program (from a central repository, local or remote).
- IDE should catch a bug and allow me to fix it while program is running.
- IDE where I am able to fix a bug on my client's phone or his wristwatch remotely, without the need to download the whole Haskell Platform there. No JS/HTML5 BS.