The irony of this is that many programmers treat continuations as some ivory-tower Lisp nonsense that seems unapproachable and inscrutable, and then they go to work and try to figure out how to get their node.js callbacks to work properly with their node.js promises library.
"I don't have time to learn about PL theory! I'm too busy reinventing how to implement basic control flow and concurrency in a bad-ass rock-star programming environment!"
Maybe when Prometheus brings generator expressions down to us mere mortal node.js programmers, then we can express how to do two things in sequence!
It really takes a significant shift to start "thinking in continuations". For one thing, in any sequentially written language it's easy to forget that flow control can be a first-class citizen. At least part of what makes concurrency hard is that you can no longer avoid thinking about what sequentiality really means, so it's no surprise that when Node throws programmers into the deep end of concurrency, they start to develop an eye for continuations.
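The callback style those node.js programmers wrestle with is already continuation-passing, just without the name. A minimal sketch of "two things in sequence" in CPS (readFileCPS is a made-up stand-in for illustration, not the real fs API):

```javascript
// A node-style callback is just a continuation made explicit:
// "what happens next" is passed in as an argument instead of
// living implicitly on the call stack.

// readFileCPS is a hypothetical stand-in, not the real fs API.
function readFileCPS(name, k) {
  // Pretend we did some I/O, then hand the result to the continuation k.
  k(null, "contents of " + name);
}

// "Doing two things in sequence" in CPS: the second step lives
// inside the continuation of the first.
let result;
readFileCPS("a.txt", function (err, a) {
  if (err) throw err;
  readFileCPS("b.txt", function (err, b) {
    if (err) throw err;
    result = a + " / " + b;
  });
});
```

The nesting is exactly the sequencing: each "next statement" has become a function.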
Maybe part of the problem is that continuations are rarely explained with the same clarity as a callback, a generator, or a promise. Perhaps the article will clarify things for me, but even though I'm pretty sure I understand the concept, I'm not sure I truly understand what using continuations looks like in a Lisp right now.
For those who are curious why CPS is such a great compiler representation for implementing optimizations, it's because the passes require less analysis complexity. You only have to handle the "call" portion and don't have to worry separately about "return" points (though you've really just converted returns into a call to a new function!).
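To make "returns become calls" concrete, here's a hedged sketch in JavaScript of a function in direct style and after CPS conversion — the implicit return point is reified as an explicit continuation argument:

```javascript
// Direct style: the "return" points are implicit.
function square(x) {
  return x * x;
}

// CPS: the return point is reified as the continuation k,
// so a compiler pass only ever sees calls, never returns.
function squareCPS(x, k) {
  return k(x * x);
}

// Composing two returns in direct style...
function sumOfSquares(a, b) {
  return square(a) + square(b);
}

// ...becomes a chain of calls in CPS: each intermediate
// result is a parameter of the next continuation.
function sumOfSquaresCPS(a, b, k) {
  return squareCPS(a, function (a2) {
    return squareCPS(b, function (b2) {
      return k(a2 + b2);
    });
  });
}
```

An analysis pass over the CPS version only needs one rule — "what happens at a call site" — which is the uniformity being described above.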
It sounds trivial, but in practice it dramatically reduces the size not only of optimization passes such as inlining, but also of the analyses that drive optimizations (e.g., control-flow analysis). For an example from my experience (Manticore - http://manticore.cs.uchicago.edu/), my pretty coarse CFA pass on our direct-style IR is about 750 lines of SML code, whereas an optimized CFA on our CPS representation, with three different variants and extra support for environment analysis, is only 700 lines of SML.
SSA and CPS are actually the same thing, in some fashion. I'm not 100% up to speed on the details myself, but as I understand it, CPS is the functional-language form and SSA is the imperative-language form.
The basic result is that if you just CPS-convert a language without first-class continuations (so, no call/cc), the result is interchangeable with SSA, and any optimization written against one IR can be rewritten fairly mindlessly against the other IR.
If you allow first-class continuations in user programs, though, all bets are off and CPS is more expressive.
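One way to see the correspondence: an SSA phi node is just a parameter of the continuation representing the join point. A rough sketch, with the SSA side shown as pseudocode in comments:

```javascript
// SSA (pseudocode):
//   entry: br cond, then, else
//   then:  x1 = 1;  br join
//   else:  x2 = 2;  br join
//   join:  x3 = phi(x1, x2);  return x3 + 10
//
// CPS: each basic block becomes a function, and the phi
// becomes an ordinary parameter of the join continuation.
function branch(cond, k) {
  function join(x3) {        // x3 plays the role of phi(x1, x2)
    return k(x3 + 10);
  }
  if (cond) {
    return join(1);          // "br join" with x1 = 1
  } else {
    return join(2);          // "br join" with x2 = 2
  }
}
```

As long as `join` is only ever called in tail position from statically known sites, the two forms carry the same information; hand `join` around as a value (first-class continuations) and the SSA reading breaks down.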
Richard Kelsey wrote the paper on converting between CPS and SSA forms, "A Correspondence between Continuation Passing Style and Static Single Assignment Form", which also covers the first-class-continuation limitation mentioned above.
A good reference is Andrew Appel's "SSA is functional programming" which discusses the similarities. There are practical differences between SSA, ANF, and CPS, but the similarity at a high level is quite strong.
I recommend Andrew Kennedy's "Compiling with Continuations, Continued" article if you are actually interested in using continuations in a production compiler. Might's formulation of CPS is good for analysis but not actually that great for code generation in my opinion. My take on the topic is here: http://wingolog.org/archives/2014/01/12/a-continuation-passi...
That's certainly the case for full CPS, but using a CPS intermediate language does not imply call/cc. Indeed, Kennedy targeted the .NET CIL, a direct-style VM. The important thing to note is that continuations and functions can be statically distinguished. The article conflates the two, which is why I don't find it very useful.
This is a classic problem. Programming concepts, like math, build. If you are missing fundamentals then you are not ready for the next step, but if you never see that the next step is there, you never know about your missing fundamentals.
Recently I asked whether people wanted to have a look at and comment on the abstract math pieces I'm slowly writing, and a number of them said yes. Someone replied that they "would love a simple math blog." The problem is, the simple stuff you'll skim, the complex stuff you'll get frustrated by, and no one will be happy.
Unless you go through the stages you won't have the skills for the next level, and until you get exposed to the higher levels, you won't realize that there's more work to do. This is, of course, related to the "Blub paradox" that we're all so familiar with.
But as we've seen many times, many programmers do not have an undergrad CS degree. Moreover, many programmers claim that formal education is a complete waste of time, and that they can program capably and competently without having gone to college or university.
Some of them will have been autodidacts who have taught themselves this sort of thing already. Some will be autodidacts who have enough foundations to be ready to understand this.
But some will not. Some will have picked things up here and there and become useful programmers, but will find this sort of thing hard going and mind-blowing. The question is what proportion of HN readers falls into each category.
I guess doing a survey here on HN is probably the only way of measuring that.
As for CS graduates, I suspect that even those who had a course in that area might not actually be that interested - I was fascinated by lambda calculus, combinators and implementations of functional languages so the article was probably more interesting for me than most.
I was thrilled when I first read about CPS; now my favorite "crazy thing" is eval embedded in logic programming — see evalo (eval implemented in miniKanren) http://youtu.be/fHK-uS-Iedc?t=27m35s which gives you naive bidirectional evaluation. Too funky.
Constraint-based logic programming is actually pretty straightforward. You just define a constraint, and it tries to go and find things that fit.
Generating quines is as simple as saying that you want eval(x)==x and letting it fill in answers for x. It's a bit underwhelming, though, to realize that it will tell you that "5" is a quine, since eval(5)==5.
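For flavor, here's a deliberately crude imperative caricature of that query: brute-forcing the search that miniKanren performs relationally, by comparing each candidate program's printed value against its source. As noted above, self-evaluating literals satisfy it immediately (the candidate list and string comparison are my own simplifications, not how evalo works):

```javascript
// Naive stand-in for the relational query eval(x) == x:
// enumerate small candidate programs and keep those that
// evaluate to themselves. (evalo searches relationally
// instead of by brute force, and compares s-expressions
// rather than printed strings.)
const candidates = ["5", "1 + 1", "'5'", "true"];

const trivialQuines = candidates.filter(function (src) {
  try {
    // A self-evaluating literal passes once we compare the
    // result against its printed form.
    return String(eval(src)) === src;
  } catch (e) {
    return false;
  }
});
```

Running this keeps "5" and "true" — precisely the underwhelming trivial quines — while "1 + 1" and "'5'" fail because their values print differently from their source.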
You don't think that would be enough to explain continuations to a working programmer? It doesn't really use any specific or formal mathematical language, and it explains them mostly in terms of JS, with examples.
If you want another example, check out coffeemug's cl-cont, with the full source here. It's no accident that some of the best software, like RethinkDB, has people behind it who are deeply rooted in the mind-blowing.