Writing a Lisp: Continuations (reinvanderwoerd.nl)
136 points by reinvdwoerd on May 18, 2017 | 49 comments

Continuations are strangely underused, as they enable writing long-lived processes in a simple way, without having to keep them running in a thread, or even in memory.

Then a real programming language could replace all the "business process" crap languages. Let's say you write a framework that escapes to a continuation whenever the "process" is waiting for Futures or Promises to complete, and returns the thread to a pool.

Depending on what you are waiting for (webhooks, async events/messages, or, with some work, even outstanding network requests where that's meaningful), it would be possible to serialize the continuation to storage, and potentially restore it on another machine, weeks later.

With some bookkeeping, you could also store which Futures have completed and what the continuation is currently waiting for, making it possible to track the progress of the long-running process and use that information to show a nice status report.
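A rough Python sketch of that bookkeeping (all names hypothetical; a generator stands in for a serializable continuation): the "process" yields the name of each future it waits on, and the driver logs completed steps, which doubles as the status report.

```python
# Hypothetical sketch: a long-running "process" as a Python generator.
# It yields the name of the future it is waiting on; the driver feeds in
# resolved values and records progress along the way.
def order_process():
    payment = yield "payment-confirmed"       # suspend until this completes
    shipment = yield "shipment-dispatched"
    return (payment, shipment)

def drive(process, resolved):
    """Resume the process with each resolved future, logging every step."""
    log = []
    gen = process()
    waiting_on = gen.send(None)               # run to the first suspension
    try:
        while True:
            log.append(waiting_on)            # bookkeeping: steps reached
            waiting_on = gen.send(resolved[waiting_on])
    except StopIteration as done:
        return done.value, log                # final result + status report

result, log = drive(order_process,
                    {"payment-confirmed": "ok", "shipment-dispatched": "sent"})
print(log)     # ['payment-confirmed', 'shipment-dispatched']
print(result)  # ('ok', 'sent')
```

Serializing a real continuation would subsume the hand-written `log`; the sketch just shows the shape of the driver loop.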

I have implemented interpreters that support both full and delimited continuations. They are underused for many very good reasons. A continuation makes an implicit state machine. Implicit means anonymous, which means harder to talk about, which means harder to reason about. But there are technical problems too.

Continuation-based sessions, as used in fringe web frameworks such as Seaside or even the Arc library that powers Hacker News, are a really problematic approach. Reifying a slice of the call stack into a continuation and serializing it introduces all sorts of problems for resource management and the user experience, especially in the presence of change.

For example, let's say you deploy a new version of your code: What happens to all the in-progress sessions? Do you let old code keep running? Do you force users to start over? Compare to "Edit and Continue" in a Java/C#/Smalltalk codebase where you can change a method and all instances get the new behavior, vs if you change a function that constructs another function: All the old closures stick around with the old code.

Let's say a user leaves a browser tab open for a while: How long do you wait before invalidating the session and freeing its resources? What impact does that have on the user experience? Hacker News has mostly eliminated its reliance on continuations to remedy these sorts of problems.

Speaking of resources: What resources does a continuation hold on to? If you serialize a continuation, do you keep all file handles open? Do you perform static analysis to know a priori what data should be garbage collected or excluded from the continuation as "semantic garbage"? This problem is doubly bad when laziness is involved. Consider paginating through a database cursor.

What happens when a continuation must be forked? Consider using a continuation-id to resume an interaction with a web app: Does opening a new tab duplicate any resources held by the continuation? Do your external resources even support cloning operations, or are they thread-safe? Compare to unix "fork" and file handles vs shared memory.

> For example, let's say you deploy a new version of your code: What happens to all the in-progress sessions?

This is a problem inherent to long-running processes, no matter the implementation. We had the same problem in our business process engine using jbpm (implemented as an interpreter serializing state to database every step).

Ultimately you have to decide whether you bind functions/subroutines/subprocesses/whatever you call them early (so when you call them later and there's a new version, they still use the old version) or late (so they always call the newest version). And you have to adjust your coding assumptions and the way you update your software to a new version based on that decision.
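The early/late choice can be sketched in a few lines of Python (hypothetical `steps` registry; generators stand in for suspended long-running processes):

```python
# Hedged sketch of the early/late binding choice for a long-running process.
# 'steps' is a hypothetical registry that gets "redeployed" (mutated)
# while processes are suspended.
steps = {"notify": lambda: "v1"}

def early_bound():
    step = steps["notify"]        # resolved at start: redeploys are ignored
    yield                         # ... process suspends here ...
    return step()

def late_bound():
    yield                         # ... process suspends here ...
    return steps["notify"]()      # resolved at call time: newest version wins

g_early, g_late = early_bound(), late_bound()
next(g_early); next(g_late)       # both processes suspend mid-flight
steps["notify"] = lambda: "v2"    # "deploy" a new version while suspended

results = {}
for label, g in (("early", g_early), ("late", g_late)):
    try:
        next(g)                   # resume the suspended process
    except StopIteration as done:
        results[label] = done.value

print(results)  # {'early': 'v1', 'late': 'v2'}
```

The early-bound process keeps running the old code after the redeploy; the late-bound one picks up the new version. Neither is always right, as the parent says.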

Neither way is always the correct one.

Came to say the exact same thing: all the problems mentioned aren't unique to continuations. I'm not saying this makes a positive argument for using coroutines everywhere, but it's not their fault.

As ajuc mentioned, with long-running processes the early/late binding question is always a thing. Same with database migrations. Heck, even clients accessing a versioned API (foo.com/api/v1/bar) vs. not have the same consideration.

> How long do you wait before invalidating the session and free its resources?

The same decision has to be made for explicit session data like login state and pagination cursors in, e.g., Facebook's Graph API. Continuations are larger, so this question may carry more weight.

> What resources does a continuation hold on to?

Anything reachable (static and/or dynamic analysis). And if you run your program in a monad (or other restricted, pure manner), then you can control when a function can suspend, like not in the middle of paging a database.

> Does opening a new tab duplicate any held resources by the continuation?

If you live in an immutable world, you get duplication for free with structural sharing. Same goes for an OS loading a shared library once and giving the same pages to multiple programs.
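Python's immutable tuples show this in miniature: "copying" an immutable value need not copy anything at all.

```python
import copy

big = (1, (2, (3, 4)))   # an immutable, nested structure
dup = copy.copy(big)     # "duplicating" it is free:
print(dup is big)        # True: same object, structure fully shared
```

`copy.copy` can hand back the very same object precisely because nobody can mutate it; persistent data structures extend the same trick to updates.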

I do agree that having a blob of stuff serialized isn't very nice. Same goes for closures as well. The example I always compare in my head is a functional representation of a set vs. any other data structure.

I toyed around with an interpreter that would let you reflect a closure (same could be applied to continuations) to a program, which is akin to a residual program in partial evaluation. Then you could do whatever you want with the source of the closure/continuation. For the early/late bound question above, you could analyze the reflected continuation and decide exactly which bits you want to rebind before continuing the continuation.

If only Brendan Eich had ended up "doing Scheme" in Netscape as he was originally recruited to! (https://news.ycombinator.com/item?id=2786720)

Javascript survived (and eventually caught on) not just because it was the "only game in town" for the web, but also because it was familiar.

If Eich indeed had produced a Scheme, it would soon have been superseded and replaced by another Algol-derived language from browser makers, and users would have flocked to that.

If familiarity mattered that much, why wasn't HTML itself replaced by something with curly braces, or CSS by something procedural?

Because HTML wasn't a programming language.

Template languages were like that (e.g. SGML) and worse.

Besides, unlike JavaScript it wasn't the product of a single vendor, but a standard, and the basis upon which the web stood from the beginning.

We are doing exactly this in my current company. There is a quite nice language called Orc for this. A site can do IO; we can serialize the state and return to it weeks later when we have a triggering event.

Continuations have a context which takes up space in memory. That context keeps a reference on an environment (or chain of environments) containing all the resources which must be there when the continuation is invoked.

I would say that continuations in the sense of call/cc are overused. You have to look hard for a problem where undelimited continuations are in any way better than delimited ones.

For the vast majority of cases, delimited continuations are faster, use less memory (and make garbage collection easier), and are generally easier to reason about.

I remember trying to learn continuations during my CS degree, and evidently even today I still don't understand them. The examples don't seem to help either – how exactly does the control flow function?

You know how in Python the "yield" statement is actually an expression, and a generator can be sent values using "send"? Imagine there is a special "yield" called "call-with-current-continuation" that can be used anywhere (not just in generators) which bundles everything up and makes a generator-like thing called a "continuation." This continuation object can be called later on, like using "next" in Python. Unlike generators, though, continuations can usually be resumed repeatedly, always resuming at the same point where the call-with-current-continuation expression was.
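For reference, here is the generator machinery that analogy leans on (plain Python, runnable as-is):

```python
# 'yield total' is an expression: it suspends the generator and evaluates
# to whatever value is later supplied via send(...).
def counter():
    total = 0
    while True:
        increment = yield total   # suspend here; resume via send(...)
        total += increment

gen = counter()
print(next(gen))      # 0 (runs to the first yield)
print(gen.send(5))    # 5
print(gen.send(2))    # 7
```

The key difference from a continuation: once a generator moves past a yield, you can't go back, whereas a first-class continuation can be resumed from the same point again and again.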

A weird thing is that call-with-current-continuation doesn't yield back to something; it yields to the function given to call-with-current-continuation.

Would that be like having a yield statement AND passing a context (e.g. binding a this that represents "current-continuation")?

Here are two examples in Python syntax, assuming "yield" is no longer a keyword.

  def f():
    def receiver(yield):
      # calling yield jumps out of callcc with the value 2 and never returns
      yield(2)
      print("This never prints")
    x = callcc(receiver)
    return x+1
  # f returns 2+1
  # backtracks stores (continuation, [values]) pairs.  Whenever a computation fails, we pull out another
  # value from the list and feed it to the continuation.  The idea is to perform a depth-first search.
  backtracks = []
  def fail():
    """Aborts the current continuation and backtracks to another alternative stored in backtracks."""
    if not backtracks: raise Exception('amb ran out of alternatives')
    yield, alternatives = backtracks.pop()
    alt = alternatives.pop()
    if alternatives: # this continuation still has alternatives, so put them back on the list in case of a future 'fail'
      backtracks.append((yield, alternatives))
    yield(alt) # resume the saved continuation, making its callcc return alt
  def amb(*alternatives):
    """The "ambivalent operator."  The arguments to amb are alternatives, and the return value of amb
    is an alternative which makes the future computation not fail."""
    if not alternatives: fail()
    def receiver(yield):
      backtracks.append((yield, list(alternatives)))
      fail() # immediately pick an alternative and jump back in through 'yield'
    return callcc(receiver)
  def expect(b):
    if not b: fail()
  def test_amb():
    """Let's find a Pythagorean triple."""
    x = amb(*range(1,100))
    y = amb(*range(1,100))
    z = amb(*range(1,100))
    expect(x**2 + y**2 == z**2)
    return (x,y,z)
  # We can print out all the Pythagorean triples.
  trip = test_amb()
  print("(x,y,z)=(%s,%s,%s)" % trip)
  fail() # fail every time so it keeps backtracking, but the side effect of printing out remains
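For comparison, here is what that pseudocode searches for, written with an ordinary Python generator instead of call/cc: a brute-force depth-first walk over the same space of alternatives.

```python
from itertools import product

def amb_search():
    # Equivalent search space to the amb example: try every (x, y, z)
    # combination depth-first, skipping ("failing") any that don't
    # satisfy the Pythagorean test.
    for x, y, z in product(range(1, 100), repeat=3):
        if x**2 + y**2 == z**2:
            yield (x, y, z)

triples = amb_search()
print(next(triples))  # (3, 4, 5)
```

The generator loses amb's charm (the test lives inside the search instead of in straight-line code after it), which is exactly what the continuation buys you.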

I think a good way to understand how they work is by implementing a simple Scheme interpreter that explicitly passes around the continuations of the guest language.

This page helped me quite a bit a few years ago http://blog.theincredibleholk.org/blog/2013/11/27/continuati...

Do you know setjmp/longjmp in C? It's like that, except you can longjmp to the same target more than once, because Scheme's equivalent of "stack frames" are on the garbage-collected heap instead of "the stack". (The implementation may be fancier, but that's the easiest way to do it.)

I'm fond of the operating system analogies: one-shot continuations are like suspended processes. The OS saves all the program state to memory (including the call stack and current registers) and switches to a different line of execution. Later, it can go back to the original process and resume from where it paused. You find something like this under many names (coroutines, generators, async, etc.) and it is very useful for writing cooperative code.
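In that spirit, a toy cooperative scheduler fits in a few lines of Python, with generators playing the role of one-shot continuations and yield as the voluntary context switch:

```python
from collections import deque

# Each "process" is a generator; yield is the voluntary context switch.
def task(name, steps, trace):
    for i in range(steps):
        trace.append(f"{name}{i}")   # do a slice of work
        yield                        # suspend; scheduler resumes us later

def run(tasks):
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)                  # resume the suspended "process"
            queue.append(t)          # still alive: back of the run queue
        except StopIteration:
            pass                     # process finished

trace = []
run([task("A", 2, trace), task("B", 2, trace)])
print(trace)  # ['A0', 'B0', 'A1', 'B1']
```

The round-robin interleaving is the "OS switching between suspended processes" from the analogy, minus preemption.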

Multishot continuations are kind of like forks (but without the parallelism). You can save the program state and make copies of it, so later on you can backtrack to the point of the fork if you want. With this you can do things like logic programming or define your own exception-handling mechanism. However, this is hard to implement and is confusing if you mix it with mutable state, so very few languages have this feature...

BTW, one of the hard parts of understanding Lisp continuations is that the API is "call with current continuation" (pass the continuation as a function arg) instead of "get current continuation" (which would return the continuation).
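That pass-the-continuation-as-an-argument shape can be faked in plain Python with an exception. This is only a one-shot, escape-only approximation (a sketch, not real call/cc: it can jump upward once, never re-enter):

```python
class _Invoked(Exception):
    """Signals that the captured continuation was invoked with a value."""
    def __init__(self, value):
        self.value = value

def callcc(receiver):
    # "Call with current continuation": the continuation is handed to
    # 'receiver' as an argument; invoking it abandons the rest of 'receiver'.
    def continuation(value):
        raise _Invoked(value)
    try:
        return receiver(continuation)   # receiver may return normally...
    except _Invoked as jump:
        return jump.value               # ...or escape through the continuation

print(callcc(lambda k: 1 + k(41)))  # 41: k(41) discards the pending 1 + ...
print(callcc(lambda k: 10))         # 10: receiver returned normally
```

Real call/cc also lets the continuation outlive the call and be resumed repeatedly, which exceptions cannot express; this only captures the calling convention.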

I usually like to think of delimited continuations from the inside. First, a Haskell example, because there I can heap on syntax sugar:

      x <- foo
      y <- bar (x+1)
      baz (x*y)
let's look at bar:

    bar x = Cont (\fr -> ...)
Or without the type wrapper:

    bar x fr = ...
x represents the environment - all variables that are in scope and can be used to compute the next step. They represent everything that came before bar.

fr represents all the continuations that come after us, reified as a function. We can use the type of bar

    bar :: env -> (next -> result) -> result
to find ways to use continuations. Time for type tetris!

The easiest way to get a result value is to calculate a next value from env. But we can do much more fun things as well, like implementing control jumps. Let's look at how we can implement this jumping:

    jumpCont env _ignoredRestOfBlock = jumpTarget (...env)
We ignore the function that represents the rest of our current block and use something else instead. Now we just have to figure out what 'something else' might be:

    callCC computeInnerBlock = \_ignoredEnv jumpTarget -> computeInnerBlock jumpCont
        where jumpCont env _ignoredRestOfBlock  = jumpTarget env
So we have two streams of control - the one callCC is part of and one nested within callCC. If we call jumpCont we ignore the rest of the nested block and continue with the callCC control stream, exiting the current block:

    callCC $ \exitBlock -> do
        x <- foo
        when (x < 3) (exitBlock ())
        y <- bar (x+1) -- <- here until the rest of the callCC block would be _ignoredRestOfBlock
        lift $ print (x*y)
    ... -- <- this is jumpTarget, the point callCC would jump to
So if normal functions take the result of what came before as an argument and compute a new result, continuations take the result of what came before and a function that represents the rest of the computation and compute a new result. You can think of this function as created for you by some compiler magic.

My favorite part of this blog post is the Fira Code webfont

Are there any continuation implementations in Common Lisp?

Yes, cl-cont by our very own Slava "coffeemug" Akhmechet. Unfortunately, a recent site rebuild at common-lisp.net seems to have left the project page blank; nor do I see a cl-cont repo on Slava's Github page. It's Quicklisp-installable, though.

Not first-class, no. They all depend on using macros to turn your code into CPS. You'd need some sort of special form that CL doesn't have in order to do better than that.

There is cl-cont, which works by CPS transformation, so if performance is your concern it's not ideal.

Do any non-Lisp/Scheme languages have continuations?

Here's the obligatory "Haskell has a monad for that" comment


Speaking of monads, one of the themes in Prof. Sussman's "Adventures in advanced symbolic programming" course was that many metacircular evaluator experiments can be set up by using mostly the same core evaluator but swapping out the underlying monad. I'm pretty sure I implemented the "ambivalent operator" that way, using the list monad (in Scheme), but when we lifted amb to the host Scheme, it needed call/cc to be able to use the built-in control flow.

One thing I like about Haskell is that it's designed so that adding new control flow is not a big deal -- just design a monad. In Scheme it seems to be more work to get it all right.

That kind of feels like cheating though. I mean, the complicated part of a continuation is that it restores the same context and state that the app had right before the plunge. But Haskell inherently doesn't have "floating" state, it's all self-contained within each parameter. So in Haskell it's more or less a fancy goto-statement.

The linked module defines a "monad transformer" that can wrap other monads with a continuation-capturing layer. This means, among other things (Haskell's solution is awesomely general) that you can indeed use it with state, if you compose it with the state monad, or even the IO monad.

(The bottom of the documentation page has a contrived example of this.)

There are lots of continuation styles & libraries, but most are delimited continuations.

In my opinion, delimited ones are far cleaner to implement and to work with than the more universal continuations of Scheme. call/cc, as a product of Scheme's efforts on simplicity[1], has to encompass more low-level state than most real-world uses require (and it's used far less frequently in "real applications" than academically). Having a better suite of flow-control forms and keeping continuations delimited places far fewer constraints on the design of the runtime and leads to more focused and explicit operation, without a bunch of implicit unused implications hanging around.

[1 = simplicity as in fewer built-in operations]

Also, delimited continuations are no less general than the regular ones. Regular continuations are also delimited, because they cannot capture control beyond the program's startup function.

If we place a prompt at the top of the main function of the program and use that for making delimited continuations, we basically get regular continuations: continuations that can potentially return all the way to the top, just as far as regular continuations.

(We can even set it up so that if any such a delimited continuation does actually get that far, the process will exit.)

Delimiting is actually an extra bit of expressive power, not a constraint.

We can compare the delimited continuation to a boomerang or yo-yo: we know that it will only go so far, and then return. That's why it can be used as a function. When resumed, it will get no farther than the delimiting prompt, and then whatever value pops out of that contour bounces back to us, who resumed the continuation.

The continuation won't keep going and then bail the program.

Regular continuations can simulate delimited ones: there is some jig put in place (with macros or whatever) which catches when the continuation has bubbled up to the faked-out prompt, and redirects the control by invoking some other continuation that represents the boomerang return. Or something like that.

There are tricks for implementing the delimited ones, though, that don't involve such a thing. Like pushing the delimited continuation onto the stack, so that the return out of the prompt is just an actual return. I.e. entire "future computation" of the delimited continuation is just "installed" into the stack at the current top. The activation frames are hooked up, and then there is a new topmost frame: that of the restarted continuation, to which the procedure context is restored.

> If we place a prompt at the top of the main function of the program and use that for making delimited continuations, we basically get regular continuations: continuations that can potentially return all the way to the top, just as far as regular continuations.

While this is certainly true in theory, in practice it doesn't seem that simple. When you start dealing with already-compiled code (you can't apply a macro to it), first-class (built into the language) continuations will work as you'd expect, but delimited ones not at all. That can be a serious practical restriction.

I'm not used to Common Lisp, but I assume you can customize how the "compile" command in ASDF works, so maybe you can wrap your entire program, ASDF requirements and all, in that, and that's convenient. But consider Clojure, which has equally powerful macros, but whose require forms won't customize like that. You could have your program's compilation phase go through and recompile the entire Clojure distribution, of course, but there are some things you can only consume as bytecode (like anything Java related), and obviously you can't help that via CPS transformation. You'll need to get into bytecode manipulation like Quasar does. Not unlike those libraries in the C world that translate blocking system calls into nonblocking system calls for fiber / green-thread / whatever-you-want-to-call-it usage.

Coincidentally, this is exactly the reason I've found Clojure's async module to be such a pain to work with, compared to Quasar.

"Delimited" in continuations doesn't refer to a non-first-class hack. Delimited continuations are first-class, built-into-the-language objects (unless kludged otherwise, which is true of any continuations). They are just semantically nuanced relative to undelimited continuations in that they allow the program to clamp the future computation to within a specified dynamic contour. When the restarted continuation bubbles out to that boundary, it terminates and returns a value to whomever dispatched the continuation. And of course, it can be called again and again to do the same thing. That's why delimited continuations behave like functions and are composable.

Any recommendations for write-ups on how delimited continuations are implemented?

my reading is that by 'delimited continuations', white-flame is just referring to explicit closures and function calls. which lets you build up event-driven machines in any language that supports closures.

the distinction is that in scheme (at least in the standard implementation), each function call takes an implicit continuation and the code is everted using a bulk cps-transform before being interpreted or further compiled.

in this latter case, it's possible to use call/cc to create a continuation at any point in a 'normal' program, without having to construct the control flow explicitly. it's this 'continuations everywhere' approach that can burden the runtime with a lot of consequences, possibly even precluding the use of a stack at all, depending on the implementation.

in the former case you can basically do by-hand cps or event-handling in anything: asm, C, c++, python (3), go, js..

Check out Andy Wingo's blog. He implemented delimited continuations for Guile, and provides lots of sources for his implementation.

And of course everything by Oleg Kiselyov.

Wikipedia provides a list of continuation implementations in a variety of languages:


Some of these are first-class continuations; others support continuation passing style.

As an example, here's a simple proof-of-concept webapp I wrote in Ruby, based on continuations:


The Rhino javascript implementation has continuations. Chris Double once stuck it in Jetty, and I threw together a horrifying proof-of-concept Seaside-like continuation-based web programming library based on that: http://homepages.kcbbs.gen.nz/tonyg/lshift_archive/a-rhino-a...

As a proof-of-concept, it was fun, but that's as far as it went :-)

Rhino was a fun JS system. One of the continuation examples I did for a talk was migrating threads in Rhino: https://bluishcoder.co.nz/2006/06/11/migrating-javascript-th.... Start a thread on one machine, serialize the continuation and send it across the network and resume it on that machine.

Ruby has `#callcc` which produces first-class continuations AFAIK https://ruby-doc.org/core-2.2.0/Continuation.html

Yep. Relatedly, Ruby also has fibers.

Some Smalltalks have them and they are used by Seaside but made optional in Seaside 3.0 because of the platforms that don't have them.

I wonder how it would affect the performance of an interpreter if it were built around (internal) continuations, so that all the other cool things such as exceptions are built on top of it.

I tried a small demo, and the complexity of the interpreter got high fast.

Also, if not continuations, what else can be used?

C# has async/await and TPL, e.g. .ContinueWith(...)

JavaScript has promises and async/await

If anyone can feedback on why my comment is wrong I'd appreciate it.

I know we're not meant to comment on downvoting but I want to uncover the error in my understanding of this subject because at least 3 people downvoted but there is no reply yet??

Async/await does not allow you to suspend a computation and then resume it, and it is not easy to chain.

If you implement your own awaiter with INotifyCompletion [1], you get the continuation as an Action that you can invoke as you will. TBH I haven't done it, but I believe you could use that to implement something akin to call/cc.

[1] https://msdn.microsoft.com/en-us/library/system.runtime.comp...


I took continuation to include cases where you respond to asynchronous call backs while the program is still executing, in which case suspension isn't needed.
