
What’s in a Continuation (2016) - ianrtracey
http://jlongster.com/Whats-in-a-Continuation
======
mbrock
A common strategy for implementing compilers for continuation-enabled
languages is transforming code into "continuation-passing style" (CPS).

That means, basically, converting the code into total "callback hell": functions
don't return; they just call other functions with callbacks.

(There's a special "exit" callback that is implicitly the callback for the
main function.)

Some imagination is required to see how, say, a for loop is rendered into CPS,
but it's easier if you first imagine turning the loop into a recursive function
(in JavaScript, if you want to wait for an asynchronous thing in your for loop
body, you basically have to do a CPS transformation manually, or use
async/await, which is closely related).

So basically in Scheme, your code is always in callback-passing style; it's
just that the language cleverly hides it, and then lets you explicitly access
the current callback (using call/cc).

If you have experience with async programming in JavaScript, it should make
sense that this lets you easily implement things like concurrency, custom
blocking operations, etc.

Just as JavaScript callbacks can be called several times, so can
continuations. Since the callbacks are implicit in Scheme, you can make what
appears to be a function that returns several times ("nondeterminism").
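A Python sketch of that "returns several times" effect, with the callback made explicit (in Scheme the language hides this plumbing):

```python
saved = []

def choose(k):
    saved.append(k)    # capture the current "return" callback
    k("first")         # "return" once

results = []
choose(results.append)    # ordinary call: results == ["first"]
saved[0]("second")        # invoke the saved continuation: the call "returns" again
# results == ["first", "second"]
```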

Callbacks can of course take several arguments. Most languages have an
asymmetry where functions have several parameters but can only return one
value. With continuations, it's easy to imagine calling them with several
arguments. So in a language with continuations, it makes sense to have
multiple return values too.
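Since the continuation is just a function, multiple return values cost nothing in CPS; a hypothetical Python sketch:

```python
# The continuation k takes two arguments, so divmod_cps "returns" two values.
def divmod_cps(a, b, k):
    return k(a // b, a % b)

divmod_cps(17, 5, lambda q, r: print(q, r))   # prints: 3 2
```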

~~~
tonyg
This is one of the reasons why having an implementation that correctly handles
tail calls is so important.

------
convolvatron
reading the comments here is pretty painful, inasmuch as the call stack seems
to be so central to people's perception of what computing is.

if you look at it from the assembly perspective, it's just a jump that's been
augmented with state (the closure) and additional parameters. i think trying
to describe continuations as snapshots, or as multiple returns, is confusing
since it describes them in terms of their stack behavior.

the easiest way to think about them is to add an implicit argument to each
function, which is the place to return to (jump to with the context, augmented
with the return value). call it c. return x is c(x).
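Making that implicit argument explicit in Python (a sketch, not from the comment): every function takes an extra parameter c, and `return x` becomes c(x).

```python
# Factorial in CPS: instead of returning, each call passes its result to c.
def fact(n, c):
    if n == 0:
        return c(1)
    return fact(n - 1, lambda r: c(n * r))   # "return n * fact(n - 1)" becomes this

fact(5, print)   # prints 120
```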

there is no longer any stack or implicit return target (above me). removing
that common control flow assumption lets you make all sorts of different
plumbing and get into an arbitrarily deep amount of trouble (the good kind and
the bad kind).

call/cc has a pretty natural implementation in this model (heap-allocated
activation records).

but as someone else mentioned, if you choose the simple continuation model,
that makes a lot of choices for you in the runtime and the compiler. a common
complaint from compiler land is that it makes it difficult to reason about
reordering later on. you also lean really heavily on the gc to clean up the
frames that the stack was taking care of for you (see Cheney on the MTA)

~~~
fmap
At the assembly level, we are always working with continuation passing style.
Normal calls are implemented by pushing their "return address" on the stack
and then jumping to it after the procedure has finished. Formally, this is
exactly a continuation.

However, all continuations here are linear (or rather affine - called at most
once) and this allows you to allocate and free "closures" for your
continuations with a stack discipline.

~~~
JadeNB
> all continuations here are linear (or rather affine - called at most once)

Does the term 'linear' just not exist in this context, or does it have a
different meaning?

~~~
lmkg
Linear means called _exactly_ once. Linearity requires proof that the value is
definitely used, which is tricky. Affine types are more common in practice,
even though people will call them linear.

~~~
gugagore
How does this relate to linear(/affine) as in e.g. y = mx (+ b) ?

~~~
JadeNB
It was suggested on LtU that Girard explicitly addressed this connection,
although it may still be somewhat rarefied; see the "Added in print" note at
the bottom of logical p. 131 of
[http://www.sciencedirect.com/science/article/pii/01680072889...](http://www.sciencedirect.com/science/article/pii/0168007288900255)
.

------
tonyle
Simple continuation explanation for web developers.

    
    
      1. Write some JS code in chrome and verify expected behaviour.
      2. Add a debugger statement.
      3. When the debugger pops up, go down a few frames and add a breakpoint.
      4. Right click the frame and select restart.
      5. In the new break point, write some code in the console to modify the state.
      6. When you step through the code, it now does something else.
    

Now imagine if the language allowed you to do this programmatically in the
code, by passing the frame around as an argument, similar to functions.

I leave the rest to imagination.

~~~
agumonkey
Suddenly prolog

------
pdelgallego
Lisp in Small Pieces [0] has a good explanation of how to implement
continuation-based compilers.

[0] [https://www.amazon.com/Lisp-Small-Pieces-Christian-
Queinnec/...](https://www.amazon.com/Lisp-Small-Pieces-Christian-
Queinnec/dp/0521545668)

------
skybrian
The bizarre thing about continuations is that you can call a function once and
return from it an arbitrary number of times. It seems like this would break
invariants in any function that takes a callback argument but doesn't expect
the callback to save a continuation?

If the continuation could be resumed at most once, this would be more like
suspending a thread/fiber and resuming it later.
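Python generators are a reasonable model of that at-most-once case: each suspension point captures "the rest of the function" and can be resumed exactly once.

```python
# A generator suspends at each yield; resuming it consumes that
# suspension point -- roughly, a one-shot continuation.
def task():
    x = yield "suspended"
    yield x * 2

t = task()
print(next(t))      # prints: suspended  (run until the first yield)
print(t.send(21))   # prints: 42         (resume the captured rest of the task)
```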

~~~
sillysaurus3
If your function takes a callback and you save a continuation inside of it,
then one of two things happen:

\- if you call the callback before you save the continuation, then the
callback isn't ever called again (unless you call the whole function again, of
course).

\- if you call the callback after saving the continuation, then the callback
is called whenever you resume the continuation.

So basically, resuming the continuation is equivalent to either never calling
the callback, or calling the callback like normal. It sounds weird, but
everything works out.

~~~
skybrian
I meant the case where the function doesn't use continuations at all. It calls
the callback. The callback saves a continuation. Then the callback could
return more than once to a function that doesn't know anything about
continuations.

~~~
sillysaurus3
Ah, yeah. So in that case the stack frame is set up exactly as it was when
the continuation was saved. If your function's output is based entirely on its
input, you'll get the same result each time. If it uses some state (like
incrementing a member variable) then that action happens each time the
continuation is invoked.

EDIT: Sorry, I think you were saying you know what they do but that it seems
like that'd break all kinds of invariants. It's true that you have to consider
scenarios a bit more carefully, but I find it's not that much of a burden
compared to the benefits.

~~~
fiddlerwoaroof
I think common lisp's restart system and/or delimited continuations handle
most of the "reasonable" uses of continuations without some of the less
intuitive aspects of full continuations.

~~~
JadeNB
> I think common lisp's restart system and/or delimited continuations handle
> most of the "reasonable" uses of continuations without some of the less
> intuitive aspects of full continuations.

As you may know, Oleg makes the same argument, that call/cc has a lot of
problems, and, in practice, isn't really used anyway:
[http://okmij.org/ftp/continuations/against-
callcc.html#illus...](http://okmij.org/ftp/continuations/against-
callcc.html#illusion) . (EDIT: I see that sillysaurus
([https://news.ycombinator.com/item?id=14680711](https://news.ycombinator.com/item?id=14680711))
linked this same discussion a while ago.)

------
noway421
They are quite powerful, and not to diminish their usefulness of course, but
they do feel like glorified, annotated goto statements. Which is not that bad
in the end, because even break; is just a special case of goto.

~~~
edejong
Goto statements capturing the state. A normal 'goto' makes reasoning about
program state difficult precisely because it mixes state and program flow.

------
Nelkins
I only started to use continuations in practice (and not too often, they can
be hard to debug) after understanding that they can be used as a tool to defer
computation when you don't have all the data you need at a particular point in
a program. Here's an article (uses F#) that really helped me:
[https://sidburn.github.io/blog/2016/04/16/fold-
continuations](https://sidburn.github.io/blog/2016/04/16/fold-continuations)

------
Tarean
Delimited continuations reify the rest of the block as a function. From within
the definition of a continuation you can use all variables in scope together
with that function and can mash them together however you want.

You can reimplement low-level control flow with this, but generally it is
most useful as a reinversion of control. Some code (like async IO) expects
callbacks, so you lose control over the program flow, which makes composition
difficult. You can reinvert this by using futures, which often just wrap
continuations.
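A minimal future along those lines in Python (names are illustrative, not any real library's API): it just stores continuations until a value arrives.

```python
class Future:
    """Holds a value, or the continuations waiting for it."""
    def __init__(self):
        self._done, self._value, self._waiting = False, None, []

    def on_ready(self, k):             # register a continuation
        if self._done:
            k(self._value)
        else:
            self._waiting.append(k)

    def resolve(self, value):          # deliver the value to all continuations
        self._done, self._value = True, value
        for k in self._waiting:
            k(value)

# A callback-expecting API...
def read_async(callback):
    callback("data")                   # stand-in for real async I/O

# ...reinverted into a composable value.
f = Future()
read_async(f.resolve)
f.on_ready(print)                      # prints: data
```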

------
sebastianconcpt
Smalltalk has continuations, and you can run it on JavaScript
([https://squeak.js.org](https://squeak.js.org)); there you can go full meta.

------
lngnmn
A stack frame for calling RET captured at current IP?

------
kazinator
Continuation based list flattening in TXR Lisp:

    
    
      This is the TXR Lisp interactive listener of TXR 181.
      Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
      1> (defun yflatten (obj)
           (labels ((flatten-rec (obj)
                      (cond
                        ((null obj))
                        ((atom obj) (yield-from yflatten obj))
                        (t (flatten-rec (car obj))
                           (flatten-rec (cdr obj))))))
             (flatten-rec obj)
             nil))
      yflatten
      2> (yflatten '(1 (2 3 (4) . 5) 6))
      #S(sys:yld-item val 1 cont #<intrinsic fun: 1 param>)
    

Oops, we need the obtain macro to work with a function which yields:

    
    
      3> (obtain (yflatten '(1 (2 3 (4) . 5) 6)))
      #<interpreted fun: lambda (: resume-val)>
      4> [*3]
      1
      5> [*3]
      2
      6> [*3]
      3
      7> [*3]
      4
      8> [*3]
      5
      9> [*3]
      6
      10> [*3]
      nil
      11> [*3]
      nil
    

No continuation passing here: real stack where we can have unwind-protect.

Let's do it again --- but this time let's trace the function. For this we
break out flatten-rec into a top-level function we can trace.

    
    
      (defun flatten-rec (obj)
        (cond
          ((null obj))
          ((atom obj) (yield-from yflatten obj))
          (t (flatten-rec (car obj))
             (flatten-rec (cdr obj)))))
      
      (defun yflatten (obj)
        (flatten-rec obj)
        nil)
    

Now:

    
    
      1> (trace yflatten flatten-rec)
      nil
      2> (obtain (yflatten '(1 (2 3 (4) . 5) 6)))
      #<interpreted fun: lambda (: resume-val)>
      3> [*2]
      (yflatten ((1 (2 3 (4) . 5) 6))
        (flatten-rec ((1 (2 3 (4) . 5) 6))
          (flatten-rec (1)
        #S(sys:yld-item val 1 cont #<intrinsic fun: 1 param>))
      1
      4> [*2]
            nil)
          (flatten-rec (((2 3 (4) . 5) 6))
            (flatten-rec ((2 3 (4) . 5))
              (flatten-rec (2)
      2
      5> [*2]
                nil)
              (flatten-rec ((3 (4) . 5))
                (flatten-rec (3)
      3
      6> [*2]
                  nil)
                (flatten-rec (((4) . 5))
                  (flatten-rec ((4))
                    (flatten-rec (4)
      4
      7> [*2]
                      nil)
                    (flatten-rec (nil)
                      t)
                    t)
                  (flatten-rec (5)
      5
      8> [*2]
                    nil)
                  nil)
                nil)
              nil)
            (flatten-rec ((6))
              (flatten-rec (6)
      6
      9> [*2]
                nil)
              (flatten-rec (nil)
                t)
              t)
            t)
          t)
      nil
    

At this point, the flattening is done. What if we keep calling it?

    
    
      10> [*2]
                nil)
              (flatten-rec (nil)
                t)
              t)
            t)
          t)
      nil
      11> [*2]
                nil)
              (flatten-rec (nil)
                t)
              t)
            t)
          t)
      nil
    

It's just sputtering now, repeating the slice of execution spanning from
several nestings deep into flatten-rec, up to the delimiting prompt, because
no new continuation is captured.

------
sillysaurus3
It took me a long time to grok continuations. There's an easy way to explain
what they are:

Imagine running a program in a VM. You know how you can take a snapshot and
then restore to it later? That snapshot is equivalent to a continuation.

Another way of phrasing it is, it's your program frozen in time. You can
snapshot your program and restore to that point later.

To put it technically, step through each call stack frame, serialize all the
local variables, and you have yourself a continuation. To invoke it, call
those functions in order and set the local variables to those values, then set
the program counter to wherever it was. (You don't literally do this, but
maybe that makes it easier to understand what's going on with it.)

The confusion: What about a database connection? Or a network connection of
any kind? An open file handle? Etc. The answer is that those things can't be
saved in a continuation.

The way that this works in Scheme is that there's a special primitive called
"dynamic-wind". It takes three callback functions: "before", "during", and
"after". It invokes those callback functions in order. If execution leaves
"during" for any reason whatsoever, then "after" is invoked.

Here's the kicker: If execution _goes back into "during"_, then "before" is
invoked. I.e. if you save a continuation inside "during," then "before" is the
place that you'd put the code to re-initiate a database connection or re-open
a file handle. Or fail.
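Without first-class continuations, dynamic-wind degenerates into something Python can express with try/finally; a sketch (the re-entry behavior, where jumping back into "during" reruns "before", is exactly the part this sketch cannot show):

```python
def dynamic_wind(before, during, after):
    before()                  # runs on entry into "during"
    try:
        return during()
    finally:
        after()               # runs however "during" is left

log = []
dynamic_wind(lambda: log.append("open"),
             lambda: log.append("work"),
             lambda: log.append("close"))
print(log)   # prints: ['open', 'work', 'close']
```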

And of course, no discussion of continuations would be complete without the
argument for why call/cc is generally an anti-pattern:
[http://okmij.org/ftp/continuations/against-
callcc.html](http://okmij.org/ftp/continuations/against-callcc.html)

Yet "On Lisp" presents several interesting ways to use them, and they are
extremely powerful. One particular use is that you can implement cut, fail,
and mark for non-deterministic backtracking. If you've used emacs lisp and
you've ever written an edebug specification for how to debug a macro
([https://www.gnu.org/software/emacs/manual/html_node/elisp/Sp...](https://www.gnu.org/software/emacs/manual/html_node/elisp/Specification-
List.html#Specification-List)), some of the more complex features require
backtracking: [https://github.com/emacs-
mirror/emacs/blob/0648edf3e05e224ee...](https://github.com/emacs-
mirror/emacs/blob/0648edf3e05e224ee8410ab244df7364f919dc58/lisp/emacs-lisp/cl-
macs.el#L2578-L2595)

That's an area where continuations really shine, because the implementation
can be just a few dozen lines compared to hundreds.

(elisp doesn't actually use continuations -- this is just an example of the
territory they're useful in.)

~~~
conistonwater
Do you find this explanation better or worse than the one that uses
shift/reset and expressions with holes in them?

~~~
DonaldPShimoda
I'm not OP, but my functional programming teacher (Matthew Flatt) taught
continuations
using the "holes". I really didn't get it at first; it was just a little too
abstract for me.

He had us implementing an interpreter, and step-by-step we were adding more
features. Eventually we added continuations (building a CEK machine). It
wasn't until I actually used the interpreter and played with it after
finishing the homework that continuations started to make sense.

But they really solidified for me a year later when I took a course in
operational semantics from him. We walked through the evolution of semantics
in various languages through history, and had to write out the evaluations of
expressions step-by-step (by hand, on paper). Then continuations _really_ made
sense.

