That means, basically, converting it into total "callback hell", so that functions don't return, they just call other functions with callbacks.
(There's a special "exit" callback that is implicitly the callback for the main function.)
So basically in Scheme, your code is always in a callback-passing style, just that the language cleverly hides it, and then lets you explicitly access the current callback (using call/cc).
Callbacks can of course take several arguments. Most languages have an asymmetry where functions have several parameters but can only return one value. With continuations, it's easy to imagine calling them with several arguments. So in a language with continuations, it makes sense to have multiple return values too.
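To make the callback-passing picture concrete, here's a minimal Python sketch (the name `divmod_cps` is made up for illustration): the continuation is just an explicit callback, and nothing stops it from taking several arguments — which is exactly the multiple-return-values point.

```python
# Continuation-passing style (CPS): instead of returning, a function hands
# its result(s) to an explicit callback -- the "continuation".
def divmod_cps(a, b, k):
    # k plays the role of "return". Since k is an ordinary function, it can
    # take several arguments, which is why multiple return values feel
    # natural in a language with continuations.
    k(a // b, a % b)

results = []
divmod_cps(17, 5, lambda q, r: results.append((q, r)))
print(results)  # [(3, 2)]
```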
If you look at it from the assembly perspective, it's just a jump that's been augmented with state (the closure) and additional parameters. I think trying to describe it as a snapshot, or as multiple returns, is confusing, since that describes them in terms of their stack behavior.
The easiest way to think about them is to add an implicit argument to each function, which is the place to return to (jump to with the context, augmented with the return value). Call it c. Then "return x" is just c(x).
There is no longer any stack or implicit return target "above me". Removing that common control-flow assumption lets you make all sorts of different plumbing and get into an arbitrarily deep amount of trouble (the good kind and the bad kind).
call/cc has a pretty natural implementation in this model (heap-allocated activation records).
But as someone else mentioned, if you choose the simple continuation model, that makes a lot of choices for you in the runtime and the compiler. A common complaint from compiler land is that it makes it difficult to reason about reordering later on. You also lean really heavily on the GC to clean up the frames that the stack was taking care of for you (see "Cheney on the MTA").
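As a rough sketch of what heap-allocated frames buy you (the function names here are invented), trampolined CPS in Python returns each continuation call as a thunk instead of making it directly, so the "frames" become closures on the heap that the GC reclaims — loosely in the spirit of the Cheney-on-the-MTA trick:

```python
def trampoline(thunk):
    # Bounce until a step yields a final (non-callable) value. Each bounce
    # drops the previous closure, leaving it to the GC -- the heap does the
    # job the call stack normally would.
    while callable(thunk):
        thunk = thunk()
    return thunk

def countdown_cps(n, k):
    # Instead of calling k (or recursing) directly, return a thunk that
    # will do it; the native stack never grows.
    if n == 0:
        return lambda: k("done")
    return lambda: countdown_cps(n - 1, k)

# Far deeper than Python's default recursion limit, yet no stack overflow.
print(trampoline(lambda: countdown_cps(100_000, lambda v: v)))  # done
```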
However, all continuations here are linear (or rather affine - called at most once) and this allows you to allocate and free "closures" for your continuations with a stack discipline.
Does the term 'linear' just not exist in this context, or does it have a different meaning?
Then drdeca's explanation (https://news.ycombinator.com/item?id=14684219) of why affine logic is so called seems plausible, but I haven't looked into it.
In linear algebra, a function f is linear if, for any a, b, f(a+b) = f(a) + f(b).
Suppose there is a vending machine that sells soda for $1 each (doesn't have a menu, just dispenses the soda, for simplicity). If you put in $1, you get a soda. If you put in $1+$1, you get a soda + a soda.
The amount of soda you get is linear in the amount of money you put in.
This might not be a correct explanation of why it is called linear.
Also, I don't really understand how "affine" fits into this analogy, other than that affine is a somewhat weaker assumption than linear.
I hope someone can give a better answer than I did, because I thought I knew the answer, but when I tried to explain it, I found that I did not really know the answer.
That model doesn't help you when functions do not have an implicit argument and you cannot add one, the run-time has a stack (which you like and want), and you want continuations anyway.
It certainly doesn't help if you're debugging something with continuations and they are not implemented that way; or only helps you so far.
1. Write some JS code in chrome and verify expected behaviour.
2. Add a debugger statement.
3. When the debugger pops up, go down a few frames and add a breakpoint.
4. Right-click the frame and select "Restart frame".
5. In the new break point, write some code in the console to modify the state.
6. When you step through the code, it now does something else.
I leave the rest to imagination.
If the continuation could be resumed at most once, this would be more like suspending a thread/fiber and resuming it later.
- if you call the callback before you save the continuation, then the callback is never called again (unless you call the whole function again, of course).
- if you call the callback after saving the continuation, then the callback is called whenever you resume the continuation.
So basically, resuming the continuation is equivalent to either never calling the callback, or calling the callback like normal. It sounds weird, but everything works out.
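Python generators are a handy model of these at-most-once ("affine") continuations (the function here is invented for illustration): `next`/`send` resume a suspended frame, but each suspension point can only be resumed once.

```python
def suspendable():
    # yield is a suspension point: the frame freezes here until resumed.
    got = yield "paused"
    yield f"resumed with {got!r}"

k = suspendable()
first = next(k)           # run up to the suspension point
second = k.send("hello")  # resume the frozen frame -- exactly once
print(first, "/", second)  # paused / resumed with 'hello'
```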
EDIT: Sorry, I think you were saying you know what they do but that it seems like that'd break all kinds of invariants. It's true that you have to consider scenarios a bit more carefully, but I find it's not that much of a burden compared to the benefits.
As you may know, Oleg makes the same argument, that call/cc has a lot of problems, and, in practice, isn't really used anyway: http://okmij.org/ftp/continuations/against-callcc.html#illus... . (EDIT: I see that sillysaurus (https://news.ycombinator.com/item?id=14680711) linked this same discussion a while ago.)
The so-called full continuations lack the flexibility to put the delimiting prompt anywhere but the top-level entry point to the program or thread.
Delimited continuations are complete in the sense that they let the programmer control where the top is.
Put the delimiting prompt around the top-level startup function and you have a "full" continuation.
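A loose Python analogy for "the programmer controls where the top is" (generators are one-shot and delimited, so this is only a sketch, not real shift/reset): each generator delimits its own suspensions, and an outer driver is unaffected by an inner one's yields.

```python
def inner():
    yield "inner step"   # this suspension escapes only as far as its driver

def outer():
    yield "before"
    yield from inner()   # inner's suspensions stop at *this* prompt
    yield "after"

print(list(outer()))  # ['before', 'inner step', 'after']
```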
The obvious fix would be to never modify the content of a stack frame, instead recreating it with a different value for each iteration, but that doesn't strike me as very efficient.
A continuation captures variable bindings similarly to a closure. (It just captures more than just variable bindings and more than just the lexically apparent ones.)
A function that keeps arguments on the stack and really returns, and all that, will not be happy if it invokes a callback and that callback is actually one of your continuations that doesn't return: you have stuff on the stack that hasn't been cleaned away.
You can reimplement low-level control flow with this, but generally it is mostly useful as a reinversion of control. Some code (like async IO) expects callbacks, so you lose control over the program flow, which makes composition difficult. You can re-invert this by using futures, which often just wrap continuations.
This is the TXR Lisp interactive listener of TXR 181.
Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
1> (defun yflatten (obj)
     (labels ((flatten-rec (obj)
                (cond
                  ((null obj))
                  ((atom obj) (yield-from yflatten obj))
                  (t (flatten-rec (car obj))
                     (flatten-rec (cdr obj))))))
       (flatten-rec obj))
     nil)
2> (yflatten '(1 (2 3 (4) . 5) 6))
#S(sys:yld-item val 1 cont #<intrinsic fun: 1 param>)
3> (obtain (yflatten '(1 (2 3 (4) . 5) 6)))
#<interpreted fun: lambda (: resume-val)>
Let's do it again --- but this time let's trace the function. For this we break out flatten-rec into a top-level function we can trace.
(defun flatten-rec (obj)
  (cond
    ((null obj))
    ((atom obj) (yield-from yflatten obj))
    (t (flatten-rec (car obj))
       (flatten-rec (cdr obj)))))

(defun yflatten (obj)
  (flatten-rec obj)
  nil)
1> (trace yflatten flatten-rec)
2> (obtain (yflatten '(1 (2 3 (4) . 5) 6)))
#<interpreted fun: lambda (: resume-val)>
(yflatten ((1 (2 3 (4) . 5) 6))
(flatten-rec ((1 (2 3 (4) . 5) 6))
#S(sys:yld-item val 1 cont #<intrinsic fun: 1 param>))
(flatten-rec (((2 3 (4) . 5) 6))
(flatten-rec ((2 3 (4) . 5))
(flatten-rec ((3 (4) . 5))
(flatten-rec (((4) . 5))
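For comparison, here's what the yflatten example looks like with Python generators (Python lacks dotted pairs, so a plain nested list stands in for the Lisp tree):

```python
def yflatten(obj):
    # Recursive generator: yields atoms, recurses into lists.
    if isinstance(obj, list):
        for item in obj:
            yield from yflatten(item)
    else:
        yield obj

print(list(yflatten([1, [2, 3, [4], 5], 6])))  # [1, 2, 3, 4, 5, 6]
```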
Imagine running a program in a VM. You know how you can take a snapshot and then restore to it later? That snapshot is equivalent to a continuation.
Another way of phrasing it is, it's your program frozen in time. You can snapshot your program and restore to that point later.
To put it technically, step through each call stack frame, serialize all the local variables, and you have yourself a continuation. To invoke it, call those functions in order and set the local variables to those values, then set the program counter to wherever it was. (You don't literally do this, but maybe that makes it easier to understand what's going on with it.)
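As a toy model of "serialize the locals plus the program counter" (all names invented): a tiny step machine whose entire state lives in one dict, so a deep copy of the dict is a poor man's continuation you can restore and replay.

```python
import copy

def step(state):
    # One "instruction" of a tiny VM: add i to total, bump the counter.
    state["total"] += state["i"]
    state["i"] += 1

state = {"i": 1, "total": 0}
for _ in range(3):
    step(state)

snapshot = copy.deepcopy(state)   # save locals + PC: a toy continuation

for _ in range(2):
    step(state)
print(state["total"])             # 15  (1+2+3+4+5)

state = copy.deepcopy(snapshot)   # "invoke" the continuation: restore...
for _ in range(2):
    step(state)                   # ...and replay from the save point
print(state["total"])             # 15 again
```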
The confusion: What about a database connection? Or a network connection of any kind? An open file handle? Etc. The answer is that those things can't be saved in a continuation.
The way that this works in Scheme is that there's a special primitive called dynamic-wind. It takes three callback functions: "before", "during", and "after". It invokes those callback functions in order. If execution leaves "during" for any reason whatsoever, then "after" is invoked.
Here's the kicker: If execution goes back into "during", then "before" is invoked. I.e. if you save a continuation inside "during," then "before" is the place that you'd put the code to re-initiate a database connection or re-open a file handle. Or fail.
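In Python terms (a hypothetical helper, since Python has no re-enterable continuations), the single-entry case of dynamic-wind degenerates into a setup step plus try/finally; what Scheme adds is that "before" runs again on every re-entry.

```python
def dynamic_wind(before, during, after):
    # Hypothetical sketch: without continuations this is just try/finally
    # with a setup step. In Scheme, "before" would also rerun whenever a
    # continuation jumps back *into* "during".
    before()
    try:
        return during()
    finally:
        after()

log = []
result = dynamic_wind(lambda: log.append("open connection"),
                      lambda: (log.append("work"), 42)[1],
                      lambda: log.append("close connection"))
print(result, log)  # 42 ['open connection', 'work', 'close connection']
```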
And of course, no discussion of continuations would be complete without the argument for why call/cc is generally an anti-pattern: http://okmij.org/ftp/continuations/against-callcc.html
Yet "On Lisp" presents several interesting ways to use them, and they are extremely powerful. One particular use is that you can implement cut, fail, and mark for non-deterministic backtracking. If you've used emacs lisp and you've ever written an edebug specification for how to debug a macro (https://www.gnu.org/software/emacs/manual/html_node/elisp/Sp...), some of the more complex features require backtracking: https://github.com/emacs-mirror/emacs/blob/0648edf3e05e224ee...
That's an area where continuations really shine, because the implementation can be just a few dozen lines compared to hundreds.
(elisp doesn't actually use continuations -- this is just an example of the territory they're useful in.)
See these two in order:
// source: https://en.wikipedia.org/wiki/Call-with-current-continuation
Oleg Kiselyov, author of a delimited continuation implementation for OCaml and designer of an API for delimited stack manipulation for the implementation of control operators, advocates the use of delimited continuations instead of the full-stack continuations that call/cc manipulates: "Offering call/cc as a core control feature in terms of which all other control facilities should be implemented turns out a bad idea. Performance, memory and resource leaks, ease of implementation, ease of use, ease of reasoning all argue against call/cc"
He had us implementing an interpreter, and step-by-step we were adding more features. Eventually we added continuations (building a CEK machine). It wasn't until I actually used the interpreter and played with it after finishing the homework that continuations started to make sense.
But they really solidified for me a year later when I took a course in operational semantics from him. We walked through the evolution of semantics in various languages through history, and had to write out the evaluations of expressions step-by-step (by hand, on paper). Then continuations really made sense.
More or less, but a continuation really only captures the PC and the stack, not the entire state of the system.
When discussing call/cc with friends, I like to liken it to Tracer's Recall ability from Overwatch, which rewinds Tracer's state -- including location, health, and ammo -- to a point in the past, while leaving everybody else's alone.
Does this remind anyone of MonadBaseControl from the Haskell package monad-control?
Or are they like tail calls?
---------- original -----------
They save the state of your whole program. It's just like a checkpoint in a video game. (call/cc try-the-left-door) is like doing a quick save before doing something crazy.
Say you've got a big complicated data structure, and you need to make a bunch of destructive changes to it like a stack of 3d model transformations. You have a user sitting there watching a progress bar fill up. One approach is to just go for it, and make the changes. If the user clicks cancel, you have to undo all the work you've done so far.
Another approach is to deep copy the structure, and do your changes on the copy. If the user clicks cancel, you just throw away the modified copy.
The call/cc approach lets you skip doing the whole copy manually. The underlying system will save the state of your whole program, so you can change whatever you want. If the user clicks cancel, you call/cc and it'll restore the state of your program.
Perhaps you have two libraries. In a very dynamic language like Ruby, you could call/cc, load up a library, monkey-patch it, use it, see it didn't parse the way you wanted it to, and then call the continuation. You jump back to your last save point. It's as though you never loaded or modified the library, and then you try the other library.
They get weird, because after you tried that first library, but before you gave up, you could take another quicksave of your program from that point. You can have as many save points as you want. So you try out the second library, then decide to try one more thing back in the world where you'd tried out the first approach.
It's kinda sorta like if some of your character's inventory was preserved across saves: save the game, go down the left hall, and get the key. Load the game, go down the right hall, and use the key (because you get to keep some things across save points). Save again and grab the mystic idol. Load up that first save, and use the idol.
They're really, really neat. They're not really tricky, but you kind of have to know everything your program is doing.
Your points are true if you save that complicated data structure as local variables on the stack, though.
It basically just saves the whole call stack and lets you restore exactly that stack anytime you want. So it's like a longjmp, but also "jumps" to whatever function return chain was in progress at the time the continuation was saved.
If it seems weird, it is. But with weirdness comes power.