Then a real programming language can replace all "business processing" crap languages. Let's say you write a framework that escapes to a continuation whenever the "process" is waiting for Futures or Promises to complete, and returns the thread to a pool.
Depending on what you are waiting for (webhooks, async events/messages, or, with some work, even outstanding network requests where that's meaningful), it would be possible to serialize the continuation to storage and potentially restore it on another machine, weeks later.
With some bookkeeping, it would also be possible to store which Futures have completed and what the continuation is currently waiting for, so you could keep track of the progress of the long-running process and use that information to show a nice status report.
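A minimal sketch of the idea in Python (all names hypothetical). Stock CPython can't serialize a live stack frame, so a common workaround is to persist a log of completed events and rebuild the paused "continuation" by replaying a generator up to the same point:

```python
import pickle

def order_process():
    """A long-running "business process" as a plain generator.
    Each yield names the external event the process is waiting on."""
    payment = yield "payment_confirmed"
    shipment = yield "shipment_dispatched"
    return {"paid": payment, "shipped": shipment}

class Workflow:
    """Drives the process; the event log doubles as the serializable state."""
    def __init__(self, log=None):
        self.log = list(log or [])
        self.result = None
        self.gen = order_process()
        self.waiting_for = next(self.gen)   # run to the first yield
        for _event, value in self.log:      # rebuild the paused frame by replay
            self.waiting_for = self.gen.send(value)

    def deliver(self, event, value):
        assert event == self.waiting_for, "not waiting for that event"
        self.log.append((event, value))
        try:
            self.waiting_for = self.gen.send(value)
        except StopIteration as done:
            self.waiting_for, self.result = None, done.value
        return self.result

    def serialize(self):
        return pickle.dumps(self.log)       # a tiny blob: the log, not the stack

wf = Workflow()
wf.deliver("payment_confirmed", {"amount": 42})
blob = wf.serialize()                       # persist; restore weeks later,
wf2 = Workflow(pickle.loads(blob))          # possibly on another machine
result = wf2.deliver("shipment_dispatched", {"carrier": "DHL"})
```

`waiting_for` is exactly the bookkeeping described above: it tells you which event the process is blocked on, which is enough for a status report.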
Continuation-based sessions, as used in fringe web frameworks such as Seaside or even Arc's library that powers Hacker News, are a really problematic approach. Reifying a slice of the call stack into a continuation and serializing it introduces all sorts of problems for resource management and the user experience, especially in the presence of change.
For example, let's say you deploy a new version of your code: what happens to all the in-progress sessions? Do you let old code keep running? Do you force users to start over? Compare to "Edit and Continue" in a Java/C#/Smalltalk codebase, where you can change a method and all instances get the new behavior, versus changing a function that constructs another function: all the old closures stick around with the old code.
Let's say a user leaves a browser tab open for a while: how long do you wait before invalidating the session and freeing its resources? What impact does that have on the user experience? Hacker News has mostly eliminated its reliance on continuations to remedy these sorts of problems.
Speaking of resources: what resources does a continuation hold on to? If you serialize a continuation, do you keep all file handles open? Do you perform static analysis to know a priori what data should be garbage collected or excluded from the continuation as "semantic garbage"? This problem is doubly bad when laziness is involved. Consider paginating through a database cursor.
What happens when a continuation must be forked? Consider using a continuation-id to resume an interaction with a web app: does opening a new tab duplicate any resources held by the continuation? Do your external resources even support cloning operations, or are they thread-safe? Compare to Unix fork() and file handles vs. shared memory.
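The resource question is easy to demonstrate in stock Python: a closure (a poor man's continuation) that has captured a live handle simply refuses to serialize. A sketch, with an io.StringIO standing in for a file handle or database cursor:

```python
import io
import pickle

def make_resume():
    cursor = io.StringIO("row1\nrow2\n")   # stands in for a file handle / DB cursor
    def resume():                          # the "rest of the pagination", as a closure
        return cursor.readline()
    return resume

k = make_resume()      # the suspended computation keeps the handle alive
print(k())             # resuming works fine in-process
try:
    pickle.dumps(k)    # ...but the captured handle blocks serialization
except Exception as e:
    print("cannot serialize:", type(e).__name__)
```

A real serializing framework has to decide, for every such captured resource, whether to close it, re-open it on restore, or refuse to suspend there at all.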
This is a problem inherent to long-running processes, no matter the implementation. We had the same problem in our business process engine using jbpm (implemented as an interpreter serializing state to database every step).
Ultimately you have to decide whether you bind functions/subroutines/subprocesses/whatever you call them early (so when you call them later and there's a new version, they still use the old version) or late (so they always call the newest version). And you have to adjust your coding assumptions, and the way you update your software to a new version, based on that decision.
Neither way is always the correct one.
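A toy illustration of the two options (hypothetical names): in Python, a closure captures a value early, while a global lookup happens late, at call time:

```python
VERSION = "v1"

def make_step():
    v = VERSION                 # early binding: the version is captured at creation time
    return lambda item: "%s handled by %s" % (item, v)

early_step = make_step()        # an in-flight process started under v1

def late_step(item):
    # late binding: VERSION is looked up fresh on every call
    return "%s handled by %s" % (item, VERSION)

VERSION = "v2"                  # "deploy" a new version while the process is running

print(early_step("order-7"))    # order-7 handled by v1
print(late_step("order-7"))     # order-7 handled by v2
```

Early binding keeps in-flight work consistent with the code it started under; late binding picks up fixes immediately but may break assumptions made earlier in the process. Exactly the trade-off described above.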
As ajuc mentioned, with long-running processes the early/late binding question is always a thing. Same with database migrations. Heck, even clients accessing a versioned API (foo.com/api/v1/bar) vs. not face the same consideration.
> How long do you wait before invalidating the session and freeing its resources?
The same decision has to be made for explicit session data like login state and pagination cursors in e.g. Facebook's Graph API. Continuations are larger, so this question may carry more weight.
> What resources does a continuation hold on to?
Anything reachable (determined by static and/or dynamic analysis). And if you run your program in a monad (or in some other restricted, pure manner), then you can control when a function is allowed to suspend, e.g. not in the middle of paging through a database cursor.
> Does opening a new tab duplicate any held resources by the continuation?
If you live in an immutable world, you get duplication for free with structural sharing. Same goes for an OS loading a shared library once and giving the same pages to multiple programs.
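A sketch of what structural sharing buys you, using cons-style immutable pairs: "forking" a session means building a new head onto an existing tail, with no copying at all:

```python
# A cons-list: each session is a chain of immutable pairs (head, tail).
shared_tail = (2, (3, None))
session_a = (1, shared_tail)       # the original tab
session_b = ("1b", shared_tail)    # the "forked" tab: new head, same tail

assert session_a[1] is session_b[1]   # both literally share one tail object
```

Because nothing is ever mutated, neither tab can observe that the other exists; the duplication is free in both time and memory.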
I do agree that having a blob of stuff serialized isn't very nice. Same goes for closures as well. The example I always compare in my head is a functional representation of a set vs. any other data structure.
I toyed around with an interpreter that would let you reflect a closure (same could be applied to continuations) to a program, which is akin to a residual program in partial evaluation. Then you could do whatever you want with the source of the closure/continuation. For the early/late bound question above, you could analyze the reflected continuation and decide exactly which bits you want to rebind before continuing the continuation.
If Eich had indeed produced a Scheme, it would soon have been superseded and replaced by another Algol-derived language by browser makers, and users would have flocked to that.
Template languages were like that (e.g. SGML) and worse.
For the vast majority of cases, delimited continuations are faster, use less memory (and make garbage collection easier), and are generally easier to reason about.
A weird thing is that call-with-current-continuation doesn't yield back to something; it yields to the function given to call-with-current-continuation.
def receiver(k):
    k(2 + 1)                 # invoking the continuation jumps out of 'receiver' and never returns
    print("This never prints")

x = callcc(receiver)         # callcc yields to receiver; x becomes 2+1 = 3
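For concreteness, here is one way an escape-only callcc can be faked in plain Python with an exception. This is a sketch, not real call/cc: the continuation it hands out can only abort outward, and unlike Scheme's, it cannot be re-invoked after callcc has returned:

```python
def callcc(receiver):
    """Escape-only call/cc sketch: calls receiver with a continuation k;
    k(v) aborts the rest of receiver and makes this callcc return v."""
    class _Invoked(Exception):    # private per-call, so nested callcc's don't collide
        pass
    def k(value):
        err = _Invoked()
        err.value = value
        raise err
    try:
        return receiver(k)        # note: callcc yields to receiver, not to its own caller
    except _Invoked as e:
        return e.value
```

If receiver returns normally, that return value becomes callcc's result; if it invokes k, the pending rest of receiver is discarded.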
# backtracks stores (continuation, [values]) pairs. Whenever a computation fails, we pull out another
# value from the list and feed it to the continuation. The idea is to perform a depth-first search.
# (Note: this needs a multi-shot callcc, one whose continuations can be resumed even after the
# callcc call has returned; an exception-based escape hatch is not enough.)
backtracks = []

def fail():
    """Aborts the current continuation and backtracks to another alternative stored in backtracks."""
    if not backtracks:
        raise Exception('amb ran out of alternatives')
    k, alternatives = backtracks.pop()
    alt = alternatives.pop()
    if alternatives:  # this continuation still has alternatives, so put them back in case of a future 'fail'
        backtracks.append((k, alternatives))
    k(alt)

def amb(*alternatives):
    """The "ambivalent operator." The arguments to amb are alternatives, and the return value
    of amb is an alternative which makes the future computation not fail."""
    alternatives = list(alternatives)
    if not alternatives:
        fail()
    def receiver(k):
        if alternatives[1:]:
            backtracks.append((k, alternatives[1:]))
        return alternatives[0]
    return callcc(receiver)

def expect(b):
    if not b:
        fail()

def test_amb():
    """Let's find a Pythagorean triple."""
    x = amb(*range(1, 100))
    y = amb(*range(1, 100))
    z = amb(*range(1, 100))
    expect(x**2 + y**2 == z**2)
    return (x, y, z)

# We can print out all the Pythagorean triples.
trip = test_amb()
print("(x,y,z)=(%s,%s,%s)" % trip)
fail()  # fail every time so it keeps backtracking, but the side effect of printing out remains
This page helped me quite a bit a few years ago
Multishot continuations are kind of like forks (but without the parallelism). You can save the program state and make copies of it, so later on you can backtrack to the point of the fork if you want. With this you can do things like logic programming or define your own exception-handling mechanism. However, this is hard to implement and is confusing if you mix it with mutable state, so very few languages have this feature...
BTW, one of the hard parts of understanding Lisp continuations is that the API is "call with current continuation" (pass the continuation as a function arg) instead of "get current continuation" (which would return the continuation).
x <- foo
y <- bar (x+1)
bar x = Cont (\fr -> ...)
bar x fr = ...
fr is all the computation that comes after us, reified as a function. We can see this in the type of bar:
bar :: env -> (next-> result) -> result
The easiest way to get a result value is to calculate a next value from env. But we can do much more fun things as well, like implementing control jumps. Let's look at how we can implement this jumping:
callCC computeInnerBlock = \_ignoredEnv jumpTarget -> computeInnerBlock jumpCont
  where jumpCont env _ignoredRestOfBlock = jumpTarget env
callCC $ \exitBlock -> do
x <- foo
when (x < 3) exitBlock
y <- bar (x+1) -- <- here until the rest of the callCC block would be _ignoredRestOfBlock
lift $ print (x*y)
... -- <- this is jumpTarget, the point callCC would jump to
One thing I like about Haskell is that it's designed so that adding new control flow is not a big deal -- just design a monad. In Scheme it seems to be more work to get it all right.
(The bottom of the documentation page has a contrived example of this.)
In my opinion, delimited ones are far cleaner to implement and to work with than the more universal continuations of Scheme. call/cc, as a product of Scheme's efforts on simplicity [1], has to encompass a larger amount of low-level state than most real-world uses require (and it's used far less frequently in "real applications" than academically). Having a better suite of flow-control forms and keeping continuations delimited puts far fewer constraints on the design of the runtime, and leads to more focused and explicit operation without a bunch of implicit unused implications hanging around.
[1 = simplicity as in fewer built-in operations]
If we place a prompt at the top of the main function of the program and use that for making delimited continuations, we basically get regular continuations: continuations that can potentially return all the way to the top, just as far as regular continuations.
(We can even set it up so that if any such delimited continuation does actually get that far, the process will exit.)
Delimiting is actually an extra bit of expressive power, not a constraint.
We can compare the delimited continuation to a boomerang or yo-yo: we know that it will only go so far, and then return. That's why it can be used as a function. When resumed, it will get no farther than the delimiting prompt, and then whatever value pops out of that contour bounces back to us, who resumed the continuation.
The continuation won't keep going and then bail the program.
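Python generators make the boomerang concrete: a generator is (roughly) a one-shot delimited continuation, with the driving code playing the role of the prompt. A sketch:

```python
def computation():
    x = yield "need x"    # suspend: control bounces back to the prompt
    y = yield "need y"    # each resume runs only as far as the next suspension
    return x + y          # this value "pops out of the prompt" to the resumer

prompt = computation()
print(next(prompt))       # run up to the first yield
print(prompt.send(1))     # resume the delimited continuation with the value 1
try:
    prompt.send(2)        # resume again; the computation reaches its end...
except StopIteration as done:
    print(done.value)     # ...and 3 returns to us instead of exiting the program
```

Each resumption behaves like an ordinary function call: control goes no farther than the generator's boundary, then comes back with a value, which is exactly the yo-yo behavior described above (minus multi-shot resumption, which generators don't support).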
Regular continuations can simulate delimited ones: some jig is put in place (with macros or whatever) which catches the moment the continuation has bubbled up to the faked-out prompt, and redirects control by invoking some other continuation that represents the boomerang return. Or something like that.
There are tricks for implementing the delimited ones, though, that don't involve such a thing. Like pushing the delimited continuation onto the stack, so that the return out of the prompt is just an actual return. I.e. the entire "future computation" of the delimited continuation is just "installed" onto the stack at the current top. The activation frames are hooked up, and then there is a new topmost frame: that of the restarted continuation, to which the procedure context is restored.
While this is certainly true in theory, in practice it doesn't seem that simple. When you start dealing with already-compiled code (you can't apply a macro to it), first-class (built into the language) continuations will work as you'd expect, but delimited ones won't work at all. That can be a serious practical restriction.
I'm not used to Common Lisp, but I assume you can customize how the "compile" command in ASDF works, so maybe you can wrap your entire program, ASDF requirements and all, in that, and that's convenient. But consider Clojure, which has equally powerful macros, but whose require forms won't customize like that. You could have your program's compilation phase recompile the entire Clojure distribution, of course, but there are some things you can only consume as bytecode (like anything Java-related), and obviously you can't help that via CPS transformation. You'll need to get into bytecode manipulation like Quasar does. Not unlike those libraries in the C world that translate blocking system calls into nonblocking ones for fiber / green-thread / whatever-you-want-to-call-it usage.
Coincidentally, this is exactly the reason I've found Clojure's async module to be such a pain to work with, compared to Quasar.
the distinction is that in scheme (at least in the standard implementation) each function call takes an implicit continuation, and the code is converted with a bulk cps-transform before being interpreted or further compiled.
in that case it's possible to use call/cc to create a continuation at any point in a 'normal' program, without having to construct the control flow explicitly. it's this 'continuations everywhere' approach that can burden the runtime with a lot of consequences, possibly even precluding the use of a stack at all, depending on the implementation.
in the by-hand case you can basically do cps or event-handling in anything: asm, C, c++, python (3), go, js..
And of course everything by Oleg Kiselyov
Some of these are first-class continuations; others support continuation passing style.
As a proof-of-concept, it was fun, but that's as far as it went :-)
I tried a small demo, and the complexity of the interpreter got high fast.
Also, if not continuations, what else can be used?
JavaScript has promises and async/await
I know we're not meant to comment on downvoting but I want to uncover the error in my understanding of this subject because at least 3 people downvoted but there is no reply yet??
I took continuation to include cases where you respond to asynchronous call backs while the program is still executing, in which case suspension isn't needed.