For the record, many implementors are against STC and PTC for a few reasons:
- It may be significantly slower than regular recursive calls (though this claim seems to be false in both practice and theory)
- It may break debug tools
- It may break browser telemetry (especially on IE)
Additionally, they would have to decide on an explicit syntax for STC (return.tail or something like that).
Personally, I would love to see STC and PTC land so I can write proper functional code in JS. It would also make functional programming in JS easier to teach (fewer gotchas).
This is the first time I completely understand what thunks and trampolines are. For some reason I always thought a thunk was some mysterious kind of closure that wasn't quite a closure. But a bind() that supplies all the expected arguments is a thunk. It's very simple; I'm not sure why it hadn't quite crystallized before.
It is worth pointing out that large recursions can often be easily turned into iterative code by pre-allocating your own pseudo-stack on the heap, as long as you have a reasonable upper bound on its size. The pseudo-stack is just an array that holds only the recursion state you need. Plus, this works even if you have branching recursion and need backtracking. Recursive flood fill is a good example: it can be done by allocating another image as the stack, doing the recursion via iteration, and storing the backtracking coordinates in the (image) stack. Both safe and fast.
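A minimal sketch of the idea (the grid representation and function name are just illustrative): flood fill with an explicit heap-allocated stack instead of recursive calls.

```javascript
// Iterative flood fill: an explicit pseudo-stack replaces the call stack.
// The stack holds only the recursion state we need (coordinates to revisit).
function floodFill(grid, startX, startY, newColor) {
  const h = grid.length, w = grid[0].length;
  const oldColor = grid[startY][startX];
  if (oldColor === newColor) return grid;
  const stack = [[startX, startY]];
  while (stack.length > 0) {
    const [x, y] = stack.pop();
    if (x < 0 || x >= w || y < 0 || y >= h) continue;
    if (grid[y][x] !== oldColor) continue;
    grid[y][x] = newColor;
    // Push neighbors instead of recursing; popping them later is the backtracking.
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
  return grid;
}
```

No recursion depth limit applies: the stack grows on the heap, bounded by the image size.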
A trampoline is a loop that iteratively invokes thunk-returning functions.
That's a description of a very common technique for implementing, e.g., Schemes. It is described in, for example, LISP in Small Pieces. But we don't usually call that a "thunk" so much as a "continuation". It's just a function (closure, generally, but the state it closes over can be passed in explicitly, which is what is done when compiling to C, naturally) that does one step of a computation and returns a continuation to do the next step.
The main point of this is that you can start allocating your true function activation frames on a heap and mostly lose the stack. This then allows one to do crazy things like return multiple times through the same path (search for call/cc).
Having different terms for the same concepts is a fact of life in this industry.
A loop which dispatches closures can implement cross module tail recursion (absent of continuations). Function A wants to call B. Instead of a direct call, it performs a dynamic return to the loop, passing it B and the arguments to be applied to B (wrapped up as a closure, or other object).
This shouldn't be called a trampoline; it's just a dispatch loop.
Trampolines are a technique for turning closures into plain function pointers. GCC uses trampolines for pointers to nested functions.
The trampoline mechanism constructs a piece of machine code on the stack (or possibly elsewhere). Next to that machine code at some fixed offset, there is the closure info: the real function plus context. The trampoline is referenced by the pointer to the code: a low level function pointer, compatible with C function pointers and the like. When that is called, it retrieves the closure info (via instruction-pointer-relative addressing) and calls the real target function.
Yes. I said the same sort of thing below. Trampolines are also used in signal handling and other such things. The common thread is a need to adjust the stack, so to speak, to handle a strange situation (signal handling, calling GCC-style local function closures, ...). I think TFA just misuses terminology, though it's possible that those uses are common in some communities, so maybe they aren't really misuses. Having to carry a mental dictionary for translating terminology in the world of CS seems to be... a requirement that will never go away. It's OK.
Yep, overloaded language is common, and I've heard about continuations as well. First time I heard about thunks (a looong time ago) was in my 2nd year CS class that was taught in Scheme. I implemented one, and still never quite grokked what it was. The term continuation was being used as well, and I didn't know if it was the same thing or different. I hadn't heard of trampolines until fairly recently.
The next mysterious term I don't know but is probably overloaded and probably simple is monad.
Well, "continuation" is the most common name for this.
In a language like C a continuation is... the return address to the caller, and, implicitly, the caller's frame pointer.
That is, the closure {<return address>, <frame pointer>} is a function of one argument (the value being returned). (When this finally clicked for me, a long time ago, I was astounded by the beauty and obviousness of this idea. Exclamation points abounded in my head.)
call-with-current-continuation (aka, call/cc) in Scheme is basically a reification of the caller's return closure as the "current continuation". To "reify" is a fancy term that means to "make visible an implied value" (the implied value is the return address and frame that is then made available as a first-class value).
So it's all very simple, really.
And if you structure the whole thing as a loop over calling the "current continuation" which then returns the "next current continuation", then you can move all activation frames onto the heap.
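That structure can be sketched directly (the `done`/`next` wrappers are made-up names for illustration): each step is a closure that does one unit of work and hands back the next continuation, and a driver loop keeps calling them, so the "activation frames" live on the heap as closures rather than on the call stack.

```javascript
// Tag results so the loop can tell "here's the next continuation"
// apart from "here's the final value".
const done = (value) => ({ done: true, value });
const next = (k) => ({ done: false, k });

// The driver loop over the current continuation:
function run(state) {
  while (!state.done) {
    state = state.k(); // call the current continuation; get the next one
  }
  return state.value;
}

// Factorial, written so each step returns the next continuation:
function factStep(n, acc) {
  return n === 0 ? done(acc) : next(() => factStep(n - 1, acc * n));
}

run(factStep(10, 1)); // 10! without consuming call-stack depth
```

This is, of course, the same machinery as the trampoline discussed above, which is rather the point of the thread.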
But putting each activation frame on the heap is a bit... expensive, as the work the GC has to do greatly increases, as does the amount of garbage (especially if you never call call/cc!). So the next idea is to have "delimited continuations", which... is basically like having a stack for each series of frames delimited by calls to call/cc... which is a lot like... co-routines... which people tend to implement with call/cc...
Now, monads... are a lot like such a loop over calling the current continuation. Normally, in a language like Haskell the order of evaluation of expressions is undefined for optimization reasons, and also in order to force you to have pure functions (or vice versa). Having only pure functions means that the order of evaluation can be left undefined, which increases available opportunities to parallelize, defer computation, etc... But when doing I/O, you kinda need to force some order. Well, a monad is the construct that lets you do that. In a monadic context you can think of the current continuation as not just a simple closure that returns the next current continuation, but as a closure that closes over the current state of the monad, which for the IO monad is actually the state of the world. But it's really still a plain closure at the end of the day -- one that notionally closes over more of the state of the world than a typical closure does.
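A toy sketch of that sequencing idea in JS (the `IO` name and its API here are illustrative, not any particular library's): a computation is just a thunk, and flatMap chains thunks together, which is exactly what forces an evaluation order on the effects.

```javascript
// A minimal IO-like construct: a suspended computation plus combinators
// that sequence one computation after another.
const IO = (thunk) => ({
  run: thunk,
  map: (f) => IO(() => f(thunk())),
  flatMap: (f) => IO(() => f(thunk()).run()),
});

const log = [];
const readLine = IO(() => { log.push("read"); return "world"; });
const print = (s) => IO(() => { log.push(`print ${s}`); });

// Building the chain performs no effects; the flatMap chain only
// fixes the order in which they will happen.
const program = readLine.flatMap((name) => print(`hello ${name}`));
program.run(); // only now do the effects occur, in sequence
```

Each closure in the chain notionally "carries the world forward" from the previous step, which is the intuition described above, minus the type system that Haskell layers on top.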
Also, a trampoline is a very different thing in the contexts I'm used to. Typically it's a bit of executable object code generated on the fly to bridge something. The most obvious examples are: trampolines used in signal handling on Unix systems, the trampolines generated by GCC for local functions (closures of dynamic extent), and trampolines used in schedulers to switch tasks on CPU.
The loop-over-calling-the-current-continuation concept... kinda fits that description of "trampoline".
Chrome has removed support for tail recursion (it was briefly enabled). Does anybody know if there are plans to reenable it? Or are they still lobbying for an explicit keyword approach?
No. Last I heard, they were trying to remove proper tail calls from the spec completely. Safari, I believe, is the only browser where it is currently implemented.
Why? I recall when ES6 first came out, Crockford gave a talk on the good and bad of it. He hated that classes were brought into the language, but was super bullish on tail calls, saying it was the most important new feature.
You’re going to have to ask the Chrome and Firefox teams about that. I’m guessing there’s performance issues—perhaps every function invocation needs to check if it’s a proper tail call?
Nit regarding proper tail calls versus tail call optimization: As I understand, JS maybe will be getting the former but not necessarily the latter. I.e., you'd avoid blowing the call stack, but not necessarily get a dramatic speed boost just for moving something to tail position.
Eval, a data-type for handling synchronous evaluation: https://funfix.org/api/effect/classes/eval.html
IO, a data-type for suspending side effects, including asynchronous ones, an FP/lawful replacement for Promise; ported from Haskell, but better: https://funfix.org/api/effect/classes/io.html
Also note that Promise's implementation leaks memory in "then" chains, so be cautious when using Promises to work around JavaScript's call-stack — https://alexn.org/blog/2017/10/11/javascript-promise-leaks-m...