
Algebraic Effects for the Rest of Us
https://overreacted.io/algebraic-effects-for-the-rest-of-us/
======
reikonomusha
The author ought to look into and write about the Common Lisp condition
system, which allows error handlers to invoke restarts at different parts of
the call stack. [1] The long-story-short on them is that they decouple the
treatment of exceptional situations (or _conditions_ ) into three orthogonal
roles: _signaling_ the condition (akin to “throwing”), _handling_ the
condition (akin to “catching”), and _recovering_ from the condition (which has
no resemblance in popular languages). The signaler, the handler, and the
recoverer can be three disjoint bodies of code sitting in different parts of
your call stack.
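For readers without a Lisp background, the three roles can be sketched in Python (a loose toy model, not the actual CL machinery; every name here is made up): signaling asks the installed handlers what to do, and a handler may invoke a named restart registered near the computation.

```python
# Toy sketch of the three decoupled roles: signaling, handling,
# and recovering via named restarts. Not thread-safe; illustration only.

handlers = []   # dynamically scoped condition handlers (outermost code)
restarts = {}   # named recovery strategies, registered near the computation

class Restart(Exception):
    """Carries a restart's result while unwinding to where it was bound."""
    def __init__(self, value):
        self.value = value

def signal(condition):
    # Ask each installed handler what to do. If none reacts,
    # signaling is a no-op, unlike throwing an exception.
    for handler in reversed(handlers):
        handler(condition)

def invoke_restart(name, *args):
    raise Restart(restarts[name](*args))

def sqrt_checked(x):
    # The signaler: reports the condition, doesn't decide recovery.
    if x < 0:
        signal(("negative-input", x))
    return x ** 0.5

def compute(x):
    # The recoverer: offers a named way out, close to the computation.
    restarts["use-absolute-value"] = lambda: abs(x) ** 0.5
    try:
        return sqrt_checked(x)
    except Restart as r:
        return r.value

# The handler: far up the stack, merely picks a recovery strategy.
handlers.append(lambda cond: invoke_restart("use-absolute-value"))
print(compute(-4.0))  # → 2.0
```

The key point survives the translation: the code that signals, the code that picks a strategy, and the code that knows how to recover are three separate pieces.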

Doesn’t have a cool name like “algebraic effects”, and doesn’t have cool
operational semantics written out, but it does something quite similar to what
the article describes. Here is a little example I cooked up for a Julia
programming audience. The code offers an API for computing roots of functions,
and has a DIVERGENCE-ERROR condition and a handful of different restarts which
handlers of said error can invoke. [2]

If you want to see what languages will look like in N years, it’s always wise
to see what Common Lisp or Scheme are up to.

[1] [http://www.gigamonkeys.com/book/beyond-exception-handling-
co...](http://www.gigamonkeys.com/book/beyond-exception-handling-conditions-
and-restarts.html)

[2] [https://github.com/stylewarning/lisp-
random/blob/master/talk...](https://github.com/stylewarning/lisp-
random/blob/master/talks/4may19/root.lisp)

~~~
riffraff
Smalltalk also had resumable exceptions, and I remember implementing them in
Ruby too, with callcc. But I think algebraic effects are more general, and by
focusing on exceptions the author has sort of hidden that, even if he kept
saying it's just an example.

~~~
TeMPOraL
Like reikonomusha said, CL's condition system is pretty general in itself. The
naming choice isn't an accident - the thing that's being "raised" is a
"condition" (of which "error" is just a subtype), and what you do with a
condition is "signal" it. In CL, most of the time it's used for exception
handling, but I've seen code using this system for tasks not related to
errors.

As a simple example, you can imagine a data-processing function, for use in a
potentially interactive application, that reports progress and allows for
aborting:

    
    
      (define-condition progress ()
        ((amount :initarg :amount :reader amount)))
      
      (defun process-partial-data (data)
        "NOOP placeholder"
        (declare (ignore data)))
      
      (defun process-data (data)
        (restart-case
            (loop
               initially
                 (signal 'progress :amount 0)
               with total = (length data)
               for datum in data
               for i below total
               do
                 (process-partial-data datum)
                 (signal 'progress :amount (/ i total))
               ;; Report progress
               finally
                 (signal 'progress :amount 1)
                 (return :done))
          (abort-work ()
            (format *trace-output* "Aborting work!")
            :failed)))
    

The "business meat" of our function is the loop form. You'll notice it reports
its progress by signalling a 'progress condition, which, without installed
handlers, is essentially a no-op (unlike throwing an exception). The "meat" is
wrapped in a restart-case form in order to provide an alternative flow called
'abort-work (you can provide more than one named flow).

Now for the REPL sessions (-> denotes returned value). First, regular use:

    
    
      CL-USER> (process-data '(1 2 3 4 5 6))
      -> :DONE
    

Let's simulate a GUI progress bar, by actually listening to the 'progress
condition:

    
    
      CL-USER> (handler-bind ((progress (lambda (p) (format *trace-output* "~&Progress: ~F~%" (amount p)))))
                 (process-data '(1 2 3 4 5 6)))
    
      Progress: 0.0
      Progress: 0.0
      Progress: 0.16666667
      Progress: 0.33333334
      Progress: 0.5
      Progress: 0.6666667
      Progress: 0.8333333
      Progress: 1.0
      -> :DONE
    

A progress bar in a GUI usually has a "cancel" button. Let's simulate it by
assuming that user clicked "cancel" around the 50% progress mark, through
invoking the 'abort-work restart programmatically:

    
    
      CL-USER> (handler-bind ((progress (lambda (p) (format *trace-output* "~&Progress: ~F~%" (amount p))
                                                    (when (>= (amount p) 0.5)
                                                      (invoke-restart 'abort-work)))))
                 (process-data '(1 2 3 4 5 6)))
      Progress: 0.0
      Progress: 0.0
      Progress: 0.16666667
      Progress: 0.33333334
      Progress: 0.5
      Aborting work!
      :FAILED
    

You'll note that the function code is entirely agnostic to how the progress
reporting and abort decision work; it's the caller-installed handlers that are
concerned with that. It works in a console, it can work with Lisp's
interactive debugger, and it could work with a GUI just as well. Hell, it
could work with network requests (and I've seen similar code for writing
handler response code for multiple protocols, letting you deliver partial
results where supported, and transparently buffering them where it isn't).

N.b. your typical experience with restarts in Common Lisp is the interactive
debugger that pops up when an error gets unhandled. This example serves as a
reminder that restarts are not just for errors, and that you can invoke them
programmatically - building applications that can figure out how to handle
their own errors.

~~~
TeMPOraL
I've expanded on this example and added some further thoughts in a blog post:
[http://jacek.zlydach.pl/blog/2019-07-24-algebraic-effects-
yo...](http://jacek.zlydach.pl/blog/2019-07-24-algebraic-effects-you-can-
touch-this.html).

------
dgudkov
I, like many people in this thread, learnt about algebraic effects for the
first time from the posted article. However, many commenters seem to be
misled by the explanation based on an example with exceptions. What I learnt
from [1] linked below is that algebraic effects are a generalization of which
language constructs like try/catch, async/await, or generators are just
particular cases.

From that perspective, algebraic effects make sense and look very interesting.

[1] [https://github.com/ocamllabs/ocaml-effects-
tutorial](https://github.com/ocamllabs/ocaml-effects-tutorial)

~~~
kazinator
Algebraic effects are synchronous; they can't be a generalization of async
anything, because that uses threads.

A generalization has to do everything that the specialization does, like
dispatch on multiple processors.

The synchronous version of async/await is delay/force; that is just macrology
over some lambdas.
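For what it's worth, that synchronous delay/force pair really is just a couple of lambdas. A minimal Python sketch (memoizing on first force, as Scheme's promises do):

```python
# delay/force as plain closures: a "promise" is just a thunk
# whose result is computed lazily and memoized on first force.

def delay(thunk):
    cell = {"forced": False, "value": None}
    def promise():
        if not cell["forced"]:
            cell["value"] = thunk()
            cell["forced"] = True
        return cell["value"]
    return promise

def force(promise):
    return promise()

p = delay(lambda: 2 + 3)  # nothing computed yet
print(force(p))  # → 5
print(force(p))  # → 5 (memoized; the thunk is not re-run)
```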

~~~
dgudkov
Not all implementations of async are thread-based. For instance, in C# it's
task-based. In F# it's thread-based.

------
konstmonst
Am I the only one who thinks this is not a good feature? Instead of A calling
B calling C, with a well-defined hierarchy and encapsulated complexity, you
have A calling B calling C, which may call back into B, which may call C
again. I see real value in unidirectional call graphs because they are so
much easier to reason about. I feel like this is another gimmick that breaks
abstractions and increases architectural complexity. You can't, for example,
just take C and rewrite it without having to know about and touch B. This
increases coupling, and so is a bad idea in my book.

~~~
ernst_klim
Effects are not about calling hierarchy.

Effects are about doing a computation dependent on wider context: IO, thread
scheduling, non-deterministic computations, mutable references.

Now, in your example `A calling B calling C and having a well defined
hierarchy and encapsulating complexity` you already have effects: threads are
being scheduled by OS/runtime, IO is performed, memory cells are written.

The only difference in your average imperative language, with A calling B, is
that the effects are implicitly baked into the language, while algebraic
effects let you define your own effects as well.

So instead of `async val` you could simply do `(perform Async val)`, which
would return val in the context of a fiber scheduler (i.e., it will do the
necessary scheduling for continuing the computation). With effects you can
extend your language with new effectful semantics without descending into
metaprogramming hell or patching the language itself.

In fact, you could think of effects as monads, but composable.

------
chombier
Near the end of the article:

> Because algebraic effects are coming from statically typed languages, much
> of the debate about them centers on the ways they can be expressed in types.
> This is no doubt important but can also make it challenging to grasp the
> concept. That’s why this article doesn’t talk about types at all.

I'm only remotely familiar with algebraic effects, but I thought the _whole
point_ was to have a nice & composable way of dealing with effects in the type
system, as an alternative to monads that generally do not mix well.

Also, Daan Leijen's papers on Koka are pretty accessible.

~~~
rocqua
In e.g. Haskell, you often don't explicitly write out types, letting the
compiler infer them for you. This could essentially cascade all the way up.

Meanwhile, if you want to add logging in Haskell via a monad, you don't just
need to change the type of each calling function to make it monadic, but you
need to rewrite the function to make it monadic. That is harder work than
changing some types. Moreover it is harder to automate by an IDE.

~~~
kybernetikos
That explains why one of his examples is a log handler, which in JavaScript
would sensibly be provided by dependency injection, but that doesn't work as
well for Haskell.

Things I'd want more information on: what about errors in effect handlers (or
would errors be reimplemented as effects)? What about effects with no handler?
Which handlers do effects inside handlers use? Is it really worth making
apparently straight-through local code much harder to reason about in order
to gain the benefits? (Imagine you check a condition that your later code
relies on, and then call a log function that, unbeknownst to you, happens to
use effects and doesn't return control until much later, when the condition
no longer holds.) Can we provide timeouts to effects? Get progress updates?
Where should we use effects and where should we use dependency injection? Can
library code detect up front whether handlers are installed for the effects
it might want to use, and fail fast or provide defaults if they aren't there?
Will our debuggers understand? What are the practical best practices to avoid
the kind of insane spaghetti that this seems to invite?

~~~
chowells
Dependency injection works fine in Haskell. An effect system is one way of
implementing it.

That is, you write code that says it may perform any effects from the
following list. At some central point, you execute that code in the context of
a handler for those effects.

That's the same idea as using dependency injection to provide a service. The
only difference is that the ergonomics are better. You can't forget to inject
a service, the types prevent it.
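As a rough illustration (Python, not Haskell, and every name here is invented), the "execute code in the context of a handler" idea looks like dynamically scoped dependency injection:

```python
import contextvars
from contextlib import contextmanager

# A dynamically scoped "effect environment": business code declares what it
# needs by performing a named effect; a central point installs the handlers.
_handlers = contextvars.ContextVar("handlers", default={})

@contextmanager
def handle(**new_handlers):
    # Install handlers for the dynamic extent of the with-block.
    token = _handlers.set({**_handlers.get(), **new_handlers})
    try:
        yield
    finally:
        _handlers.reset(token)

def perform(effect, *args):
    return _handlers.get()[effect](*args)

def greet(user_id):
    # Business logic performs abstract effects; no service objects
    # are threaded through as parameters.
    name = perform("lookup_user", user_id)
    perform("log", f"greeting {name}")
    return f"Hello, {name}!"

# The "injection point": run the code under concrete handlers.
with handle(lookup_user=lambda uid: {1: "Ada"}[uid],
            log=lambda msg: None):
    print(greet(1))  # → Hello, Ada!
```

In Python nothing stops you from forgetting a handler (you get a KeyError at runtime); the point of a typed effect system is that this mistake becomes a compile error.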

~~~
kybernetikos
You're right, this is an interesting way to implement dependency injection.

A big problem this approach solves: it saves code that relies on injected
code from needing to predict in advance all the things the injected code
might need to do. This problem is much less pronounced in a dynamically typed
language (since you haven't had to predict the type signature of the injected
code).

In javascript the most significant time this is a problem is when the injected
code might need to be asynchronous. In standard javascript, if that's the
case, the surrounding code needs to know about it.

In Haskell, you have the same problem doing something as simple as debug
logging.

My concern is just that adding effects reduces how much you know about what a
particular function does just by looking at it, and this is true in both typed
and untyped languages.

------
pwpwp
Algebraic effects are similar to Common Lisp restart handlers, but in
addition, they also receive the continuation of the invoker. This means you
can use algebraic effect handlers to implement higher-order control features
like coroutines, probabilistic programming, and nondeterminism (which you
can't in Common Lisp).

However, what most people get wrong: you do not need higher-order control if
you just want to resume after an error. This is demonstrated by Common Lisp,
which has no coroutines, algebraic effects, or continuations, but _can_
resume after an error.

The main example of the article could be done just fine in Common Lisp,
because it doesn't use any higher-order control.

~~~
amboo7
As CL is programmable, I would not claim "which you can't in Common Lisp".

~~~
pwpwp
The only way to get higher-order control in CL is through whole-program
rewriting, e.g. SCREAMER
[https://www.cliki.net/screamer](https://www.cliki.net/screamer) so I think
this qualifies as "can't".

------
fjfaase
I doubt if you need any new language construct to introduce this. Could you
not simply pass an error handling function/object with all your
functions/methods, which is called when an error occurs? This function/object
could then resolve the error or throw an exception if it cannot resolve the
error. It is possible, and relatively easy, to chain such error handling
functions/methods to implement complex error handling methods.

~~~
dan-robertson
The article only gives a limited example of algebraic effects. The thing that
is required for a condition system (ie good resumable exceptions) is lexical
non-local transfer of control and doesn’t need algebraic effects (Common Lisp
had a condition system but not algebraic effects in the late 1980s).

Lexical non-local transfer of control is effectively the ability to write code
which looks like (made up JS like syntax):

    
    
      function find(haystack, needle) {
        iterate_big_datastructure(function(x) { if (x == needle) goto found; });
        return false
        found:
        return true
      }
    

And allows unwinding the stack to lexically scoped labels (whereas exceptions
transfer control to dynamically bound labels). This is reliable because the
stack-unwinding can’t really be stopped like it can with an exception.
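In a language without lexical goto, that first snippet can be approximated with a private escape exception. A Python sketch (only an approximation: real lexical goto cannot be intercepted mid-unwind the way an exception can):

```python
def find(haystack, needle):
    # A private exception plays the role of the lexical "goto found":
    # it can only unwind back to this function's frame, because the
    # class is not visible anywhere else.
    class Found(Exception):
        pass

    def visit(x):
        if x == needle:
            raise Found  # non-local exit straight back to find()

    def iterate_big_datastructure(callback):
        # Stand-in for a deep or expensive traversal we can't easily
        # return early from.
        for item in haystack:
            callback(item)

    try:
        iterate_big_datastructure(visit)
    except Found:
        return True
    return False

print(find([3, 1, 4, 1, 5], 4))  # → True
print(find([3, 1, 4, 1, 5], 9))  # → False
```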

This then allows conditions to be implemented by dynamically binding a list of
condition handlers (functions which decide what to do when there is a
condition) and restarts (functions which do the thing by non-locally
transferring control out of themselves). Then instead of raising an exception
and unwinding the stack, one signals a condition by creating the condition
object, computing the set of restarts, and asking each handler in turn what to
do until a handler invoked a restart which will unwind the stack to the right
place to resume.

So one might ask how conditions (or rather, lexical goto) are different from
algebraic effects. The difference is that all these do is let you unwind the
stack to specific places, using closures and syntax to give the impression
that control flow briefly jumps up the stack and back. Algebraic effects instead
give you delimited continuations: when an effect is performed, the handler is
given a continuation which is a bit like a return pointer plus the bit of the
stack between the handler and the place the effect was performed, packaged up
to look like a function. This means that one can put these continuations into
data structures and do other things, effectively forking the stack. Lexical
goto doesn’t let you implement something like async/await but algebraic
effects do.
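A one-shot version of that "packaged-up stack" can be sketched with Python generators (a stand-in, not real algebraic effects): yield performs the effect, and the suspended generator is the continuation handed to the handler:

```python
def program():
    # Each yield "performs an effect"; the paused generator itself is
    # the delimited continuation handed to the handler.
    a = yield ("ask", "first number")
    b = yield ("ask", "second number")
    return a + b

def run(gen, answers):
    # The handler loop: receives each performed effect together with the
    # continuation (the suspended generator), and decides how to resume it.
    try:
        effect = gen.send(None)  # start the computation
        while True:
            tag, _prompt = effect
            assert tag == "ask"
            effect = gen.send(answers.pop(0))  # resume the continuation
    except StopIteration as done:
        return done.value

print(run(program(), [2, 3]))  # → 5
```

The limitation is that a generator is a one-shot continuation: the handler cannot resume it twice or keep both forks of the stack alive, which is part of the extra power real effect handlers (and general delimited continuations) have.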

As a diagram, here is a schematic of the stack in the two language features.
We’ll try to write down stack frames with dots, a bar for a condition/effect
handler and > for the top of the control stack.

    
    
      Condition system
      ........|..........>
        Condition signalled
        Call closure created by function at |
      ........|...........>
        Find and invoke restart
        e.g.
      ........|......>
        or
      ......>
      
      Effect system:
      .........|...........>
        perform effect
                 ,.........*
      .........|<
                 \.>
         stack is now “forked”, control is at the effect handler
         It can return control back to *, invoke another function (leaving the stack forked),
          return up the stack (invalidating the continuations),
          or otherwise transfer control (e.g. perform another effect).
    

A final question is how algebraic effects are different from having delimited
continuations and building an effect system on top. The answer is that they
work in strongly typed languages, where one must know what type will result
from performing an effect and be sure that all of a function's effects will
be handled.

Another way to do these effect-like things is with monads, which effectively
convert one's code into continuation-passing style. These can make things
slow if the compiler doesn't like inlining/allocating/calling closures. Then
one can have programs to evaluate these monads which work like the effect
handlers, because the program is already set up to put its continuations into
the monad. It is hard to compose multiple monads (in particular if one
doesn't have typeclasses or an MTL equivalent), but it looks like algebraic
effects will be more composable.

~~~
fjfaase
One type of refactoring is to replace recursion with iteration. You could
replace iterate_big_datastructure with an iteration class. I know, it will
require dynamic memory allocation. But do not forget that you can also run
out of stack if you allow unlimited recursion. Usually the heap is much
bigger than the stack, especially if you are working with a lot of threads
that all require their own stack. In a sense it would be like allocating one
chunk of memory and using it for the 'recursive' invocations inside your
iterator. An iterator-like implementation would also allow you to 'jump'
around and restart at a different location.

~~~
dan-robertson
I don’t know if you’re trying to solve the problem in the first snippet or the
more general problem. The first snippet is just to illustrate what a lexical
goto can do rather than the only thing it can do.

A similar transformation would be converting to CPS with a big trampoline.
Both of these differ from lexical goto and algebraic effects in a few ways:

\- they require annoying manual/automatic code transformations that may make
your code slower

\- they don’t necessarily provide much safety

\- they can be hard to type

\- they can be hard to compose

These are issues which algebraic effects aim to solve.

Note that these techniques are strictly more powerful than lexical goto,
which only lets you unwind the stack to a particular stack frame (and recall
that unwinding means you forget about everything unwound).

------
itsbits
I worked on a school project on algebraic effects using Eff. I would surely
recommend it if you want to know more about them.

[https://www.eff-lang.org/try/](https://www.eff-lang.org/try/)

------
mehrdadn
Is it safe to say this is basically a more practical version of
EXCEPTION_CONTINUE_EXECUTION? [1]

Also, on another note, it's not really true that "things in the middle don’t
need to concern themselves with error handling". That's what exception-safety
is about. You very much do need to concern yourself with exception handling if
you want to allow the possibility of a caller handling a callee's exception.
Also see [2].

[1] [https://docs.microsoft.com/en-
us/windows/win32/debug/excepti...](https://docs.microsoft.com/en-
us/windows/win32/debug/exception-handler-syntax)

[2]
[https://devblogs.microsoft.com/oldnewthing/20120910-00/?p=66...](https://devblogs.microsoft.com/oldnewthing/20120910-00/?p=6653)

~~~
TuringTest
Or "structured" coroutines, which jump to a previously established handler? In
the _enumerateFiles_ example, control keeps jumping between that routine and
the different procedures declared in the handler, which take care of different
needed functions.

------
chc4
Maybe I'm missing something, but isn't this essentially just coroutines? In
Lua you can do `myFile = coroutine.yield("get a file")` to pause your
coroutine, and when the caller does `coroutine.resume(someFile)` it's resumed
with the value passed in.

EDIT: I guess the difference would be that in Lua, yields return to where
they were resumed each time, while with algebraic effects they return to the
nearest handler for that effect. You'd need some boilerplate in Lua to bubble
all the effects you don't care about up another level at each handler.
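That bubbling boilerplate can be sketched with Python generators (illustrative only; all the names are invented): an inner handler deals with the effects it knows and re-yields the rest to the next handler out:

```python
def task():
    # Performs two kinds of effects: "open" and "log".
    f = yield ("open", "a.txt")
    yield ("log", f"got {f}")
    return f.upper()

def with_logging(gen):
    # Inner handler: handles "log" effects, bubbles everything else up
    # by re-yielding it (the boilerplate Lua coroutines would need).
    value = None
    try:
        while True:
            effect = gen.send(value)
            if effect[0] == "log":
                value = None          # handled locally; resume with nothing
            else:
                value = yield effect  # bubble to the next handler out
    except StopIteration as done:
        return done.value

def run(gen):
    # Outer handler: handles the remaining "open" effect.
    value = None
    try:
        while True:
            effect = gen.send(value)
            assert effect[0] == "open"
            value = f"<contents of {effect[1]}>"  # fake file read
    except StopIteration as done:
        return done.value

print(run(with_logging(task())))  # → <CONTENTS OF A.TXT>
```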

~~~
onion2k
JS's generator functions have a yield operator that works in a very similar
way: a function can 'pause' and return a value, and then resume from the same
place the next time it's called. I think that's closer to Lua's yield than
the effect Dan is talking about in the article.

[https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Refe...](https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Reference/Operators/yield)

~~~
wruza
Except that in js all of the call stack must be marked as generators. Lua
thread can yield through any careless function, iterator and even C routine
(if the latter does a simple continuation trick).

~~~
k__
Seemed like a JS specific problem to me, because it's single threaded.

~~~
wruza
Coroutines are light (cooperative) threads, not preemptive ones, and Lua is
strictly single-threaded too^. Lua Lanes library provides hardware threads to
Lua programs by creating separate vm states and managing interactions between
these: [http://lualanes.github.io/lanes/](http://lualanes.github.io/lanes/)
but it is another beast.

^ except rare cases when embedded with lua_lock() defined as a thread locking
routine. Then it becomes thread-safe, but still not multithreaded.

------
fermigier
In Python:

[https://pypi.org/project/effect/](https://pypi.org/project/effect/) (Effect
library)

[https://www.youtube.com/watch?v=fM5d_2BS6FY](https://www.youtube.com/watch?v=fM5d_2BS6FY)
(talk from PyConNZ 2015).

(Shameless plug: this is one of the libraries listed in
[https://github.com/sfermigier/awesome-functional-
python](https://github.com/sfermigier/awesome-functional-python) ).

------
pron
I'm far from convinced of the utility of algebraic effects, but if you like
them, implementing them in Java (or any Java platform language) would be
possible soon thanks to Project Loom [1]. The scoped (a la multi-prompt)
delimited continuations provided by Loom are intended for other uses, but
they could also be used to implement algebraic effects.

[1]
[https://wiki.openjdk.java.net/display/loom/](https://wiki.openjdk.java.net/display/loom/)

------
xvilka
Well, OCaml is the only mainstream language working on integrating algebraic
effects, but the work[1] being done is very slow for a project of such
importance (it is also a part of Multicore OCaml). Nevertheless, OCaml loses
some points to Rust due to its lack of proper parallelism, so I hope the Rust
ecosystem will put more attention into the efforts[2][3][4] to bring
algebraic effects to the language.

[1] [https://github.com/ocaml-multicore/ocaml-
multicore/projects/...](https://github.com/ocaml-multicore/ocaml-
multicore/projects/3)

[2] [https://github.com/pandaman64/effective-
rust](https://github.com/pandaman64/effective-rust)

[3] [https://kcs1959.jp/archives/4387/general/algebraic-
effects-f...](https://kcs1959.jp/archives/4387/general/algebraic-effects-for-
rust)

[4]
[https://qiita.com/__pandaman64__/items/9fd47af5a39f0d2a6bbb](https://qiita.com/__pandaman64__/items/9fd47af5a39f0d2a6bbb)

------
azangru
Brandon Dail from the React team gave a talk about what they mean by algebraic
effects at React Rally a year ago:
[https://youtu.be/7GcrT0SBSnI](https://youtu.be/7GcrT0SBSnI)

------
amelius
Where does the name come from? The word "effects" makes me think of "side
effects", which is something I'd usually like to avoid.

~~~
smilliken
Algebraic effects are the opposite of "side" effects: they are intentional
and controlled. In Haskell, the effects of a function are described
explicitly in its type (and transitively in its callers' types).

------
wvlia5
What is 'algebraic' about algebraic effects?

~~~
strictfp
I've heard the phrase "forming an algebra" being used for delaying effects by
means of recording the intents into a data structure. So for instance a
function which initially performs file i/o as a side effect, could instead
return a data structure "FileOperations" with entries like "FileWrite(a.txt,
Hello world)".

Maybe that's where they got the name from? Although performing the effects
directly through a pause/resume mechanism doesn't sound algebraic to me.
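That "recording intents" idea is easy to sketch (Python here; FileWrite and the interpreter are illustrative, not any real library):

```python
from dataclasses import dataclass

# Effects as data: the function returns a description of the I/O it
# wants performed, rather than performing it.
@dataclass
class FileWrite:
    path: str
    contents: str

def greet_to_file(name):
    # Pure: builds a list of operations (the "algebra") instead of doing I/O.
    return [FileWrite("a.txt", f"Hello {name}")]

def interpret(ops, fs):
    # One possible interpreter: apply the operations to an in-memory
    # dict standing in for a filesystem. A real interpreter could do
    # actual I/O; a test interpreter can stay pure like this one.
    for op in ops:
        if isinstance(op, FileWrite):
            fs[op.path] = op.contents
    return fs

print(interpret(greet_to_file("world"), {}))  # → {'a.txt': 'Hello world'}
```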

------
otakucode
> It turns out, we can call resume asynchronously from our effect handler
without making any changes to getName or makeFriends:

This sounds like a terrible idea. Especially given the nature of Javascript,
this basically would mean that you would have to write every single bit of
code to be re-entrant. What if you 'perform' and then the 'resume' doesn't
happen until every single assumption made about the entire program state for
the entirety of the function in which the perform occurs has been invalidated?
After doing a 'perform', you would have to operate under the presumption that
nothing done in the first portion of the function has any relevance any
longer, no? The enumerateFiles example later in the article is an even better
example. It performs in multiple places, but carries on like it's a normal
function, not considering that at each of those performs, the entire state of
the program could be changed, and none of the conditions established prior to
those lines can be relied upon to still hold.

------
gumby
Instead of ES2025 he should have used "ES1978", because the early Lisp
signalling systems were more general than just errors and had continuable
exceptions. This evolved into the Common Lisp condition system when Common
Lisp was standardized in the 1980s.

------
arsdragonfly
Can anyone tell me the relationship between this and call/cc?

------
transfire
I wonder how much of this is more easily (or less easily) handled with Ruby-
like blocks. One can pass in a procedure as an argument to handle the
conditional execution.

------
dusted
Interesting side effect of Dijkstra's rant :)

"Imagine that you’re writing code with goto, and somebody shows you if and for
statements."

    10 for a = 0 to 10
    20 if a = 5 then goto 50
    30 next
    40 end
    50 print "5"
    60 goto 30

I lack imagination.

That said, algebraic effects sounds interesting indeed.

------
zbentley
I'm not wild about this article; it gives lots of "you can go from code like
this, to code like this!" examples with non-equivalent functionality, which is
confusing if you're skimming.

Totally separate from stylistic quibbles, I also think effect systems are
often oversold by folks who like them (usually, in my experience, folks from a
functional background).

Fundamentally, a lot of the code we write _is_ the effectful IO plumbing. By
that I mean: there are very few complicated algorithms, or really any hand-
written algorithms at all, in a large amount of code written for modern
systems. Instead, the complexity and value of the code is in the way it
coordinates different external IO sources/sinks. This is pretty well
illustrated in the article's toy directory enumeration/file handling example:
with the IO/system specific stuff extracted into the effect receivers
(processors? handlers?), the remaining code is not just simple, it's
_vacuously_ simple. The complexity and trickiness of handling IO, dealing with
error conditions, etc. all remains, though, in the effect receivers. This is
subjective, but that seems more akin to the "over-extracting methods to the
point where all you do is increase the line count" school of refactoring than
the "improving the comprehensibility/maintainability of the code" school.
Generifying IO interactions behind an effect system in a codebase that is
primarily occupied with gluing together external systems results in moving so
much of the code into effect receivers that nothing useful remains behind.

Put another way: often, _what_ we're doing is intimately coupled with _how_ :
like, sure, I'm technically "piping data from a source into a sink with a
transformer in between", but they don't pay me to write "source |> transformer
|> sink", they pay me to write (for example) the SELECT statement in the
source, the column mapping/reformatting logic of the transformer, and the
POST-to-endpoint in the sink. If those things already existed, it would be the
one-liner above, but they don't for the business domain, so we make them. Once
they're written, by all means, modularize them and make them easily usable in
a streamable, convenient way. But most of the interesting code, once you peel
back the curtain on "source" or "sink" is still going to be in its
effectfulness.

Then there's the argument from modularity/swappability: that you can replace
effect handlers with equivalent handlers for doing other things. If you're
writing a system with many swappable backends, this may be useful. However,
most systems don't have that property. Datastores and effect receivers change
much less often than the data flow itself. And past a certain point you end up
with "old Java"-style modularity: things abstracted so far away in service to
unneeded pluggability that the code becomes harder to follow and maintain
(especially given that the code may be primarily/near-exclusively concerned
with specifics of IO flow).

To be sure, there are some cases where effect systems can really help. I just
don't think those are as numerous as FP proponents think they are.

------
dvfjsdhgfv
> Algebraic Effects are a research programming language feature. This means
> that unlike if, functions, or even async / await, you probably can’t really
> use them in production yet. They are only supported by a few languages that
> were created specifically to explore that idea. There is progress on
> productionizing them in OCaml which is… still ongoing. In other words, Can’t
> Touch This.

WalterBright: "Challenge accepted!"

------
ragerino
In Java you simply extend an Exception class (checked or unchecked) and handle
it properly regardless of the error message.

Modern IDEs can detect exception types which don't exist yet, and create them
on the fly while you try to use them for the first time.

~~~
contravariant
This would be a replacement if Java had the concept of a continuation, but as
far as I know that isn't the case.

Although in this case it simply seems to be a way to interact with some kind
of environment, which in object-oriented languages is easily achieved with
dependency injection (which does mean you have to pass an extra argument
through to every function, but that's not usually that big a problem, and it
can give hints on how to combine and divide environments).

------
layoutIfNeeded
Those Game of Thrones references are super cringey.

------
wruza
Now conditions/restarts. How many more decades should we wait until they
rediscover all the programming tools? Please, rewrite your browser legacies
once to support a non-trivial execution model and allow any decent language
to stop this madness.

~~~
tiborsaas
Step 1) Find a language that can do that

Step 2) Compile it to WebAssembly

Step 3) Validate those input fields

Step 4) Profit

~~~
wruza
I'd better wait a decade; it ain't much of a competition since we're all in
the same boat. It's nice that many are happy with what we have now, but I
don't understand why you dismiss this suggestion so easily (and
superficially, as it seems).

Wasm doesn't allow steps 1 and 2, since browsers dictate how
io/device/extension-communicating code should be done, and their native
routines are not ready for techniques of a level higher than just a bare
callback. Wasm is not a solution, since the platform and primitives are the
same. It is basically the same javascript-in-a-browser model, with syntax and
scoping rules to be implemented by someone else.

One can emulate any language by turning js/wasm into a virtual machine, but
that’s not speed or battery-friendly.

------
phoe-krk
> The best we can do is to recover from a failure and maybe somehow retry what
> we were doing, but we can’t magically “go back” to where we were, and do
> something different. But with algebraic effects, we can.

This is _literally_ Common Lisp's condition system which a) decouples
signaling conditions from handling conditions, b) allows you to execute
arbitrary code in the place that signals, c) allows you to stack independent
pieces of condition-handling code on one another so they run together, and d)
allows you to unwind the stack or _not_ unwind the stack at any moment, so
your code may continue running even if it ends up signaling.

ANSI Common Lisp was standardized in 1994. This post is from 2019. That's
reinventing a 25+ year old wheel the author is likely unaware of.

~~~
tiborsaas
> That's reinventing a 25+ year old wheel the author is likely unaware of.

These comments are not helpful. React is a UI library and this concept would
be new in this world. Who cares if it's done by the Simpsons already? It's a
nice trivia, but what should we do with this information?

~~~
phoe-krk
The usual. Find out how the concept was successfully implemented before, learn
from these implementations' mistakes, and leverage all of that knowledge in
your current work.

~~~
tiborsaas
That's a much nicer, forward-pointing comment, and it's good to know there's
prior art. I did read some bitterness between the lines.

~~~
phoe-krk
Probably just bitterness from the general trend of reinventing _everything_
in computer science, all the time. People who are unaware of former work on
any given topic are forced to reinvent it, poorly. People who are aware of it
are forced to watch everyone else reinvent it.

Without any sarcasm, communication is a hard and unsolved problem in general.

~~~
codebje
The irony here is that you appear to be unaware of the former work on
algebraic effects, to the point that this one lightweight blog post forms your
entire understanding of it and you mistake it for a limited and ill-defined
language construct...

