Algebraic Effects for the Rest of Us (overreacted.io)
239 points by alaaf 83 days ago | 103 comments

The author ought to look into and write about the Common Lisp condition system, which allows error handlers to invoke restarts at different parts of the call stack. [1] The long-story-short on them is that they decouple the treatment of exceptional situations (or conditions) into three orthogonal roles: signaling the condition (akin to “throwing”), handling the condition (akin to “catching”), and recovering from the condition (which has no analogue in popular languages). The signaler, the handler, and the recoverer can be three disjoint bodies of code sitting in different parts of your call stack.

Doesn’t have a cool name like “algebraic effects”, and doesn’t have cool operational semantics written out, but it does something quite similar to what the article describes. Here is a little example I cooked up for a Julia programming audience. The code offers an API for computing roots of functions, and has a DIVERGENCE-ERROR condition and a handful of different restarts which handlers of said error can invoke. [2]

If you want to see what languages will look like in N years, it’s always wise to see what Common Lisp or Scheme are up to.

[1] http://www.gigamonkeys.com/book/beyond-exception-handling-co...

[2] https://github.com/stylewarning/lisp-random/blob/master/talk...

Algebraic effects can actually be implemented with delimited continuations [0].

Algebraic effects are more aimed towards statically typed languages like Haskell or Ocaml. They can replace most uses of monad transformers, which has both cognitive and performance benefits.

As you showed, there are better ways of solving the problem in lisps.

Tagless final algebras are another much more popular alternative that has been proven very effective in practical software. In tagless final, one writes composable DSLs (which are just records of functions) with the nature and interpretation of effects left abstract.

One then writes interpreters which interpret the DSL, giving meaning to the effects. This achieves the same fundamental goals as algebraic effects, but just using the ordinary language features of static FP languages.

[0] https://docs.racket-lang.org/reference/eval-model.html#%28pa...

[1] http://okmij.org/ftp/tagless-final/index.html

Algebraic effects and delimited continuations are strictly more powerful than the Common Lisp condition system which is largely powered by the lexical goto (or return-from) feature. Invoking a restart can only unwind the stack (and the bit which is unwound can never be gotten to again), whereas algebraic effects allow more of a “forking” behaviour.

I think raganwald made the same point here: https://www.reddit.com/r/javascript/comments/cfz8lo/algebrai...

And the author (gaearon, aka Dan Abramov) replied:

> As far as I understand, the difference is that effects are actually typed (as part of your function signature), which I didn't go into — and so you have a lot more guarantees about what a function can or cannot do.

Smalltalk also had resumable exceptions, and I remember implementing them in Ruby too, with callcc, but I think algebraic effects are more general, and by focusing on exceptions the author has sort of hidden that, even if he kept saying it's just an example.

The condition system is also quite general. It’s more of a system for communicating with and passing control to distant (but structured) parts of the call stack. It’s also quite dynamic in nature; the behavior isn’t captured with purely lexical functions. It’s possible to use the condition system purely as a communication mechanism, as a way to implement dynamic bindings, and many other things. It just happens it’s framed in the language of error handling and that’s its most popular use.

With that said, I’m not claiming they’re equivalent to algebraic effects. However, looking at the semantics provided by Pretnar [1], the Common Lisp condition system is quite close to the functionality offered in the purely functional formalism.

[1] https://www.eff-lang.org/handlers-tutorial.pdf

> I think algebraic effects are more general

Not an expert on algebraic effects, but I suspect they are both more and less general.

On the one hand, "algebraic" hints at some constraints in the name of static typeability. Such restrictions wouldn't be present in Common Lisp.

On the other hand, Common Lisp only has single-shot downward continuations, so you won't get, e.g. nondeterminism-as-an-effect.

Like reikonomusha said, CL's condition system is pretty general in itself. The naming choice isn't an accident - the thing that's being "raised" is a "condition" (of which "error" is just a subtype), and what you do with a condition is "signal" it. In CL, most of the time it's used for exception handling, but I've seen code using this system for tasks not related to errors.

As a simple example, you can imagine a data-processing function, for use in a potentially interactive application, that reports progress and allows for aborting:

  (define-condition progress ()
    ((amount :initarg :amount :reader amount)))

  (defun process-partial-data (data)
    "NOOP placeholder"
    (declare (ignore data)))

  (defun process-data (data)
    (restart-case
        (loop with total = (length data)
              for datum in data
              for i below total
              initially
                (signal 'progress :amount 0)
              do
                (process-partial-data datum)
                ;; Report progress
                (signal 'progress :amount (/ i total))
              finally
                (signal 'progress :amount 1)
                (return :done))
      (abort-work ()
        (format *trace-output* "Aborting work!"))))
The "business meat" of our function is the loop form. You'll notice it reports its progress by signalling a 'progress condition, which, without installed handlers, is essentially a no-op (unlike throwing an exception). The "meat" is wrapped in restart-case form, in order to provide an alternative flow called 'abort-work (you can provide more than one named flow).

Now for the REPL sessions (-> denotes returned value). First, regular use:

  CL-USER> (process-data '(1 2 3 4 5 6))
  -> :DONE
Let's simulate a GUI progress bar, by actually listening to the 'progress condition:

  CL-USER> (handler-bind ((progress (lambda (p) (format *trace-output* "~&Progress: ~F~%" (amount p)))))
             (process-data '(1 2 3 4 5 6)))

  Progress: 0.0
  Progress: 0.0
  Progress: 0.16666667
  Progress: 0.33333334
  Progress: 0.5
  Progress: 0.6666667
  Progress: 0.8333333
  Progress: 1.0
  -> :DONE
A progress bar in a GUI usually has a "cancel" button. Let's simulate it by assuming that user clicked "cancel" around the 50% progress mark, through invoking the 'abort-work restart programmatically:

  CL-USER> (handler-bind ((progress (lambda (p) (format *trace-output* "~&Progress: ~F~%" (amount p))
                                                (when (>= (amount p) 0.5)
                                                  (invoke-restart 'abort-work)))))
             (process-data '(1 2 3 4 5 6)))
  Progress: 0.0
  Progress: 0.0
  Progress: 0.16666667
  Progress: 0.33333334
  Progress: 0.5
  Aborting work!
You'll note that the function code is entirely transparent to how the progress reporting and abort decision work; it's the caller-level handlers that are concerned with it. It works in the console, it can work with Lisp's interactive debugger, and it could work with a GUI just as well. Hell, it could work with network requests (and I've seen similar code for writing handler response code for multiple protocols, letting you deliver partial results where supported, and transparently buffering them where it isn't.)

N.b. your typical experience with restarts in Common Lisp is the interactive debugger that pops up when an error gets unhandled. This example serves as a reminder that restarts are not just for errors, and that you can invoke them programmatically - building applications that can figure out how to handle their own errors.

I've expanded on this example and added some further thoughts in a blog post: http://jacek.zlydach.pl/blog/2019-07-24-algebraic-effects-yo....

Note also that this mechanism is usually hidden behind a macro. In user code one would just write:

    (with-progress-noted (datum data)
      (process-partial-data datum))
And this then would expand into a machinery like above.

The Lisp Machine OS had a system-wide progress bar at the bottom of the screen, which then would show the progress of some process - usually the front process or something related.

Thanks for this example! Reminds me of callbacks in JavaScript or event listeners in OOP, with slightly different (better?) structure!

The essential feature of algebraic effects is that the restarts can be passed around as first-class values and resumed multiple times. Can CL do that?

Scheme certainly can, via call/cc

I was thinking throughout reading this: "Isn't this just call/cc?"

All monadic computation can be accomplished with continuations, which offer a way to thread the monadic bind between sub-programs. But at some point this is equivalent to saying that all continuations can be implemented with machine language... True, but beside the point.
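To make "thread the monadic bind between sub-programs" concrete, here is a rough continuation-passing sketch in Python (`unit`, `bind`, and `add1` are names I made up for illustration, not from any library):

```python
# Continuation-passing sketch: a computation is a function awaiting a
# continuation k; bind decides which continuation gets passed along.

def unit(x):
    return lambda k: k(x)          # immediately hand x to the continuation

def bind(m, f):
    # run m, then feed its result into f, chaining the final continuation
    return lambda k: m(lambda x: f(x)(k))

def add1(x):
    return unit(x + 1)

program = bind(bind(unit(40), add1), add1)
print(program(lambda x: x))        # run with the identity continuation -> 42
```

Because every step receives its continuation explicitly, a handler could just as well stash that continuation somewhere instead of calling it, which is the whole game.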

The main purpose of ideas like algebraic effects is to build on a firm understanding of a model of computation. Implementing algebraic effects (whether with call/cc or otherwise) lets one specify a program's behaviour more precisely.

Whether this seems useful or not is probably closely aligned with whether you think static types and purity are useful or not, perhaps.

call/cc with handlers i.e. delimited continuations, yes.

Even Visual Basic has a crude variant in the form of:

    On Error { GoTo N | Resume Next }

Yes. I read the article and thought "Wow. Somebody has rediscovered handler-bind from Common Lisp."

I, like many people in this thread, learnt about algebraic effects for the first time from the posted article. However, many commenters seem to be misled by the explanation based on an example with exceptions. What I learnt from [1] linked below is that algebraic effects are a generalization of which language constructs like try/catch, async/await, or generators are just particular cases.

From that perspective, algebraic effects make sense and look very interesting.

[1] https://github.com/ocamllabs/ocaml-effects-tutorial

I tried to address this in the article but I guess people skipped over this?

>Note, however, that algebraic effects are much more flexible than try / catch, and recoverable errors are just one of many possible use cases. I started with it only because I found it easiest to wrap my mind around it.

Algebraic effects are synchronous; they can't be a generalization of async anything, because that uses threads.

A generalization has to do everything that the specialization does, like dispatch on multiple processors.

The synchronous version of async/await is delay/force; that is just macrology over some lambdas.
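For the curious, delay/force really is just lambdas plus memoization; a rough Python sketch (the names are mine, not any particular library's API):

```python
def delay(thunk):
    # wrap the computation; the empty list doubles as a memo cell
    cache = []
    def forced():
        if not cache:
            cache.append(thunk())  # run the suspended computation at most once
        return cache[0]
    return forced

def force(promise):
    return promise()

p = delay(lambda: 2 + 3)   # nothing computed yet
print(force(p))            # computed now -> 5
print(force(p))            # memoized -> 5
```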

Not all implementations of async are thread-based. For instance, in C# it's task-based. In F# it's thread-based.

Am I the only one who thinks this is not a good feature? Instead of A calling B calling C, with a well-defined hierarchy encapsulating complexity, you have A calling B calling C calling maybe B calling maybe C again. I see real value in unidirectional call graphs because they are so much easier to reason about. This feels like another gimmick to break abstractions and increase architectural complexity. You can't, for example, take C and rewrite it without having to know about and touch B. That increases coupling, and so is a bad idea in my book.

Effects are not about calling hierarchy.

Effects are about doing a computation dependent on wider context: IO, thread scheduling, non-deterministic computations, mutable references.

Now, in your example `A calling B calling C and having a well defined hierarchy and encapsulating complexity` you already have effects: threads are being scheduled by OS/runtime, IO is performed, memory cells are written.

The only difference from your avg. imperative language with A calling B is that those effects are implicitly baked into the language, while algebraic effects let you define your own effects as well.

So instead of `async val` you could simply do `(perform Async val)`, which would return val in the context of fiber scheduler (aka will do the necessary scheduling for continuing the computation). With effects you could extend your language with new effectful semantics without descending into metaprogramming hell/fixing the language.

In fact, you could think of Effects as of Monads, but composable.

They are a way of parameterizing what would otherwise be ambient authority.

Rather than needing to pass your file system accessor object, logging object, network interface, database connection provider, etc. all the way through your call graph A -> B -> C, you can access those using apparently ambient authority, but still leave the code testable, and all without IoC gymnastics.

I don't think the coupling argument is very strong when the effects are encoded in the type system. The kinds of effects that make sense are ambient authority operations which otherwise need explicit parameters. If they're encoded in the type system it's important that higher order functions are parameterized on possible effects to avoid unnecessarily limiting composition.

Of course all this is easier in a language with global type inference, since that will generate appropriately generic signatures that don't needlessly prevent flow of type information.

I don't think you're the only one. Like a lot of things that come from academia A, B and C are all well defined short functions where their pre and post conditions are clear. In real world code A, B and C will more likely be a mess of code written by 20 people over ten years resulting in the comment "DO NOT TOUCH".

As with all language features though, idiots will always take things too far.

Near the end of the article:

> Because algebraic effects are coming from statically typed languages, much of the debate about them centers on the ways they can be expressed in types. This is no doubt important but can also make it challenging to grasp the concept. That’s why this article doesn’t talk about types at all.

I'm only remotely familiar with algebraic effects, but I thought the whole point was to have a nice & composable way of dealing with effects in the type system, as an alternative to monads that generally do not mix well.

Also, Daan Leijen's papers on Koka are pretty accessible.

In e.g. Haskell, you often don't explicitly write out types, letting the compiler infer them for you. This could essentially cascade all the way up.

Meanwhile, if you want to add logging in Haskell via a monad, you don't just need to change the type of each calling function to make it monadic, but you need to rewrite the function to make it monadic. That is harder work than changing some types. Moreover it is harder to automate by an IDE.

That explains why one of his examples is a log handler, which in javascript would be sensibly provided by dependency injection, but that doesn't work as well for haskell.

Things I'd want more information on: what about errors in effect handlers (or would errors be reimplemented as effects?), effects with no handler, which handlers do effects inside handlers use? Is it really worth making it much harder to reason about apparently straight through local code in order to gain the benefits (imagine you check a condition that your later code relies on and then call a log function that unbeknownst to you happens to use effects and doesn't return control until much later when the condition no longer holds?) Can we provide timeouts to effects? Get progress updates? Where should we use effects and where should we use dependency injection? Can library code detect whether handlers are installed for the effects it might want to use up front and fail fast or provide defaults if they aren't there? Will our debuggers understand? What are the practical best practices to avoid the kind of insane spaghetti that this seems to invite?

Dependency injection works fine on Haskell. An effect system is one way of implementing it.

That is, you write code that says it may perform any effects from the following list. At some central point, you execute that code in the context of a handler for those effects.

That's the same idea as using dependency injection to provide a service. The only difference is that the ergonomics are better. You can't forget to inject a service, the types prevent it.

You're right, this is an interesting way to implement dependency injection.

A big problem this approach solves is that code relying on injected code no longer needs to predict in advance all the things that injected code might need to do. This problem is much less pronounced in a dynamically typed language (since you haven't had to predict the type signature of the injected code).

In javascript the most significant time this is a problem is when the injected code might need to be asynchronous. In standard javascript, if that's the case, the surrounding code needs to know about it.

In Haskell, you have the same problem doing something as simple as debug logging.

My concern is just that adding effects reduces how much you know about what a particular function does just by looking at it, and this is true in both typed and untyped languages.

Logging is probably easier, since a tuple with the output and the logging naturally forms a monad. What is really tricky to pull off is allowing interaction with the environment, which basically requires a more fine-grained (and customizable) version of the IO monad.
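The "tuple of output plus log" structure mentioned above, sketched as a Writer-style monad in Python (`unit`, `bind`, `tell` are assumed names, not a real library API):

```python
def unit(x):
    return (x, [])                 # a value with an empty log

def bind(m, f):
    x, log = m
    y, more = f(x)
    return (y, log + more)         # thread the value, concatenate the logs

def tell(msg):
    return (None, [msg])           # a pure logging step

def double(x):
    return (2 * x, [f"doubled {x}"])

print(bind(bind(unit(3), double), double))
# -> (12, ['doubled 3', 'doubled 6'])
```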

Monads have the problem that they don't commute, which is usually not what you want for handling events (if you've got handlers for two different types of event, it shouldn't matter which event handler was registered first). Maybe algebraic effects solve this problem.

> Near the end of the article:

>> Because algebraic effects are coming from statically typed languages,

Though in fact the origins come from dynamically typed languages like Lisp, where condition signalling systems were developed in the 1970s. When they started appearing in statically typed languages in the late 80s, they were only used for errors, and by the time they were caught the stack had already been unwound.

What's old is new again.

Sure but I mean most work on algebraic effects (or at least the papers I know of) deals with the associated type system.

Instead most people here seem to focus on the semantics, which indeed boil down to call/cc + handlers or previous condition systems.

Well, a reason to care about effects systems is to have a way to specify what a program is doing in a way that lets you calculate and manipulate properties of it (from proofs to optimisations), which generally means static typing and purity.

The curry-howard equivalence is only as useful as the strength of the type system. We ideally want proof irrelevance, that is, any program satisfying the types is sufficient, but while some things remain outside the type system this is only true up to a point (think: time and space costs). Putting effects into types helps this along.

Those 1970s/1980s condition systems specifically evolved from the inheritance-based type systems originally developed for object-oriented systems.

Computer science is a field that notoriously ignores the past, so these folks are far from the only people to reinvent something already well developed.

This is spectacular revisionism.

Conditions arose out of MIT's AI labs, a place full of entrepreneurial enthusiasm and practical work; conditions were described in the early 80s.

Algebraic effects come out of Edinburgh, a place full of academic research focusing on abstract algebraic models of computing.

Algebraic effects build on theoretical work by Wadler, Spivey, and Moggi on monadic computation, not on Lisp. Similarly, Lisp around that era built primarily on the previous work on Lisp out of MIT, not on models of computing from around the other side of the globe.

These labs had different focuses, goals, and approaches. Lisp is one of a dozen languages of the time that found clever ways to express control flow and exceptional flow, but none of them were trying to find a model for defining semantics.

Years later, we discover that many of those old constructions are almost equivalent to models derived by PL research. That's not surprising, as the models are meant to help us describe programming in general, and the problems were already there to be solved. Using the models usually (not always) gives a better end result than the ad-hoc approaches.

Or sometimes we keep ignoring the models because we've always done it some other way.

I doubt if you need any new language construct to introduce this. Could you not simply pass an error-handling function/object to all your functions/methods, which is called when an error occurs? This function/object could then resolve the error, or throw an exception if it cannot resolve the error. It is possible, and relatively easy, to chain such error-handling functions/methods to implement complex error-handling strategies.
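A minimal sketch of this suggestion in Python (all names hypothetical): the handler is threaded down explicitly, and handlers chain by deferring to the next one:

```python
def chain(inner, outer):
    """Try the inner handler first; if it declines (returns None), ask outer."""
    def handler(err):
        result = inner(err)
        return result if result is not None else outer(err)
    return handler

def parse_number(text, on_error):
    try:
        return int(text)
    except ValueError as err:
        recovered = on_error(err)
        if recovered is None:
            raise            # nobody could resolve it: rethrow
        return recovered

decline = lambda err: None   # a handler that declines to handle
default_zero = lambda err: 0

print(parse_number("7", decline))                          # -> 7
print(parse_number("oops", chain(decline, default_zero)))  # -> 0
```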

The examples provided in the article aren't especially compelling.

These are capable of among other things, Prolog style nondeterministic choice and backtracking.

A more in depth introduction is here: https://www.eff-lang.org/handlers-tutorial.pdf (not really for 'the rest of us' - I struggled to understand this one, but found it interesting).

Koka's docs are also okay, but could still do with some more work to help explain things in a compelling way: https://koka-lang.github.io/koka/doc/kokaspec.html

Yes, you're absolutely right. The article gives the impression that algebraic effects give a new ability to decouple the code that uses some effects from the implementation of those effects. But that's absolutely not so: Simply passing down the implementation as an argument provides all the functionality that was described in the article.

So, in imperative/OOP languages, it's best to treat algebraic effects as inspiration for passing down objects to provide implementations; that gives you most of the same power, while using existing features of your language.

Some effect systems allow resuming multiple times, allowing more fancy features, but those features are niche, and even of questionable theoretical value because of their incompatibility with linearity.

Should the caller, callee, or somebody in between be allowed to decide what the handling function should be? How an error is handled depends highly on what context is available at the site of the error or in the stack frames above it.

The article only gives a limited example of algebraic effects. The thing that is required for a condition system (ie good resumable exceptions) is lexical non-local transfer of control and doesn’t need algebraic effects (Common Lisp had a condition system but not algebraic effects in the late 1980s).

Lexical non-local transfer of control is effectively the ability to write code which looks like (made up JS like syntax):

  function find(haystack, needle) {
    iterate_big_datastructure(function(x) { if (x == needle) goto found; });
    return false;
    found:
    return true;
  }
And allows unwinding the stack to lexically scoped labels (whereas exceptions transfer control to dynamically bound labels). This is reliable because the stack-unwinding can’t really be stopped like it can with an exception.

This then allows conditions to be implemented by dynamically binding a list of condition handlers (functions which decide what to do when there is a condition) and restarts (functions which do the thing by non-locally transferring control out of themselves). Then instead of raising an exception and unwinding the stack, one signals a condition by creating the condition object, computing the set of restarts, and asking each handler in turn what to do until a handler invoked a restart which will unwind the stack to the right place to resume.

So one might ask how conditions (or rather lexical goto) are different to algebraic effects. The difference is that all these do is let you unwind the stack to specific places using closures and syntax to give the impression that control flow briefly jumps up the stack and back. Algebraic effects instead give you delimited continuations: when an effect is performed, the handler is given a continuation which is a bit like a return pointer plus the bit of the stack between the handler and the place the effect was performed, packaged up to look like a function. This means that one can put these continuations into data structures and do other things, effectively forking the stack. Lexical goto doesn’t let you implement something like async/await but algebraic effects do.

As a diagram, here is a schematic of the stack in the two language features. We’ll try to write down stack frames with dots, a bar for a condition/effect handler and > for the top of the control stack.

  Condition system:
    ..|...>      Condition signalled
    ..|......>   Call closure created by function at |
    ..>          Find and invoke restart (unwound; those frames are gone)

  Effect system:
    ..|...>      perform effect
    ..|...*
       \..>      stack is now “forked”, control is at the effect handler
                 It can return control back to *, invoke another function
                 (leaving the stack forked), return up the stack
                 (invalidating the continuations), or otherwise transfer
                 control (e.g. perform another effect).
A final question is how algebraic effects are different from having delimited continuations and building an effect system on top. The answer is that they work in strongly typed languages, where one must know what type will result from performing an effect and be sure that all of a function's effects will be handled.

Another way to do these effect-like things is with monads, which effectively convert one's code into continuation-passing style. These can make things slow if the compiler doesn't like inlining/allocating/calling closures. Then one can have programs to evaluate these monads which work like the effect handlers, because the program is already set up to put its continuations into the monad. It is hard to compose multiple monads (in particular if one doesn't have typeclasses or an MTL equivalent), but it looks like algebraic effects will be more composable.

One type of refactoring is to replace recursion with iteration. You could replace iterate_big_datastructure with an iteration class. I know, it will require dynamic memory allocation. But do not forget that you can also run out of stack if you allow unlimited recursion. Usually, the heap is much bigger than the stack, especially if you are working with a lot of threads that all require their own stack. In a sense it would be like allocating one chunk of memory and using it for the 'recursive' invocations inside your iterator. An iterator-like implementation would also allow you to 'jump' around and restart at a different location.
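A small Python sketch of that transformation (a made-up flattening example): the recursion becomes a heap-allocated explicit stack, which also makes pausing and restarting mid-traversal straightforward:

```python
def iterate_flat(nested):
    stack = [iter(nested)]            # the explicit stack lives on the heap
    while stack:
        try:
            item = next(stack[-1])
        except StopIteration:
            stack.pop()               # done with this level: "return"
            continue
        if isinstance(item, list):
            stack.append(iter(item))  # "recurse" without a new stack frame
        else:
            yield item

print(list(iterate_flat([1, [2, [3]], 4])))   # -> [1, 2, 3, 4]
```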

I don’t know if you’re trying to solve the problem in the first snippet or the more general problem. The first snippet is just to illustrate what a lexical goto can do rather than the only thing it can do.

A similar transformation would be converting to CPS with a big trampoline. Both of these differ from lexical goto and algebraic effects in a few ways:

- they require annoying manual/automatic code transformations that may make your code slower

- they don’t necessarily provide much safety

- they can be hard to type

- they can be hard to compose

These are issues which algebraic effects aim to solve.

Note that these techniques are strictly more powerful than lexical goto, which only lets you unwind the stack to a particular stack frame (and recall unwinding means you forget about everything unwound).

Question about forking: isn't this essentially the same thing as restarts, after the garbage collector gets rid of the original (and now dangling) stack branch? Or is there something you can do with both of these branches simultaneously that's not expressible in conditions/restarts system?

The second: once a restart is invoked, the stack is unwound and so whatever was happening at the tip is forgotten. With an effect handler one is given a continuation (this might be a function or, as in ocaml, a special function-like thing which may be called only once), and you can put this to one side and do something else.

A common example for an application of algebraic effects would be implementing async/await-like mechanisms. The effect handler would take the continuations and put them into a scheduler, so its stack would effectively be heavily forked.
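Sketched with Python generators standing in for one-shot continuations (names invented): the handler parks the continuation in a run queue instead of resuming it immediately:

```python
from collections import deque

class Yield:
    """The 'effect' a task performs to hand control to the scheduler."""

def run(*tasks):
    queue = deque(tasks)            # each task is a generator (a continuation)
    finished = []
    while queue:
        task = queue.popleft()
        try:
            effect = next(task)     # resume the continuation
            if isinstance(effect, Yield):
                queue.append(task)  # park it; let the other tasks run first
        except StopIteration as done:
            finished.append(done.value)
    return finished

def worker(name):
    yield Yield()                   # suspend; the scheduler decides what's next
    return name

print(run(worker("a"), worker("b")))   # tasks interleave -> ['a', 'b']
```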

Algebraic effects are similar to Common Lisp restart handlers, but in addition, they also receive the continuation of the invoker. This means, you can use algebraic effect handlers to implement higher-order control features like coroutines, probabilistic programming, and nondeterminism (which you can't in Common Lisp).

However, what most people get wrong: you do not need higher-order control if you just want to resume after an error. This is demonstrated by Common Lisp, which doesn't have coroutines, algebraic effects, or continuations, but _can_ resume after an error.

The main example of the article could be done just fine in Common Lisp, because it doesn't use any higher-order control.

As CL is programmable, I would not claim "which you can't in Common Lisp".

The only way to get higher-order control in CL is through whole-program rewriting, e.g. SCREAMER https://www.cliki.net/screamer so I think this qualifies as "can't".

I worked on a school project on algebraic effects using Eff. I would surely recommend it if you want to know more about it.


Is it safe to say this is basically a more practical version of EXCEPTION_CONTINUE_EXECUTION? [1]

Also, on another note, it's not really true that "things in the middle don’t need to concern themselves with error handling". That's what exception-safety is about. You very much do need to concern yourself with exception handling if you want to allow the possibility of a caller handling a callee's exception. Also see [2].

[1] https://docs.microsoft.com/en-us/windows/win32/debug/excepti...

[2] https://devblogs.microsoft.com/oldnewthing/20120910-00/?p=66...

Or "structured" coroutines, which jump to a previously established handler? In the _enumerateFiles_ example, control keeps jumping between that routine and the different procedures declared in the handler, which take care of different needed functions.

Maybe I'm missing something, but isn't this essentially just coroutines? In Lua you can do `myFile = coroutine.yield("get a file")` to pause your coroutine, and when the caller does `coroutine.resume(someFile)` it's resumed with the value passed in.

EDIT: I guess the difference would be that in Lua, yields return to where they were resumed each time, while in algebraic effects they return to the nearest handler for that case. You'd need some boilerplate to bubble all effects you don't care about up another level at each handler in Lua.

Coroutines are a mechanism that you can use to implement algebraic effects. When the coroutine yields, it yields the tagged effect object which can be used to determine what effect handler to run, and after that whether to resume the evaluation of the coroutine or not.
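That dispatch loop is small enough to sketch with Python generators standing in for Lua coroutines (all names invented): the computation yields tagged effect objects, and the trampoline sends back each handler's answer:

```python
def dispatch(computation, handlers):
    # trampoline: resume the coroutine, route each tagged effect to a handler
    try:
        effect = next(computation)
        while True:
            tag, payload = effect              # a tagged effect object
            result = handlers[tag](payload)    # choose the handler by tag
            effect = computation.send(result)  # resume with the answer
    except StopIteration as done:
        return done.value

def program():
    x = yield ("ask", "x")    # perform an "ask" effect, wait for the answer
    y = yield ("ask", "y")
    yield ("log", f"sum is {x + y}")
    return x + y

env = {"x": 2, "y": 3}
logged = []
print(dispatch(program(), {"ask": env.__getitem__, "log": logged.append}))
# prints 5; logged == ["sum is 5"]
```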

One obvious difference is that in a statically-typed language with algebraic effects the compiler checks that all the effects are being handled correctly. If you implement these in raw Lua a stray "coroutine.yield" can ruin everything if it doesn't return a properly formed tagged event object.

JS's generator functions have a yield operator that works in a very similar way - a function can 'pause' and return a value and then resume from the same place the next time it's called. I think that's closer to Lua's yield than the effect Dan is talking about in the article.


Except that in JS every function in the call stack must be marked as a generator. A Lua thread can yield through any unaware function, iterator, and even a C routine (if the latter does a simple continuation trick).

Seemed like a JS specific problem to me, because it's single threaded.

Coroutines are light (cooperative) threads, not preemptive ones, and Lua is strictly single-threaded too^. Lua Lanes library provides hardware threads to Lua programs by creating separate vm states and managing interactions between these: http://lualanes.github.io/lanes/ but it is another beast.

^ except rare cases when embedded with lua_lock() defined as a thread locking routine. Then it becomes thread-safe, but still not multithreaded.

In Python:

https://pypi.org/project/effect/ (Effect library)

https://www.youtube.com/watch?v=fM5d_2BS6FY (talk from PyConNZ 2015).

(Shameless plug: this is one of the libraries listed in https://github.com/sfermigier/awesome-functional-python ).

I'm far from convinced of the utility of algebraic effects, but if you like them, implementing them in Java (or any Java platform language) would be possible soon thanks to Project Loom [1]. The scoped (ALA multi-prompt) delimited continuations provided by Loom are intended for other uses, but they could also be used to implement algebraic effects.

[1] https://wiki.openjdk.java.net/display/loom/

Well, OCaml is the only mainstream language working on integrating algebraic effects. But the work[1] is progressing very slowly for a project of such importance (it is also part of multicore). Nevertheless, OCaml loses some points to Rust due to its lack of proper parallelism. So I hope the Rust ecosystem will pay more attention to the efforts[2][3][4] to bring algebraic effects to the language.

[1] https://github.com/ocaml-multicore/ocaml-multicore/projects/...

[2] https://github.com/pandaman64/effective-rust

[3] https://kcs1959.jp/archives/4387/general/algebraic-effects-f...

[4] https://qiita.com/__pandaman64__/items/9fd47af5a39f0d2a6bbb

Brandon Dail from the React team gave a talk about what they mean by algebraic effects at React Rally a year ago: https://youtu.be/7GcrT0SBSnI

Where does the name come from? The word "effects" makes me think of "side effects", which is something I'd usually like to avoid.

Algebraic effects are the opposite of "side" effects: they are intentional and controlled. In Haskell, the effects of a function are described explicitly in its type (and transitively in its callers' types).

What is 'algebraic' about algebraic effects?

I've heard the phrase "forming an algebra" being used for delaying effects by means of recording the intents into a data structure. So for instance a function which initially performs file i/o as a side effect, could instead return a data structure "FileOperations" with entries like "FileWrite(a.txt, Hello world)".

Maybe that's where they got the name from? Although performing the effects directly through a pause/resume mechanism doesn't sound algebraic to me.
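A tiny sketch of that "record the intent, interpret it later" style in Python (the names `FileWrite`, `greet`, and `interpret` are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class FileWrite:
    """A description of a file write - data, not an action."""
    path: str
    contents: str

def greet(name):
    # Pure: returns a description of the effects, performs nothing.
    return [FileWrite("a.txt", f"Hello {name}")]

def interpret(ops, writer):
    """Separately decide how to actually run the recorded effects."""
    for op in ops:
        if isinstance(op, FileWrite):
            writer(op.path, op.contents)

log = []
interpret(greet("world"), lambda path, text: log.append((path, text)))
```

The same list of operations could be interpreted against the real filesystem, a test double, or a remote store, without changing `greet`.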

>It turns out, we can call resume asynchronously from our effect handler without making any changes to getName or makeFriends:

This sounds like a terrible idea. Especially given the nature of Javascript, this basically would mean that you would have to write every single bit of code to be re-entrant. What if you 'perform' and then the 'resume' doesn't happen until every single assumption made about the entire program state for the entirety of the function in which the perform occurs has been invalidated? After doing a 'perform', you would have to operate under the presumption that nothing done in the first portion of the function has any relevance any longer, no? The enumerateFiles example later in the article is an even better example. It performs in multiple places, but carries on like it's a normal function, not considering that at each of those performs, the entire state of the program could be changed, and none of the conditions established prior to those lines can be relied upon to still hold.

Can anyone tell me the relationship between this and call/cc?

I wonder how much of this is more easily (or less easily) handled with Ruby-like blocks. One can pass in a procedure as an argument to handle the conditional execution.

Interesting side effect of Dijkstra's rant :)

"Imagine that you’re writing code with goto, and somebody shows you if and for statements."

  10 for a = 0 to 10
  20 if a = 5 then goto 50
  30 next
  40 end
  50 print "5"
  60 goto 30

I lack imagination.

That said, algebraic effects sounds interesting indeed.

> Algebraic Effects are a research programming language feature. This means that unlike if, functions, or even async / await, you probably can’t really use them in production yet. They are only supported by a few languages that were created specifically to explore that idea. There is progress on productionizing them in OCaml which is… still ongoing. In other words, Can’t Touch This.

WalterBright: "Challenge accepted!"

Instead of ES2025 he should have used "ES1978" because the early Lisp signalling systems were more general than just errors and had continuable exceptions. This evolved into the Common Lisp condition system when Common Lisp was standardized in the early 1980s.

In Java you simply extend an Exception class (checked or unchecked) and handle it properly regardless of the error message.

Modern IDEs can detect exception types which don't exist yet, and create them on the fly while you try to use them for the first time.

This would be a replacement if Java had the concept of a continuation, but as far as I know that isn't the case.

Although in this case it simply seems to be a way to interact with some kind of environment, which in object-oriented languages is easily achieved with dependency injection (which does mean you have to pass an extra argument through to every function, but that's not usually that big a problem, and it can give hints on how to combine and divide environments).
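The dependency-injection alternative mentioned above might look like this sketch, which mirrors the article's getName example (all names are invented; the point is the extra environment parameter threaded through every layer):

```python
class Env:
    """Bundle of injected capabilities; here just one."""
    def __init__(self, ask_name):
        self.ask_name = ask_name

def get_name(user, env):
    # Instead of performing an effect, consult the injected environment.
    return user.get("name") or env.ask_name()

def make_friends(user1, user2, env):
    # The env parameter must be passed through every intermediate call.
    return f"{get_name(user1, env)} befriends {get_name(user2, env)}"

env = Env(ask_name=lambda: "Arya Stark")
result = make_friends({"name": None}, {"name": "Gendry"}, env)
```

The cost the parent alludes to is visible here: `make_friends` mentions `env` even though it never uses it directly, which is exactly the plumbing algebraic effects are meant to remove.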

I'm not wild about this article; it gives lots of "you can go from code like this, to code like this!" examples with non-equivalent functionality, which is confusing if you're skimming.

Totally separate from stylistic quibbles, I also think effect systems are often oversold by folks who like them (usually, in my experience, folks from a functional background).

Fundamentally, a lot of the code we write is the effectful IO plumbing. By that I mean: there are very few complicated algorithms, or really any hand-written algorithms at all, in a large amount of code written for modern systems. Instead, the complexity and value of the code is in the way it coordinates different external IO sources/sinks. This is pretty well illustrated in the article's toy directory enumeration/file handling example: with the IO/system specific stuff extracted into the effect receivers (processors? handlers?), the remaining code is not just simple, it's vacuously simple. The complexity and trickiness of handling IO, dealing with error conditions, etc. all remains, though, in the effect receivers. This is subjective, but that seems more akin to the "over-extracting methods to the point where all you do is increase the line count" school of refactoring than the "improving the comprehensibility/maintainability of the code" school. Generifying IO interactions behind an effect system in a codebase that is primarily occupied with gluing together external systems results in moving so much of the code into effect receivers that nothing useful remains behind.

Put another way: often, what we're doing is intimately coupled with how: like, sure, I'm technically "piping data from a source into a sink with a transformer in between", but they don't pay me to write "source |> transformer |> sink", they pay me to write (for example) the SELECT statement in the source, the column mapping/reformatting logic of the transformer, and the POST-to-endpoint in the sink. If those things already existed, it would be the one-liner above, but they don't for the business domain, so we make them. Once they're written, by all means, modularize them and make them easily usable in a streamable, convenient way. But most of the interesting code, once you peel back the curtain on "source" or "sink" is still going to be in its effectfulness.

Then there's the argument from modularity/swappability: that you can replace effect handlers with equivalent handlers for doing other things. If you're writing a system with many swappable backends, this may be useful. However, most systems don't have that property. Datastores and effect receivers change much less often than the data flow itself. And past a certain point you end up with "old Java"-style modularity: things abstracted so far away in service to unneeded pluggability that the code becomes harder to follow and maintain (especially given that the code may be primarily/near-exclusively concerned with specifics of IO flow).

To be sure, there are some cases where effect systems can really help. I just don't think those are as numerous as FP proponents think they are.

Those Game of Thrones references are super cringey.

Now it's conditions/restarts. How many more decades should we wait for them to rediscover all the programming tools? Please, rewrite your browser legacies for once to support a non-trivial execution model and let any decent language stop this madness.

Step 1) Find a language that can do that

Step 2) Compile it to WebAssembly

Step 3) Validate those input fields

Step 4) Profit

I'd better wait a decade; there isn't much competition since we're all in the same boat. It's nice that many are happy with what we have now, but I don't understand why you dismiss this suggestion so easily (and superficially, as it seems).

Wasm doesn't allow steps 1+2, since browsers dictate how io/device/extension code communicates, and their native routines aren't ready for techniques at any level higher than a bare callback. Wasm is not a solution, since the platform and primitives are the same. It is basically the same javascript-in-a-browser model with syntax and scoping rules to be implemented by someone else.

One can emulate any language by turning js/wasm into a virtual machine, but that's neither speed- nor battery-friendly.

> The best we can do is to recover from a failure and maybe somehow retry what we were doing, but we can’t magically “go back” to where we were, and do something different. But with algebraic effects, we can.

This is literally Common Lisp's condition system which a) decouples signaling conditions from handling conditions, b) allows you to execute arbitrary code in the place that signals, c) allows you to stack independent pieces of condition-handling code on one another so they run together, and d) allows you to unwind the stack or not unwind the stack at any moment, so your code may continue running even if it ends up signaling.
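A very loose Python sketch (all names invented) of the decoupling the parent describes: signaling, handling, and recovery live in different stack frames, and the handler runs while the signaling frame is still live, without unwinding the stack:

```python
handlers = []                          # dynamically scoped handler stack
restarts = {"use-value": lambda v: v}  # recovery strategies near the signaler

def parse_int(s):
    try:
        return int(s)
    except ValueError:
        # Signal: ask handlers (installed by distant callers) what to do.
        # No unwinding has happened; we can still recover right here.
        for handler in reversed(handlers):
            choice = handler(("parse-error", s))
            if choice is not None:
                name, arg = choice
                return restarts[name](arg)  # recover via the chosen restart
        raise  # no handler elected to recover

# A caller far up the stack decides the recovery policy,
# without knowing anything about parse_int's internals:
handlers.append(lambda cond: ("use-value", -1)
                if cond[0] == "parse-error" else None)
result = parse_int("oops")
```

This captures the signaler/handler/restart split, but not Common Lisp's full machinery (condition classes, `handler-bind` scoping, interactive restarts), and — as discussed downthread — unlike algebraic effects it never reifies the continuation as a value.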

ANSI Common Lisp was standardized in 1994. This post is from 2019. That's reinventing a 25+ year old wheel the author is likely unaware of.

Reinventing? The post repeatedly cites prior art, particularly OCaml.

The author is likely unaware of Common Lisp's condition system because that's a completely random and arbitrary conceptual ancestor. It was predated by throw/callcc in Standard ML in particular, I believe (which, needless to say, draws a direct line to OCaml and thus this post). In fact, continuations in some form or another date back to 1964 according to: https://en.wikipedia.org/wiki/Continuation#History

The Common Lisp condition system is based on the 'New Error System' in Zetalisp for the Symbolics Lisp Machine operating system. The New Error System is from the early 80s, so we are talking about 35 years old. Well, actually the New Error System was based on ideas from PL/I and the Multics OS from the 60s.

The more interesting thing is not the wheel (aka the technology) itself, but how to make actual use of it...

The origin of "algebraic effects" appears to be this paper (Plotkin and Power): https://link.springer.com/article/10.1023%2FA%3A102306490896...

Unfortunately I don't speak monad.

That paper has another paper on algebraic effects in its list of references, so it's highly unlikely to be the origin :-)

Usually, "for the rest of us" signals that the author is trying to introduce an existing concept to a group that did not use it.

It's a condition system, not "algebraic effects". Googling for "condition system" shows you exactly how it already works in an existing language with living implementations that are updated daily and released monthly, not in some hypothetical JavaScript dialects the author mentions.

It just doesn't seem practical at all.

I think you can fault the academics that use the "algebraic effects" term for that. Alternatively, you can see it as an extension of the efforts done in Lisp community since the 1960s.

However, neither the linked paper nor the author of this article seem aware of Lisp's prior art in this space, which is a shame. In particular, it's not true that you "can't touch this" - you absolutely can touch a production-ready implementation of this, in Common Lisp, and could for the past 30 years.

> That's reinventing a 25+ year old wheel the author is likely unaware of.

These comments are not helpful. React is a UI library and this concept would be new in this world. Who cares if it's done by the Simpsons already? It's a nice trivia, but what should we do with this information?

React might be a UI library, but the post doesn't describe the concept in terms of application in a UI library, but as a general concept (examples given include logging and exception handling, and are not tied to UI needs).

Second, React might be a UI library, but its need / uses / limits in using for something like "algebraic effects" could be similar to the ones encountered 30+ years ago in non-UI domains.

Not to mention the biggest fault in this comment: the domains where Dan got the idea of Algebraic Effects from are already non UI -- he got it from functional programming research in non-UI specific domains and languages.

Which means that he could just as well use the original, also non-UI, concepts, was he aware of them.

A more valid response along your reasoning would be that of course not everybody can/knows every prior art. That's true, and valid. But the argument that prior art is not relevant in this case because we're talking about UI is not...

>It's a nice trivia, but what should we do with this information?

Evaluate it, enrich our understanding of it, find prior uses and limits (and not just the current re-implementation of the concept) and so on?

The usual. Find out how the concept was successfully implemented before, learn from these implementations' mistakes, and leverage all of that knowledge in your current work.

That's a much nicer, forward pointing comment and it's good to know there's previous act. I've read some bitterness between the lines.

Probably just bitterness from the general trend of reinventing everything in computer science, all the time. People who are unaware of former work on any given topic are forced to reinvent it, poorly. People who are aware of it are forced to watch everyone else reinvent them.

Without any sarcasm, communication is a hard and unsolved problem in general.

The irony here is that you appear to be unaware of the former work on algebraic effects, to the point that this one lightweight blog post forms your entire understanding of it and you mistake it for a limited and ill-defined language construct...

Algebraic Effects are not common lisp's condition system. They are far more general.

The example here to do with error handling only shows that this form of error handling is an (incredibly) specific (very limited) instance.

And nothing is being reinvented here. This is an article to communicate, in the simplest terms possible, decades of academic research. The author is not claiming to, nor has he, invented anything.

Lisp's condition system isn't just for handling errors either; it's a system for handling conditions, just as the name suggests.

Algebraic Effects are specifically meaningful in statically typed systems; they have nothing to do with conditions.

Continuation-passing is only one component.

Your whole approach to commenting here is, "I think I know what AE are, and on that basis, this is stupid!"

Both your premise and conclusion are false.

Common Lisp's condition system was first described in '83 for the Lisp Machine in a technical report that's hard to get one's hands on these days. It was described in an MIT AI Labs working paper a few years later by Pitman (https://dspace.mit.edu/handle/1721.1/41474).

Algebraic effects were introduced by Plotkin in 2001 (https://link.springer.com/chapter/10.1007/3-540-45315-6_1) as an extension of Moggi's 1991 work on monads (https://www.sciencedirect.com/science/article/pii/0890540191...), which in turn built on work from the 70s and 80s on understanding computing from the perspective of category theory.

The chief difference, IMO, is that Pitman defined terminology for a language feature in some LISP dialects, while Plotkin showed operational semantics for algebraic effects that are compatible with the denotational semantics (ie, adequacy). Pitman wrote his paper using the AI Labs' practical experience with LISP for the benefit of other LISP practitioners. Plotkin wrote his paper using previous work on lambda calculus and category theory for the benefit of programming language theorists.

Plotkin may have been unaware of CL's condition system, but since conditions do not have formal semantics, and their behaviour is not particularly useful for probabilistic non-determinism (the main thrust of Plotkin's effects article), I doubt that being aware of it would have made him any less motivated or likely to do the work he did.

Dolan et al (https://link.springer.com/chapter/10.1007/978-3-319-89719-6_...) discuss algebraic effects for concurrency in the setting of a dialect of OCaml, and reference CL conditions (amongst other things) as related work. It's a very brief mention observing only that conditions fail to convey the full continuation as a value to handlers, making them unsuited to the use case at hand.

TL;DR: no, conditions aren't the same thing, and if they were, they lack the rigour to be useful without the work that went into algebraic effects.

> first described in '83 for the Lisp Machine

The Symbolics Lisp Machine Manual from 1984 documents the condition system for the Lisp OS written in Zetalisp:


See the chapter 'COND' in Volume 3B, 'Lisp Language'

Of course the failure could as well be on the Lisp / ANSI Common Lisp people. They had a concept 25+ years ago, and failed to promote it properly.

Yeah, it's literally a subset of Common Lisp's condition system. With Lisp's closures and continuations, it's easy to implement the feature. When I read the blog post, I wondered: is renaming old concepts with exotic names and presenting them as new the trend now?

Unlike the condition system, algebraic effects are typed, so you can reason about them at compile time and ensure they are all at least used somewhat correctly.

It makes quite a bit of difference when I compare, e.g., async programming in OCaml with Lwt/Async, where I know I've at least hooked up the promises somewhat correctly, with, e.g., Python or JS, where I only notice that it's wrong when the code is executed.

> Unlike the condition system algebraic effects are typed

What exactly do you mean? Conditions are typed in Lisp with proper inheritance, and handlers can be bound only to particular types of conditions.
