Show HN: Cancelable async primitives for JavaScript (github.com)
84 points by Mitranim 10 months ago | 97 comments



Am I the only one who wonders why there is no reference/comparison to observables such as those provided by e.g. the RxJS library? They are async, composable, and cancelable, and also widely used (see Angular >= 2).


Good suggestion, should probably address this in the readme.

Short answer: observables do too much, and Rx is WAY, WAAAAY too large. We need simple async primitives before layering bigger abstractions on them. They shouldn't take an encyclopedic amount of reading to learn. They need to come in a library that doesn't weigh over 100 KB.


Agreed, that's why there are proposals to introduce Observables as native ES8 datatypes. Note that while full RxJS is heavy, Observables are not. It's the multitude of operators to compose observables, and the various helper implementations of Subjects (AsyncSubject, ReplaySubject, BehaviorSubject, etc.) - which are similar to Deferreds in the Promise world - that make RxJS so heavy. If you use only the Observable part with a slim selection of operators in your code, you will not add much of the library to your bundle (but granted, the patch-based API of RxJS is not really trivial).


I believe we need a hierarchy of primitives:

Level 0: one-value operations (promises = futures).

Level 1: multi-value operations (streams = observables), implemented in terms of one-value operations. Tokio did it well for Rust (https://tokio.rs).

Starting at level 1 doesn't feel right to me.


I'd rather have a clear separation of data structures and implementations:

Promise, Future, Observable = data structures

Bluebird, Fluture, RxJS Observable = implementation of the data structures

Operators (map/race/all) = helper implementations for operating on the data structures

Now, different implementations may make different choices, like there are dozens of Promise implementations tuned for either feature richness, speed, or small file size. Same holds true for observables, there are feature-rich libs like RxJS, but also fast and/or small implementations (Bacon, most.js, xstream).

My point: A cancelable promise is basically an observable that emits a single item (and caches that item). So an observable is kind of like a superset of a cancelable promise (and of course a non-cancelable promise). All operator implementations manipulating observables can directly be reused for cancelable and non cancelable promises-like observables; the other way around does NOT hold true.

My main point: Just use observables, they can do everything that promises and cancelable promises can do and, if needed, a lot more. So instead of layering, learn to use this one data structure and you will be able to cope with a lot of async troubles.


The main point is correct.

But observables are very different from promises at the core. Promises are not only cached, they're also eager, while observables are neither. The very minimum implementation of an observable is just a few lines of code. Promises...are much more complex. So I'm not sure how reusing operators between the two would work efficiently.

So yeah, just use observables. Most people think observables are a more complex, higher level abstraction with promises being the primitive, simple one. It isn't true though. Observables are much, MUCH simpler, and are perfectly suited to doing everything promises do, but better (lazy, optionally cached async primitive is way better).


You can make any observable "cached" in RxJS by subscribing it to a ReplaySubject or by calling .publishReplay(N).refCount() on them. So observables can be cached or uncached, and the cache length can be specified (N parameter). They can also be eager or lazy; that is the hot/cold lingo in RxJS world (or unicast/multicast).

I repeat my claim: Observables (at least as implemented by RxJS) are a superset of (cancelable-)promises. Superset means, they can do everything the exact same way as promises, and a lot more.


Yeah I know. I explained myself poorly. The observable itself is lazy and cold. Wrapper functions, observers (like subjects) and operators can add a higher level abstraction to change that. It's easy to make lazy into eager and cold into hot, of course, but the observable itself is cold/lazy.

Observables can do everything promises can do, but the construct itself, while more flexible, is significantly simpler. Like, -way- simpler. And a lot of what they do and how they do it comes from how they're implemented at the very base (essentially a generic observer pattern). Building observables on top of something else would add tons of overhead/complexity for no reason.

The observable needs to be the lowest level piece and we build the rest on top, not the other way around. That may seem counterintuitive since they can do so much more: the abstractions on top are more like specializations of the low-level construct.


No disagreement here. Rx observables are strictly more powerful than promises/futures/streams.

But why start with the superset? I don't think it's good design. Consider the perspective of a language designer. You want to start with primitives that are as simple as possible while satisfying a vast number of real use cases. One-shot async primitives are an important intermediary step towards observables, one that shouldn't be skipped. Observables should be implemented in terms of these.

This layered design gets you a smaller cognitive cost of entry (promises are hard enough as it is!) and higher efficiency for the vast number of cases that don't need stream-like functionality.

On top of that, one-shot primitives are conceptually compatible with blocking expressions in coroutines such as async/await, whereas streams are not.

I don't get it when people shoot for fat primitives that do it all.


Okay. Most.js then, at 10kb minified looking at the build on unpkg.

Also, implementing an observable from scratch is pretty much trivial. The operators are where all of the logic is, and they are arguably simpler to understand than alternatives because of how well documented they are. And if you limit yourself to the operators that are equivalent to what this library provides, there really wouldn't be much complexity to it.
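To illustrate the "trivial from scratch" point, here's a toy cold observable in a handful of lines. This is an illustrative sketch only, not RxJS's actual implementation:

```javascript
// Toy cold observable: a producer function wrapped with subscribe and
// a map operator. Illustrative sketch only - not RxJS's actual code.
function createObservable(producer) {
  return {
    subscribe(observer) {
      // Each subscribe call re-runs the producer (cold semantics) and
      // returns whatever teardown function the producer provides.
      return producer(observer)
    },
    map(fn) {
      // An operator is just a new observable wrapping this one.
      return createObservable(observer =>
        this.subscribe({
          next: value => observer.next(fn(value)),
          complete: () => observer.complete && observer.complete(),
        })
      )
    },
  }
}

// Synchronous producer emitting three values, with a no-op teardown.
const numbers = createObservable(observer => {
  [1, 2, 3].forEach(n => observer.next(n))
  if (observer.complete) observer.complete()
  return () => {}
})

const results = []
numbers.map(n => n * 10).subscribe({next: v => results.push(v)})
console.log(results) // [ 10, 20, 30 ]
```

The teardown function returned from subscribe is where cancelation naturally lives in this design.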


In my experience, most async operations fit very well within the small feature set of promises/futures, with only a few operators (map/all/race). Streams should be the optional next layer of abstraction, not the only layer of abstraction. They should be built on top of futures. I quite like how Tokio did it for Rust futures. [1] Not a fan of Rx and similar designs.

There are also different types of observables to consider. Rx, xstream, Most.js, they're basically stream libraries. When doing GUI programming, it's more useful to have reactive units with a _synchronous_ data access, because it matches how you want to access the data when drawing your view (e.g. in React). I ended up using observables that look like Clojure's atoms (see Espo: [2]), with a special adapter for React (see Prax: [3]). Observable libraries like Rx were useless for that use case.

[1] Example of decent future/stream design: https://tokio.rs

[2] Observables focused on synchronous access: https://mitranim.com/espo/#-atom-value-

[3] React adapter for implicit reactivity, based on synchronous observables (impossible with async streams): https://mitranim.com/prax/


What about xstream? https://github.com/staltz/xstream it's a lightweight alternative with a fraction of Rx's operators and a fraction of its weight.


I guess this needs a bigger answer.

A well-designed tool should minimise the number of states the program can be in. Asynchronous programming is already horrifically bad; it generates combinatorial explosions of intermediary states. Adding imperative, mutative programming and an event-driven API only exacerbates the problem, increasing the number of possible state sequences even further. I don't think xstream provides a good API.



SodiumFRP provides all the required primitives and is probably super tiny if combined and minified


Is there a measurable performance impact caused by the 100 kb library size?


Yes. Depends on client bandwidth and CPU, but it's significant. In fact, it's insanely high. Not caring about "just another 100 KB" is how people end up with 2-5 MB bundles that take seconds to download on weak networks and seconds to execute on weak devices.


You would have to explain how adding a 100 kb library makes your bundle balloon to 2-5 MB.

And note that with RxJS 5 you can pick only the operators and data structures that you need; I have written a Redux clone using RxJS [1] which uses Observable, Subject, and a handful of operators and is less than 16 kb minified & gzipped in total as a self-contained UMD bundle (of which half the size is attributed to lodash helper functions).

[1]: https://github.com/Dynalon/reactive-state


This is news to me. Last time I tried RxJS, I was unable to get a usable core under 100 KB minified (I don't use gzip as a metric). I would consider 20-30 KB acceptable. Might want to look again.

Bundle size: a typical SPA imports multiple libraries, they import more libraries, and so on. It's not just Rx. I shouldn't have to explain how small things add up. Not caring about size is how it balloons up. We have to care about it in every library.



Just to point out - RxJs 5 allows you to bundle only operators you need, thus significantly lowering output file size. You could actually ship to the browser only features equivalent to those of this library. Not sure what actual numbers are though.


This is news to me. Last time I looked into RxJS, I was unable to get a minimum useful core of less than 100 KB minified. Might want to look again, thanks!


  import { Observable } from "rxjs/Observable";
  import "rxjs/add/operator/map";
  import "rxjs/add/operator/reduce";
  import "rxjs/add/operator/take";
  import "rxjs/add/observable/of";
  // etc. - this way you only get what you need into your bundle


Very interesting. Cancellation semantics of promises have bugged me for a while. Looks promising (no pun intended).

If I can make a suggestion -- a cookbook of promise interop might be worth adding. There's a fundamental incompatibility, given the cancellation concerns you mention, so I'm sure it couldn't be 100% caveat-free, but it would still ease adoption given the ecosystem's current standardization on promises.


Thanks for the suggestion. I tend to run everything on Futures, rarely dealing with callbacks or promises, so interop tends to be a non-issue. But it's probably worth including a promise-to-future conversion function.

What do you think is worth adding or writing about?


You can also use cancellation tokens with promises. Pretty easy to do and doesn't require a library.


There is a cancellation library kicking around npm that reasonably mimics C#'s.

There must have been man-years of discussion on the Bluebird project around cancellation, but for me it all comes back to tokens, as every solution baked into promises feels a bit off. Golang effectively has them as well via the context package.

As you say, it is pretty easy to do, but it's nice to standardize on an implementation.


Yes, we've been reading all the golang discussion very carefully. It's nice to see them go through the same mental process we went through - and my hopes are that if we don't interfere they'll come up with a better idea we can just copy.

We've had 4 cancellation proposals rejected so far (cancellation as rejection, third state, cancellable-promise and cancel-tokens).

They work in bluebird pretty well - but people have concerns.

Personally I use our cancellation if I already have bluebird, and tokens if I don't.


Not familiar with cancelation tokens. Can you describe the approach?


You pass them to an asynchronous function and said function can do something like `token.throwIfCancelled()` if you've called `token.cancel()` elsewhere.
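A hedged sketch of the idea, loosely modeled on the C# pattern (not any specific npm library's API):

```javascript
// Minimal cancellation token: the caller holds the token and cancels
// it; the async function checks it between steps and bails out.
class CancelToken {
  constructor() { this.cancelled = false }
  cancel() { this.cancelled = true }
  throwIfCancelled() {
    if (this.cancelled) throw new Error('cancelled')
  }
}

// A long-running task checks the token between its steps.
async function longTask(token) {
  for (let step = 0; step < 3; step++) {
    token.throwIfCancelled()
    await new Promise(resolve => setTimeout(resolve, 10))
  }
  return 'done'
}

const token = new CancelToken()
setTimeout(() => token.cancel(), 15)
longTask(token).catch(err => console.log(err.message)) // logs 'cancelled'
```

Note the task only notices cancelation at its next checkpoint; it doesn't abort mid-step.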


Doesn't sound as ergonomic as just returning a cancelation function in a future initialiser. Not seeing any other advantages, either. Am I missing some?


Your idea was described by kriskowal in 2013 (see https://github.com/kriskowal/gtor ) - in the promises community we call them "Tasks".

Basically, you can make it work pretty easily with promises and multicast (by reference counting).

The reason they're not added to promises is because some people feel they break a guarantee a promise makes. Not only is it possible - it works well.

Domenic is worn out from having to put up with all the involved shit (if you're reading this, sorry :D). He put in a lot of work, but I think he's tired of fighting for this rather than pushing other areas where he meets less resistance in the DOM.

I just wanted to point out that while cancellation is a whole world of complexity and is very interesting - it's not avoided in the language because we're stuck on the technical side.


Interesting! Nice that the approach is known, and others are being discussed. I should check GTOR again.

Multicast cancelation based on refcounting doesn't look right to me. I explicitly avoided this in Posterus, sticking with exclusive ownership, what GTOR calls unicast (thanks for linking). The reason is that subscribers may come and go over time. I've seen, and written, code that caches a promise and reuses it many times. It may start with one consumer, lose it, be canceled, then another consumer comes and gets rejected. This just feels like a big footgun. In Posterus, multicast is explicit and opt-in (`.weak()` futures).

If we eventually standardise a solution, I'd like something with as few interfaces as possible. In a dynamic language, you don't have the luxury of creating a future that initialises into a task, returning a cancelation token. Too many types without static checking is just a footgun. This is part of why Posterus only has futures, why cancelation is provided in the future constructor and tied to the future itself, and why they're eager rather than lazy. Deviating from these constraints would introduce new types, increasing the footgun potential.


Also, I hope I don't sound cocky or rude in my other reply. I have mad respect for people who actually go and build stuff instead of complaining. It's a great way to learn and contribute - and who knows, maybe we'll all be using your library in 5 years :D


Can you compare/contrast with Fluture? https://github.com/fluture-js/Fluture


Thanks for the suggestion. I wasn't aware of Fluture. Will look deeper into it.

On cursory inspection, looks like Fluture has multiple features that I rejected when designing Posterus: laziness/templating, separation of map/flatmap, and possibly others. They come at the cost of cognitive load and API surface.

In contrast, Posterus aims to be the simplest Promise replacement you could possibly come up with, filling the main missing features: cancelation and scheduling control.

Additional features come at a cost. Laziness & templating trips up unfamiliar developers. Separating map and flatmap doesn't make sense in a dynamic language; since you can't statically enforce it, it's just a tripmine with no benefit. Also, a large amount of utilities bloats the code size, which I care about very much. Posterus is designed to be as small as a typical promise polyfill, fitting into a size-constrained browser application.

Needless to say, Fluture has features you might want that don't exist in Posterus. I'm not familiar with it, so it's difficult to recommend any.


Might be a naive question, but how does this compare to existing Promise libraries that have cancellation (like bluebird)?

http://bluebirdjs.com/docs/api/cancellation.html


  Here's an example: in Bluebird, cancelation doesn't
  propagate upstream. After registering onCancel in a
  promise constructor, you have to call .cancel() on
  that exact promise object. Calling .cancel() in any
  child promise created with .then() or .catch() will
  not abort the work, rendering the feature useless for
  the most common use case!

  True cancelation must propagate upstream, prevent
  all pending work, and immediately free resources and memory.
From the README


But the linked bluebird docs say:

  As an optimization, the cancellation signal propagates 
  upwards the promise chain so that an ongoing operation e.g. 
  network request can be aborted.


Interesting! This must be new. Bluebird didn't have upstream cancelation last time I checked. Now it might actually be viable. Should update the readme. Thanks!

Unfortunately it's still not viable in the browser due to its size, promises too easily get converted into non-cancelables, and worst of all, async/await forces native promises. If you want cancelable coroutines, you're forced to roll a generator-based implementation such as what Posterus provides.


D'oh, thanks


I'm a Bluebird collaborator - confirming cancellation does everything this library does - but it also composes and ref-counts to work with one-to-many flawlessly.


Glad it works for you!

Kind of answered in another subthread: https://news.ycombinator.com/item?id=14963524


I can't think of any reason I've ever wanted to cancel a promise except maybe when uploading a file. Therefore I'm having a hard time grasping why this is such a big deal. What is another example?


Here's some examples from my experience.

* Server. Request handler starts expensive work. Let's say 4-5 database requests, some FS operations, rendering and sending response. Client disconnects. Running those operations will waste resources, we should stop. [1]

* Web. You can make at most 6 concurrent HTTP requests. They're precious resources. Dropping a request you no longer need will let others complete. It's nice to be able to abstract a painful ajax API behind something like a promise that doesn't lose the important ability to abort.

* React. View instance starts async fetch that eventually updates its state. User navigates to another view before it finishes. Updating the state after the component is unmounted is an error, and React will rightly complain. We should stop it.

* Server: abstracting an operation that's already cancelable behind a future. Let's say using Electron to render a website into a PDF. It's a really expensive operation, and the API is a pain to program against. You want to provide something as simple as a promise, but that doesn't lose the ability to stop it. [2]

[1] I write servers using Posterus coroutines, so all async code is automatically owned and canceled where appropriate: https://github.com/Mitranim/koa-ring

[2] This relies on futures to abstract away tricky timing management: https://github.com/Mitranim/epdf/blob/1c54481d4760a7eb730eb3...


This is great!

I actually had to write my way around non-cancellable promises quite a few times and my situation mirrors yours though mostly on the client:

I have a search page that updates results every time a new filter is updated (imagine you select a new tag to filter by, or narrow down the search somehow). This can be done faster than responses come back and can create a huge mess because some responses can come back faster than others.

This creates uncertainty in terms of what should be on the page. A few lines of code and it's fixed but the cool thing is that I get to throw away results that I don't care about and not process them (expensive-ish action).

This kind of situation happens somewhat often because a user can quickly navigate around, fire off a ton of requests and only really cares about the last one in the queue.

I haven't worked my way around it on a global scale which means that there could be a ton of requests being processed and thrown away right after for no good reason.


Glad you understand the problem. This is what futures are good for. Write an XMLHttpRequest adapter that returns a future, have an easy time composing or aborting operations.


Added more examples and motivations for cancelation: https://github.com/Mitranim/posterus/blob/17c89694ecdce4633f...


Say you've sent one API query initiated from a user action, and you want to:

1) allow the user to cancel it explicitly

2) cancel it if the user selects something else (instead of firing a second query, and then having a race condition on which returns first)


Another example is for live search. As I type, each letter cancels pending search requests and starts a new one, making sure I don't show search results for a previous search term. I use this all the time.
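The "latest search wins" pattern can be sketched like this (`search` is a hypothetical stand-in for a real request function). Note this only discards stale results; true cancelation would also abort the underlying request:

```javascript
// Each call invalidates the previous one, so a stale response is
// discarded instead of rendered.
function takeLatest(fn) {
  let current = 0
  return async (...args) => {
    const id = ++current
    const result = await fn(...args)
    return id === current ? result : undefined // stale, discard
  }
}

// Hypothetical stand-in for a search request.
const search = term =>
  new Promise(resolve => setTimeout(() => resolve(`results for ${term}`), 10))

const liveSearch = takeLatest(search)
liveSearch('ca').then(r => console.log(r))  // undefined (superseded)
liveSearch('cat').then(r => console.log(r)) // 'results for cat'
```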


We use 'fetch' extensively. It does not support cancellation or timeouts, although you can implement the latter as a Promise.race with a setTimeout.
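The workaround can be sketched like this. Caveat: the losing promise keeps running - the race only ignores its result, it doesn't free the underlying request:

```javascript
// Race a useful operation against a timer; whichever settles first wins.
function withTimeout(promise, ms) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error('timed out')), ms))
  return Promise.race([promise, timeout])
}

// Simulated slow request loses the race:
const slow = new Promise(resolve => setTimeout(() => resolve('data'), 100))
withTimeout(slow, 20).catch(err => console.log(err.message)) // 'timed out'
```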


Not good enough. Each ongoing request eats machine resources and counts towards the browser request limit (6 or so). Cancelation must release the underlying resources, which XMLHttpRequest allows you to do. By being promise-based, fetch is fundamentally broken. It's a dead-on-arrival tool.


I wouldn't be that harsh. Even if it doesn't support cancellation today, the support can be added later, e.g. through an optional CancellationToken parameter. This is how .NET APIs have added cancellation support. I would agree that cancellation would be good to have for lots of applications, but others will be fine without it.


>Each ongoing request eats machine resources and counts towards the browser request limit (6 or so).

And in most cases this doesn't matter at all.


Agreed, often it doesn't matter. Sometimes it DOES matter (high latency / low bandwidth / weak device), and then your program is slower than it could be. Why not pursue async primitives that let you manage resources properly at no downside?


If by dead on arrival you mean incredibly useful, yes.


The brevity and aesthetics of async/await have made me reluctant to move back to any pyramid-type async chain. I don't know if it's good practice, but I use TypeScript to declare an optional null type and just return null from inside the async function if I need to cancel.


Posterus provides coroutines just for that: https://github.com/Mitranim/posterus#routine

Posterus coroutines are similar to async/await, but cancelable and free of `await`'s race condition problem (promise can get rejected before `await` attaches a handler).


Why is that a race? IIRC Promise.then is supposed to invoke the callback immediately (or on next loop) if the promise is already resolved.


It would be race-free if `.then()` were invoked synchronously when evaluating the `await` expression, just like in normal Promise-based code. Currently in V8, there's a delay between evaluating `await` and actually calling `.then()`. If the promise uses a sufficiently nimble scheduler (e.g. based on `process.nextTick`), its unhandled rejection handler may run _in between_, throwing an exception, polluting stderr, and possibly killing the process.

Example with Posterus:

  async function main() {
    try {
      await Future.fromError('fail')
    }
    catch (err) {
      console.error('caught:', err)
    }
  }
In Node, this actually produces an unhandled rejection because Posterus's scheduler uses `process.nextTick` and squeezes into this unnecessary delay. Doesn't happen if you `.catch()` manually or just use Posterus coroutines instead of interoping with async/await, but it highlights the incorrect implementation of async/await in the first place. (Or is the spec at fault?)


I'm admiring this work, but I can't help being increasingly concerned about the utter complexity web development is heading for. Case in point, on a subreddit frequented by junior web devs (and product placement bots, as it seems): https://www.reddit.com/r/webdev/comments/6sdglh/feeling_over...


User interfaces are hard to implement, and JS is only now approaching the complexity of requirements for modern UI. It's no longer a single form or a button, so this should not be a surprise.

On the other side, JS is indeed an ugly language with a trash ecosystem (btw, how long will it take for the community to get rid of the fsevents warning on non-Mac systems?), but there are patterns forming and best practices being documented. It can be learned, but juniors should not expect to learn a huge engineering discipline in a week.


>User interfaces are hard to implement

They weren't that hard in Visual Basic 20 years ago, nor in any modern environment like Cocoa -- and they didn't need all the craziness and frantic over-engineering that goes on in JS frameworks.

The problem is everybody in the web rebuilds the whole UI from low level primitives. Need a wizard? Make one yourself. Need a form? Build it. No upfront structure to anything -- and the frameworks don't add enough either.


User interfaces are not just forms and wizards. You cannot build everything from a high level template. That's why the whole UX discipline exists.


>User interfaces are not just forms and wizards.

Never said they were. Just used them as two basic examples, that web UIs still overcomplicate and have problems with.

>You cannot build everything from a high level template. That's why the whole UX discipline exists.

The UX discipline exists for a totally orthogonal reason: to study, understand, and suggest improvements to the design of interfaces (and the resulting "user experience" with them), whether they are made with a "high level template" or not. You still need UX if you build everything with the basic Cocoa controls or Windows standard widgets or whatever.

The kind of UIs you can do in native, beyond forms and wizards, the web can't even dream of. The inverse is not true (except if you do your whole thing in Canvas or WebGL which defeats the purpose).


You don't really need all this if you want a simple presentational website. Most of those things are for more complex web applications. A lot of the time I end up using jQuery and some templating like Handlebars to show some ajax response. If it's something simple, it's not wrong to use jQuery or vanilla JavaScript; you don't have to use those complex tools.


Correct. Most websites are better off with static HTML or server-side rendering and the least possible amount of JS. However some apps DO have to be fat. And don't forget about Node.js.


I find callbacks and events much easier to understand than promises and futures. Example code:

  var req = get("https://news.ycombinator.com")
  req.onData = callbackFunction
  if(condition) req.abort()
"Callback hell" can be avoided by using named functions and sub-functions.


For one operation, sure. It doesn't work when you want to compose multiple async operations, wait until all are finished, or race several of them to completion (e.g. useful operation competing against timeout). The purpose of promises/futures is this composability that allows you to get the order and timing of operations right.
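The composition described above can be sketched with standard promises (`delayed` is a toy stand-in for real async work):

```javascript
// Wait for all operations, or race several to completion.
const delayed = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms))

Promise.all([delayed(10, 'a'), delayed(20, 'b')])
  .then(values => console.log(values)) // [ 'a', 'b' ]

Promise.race([delayed(10, 'fast'), delayed(20, 'slow')])
  .then(winner => console.log(winner)) // 'fast'
```

Doing the same with bare callbacks means hand-rolling counters and flags to track which operations have finished.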


What if you want to abort, or take another path, depending on an error code? With callbacks there's a convention that the first parameter is either null or an error.


That's what the mapping operators are for: `.mapResult` (same as `promise.then`), `.mapError` (same as `promise.catch`), or `.map`. The latter has an errback signature, like Node.js callbacks. They can also return new futures, transforming the result asynchronously. On top of that, error handling in promises/futures is much easier than in callbacks, as you can use one error handler for a chain of operations. Kinda like exceptions in synchronous code.


Isn't callback hell referring to the fact that you can't easily access the previous result of a computation without nesting closures inside one another?

It seems to me that the first time I really avoided callback hell was when I started using async/await.


  var timeCreated = new Date();
  button.onclick = function buttonClicked(clickEvent) {
    var timeClicked = new Date();
    console.log("Button was created " + (timeClicked-timeCreated) + "ms ago");
  }
Let's say you have two functions: Jon from support, and Jane from billing. Both of them have access to the customer database. Now a customer calls, the customer hits 1 for support and Jon gets the call; while Jon is on the phone, another customer calls, hits 2 for billing and Jane gets the call.

You can have a very complex setup using just functions, and some variables to keep track of things, maybe an array for implementing a call queue. You can choose when to run in serial and when to run in parallel, and limit how many calls can run at any given time, with every possible error handled and managed. Promises, async/await and futures are only leaky abstractions that will just complicate things. They look nice for simple examples without any error handling, but when you want to do real world stuff you have to think about how they work behind the scenes.


It sounds like you're saying "explicit FSMs are strictly more powerful than promises/coroutines". Which is true. However, they're inherently difficult to program. In my view, the key to simplicity and correctness of programs is reducing the number of states the program can possibly be in. Isolating a piece of code into a blocking sequence seems like a fairly good way of doing it.

Not sure what you mean by "simple examples without error handling". Just like normal synchronous code, promises and coroutines use exceptions, which have the nice property of composability: one catch for several statements/promises. So error handling is the same as in synchronous code.
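The "one catch for several statements" point can be sketched with stub steps (stepOne/stepTwo are hypothetical placeholders):

```javascript
// One catch covering a whole chain of async steps, mirroring
// synchronous try/catch.
const stepOne = async () => 1
const stepTwo = async n => { throw new Error('boom') }

async function pipeline() {
  try {
    const a = await stepOne()
    return await stepTwo(a)
  } catch (err) {
    // a failure in either step lands here
    return `recovered from: ${err.message}`
  }
}

pipeline().then(msg => console.log(msg)) // logs 'recovered from: boom'
```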

Now granted, there's a LOT of async/reactive problems where sequential abstractions like promises are irrelevant. GUI programming comes to mind, where you want the view to reflect some reactive state that changes in arbitrary order.


Callback hell does not exist in modern javascript because you have async/await.


This is interesting work, but at least on the Node side I am going 100% native async/await promises after Node v8.3.0 is released. Whatever small advantages more advanced libraries may offer are, for me, overwhelmed by writing standard spec-compliant code now that the spec and implementation are finally (almost) up to par.


Seconded. I get the desire to free up the resources but unless you’re running some extremely hot resource paths (huge db queries, etc) the need for this is minimal.

Useful, yes. Absolutely. But I’ll wait on a standard spec first.

I’ve yet to see a clean implementation of this (bluebird comes closest though).


Depends on use case. Sometimes you're surprisingly resource constrained.

People gave a few examples in this subtopic: https://news.ycombinator.com/item?id=14962684


This looks great! Cancellable promises are still pending after 6+ years of discussion.

I'm curious about the naming choices though, since user friendliness seems to be one of the goals: why 'deinit' over 'cancel' and 'arrive' over 'resolve'?


We need a standard destructor interface instead of choosing a different word every time. Cancel, close, destroy, drop, unmount, they do the same thing. If we settled on ONE destructor interface, we could have automatic resource management. [1] I use `deinit` in all my libraries, as it seems to be the most neutral word appropriate for every case.

`arrive` — no particular reason. It's one "errback" method rather than two methods like `resolve/reject`, so it needs to have a neutral tone. Not too happy with it, better suggestions are welcome.

[1] Basic implementation of automatic resource management in JS: https://mitranim.com/espo/#-agent-value-
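The "one destructor interface" idea can be sketched like this (a hypothetical helper, not Espo's actual API):

```javascript
// Anything exposing .deinit() can be released by one generic helper,
// whether it's a future, a subscription, or a socket.
function deinitAll(resources) {
  for (const resource of resources) {
    if (resource && typeof resource.deinit === 'function') resource.deinit()
  }
}

const closed = []
deinitAll([
  {deinit() { closed.push('future') }},
  {deinit() { closed.push('subscription') }},
])
console.log(closed) // [ 'future', 'subscription' ]
```

With one agreed-upon method name, helpers like this can manage heterogeneous resources without per-type special cases.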


This is really great. It reminds me of NSOperationQueue on iOS. I think a cool next step would be to add dependencies -- i.e. have Futures wait to execute until dependent Futures have completed/succeeded.


Pretty much what jrs95 said. If I understand the question correctly, `Future.all` and `Future.race` address exactly this case.

[1] https://github.com/Mitranim/posterus#futureallvalues

[2] https://github.com/Mitranim/posterus#futureracevalues


I don't think I get what you mean exactly. How is this different from wrapping all the dependencies in a single promise and starting the dependent one when they're all finished?


What is the overhead cost of maintaining the scheduler?


Negative. The scheduler is an optimisation. The alternative is to rely on VM scheduling using `process.nextTick` or `setTimeout` for every new async operation, which involves mandatory allocations and possibly other overhead. Using a custom scheduler is MUCH more efficient, which is why every decent promise implementation does it.


Interesting, I would have assumed that relying on VM scheduling would be more efficient.

As a corollary, are custom promises faster than native promises?


Yes. Some popular promise polyfills totally trash the performance of "native" promises. I don't quite remember which ones. Needless to say, Posterus also compares very well in this field.


Hmm, I'll have to look for benchmarks then. My general experience is that it is hard to outsmart the VM performance wise in the long run


We're not competing with VMs here. From what I hear, like many other built-ins, "native" promises in every VM are implemented in JavaScript. They're not always well done (V8 promises were really bad for a while), and pay a mandatory overhead for useless "privacy" features dictated by the spec.


Is it possible to use it with async/await?


Futures automatically coerce to promises, so yes.

Even better, they come with generator-based coroutines that work with futures, and are automatically cancelable: https://github.com/Mitranim/posterus#routine

You can even run it with Koa 2 instead of async/await: https://github.com/Mitranim/koa-ring


Is canceling as useful as inhibitory neurons?


I'm curious! If you are trying to cancel an async operation, why was it invoked in the first place? Or is this for operations no longer needed, like a big upload or something?

Inhibitory neurons make it possible for you to walk down stairs without falling the whole way, so whatever trickster is voting down neurons, cut it out


People and programs change their minds all the time. A lot of behaviors we consider intuitive rely on some form of cancelation.

People gave a few examples in this subtopic: https://news.ycombinator.com/item?id=14962684


Perfect! Thank you.


Well that was my quickest github star ever


Will there be a cancelable javascript sometime? I'd like to cancel it all.



