
Promises are not neutral enough - giulianoxt
https://staltz.com/promises-are-not-neutral-enough.html
======
magnushiie
I think the author wants promises to represent computation, whereas they
represent predetermined (i.e. single-shot) events. He mentions C# Tasks,
which do mainly represent computation, but in some cases Tasks are also used
as events, and this gets confusing as hell. I've worked with C# Tasks and hope
that MS cleans this up one day and builds the stuff on promises instead. Note
that the C# language construct uses the awaitable pattern (the GetAwaiter
method) instead of tasks - awaitables are actually pretty similar to promises.

1\. Eager, not lazy - I think it was a mistake for the promise constructor to
take a function, leading users to believe the promise represents a
computation. Creating a pair of promise and future (the latter as the
producer side, like in C++) would be much cleaner. I disagree that lazy would
be more general: you can simulate laziness with functions, but you couldn't
eliminate the performance cost of creating the unnecessary closure with a
lazy solution. Regarding getUserAge - the common case for that function would
be to take the user ID as a parameter (and hence be lazy by construction);
the parameterless version is a special case.
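A sketch of that promise/future pair in today's JS (the `makeDeferred` name is mine; this shape was later standardized as `Promise.withResolvers`):

```javascript
// A "deferred": the consumer-side promise and the producer-side
// resolvers are handed out separately, so the promise itself never
// implies a computation.
function makeDeferred() {
  let resolve, reject;
  const promise = new Promise((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}

// The producer settles the promise whenever it is ready:
const d = makeDeferred();
setTimeout(() => d.resolve(42), 10);

// The consumer only observes the eventual value:
d.promise.then((v) => console.log(v));
```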

2\. No cancellation - cancellation is much better represented with
cancellation tokens (even C# Tasks cancel with cancellation tokens; so does
fun-task, mentioned at the end, though in a non-composable way). You cannot
build a generic solution that cancels the right computations; with
cancellation tokens it's clear what cancels what.

3\. and 4. (as well as being allowed to pass non-promises to places where only
promises make sense, like Promise.all and await) are unfortunate accidents
that make typed environments (e.g. TypeScript) harder to work with, but they
are not as important as 1 and 2.

~~~
codedokode
Actually, cancellation can be useful to prevent wasting resources on useless
computations. Imagine you have 2 threads doing computations that will later
be merged. If one thread fails, it makes no sense to continue executing the
other thread. That is where cancellation can help - but it should be
thoroughly designed, not hacked in like it usually is in Node.js.

~~~
magnushiie
I was not saying cancellation isn't useful; I was saying cancellation is
better handled explicitly via cancellation tokens (which compose perfectly,
unlike computation-based cancellation).

------
roguecoder
This article doesn't even touch on the worst sin of JavaScript Promises: they
swallow errors and exceptions, which makes them nearly impossible to test
correctly and makes debugging horrifying (if you even notice anything is
wrong.)

Promises are a great example of the problems with believing that something
good in one language will be good in another. Promises in JavaScript are
fighting the language, because JavaScript is fundamentally a collection of
isolated but contextual behaviors.

Using Agents to encapsulate callbacks is easier to reason about, easier to
test, less likely to swallow errors whole, and doesn't have any of the
problems laid out in this article. Unfortunately because it isn't a model
popular in any other language, it doesn't have the name recognition Promises
do.

~~~
yuchi
Promises don't swallow errors (sorry to be pedantic, but there's no concept
of an Exception in JS). They just propagate them to the rejection channel.

What was your precise experience?

~~~
mstade
If there’s no logic to catch the rejection it will be silently ignored, which
is effectively the same as “swallowing exceptions” – your distinction is
correct but it’s academic at best.

In practice, this behavior causes very real problems, and in node land they
made the sane decision to kill the process whenever an unhandled promise
rejection comes about. I don't know if this has landed yet, but you'll see a
warning about it if you run a node process in which you reject a promise
without handling the rejection.
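For reference, the warning mstade mentions comes from Node's `unhandledRejection` process event, which you can also hook yourself; a minimal sketch (installing a listener suppresses Node's default warning):

```javascript
// Surface unhandled promise rejections instead of letting them vanish.
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason.message);
  // A stricter policy would set process.exitCode = 1 here,
  // mirroring what happens with uncaught synchronous exceptions.
});

Promise.reject(new Error('boom')); // no .catch() anywhere
```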

~~~
rmrfrmrf
This was a problem before native Promises, but not so much anymore.

Throwing on unhandledRejection has been the default in Node.js >= 7 and
browsers now have DOM Levels 1 and 3 events for unhandledrejection.

~~~
mstade
This is a half truth. Native promises landed quite some time before a way to
catch unhandled rejections did, and even that event wasn't enough, as
evidenced by at least node deciding that the process needs to crash (as it
would've when unhandled exceptions were thrown). The browser story is of
course different.

Regardless, the original poster was correct - this has been a sin of promises
for a long time. What's worse, I think, is that because of this we now have
semantics that are close to but not quite the same as exceptions. Case in
point: throwing an exception while executing a promise's executor function
will reject the promise. But it's not an exception anymore, even though the
value of the rejection is in fact the exception. The semantics are now
different - because promises.

Promises in JavaScript have a certain almost-but-not-quite quality to them.

------
_greim_
I don't know. André Staltz is a great programmer, but I can't help but think
what seems "opinionated" to him about promises boils down to the fact that
they don't perfectly match certain quasi-ideological preferences he has about
async programming, at the expense of all other concerns. As he states at the
end, promises still work, you can get things done and everything is fine. But
the part about them being opinionated I just can't get behind.

In fact, if promises worked the way he wanted them to, it would hurt the
ecosystem in every category he mentions. Lazy promises would cease
representing a single value, and be un-cacheable. Promises that didn't flatten
inner promises would create endless confusion and ambiguity over "onion-
promise" scenarios. Sometimes-synchronous promises would introduce subtle and
sometimes catastrophic runtime ambiguities (aka "release zalgo"). Even
cancelable promises would raise thorny issues regarding whether promises are
intended to be multicast or unicast, which is a problem the current design
side-steps entirely.

~~~
spion
This has nothing to do with it. I love promises but they are truly limiting.

Here is an example of how promises limit the power of mobx

[https://twitter.com/spion/status/958906847385341952](https://twitter.com/spion/status/958906847385341952)

Another example relevant in node is continuation-local-storage (equivalent to
threadlocal storage). Implementing it on top of generators or other "chainable
/ thenable" abstractions is trivially easy. Implementing it on top of native
promises and async/await is impossible without deep hooks into the platform.

More examples here: [https://spion.github.io/posts/es7-async-await-step-in-the-wr...](https://spion.github.io/posts/es7-async-await-step-in-the-wrong-direction.html) (see the second part)

We should've paused on async-await and waited for jhusain's compositional
functions: [https://github.com/jhusain/compositional-functions](https://github.com/jhusain/compositional-functions)

In the meantime generator based libraries would've properly explored the whole
breadth of power that co-routines can give you, creating cowpaths to be paved
by TC39.

Promises make trade-offs, and they end up with a design that is generally good
and can be used well in some number of situations. But not all. Not nearly
enough to get first class syntax support that makes them privileged over all
other solutions.

~~~
matharmin
Gorgi Kosev's post highlights a very nice use case for generators (database
transactions). However, in all my JavaScript over the last few years, that is
the only good use case for generators I've found in my code base. In all the
other cases I've come across, async-await works just fine and has a much
nicer syntax to work with.

~~~
spion
I elaborated that case in the most detail, but the blog post mentions many
other problems that generators solve. Another very common one is getting the
current user that initiated the request (or maybe their session), which you
otherwise need to pass around to all your functions/classes.

What if you could simply `yield getCurrentUserSession` and the engine which
ran the toplevel generator returned it back to you?
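As an illustration only (all names here are mine, not an existing library): a toy generator runner can answer a yielded marker with ambient context, which is exactly what native async/await cannot do without deep platform hooks:

```javascript
// Toy "effect" runner: yielding this marker asks the runner for the
// ambient session, so nested code never threads it through arguments.
const getCurrentUserSession = Symbol('getCurrentUserSession');

function run(genFn, context) {
  return new Promise((resolve, reject) => {
    const it = genFn();
    function step(input) {
      let r;
      try { r = it.next(input); } catch (e) { return reject(e); }
      if (r.done) return resolve(r.value);
      if (r.value === getCurrentUserSession) return step(context.session);
      Promise.resolve(r.value).then(step, reject); // other yields act like await
    }
    step(undefined);
  });
}

function* handler() {
  const session = yield getCurrentUserSession;
  return `hello ${session.user}`;
}

run(handler, { session: { user: 'alice' } }).then(console.log); // logs "hello alice"
```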

jhusain's compositional functions solved the syntax issue.

------
twohearted
The analogies damage this article because they feel wrong. For example, the
"never synchronous" one is more like this:

 _You order a burger at the cashier window, then go to the pickup window. If
the burger is already made, it's already at the pickup window when you get
there._

The author wants a special case where, if the burger is already made, they
hand it to you immediately at the cashier window. This might seem more
efficient, but both in the restaurant and in code it makes the logic much
more complex.

~~~
jonny_eh
Exactly. If for some reason my promise is immediately fulfilled (e.g. the
result was previously memoized), I don't want to provide an alternate codepath
to handle the result.

------
matharmin
I work with fairly large JavaScript codebases, and the issues mentioned in
the post have never been a problem for me. The switch from callbacks to
Promises, and later to async-await, has made a massive improvement to the
ease of writing, reading and maintaining the code. Lazy and cancellable tasks
are edge cases that don't need support in Promises directly. I haven't needed
either of those in more than a couple of places in the code, versus thousands
of places where Promises are used as is.

I can see the automatic unwrapping of Promises being an issue in some
libraries that want to make specific guarantees, but in most of my code this
behaviour has simplified things.

I definitely prefer having Promises and async-await right now over another
theoretically sound (but probably more verbose) system available in a couple
of years.

~~~
z3t4
I think Promises => async/await works great for certain domains: get a value
from a database, then make an insert, then do something else, and then return
a message to the user.

I however write a lot of systems where almost all operations need to be
concurrent, cancelable and rate-limited.

Personally I find callbacks and the event loop easy to reason about, but too
daunting when all you do is CRUD requests to a database.

The problem with Promises, though, is that they spread; they don't like to
live side by side with other async paradigms.

------
Const-me
I don't think the author has experience working with C# tasks.

Technically, the API documentation indeed says a Task has Start() methods,
i.e. is lazy.

But practically, in the majority of cases they are created already in Running
or WaitingToRun state. This applies to tasks returned by asynchronous APIs in
the framework, tasks implemented by user-written async methods, and tasks
started with Task.Run() static methods. Calling Start() on them will throw an
exception complaining about the wrong task state. So, in the current versions
of .NET, the tasks are eager just like in JS.

I think the lazy tasks are mostly for backward compatibility with the older
.NET Framework 4.0, which already had tasks but didn't support async-await.

------
rictic
Neither lazy nor eager is neutral. Sometimes you want one, sometimes you want
the other, and you can build either out of the other.

For promise cancellation: this has been talked to death, but in short, making
any function preemptable at any point in its execution makes writing correct
code much, much harder. As an example, I've got an API that takes
independently cancellable requests. Multiple requests often need to calculate
the same thing, so there's a cache; any given promise in the system might be
downstream of multiple requests. If cancellation were built into promises,
how would I express how cancellation should propagate through the tree of
promises?

A C#-style cancellation token API, orthogonal to promises, is simple, easy to
build, and easy to understand.

~~~
shawndellysse
> A C#-style cancellation token API, orthogonal to promises, is simple, easy
> to build, and easy to understand.

I'm interested in learning more about this, do you happen to have any links to
building such an api?

~~~
rictic
Creating a cancel token gives you two things: a token and a function. You call
the function when the token should be cancelled, and the token can tell you
when it has been cancelled. The simplest way to query the token is to call
token.throwIfRequested(), which throws a Cancel if the token is cancelled. The
token can also give you promises or callbacks of cancellation if you like, so
you can do stuff like `const result = await Promise.race([token.promise,
promiseOfResult]);`

So you pass the token into a cancellable API, and that API calls
token.throwIfRequested() in places where it is safe for it to do so (i.e.
outside of critical sections).

A library implementing this in <100 lines of code (based on a spec that has
since been sadly abandoned): [https://www.npmjs.com/package/cancel-token](https://www.npmjs.com/package/cancel-token)
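A stripped-down sketch of that shape (illustrative only, not the linked package's actual code; the `Cancel` type and `throwIfRequested` name follow the abandoned spec rictic mentions):

```javascript
class Cancel extends Error {}

// source() returns a token plus the function that cancels it.
function source() {
  let requested = false;
  const listeners = [];
  let signalCancelled;
  const token = {
    get requested() { return requested; },
    promise: new Promise((res) => { signalCancelled = res; }),
    throwIfRequested() {
      if (requested) throw new Cancel('operation cancelled');
    },
    onCancel(fn) { requested ? fn() : listeners.push(fn); },
  };
  function cancel() {
    if (requested) return;
    requested = true;
    signalCancelled(new Cancel('operation cancelled'));
    listeners.forEach((fn) => fn());
  }
  return { token, cancel };
}

// A cancellable API polls the token at safe points:
async function work(token) {
  for (let i = 0; i < 3; i++) {
    token.throwIfRequested(); // safe point, outside critical sections
    await new Promise((r) => setTimeout(r, 5));
  }
  return 'done';
}
```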

~~~
shawndellysse
Oh, I like this. It's so simple, both in code and in concept, that it seems
obvious in retrospect.

It allows a function to be cancelled from the outside, but only on its own
terms, and gives the function a chance to clean up after itself.

I'm going to use this.

------
skybrian
If promises were lazy (basically just function composition), it seems like
we'd have similar problems to Haskell where it's difficult to understand
performance. You wouldn't know when an I/O operation starts or whether it will
get executed again. Maybe that's okay for a high-level API, but low-level I/O
operations are not idempotent, so this seems risky?

So this looks like a trade-off: you could make function composition easier
only by making Promises less suitable for their original purpose. By going
generic, you lose an important guarantee that a Promise is just a value.

------
egeozcan
Is `const myJob = { run: () => fetch(...` too long? Eager is easy to make
lazy. The opposite is also true, but it costs a superfluous run and is worse,
semantically speaking. I'd argue that eager is more general than lazy.

Also, cancellation is yet another state, and it's hard to generalize,
especially when you don't have threads.

Promises should always be async because you want the result to be consistent.
If I'm returning a promise and you are depending on it being sync, that
weakens my flexibility. It makes the code harder to reason about.
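The inconsistency being guarded against is the classic "sometimes synchronous" hazard; a sketch (names are illustrative):

```javascript
// A sometimes-synchronous API ("releasing Zalgo"): the caller cannot
// know whether `cb` runs before or after the code following the call.
const cache = new Map();
function getValueZalgo(key, cb) {
  if (cache.has(key)) {
    cb(cache.get(key)); // warm cache: synchronous
  } else {
    setTimeout(() => {  // cold cache: asynchronous
      cache.set(key, key.length);
      cb(key.length);
    }, 0);
  }
}

const order = [];
getValueZalgo('hi', () => order.push('cb'));
order.push('after-call');
// Cold cache: ['after-call', 'cb']. Warm cache: ['cb', 'after-call'].
// Promise .then callbacks always run asynchronously, removing the ambiguity.
```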

------
Animats
OK, we can't do threaded imperative programming because threads are expensive
and people botch the locking. So we have callbacks where completion of some
external event calls you back. Then you need closures so the callback has some
state so it knows what to do when called back. Now you have a control
structure problem, and need a state machine to decide what to do next. But
most of the time you just want to do the next thing, so there's syntax such as
".then()" so you can write imperative programs again.

------
paroneayea
One of the big challenges mentioned with promises is that you have to kind of
commit to promises linking to promises... this is a common problem with async
systems added later, where you have to "line them up like gears", and you
can't just do async functions which call non-async functions which call async
functions and expect it to work. Python has this problem too.

There's a solution in delimited continuations, however delimited continuations
seem to only be used and understood in the Scheme community (are they used
anywhere else?). Delimited continuations allow you to suspend your code to a
"prompt" lower in the stack at that point... and it doesn't matter if you have
non-"async" code in between.

It'll be nice when they make their way to other more mainstream languages.

------
dmitriid
\- Eager is relatively easy to convert to lazy, if needed. The inverse isn’t
true.

\- async is relatively easy to turn into sync. The inverse isn’t true.

\- no API design is "neutral". Any API design is opinionated. Cancelable,
lazy, synchronous promises are just as opinionated a design as the current
one.

------
kazinator
The whole idea of cancelation is poor; it shouldn't even be a feature.

The way you avoid unnecessary computation, when you have laziness, is to just
roll it into the lazy semantics.

Have it so that if the promise generates something complicated, like a
sequence, that the promise only generates as much of that something as is
accessed (and maybe only a little bit beyond that).

In other words, the async promises should perhaps behave not so differently
from synchronous lazy mechanisms.

The two are flipsides of the same coin. Say I have a synchronous lazy list (of
strings). The strings come from reading a file. Ah, but reading a file is
asynchronous at the OS level. So actually the list is asynchronous, in a
sense. When we access the first element in the list, a line is read from the
file. The underlying stream object reads an entire buffer-sized chunk, though:
still synchronously. Moreover the OS behaves asynchronously and reads ahead in
the file, caching more of it than the stream library asked for. Of course, the
OS doesn't read the _whole file_ (unless it's small). Just a little bit ahead.
Enough ahead not to hammer the I/O subsystem with lots of small operations.

We can create this list over a log file that has 100 million lines, then read
just the first 100 lines and stop using it. The underlying stream library
might read 16K of the file, of which the 100 lines occupy only the first 8K.
The OS might have read ahead by quite a bit more than that and cached more of
the file, and the hard drive's firmware might have buffered an entire track.
If we don't read anything more from that list, then the operation is
effectively canceled. The OS won't cache any more of the file; the stream
library won't buffer more of the text stream.
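In JS, that "compute only as much as is accessed" behavior is what generators give you directly; a sketch over a hypothetical chunked source instead of real file I/O:

```javascript
// Lazily split a chunked source into lines; nothing past what the
// consumer asks for is ever processed.
function* lines(chunks) {
  let buffered = '';
  for (const chunk of chunks) { // chunks are pulled on demand
    buffered += chunk;
    let i;
    while ((i = buffered.indexOf('\n')) !== -1) {
      yield buffered.slice(0, i);
      buffered = buffered.slice(i + 1);
    }
  }
  if (buffered) yield buffered;
}

// Take only the first 2 lines and stop; the rest is never split.
const out = [];
for (const line of lines(['a\nb\n', 'c\n', 'd\n'])) {
  out.push(line);
  if (out.length === 2) break;
}
```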

~~~
deathanatos
> _Have it so that if the promise generates something complicated, like a
> sequence, that the promise only generates as much of that something as is
> accessed (and maybe only a little bit beyond that)._

What about when you start composing Promises? For example, say I have a top-
level promise that just returns a value. But under the hood, it needs a
promise that generates an array. Even if the under-the-hood promise generates
the array piece by piece, the caller of the larger promise is only ever going
to get a single value, so they lose the ability to "stop".

(You might imagine that the composed-over promises are network I/O, for
example.)

~~~
kazinator
In that case, one thing we can do is not force the under-the-hood promise at
all if the wrapping promise's value isn't forced yet. Then we don't have any
async behavior, unfortunately; no calculation begins until the promise is
called in. We could do part of the calculation ahead of time, but then stop
and not complete it until there is an indication that the value is required.
That is fudgy, though: how far is far enough to reap the async benefit
without the downsides?

How about this alternative: instead of .cancel() on promises have a .commit().
This is called if you're sure that you will eventually need that value. The
calculation then proceeds full steam ahead: no going back.

You can ask for the value with or without .commit(); it is just a hint. But if
you ask without .commit(), you may have to wait for a completion that was
deliberately stalled due to your lack of commitment.

Without .commit(), async promises will still proceed on their own to some
extent based on some fudge factor; we don't want programmers automatically
calling .commit() on every promise they make to get the async benefit, which
defeats the purpose.

Uncommitted promises could be identifiable to the garbage collector and
subject to an internal cancelation protocol between GC and the promises. That
protocol basically helps the promise's thread vacate the object so it can be
reclaimed.

Promises could have some sort of hint about how far to proceed before
requiring commitment. This would have to be well thought out: such hints tend
to be too system and workload specific. Automatic tuning is better. The
promise system could keep some statistics about how soon various kinds of
promises are called in after being initiated, and how often they are called in
_at all_ , and then uncommitted promises could decide based on that how far to
compute.

------
xori
This was a weird blog post to read. I think I agree on all of your points
(Promises should be lazy[-ish], cancellable and optionally synchronous) but
disagree on all of your proposed solutions.

I do think `p = new Promise(fn);` shouldn't kick off `fn` immediately, but
that it should start right away on the next event loop tick. I haven't had
issues with creating promise getters for repeatable calls, and I think it
keeps the business code organized away from the low-level code.

I don't see a problem with the original Promise.cancel() proposal, or how
your lazy promises would make canceling them any easier.

And don't we have `await` for the synchronous problem?

    
    
      console.log(await Promise.resolve('hello')); 
      console.log('world')
      // outputs "hello" "world"

~~~
saurik
One nice thing about "new Promise()" calling the function immediately is that
if you are prepared to provide the value immediately, you don't have to
return to the run loop. But probably the reason I'd give for why delaying the
call would be a horrible idea is that the vast majority of the time the
promise is going to do some minimal amount of setup work and then... return
to the run loop (and if it isn't, I am going to ask why you are using a
promise). That means the current behavior of calling the function immediately
minimizes returns to the run loop and provides performance as close as
possible to what you would get if you hand-coded it using callbacks (the only
overhead being the unlikely-to-be-optimized-away-fully-by-the-VM object
allocations and indirect function calls; this paradigm in a language with
zero-cost abstractions would be perfect).

~~~
codedokode
No, you won't get the result immediately anyway, because then() callbacks are
only ever called asynchronously, on a later turn of the event loop. This
protects against overflowing the stack, but might have a small impact on
performance.

~~~
saurik
OK. I see why they would have chosen to do that, and it disappoints me, but
it seems to be weirdly more complex than that... this code prints the numbers
in order (and continues to do so if you rearrange the calls to setImmediate
and setTimeout, or move them outside the function, either before or after)...
so it is definitely returning from the function, but it seems like the
resolved value gets to jump the queue?

    
    
        (async () => {
            setImmediate(function() {
                console.log("4");
            });
            setTimeout(function() {
                console.log("3");
            }, 0);
            console.log(await new Promise((resolve, reject) => {
                console.log("0");
                resolve("2");
            }));
        })().catch();
        console.log("1");

~~~
codedokode
I don't think it is a good idea to expect a specific order of callback
execution here unless it is described in some specification.

------
BenoitEssiambre
You don't have to use promises: [https://medium.com/@b.essiambre/continuation-passing-style-p...](https://medium.com/@b.essiambre/continuation-passing-style-patterns-for-javascript-5528449d3070)

------
kahnjw
Agreed 100%. I did a bunch of js work about 3 years ago, used tons of
promises. Then started a new job using Scala. The futures api in Scala is
exactly what the author advocates, and it is definitely better for the reasons
he gives.

------
acjohnson55
He actually missed my least favorite thing about the promise API, which is
that they fail silently. I'd argue that by default, an unhandled rejection
should throw an exception at the end of an event loop, with an opt-in for the
current behavior per-promise.

------
andrewaylett
I seem to be in a minority, but I rarely want to use the `new Promise()`
mechanism for creating a promise, and I get the distinct impression that
having it be the 'default' is a bad idea -- the number of times I've seen
people wrap all their promise-related code inside the constructor, finishing
off with `.then(function (x) {resolve(x)})`, is disappointing :(.

async/await solves much of this, of course, but where that's not available I
much prefer to keep all my async functionality actually async, and start off
by using `Promise.resolve()`. Save the constructor for when you need to
encapsulate some non-promise async code.
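The two styles side by side, as a sketch (the function names and the `fetchUser` parameter are illustrative):

```javascript
// Anti-pattern: wrapping already-promise-based code in the constructor,
// only to resolve with the value you already had as a promise.
function getUserBad(fetchUser) {
  return new Promise((resolve, reject) => {
    fetchUser().then((user) => { resolve(user); }, reject);
  });
}

// Preferred: start from Promise.resolve() and chain; errors thrown by
// fetchUser are captured in the chain automatically.
function getUserGood(fetchUser) {
  return Promise.resolve().then(() => fetchUser());
}
```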

------
dsego
Sindre built some great little modules that make working with promises a
breeze. [https://github.com/sindresorhus/promise-fun](https://github.com/sindresorhus/promise-fun)

------
kodablah
This feels analogous to the problems w/ futures in Scala/Java as they were
first introduced. And the solutions are provided by libraries like
[https://monix.io/](https://monix.io/).

So why are promises the problem, rather than the lack of libraries on top of
them? I understand cancellation cannot be fixed, but laziness sure can. As
for synchronous execution, that's just not gonna happen in event-driven land.
It doesn't happen with other callback-based APIs (except synchronous XHR,
which is deprecated) and I don't see the complaints there.

------
fwip
It sounds like the author wants coroutines, not promises.

~~~
yuchi
Exactly.

I've been using redux-saga a lot recently, and sagas really fill the gap
between the concept of a long-running Task and asynchronous
values/executions (Promises).

~~~
rmrfrmrf
I just got there myself after thinking async/await would be all I needed.

There's still a lot that can be done with generators, so I don't see them
falling out of favor completely yet.

------
rmrfrmrf
You can have your referentially-transparent cake and eat it too, but the main
problem is that no one has developed a decent library that marries
FantasyLand-compliant wrappers with Promise interop.

There are about 8 million Task/IO monad implementations and no one stopped for
a second to think that `task.fork` could just return a Thenable and work with
async/await as expected.

~~~
rockymadden
This does exist: [https://github.com/fluture-js/Fluture/blob/master/README.md#...](https://github.com/fluture-js/Fluture/blob/master/README.md#promise)

Future.of(0).promise().then(console.log);

------
phaedrus
What do you think of C++17 coroutines? Of particular interest to you might be
CppCon talks by Gor Nishanov, which are on Youtube.

------
fefb
I rarely use promises - just for the simplest logic. When a package returns
promises in its API, I just use Rx.Observable.fromPromise(thePromise).
ReactiveX, especially RxJS, is so powerful for handling async events, from
different sources to different logics. You can build powerful pipelines with
it.

------
reaktivo
A reminder that LazyPromises are trivial to implement - memoizing so the
executor runs only once, on first use:

    
    
        function LazyPromise(executor) {
          let promise;
          const force = () => promise || (promise = new Promise(executor));
          this.then = (onFulfilled, onRejected) =>
            force().then(onFulfilled, onRejected);
          this.catch = (onRejected) => force().catch(onRejected);
        }

------
fiatjaf
I don't agree. Promises are a solution to the asynchronous callback world, not
a dream spec someone came up with.

1\. Eager, not lazy: Why is lazy better? Sometimes I want eager: I use
promises. Sometimes I want lazy: I use a promise getter. Done. If it were the
other way around, how would I turn a lazy promise into an eager one without
messy code?
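That eager-to-lazy direction is a one-line thunk; a memoizing sketch (the `lazy` helper name is mine):

```javascript
// Wrap an eager promise-producing function so nothing runs until the
// first call, and the started work is shared by all later callers.
function lazy(fn) {
  let p;
  return () => (p = p || fn());
}

let started = 0;
const getUserAge = lazy(() => {
  started++; // the side effect only happens on first demand
  return Promise.resolve(28);
});

// Nothing has run yet; the first call starts it, later calls reuse it.
getUserAge().then((age) => console.log(age));
getUserAge();
```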

2\. No cancellation. Events that permit cancellation are rare. Situations in
which you would want to cancel something are rare. If you face them, use a
promise library that does permit cancellation - Bluebird does.

3\. Never synchronous. If you want synchronous, just don't use a Promise; use
a function that takes another function. I don't get the point about
"callbacks to sync". Callbacks are asynchronous. "Synchronous callbacks" may
have this name, but they're not actually callbacks; they're just functions. A
function can take another function as a parameter; that doesn't automatically
make the latter a "callback".

------
avaq
This article is a bit similar to one I wrote a while ago:
[https://medium.com/@avaq/broken-promises-2ae92780f33](https://medium.com/@avaq/broken-promises-2ae92780f33)

------
maximexx
> They force some behaviors to always happen even when it doesn’t make sense.
> That’s okay

No, it's not. Promises suck, that's all, no need to spend more words on it.

------
amelius
The alternatives look nice. There's just one requirement missing: streaming
progress information to the listeners.

~~~
zimablue
I think this completely changes what you have though, in a way that's no
longer a primitive?

You then have something like an async generator, which is like a fusion of
asynchrony and sequence?

Except generators are pull not push, so instead you have a promise that
accepts a function that operates on a sequence?

I don't know if this pattern is common somewhere, someone who does please
explain!

~~~
amelius
In Haskell you can generate a list lazily. The last item in the list could be
the final result value, whereas the leading elements could be the progress
information. Canceling a computation could be done simply by no longer
"listening" to the result (i.e. stopping the evaluation process).

I guess you could implement this in JS with a promise-like structure that
returns a tuple containing progress information AND a promise for the
remainder of the computation. In a sense, this is similar to the generator
approach.
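For what it's worth, ES2018 async generators already have roughly this progress-then-result shape; a sketch (the `download` function is hypothetical):

```javascript
// An async generator yields progress values; breaking out of the loop
// early finalizes the generator, which is a crude form of cancellation.
async function* download() {
  try {
    for (let pct = 25; pct <= 100; pct += 25) {
      await new Promise((r) => setTimeout(r, 5));
      yield pct; // progress
    }
  } finally {
    // cleanup runs even if the consumer stops listening early
  }
}

(async () => {
  for await (const pct of download()) {
    console.log(`progress: ${pct}%`);
    if (pct >= 50) break; // stop listening; the generator is finalized
  }
})();
```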

One problem is if your program execs an external program. At what point should
a promise kill the external process?

------
LordHumungous
I thought 'await' was supposed to solve the problem of going from async to
sync?

------
0x7f800000
Is it possible to modify Promises to be lazy through a Proxy?

------
singularity2001
agreed with OP: being able to call async functions from 'normal' code would be
very kind of js.

res=fetch('i-just-want-the-result.com')

at least when scripting with node.js

------
codedokode
I have a feeling the author doesn't understand Promises well. In my opinion,
they are in fact designed poorly, but I don't see any problems with the
points the author describes.

He doesn't like that the callback is called immediately - but Promises just
represent a result that will be available later, and do not (and should not)
guarantee when the function is called. If you want to delay some function
call, do it explicitly or use a delay promise.

In my opinion, the main problem with promises is broken error handling. They
don't play well with exceptions. For example:

    
    
        var p = new Promise(function (res, rej) {
            throw new Error("System is broken");
        });
    

This code will just turn the error into a silently-ignored rejection, while
one would expect the runtime error to float up and terminate the program -
that is what runtime errors are made for.

This also makes writing tests more difficult because tests often use
exceptions to indicate failure.

I have some ideas how to fix it (neither is perfect), but the comment will
become too long.

~~~
rictic
This is a fundamental issue with multitasking: an independent task has its
own stack, so errors can't propagate up the stack of the function that
started the task.

e.g. consider this python code:

    
    
        def start_task():
          Thread(target=do_task).start()
    

What happens if do_task throws an exception? The exception can't propagate up
from start_task because start_task may have returned when the exception is
thrown.

At some point you have to join your threads, and you have to await/then your
promises, otherwise there's no well defined place in your program for the
exception to go.
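
In promise terms that join point is the `await` (or `.then`); a sketch with a hypothetical failing `doTask`:

```javascript
// Hypothetical task that fails asynchronously.
async function doTask() {
  throw new Error("task failed");
}

async function startTask() {
  try {
    await doTask();   // the join point: the rejection surfaces right here
  } catch (err) {
    console.log("caught:", err.message);
  }
}

startTask();
```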

Linters can help with this. tslint is one example, I believe it can give a
warning for unhandled promises.

`p` will reject, and any function that awaits p will reject. The error is that
you've fired off an asynchronous task but you don't have any code that cares
about the result of that asynchronous task (and the stack of the code that
started it may already be gone).

~~~
codedokode
> otherwise there's no well defined place in your program for the exception to
> go.

I don't see the problem with that. Unhandled exceptions can occur at any place
of your program.

I also don't see why the unhandled exception from the background thread cannot
terminate main thread. Why not? That is how unhandled exceptions are supposed
to work. Terminating the program is the optimal _default_ behaviour for any
error in my opinion. This way you won't miss them.
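
Node actually lets you opt into exactly that default; a sketch using the real `unhandledRejection` process event:

```javascript
// Terminate the process whenever a promise rejection goes unhandled,
// mirroring how uncaught synchronous exceptions crash a program.
process.on("unhandledRejection", (reason) => {
  console.error("Unhandled rejection:", reason);
  process.exit(1);
});
```

Recent Node versions make crashing on unhandled rejections the default behaviour.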

~~~
fiatjaf
In Go there are two ways you can handle wrong things happening:

    
    
      - return an error
      - panic
    

What is the recommended, best, pretty, beautiful way? Return an error. Nobody
uses panic.

Javascript Promises are equivalent to that.

If you want to stop the main program, call process.exit(). That would be the
equivalent of panic.

~~~
codedokode
I don't think it is beautiful. This fills the program with lots of unnecessary
`if`s for checking the result of each call. It also prevents one from chaining
functions like a(b(x)).

~~~
fiatjaf
Yeah, you're right. Not beautiful and prevents chaining. But works like a
charm, don't you think?

Chaining functions is for functional people. Go is proudly imperative.

------
IIIIIIIIIIIIIII
This may get me in hot water here but...

I started working with JS promises specifically when they were barely
available in a beta runtime. It took me over a year of working with them to
really get a feel for them, now it's been far longer. That's because while you
can "understand" the description and use it just fine, but a deeper
comprehension and intuition takes much more time. I experimented a lot and
insisted on writing my own helpers from scratch, without looking up other
people's code, because I wanted to get a _feeling_ for the details.

This article seems quite artificial to me, the problems mostly made-up.

I don't see the point of the first complaint. If you don't want to start right
away chain it to something that it should wait for. If it should not wait,
then it can start right away. He writes " _Functions rescue us in this case
because functions are lazy._ ", which I don't quite understand: what is he
running through promises if not functions? His "betterFetch" example mixes
synchronous and promise syntax - how about using async/await if you prefer the
former? I admit though I don't quite get the point of that example.

I don't understand the whole "run a promise" idea either - because you don't
"run a promise", that whole notion has nothing to do with what "promise"
means. Just look at the word! It represents a (wrapped) future value. Where
does the idea of "running it" come from? How do you "run" a (future) value?

You have a function, and it is quite easy IMO: using a promise, you chain it to
whatever you want to wait for. These days you can even use semi-synchronous
syntax (async/await). "Running a promise" makes no sense to me - you run
functions - so I don't see where the difficulty lies.

The second point, cancellation, has been discussed very, _very_ thoroughly -
after all, this was on the table to be standardized. One of the issues he
raises is the same as point one - if you have a chain it's automatic. The main
issue of cancellation is that you have zero control over the actual
asynchronous operation that the promise actually stands for - because this is
controlled by the OS alone! If you started I/O, what does "cancelling the
promise" mean?

1\. If it is still waiting: if you don't want to run something, make sure the
previous step returns a rejected promise. You can easily "cancel the promise":
just let your promise function check something in the parent scope (via a
callback, or because it is in its lexical scope) when its chained function
starts, and if that says "you are cancelled", don't do it. You can put such a
check as a standalone function anywhere in the promise chain you created; just
let that "amIcancelled()" function throw or return a rejected promise. The
whole chain aspect is something the article misses.
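
That check-in-the-chain idea can be sketched like this (the flag and the `step1`/`step2` functions are stand-ins for real work):

```javascript
// Hypothetical cancellation flag living in the parent scope.
let cancelled = false;

const checkCancelled = (value) => {
  if (cancelled) throw new Error("cancelled");   // rejects the rest of the chain
  return value;                                  // otherwise pass the value through
};

const step1 = () => Promise.resolve("step1 done");
const step2 = (prev) => Promise.resolve(prev + ", step2 done");

step1()
  .then(checkCancelled)   // bails out here if the flag was set in the meantime
  .then(step2)
  .then((result) => console.log(result))
  .catch((err) => console.log("chain stopped:", err.message));
```

Setting `cancelled = true` before the next step runs turns the remainder of the chain into a rejection that lands in the `catch`.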

2\. If the code is already running: you cannot cancel the actual (OS
controlled) asynchronous operation, nor can you cancel a running JS function
(unless you use async/await; see the bottom paragraph).

I agree that promises are not perfect, but async/await - not mentioned at all!
- makes it a bit easier for many people - as long as they don't forget one
thing: Even if your functions now look like synchronous ones there is a
fundamental difference: A synchronous JS function is never interrupted by any
other code. An async function is suspended and other JS code gets to run in
the middle of it when it encounters an "await". This is something new first
introduced by generators, before that JS functions were atomic (now some are
not).
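
The suspension is easy to observe; in this sketch the synchronous caller runs before the async function resumes:

```javascript
const order = [];

async function f() {
  order.push("before await");   // runs synchronously with the call
  await null;                   // suspension point: control returns to the caller
  order.push("after await");    // resumes in a later microtask
}

f();
order.push("sync code");        // runs before f resumes

// Once the microtask queue drains:
// order is ["before await", "sync code", "after await"]
```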

~~~
ryandvm
> It took me over a year of working with them to really get a feel for them

I had a similar experience. It took quite a while for me to stop shooting my
foot. My takeaway from that experience was that, while they do have certain
advantages, Promises suck. Any abstraction that is so unintuitive that it
takes beginners dozens or hundreds of hours to master is probably not an
abstraction worth using - especially if it is supposed to be a primary feature
of the language.

~~~
z3t4
I had the same experience with the callback pattern. It literally took a whole
year to grok. And I code almost every day. I'm now a _ninja_ with callbacks.
So it's hard to motivate myself to learn Promises. Synchronous code is easier
to deal with, and you get concurrency via thread abstraction. But it will
eventually bite you when you start to get double transactions, e.g. line 1
checks if there are funds in the account, line 2 withdraws the money, line 3
records the goods. But then another thread takes the money between lines 1 and
2. And then the "single threaded" event loop actually becomes easier to deal
with than making sure your code is "thread safe" with locks etc.
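
Worth noting: the single-threaded event loop only saves you if no `await` sits between the check and the debit; a sketch with a hypothetical in-memory account:

```javascript
// Check-then-act race in async JS: the await between the check and the
// debit lets another withdraw run in the gap (account is hypothetical).
async function withdraw(account, amount) {
  if (account.balance >= amount) {   // line 1: check funds
    await Promise.resolve();         // stand-in for async I/O: others run here
    account.balance -= amount;       // line 2: debit, which can now overdraw
  }
}

const account = { balance: 100 };
Promise.all([withdraw(account, 100), withdraw(account, 100)])
  .then(() => console.log(account.balance));   // prints -100, not 0
```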

------
tobltobs
Promises are a classic JS clusterfuck: replace something shitty, like nested
callbacks, with something even more shitty, while ignoring 40 years of
experience from other languages.

~~~
phillnom
I couldn't disagree more. Promises are orders of magnitude better than nested
callbacks.

