
What's so great about JavaScript Promises? - jamesjyu
http://blog.parse.com/2013/01/29/whats-so-great-about-javascript-promises/
======
MatthewPhillips
Promises do 2 things:

1) Give the illusion that control has been returned to the caller. It hasn't,
but it kinda looks like it. This is soothing for people who don't really like
JavaScript all that much.

2) Provide a structured way of dealing with callback-oriented code. You don't
have to think about whether a function's callback should be the last
parameter or the only parameter. You don't have to think about whether it
should take a separate errback or whether the error should instead be the
first parameter of the callback. You just do it the way the promises library
prescribes, so the interface is predictable.

#1 is of no concern to me. #2 I feel pretty meh about. A consistent interface
is nice, but I don't lose sleep over a library using callbacks in a slightly
different way than I like; I just write an extra function or two to wrap it
into the way I like and move on with my life. I don't have a problem with
Promises per se, I just don't want a library I'm using to pick a winner and
tack on an extra few k.

Doesn't look like ES6 is going to pick a promises winner, so I probably will
largely ignore it until ES7.

~~~
rictic
I wrote a ton of callback-oriented code, thinking similarly to you. Promises
have come a long ways since then, and I tried out Q.js and discovered that I
was missing out on a significant and powerful abstraction. The two things
you're missing out on here are:

#1 Promises have strong guarantees. A promise will be resolved at most once,
and it will be resolved either successfully or unsuccessfully. This makes
reasoning about them and composing them together much easier. For example,
whenever I need to perform multiple async calls with callbacks I've always got
to write a bit of repetitive, error-prone boilerplate to wait for and combine
their results. With a good promise library this is a one-liner using a well-
tested library function.
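The combining one-liner rictic describes can be sketched with the now-standard Promise.all (Q spells it Q.all; ES6 later standardized the name used here so the sketch runs standalone). The fetchUser/fetchPosts names are hypothetical stand-ins for any promise-returning calls:

```javascript
// Two stand-in async calls (hypothetical; any promise-returning
// functions would do here).
function fetchUser() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve({ name: "ada" }); }, 10);
  });
}

function fetchPosts() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(["post1", "post2"]); }, 5);
  });
}

// The "one-liner": wait for both calls, then combine their results.
var combined = Promise.all([fetchUser(), fetchPosts()])
  .then(function (results) {
    return { user: results[0], posts: results[1] };
  });
```

With callbacks, the same combination needs hand-written counters or flags to know when both results have arrived.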

#2 Error propagation. With callbacks most exceptions vanish into the ether,
only catchable by window.onerror and friends. Propagating an error up to
the caller who can handle it is an enormous pain in the ass and requires
repetitive error-handling code everywhere if you want to be resilient. With
promises, an exception raised in a handler counts as the resulting promise
failing, which can propagate up similarly to the way exceptions propagate with
try/catch. Again, this makes writing reliable abstractions so much easier.
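A minimal sketch of that propagation, using the standard Promise API: an exception thrown inside a handler rejects the derived promise, skips later success handlers, and lands in the nearest rejection handler.

```javascript
// A resolved promise whose handler throws: the exception doesn't vanish,
// it rejects the chain and flows to the .catch, much like try/catch.
var outcome = Promise.resolve("data")
  .then(function (data) {
    throw new Error("something broke while handling " + data);
  })
  .then(function () {
    return "never reached";        // skipped: the chain is already rejected
  })
  .catch(function (err) {          // one handler covers the whole chain
    return "recovered: " + err.message;
  });
```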

~~~
taproot
#2 cannot be emphasized enough; I can't count the number of times this would
have improved my code in a huge way.

------
davewasmer
So, as far as I can tell, there are two benefits this article outlines:
avoiding highly nested code, and handling certain types of errors better. Are
there other benefits?

In my experience, promises can be more difficult to debug (once the promise
library takes my callback, I can't follow the flow of execution until it is
called, unless I crack open the library itself), and they are less intuitive.
While those issues shouldn't disqualify the idea immediately, they do make me
hesitate.

And I'm not sold on the benefits to error handling either. In the author's
example, yes, all those error handlers could be grouped together - but how
often do you have that many async function calls with _identical_ error
handlers? In most situations, that is a warning sign that you aren't handling
errors properly and with enough "resolution".

All that said, I don't think promises are useless. There may be times when it
makes sense to use them. But calling them the "next great paradigm in
JavaScript programming" seems like a bit of a stretch to me.

~~~
embwbam
At least in node, you usually pass errors all the way back up the call chain.
With normal callback style error handling, you have to put "if (err) return
cb(err)" after each async call. It's crazy.

With promises, you can attach an error handler once. If you think of errors as
only unexpected conditions, you can have a single error handler for each set
of operations and not have to worry about checking at each step.
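The contrast embwbam describes can be sketched side by side (the step functions are made-up placeholders, not a real API):

```javascript
// Callback style: every level must manually forward errors.
function loadCallback(step1, step2, cb) {
  step1(function (err, a) {
    if (err) return cb(err);           // boilerplate, repeated...
    step2(a, function (err, b) {
      if (err) return cb(err);         // ...at every level
      cb(null, b);
    });
  });
}

// Promise style: a rejection anywhere skips ahead to the single .catch.
function loadPromise(step1, step2) {
  return step1()
    .then(function (a) { return step2(a); })
    .catch(function (err) { return "handled: " + err.message; });
}
```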

~~~
MatthewPhillips
You can wrap your callback functions and handle errors all in one spot.
Example:

    
    
        var slice = Array.prototype.slice;
    
        function errHandler(callback) {
          return function(err) {
            if(err) {
              // Error handling code here.
            } else {
              // Forward the remaining arguments, minus err, to the callback.
              callback.apply(null, slice.call(arguments, 1));
            }
          };
        }

~~~
embwbam
I've tried stuff like this before, but it clutters your code pretty badly
anyway, doesn't it? You have to do something like this:

    
    
        function myAsyncThing(cb) {
            var handleErrors = errHandler(cb)
            doSomethingElse(handleErrors(function(data) {
                // repeat. use handleErrors at each callback. 
            }))
        }
        

Am I missing something? Can you get it cleaner than that?

~~~
MatthewPhillips
That depends. If you're using doSomethingElse a lot I'd just go ahead and wrap
that as well; if you're using it once it's no big deal.

    
    
      doSomethingElse(errHandler(function(results) {
        // foo
      }));
      

If you're using Node you should be writing small(ish) modules anyway, so it
doesn't lead to much code bloat, and it's advantageous to consolidate your
error handling in one or two spots.

------
ef4
Promises are not great -- I'd much rather have real coroutine support. But
given that we don't have real coroutine support, promises are absolutely
essential to writing sane, fault-tolerant JavaScript.

~~~
shardling
If for some reason you end up writing JavaScript that only needs to run on
Firefox, you do get coroutines via the _yield_ keyword. Pretty sure it will
also appear in a future version of JavaScript.

https://developer.mozilla.org/en-US/docs/JavaScript/New_in_JavaScript/1.7

~~~
Sephr
I wrote a library that adds support for this using yield:
http://eligrey.com/blog/post/pausing-javascript-with-async-js

------
kogir
Promises are fine, but I much prefer C#'s await. The last example using await:

    
    
      try {
        await Parse.User.logIn("user", "pass");
        var results = await query.find();
        await results[0].save(new { key = value });
      } catch (Exception e) {
        // Some error handling
      }
    
      // the object was saved
    

Does anything exist to do this in JavaScript? Can CoffeeScript emulate it? C#
does it entirely in the compiler, I believe.
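Nothing standard existed in JavaScript at the time of this thread, but ES2017 later added async/await on top of promises, and the last example reads almost identically. A sketch with hypothetical stand-ins for the Parse calls:

```javascript
// Hypothetical stand-ins for the Parse calls in the article.
function logIn(user, pass) {
  return Promise.resolve({ user: user });
}
function find() {
  return Promise.resolve([
    { save: function (attrs) { return Promise.resolve(attrs); } }
  ]);
}

// Same shape as the C# example: sequential awaits inside try/catch.
async function run() {
  try {
    await logIn("user", "pass");
    var results = await find();
    await results[0].save({ key: "value" });
    return "saved";   // the object was saved
  } catch (e) {
    return "error: " + e.message;
  }
}
```

Like C#, the JS engine compiles this down to continuations over the underlying promises.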

~~~
rektide
There's an extreme amount of magic going on under the hood when you do this.
From
http://msdn.microsoft.com/en-us/library/vstudio/hh156528.aspx :

| "An await expression does not block the thread on which it is executing.
Instead, it causes the compiler to sign up the rest of the async method as a
continuation on the awaited task. Control then returns to the caller of the
async method. When the task completes, it invokes its continuation, and
execution of the async method resumes where it left off."

The implementation detail is that your caller's state needs to somehow be re-
asserted when the asynchronous wait finishes evaluating. Anyone mildly versed
in computer hardware knows that to be, in most situations (some green-
threading environments aside), a very onerous, scary, intensive thing, to say
nothing of the tasks of capturing and storing the continuation, which is
likely fairly deep in a call stack somewhere.

It's great magic, and it certainly puts a lot of power programmers like to
use at their fingertips. But be advised that it is somewhat scary to hand out
magic wands like candy to newcomers and tell them it's OK and good.

~~~
kogir
It's only magic if it's poorly documented or undocumented. I'm aware of many
of the internals, and have written my own SynchronizationContext before.
Execution actually can (and often does) continue using the same thread.

I still strongly believe it's better to make cool things available rather than
try and protect people from themselves.

~~~
rektide
I'd fallen out of the .NET world before async/await arrived, but were someone
to put some under-the-hood docs under my nose, I'd love to brush up. I seem to
have osmosed that there is some compiler transformation going on, but I've not
been exposed to ways to monkey around with it; I didn't know there was more
than a use-only black box. It would be lovely to get an engine-bay tour. Have
you any recommendations?

------
iamwil
Funnily enough, I was just doing some comparison shopping between different
promise libraries as well.

Besides avoiding highly nested code, and handling certain types of errors
better, I've found that they also provide an optional progress callback, in
addition to the success and error callbacks in then(). This is highly useful
for reporting the progress of a long-running process, to give the user some
update or stream the results as they're completed. That cuts down on both the
actual and perceived response time.

In normal callback-style, it's awkward to have two callbacks, one for
success/err, and one for progress. In addition, it's not the canonical form
for node.js, so we often skip doing it.
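A hand-rolled sketch of the progress idea (not Q's actual implementation; Q exposes roughly this via deferred.notify and a third argument to then()):

```javascript
// A minimal deferred with progress notifications, sketching the pattern
// behind then(onSuccess, onError, onProgress) in Q-style libraries.
function makeDeferred() {
  var progressListeners = [];
  var resolveFn;
  var promise = new Promise(function (resolve) { resolveFn = resolve; });
  return {
    promise: promise,
    resolve: resolveFn,
    onProgress: function (fn) { progressListeners.push(fn); },
    notify: function (amount) {
      // Fire every registered progress listener with the update.
      progressListeners.forEach(function (fn) { fn(amount); });
    }
  };
}

// Usage: a long-running job reports progress before resolving.
var job = makeDeferred();
var updates = [];
job.onProgress(function (pct) { updates.push(pct); });
job.notify(50);
job.notify(100);
job.resolve("done");
```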

As for between the different libs, I've found that Q has one of the better
APIs, but it's also the slowest. <https://github.com/kriskowal/q/issues/153>

But that's because it uses Object.freeze(), which V8 penalizes with a
slowdown when iterating over frozen objects.
http://stackoverflow.com/questions/8435080/any-performance-benefit-to-locking-down-javascript-objects

~~~
tlrobinson
The usage of Object.freeze() was removed recently and is slated for release in
v0.9
https://github.com/kriskowal/q/commit/ef6c1b9ce35c7bf3949c7035b877d153af766987

~~~
iamwil
Awesome, looking forward to the update, and speedup.

------
grannyg00se
>That’s getting pretty ridiculous...but because of the way promise chaining
works, the code can now be much flatter.

In what way is the indented code ridiculous, and why is flatter better?

Conceptually, you still have to think of the callbacks operating the same way
in order to properly understand the code. Flattening it out and changing the
shape so that it isn't nested doesn't really help change what is going on.
Maybe I'm missing the bigger picture.

~~~
rektide
Indenting is a symptom of having to write code structurally, having to tightly
couple the call site with its handler.

Promises break that structural linkage: we have the only thing in JS that
matters, an object, a first class plain old object, that we can reason with
and perform operations on as if it were any other object. It happens to be an
object about a future which will one day resolve, but it's just an object.

And because it's an object that can, after its inception, have handlers added
to it, we break the structural linkage that callbacks mandate: we can add
callbacks later, we can choose conditionally which callbacks to attach, we can
compose or chain more promises after our first promise and put our handler on
that... we have a first-class thing to work with, not just a function that
expects a fully-readied handler. It defers the need to define handling.
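That first-class-object point can be seen directly with the standard Promise API: a handler attached after a promise has already settled still fires with the settled value.

```javascript
// A promise is a first-class value: store it, pass it around, and
// attach handlers long after it has settled.
var saved = Promise.resolve(42);   // settles immediately

function attachLater(promise) {
  // Even though `promise` is already resolved, a handler attached now
  // still fires (asynchronously) with the settled value.
  return promise.then(function (value) { return value * 2; });
}

var doubled = attachLater(saved);
```

A raw callback offers no equivalent: once the call is made, the chance to register a handler is gone.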

Read more of my comments to get a better sense of why A) I adore promises and
B) I feel this article is injurious to the case for promises.

~~~
jessaustin
_this article is injurious to the case for promises._

Haha I thought the same about the Crockford talk. Can you point me to a non-
injurious source?

~~~
jasondenizac
see also: <https://gist.github.com/3889970>

------
tlrobinson
I'm glad promises are gaining traction in JavaScript, but it's a shame they
chose to use jQuery promises rather than CommonJS Promises/A. The differences
in error handling are significant.

See Domenic Denicola's comment at the bottom of the article, and his excellent
"You're Missing the Point of Promises" post: <https://gist.github.com/3889970>

------
kzahel
Most of the time I'm processing data that is on a queue, and I have some kind
of event loop. Promises are great for simple things, I guess. But they just
seem like extra, unnecessary syntax. I tried a while ago looking at the
different libraries to see if they would give me a
"resolveAll/sequential/parallel" for a list of promises, but none of the
libraries even seemed to do something very basic like that, which is where I
would get any actual utility. I guess most promises are told to "execute" (do
their asynchronous thing) as soon as the promise is created. I'm more
interested in having finer-grained control over when the execution happens,
too.
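For what it's worth, the parallel case asked for here is Q.all in Q (later standardized as Promise.all), sequential execution falls out of an ordinary reduce, and control over when execution starts comes from passing promise-returning functions instead of already-started promises. A sketch:

```javascript
// A task factory: each call starts work only when invoked, which is
// how you keep control over *when* execution happens.
function task(n) {
  return function () {
    return new Promise(function (resolve) {
      setTimeout(function () { resolve(n); }, 5);
    });
  };
}

var tasks = [task(1), task(2), task(3)];

// Parallel: start everything at once, wait for all results.
var parallel = Promise.all(tasks.map(function (t) { return t(); }));

// Sequential: chain each task after the previous one with reduce,
// accumulating the results in order.
var sequential = tasks.reduce(function (chain, t) {
  return chain.then(function (acc) {
    return t().then(function (n) { return acc.concat(n); });
  });
}, Promise.resolve([]));
```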

All in all, I find promises to be pretty pointless in JavaScript. Until ES6,
that is. I also experimented with the "yield" expression in Google's Traceur
compiler, but they were still working out their issues last time I tried.

------
nickporter
I think I'm the only one in this world who doesn't mind the async pyramid. To
me, it is very intuitive. Also, if your pyramid is too deep, chances are you
need to extract some of it into functions.

~~~
ufo
I hate the async pyramid because explicit continuation passing ought to be a
low-level representation generated exclusively by machines, not written by
programmers.

For example, one of the great things about high level programming languages is
that you can write _composable_ expressions such as

    
    
        f(g() + h())
    

but you can't do that if g and h are written in CPS style, since then you need
to give explicit names to all the intermediate "registers" used in the
computation, as well as baking in a specific order of evaluation:

    
    
        g(function(a){
            return h(function(b){
                return f(a + b)
            })
         })
    

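Promises recover some of that composability: the library threads the intermediate values, so no hand-named registers are needed (assuming g and h return promises):

```javascript
function f(x) { return x * 10; }
function g() { return Promise.resolve(2); }
function h() { return Promise.resolve(3); }

// The promise analogue of f(g() + h()): the intermediates are combined
// by the library instead of hand-threaded through continuations.
var result = Promise.all([g(), h()]).then(function (vals) {
  return f(vals[0] + vals[1]);
});
```

Note that Promise.all also avoids baking in an evaluation order: g and h run concurrently.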
> Also, if your pyramid is too deep, chances are you need to extract some of
> it into functions.

This sounds like the rule that "if your method is more than 15 lines you
should split it into subfunctions". I never liked this line of reasoning,
because the complexity (number of possible interactions) in a module increases
quadratically with the number of functions in it. You fight this complexity by
only breaking down a function at "natural" boundaries dictated by
encapsulation and reuse, rather than by code size.

------
ilaksh
I think Iced CoffeeScript's await and defer are nicer.

There is also LiveScript which has backcalls:

    
    
        x <- map _, [1 to 3]
        x * 2
        #  => [2, 4, 6]
    
        data <-! $.get 'ajaxtest'
        $ \.result .html data
        processed <-! $.get 'ajaxprocess', data
        $ \.result .append processed
    

I think the LiveScript syntax is awesome.

------
leeoniya
Promises are also pretty essential when doing RPC-esque back-and-forth with
web workers, when you need to distribute a workload and wait for all the
results to come back before continuing a chain of ops on the main thread.

------
Twisol
Promises actually aren't far off from functional reactive programming (FRP).
The idea in common is to take the abstract notion of an entry point and make
it an actual object to be manipulated.

------
shtylman
If your original callback approach just had one callback it wouldn't look so
terrible. success:/error: is an old jQueryism.

------
kikibobo69
Why are these called promises and not futures?

~~~
dotborg
ES6 generators are futures

