
Callbacks are imperative, promises are functional - timcraft
http://blog.jcoglan.com/2013/03/30/callbacks-are-imperative-promises-are-functional-nodes-biggest-missed-opportunity/
======
tomdale
James does a good job of articulating why promises are such a useful
abstraction, especially in JavaScript land. I've been working on a project
recently that relies heavily on coordinating many asynchronously populated
values, and I don't even want to think about what the code would look like if
we were wrangling callbacks manually.

We actually extracted our promises implementation from the work we've been
doing, and released it as RSVP.js[1]. While other JavaScript promises
libraries are great, we specifically designed RSVP.js to be a lightweight
primitive that can be embedded and used by other libraries. Effectively, it
implements only what's needed to pass the Promises/A+ spec[2]. For a
comparison of RSVP.js with other promises-based JavaScript asynchrony
libraries, see this previous discussion on Hacker News[3].

1: <https://github.com/tildeio/rsvp.js>

2: <https://github.com/promises-aplus/promises-spec>

3: <https://news.ycombinator.com/item?id=4661620>
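
As a sketch of what such a lightweight primitive buys you (using the built-in Promises/A+-compatible `Promise` here for illustration rather than RSVP.js itself; RSVP provides the equivalent `then` and `all`):

```javascript
// Wrap an async source of a value as a promise.
function delayedValue(value, ms) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(value); }, ms);
  });
}

// Coordinating many asynchronously populated values:
var combined = Promise.all([
  delayedValue(1, 15),
  delayedValue(2, 10),
  delayedValue(3, 5)
]).then(function (values) {
  // Results arrive in input order, regardless of completion order.
  return values.reduce(function (sum, v) { return sum + v; }, 0);
});

combined.then(function (sum) { console.log(sum); }); // 6
```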

------
mbostock
Not to focus too myopically on the given example, but I can’t help but wonder
why it’s a requirement that the first file be handled specially? A less
contrived example would make the argument more convincing.

If I wanted to compute the size of one file relative to a set, I’d probably do
something like this:

    
    
      queue()
          .defer(fs.stat, "file1.txt")
          .defer(fs.stat, "file2.txt")
          .defer(fs.stat, "file3.txt")
          .awaitAll(function(error, stats) {
            if (error) throw error;
            console.log(stats[0].size / stats.reduce(function(p, v) { return p + v.size; }, 0));
          });
    

Or, if you prefer a list:

    
    
      var q = queue();
      files.forEach(function(f) { q.defer(fs.stat, f); });
      q.awaitAll(…); // as before
    

This uses my (shameless plug) queue-async module, 419 bytes minified and
gzipped: <https://github.com/mbostock/queue>

A related question is whether you actually want to parallelize access to the
file system. Stat'ing might be okay, but reading files in parallel would
presumably be slower since you'd be jumping around on disk. (Although, with
SSDs, YMMV.) A nice aspect of queue-async is that you can specify the
parallelism in the queue constructor, so if you only want one task at a time,
it’s as simple as queue(1) rather than queue(). This is not a data dependency,
but an optimization based on the characteristics of the underlying system.

Anyway, I actually like promises in theory. I just feel like they might be a
bit heavy-weight and a lot of API surface area to solve this particular
problem. (For that matter, I created queue-async because I wanted something
even more minimal than Caolan’s async, and to avoid code transpilation as with
Tame.) Callbacks are surely the minimalist solution for serialized
asynchronous tasks, and for managing parallelization, I like being able to
exercise my preference.
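
For illustration, the parallelism limit can be sketched in a few lines (a hypothetical toy implementation in the spirit of queue-async, not the real module's code):

```javascript
// Toy task queue with a parallelism limit. Tasks are functions taking an
// error-first callback; queue(1) serializes them, queue(n) runs n at once.
function queue(parallelism) {
  var tasks = [], results = [], active = 0, started = 0, finished = 0;
  var firstError = null, callback = null;

  function start(i) {
    active++;
    tasks[i](function (err, value) {
      active--;
      finished++;
      if (err && !firstError) firstError = err;
      results[i] = value;
      if (finished === tasks.length && callback) callback(firstError, results);
      else maybeStart();
    });
  }

  function maybeStart() {
    // Never run more than `parallelism` tasks at once.
    while (active < parallelism && started < tasks.length) start(started++);
  }

  return {
    defer: function (fn) { tasks.push(fn); return this; },
    awaitAll: function (cb) { callback = cb; maybeStart(); return this; }
  };
}
```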

~~~
tel

        do f1 <- fsStat "file1.txt"
           f2 <- fsStat "file2.txt"
           f3 <- fsStat "file3.txt"
           let ratio = (size f1) / (sum $ map size [f1, f2, f3])
           print ratio 
    

Or, if you prefer a list

    
    
        do fs <- mapM fsStat files
           let ratio = (size . head $ fs) / (sum . map size $ fs)
           print ratio
    

And that seems to be one small example of why you may have already invented
monads. I've been loving the immediacy of JavaScript (modify it and
immediately see the result in the browser), but every time I'm not using
Haskell I miss it dearly.
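
The same computation has the same shape with promises (an illustrative sketch: `stat` is a made-up promise-returning stand-in for fs.stat, with fake sizes so it is self-contained):

```javascript
// Hypothetical promise-returning stand-in for fs.stat.
function stat(file) {
  var sizes = { "file1.txt": 100, "file2.txt": 200, "file3.txt": 300 };
  return Promise.resolve({ size: sizes[file] });
}

var files = ["file1.txt", "file2.txt", "file3.txt"];

// mapM fsStat files  ~  Promise.all(files.map(stat))
var ratio = Promise.all(files.map(stat)).then(function (stats) {
  var total = stats.reduce(function (sum, s) { return sum + s.size; }, 0);
  return stats[0].size / total;
});

ratio.then(function (r) { console.log("ratio:", r); });
```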

~~~
dchichkov
Please use more readable names in your code. Use of names like 'fs' in key
places makes it unreadable.

~~~
schrototo
Generally this is of course good advice, but in Haskell it is common practice
to use very short names (e.g. x, x', xs, ...) if the context is clear (which
it usually is due to small scope, clear function names, type signature, etc.).
This makes code much more concise and readable (it also makes it look very
"mathematical").

~~~
skrebbel
> _(it also makes it look very "mathematical")_

Has that ever been an advantage?

~~~
psionski
For people that like math, sure, why not. When implementing mathematical
concepts, if you squint at Haskell code you can see the original formulas,
which should make it easier for people used to this way of thinking.

EDIT:

I'm not implying it's useful just for programming "math stuff", after all,
everything can be reduced to a mathematical problem - including game
engines[1], web application frameworks[2], etc.

[1] <http://www.cse.unsw.edu.au/~pls/thesis/munc-thesis.pdf>

[2] <https://github.com/yesodweb/yesod>

~~~
vidarh
And it's probably one of _the_ most significant things limiting adoption of
Haskell.

~~~
egeozcan
Exactly. From my point of view, Haskell is the perfect language which
unfortunately comes with the worst naming conventions. (I generally develop in
C#, F# and JavaScript)

~~~
psionski
The naming conventions are fine, but they're _very_ different from C#, F# or
JS - it's basically the difference between reading English text and reading
formulas (i.e. compositions of weird letters and symbols).

~~~
psionski
I just want to add something - using Haskell's naming conventions in C# or JS
is a crime against humanity. If your function is longer than 5-6 lines and
wider than 10-15 characters `xs` or `<|>` are _not_ good names. I think you
could get away with it in F#, but it will look weird.

------
steveklabnik

      > If foo takes many arguments we add more arrows, i.e. foo :: a -> b -> c
      > means that foo takes two arguments of types a and b and returns something of
      > type c.
    

Nitpick alert: since everything is curried in Haskell, it's actually more like
`foo takes an argument a and returns a function that takes one b and returns
one c`.
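
In JavaScript, where currying has to be explicit, the distinction is visible directly (a toy sketch):

```javascript
// foo :: a -> b -> c, written as an explicitly curried JavaScript function:
// applying one argument returns a function awaiting the next.
function foo(a) {
  return function (b) {
    return a + b; // the final c
  };
}

var addTwo = foo(2);    // partial application: a function b -> c
console.log(addTwo(3));  // 5
console.log(foo(2)(3));  // 5, the analogue of Haskell's `foo 2 3`
```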

Other than that teeny thing, this article is awesome, and I fully agree.
Promises are an excellent thing, and while I'm just getting going with large
amounts of JavaScript, they seem far superior to me.

~~~
andrus
Really? Consider

    
    
        f :: a -> b -> a
        f a = g a
        
        g :: a -> b -> a
        g a _ = a
    

It doesn't seem right to say that g "returns a function that takes one b",
whereas you could say that about f.

~~~
steveklabnik
Yes. <http://www.haskell.org/haskellwiki/Currying>

~~~
andrus
Thank you for clarifying! I did not know that all functions in Haskell are
considered curried. My surprise stemmed in part from reading a bit about
"arity" from [1].

It's interesting how the theoretical model of Haskell--"all functions in
Haskell take just single arguments"--differs from implementation, where, for
functions of known arity, GHC in particular does not actually "follow the
currying story literally" [2].

[1]
<http://hackage.haskell.org/trac/ghc/wiki/Commentary/Rts/HaskellExecution/FunctionCalls#Genericapply>

[2] <http://community.haskell.org/~simonmar/papers/eval-apply.pdf>

~~~
steveklabnik
Any time. It's one of the more interesting parts of Haskell to me, so it's one
I always remember.

You're absolutely right to point out that implementations and theory often
differ; compilers often do tricky things behind the scenes.

------
crazygringo
This is an interesting perspective. But to me, even having spent a year on a
large node.js project, I just don't see how promises would have simplified
things at all.

If you have some crazy graph of dependencies, I can see how breaking out
promises could help simplify things. But I don't feel like that's a super-
common scenario.

The author says:

> _[Promises] are easier to think about precisely because we’ve delegated
> part of our thought process to the machine. When using the async module, our
> thought process is:_

> _A. The tasks in this program depend on each other like so,_

> _B. Therefore the operations must be ordered like so,_

> _C. Therefore let’s write code to express B._

> _Using graphs of dependent promises lets you skip step B altogether._

But in most cases, I don't _want_ to skip B. As a programmer, I generally find
myself _preferring_ to know what order things are happening in. At most, I'll
parallelize a few database calls or RPCs, but it's never that complex.
(And normal async-helper libraries work just fine.)

I swear I want to wrap my head around how this promises stuff could be useful
in everyday, "normal" webserver programming, but it just always feels like
over-abstraction to me, obfuscating what the code is actually doing, hindering
more than helping. I want to know, specifically, if one query is running
before another, or after another, or in parallel -- web programming is almost
entirely about side effects, at least in my experience, so these things often
matter an awful lot.

I'm still waiting for a real-world example of where promises help with the
kind of everyday webserver (or client) programming which the vast majority of
programmers actually do.

> _Getting the result out of a callback- or event-based function basically
> means “being in the right place at the right time”. If you bind your event
> listener after the result event has been fired, or you don’t have code in
> the right place in a callback, then tough luck, you missed the result. This
> sort of thing plagues people writing HTTP servers in Node. If you don’t get
> your control flow right, your program breaks._

I have literally never had this problem. I don't think it really plagues
people writing HTTP servers. I mean, you really don't know what you're doing
if you try to bind your event listener after a callback has fired. Remember,
callbacks only ever fire AFTER your current imperative code has finished
executing, and you've returned control to node.js.

~~~
TheZenPsycho
The point is that promises free you from wanting or needing to know the order
that things happen in. I hear you saying you're wary of promises because they
would get in the way of your ability to know that order. But the truth is that
once you embrace them, that need becomes unimportant.

The idea that webservers are "all about side effects" gives me a chill. The
whole architecture concept of HTTP is _no side effects_ , so to claim that
it's all about side effects seems odd. It should only be the case for POST PUT
or DELETE methods, and only in very specific ways.

~~~
rtfeldman
> The idea that webservers are "all about side effects" gives me a chill. The
> whole architecture concept of HTTP is no side effects, so to claim that it's
> all about side effects seems odd. It should only be the case for POST PUT or
> DELETE methods, and only in very specific ways.

There's nothing incongruous about that. It _is_ the case that side effects
should only happen on POST, PUT, and DELETE methods (and the like), but almost
all webservers are written because of a need to use these.

If your webserver is all GETs and HEADs, then it is either trivial and you
would have used someone else's instead of writing your own, or its sole
purpose is to repackage and serve existing data from other sources - a rare
use case among all webservers.

If you were to take an inventory of all the webservers out there, you would
doubtless find that almost all of them exist in large part in order to create
side effects.

~~~
crazygringo
And I'm thinking about complex sites, not simple serve-up-a-page-and-
that's-it.

A cache gets refreshed or added to. A user's viewcount is incremented. A new
statistic is calculated and then stored. An item is marked as viewed. And
these are all just on a GET.

On complex sites with a logged-in user, side effects are pretty much the norm.

------
ww520
I feel this is twisting the meaning of functional programming. Excel is not
functional; it is declarative. You declare the relationships between the cells
and Excel uses those to propagate changes. Just like a makefile is not
functional but declarative: the declared dependencies are enforced to produce
actions. SQL is another example of a declarative language, and it is nowhere
near functional.

~~~
erichocean
_Excel is not functional. It is declarative._

Alan Kay (yes, _that_ Alan Kay[1], the Turing Award winner) formalized
spreadsheets as a limited form of first-order functional programming.[0]

[0] <http://en.wikipedia.org/wiki/Spreadsheet#Values>

[1] <http://en.wikipedia.org/wiki/Alan_Kay>

~~~
gruseom
I'm afraid you've misunderstood that Wikipedia page. It attributes the phrase
"first-order functional programming" to authors other than Kay. All it
attributes to Kay is the phrase "value rule".

Kay's interest in spreadsheets wasn't about functional programming, it was
about interactive and dynamic computation. I have a pdf of the 1984 Scientific
American article that Wikipedia is quoting from. It does include the phrase
"value rule"—by which he simply meant what we would call a spreadsheet
formula—but I'm pretty sure it makes no argument about functional programming
(it's all images so I can't search to be sure). If you'd like a copy, email
me. It's a pretty neat article, ahead of its time as one would expect from
Alan Kay.

~~~
erichocean
I've already read the Alan Kay article you mentioned (recently, in fact), and
that's not what I took away from it.

I guess we'll agree to disagree. I don't think someone needs to use the phrase
"first-order functional programming" when they give the very definition of it,
which is what Wikipedia does: summarize Kay's argument.

I do agree the article itself was quite interesting, and certainly ahead of
its time.

~~~
gruseom
What's Kay's argument, then? And how does it relate to FP? I'm curious.

I was making a textual point about the Wikipedia article. Its use of the
phrase "first-order functional programming" is hyperlinked to
<http://journals.cambridge.org/action/displayAbstract?aid=72731>.
It's not citing Alan Kay.

~~~
erichocean
I guess I wasn't clear, sorry. What Alan Kay meant by the "spreadsheet value
rule" and what the phrase "a limited form of first-order functional
programming" means are semantically equivalent; they are the same thing.

I have no idea if Alan Kay ever used the latter phrase, but it was easier to
use that phrase here on HN than Alan Kay's made-up phrase, which would have
been difficult to understand without the content of the article explaining it.

~~~
gruseom
Ah, gotcha. I agree with you that spreadsheet formulas are a limited form of
first-order functional programming. But the memory model with which they are
coupled is just as important (I have in mind the grid addressing system and
dataflow semantics) and this does not fit as nicely into the FP paradigm. But
I'm repeating what I said in other comments.

~~~
erichocean
_But the memory model with which they are coupled is just as important (I have
in mind the grid addressing system and dataflow semantics)_

Totally agree. My startup is using hierarchical grids as our core datatypes
for just that reason (we also support Function cells that are used as values).

"Naming" is one of the core problems in computer science, and
grids/spreadsheets elegantly solve that problem for many ad hoc use cases,
where functional programming (in all its forms) does not.

------
ricardobeat
Ryan Dahl in February 2010, when Promises were removed from core:

    
    
        Because many people (myself included) only want a low-level interface
        to file system operations that does not necessitate creating an
        object, while many other people want something like promises but
        different in one way or another. So instead of promises we'll use last
        argument callbacks and consign the task of building better abstraction
        layers to user libraries.
    

Those libraries do exist. There still isn't a canonical Promises
specification. Node trying to force promises onto the ecosystem early on
would've been like applying the brakes, slowing adoption enormously.

~~~
mjackson
> There still isn't a canonical Promises specification.

Yes, there is: <https://github.com/promises-aplus/promises-spec>

~~~
ricardobeat
Promises/A+ surfaced less than 6 months ago, and is not implemented by most
widely-used frameworks. Still a bit far from canonical.

~~~
tlrobinson
Promises/A+ is just a more fully specified version of Promises/A, which has
been around for about 4 years.

------
SeanDav
I don't agree that there is any fundamental difference in functionality
between callbacks and promises.

Promises don't somehow magically make asynchronous code easy to write while
leaving callbacks out in the cold. They have very similar strengths and
weaknesses and I didn't find any of the OP's arguments compelling.

In fact, if I had to choose, I would take the opposite view and say callbacks
are neater, cleaner and more consistent than promises.

~~~
tomp
Promises are values, and you can use them to compute things. Callbacks are
procedures, and they don't compose in non-trivial ways (you can chain
callbacks very simply, but that's basically it).
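
A small sketch of what "promises are values" means in practice (illustrative, using the built-in Promise):

```javascript
// Promises can be stored, passed around, and combined after the fact,
// which a bare callback cannot be.
var price = Promise.resolve(10);    // e.g. the result of an async price lookup
var shipping = Promise.resolve(2);  // e.g. the result of another async call

// Composing two independent async values into a third value:
var total = Promise.all([price, shipping]).then(function (pair) {
  return pair[0] + pair[1];
});

// `total` can now be handed to any number of consumers, at any time:
total.then(function (t) { console.log(t); }); // 12
```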

------
spullara
It is hard for me to fathom the negative feelings towards Promises. They are
quite clearly a great way to perform async programming in a civilized way (see
Twitter's Future/Promise in Finagle on github). JDK 8 will even have the
equivalent in CompletableFuture. The only thing better is to combine Promises
with coroutines for a more linear programming style like in Flow:
<http://www.foundationdb.com/white-papers/flow/>

~~~
dschobel
If you want this on the JVM today and can abide Scala, see:
<http://doc.akka.io/docs/akka/snapshot/scala/dataflow.html>

~~~
spullara
The Twitter solution I mention above is in Scala. That said, I have one that
also works in JDK 6/7 in a branch of
<https://github.com/spullara/java-future-jdk8>.

------
graue
This code doesn't look right to me:

    
    
        // list :: [Promise a] -> Promise [a]
        var list = function(promises) {
          var listPromise = new Promise();
          for (var k in listPromise) promises[k] = listPromise[k];
    

Perhaps the assignment is supposed to be the other way around?

    
    
          for (var k in promises) listPromise[k] = promises[k];

~~~
pyrtsa
I asked the same question on Twitter. Turns out James was actually augmenting
(i.e. modifying) the array object `promises` to behave as a promise itself. I
don't think this was a particularly beautiful way of doing it, but it seems to
work, now that I think of it.

Promise libraries, like RSVP.js [1] he referred to, typically implement a way
to construct a promise with a depends-on-many relationship, as a function
possibly called `all([p1, p2, ...])` (with the same type signature as for
`list`), `and(p1, p2, ...)` or something similar.

IMO, defining the `list` function that way would've been clearer to the reader
and more FP'ish, treating the `promises` argument as a value and not a mutable
object.

[1]: <https://github.com/tildeio/rsvp.js/blob/master/lib/rsvp/all.js>
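
For comparison, a non-mutating version of the article's `list` with the same type, `list :: [Promise a] -> Promise [a]`, can be sketched against the Promises/A+ `then` (illustrative, using the built-in Promise constructor):

```javascript
// Resolve with an array of all results, in input order, without
// augmenting or otherwise mutating the input array.
var list = function (promises) {
  return new Promise(function (resolve, reject) {
    var results = [], remaining = promises.length;
    if (remaining === 0) return resolve(results);
    promises.forEach(function (promise, i) {
      promise.then(function (value) {
        results[i] = value;
        if (--remaining === 0) resolve(results);
      }, reject); // first rejection rejects the whole list
    });
  });
};
```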

~~~
graue
A day later I looked at this again and I'm a little closer to understanding.

    
    
          var listPromise = new Promise();
    

creates an object that, being a Promise object, has certain methods and
internal state, derived from the prototype of Promise.

    
    
          for (var k in listPromise) promises[k] = listPromise[k];
    

This confused me because I thought "k" was a stand-in for a numeric index,
e.g. that it was doing promises[0] = listPromise[0], promises[1] =
listPromise[1], etc. That is not what's going on. Rather, "k" refers to
attributes and/or methods that objects of the Promise class have by default.
It's copying those onto `promises` — the array `promises` itself, not the
individual items `promises[i]`, which keep their existing methods and
attributes.
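
A tiny demonstration of that behavior (illustrative names): `for (var k in obj)` visits enumerable properties, including inherited ones, so methods assigned on a prototype get copied too.

```javascript
// A stand-in "class" with one own property and one prototype method.
function Greeter() { this.name = "promise"; }
Greeter.prototype.greet = function () { return "hello from " + this.name; };

var source = new Greeter();
var target = [];

// The same augmentation trick: copy source's enumerable properties
// (own and inherited) onto an array.
for (var k in source) target[k] = source[k];

console.log(target.greet()); // "hello from promise"
```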

Coming from a Python background, I think I would have found this more obvious
if the variable "k" were instead called "method" or "attr". If it was `for
(var method in listPromise)` it'd be much clearer what's going on, whereas
single-letter variables like i, j, and k are, to me, stand-ins for integers.

It was also confusing, as you said, that the function uses destructive update
rather than treating the input as a value. James did mention this ("augmenting
the list with promise methods"), but it's still unexpected, especially when
the function is preceded by a Haskell type signature.

The reason I only say I'm closer to understanding, and not quite there yet, is
I'm not sure what it means to do `new Promise()` or what is being copied over
in the above for-loop. I tried James's code with a Promises/A+ implementation,
rsvp.js (<https://github.com/tildeio/rsvp.js>), but it won't let me do `new
Promise()` because it works differently:

    
    
        > var promise = new RSVP.Promise();
        TypeError: You must pass a resolver function as the sole argument to the promise constructor
    

Per an example in RSVP.js's readme, it's expecting this:

    
    
        var promise = new RSVP.Promise(function(resolve, reject){
            // set up a callback that calls either resolve(...)
            // or reject(...)
        });
    

If James is using a specific promises implementation in his code, it appears
to be the one he defined in a past blog post
(<http://blog.jcoglan.com/2011/03/11/promises-are-the-monad-of-asynchronous-programming/>),
which in turn builds on a module from his JS.Class library
(<http://jsclass.jcoglan.com/deferrable.html>), which I hadn't heard of
before.

I still think this is a great article, but that code snippet has proven to be
quite a puzzle.

------
eldude
Unfortunately, in practice promises end up making your code more difficult to
reason about by adding cruft and _unnecessary_ abstraction. They're also very
limiting from a control-flow perspective.

This is especially noticeable when you have branching behavior / want to
resolve a promise early[1]:

Branching with promises:

    
    
      function doTask(task, callback) {
        return Q.ncall(task.step1, task)
        .then(function(result1) {
          if (result1) {
            return result1;
          } else {
            return continueTasks(task);
          }
        })
        .nodeify(callback)
      }
    
      function continueTasks(task) {
        return Q.ncall(task.step2, task)
        .then(function(result2) {
          return Q.ncall(task.step3, task);
        })
      }
    

As opposed to with stepdown[2]:

    
    
      function doTask(task, callback) {
        $$([
          $$.stepCall(task.step1),
          function($, result1) {
            if (result1) return $.end(null, result1)
          },
          $$.stepCall(task.step2),
          $$.stepCall(task.step3)
        ], callback)
      }
    

I would really love for a post to include a non-trivial problem implemented
with promises, vanilla callbacks, and async (I'd be happy to add a stepdown
equivalent), and let people see for themselves how, in my opinion, promises
make code harder to read.

[1] <http://stackoverflow.com/questions/11302271/how-to-properly-abort-a-node-js-promise-chain-using-q>

[2] <https://github.com/Schoonology/stepdown> (docs need updating, view tests
for documentation)
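
For what it's worth, the early-exit branching above can also be expressed with plain promises (a sketch using the standard Promise API; `step1`..`step3` are hypothetical async steps standing in for the task's):

```javascript
function doTask(step1, step2, step3) {
  return step1().then(function (result1) {
    // Branching is an ordinary `if`: returning a plain value resolves the
    // chain right away; returning a promise keeps it going.
    if (result1) return result1;
    return step2().then(step3);
  });
}

// Hypothetical steps for demonstration:
var cached = function () { return Promise.resolve("cached"); };
var miss = function () { return Promise.resolve(null); };
var compute = function () { return Promise.resolve("computed"); };
var finish = function () { return Promise.resolve("finished"); };

doTask(cached, compute, finish).then(function (r) { console.log(r); }); // "cached"
doTask(miss, compute, finish).then(function (r) { console.log(r); });   // "finished"
```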

------
just2n
Promises are just tools for managing a list of callbacks with less
boilerplate. I wouldn't call one imperative and the other functional. Both are
functional. You might dislike callback patterns, but through one of the
beautiful parts of JS, you can trivially wrap any callback-oriented API you
want and have it become a promise based one. I've done this before when I had
a very complex dependency graph at the start of a program and a few API calls
were callback related. It looks something like this:

    
    
        SomeClass.prototype.someActionPromise = function(){
            var deferred = makeADeferred();
            SomeClass.prototype.someAction.call(this, function(err){
                err ? deferred.reject(err) : deferred.resolve();
            });
            return deferred.promise();
        };
    

Now you have a promise-based version that makes your code a little cleaner and
easier to read.
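
A generic form of that wrapper can be sketched once and reused (assumes the standard error-first callback convention and the built-in Promise constructor; `readConfig` is a made-up stand-in API):

```javascript
// Turn fn(args..., callback) into a function returning a promise.
function promisify(fn) {
  return function () {
    var self = this, args = Array.prototype.slice.call(arguments);
    return new Promise(function (resolve, reject) {
      fn.apply(self, args.concat(function (err, value) {
        if (err) reject(err); else resolve(value);
      }));
    });
  };
}

// Usage: wrap a callback-style API once, use promises everywhere after.
function readConfig(name, cb) { cb(null, name + ": ok"); }
var readConfigPromise = promisify(readConfig);
readConfigPromise("db").then(function (v) { console.log(v); }); // "db: ok"
```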

~~~
naradaellis
The author "promisify"s a callback-based API in the article - which is worth a
read by the way.

I'm interested in your opinion on his argument for why callback APIs are
imperative, because I think he makes a very good point, supports it with a
solid argument, and you haven't offered any rebuttal.

~~~
just2n
Interesting. As you pointed out, I hadn't read the article but was rather
replying to other comments.

After reading it, I think I have to agree that I had never thought about it
that way. It makes a lot more sense that a promise is just a declaration of
some unit of work, and when you can use a promise like any other data, you
aren't just giving imperative commands, but rather describing work to be done
and using that as a fundamental part of your code, which is why it drastically
simplifies async programming (the relation of promises to monads is quite
nice, too). Definitely a good article, thanks for kicking me :).

------
niggler
"the decision, made quite early in its life, to prefer callback-based APIs to
promise-based ones."

Rewind to the point when nodejs was being designed. In that world, in the
context of javascript, callbacks were the only real pattern that existed. XHR?
callback. Doing something in the future? callback.

If you imagine node trying to leverage the javascript ecosystem, callbacks
were a no-brainer.

~~~
lucian1900
Various incarnations of the Promise monad have existed for quite a while, even
in JS. The oldest one I can think of is MochiKit's Deferred, inspired by
Twisted's. That one worked (and still does) seamlessly with any callback code.

~~~
abecedarius
(Twisted was inspired by the work I just pointed to in my answer. Not to take
away from yours -- I wasn't familiar with MochiKit.)

~~~
lucian1900
Of course, using monads for asynchronous tasks is an old trick and E has
always been ahead of its time (like many other languages ...)

------
ilaksh
Promises seem cool, but if you don't like callbacks much, take a look at:
CoffeeScript with two-space indentation, named functions instead of inline
ones, the async module, IcedCoffeeScript with await and defer, and LiveScript
with backcalls. All of those are more useful and straightforward than
promises.

------
richo
So, calling magic subroutines is more functional than passing about first
class functions?

~~~
naradaellis
Functional programming is not just about first class functions and I disagree
with the idea that using first class functions means you are doing "functional
programming".

See the second paragraph of the article for the author's take on this.
Specifically, the concept of values is very important.

------
arianvanp
This kind of programming certainly is promising </pun>

It's one of the reasons why I started learning Haskell.

------
pk11
There is also a third approach for those who want to write composable,
functional JavaScript:
<http://dfellis.github.com/queue-flow/2012/09/21/tutorial/>

------
jQueryIsAwesome
All that code and such a big abstraction for the first example, when it could
have been done like this:

    
    
        var result = [];
        paths.forEach(function (file, i){
            fs.stat(file, function (err, data){
                result.push(data);
                if (i === 0) {
                    // Use stat size
                }
                if (result.length === paths.length) {
                    // Use the stats
                }
            });
        });
    

Fairly understandable, more efficient, and without introducing logic patterns
foreign to many. It also meets his requirements (it is parallel and we only
hit every file once).

~~~
asolove
To be fair, you haven't handled the complete case. What if one of the items
fails? You need to handle the error, but only if it's the first error, and
make sure to tell all later-called callbacks that they're too late and we have
already failed. Except, if we got an error on an early callback but the 0th
item comes back later, we need to do whatever we were going to do with that
one piece of data.

    
    
        var result = [];
        var hasFailed = false;
        paths.forEach(function (file, i){
            fs.stat(file, function (err, data){
                // if previous callback failed, give up.
                // unless this is the first item, which we still need
                if(hasFailed && i !== 0) return;
                if(err) {
                    hasFailed = true;
                    // do something with the error 
                    if (i === 0) // do something special for error on first item.
                    return;
                }
                result.push(data);
                if (i === 0) {
                    // Use stat size
                    // remember we might have already failed, in which case don't add the first item to the general result
                    if(hasFailed) return;
                }
                if (result.length === paths.length) {
                    // Use the stats
                }
            });
        });
    

That's almost certainly still not close to right, which just illustrates the
basic problem: without either promises or something like async.js, you're
reimplementing control flow by yourself. You can easily start with perfectly-
nice-looking code that balloons into incomprehensibility as soon as you start
caring about error cases, with two statements, perhaps dozens of lines apart,
preserving some invariant that is not obvious to someone editing your code in
the future. Even yourself.
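
The same fan-out with promises gets first-error handling for free instead of hand-rolled flags (a sketch; `statAll` is illustrative and `stat` is a made-up stand-in for a promisified fs.stat):

```javascript
// Hypothetical promise-returning stat; rejects for a missing file.
function stat(file) {
  if (file === "missing.txt") return Promise.reject(new Error("ENOENT"));
  return Promise.resolve({ size: file.length });
}

function statAll(paths) {
  // Promise.all rejects with the first error; no hasFailed bookkeeping.
  return Promise.all(paths.map(stat));
}

statAll(["a.txt", "b.txt"]).then(function (stats) {
  console.log(stats.length); // 2
});
statAll(["a.txt", "missing.txt"]).catch(function (err) {
  console.log(err.message); // "ENOENT"
});
```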

~~~
jQueryIsAwesome
The example I'm talking about is the one where he doesn't handle errors
either, this one: <http://pastebin.com/98CarwzU>.

And your example is disingenuous too: why do we need to add so much logic
instead of just collecting the errors, if you want to handle them anyway?

    
    
        result.push(err || data);
    

So you don't need to pass around a variable called "hasFailed"; one line can
be enough:

    
    
        var failed = !result.every(Buffer.isBuffer);

