Callbacks are imperative, promises are functional (jcoglan.com)
353 points by timcraft on Mar 30, 2013 | 149 comments



James does a good job of articulating why promises are such a useful abstraction, especially in JavaScript land. I've been working on a project recently that relies heavily on coordinating many asynchronously-populated values, and I don't even want to think about what the code would look like if we were wrangling callbacks manually.

We actually extracted our promises implementation from the work we've been doing, and released it as RSVP.js[1]. While other JavaScript promises libraries are great, we specifically designed RSVP.js to be a lightweight primitive that can be embedded and used by other libraries. Effectively, it implements only what's needed to pass the Promises/A+ spec[2]. For a comparison of RSVP.js with other promises-based JavaScript asynchrony libraries, see this previous discussion on Hacker News[3].

1: https://github.com/tildeio/rsvp.js

2: https://github.com/promises-aplus/promises-spec

3: https://news.ycombinator.com/item?id=4661620
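
For a flavor of the API, here is a minimal sketch in the Promises/A+ style. This assumes RSVP's resolver-style constructor, and `statFile` is just an illustrative wrapper, not part of any library:

    var RSVP = require("rsvp"),
        fs = require("fs");

    // wrap a node-style callback API in a promise
    function statFile(path) {
      return new RSVP.Promise(function(resolve, reject) {
        fs.stat(path, function(error, stats) {
          if (error) { reject(error); } else { resolve(stats); }
        });
      });
    }

    statFile("file1.txt").then(function(stats) {
      console.log(stats.size);
    }, function(error) {
      console.error(error);
    });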


Not to focus too myopically on the given example, but I can’t help but wonder why it’s a requirement that the first file be handled specially. A less contrived example would make the argument more convincing.

If I wanted to compute the size of one file relative to a set, I’d probably do something like this:

  queue()
      .defer(fs.stat, "file1.txt")
      .defer(fs.stat, "file2.txt")
      .defer(fs.stat, "file3.txt")
      .awaitAll(function(error, stats) {
        if (error) throw error;
        console.log(stats[0].size / stats.reduce(function(p, v) { return p + v.size; }, 0));
      });
Or, if you prefer a list:

  var q = queue();
  files.forEach(function(f) { q.defer(fs.stat, f); });
  q.awaitAll(…); // as before
This uses my (shameless plug) queue-async module, 419 bytes minified and gzipped: https://github.com/mbostock/queue

A related question is whether you actually want to parallelize access to the file system. Stat'ing might be okay, but reading files in parallel would presumably be slower since you'd be jumping around on disk. (Although, with SSDs, YMMV.) A nice aspect of queue-async is that you can specify the parallelism in the queue constructor, so if you only want one task at a time, it’s as simple as queue(1) rather than queue(). This is not a data dependency, but an optimization based on the characteristics of the underlying system.
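
A sketch of that knob, reusing the queue-async API shown above:

    var fs = require("fs"),
        queue = require("queue-async");

    // queue(1) serializes the deferred tasks; plain queue() runs them
    // with unlimited concurrency
    queue(1)
        .defer(fs.stat, "file1.txt")
        .defer(fs.stat, "file2.txt")
        .awaitAll(function(error, stats) {
          if (error) throw error;
          console.log(stats.map(function(s) { return s.size; }));
        });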

Anyway, I actually like promises in theory. I just feel like they might be a bit heavy-weight and a lot of API surface area to solve this particular problem. (For that matter, I created queue-async because I wanted something even more minimal than Caolan’s async, and to avoid code transpilation as with Tame.) Callbacks are surely the minimalist solution for serialized asynchronous tasks, and for managing parallelization, I like being able to exercise my preference.


>> but reading files in parallel would presumably be slower since you'd be jumping around on disk

There is a lot of engineering that goes into making parallel reads go fast. Some combination of the file system and disk controller will probably be smart enough to recognize the opportunity for sequential reads and execute them as such if possible.

This is not always true, and it does not undermine the rest of what you have written. I just think it's interesting to keep in mind that operating systems implement a lot of helpful machinery that user-level programmers forget about.


Stat just reads metadata that is (hopefully) cached in memory. Linux does not have any async APIs for reading metadata anyway. Examples are always a bit contrived; it doesn't matter.


    do f1 <- fsStat "file1.txt"
       f2 <- fsStat "file2.txt"
       f3 <- fsStat "file3.txt"
       let ratio = (size f1) / (sum $ map size [f1, f2, f3])
       print ratio 
Or, if you prefer a list:

    do fs <- mapM fsStat files
       let ratio = (size . head $ fs) / (sum . map size $ fs)
       print ratio
And that seems to be one small example of why you may have already invented monads. I've been loving the immediacy of JavaScript (modify and immediately see it in the browser), but every time I'm not using Haskell I miss it dearly.
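
For comparison, a rough JavaScript analogue of the list version, assuming a hypothetical promise-returning `fsStat` wrapper and a Promises/A+ library providing `all`:

    // all() turns an array of promises into a promise of an array,
    // playing roughly the role mapM plays above
    all(files.map(fsStat)).then(function(stats) {
      var total = stats.reduce(function(sum, s) { return sum + s.size; }, 0);
      console.log(stats[0].size / total);
    });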


Have to admit that has also been my reaction to some of these javascript async frameworks based on promises or deferreds. Congratulations, you've reimplemented a quirky ad-hoc variant on the continuation and error monads.

Perhaps people don't spot the link as easily because monads are usually explained in terms of a type system, and javascript is untyped? (Or perhaps just because Monad is a very abstract abstraction :)


Regarding your latter point, I don't think monads are even that abstract. "Monad" just happens to be something out of category theory, so it has a weird-sounding mathematical name that you MIGHT guess has something to do with monoids (if you know what monoids are), and people think it has to be something complicated when in practice it's just a nice unified interface for glue code.
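
To make the "glue code" point concrete, here is a hedged sketch of the correspondence; `getUser` and `getPosts` are hypothetical promise-returning functions:

    // Haskell:   getUser uid >>= getPosts
    // promises:  .then acts like monadic bind when its callback
    //            returns another promise
    getUser(uid)
      .then(function(user) { return getPosts(user); })
      .then(function(posts) { console.log(posts.length); });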


There's the async component as well, but I think `ErrorT e Par a` covers it.


Please use more readable names in your code. Use of names like 'fs' in key places makes it unreadable.


Generally this is of course good advice, but in Haskell it is common practice to use very short names (e.g. x, x', xs, ...) if the context is clear (which it usually is due to small scope, clear function names, type signature, etc.). This makes code much more concise and readable (it also makes it look very "mathematical").


> (it also makes it look very "mathematical")

Has that ever been an advantage?


For people that like math, sure, why not. When implementing mathematical concepts, if you squint at Haskell code you can see the original formulas, which should make it easier for people used to this way of thinking.

EDIT:

I'm not implying it's useful just for programming "math stuff"; after all, everything can be reduced to a mathematical problem - including game engines[1], web application frameworks[2], etc.

[1] http://www.cse.unsw.edu.au/~pls/thesis/munc-thesis.pdf

[2] https://github.com/yesodweb/yesod


And it's probably one of the most significant things limiting adoption of Haskell.


Exactly. From my point of view, Haskell is the perfect language which unfortunately comes with the worst naming conventions. (I generally develop in C#, F# and JavaScript)


The naming conventions are fine, but they're very different from C#, F# or JS - it's basically the difference between reading English text and reading formulas (i.e. compositions of weird letters and symbols).


I just want to add something - using Haskell's naming conventions in C# or JS is a crime against humanity. If your function is longer than 5-6 lines and wider than 10-15 characters, `xs` or `<|>` are not good names. I think you could get away with it in F#, but it will look weird.


Indeed.


The convention for Haskell is to keep the active scope of variables very small. Any variable with an active scope larger than maybe 3 lines gets a longer name. Since these examples were hardly longer than that, I feel quite justified with short names.

In Haskell, if you see a short name, look up and down 3 lines for the definition. If you can't find it then complain.


I think you are right, and it is a convention in the functional world. People are using (and worse, reusing in close proximity!) meaningless names like that. And I think these people have zero regard for anyone who is reading their code.

Well... more power to Python, and a culture that embraces 'what you see is what you get' and super-readable code.


There is actually an interesting technical reason for having short names in generic Haskell functions. Because of parametricity, the behavior of the function doesn't depend on what the values actually are. The shortness of the names really is meant to convey "don't think about what this is doing, because it's not important for this function". In the traditional example for map,

  map f [] = []
  map f (x:xs) = f x : map f xs
You're supposed to infer from the short function names that f and x could be anything. The only important bit is that you can apply one argument to f (so, for example, f could take two parameters, and then map is just doing a single partial application). In that context, x and xs are actually a better convention than "first" and "rest", because they indicate the adherence to the type system. The naming here is saying that x is of the type of the elements of xs, and that this is the only important information for map. This seriously helps in more complicated functions like zip, etc.


I'm not so sure that brevity and adherence to that convention improve readability. Of course f, x, xs is much much better than 'first' and 'rest', or 'a', 'b', 'c', but something like 'func' and 'iterable' gives more context. And it frees one's attention for more important things than looking up and down the code.

Compare:

    map f [] = []
    map f (x:xs) = f x : map f xs
With:

    map f xs = [f x | x <- xs]
Or even better, in Python:

    map = lambda func, iterable: [func(x) for x in iterable]
Which one is more readable?

The first one requires looking up and down in order to understand what is going on. The second one is better: context is limited to one line. And the last one doesn't require you to remember context at all.


I find

  map f xs = [f x | x <- xs]
the most readable, but given that list comprehension is basically a map and a filter joined together, that definition is kind of cheating.

I find the python version hardest to read (even though it's also "cheating"), which is largely because both the identifiers and the control constructs are alphabetic.

  [func(x) for x in iterable]
I have to read the words to figure out that for/in are the keywords and x/iterable are identifiers. func(x) is at least pretty obviously a function application. I'm glad it's not

  [call func x for x in iterable]
If you compare this to

  [f x | x <- iterable]
The parentheses-less function application might take some getting used to, but then it's pretty easy in my opinion. The | nicely divides it into two parts. A function application on the left, and a "take each element x out of iterable" on the right.

The only other thing is that because "iterable" is such a long word I expect it to be an identifier imported from a library or somewhere else in the program, and certainly not earlier in the line.

In summary, I think a lot of what we find "readable" depends on what we are used to.


>> In summary, I think a lot of what we find "readable" depends on what we are used to.

I think a general rule is that source code readability is inversely proportional to the amount of context information that one needs to remember in order to interpret and understand the code.

If you can look at the code at any place (and at any scale, ranging from a single line to the whole call graph) and understand without any context what is happening at that particular place, the code is readable. Conversely, if you need to know a lot about the context of each particular line of code, the code is unreadable.


We'll have to agree to disagree. I think long variable names for short-lived variables decrease readability. Oftentimes these "points" are just used to glue functional pipelines together and have little-to-no intrinsic meaning. The true documentation comes from the types and is thus more trustworthy.


Oftentimes there is ML code in which types are implied and variable names follow the typical functional programming (a, b, c, d, e) style.

In C or C++ this was never the case, because type information was never implied (until recently, when auto was introduced). And in dynamic languages, like Python, this is also almost never the case, because good mainstream developers use object names consistently.

In the functional world however, mainstream (if there is any mainstream, as I often see each developer working in his/her own unique style) folks just say phrases like 'true documentation comes from the types' and write their recursions freely, with no regard for the reader.

So yes. We'll have to agree to disagree.


Doesn't that have the problem that it won't get around to computing the ratio until it needs to be printed to the screen?


Depends on the semantics of the monad. If you want to control that kind of thing, you can use Strategies from Control.Parallel.Strategies. If you just want to force things, then abstract-par [1] and monad-par [2] have some pretty convenient semantics.

[1] http://hackage.haskell.org/package/abstract-par/

[2] http://hackage.haskell.org/package/monad-par/


> I just feel like they might be a bit heavy-weight and a lot of API surface area to solve this particular problem.

The surface area is `.then()`


Lack of standardization makes people go crazy. If another programming language had an API this simple, people would never think it is heavyweight. But because you're always rolling your own, people get obsessed with the smallest things in js-land.


These kinds of problems can easily be solved with promises too. It would be even simpler if `fs.stat` returned a promise, and there are promise libraries that do that. Promises is a small library I use (probably about the same number of bytes as your library) as I transition my code from callbacks to promises.

      var queue = new Promises;
      fs.stat("file1.txt", queue.cb());
      fs.stat("file2.txt", queue.cb());
      fs.stat("file2.txt", queue.cb());
      queue.all()
        .then()
        .fail();
But comparing how promises solve the same flow as callbacks misses the point. Here's an example where an action is taken when two events fire (promises shine here):

      // assuming promise1 and promise2 were created earlier (e.g. with a library like Vow)
      pub.on('foo', function() {
        promise1.fulfill();
      });
      pub.on('bar', function() {
        promise2.fulfill();
      });
      Vow.all([promise1, promise2]).then(...).fail(...);


It looks like you're able to return one of those queues from a function and allow some other code to call .await(). Being able to return something is a useful feature of promises too; seems like there might be more overlap there.


  > If foo takes many arguments we add more arrows, i.e. foo :: a -> b -> c
  > means that foo takes two arguments of types a and b and returns something of
  > type c.
Nitpick alert: since everything is curried in Haskell, it's actually more like `foo takes an argument of type a and returns a function that takes one b and returns one c`.
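
In JavaScript terms, a sketch of that reading of `foo :: a -> b -> c` (foo here is purely illustrative):

    // each arrow is one function: supply an a, get back a function b -> c
    function foo(a) {
      return function(b) {
        return a + b; // some value of "type c"
      };
    }

    var foo1 = foo(1); // partial application: a function awaiting its b
    foo1(2);           // 3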

Other than that teeny thing, this article is awesome, and I fully agree. Promises are an excellent thing, and while I'm just getting going with large amounts of JavaScript, they seem far superior to me.


> Nitpick alert: since everything is curried in Haskell, it's actually more like `foo takes an argument a and returns a function that takes one b and returns one c`.

Whilst that's true, and is important to the way in which Haskell operates, people normally talk about functions as taking multiple arguments (at least, the people at London HUG, most of whom are better Haskellers than I, seem to).

Even ghci refers to the "second argument":

    Couldn't match expected type `Int' with actual type `Char'
    In the second argument of `foo', namely 'b'
    In the expression: foo 1 'b'
    In an equation for `it': it = foo 1 'b'


Of course, hence the 'nitpick alert' and admission that it doesn't really affect anything in the text, just a detail about how things work.

Often, conversations are not held to absolute rigor. Not every offhand statement is absolutely consistent.


Really? Consider

    f :: a -> b -> a
    f a = g a
    
    g :: a -> b -> a
    g a _ = a
It doesn't seem right to say that g "returns a function that takes one b", whereas you could say that about f.



Thank you for clarifying! I did not know that all functions in Haskell are considered curried. My surprise stemmed in part from reading a bit about "arity" from [1].

It's interesting how the theoretical model of Haskell--"all functions in Haskell take just single arguments"--differs from implementation, where, for functions of known arity, GHC in particular does not actually "follow the currying story literally" [2].

[1] http://hackage.haskell.org/trac/ghc/wiki/Commentary/Rts/Hask...

[2] http://community.haskell.org/~simonmar/papers/eval-apply.pdf


Any time. It's one of the more interesting parts of Haskell to me, so it's one I always remember.

You're absolutely right to point out that implementations and theory often differ; compilers often do tricky things behind the scenes.


(g a) is valid Haskell and it is equal to a constant function that returns a. In fact, g is the Prelude function 'const'.


This is an interesting perspective. But to me, even having spent a year on a large node.js project, I just don't see how promises would have simplified things at all.

If you have some crazy graph of dependencies, I can see how breaking out promises could help simplify things. But I don't feel like that's a super-common scenario.

The author says:

> [Promises] are easier to think about precisely because we’ve delegated part of our thought process to the machine. When using the async module, our thought process is:

> A. The tasks in this program depend on each other like so,

> B. Therefore the operations must be ordered like so,

> C. Therefore let’s write code to express B.

> Using graphs of dependent promises lets you skip step B altogether.

But in most cases, I don't want to skip B. As a programmer, I generally find myself preferring to know what order things are happening in. At most, I'll parallelize a few database calls or RPCs, but it's never that complex. (And normal async-helper libraries work just fine.)

I swear I want to wrap my head around how this promises stuff could be useful in everyday, "normal" webserver programming, but it just always feels like over-abstraction to me, obfuscating what the code is actually doing, hindering more than helping. I want to know, specifically, if one query is running before another, or after another, or in parallel -- web programming is almost entirely about side effects, at least in my experience, so these things often matter an awful lot.

I'm still waiting for a real-world example of where promises help with the kind of everyday webserver (or client) programming which the vast majority of programmers actually do.

> Getting the result out of a callback- or event-based function basically means “being in the right place at the right time”. If you bind your event listener after the result event has been fired, or you don’t have code in the right place in a callback, then tough luck, you missed the result. This sort of thing plagues people writing HTTP servers in Node. If you don’t get your control flow right, your program breaks.

I have literally never had this problem. I don't think it really plagues people writing HTTP servers. I mean, you really don't know what you're doing if you try to bind your event listener after a callback has fired. Remember, callbacks only ever fire AFTER your current imperative code has finished executing, and you've returned control to node.js.
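
A small sketch of that guarantee:

    var fs = require("fs");

    fs.stat("file1.txt", function(error, stats) {
      console.log("second"); // can only run after the current stack unwinds
    });
    console.log("first"); // always printed before "second"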


I can speak to your comment, since my side project is a Node.js server that talks to several APIs, a database, and N web clients. Like you, I've worked more than a year on it, but I had a positive experience with Q [1] and jQuery promises.

Promises make async code easy to manage, even at scale. Each API request gets its own promise. What happens inside that promise doesn't matter, as long as it returns a result or an error. Whether talking to the API takes one request or two does not matter with promises. We can abstract these API requests in such a way that, even if document retrieval is a multi-step process for one document source and a one-step process for another, the collation process needs to know nothing about the retrieval process in order to work.

In other words, promises allow us to separate concerns. Document retrieval is one concern, collation another. Promises, by abstracting asynchronous control flow into synchronously appearing objects, allow us to write simple programs at higher levels. Separating concerns makes the server easier to test and easier to extend.
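
As a hedged sketch of that separation (the names `fetchIndex`, `fetchDocuments` and `mergeDocuments` are hypothetical; `Q.all` is Q's promise combinator):

    // retrieval may take one step or several; callers only ever see a promise
    function retrieve(source) {
      return fetchIndex(source).then(fetchDocuments);
    }

    // collation knows nothing about how documents were retrieved
    function collate(sources) {
      return Q.all(sources.map(retrieve)).then(mergeDocuments);
    }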

If you want to see how I've used promises, you can take a look at my work on Github [2].

[1] https://github.com/kriskowal/q

[2] https://github.com/fruchtose/muxamp


> In other words, promises allow us to separate concerns. Document retrieval is one concern, collation another.

Other programming languages have this too. They're called 'METHODS'. Sorry, couldn't resist.

On a serious note, look at your code here:

https://github.com/fruchtose/muxamp/blob/master/lib/playlist...

And look at your 'playlistCount' function on line 39 (which for no apparent reason you've made a variable).

Now I can't understand how a promise there is helping your programming. In fact it seems to have vastly increased the complexity of the code, and it is now completely unclear from a quick scan how the method works or what the program flow actually is.

It should be a much shorter function, literally just this:

    function playlistCount(){
        return db.query('SELECT COUNT(id) AS count FROM Playlists;', function(rows) {                   
            return parseInt(rows[0]['count']);
        });
    }
Your version is 28 lines!

28 LINES. HOW? For a simple COUNT!

Actually, to be fair to promises, 50% of the problem here is that you've not abstracted your db code and unnecessarily repeat boilerplate connection code again and again and again.

But a big part of the problem is that you can't just throw an exception and let it bubble up the stack; you're constantly having to babysit the code, and that is the promises' fault.

(Unrelated, but as a mini code review: though I've never used node.js, the oft-repeated line `dbConnectionPool.release(connection);` in the connection-failure code is an immediate code smell to me. It looks like rain-dance code. Why would you have to release a failed connection? It failed; how can it be released?)


Some pseudocode

    playlistCount :: PromiseDB Int
    playlistCount = withConn $ \conn -> do
                      result <- query 'SELECT COUNT(id) AS count FROM Playlists;' conn
                      return (parseInt $ get 'count' result)
Wrapping the pool handling into withConn, and the failure modes into query. This PromiseDB monad would be fairly trivial to produce in Haskell. It's also fairly easy to write it point-free as

    withConn $ 
      query 'SELECT COUNT(id) AS count FROM Playlists;' 
      >=> return . parseInt . get 'count'


Having now thought about it a bit more, you could actually write the code so you don't have to baby the promise at all in normal code. So the promises didn't really increase the complexity of the code; the lack of abstraction did.

I'm thinking of something like the below as a dbHelper class.

Note I'm passing the error messages into the deferred reject method rather than using console logging. I'm not sure if the Q API supports that, as you don't use it, but if it doesn't, that seems like a bit of an odd design decision by the library authors; how else are you supposed to pass errors back? Unless I'm misunderstanding promises. If you want console logging of failed defers, I would recommend either editing the source of Q or monkey-patching the reject method to do it automatically.

It's also worth noting I am deliberately not checking rows.length, etc. You should let exceptions do what they're supposed to do, as something very serious has gone wrong if rows.length == 0, or rows[0] is null, or rows[0]['count'] doesn't exist. A SELECT COUNT should always return 1 row, and if you've named the column it should definitely be there.

    //this is totally untested code to show the idea of 
    //how you should abstract the boilerplate from your code
    var dbHelper = function() {
        // dbConnectionPool is assumed to be provided elsewhere (e.g. by ./db)
        var db = require('./db'),
            Q  = require('q');

        var dbHelperInner = {
            query : function(query, onComplete) {
                var dfd = Q.defer();
                
                try {
                    var cmd = new DbCommand(dfd);
                    cmd.execute(query, onComplete);
                } catch(e) {
                    dfd.reject(e);
                }
                
                return dfd.promise;
            }
        }

        function DbCommand(dfd) {
            if (!db.canRead()) {
                throw "db unavailable";
            }
            
            this.dfd = dfd;
        }
        
        DbCommand.prototype.execute = function(query, onComplete) {
            var self = this;
            dbConnectionPool.acquire(function(acquireError, connection) {
                if (acquireError) {
                    self.dfd.reject(acquireError);
                    return;
                }
                connection.query(query, function(queryError, rows) {
                    if (queryError) {
                        self.dfd.reject(queryError);
                    } else {
                        try {
                            self.dfd.resolve(
                                onComplete(rows));
                        } catch (e) {
                            self.dfd.reject(e);
                        }
                    }

                    dbConnectionPool.release(connection);
                });
            });
        }
        
        return dbHelperInner;
    }();


    //and then you could use it like this
    //which is almost identical to the code I posited above
    function playlistCount(){
        return dbHelper.query('SELECT COUNT(id) AS count FROM Playlists;', function(rows) {                   
            return parseInt(rows[0]['count']);
        });
    }


First of all, thank you for taking the time to read over my code. I don't get enough of this.

You're right that 28 lines is pretty ridiculous, but it's because I never abstracted out that code. The playlist code is some of the ugliest in that project, because I got pretty lazy with it. I know it's terrible. The console logging stuff is part of that.

Q actually allows exceptions to cause promise rejections, which is both a nifty feature and a potential curse (e.g. throwing an exception before releasing a resource).

I like the changes you propose (not considering testing), but with a slightly different implementation. The DbCommand should be creating its own deferreds, rather than accepting one as a parameter. This kind of promise handling is best left up to DbCommand to implement, rather than the caller. In Q it's easy to chain promises, like so:

    Q.fcall(someFunction).then(function(result) {
      return functionThatReturnsAPromise();
    }).then(function(secondResult) {
      console.log(secondResult);
    }).done();
Also, in a proper redesign, Q's denodeify function can change the whole flow of the execute function entirely:

    DbCommand.prototype.execute = function(query, onComplete) {
      var self = this;
      // denodeify returns a promise-returning function, which still
      // has to be invoked to get a promise
      Q.denodeify(dbConnectionPool.acquire)().then(function(connection) {
        self.connection = connection;
        // bind keeps the connection as `this` inside query
        return Q.denodeify(connection.query.bind(connection))(query);
      }).then(function(rows) {
        return onComplete(rows);
      }).finally(function() {
        self.connection && dbConnectionPool.release(self.connection);
      }).done();
    };
Q's denodeify call works with Node.js callbacks which follow the convention that the error is the first argument and the result the rest. denodeify then converts the error into a promise rejection, which can be handled with Q's fail. Any uncaught errors will be thrown after done() is called.
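
In miniature:

    var Q = require("q"),
        fs = require("fs");

    // Q.denodeify: node-style function in, promise-returning function out
    var readFile = Q.denodeify(fs.readFile);

    readFile("file1.txt")
      .then(function(data) { console.log(data.length); })
      .fail(function(error) { console.error(error); })
      .done();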

However, I am not sold on the idea of creating a prototype for DB commands. There's no state that needs to be held, and the code is abstract enough without introducing a prototype.

Again, thanks for the code review. The playlist DB code needs a lot of refactoring, since right now there's too much repetition I've been too lazy to fix. If you want to talk some more, feel free to email me.


The point is promises free you from wanting or needing to know about the order that things happen in. I hear you saying you fear promises because they would get in the way of your ability to know that. But the truth is, once you embrace them, that need becomes unimportant.

The idea that webservers are "all about side effects" gives me a chill. The whole architectural concept of HTTP is no side effects, so to claim that it's all about side effects seems odd. It should only be the case for POST, PUT, or DELETE methods, and only in very specific ways.


As someone who's started moving from the imperative world, albeit in Java, C# and Ruby based work rather than JavaScript, I can say that letting go is the hardest part.

However, when you learn to stop worrying and let the runtime decide, it's so much nicer. It turns out that people have already optimised the framework, so at worst it's just as fast as the code I wrote. At best it's faster, because the framework knows more about what it's capable of.

The biggest thing to realise is that while you can easily make things perform well in isolation the runtime can look at the bigger picture. There's no point making an operation run in 300ms if it blocks all other tasks on the server, when it could run in 600ms and allow everything else to keep going.


Just think of it as forking and joining threads...


> The point is promises free you from wanting or needing to know about the order that things happen in ... But the truth is once you embrace them, that need becomes unimportant

This smells like the classic leaky abstraction though. Like when people tried to paper over the difference between remote calls and local calls with abstract interfaces (CORBA, RMI, etc.). Everyone would say it was so awesome, remote calls look the same as local calls! But it wasn't awesome, it was horrible, because the details of the abstraction 'leaked' through and you got all kinds of problems from having delegated away what was actually one of the most critical, sensitive parts of your code to a layer you had no control over. 15 years later we're back to nearly everybody using REST because it turns out to be way better not to shove those abstractions on top of your most important code.

Now, I'm not saying that analogy is perfect here ... but it does remind me of it. Why should you care about the order of things? Just to suggest something, sometimes it's just useful to be able to reason about it. "We know the first operation definitely happened before the others, so an earlier one failing can't be a side effect of something a later one did ... oops, we don't know that any more. We actually have no idea what order they happened in."


The need for that form of reasoning seems to me to be a symptom of the way that we go about writing code. We don't need to worry about whether line 10 executes before line 11 within a given scope -- we just know this. If we come up with a similarly simple way of writing asynchronous operations that have dependencies, then we can read them just as fluently.

In my experience, promises do fill this role if they're used in a suitable scenario.


> The idea that webservers are "all about side effects" gives me a chill. The whole architectural concept of HTTP is no side effects, so to claim that it's all about side effects seems odd. It should only be the case for POST, PUT, or DELETE methods, and only in very specific ways.

There's nothing incongruous about that. It is the case that side effects should only happen on POST, PUT, and DELETE methods (and the like), but almost all webservers are written because of a need to use these.

If your webserver is all GETs and HEADs, then it is either trivial and you would have used someone else's instead of writing your own, or its sole purpose is to repackage and serve existing data from other sources - a rare use case among all webservers.

If you were to take an inventory of all the webservers out there, you would doubtless find that almost all of them exist in large part in order to create side effects.


And I'm thinking about complex sites, not simple serve-up-a-page-and-that's-it.

A cache gets refreshed or added to. A user's viewcount is incremented. A new statistic is calculated and then stored. An item is marked as viewed. And these are all just on a GET.

On complex sites with a logged-in user, side effects are pretty much the norm.


I'm not arguing about that; I am arguing about "all".


I think his point is that there's usually a very strict ordering to the events on an HTTP server - you parse and sanitize your input, make some database calls, and generate a response. At best, letting something else do the sequencing and composition for you doesn't gain you much, as it might in a reactive GUI. At worst it leaves room for subtle bugs or code that's less clear (arising from the statefulness of the Promise object itself).

Using Promises, as opposed to reducing a list of computations async-style, also limits you to the Promise object's interface, so you lose (or at least add cruft to) the flexibility and composability of using native lists. By sequencing computations with lists, if I want some insight into what's happening, I just List.map(apply compose, logFunc). With promises, I have some work to do.
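
For example, a sketch of that insight (`step1` through `step3` are hypothetical node-style tasks):

    var tasks = [step1, step2, step3];

    // plain Array.prototype.map is enough to decorate every task with logging
    var logged = tasks.map(function(task) {
      return function(callback) {
        console.log("running:", task.name);
        task(callback);
      };
    });
    // hand "logged" to whatever sequencer you were already using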

Promises have their uses, but it's definitely a tradeoff, and for most HTTP servers, I'd argue that their utility does seem a bit limited. I'd similarly say that making a point of using FRP to build a server would probably be a bit overkill for the task.


What kind of object is in your list of async operations? Promises. (Though probably your own ad hoc, hand-rolled and poorly specified version of them.)


Just plain-old native functions - that's the whole point.


When you put "plain old native functions" in an array, with the intent of executing them in sequence, with the output of i being fed into the input of i+1, congratulations: the functions are now implicitly promises.

Because, in the end, what, semantically, is the difference between:

    runqueue([func1, func2, func3, func4]);

and

    func1().then(func2).then(func3).then(func4);

No significant difference at all, really, except that promises permit you much more flexibility and options.


The difference is that the first works with all of the native list functions, as well as all of those in, e.g., underscore, without any extra work. The latter doesn't. Now, the latter certainly offers some other features, but my point was that, in specifically building an HTTP server, it's been my experience that those features aren't of as much use as being able to use the native list functions to, say, map a log function onto the list of functions, or reduce while halting execution under particular conditions.


It's an easy way to avoid race conditions. Say you have 2 ajax calls that are required to render your Main view. If either or both of the calls fail, you want to show a Default view.

One answer would be to just do them sequentially, perhaps nesting one of the API calls inside the other's callback (event-handler or otherwise). And then for the Default view, you'd need to have the code that handles the failure in 2 different places (or at least 2 tests for failure).

Another answer would be a state machine which can be grouped and chained (pipe'd) with other similar state machines, to guarantee an order of operations when you need it. With this, you create a promise which is only resolved when both of the promises for the 2 ajax calls are complete, and that promise then pipes the results to the next promise in the chain which renders your view. For the Default view, if either of the ajax calls fail then the parent promise will fail, allowing you to handle the failure in one place.

Code example:

    Deferred.when( ajax1(), ajax2() ).then( /* success */ mainView, /* fail */ defaultView );


In POSIX thread programming the "state machine" that you talk about is commonly called a Condition Variable. https://computing.llnl.gov/tutorials/pthreads/#ConVarOvervie...


I've read a few other blog posts about Promises in the past months, and also was never convinced. Maybe I never read the good ones, because this is the first one that made me sit up a bit in my chair and think that this is really pretty cool. I thought it was really well written and quite illuminating.


The project I'm working on right now is about 6 months old. Promises have greatly simplified our data access layer. My argument here is mostly syntactic (not semantic, like the OP), but being able to assign promises to a variable has clarified the intent of the code, improving readability, testability, and flexibility. I don't claim that promises are the One True Way, but they get a lot of noise out of my way and let me focus more on what our code is doing than on how it's doing it.


"Web server programming is entirely about side-effects" is not terribly interesting, when observing current common practice: if it's typically callback driven, there's nothing else it can be about.


Twitter's finagle is a good example.

Anytime you need to perform more than one interaction with external services in parallel, it's a lot easier to wrap the results as futures and interact with the promise objects than to cope with callback spaghetti.

This is particularly critical anytime you're working in an SOA environment.

I could see why it's easy to believe that you don't need promises if you're averaging 1-2 database queries and 0 API calls per web page/API call reply, but once your scenario gets even slightly more complex - you're fucked.

Promises are one of the big reasons I like Clojure's concurrency better than Go's.


I feel this is twisting the meaning of functional programming. Excel is not functional. It is declarative. You declare the relationships between the cells and Excel uses those to propagate changes. Just like a makefile is not functional but declarative. The dependency relationships are enforced to produce action. SQL is another example of a declarative language, and it is nowhere near functional.


> Excel is not functional. It is declarative.

Alan Kay (yes, that Alan Kay[1] -- the guy who won a Turing Award) formalized spreadsheets as a limited form of first-order functional programming.[0]

[0] http://en.wikipedia.org/wiki/Spreadsheet#Values

[1] http://en.wikipedia.org/wiki/Alan_Kay


I'm afraid you've misunderstood that Wikipedia page. It attributes the phrase "first-order functional programming" to authors other than Kay. All it attributes to Kay is the phrase "value rule".

Kay's interest in spreadsheets wasn't about functional programming, it was about interactive and dynamic computation. I have a pdf of the 1984 Scientific American article that Wikipedia is quoting from. It does include the phrase "value rule"—by which he simply meant what we would call a spreadsheet formula—but I'm pretty sure it makes no argument about functional programming (it's all images so I can't search to be sure). If you'd like a copy, email me. It's a pretty neat article, ahead of its time as one would expect from Alan Kay.


I've already read the Alan Kay article you mentioned (recently, in fact), and that's not what I took away from it.

I guess we'll agree to disagree; I don't think someone needs to use the phrase "first-order functional programming" when they give the very definition of it, which is what Wikipedia does: summarize Kay's argument.

I do agree the article itself was quite interesting, and certainly ahead of its time.


What's Kay's argument, then? And how does it relate to FP? I'm curious.

I was making a textual point about the Wikipedia article. Its use of the phrase "first-order functional programming" is hyperlinked to http://journals.cambridge.org/action/displayAbstract?aid=727.... It's not citing Alan Kay.


I guess I wasn't clear, sorry. What Alan Kay meant by the "spreadsheet value rule" and what the phrase "a limited form of first-order functional programming" means are semantically equivalent; they are the same thing.

I have no idea if Alan Kay ever used the latter phrase, but it was easier to use that phrase here on HN than Alan Kay's made-up phrase, which would have been difficult to understand without the content of the article explaining it.


Ah, gotcha. I agree with you that spreadsheet formulas are a limited form of first-order functional programming. But the memory model with which they are coupled is just as important (I have in mind the grid addressing system and dataflow semantics) and this does not fit as nicely into the FP paradigm. But I'm repeating what I said in other comments.


> But the memory model with which they are coupled is just as important (I have in mind the grid addressing system and dataflow semantics)

Totally agree. My startup is using hierarchical grids as our core datatypes for just that reason (we also support Function cells that are used as values).

"Naming" is one of the core problems in computer science, and grids/spreadsheets elegantly solve that problem for many ad hoc use cases, where functional programming (in all its forms) does not.


With all due respect to Alan Kay, I don't know what he meant by "first-order functional programming." If we are talking in relation to higher-order functions: a higher-order function is one that can take other functions as parameters or produce functions in return, i.e. a higher-order function maps functions to functions. All other functions are first-order functions, i.e. they are ordinary functions mapping values to values.

In that case any language supporting functions can do "first-order functional programming." It's kind of meaningless when we are talking about real functional languages. PHP can define a function that takes a value and returns a value, and thus it can do "first-order functional programming." But no one claims PHP is a functional language. (Not to degrade PHP; it's a good language doing what it does best.)

All this twisting of the definition of functional programming just produces more confusion.


What most C-derivative programming languages call a "function" is not actually a "function" in the sense meant in "functional programming". It is more accurately called a "procedure". While you can write a "functional function" in PHP, there's nothing inherent about PHP's "functions" that makes them "functional". It's up to the programmer to make them so.

On the other hand, a "formula" in Excel is inherently a "functional" function. Why? Two things: no side effects, and statelessness. That is, the order of the computation is implicit: a value can be computed in a number of different arbitrary orders without affecting the final result. It is not explicit, as in PHP, where you specify an exact order of operations by the order of statements in a procedure ("function").

Another thing about statelessness is that given the same input, a "first order function" must always return the same output.

The fact that order is not explicit in Excel, and that there is no user-controlled "state", means that there are certain associative, commutative and compositional properties available to "first order" functions that are not possible with a procedure that is not guaranteed to be one.

It's the difference between deciding to do functional programming in an imperative language, and having functional programming structurally enforced by the language.


The "pure" functions are just a subset of "procedure" functions. One can always restrict to use the "pure" function in PHP and yes, that's "first-order functional programming" in PHP. The question is: do we call PHP a functional language?

Likewise, Excel can do stateful updates with side effects. The cell formula function happens to be a subset of all it can do. Sure, we can claim it's "first-order functional programming." But do we want to call it a functional language?

My point is that "first-order functional programming" is a really weak qualifier to call a language functional.


You can do "Object Oriented programming" in C++. you can also do "OOP" in plain C, but why do we call C++ an "OO Language" but not C? because the design of the language itself encourages and enforces a particular style of "OOP".

In this case, I believe the "Excel is a functional language" meme is using "Excel" not to include ALL the things that Excel does, which includes things like VB and JavaScript scripting, but as shorthand for the spreadsheet model of application, and more specifically for Excel's formula language, which enforces the creation of first-order functions by excluding the possibility of statefulness and side effects from the language's design. It is the fact that this is inherent in its design that merits calling it a functional language, while in PHP there are no such design choices which enforce or even encourage a functional way of writing code.


Another point to make is that the "Excel is functional" meme is so useful because, as soon as you tell most programmers that "functional programming" has no variables, no state, and no side effects, they can't imagine how it's possible to actually do anything useful in a functional language. On the other hand, most programmers do know how to do useful things in a spreadsheet program, which happens to be effectively equivalent to a stateless, side-effect-free "pure" functional programming language.


That's kind of false advertising. Excel (or spreadsheets in general) is great and easy to use because of declarative programming, not because of functional programming. The functional programming aspect of it is minimal. No variables, no state, and no side effects are mainly attributes of declarative programming.

The most important aspect of being functional, higher-order functions, is completely missing.


Even "no variables" and "no state" are arguably untrue. The state of a spreadsheet consists of the values sitting in its cells. You mutate them by editing them.

One FP attribute that spreadsheets do have is referential transparency: the same inputs given to the same formula will always produce the same result.


But it only recomputes them when you edit them. It's like changing the source code of the program and rerunning it.

  let x = 3, y = 5 in
    x + y
No (mutable) variables there, but if I change the y = 5 bit to y = 4, the value of the expression changes. That's what it's like in Excel, as well.


I was wondering if someone would point that out! That's why I hedged with "arguably". :)

You're right, of course. But it depends on how you draw the boundaries of the system. If you think of the numbers in your example as code, then the program is stateless, and each time you change a number you get a new program. But if you think of them as parameters that live outside the code, then the edit-recompute cycle is a state change. In a similar if trivial way, if you take (say) a Java program running over a database and decide that the data in the database is "code", that "program" (consisting of Java code plus database) is now "stateless" too, and anyone who updates a database record is changing the "program".

Given that "stateful" vs. "stateless" depends on how you draw the boundaries of the system, the question is how best to draw the boundaries of a spreadsheet. I'm not arguing there's a single correct way to do that, but in my view the user's mental model of a spreadsheet is closer to "a calculating machine with state that I can update" than it is to "a stateless calculating machine with a lot of hard-coded literals".

Psychologically, updating the numbers in a spreadsheet doesn't feel like editing the source code of a program and re-running it. It feels stateful, like mutating something that triggers a cascade of side effects (recalculation). For this reason I think that spreadsheets are closer to the Smalltalk vision of a world of objects that respond to user interaction (as well as to each other) than they are to the functional programming vision of pure code. Put differently, the spreadsheet's I/O, its grid UI, is part of its essence. You can't abstract away from that without losing the heart of the thing. So the analogy with functional programming, though tempting, leads in the wrong direction. Every individual formula that lives in a cell is certainly a functional program when taken in isolation—but the memory model by which the cells reference one another and get updated is stateful.


From Wikipedia[0]:

Common declarative languages include those of database query languages (e.g., SQL, XQuery), regular expressions, logic programming, and functional programming.

I guess you're both right then. :)

[0] http://en.wikipedia.org/wiki/Declarative_programming


"Excel is not functional. It is declarative. You declare the relationships between the cells and Excel uses those to propagate changes."

Whereas in functional languages, the functions declare relationships between values and the language uses the evaluation model to propagate the results between function evaluations. Where's the difference?


A functional language has the declarative aspect, while a declarative language can lack the functional aspect. Just because A => B doesn't mean B => A.


You are correct that the fact that Excel is declarative does not, on its own, make it "functional"... the fact that Excel formulas adhere (by the definition of what an Excel formula is) to the statelessness and referential-transparency requirements of "first order functions" does make it functional.


But Excel doesn't have first order functions. You can't even define a function in spreadsheet cells, let alone pass one around as a value.


The cells are the functions.


A cell formula defines a computation, but not a function. For it to be a function you have to be able to reuse it (call it as a function) in multiple places.

For example, you can't define the abstraction "square" in Excel. You can compute 2 * 2, A1 * A1 and so on. But there's no way to define a construct which given x produces x * x, then reuse that construct everywhere you want to square things. Everywhere you want to square something, you must inline the computation. That's the absence of functional abstraction.


The reusability test for functions nails it. Thanks for clarifying it. Sometimes we just need to go back to the basic definition to make things clear.


Declarative is an orthogonal attribute to functional. The two attributes are not mutually exclusive. You may as well say something like: "A bicycle isn't a vehicle at all! A bicycle is a metallic object!"


IMO, declarative, functional and OO are somewhat but not really orthogonal concepts.

A mathematical (i.e. pure) function is declarative in nature, as it defines a relation between sets. However, in practice you often use it imperatively, i.e. as a means to get an output from some input.

An object is imperative in nature as it encapsulates mutable state. However, the set of messages defines an abstract interface which is arguably declarative.

In principle, a purely functional language cannot be object-oriented, and a purely object-oriented language cannot be functional. In practice, this doesn't matter as pure languages are rare.


Following that logic, OO is an orthogonal attribute to functional. Both of them can define functions, and thus OO is functional.


First of all, that's not what "functional" means.

Second, OO is an orthogonal attribute to functional.

Third, your argument commits a formal fallacy of this form:

All cats have whiskers

Cats can have stripes

Tony has stripes

therefore tony is a cat.

The possibility of an attribute in X, and Y containing that attribute, does not imply that Y is an X.

The point is that whether something is declarative has no bearing on whether it is functional or not. It's an irrelevant point to bring up. Whether something is OO is equally irrelevant. That is what "orthogonal" means. I would go on to define for you "functional", "declarative", "formal fallacy" and "logic", but this seems like a bottomless rabbit hole. I can only hope you'll try and find out what these words actually mean yourself.


Ok. I misread what you wrote in my haste and misunderstood what you said. You can ignore my reply to your original comment.

However, my original point was addressing the claim in the blog that functional and declarative are equivalent, in that functional has a declarative aspect, and since Excel is declarative, it's functional. I was basically saying that's not the case.

Whether declarative and functional are orthogonal, and can both live in one language, is an interesting but separate topic that is not really related to the original comment.


This was my thought as well. Promises are declarative... making a promise is almost the very definition of declarative programming.

It's not functional at all. This reaffirms my belief that blog posts are a terrible place to learn. People who know the least shout the loudest.


A tyrannical dichotomy.

Functional programming is declarative. Especially when it's lazy and the program's instantaneous state is abstracted out. One of the motivating goals in functional programming is to be able to define a computation once, in terms of other computations, and have that relationship be maintained with minimal regard to the state of the program or its order of execution. Which, it seems (this is the first time I've specifically encountered them), is exactly what Promises are a powerful tool for accomplishing.

In contrast, threading explicit callbacks/continuations through a program, which is explicitly managing the order of execution, is relatively more imperative, which I think is the point of the article -- you don't need the wider control-flow flexibility of explicitly and manually threading callbacks to do the type of computations that most async web stuff does. You can abstract the common callback pattern out into something like Promises, and make all your shit more consistent and concise.


I'm not saying his design sense is wrong.

I'm saying his characterization of promises as functional is so close to being right that it's the worst kind of wrongness.

It's worse to say, "The capital of Kenya is Nairobi Central" than to say "The capital of Kenya is London." It is worse because errors that are only slightly wrong and maybe even partially correct are more deceptive, durable, and confusing.

A promise is not a functional concept. Functions don't make promises. You CAN, as a thinking human, re-conceptualize the immutable return value of a function as being "pretty much the same" as a promise, but when you re-conceptualize functional programming you are just deconstructing what is already understood to be an arbitrary conceptual construct. Any conceptual construct--especially one based on simple metaphors--can be deconstructed into some other form. This doesn't invalidate the construct and it doesn't equate the construct as such with the deconstructed components.

You might as well say that an airplane is not really an airplane. It's just two wings, a cockpit, some jet engines and a fuselage assembled together in a certain way so that it can fly. It's not really an airplane; it's just those things in that way. And those things are really just all made out of aluminium so really it's not an airplane it's just aluminium. Anyone who says an airplane is more than just aluminium is making a tyrannical dichotomy.

So according to you promises are functional because under the hood promises have some conceptual similarity to the immutable return values provided in functional programming. But under the hood the Boeing 737 is just scrap metal. Yet it's NOT just scrap metal. The difference is that "the promise" is a precise concept based on some metaphors that is specifically used in declarative styles and not in functional styles. Functions return immutable values. Declarations make promises. Under the hood there may be similarities but no one made any claims to the contrary.

If people don't think these high level conceptual distinctions are important, why are they even reading this article that is entirely about splitting hairs between these conceptual distinctions? AND the article is even getting it wrong because the author is a self-admitted amateur. Why bother learning from the village loudmouth when there are geniuses who publish books and give lectures just down the hall?

The stuff that reaches the front page of HN makes no sense. Most of it is written by terribly ignorant people. Programmers have no respect for expertise.


Okay, if you require an appeal to authority, here is Oleg Kiselyov deriving Iteratees in Haskell, a more general formalization of "futures"/"promises": http://okmij.org/ftp/Streams.html

If you insist on setting up semantic games: if Oleg hacking Haskell isn't functional programming then nothing meaningfully is.

----

> Functions return immutable values. Declarations make promises. Under the hood there may be similarities but no one made any claims to the contrary.

What does `foldr (+)` return? What about `const 1`?
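
(For the non-Haskellers: both are partial applications, i.e. they return functions, and those functions are themselves perfectly ordinary immutable values. A rough JavaScript analogue of my own:)

    // add is curried; add(1) "returns" a function -- a value like any other.
    var add = function (a) { return function (b) { return a + b; }; };
    var inc = add(1);
    inc(41); // 42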


The distinction between functional and imperative is precisely a semantic game and nothing else.

Semantic games are meaningful or this blog post wouldn't be on HN. The title of the post makes it clear that we're playing a semantic game.

Kiselyov is deconstructing the constructs. That's fine. He's playing the semantic game of showing how different concepts share ontology. This doesn't make the concepts identical; it makes their deconstructions identical. These paradigms are not physical things; they are conceptual abstractions. Anyway, I'm obviously not getting through to you.

const is not a promise, it's a constant. Different concepts, similar ontology.


I'm not sure we have the necessary language in common to communicate here. Functional programming is not just about stateless pure functions, it's also about composing computations. I'm curious if you've ever written in a natively lazy functional language like Haskell, or encountered `delay`/`force` promises in Scheme. The `const` I'm referring to is not the `const` of C et al: (http://hackage.haskell.org/packages/archive/base/latest/doc/...) and is just an example.

A "promise" is a delayed computation. It's a stand-in for a value, and the computation referencing it will suspend its execution until a value is available for it to consume. Similarly with Iteratees. They encapsulate specific patterns for building computations out of other computations.

What makes the "promise" pattern relatively less imperative than threading callbacks is that it abstracts out a specific pattern for ordering a computation, so that it doesn't need to be restated every time.

What makes the "promise" pattern relatively less declarative than "purely declarative" is that the control of the computation can be specified in the language itself. It's just abstracted out to the pattern in common. Similarly to re-writing certain stateful `for` loops with `map`.

Promises are a pattern in the functional programming paradigm. The "functional programming community" is where the construct comes from, and using them lets you write code that is closer to the declarative ideals of functional programming than the article's specified alternative, threading callbacks, which has relatively more in common with the explicit control of the order of computation that is implied by imperative programming.

I don't see anything in the article that remotely justifies "It's not functional at all. This reaffirms my belief that blog posts are a terrible place to learn. People who know the least shout the loudest.". In fact, I don't see anything wrong with it at all, or anything that substantially disagrees with what I've said or what I've seen any of the type of (yet-unspecified) authorities of certified expertise that you've appealed to have said. Yes, Promises are relatively more declarative, but they're also relatively more functional, relatively less imperative, and I don't see any basis for calling somebody an idiot for not using those terms with total mutual exclusion.

So please raise any specific objections you have which I have not addressed, or else have the honesty to retract the undeserved insults you've thrown at the author of this article.


You're defining a promise in terms of what is "under the hood." You actually define promise in terms of what-the-hardware-does. This demonstrates the varying levels of abstraction that we're operating at.

You define promise:

>A "promise" is a delayed computation. It's a stand-in for a value, and the computation referencing it will suspend its execution until a value is available for it to consume.

I define promise:

>A "promise" is a declaration or assurance that one will do a particular thing or that guarantees that a particular thing will happen.

My definition of promise is conceptually and formally more accurate than yours, because your definition raises issues like "what is a computation", "what is a value", "what is a stand-in", "what is execution", and "what does consume mean".

As programmers, we understand what these words mean concretely in terms of lower-level abstractions or in some cases actual hardware operation.

My point is that the declarative ideal that you reference is not defined in terms of lower level programmer abstractions, but rather in terms of natural conceptual metaphors.

Since functional programming is generally seen as a subset of declarative programming, it's natural that functional programmers use declarative concepts and it's even understandable that they call those concepts functional. But they're mixing metaphors and creating conceptual confusion. They're losing track of the original metaphors that draw hair-splitting analytic distinctions between concepts in order to create clarity. It's semantics.

Like I already said, I'm not criticizing his design sense and I'm not criticizing yours. I'm criticizing his and your use of language and I'm asserting that your language is degraded and impure.

To me, aside from the design advice he gives, his blog post is about drawing semantic distinctions. I just find the way he does this to be horribly flawed in a way that is "so close to being right that it's the worst kind of wrong."

Similarly, I would describe your definition of promise in this way. So close to being right that it's the worst kind of wrong. Your definition of promise is not a definition; it's a deconstruction. It's actually an imperative deconstruction--you're describing the promise using imperative language--concepts like execution and computation.

HN is full of very "practical" people who have a view that goes something like, "I'm correct enough for this to WORK, I'm correct enough to get the correct answer, therefore I'm correct." It's a reductionist view of truth; that which I can deconstruct, I understand. I don't share this view.

I would say, you ARE correct, but you're not as correct as you could be. Likewise with the author.

When I want to learn about the distinction between declarative paradigms and functional paradigms I talk to people who specialize in drawing that distinction. Since the blog post is prima facie drawing a distinction between functional and imperative, the value I'm looking for is an analytically rigorous distinction between abstract concepts.

That's not what I found, so I'm critical. Sorry. This just isn't the author's area of expertise and it shows. Perhaps most of his readers don't care. More power to them. But if you want to engage in intellectual life you can't throw down the gauntlet every time someone criticizes you.


> When I want to learn about the distinction between declarative paradigms and functional paradigms I talk to people who specialize in drawing that distinction. Since the blog post is prima facie drawing a distinction between functional and imperative, the value I'm looking for is an analytically rigorous distinction between abstract concepts.

Who specializes in making that distinction? Point me in the right direction. I am absolutely throwing down the gauntlet, I don't believe for a second you know what you're talking about, and I suspect that your understanding of functional programming does not include the possibility of functions returning functions, and the term "thunk" would be lost on you. If you're going to put so much weight on expertise, please point me in the direction of the researcher who you suspect would take the author of this article to task for his terminology. Or make a positive argument yourself.

"Promise" here refers to a specific computation strategy, it has a well-documented intellectual heritage and has been defined as a lambda calculus. It has its basis in the structuring of computations. With how you've constructed this ontology, I don't see any room for "functional programming" to mean anything at all.

I suppose that does leave a lot of room for calling the OP "the village loudmouth" for having any actual content behind his words.


Well, you're right that I'm an asshole. And you're right that I'm not in line with your discipline.

I'm just in line with my own discipline, where promise has a different meaning, a superordinate meaning that includes yours. As a consequence, anyone from your discipline just thinks I'm clueless. I'm definitely not going to convince you that the typical definition of promise, the one you're used to, is actually just an operational definition--an instrumental label assigned to a particular instance of a promise-like thing.

I'd cite the tiny field of cranks who think like me, but that would just bring shame on them by association. I'm not doing research in functional programming. I'm working on hair-splitting tyrannical distinctions-without-a-difference. According to your paradigm, I'm an outright fraud spouting bullshit. So enjoy your victory I guess.


HN comment threads are my favorite learning environment.


Ryan Dahl in February 2010, when Promises were removed from core:

    Because many people (myself included) only want a low-level interface
    to file system operations that does not necessitate creating an
    object, while many other people want something like promises but
    different in one way or another. So instead of promises we'll use last
    argument callbacks and consign the task of building better abstraction
    layers to user libraries.
Those libraries do exist. There still isn't a canonical Promises specification. Node trying to force promises onto the ecosystem early on would've been like applying the brakes; it would have slowed adoption enormously.


> There still isn't a canonical Promises specification.

Yes, there is: https://github.com/promises-aplus/promises-spec


Promises/A+ surfaced less than 6 months ago, and is not implemented by most widely-used frameworks. Still a bit far from canonical.


Promises/A+ is just a more fully specified version of Promises/A, which has been around for about 4 years.
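
The heart of both is the same `then` contract (my paraphrase, not the spec text): `then` takes optional fulfillment/rejection handlers and returns a new promise resolved with the handler's result, which is what makes chaining work. Handler names below are placeholders:

    promise.then(onFulfilled, onRejected)    // returns a NEW promise...
           .then(nextStep)                   // ...so steps chain as values
           .then(null, handleEarlierError);  // either handler may be omitted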


I don't agree that there is any fundamental difference in functionality between callbacks and promises.

Promises don't somehow magically make asynchronous code easy to write while leaving callbacks out in the cold. They have very similar strengths and weaknesses and I didn't find any of the OP's arguments compelling.

In fact, if I had to choose, I would take the opposite view and say callbacks are neater, cleaner and more consistent than promises.


Promises are values, and you can use them to compute things. Callbacks are procedures, and they don't compose in non-trivial ways (you can chain callbacks simply enough, but that's basically it).
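
A hedged sketch of what "values" buys you, assuming a Promises/A+-style `then` plus an `all` combinator like RSVP.js's (`statPromise` is a hypothetical promise-returning stat):

    // Each element is a value standing for an eventual stat result...
    var stats = files.map(function (f) { return statPromise(f); });

    // ...so the collection composes into one value, and so does the total.
    var totalSize = all(stats).then(function (ss) {
      return ss.reduce(function (sum, s) { return sum + s.size; }, 0);
    });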


It is hard for me to fathom the negative feelings towards Promises. They are quite clearly a great way to perform async programming in a civilized way (see Twitter's Future/Promise in Finagle on github). JDK 8 will even have the equivalent in CompletableFuture. The only thing better is to combine Promises with coroutines for a more linear programming style like in Flow: http://www.foundationdb.com/white-papers/flow/


Thanks for this--I mentioned finagle to jcoglan on twitter yesterday after I read his blog post, and I don't think he was aware of the similarities.

I actually didn't know about futures until I learned about scala and finagle. I watched a talk on twitter's service stack given by marius eriksen and was blown away. My coworkers heard me rambling on about futures for weeks afterwards, and I found that it was difficult to explain what was so great about them. So I'm not surprised at the negative reactions in the comments here (although jcoglan did a much better job of explaining them than I ever did).


if you want this on the JVM today and can abide Scala, see: http://doc.akka.io/docs/akka/snapshot/scala/dataflow.html


The Twitter solution I mention above is in Scala — that said, I have one that also works in JDK 6/7 in a branch of https://github.com/spullara/java-future-jdk8.


This code doesn't look right to me:

    // list :: [Promise a] -> Promise [a]
    var list = function(promises) {
      var listPromise = new Promise();
      for (var k in listPromise) promises[k] = listPromise[k];
Perhaps the assignment is supposed to be the other way around?

      for (var k in promises) listPromise[k] = promises[k];


I asked the same question on Twitter. It turns out James was actually augmenting (i.e. modifying) the array object `promises` to behave as a promise itself. I don't think this was a particularly beautiful way of doing it, but now that I think about it, it does work.

Promise libraries, like the RSVP.js [1] he referred to, typically implement a way to construct a promise with a depends-on-many relationship, as a function possibly called `all([p1, p2, ...])` (with the same type signature as `list`), `and(p1, p2, ...)`, or something similar.

IMO, defining the `list` function that way would've been clearer to the reader and more FP'ish, treating the `promises` argument as a value and not a mutable object.
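
For illustration, a minimal sketch of such an `all` (deferred-style; `makeADeferred` and friends are made-up names, not any specific library's API):

    function all(promises) {
      var deferred = makeADeferred(), results = [], remaining = promises.length;
      if (remaining === 0) deferred.resolve(results);
      promises.forEach(function (p, i) {
        p.then(function (value) {
          results[i] = value;                       // preserve input order
          if (--remaining === 0) deferred.resolve(results);
        }, function (err) {
          deferred.reject(err);                     // first failure rejects
        });
      });
      return deferred.promise();
    }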

[1]: https://github.com/tildeio/rsvp.js/blob/master/lib/rsvp/all....


A day later I looked at this again and I'm a little closer to understanding.

      var listPromise = new Promise();
creates an object that, being a Promise object, has certain methods and internal state, derived from the prototype of Promise.

      for (var k in listPromise) promises[k] = listPromise[k];
This confused me because I thought "k" was a stand-in for a numeric index, e.g. that it was doing promises[0] = listPromise[0], promises[1] = listPromise[1], etc. That is not what's going on. Rather, "k" refers to attributes and/or methods that objects of the Promise class have by default. It's copying those onto `promises` — the array `promises` itself, not the individual items `promises[i]`, which keep their existing methods and attributes.
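
A toy example of my own to illustrate that mixin effect:

    var arr = [10, 20, 30];
    var obj = { greet: function () { return "hi"; } };
    for (var k in obj) arr[k] = obj[k]; // k is the property name "greet"
    arr.greet();  // "hi" -- the array now responds to greet()
    arr.length;   // 3   -- its numeric elements are untouched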

Coming from a Python background, I think I would have found this more obvious if the variable "k" were instead called "method" or "attr". If it was `for (var method in listPromise)` it'd be much clearer what's going on, whereas single-letter variables like i, j, and k are, to me, stand-ins for integers.

It was also confusing, as you said, that the function uses destructive update rather than treating the input as a value. James did mention this ("augmenting the list with promise methods"), but it's still unexpected, especially when the function is preceded by a Haskell type signature.

The reason I only say I'm closer to understanding, and not quite there yet, is I'm not sure what it means to do `new Promise()` or what is being copied over in the above for-loop. I tried James's code with a Promises/A+ implementation, rsvp.js (https://github.com/tildeio/rsvp.js), but it won't let me do `new Promise()` because it works differently:

    > var promise = new RSVP.Promise();
    TypeError: You must pass a resolver function as the sole argument to the promise constructor
Per an example in RSVP.js's readme, it's expecting this:

    var promise = new RSVP.Promise(function(resolve, reject){
        // set up a callback that calls either resolve(...)
        // or reject(...)
    });
If James is using a specific promises implementation in his code, it appears to be the one he defined in a past blog post (http://blog.jcoglan.com/2011/03/11/promises-are-the-monad-of...), which in turn builds on a module from his JS.Class library (http://jsclass.jcoglan.com/deferrable.html), which I hadn't heard of before.

I still think this is a great article, but that code snippet has proven to be quite a puzzle.


I agree, I was going to ask the same question. Unless there's some subtlety that I'm missing, the order you propose makes much more sense.


I'm glad someone else thought that looked odd.


Unfortunately, in practice promises end up making your code more difficult to reason about by adding cruft and unnecessary abstraction. They're also very limiting from a control-flow perspective.

This is especially noticeable when you have branching behavior / want to resolve a promise early[1]:

Branching with promises:

  function doTask(task, callback) {
    return Q.ncall(task.step1, task)
    .then(function(result1) {
      if (result1) {
        return result1;
      } else {
        return continueTasks(task);
      }
    })
    .nodeify(callback)
  }

  function continueTasks(task) {
    return Q.ncall(task.step2, task)
    .then(function(result2) {
      return Q.ncall(task.step3, task);
    })
  }
As opposed to with stepdown[2]:

  function doTask(task, callback) {
    $$([
      $$.stepCall(task.step1),
      function($, result1) {
        if (result1) return $.end(null, result1)
      },
      $$.stepCall(task.step2),
      $$.stepCall(task.step3)
    ], callback)
  }
I would really love for a post to include a non-trivial problem implemented with promises, vanilla callbacks, and async (and I'd be happy to add a stepdown equivalent), and let people see for themselves how, in my opinion, promises make code harder to read.

[1] http://stackoverflow.com/questions/11302271/how-to-properly-...

[2] https://github.com/Schoonology/stepdown (docs need updating, view tests for documentation)


Promises are just tools for managing a list of callbacks with less boilerplate. I wouldn't call one imperative and the other functional. Both are functional. You might dislike callback patterns, but through one of the beautiful parts of JS, you can trivially wrap any callback-oriented API you want and have it become a promise based one. I've done this before when I had a very complex dependency graph at the start of a program and a few API calls were callback related. It looks something like this:

    SomeClass.prototype.someActionPromise = function(){
        var deferred = makeADeferred();
        // invoke the original callback-based method, bridging its
        // err-first callback onto the deferred's reject/resolve
        SomeClass.prototype.someAction.call(this, function(err){
            err ? deferred.reject(err) : deferred.resolve();
        });
        return deferred.promise();
    };
Now you have a promise-based version that makes your code a little cleaner and easier to read.
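
The same trick generalizes to any node-style (error-first callback) function. A sketch, reusing the hypothetical makeADeferred from above:

    function promisify(fn, context) {
      return function () {
        var deferred = makeADeferred();
        var args = Array.prototype.slice.call(arguments);
        args.push(function (err, value) {   // node-style callback goes last
          err ? deferred.reject(err) : deferred.resolve(value);
        });
        fn.apply(context, args);
        return deferred.promise();
      };
    }

    var statPromise = promisify(fs.stat, fs);
    statPromise("file1.txt").then(function (stat) { /* ... */ });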


The author "promisify"s a callback-based API in the article - which is worth a read by the way.

I'm interested in your opinion RE: his argument for why callback APIs are imperative - because I think he has a very good point and has supported it with a solid argument and you haven't offered any rebuttal.


Interesting. As you pointed out, I hadn't read the article but was rather replying to other comments.

After reading it, I think I have to agree; I had never thought about it that way. It makes a lot more sense that a promise is just a declaration of some unit of work. When you can use a promise like any other data, you aren't just giving imperative commands, but rather describing work to be done and using that as a fundamental part of your code, which is why it drastically simplifies async programming (the relation of promises to monads is quite nice, too). Definitely a good article, thanks for kicking me :).


" the decision, made quite early in its life, to prefer callback-based APIs to promise-based ones."

Rewind to the point when nodejs was being designed. In that world, in the context of javascript, callbacks were the only real pattern that existed. XHR? callback. Doing something in the future? callback.

If you imagine node trying to leverage the javascript ecosystem, callbacks were a no-brainer.


I'm not sure you've ever used XHR if you call it the callback pattern.

The XHR object is effectively a request and a response bundled up into one object that has promise-like traits. You attach event handlers to it to handle various state changes and scenarios, and then once you issue the request, the event handlers get invoked 0-N times. If it really was JavaScript callback-style, XHR would look like this:

    window.xmlHttpRequest("GET", url, function (result, error) { ... } );
It doesn't. setTimeout/setInterval are definitely callback-passing, but they're not exactly glowing examples of stellar API design. They return integer IDs instead of handles or objects!
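
For contrast, classic event-handler-style XHR usage looks roughly like this:

    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState !== 4) return;          // not done yet
      if (xhr.status === 200) { /* use xhr.responseText */ }
      else { /* handle the failure */ }
    };
    xhr.send();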

Honestly, the only way to classify node's callback-heavy design as a 'no-brainer' is if you excuse the design by saying no thought was put into it beyond simply doing what a bunch of other people were doing. If you put enough thought into how large applications will be built, and how difficult it is to build scalable, maintainable libraries, callback-passing style easily loses compared to promises.


This is simply false.

Here's a discussion about removing promises from node.js in 2010:

https://groups.google.com/d/msg/nodejs/RvNoQtoWyZA/ar_lYLhK8...


Various incarnations of the Promise monad have existed for quite a while, even in JS. The oldest one I can think of is MochiKit's Deferred, inspired by Twisted's. That one worked (and still does) seamlessly with any callback code.


(Twisted was inspired by the work I just pointed to in my answer. Not to take away from yours -- I wasn't familiar with MochiKit.)


Of course, using monads for asynchronous tasks is an old trick and E has always been ahead of its time (like many other languages ...)


Maybe so, but everything in this article goes back at least to the 90s with the E programming language (http://erights.org). Doug Crockford was involved in E. (Nowadays E's Mark Miller is on the Ecmascript committee.)


> Rewind to the point when nodejs was being designed. In that world, in the context of javascript, callbacks were the only real pattern that existed.

Sure, but there were languages back then that made programming with callbacks more pleasant due to features like coroutines or first-class continuations. While these might not have been on the radar for most JavaScript developers, the technology certainly existed back then.


Promises seem cool, but if you don't like callbacks very much you should take a look at: CoffeeScript with two-space indenting, defining functions separately instead of inline, the async module, IcedCoffeeScript with await and defer, and LiveScript with backcalls. All of that is more useful and straightforward than promises.


So, calling magic subroutines is more functional than passing about first class functions?


Functional programming is not just about first class functions and I disagree with the idea that using first class functions means you are doing "functional programming".

See the second paragraph of the article for the author's take on this. Specifically, the concept of values is very important.


This kind of programming certainly is promising </pun>

It's one of the reasons why I started learning Haskell.


There is also a third approach for those who want to write composable, functional JavaScript: http://dfellis.github.com/queue-flow/2012/09/21/tutorial/


All that code and such a big abstraction for the first example, when it could have been done like this:

    var result = [];
    paths.forEach(function (file, i){
        fs.stat(file, function (err, data){
            result.push(data);
            if (i === 0) {
                // Use stat size
            }
            if (result.length === paths.length) {
                // Use the stats
            }
        });
    });
Fairly understandable, more efficient, and without introducing logic patterns foreign to many. It also meets his requirements (it is parallel and we only hit every file once).


To be fair, you haven't handled the complete set of cases. What if one of the items fails? You need to handle the error, but only if it's the first error, and make sure to tell all later-called callbacks that they're too late and we have already failed. Except, if we got an error on an early callback but the 0th item comes back later, we need to do whatever we were going to do with that one piece of data.

    var result = [];
    var hasFailed = false;
    paths.forEach(function (file, i){
        fs.stat(file, function (err, data){
            // if previous callback failed, give up.
            // unless this is the first item, which we still need
            if(hasFailed && i !== 0) return;
            if(err) {
                hasFailed = true;
                // do something with the error 
                if (i === 0) { /* do something special for error on first item */ }
                return;
            }
            result.push(data);
            if (i === 0) {
                // Use stat size
                // remember we might have already failed, in which case don't add the first item to the general result
                if(hasFailed) return;
            }
            if (result.length === paths.length) {
                // Use the stats
            }
        });
    });
That's almost certainly still not close to right. Which just illustrates the basic problem: without either promises or something like async.js, you're reimplementing control flow by yourself. You can easily start with perfectly-nice-looking code that balloons to be incomprehensible as soon as you start caring about error cases. And where two statements, perhaps dozens of lines apart, are preserving some invariant that is not obvious to someone editing your code in the future. Even yourself.


The example I'm talking about is the one where he doesn't handle errors either, this one: http://pastebin.com/98CarwzU.

And your example is disingenuous too: why do we need to add so much logic, instead of just collecting the errors, if you want to handle them anyway?

    result.push(err || data);
So you don't need to pass around a variable called "hasFailed"; one line can be enough:

    var failed = !result.every(Buffer.isBuffer);


> without introducing logic patterns foreign to many

Consider your use of the forEach() abstraction when you could have used a for() loop just as easily. (That said, I agree the article in general could do a better job of describing the "other" side)


Nope, with a plain for() loop the shared scope would obligate me to create a function with bound params for each iteration, so that "i" is not the last value (i.e. paths.length) in every call. Examples: http://stackoverflow.com/questions/1451009/javascript-infamo...
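
That is, the classic loop/closure trap (illustrative):

    for (var i = 0; i < paths.length; i++) {
      fs.stat(paths[i], function (err, data) {
        // by the time any of these callbacks run, i === paths.length,
        // so using i in here requires an extra binding function per iteration
      });
    }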


Exactly, one of the advantages of using a good abstraction like forEach() is that it prevents you from making that kind of mistake. On the other hand, it can be more costly in cases where you don't need the closure. All abstractions by definition have some tradeoffs, some visible or hidden complexity, and some extra knowledge. Promises are no different.


I see no benefits in promises, zero, none.


The real problem, design-wise, is that fs.stat operates on a single file at a time. Sometimes you only want info on one file, sure, but in many common use cases, you want info on a bunch of files - perhaps even the contents of an entire directory, or a directory tree. Worse still, stat might be a syscall! Woo, syscall per file.


... and what does that have to do with promises?

And in such a case you would do this once, outside the listeners of the HTTP (or whatever) connections, so it would be done just once regardless of the amount of concurrent activity.


The point is that a properly designed API wouldn't require any amount of scaffolding. You'd go:

    fs.statMany(filenames, function (stats) { ... });
or:

    var statsPromise = fs.statMany(filenames);
And then in either case, you'd just use a for-loop or forEach or whatever your preference on the result. No thinking about how to preserve complex invariants or whatever is necessary.

Hell, with ES6 generator-y expressions you could make it even more succinct, something like:

    var result;
    fs.statMany(filenames, function (stats) { result = [... for x in stats]; });
No push nonsense, no nested if statements, no need to explicitly invoke async.parallel or whatever. Just clarity.


Sorry, I don't see any clarity there. And how would that fix the fact that stat works on individual files? More importantly, how does a function like that handle errors? At the individual level, or the group level? It's confusing.


You've missed so many edge cases it's not even funny. Error handling? Zero length "paths" array? These are exactly the sort of things that using promises helps avoid.

Why don't we all just go back to manually pushing/popping arguments on the stack like in assembly language while we're at it?


[deleted]


This is a ridiculous attitude. Do you laugh at the concept of "complex" and "imaginary" numbers because a group of these so-called "mathematicians" has apparently arbitrarily decided to call some numbers imaginary? RIDICULOUS! LAUGHABLE!



