
This is an interesting perspective. But to me, even having spent a year on a large node.js project, I just don't see how promises would have simplified things at all.

If you have some crazy graph of dependencies, I can see how breaking out promises could help simplify things. But I don't feel like that's a super-common scenario.

The author says:

> * [Promises] are easier to think about precisely because we’ve delegated part of our thought process to the machine. When using the async module, our thought process is:*

> A. The tasks in this program depend on each other like so,

> B. Therefore the operations must be ordered like so,

> C. Therefore let’s write code to express B.

> Using graphs of dependent promises lets you skip step B altogether.

But in most cases, I don't want to skip B. As a programmer, I generally find myself preferring to know what order things are happening in. At most, I'll parallelize a few database calls or RPCs, but it's never that complex. (And normal async-helper libraries work just fine.)
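
For instance, a minimal sketch with the async module (the db handle, render, and next here are hypothetical placeholders):

    var async = require('async');

    // Run two independent queries in parallel; the final callback fires
    // once both complete (or as soon as either errors).
    async.parallel({
        user:   function(cb) { db.query('SELECT ...', cb); },
        orders: function(cb) { db.query('SELECT ...', cb); }
    }, function(err, results) {
        if (err) return next(err);
        render(results.user, results.orders);
    });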

I swear I want to wrap my head around how this promises stuff could be useful in everyday, "normal" webserver programming, but it just always feels like over-abstraction to me, obfuscating what the code is actually doing, hindering more than helping. I want to know, specifically, if one query is running before another, or after another, or in parallel -- web programming is almost entirely about side effects, at least in my experience, so these things often matter an awful lot.

I'm still waiting for a real-world example of where promises help with the kind of everyday webserver (or client) programming which the vast majority of programmers actually do.

> Getting the result out of a callback- or event-based function basically means “being in the right place at the right time”. If you bind your event listener after the result event has been fired, or you don’t have code in the right place in a callback, then tough luck, you missed the result. This sort of thing plagues people writing HTTP servers in Node. If you don’t get your control flow right, your program breaks.

I have literally never had this problem. I don't think it really plagues people writing HTTP servers. I mean, you really don't know what you're doing if you try to bind your event listener after a callback has fired. Remember, callbacks only ever fire AFTER your current imperative code has finished executing, and you've returned control to node.js.
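
A quick sketch of why (run-to-completion semantics; setImmediate just schedules the emit for a later tick):

    var EventEmitter = require('events').EventEmitter;
    var emitter = new EventEmitter();

    setImmediate(function() {
        emitter.emit('result', 42); // can only fire on a later tick
    });

    // Bound "after" the emit was scheduled, but before control returns
    // to the event loop -- so it still catches the event.
    emitter.on('result', function(value) {
        console.log('got', value); // "got 42"
    });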




I can speak to your comment, since my side project is a Node.js server that talks to several APIs, a database, and N web clients. Like you, I've worked more than a year on it, but I had a positive experience with Q [1] and jQuery promises.

Promises make async code easy to manage, even at scale. Each API request gets its own promise. What happens inside that promise doesn't matter, as long as it yields a result or an error. Whether talking to the API takes one request or two makes no difference to the caller. We can abstract these API requests in such a way that even if document retrieval is a multi-step process for one source and a one-step process for another, the collation process need know nothing about the retrieval process in order to work.

In other words, promises allow us to separate concerns. Document retrieval is one concern, collation another. Promises, by abstracting asynchronous control flow into synchronously appearing objects, allow us to write simple programs at higher levels. Separating concerns makes the server easier to test and easier to extend.
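
As an illustrative sketch of that separation (httpGet and the endpoints are hypothetical; only Q.all is real Q API):

    var Q = require('q');

    // One source answers in a single request...
    function fetchFromSimpleSource() {
        return Q(httpGet('/docs'));
    }

    // ...another needs two, but it still just returns a promise for documents.
    function fetchFromPagedSource() {
        return Q(httpGet('/docs?page=1')).then(function(first) {
            return Q(httpGet('/docs?page=2')).then(function(second) {
                return first.concat(second);
            });
        });
    }

    // Collation only sees promises; it knows nothing about each source's internals.
    function collate(sources) {
        return Q.all(sources.map(function(fetch) { return fetch(); }))
                .then(function(results) { return [].concat.apply([], results); });
    }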

If you want to see how I've used promises, you can take a look at my work on Github [2].

[1] https://github.com/kriskowal/q

[2] https://github.com/fruchtose/muxamp


> In other words, promises allow us to separate concerns. Document retrieval is one concern, collation another.

Other programming languages have this too. They're called a 'METHOD'. Sorry, couldn't resist.

On a serious note, look at your code in here:

https://github.com/fruchtose/muxamp/blob/master/lib/playlist...

And look at your 'playlistCount' function on line 39 (which for no apparent reason you've made a variable).

Now I can't understand how a promise there is helping your programming. In fact it seems to have vastly increased the complexity of the code, and it is now completely unclear from a quick scan how the method works or what the program flow actually is.

It should be a much shorter function, literally just this:

    function playlistCount(){
        return db.query('SELECT COUNT(id) AS count FROM Playlists;', function(rows) {
            return parseInt(rows[0]['count'], 10);
        });
    }
Your version is 28 lines!

28 LINES. HOW? For a simple COUNT!

Actually, to be fair to promises, 50% of the problem here is that you've not abstracted your db code and unnecessarily repeat boilerplate connection code again and again and again.

But a big part of the problem is that you can't just throw an exception and let it bubble up the stack; you're constantly having to babysit the code, and that is the promises' fault.

(Unrelated, but as a mini code review: though I've never used node.js, the oft-repeated line of `dbConnectionPool.release(connection);` in the connection-failure code is an immediate code smell to me. It looks like rain-dance code. Why would you have to release a failed connection? It failed, how can it be released?)


Some pseudocode

    playlistCount :: PromiseDB Int
    playlistCount = withConn $ \conn -> do
                      result <- query "SELECT COUNT(id) AS count FROM Playlists;" conn
                      return (parseInt $ get "count" result)
Wrapping the pool handling into withConn, and the failure modes into query. This PromiseDB monad would be fairly trivial to produce in Haskell. It's also fairly easy to write it point-free as

    withConn $
      query "SELECT COUNT(id) AS count FROM Playlists;"
      >=> return . parseInt . get "count"


Having now thought about it a bit more, you could actually write the code so you don't have to babysit the promise at all in normal code. So the promises didn't really increase the complexity of the code; the lack of abstraction did.

I'm thinking of something like the below as a dbHelper class.

Note I'm passing the error messages into the deferred reject method rather than using console logging. I'm not sure if the Q API supports that, as you don't use it that way, but if it doesn't, that seems like a bit of an odd design decision by the library authors; how else are you supposed to pass errors back? Unless I'm misunderstanding promises. If you want console logging of failed defers, I would recommend either editing the source of Q or monkey patching the reject method to do it automatically.
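
A minimal check of that assumption (based on Q's documented reject/fail behavior):

    var Q = require('q');
    var dfd = Q.defer();

    // The reason passed to reject() arrives as the fail handler's argument.
    dfd.promise.fail(function(err) {
        console.error('rejected with:', err.message);
    });
    dfd.reject(new Error('db unavailable'));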

It's also worth noting I am deliberately not checking rows length, etc. You should let exceptions do what they're supposed to do, as something very serious has gone wrong if rows.length == 0, rows[0] is null, or rows[0]['count'] doesn't exist. A SELECT COUNT should always return 1 row, and if you've named the column it should definitely be there.

    //this is totally untested code to show the idea of
    //how you should abstract the boilerplate from your code
    //(dbConnectionPool is assumed to be in scope, e.g. exported by ./db)
    var dbHelper = function() {
        var db = require('./db'),
            Q  = require('q');

        var dbHelperInner = {
            query : function(query, onComplete) {
                var dfd = Q.defer();

                try {
                    var cmd = new DbCommand(dfd);
                    cmd.execute(query, onComplete);
                } catch(e) {
                    dfd.reject(e);
                }

                return dfd.promise;
            }
        };

        function DbCommand(dfd) {
            if (!db.canRead()) {
                throw new Error("db unavailable");
            }

            this.dfd = dfd;
        }

        DbCommand.prototype.execute = function(query, onComplete) {
            var self = this;
            dbConnectionPool.acquire(function(acquireError, connection) {
                if (acquireError) {
                    self.dfd.reject(acquireError);
                    return;
                }
                connection.query(query, function(queryError, rows) {
                    if (queryError) {
                        self.dfd.reject(queryError);
                    } else {
                        try {
                            // resolve with whatever the caller's mapper returns
                            self.dfd.resolve(
                                onComplete(rows));
                        } catch (e) {
                            self.dfd.reject(e);
                        }
                    }

                    // release the connection whether the query succeeded or not
                    dbConnectionPool.release(connection);
                });
            });
        };

        return dbHelperInner;
    }();


    //and then you could use it like this
    //which is almost identical to the code I posited above
    function playlistCount(){
        return dbHelper.query('SELECT COUNT(id) AS count FROM Playlists;', function(rows) {
            return parseInt(rows[0]['count'], 10);
        });
    }


First of all, thank you for taking the time to read over my code. I don't get enough of this.

You're right that 28 lines is pretty ridiculous, but it's because I never abstracted out that code. The playlist code is some of the ugliest in that project, because I got pretty lazy with it. I know it's terrible. The console logging stuff is part of that.

Q actually allows exceptions to cause promise rejections, which is both a nifty feature and a potential curse (e.g. throwing an exception before releasing a resource).
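
For instance, a minimal sketch of that behavior:

    // A throw inside a handler becomes a rejection further down the chain,
    // skipping any resource cleanup you didn't put in a finally.
    Q.fcall(function() {
        throw new Error('boom');
    }).fail(function(err) {
        console.error(err.message); // "boom" -- no try/catch needed
    });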

I like the changes you propose (not considering testing), but with a slightly different implementation. The DbCommand should be creating its own deferreds, rather than accepting one as a parameter. This kind of promise handling is best left up to DbCommand to implement, rather than the caller. In Q it's easy to chain promises, like so:

    Q.fcall(someFunction).then(function(result) {
      return functionThatReturnsAPromise();
    }).then(function(secondResult) {
      console.log(secondResult);
    }).done();
Also, in a proper redesign, Q's denodeify function can change the whole flow of the execute function entirely:

    DbCommand.prototype.execute = function(query, onComplete) {
      var self = this;
      // denodeify returns a promise-returning function, which must be invoked;
      // bind preserves `this` for the pool and connection methods
      Q.denodeify(dbConnectionPool.acquire.bind(dbConnectionPool))().then(function(connection) {
        self.connection = connection;
        return Q.denodeify(connection.query.bind(connection))(query);
      }).then(function(rows) {
        // if the driver's callback passes multiple results, Q resolves with an array
        return onComplete(rows);
      }).finally(function() {
        self.connection && dbConnectionPool.release(self.connection);
      }).done();
    };
Q's denodeify call works with Node.js callbacks which follow the convention that the error is the first argument and the result comes after. denodeify converts an error passed to the callback into a rejection, which then surfaces as the argument to any Q.fail handler. Any uncaught errors will be thrown after done() is called.
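
For example, a sketch with the canonical fs.readFile case:

    var Q  = require('q');
    var fs = require('fs');

    // Wrap a (path, encoding, callback) API into a promise-returning one.
    var readFile = Q.denodeify(fs.readFile);

    readFile('config.json', 'utf8')
        .then(function(data) { console.log(data); })
        .fail(function(err) { console.error(err); })
        .done(); // rethrows anything still unhandled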

However, I am not sold on the idea of creating a prototype for DB commands. There's no state that needs to be held, and the code is abstract enough without introducing a prototype.

Again, thanks for the code review. The playlist DB code needs a lot of refactoring, since right now there's too much repetition I've been too lazy to fix. If you want to talk some more, feel free to email me.


The point is promises free you from wanting or needing to know about the order that things happen in. I hear you saying you fear promises because you think they would get in the way of your ability to know that. But the truth is once you embrace them, that need becomes unimportant.

The idea that webservers are "all about side effects" gives me a chill. The whole architectural concept of HTTP is no side effects, so to claim that it's all about side effects seems odd. It should only be the case for POST, PUT, or DELETE methods, and only in very specific ways.


As someone who's started moving from the imperative world, albeit in Java, C# and Ruby based work rather than JavaScript, I can say that letting go is the hardest part.

However, when you learn to stop worrying and let the runtime decide, it's so much nicer. It turns out that people have already optimised the framework, so at worst it's just as fast as the code I wrote. At best it's faster, because the framework knows more about what it's capable of.

The biggest thing to realise is that while you can easily make things perform well in isolation the runtime can look at the bigger picture. There's no point making an operation run in 300ms if it blocks all other tasks on the server, when it could run in 600ms and allow everything else to keep going.


Just think of it as forking and joining threads...


> The point is promises free you from wanting or needing to know about the order that things happen in ... But the truth is once you embrace them, that need becomes unimportant

This smells like the classic leaky abstraction though. Like when people tried to paper over the difference between remote calls and local calls with abstract interfaces (CORBA, RMI, etc.). Everyone would say it was so awesome, remote calls look the same as local calls! But it wasn't awesome, it was horrible, because the details of the abstraction 'leaked' through and you got all kinds of problems from having delegated away what was actually one of the most critical, sensitive parts of your code to a layer you had no control over. 15 years later we're back to nearly everybody using REST because it turns out to be way better not to shove those abstractions on top of your most important code.

Now, I'm not saying that analogy is perfect here ... but it does remind me of it. Why should you care about the order of things? Just to suggest something, sometimes it's just useful to be able to reason about it. "We know the first operation definitely happened before the others, so an earlier one failing can't be a side effect of something a later one did ... oops, we don't know that any more. We actually have no idea what order they happened in."


The need for that form of reasoning seems to me to be a symptom of the way that we go about writing code. We don't need to worry about whether line 10 executes before line 11 within a given scope -- we just know this. If we come up with a similarly simple way of writing asynchronous operations that have dependencies, then we can read it just as fluently.
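
Something like this, say (getUser, getOrders, render, and handleError are hypothetical):

    // Each step may be asynchronous, but the chain reads top-to-bottom
    // just like consecutive synchronous lines.
    getUser(id)
        .then(function(user)   { return getOrders(user); })
        .then(function(orders) { return render(orders); })
        .fail(handleError);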

In my experience, promises do fill this role if they're used in a suitable scenario.


> The idea that webservers are "all about side effects" gives me a chill. The whole architectural concept of HTTP is no side effects, so to claim that it's all about side effects seems odd. It should only be the case for POST, PUT, or DELETE methods, and only in very specific ways.

There's nothing incongruous about that. It is the case that side effects should only happen on POST, PUT, and DELETE methods (and the like), but almost all webservers are written because of a need to use these.

If your webserver is all GETs and HEADs, then it is either trivial and you would have used someone else's instead of writing your own, or its sole purpose is to repackage and serve existing data from other sources - a rare use case among all webservers.

If you were to take an inventory of all the webservers out there, you would doubtless find that almost all of them exist in large part in order to create side effects.


And I'm thinking about complex sites, not simple serve-up-a-page-and-that's-it.

A cache gets refreshed or added to. A user's viewcount is incremented. A new statistic is calculated and then stored. An item is marked as viewed. And these are all just on a GET.

On complex sites with a logged-in user, side effects are pretty much the norm.


I'm not arguing about that; I am arguing about "all".


I think his point is that there's usually a very strict ordering to the events on an HTTP server - you parse and sanitize your input, make some database calls, and generate a response - at best, letting something else do the sequencing and composition for you doesn't gain you much, as it might in a reactive GUI. At worst it leaves room for subtle bugs or code that's less clear (arising from the statefulness of the Promise object itself).

Using Promises, as opposed to reducing a list of computations async-style, also limits you to the Promise object's interface, so you lose (or at least add cruft to) the flexibility and composability of using native lists. By sequencing computations with lists, if I want some insight into what's happening, I just List.map(apply compose, logFunc). With promises, I have some work to do.
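
In JavaScript terms, a synchronous sketch of the list-manipulation point (parseInput, queryDb, and renderResponse are hypothetical task functions):

    // Tasks as a plain array, so ordinary array methods can layer in
    // cross-cutting behavior like logging before anything is sequenced.
    var tasks = [parseInput, queryDb, renderResponse];

    var logged = tasks.map(function(task) {
        return function(input) {
            console.log('running', task.name);
            return task(input);
        };
    });

    // Sequence with a plain reduce.
    var run = function(input) {
        return logged.reduce(function(acc, task) { return task(acc); }, input);
    };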

Promises have their uses, but it's definitely a tradeoff, and for most HTTP servers, I'd argue that their utility does seem a bit limited. I'd similarly say that making a point of using FRP to build a server would probably be a bit overkill for the task.


What kind of object is in your list of async operations? Promises. (Though probably your own ad hoc, hand-rolled, and poorly specified version of them.)


Just plain-old native functions - that's the whole point.


When you put "plain old native functions" in an array, with the intent of executing them in sequence, with the output of i being fed into the input of i+1, congratulations: the functions are now implicitly promises.

Because, in the end, what, semantically, is the difference between:

    runqueue([func1, func2, func3, func4]);
and

    func1().then(func2).then(func3).then(func4);

No significant difference at all, really, except that the promises permit you much more flexibility and options.
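
Indeed, a runqueue over promises is just a fold (runqueue here is hypothetical; Q() is an already-fulfilled promise):

    // Folding a list of promise-returning functions into a .then chain.
    function runqueue(funcs) {
        return funcs.reduce(function(promise, func) {
            return promise.then(func);
        }, Q());
    }

    // runqueue([func1, func2, func3, func4]) now behaves like
    // func1().then(func2).then(func3).then(func4)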


The difference is that the first works with all of the native list functions, as well as all of those in e.g., underscore, without any extra work. The latter doesn't. Now, the latter certainly offers some other features, but my point was that, in specifically building an HTTP server, it's been my experience that those features aren't of as much use as being able to use the native list functions to, say, map a log function onto the list of functions, or reduce while halting execution under particular conditions.


It's an easy way to avoid race conditions. Say you have 2 ajax calls that are required to render your Main view. If either or both of the calls fail, you want to show a Default view.

One answer would be to just do them sequentially, perhaps nesting one of the api calls inside the other's callback (event handler or otherwise). And then for the Default view, you'd need to have the code that handles the failure in 2 different places (or at least 2 tests for failure).

Another answer would be a state machine which can be grouped and chained (pipe'd) with other similar state machines, to guarantee an order of operations when you need it. With this, you create a promise which is only resolved when both of the promises for the 2 ajax calls are complete, and that promise then pipes the results to the next promise in the chain which renders your view. For the Default view, if either of the ajax calls fail then the parent promise will fail, allowing you to handle the failure in one place.

Code example:

    Deferred.when( ajax1(), ajax2() ).then( /* success */ mainView, /* fail */ defaultView );


In POSIX thread programming the "state machine" that you talk about is commonly called a Condition Variable. https://computing.llnl.gov/tutorials/pthreads/#ConVarOvervie...


I've read a few other blog posts about Promises in the past months, and also was never convinced. Maybe I never read the good ones, because this is the first one that made me sit up a bit in my chair and think that this is really pretty cool. I thought it was really well written and quite illuminating.


The project I'm working on right now is about 6 months old. Promises have greatly simplified our data access layer. My argument here is mostly syntactic (not semantic, like the OP's), but being able to assign promises to variables has clarified the intent of the code, improving readability, testability, and flexibility. I don't claim that promises are the One True Way, but they get a lot of noise out of my way and let me focus more on what our code is doing than on how it's doing it.
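
A small sketch of what that looks like (fetchUser, getOrders, showProfile, and showOrders are hypothetical):

    // A promise is a value: it can be named, passed around, and reused.
    var userPromise   = fetchUser(id);
    var ordersPromise = userPromise.then(getOrders);

    // Both consumers share the same in-flight request.
    userPromise.then(showProfile);
    ordersPromise.then(showOrders);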


"Web server programming is entirely about side-effects" is not terribly interesting, when observing current common practice: if it's typically callback driven, there's nothing else it can be about.


Twitter's Finagle is a good example.

Anytime you need to perform more than one interaction with external services in parallel, it's a lot easier to wrap the results as futures and interact with the promise objects than to cope with callback spaghetti.

This is particularly critical anytime you're working in an SOA environment.

I could see why it's easy to believe that you don't need promises if you're averaging 1-2 database queries and 0 API calls per web page/API call reply, but once your scenario gets even slightly more complex - you're fucked.

Promises are one of the big reasons I like Clojure's concurrency better than Go's.



