
Marek,

Streams return `false` when a write will buffer past a configurable highWaterMark. The first hand-rolled `on('data', write)` pipe doesn't take this into consideration, and so yes, backpressure is not handled. `r.pipe(w)` does the right thing here.
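
To make that concrete, here's a minimal sketch (the stream names are illustrative) of the difference between the two approaches:

    // Naive: ignores write()'s return value, so data piles up in
    // the Writable's buffer as fast as the Readable can emit it.
    r.on('data', function (chunk) {
      w.write(chunk);
    });

    // Backpressure-aware: pause when write() returns false, resume
    // on 'drain'. This is roughly what r.pipe(w) does for you.
    r.on('data', function (chunk) {
      if (!w.write(chunk)) {
        r.pause();
        w.once('drain', function () {
          r.resume();
        });
      }
    });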

The single "extra read" that you're seeing is just filling up to that configureable highWaterMark, which is an intentional feature. In the real world, connections are often of extremely variable speeds. Consider sending data from a database on localhost to a client on a 3G or 4G network. The mobile connection is extremely bursty, but with a high max speed and periods of very high latency. The database connection is extremely steady, but with a slower max throughput because of hard disk latency. In that case, you absolutely don't want to miss a chance to send data to the mobile client during a burst, so the ideal default approach is for Node to smooth out those highs and lows by buffering a small amount of data in memory. We don't consider 64KB to be a large amount for most purposes, but as I mentioned, it is configurable.

There is no way to pause the accept call, it's true. We've considered adding that feature, but no one has ever asked for it. Perhaps if you explain your use case in a GitHub issue, we could do that. You can `server.close()`, but that also unbinds, so clients get an ECONNREFUSED. Except in the cluster use case, bind() and accept() are typically very tied to one another. It wouldn't be too hard to expose them separately, but like I said, no one's ever asked. If your complaint is that we haven't implemented a feature that no one has ever asked for, well, that's just not how we do things. Maybe it's a cultural difference in our approaches to creating software, I don't know.
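
To be clear about what `close()` does today, here's a minimal sketch of that refusal behavior:

    var net = require('net');

    var server = net.createServer(function (conn) {
      conn.end('hello\n');
    });

    server.listen(8000, function () {
      // This unbinds the listening socket entirely: no more
      // accept() calls, but new clients get ECONNREFUSED rather
      // than waiting in the kernel's accept backlog.
      server.close();
    });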

    First, I believe most node.js programmers (including myself)
    don't understand Streams and just don't implement the Stream
    interfaces correctly.
Ok, well, there's not really any excuse for that any more. They're thoroughly documented, base classes are provided, there are blogs and examples all over the place. Maybe start with http://api.nodejs.org/stream.html and if you have questions that aren't answered, complain about it at https://github.com/joyent/node/issues and mention `@isaacs` in the issue.

It's literally a single method that you have to override to implement a well-behaved Readable, Writable, or Transform stream.
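
For example, a complete, well-behaved Transform stream that upper-cases its input is just `_transform` (a minimal sketch using the provided base class):

    var stream = require('stream');
    var util = require('util');

    function Upcase(options) {
      stream.Transform.call(this, options);
    }
    util.inherits(Upcase, stream.Transform);

    // The one method to override: take a chunk, push the result,
    // call done() when you're ready for the next chunk.
    Upcase.prototype._transform = function (chunk, encoding, done) {
      this.push(chunk.toString().toUpperCase());
      done();
    };

    process.stdin.pipe(new Upcase()).pipe(process.stdout);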

    But even if Streams were properly implemented everywhere
    the API suffers a race condition: it's possible to get
    plenty of data before the writer reacts and stops the reader.
This is not true. The Writable stream object has a highWaterMark. Once that much data is buffered in memory, it starts returning `false` to its consumers. If you'd like to set that to 0, go right ahead. It will return `true` only if the data is immediately consumed by the underlying system. This doesn't happen "some time in the future". It happens at the first `write()` call that pushes the buffer over the high water mark. The example you describe is quite easy to simulate with setTimeout and the like. Perhaps you could post a bug if it behaves in a way that is problematic?
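
You can verify this directly. A sketch, with the slow underlying system simulated by setTimeout:

    var stream = require('stream');

    var w = new stream.Writable({ highWaterMark: 0 });
    w._write = function (chunk, encoding, done) {
      // Simulate a slow underlying system.
      setTimeout(done, 1000);
    };

    // With highWaterMark set to 0, the write that can't be
    // consumed immediately returns false on that same call.
    console.log(w.write(new Buffer('x'))); // false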

I have a hard time sussing out what you're actually complaining about in this article. You certainly seem upset about some things node does, but I can't figure out exactly what's bugging you. Is it the inability to delay accept() calls? Is it callbacks? Is it streams? Is it non-blocking IO as such?

Streams aren't really a "callback based" API as much as an event-based one, and actually, a more strictly callback-based stream API would be quite a bit easier to get right, in my opinion, with much less ceremony: http://lists.w3.org/Archives/Public/public-webapps/2013JulSe...
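
The shape would be something like this (purely hypothetical, sketching the flavor of that proposal rather than any shipped API):

    // Pull-based: nothing flows until the consumer asks for it, so
    // there is no pause/resume dance and no missed 'data' events.
    function copy(src, dst) {
      src.read(function (err, chunk) {
        if (err) return dst.abort(err);
        if (chunk === null) return dst.end(); // null signals EOF
        dst.write(chunk, function (err) {
          if (err) return src.abort(err);
          copy(src, dst); // ask for the next chunk only when ready
        });
      });
    }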

A similar approach could be taken to the listen/accept stuff you write about. `server.accept(function(conn) {})` and then call accept() again when you're ready for another one. A convenience method can then be trivially layered on top of that to call accept() repeatedly:

    Server.prototype.listen = function (cb) {
      // `this` inside onconn wouldn't be the server, so capture it.
      var self = this;
      self.accept(function onconn(conn) {
        cb(conn);
        // Request the next connection once this one is handled.
        self.accept(onconn);
      });
    };
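
And a consumer who wants to throttle accepts just calls `accept()` directly and delays the next request (same hypothetical API):

    // Accept at most one connection per 100ms.
    server.accept(function onconn(conn) {
      handleConnection(conn); // your handler; name is made up
      setTimeout(function () {
        server.accept(onconn);
      }, 100);
    });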

I could be wrong, but I suspect that, at the root, the cause of your distaste for Node is actually EventEmitters, rather than any of that other stuff you mention. And if so, I agree 100%. The "evented" part of Node is a mistake which can only be properly appreciated with the benefit of hindsight. It's too late to easily change now, of course, and so that's the design constraint we were faced with in building streams2 and streams3. But I think that platforms of the future should avoid this landmine.

Fair warning: I'm going to be offline first for NodeConf and then for vacation for the next several weeks, so this is a bit of a hit-and-run comment. Feel free to reply to i@izs.me or post issues on the Node.js github page. I probably won't see replies here.



