
HTTP pipelining is designed so a client can send multiple requests without waiting for a response, and the server then sends all the responses in order. This helps on high-latency links since it can combine numerous HTTP requests into fewer packets. The exploit is that the client never stops to read responses and just writes requests nonstop. Meanwhile, node's http.Server continually populates a response buffer which is never consumed.

Node uses Stream[1] objects for reading and writing streams of data. A writable stream has an internal 'needDrain' flag which is set once its buffer surpasses the highWaterMark (16 KB by default). Subsequent writes will return false[2], and code should wait until the 'drain' event is emitted, signaling it's safe to write again[3]. The documentation even warns about this scenario:

> However, writes will be buffered in memory, so it is best not to do this excessively. Instead, wait for the drain event before writing more data.

http.Server[4] uses a writable stream to send responses to a client. Until this patch[5], it ignored the needDrain/highWaterMark status and just kept writing to the stream. This fills the writable stream's buffer far beyond the high water mark until the process eventually runs out of memory.

The patch resolves this by checking when needDrain is set: the server stops writing and stops reading/parsing incoming data, then waits until the 'drain' event fires before proceeding as normal.

[1] http://nodejs.org/api/stream.html

[2] http://nodejs.org/api/stream.html#stream_writable_write_chun...

[3] http://nodejs.org/api/stream.html#stream_event_drain

[4] http://nodejs.org/api/http.html#http_class_http_server

[5] https://github.com/joyent/node/commit/085dd30e93da67362f044a...



