I recently implemented an event logging server, intended to process a massive volume of concurrent event messages from large clusters of systems.

We chose to implement a CPU-efficient binary protocol, automatic client-side queuing of outbound messages when a server fails, and automatic *client-side* fail-over to the next available server (a rough sketch of the queuing/fail-over idea follows).
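
For what it's worth, here's a minimal sketch in Go (not our actual code) of what client-side queuing plus fail-over can look like. The length-prefixed frame layout, the server addresses, and every name in it are hypothetical, purely for illustration:

    package main

    import (
    	"encoding/binary"
    	"fmt"
    	"net"
    	"time"
    )

    // encodeEvent builds a hypothetical binary frame: a 4-byte
    // big-endian length prefix followed by the payload bytes.
    func encodeEvent(payload []byte) []byte {
    	frame := make([]byte, 4+len(payload))
    	binary.BigEndian.PutUint32(frame, uint32(len(payload)))
    	copy(frame[4:], payload)
    	return frame
    }

    // Client buffers outbound events and fails over to the next
    // server in the list when a send fails.
    type Client struct {
    	servers []string // candidate servers, tried in order
    	current int      // index of the server currently in use
    	conn    net.Conn
    	queue   [][]byte // events buffered while no server is reachable
    }

    // connect tries each server once, starting from the current one.
    func (c *Client) connect() error {
    	for i := 0; i < len(c.servers); i++ {
    		idx := (c.current + i) % len(c.servers)
    		conn, err := net.DialTimeout("tcp", c.servers[idx], 2*time.Second)
    		if err == nil {
    			c.current, c.conn = idx, conn
    			return nil
    		}
    	}
    	return fmt.Errorf("no server reachable")
    }

    // Send queues the event, then tries to flush the queue; on a
    // write error it drops the connection, advances to the next
    // server, and leaves unsent events queued for the next attempt.
    func (c *Client) Send(payload []byte) {
    	c.queue = append(c.queue, encodeEvent(payload))
    	if c.conn == nil && c.connect() != nil {
    		return // all servers down; events stay queued
    	}
    	for len(c.queue) > 0 {
    		if _, err := c.conn.Write(c.queue[0]); err != nil {
    			c.conn.Close()
    			c.conn = nil
    			c.current = (c.current + 1) % len(c.servers) // fail over
    			return
    		}
    		c.queue = c.queue[1:]
    	}
    }

    func main() {
    	c := &Client{servers: []string{"10.0.0.1:7000", "10.0.0.2:7000"}}
    	c.Send([]byte(`{"event":"example"}`))
    }

The real thing obviously needs bounded queues, retry backoff, and batching, but the point is how little machinery the single-tier design requires.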

The initial implementation, with no time spent on profiling or optimization, was able to receive and process 25,000 event messages per second from a single client, and scaled relatively linearly with additional clients/cores.

I can't even begin to fathom solving this problem with HTTP, or why the HTTP 'features' listed (proxies, load balancers, web browsers, 'extensive hardware', etc.) would be an improvement over the relatively simple, highly scalable single-tier implementation we built.

Clearly, HTTP works well for some things, but "just use HTTP, your life will be simpler" is a naive maxim.


