Author of Elli here.
Elli is similar to Mochiweb, in that there is a pool of processes all accepting on the socket (doesn't work on Windows, I'm told). When an "acceptor" gets a connection, it handles that client for the lifetime of the connection, which might mean multiple requests if keep-alive is used. Unlike Cowboy, Misultin and Yaws, no process is spawned after accepting and no process is spawned to run the user callback. This makes for better performance and more robustness, as the processes cannot get out of sync. I could not make any of the existing projects work this way without completely rewriting the core.
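To illustrate the shared-accept model described above, here is a rough sketch using plain `gen_tcp`. The module and function names are hypothetical and this is not Elli's actual internals, just the shape of the idea: many processes block in accept on the same listen socket, and each one serves its connection for its whole lifetime.

```erlang
-module(accept_pool).
-export([start/2]).

start(Port, NumAcceptors) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {active, false}, {reuseaddr, true}]),
    %% Spawn a pool of acceptors that all block on the same listen socket.
    [spawn_link(fun() -> acceptor(Listen) end) || _ <- lists:seq(1, NumAcceptors)],
    {ok, Listen}.

acceptor(Listen) ->
    %% The VM wakes one of the blocked acceptors per incoming connection.
    {ok, Socket} = gen_tcp:accept(Listen),
    %% Handle every request on this connection (keep-alive) in this same
    %% process -- no spawn per connection, no spawn per request.
    handle_connection(Socket),
    acceptor(Listen).

handle_connection(Socket) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, _RequestBytes} ->
            gen_tcp:send(Socket,
                <<"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok">>),
            handle_connection(Socket);
        {error, _} ->
            gen_tcp:close(Socket)
    end.
```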
The biggest difference between Elli and the other Erlang webservers, however, is the programming model. Mochiweb, Yaws, Misultin and Cowboy give you helper functions for writing a response on the socket. This makes it easy to send the body before you send the headers, send multiple bodies, etc. In fact, it makes it so easy that Cowboy tries really hard to help you avoid these mistakes, at the cost of higher complexity in the user code (the return value from every helper function must be passed into the next calls).
The programming model offered in Elli is similar to the "rack" model of request-response. You get a request and return a response, which Elli serializes into the actual HTTP response. This makes it very easy to reason about and test the controller logic: create a fake request with your paths, body, etc., then check the response, with no sockets or processes involved. This model breaks down for streamed and chunked responses, which are handled differently. At Wooga, we use chunked responses to send real-time notifications.
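A minimal sketch of what this request-response callback style looks like, based on my reading of Elli's handler behaviour (a `handle/2` that returns a status/headers/body tuple); treat the details as illustrative rather than canonical:

```erlang
-module(my_handler).
-export([handle/2, handle_event/3]).

%% The handler is essentially a pure function of the request, which is
%% what makes it easy to test: build a fake request, call handle/2,
%% assert on the returned tuple. No sockets or processes needed.
handle(Req, _Args) ->
    Name = elli_request:get_arg(<<"name">>, Req, <<"world">>),
    {ok,
     [{<<"Content-Type">>, <<"text/plain">>}],
     <<"Hello ", Name/binary, "!">>}.

%% Elli handlers also receive lifecycle events (errors, timeouts, ...).
handle_event(_Event, _Data, _Args) ->
    ok.
```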
Another big upside of the request-response model is that you can write pluggable middlewares to extend and customize Elli. For example, you can add access logging, a real-time stats dashboard (https://github.com/knutin/elli_stats), basic auth, compression, basic media serving and, when I get around to it, even the "Date" header. If you don't want these features, you can simply turn them off. This might sound complex, but in practice it is very powerful. We are running out of CPU, and being able to turn off features completely is a big win. You also don't need to deal with unused features causing problems on the critical path.
Starting from scratch allowed me to make some tough choices in the name of robustness and performance, at the cost of sacrificing features considered essential in a more complete server.
I feel I need to address this, as your two points about why there is a performance difference are false.
I just implemented and released a middleware to add the Date header; it's available here: https://github.com/knutin/elli_date. When running the "Hello World!" micro-benchmark, where I'm only testing the performance of the webserver itself, there is no significant difference in performance. I used the same approach as Cowboy and Yaws: cache the date string in an ETS table and read it on every request.
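The caching approach described above could be sketched like this; one process refreshes the formatted date string once a second, and request handlers only do a cheap ETS read. The module and function names here are hypothetical, not elli_date's actual code:

```erlang
-module(date_cache).
-export([start/0, date_header/0]).

start() ->
    ets:new(?MODULE, [named_table, public, {read_concurrency, true}]),
    spawn_link(fun update_loop/0).

update_loop() ->
    %% httpd_util:rfc1123_date/0 (from inets) formats the current time
    %% in the HTTP date format.
    Date = list_to_binary(httpd_util:rfc1123_date()),
    ets:insert(?MODULE, {date, Date}),
    timer:sleep(1000),
    update_loop().

%% Called on every request: a single ETS lookup instead of formatting
%% the date each time.
date_header() ->
    [{date, Date}] = ets:lookup(?MODULE, date),
    {<<"Date">>, Date}.
```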
As for routing, Cowboy offers very nice routing that makes writing applications easier. Elli does not offer any explicit facility to do this, but pushes it to the user, which in our case typically means function clauses matching on the URL, as can be seen in this example: https://github.com/knutin/elli/blob/master/src/elli_example_... The Erlang VM can optimize matching on these clauses nicely, especially with HiPE. Claiming that Elli and the benchmark do not do any routing is false.
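The function-clause "routing" mentioned above looks roughly like this, sketched after the linked example (the exact clauses are illustrative):

```erlang
%% Dispatch on method and path segments via pattern matching; the VM
%% compiles these clauses into an efficient decision tree.
handle(Req, _Args) ->
    handle(elli_request:method(Req), elli_request:path(Req), Req).

handle('GET', [<<"hello">>], Req) ->
    Name = elli_request:get_arg(<<"name">>, Req, <<"undefined">>),
    {ok, [], <<"Hello ", Name/binary, "!">>};
handle('GET', [<<"hello">>, <<"json">>], _Req) ->
    {ok, [{<<"Content-Type">>, <<"application/json">>}],
     <<"{\"msg\": \"hello\"}">>};
handle(_Method, _Path, _Req) ->
    {404, [], <<"Not Found">>}.
```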
I have studied Cowboy closely and taken ideas from it. I'm very thankful of everybody in the community and you in particular who offers up their projects and ideas for general consumption. It makes the community richer. Building on the shoulders of giants makes projects like Elli easier.
I'm happy that with Ranch, Cowboy will see a performance improvement. I hope that there are some ideas in Elli that can be used by other projects to improve performance and robustness.
Still neat... what impressed me were some of the linked "helper" projects for doing things like stats.
Does anyone know if there is a comprehensive comparison of the different HTTP solutions for Erlang somewhere? Thanks.
Also, are you sure you didn't mean ms and not us? Seems more likely for an HTTP service.
To check if the numbers make sense, I dumped the raw data used to compute these stats and compared elli_stats with the same functions in R and they match up.
Show me a real benchmark. I don't really care if it's written in Erlang. I can likely blow that 'Hello World' away by writing directly to a socket.
Author of elli here.
It is very hard to create real and meaningful benchmarks. "Hello world" benchmarks are very useful when you are writing a webserver and curious about where to optimize, if necessary. Even though they are very superficial, they can also help you compare two webservers.
It is very easy to run the benchmarks on your own hardware. Get elli, then run "elli:start_link()" and hit "/hello?name=john" with apachebench or your tool of choice.
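Assuming `elli:start_link()` listens on its default port (8080, in my reading), the invocation above could look something like this with apachebench:

```shell
# 100k requests, 100 concurrent, against a locally running Elli
ab -n 100000 -c 100 'http://127.0.0.1:8080/hello?name=john'
```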
Elli is only useful if you want to write an Erlang application that exposes an HTTP API. If you want raw performance, Haskell has some servers which do 300k+ rps.
Here is a very good article on the topic of benchmarking Erlang webservers from Steve Vinoski, who is overall a very smart and experienced guy: http://steve.vinoski.net/blog/2011/05/09/erlang-web-server-b...
This is an interesting tool, developed by a group of people doing interesting things, and it's been shared with the world for others to use and/or improve on.
While I'm personally not an Erlang developer, I'm grateful for others who create interesting solutions to their scaling problems. Plus, we utilize Erlang extensively at Whoosh Traffic -- anything that can inspire our developers is always welcome.
2) If your expectations aren't being met, ask for what is missing. Don't come across as an antagonist belittling the author.
Thank you for your consideration.
The post, however, is about a whole new server. It's also written in an interesting language. This I might use. This I want to learn more about. I'll go out on a limb here and say that more HN people want to hear about the new webserver than about how "one can do it as fast by writing to a socket".