Elli: Performance oriented web server written in Erlang (wooga.com)
65 points by sbhat7 1424 days ago | 20 comments

I wonder how it compares with all the other high performance Erlang web servers. He seems familiar with them, so I'm curious what requirements or functionality changed that they needed to write their own instead of hacking one of the others.


Author of Elli here.

Elli is similar to Mochiweb, in that there is a pool of processes all accepting on the socket (doesn't work on Windows, I'm told). When an "acceptor" gets a connection, it handles that client for the lifetime of the connection, which might mean multiple requests if keep alive is used. Unlike Cowboy, Misultin and Yaws, no process is spawned after accepting and no process is spawned to run the user callback. This makes for better performance and it is more robust, as the processes cannot get out of sync. I could not make any of the existing projects work this way without completely rewriting the core.
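The shared-acceptor approach described above can be sketched roughly as follows. This is an illustrative outline, not Elli's actual internals; module and function names and the fixed response are made up for the example:

```erlang
-module(acceptor_pool_sketch).
-export([start/2]).

%% Open one listen socket and spawn N acceptors that all block in
%% gen_tcp:accept/1 on the same socket. The accepting process then
%% handles the connection itself -- no extra process per request.
start(Port, NumAcceptors) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary,
                                         {active, false},
                                         {reuseaddr, true}]),
    [spawn_link(fun() -> accept_loop(Listen) end)
     || _ <- lists:seq(1, NumAcceptors)],
    {ok, Listen}.

accept_loop(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    handle_connection(Socket),
    %% The same process returns to accepting once the client is done.
    accept_loop(Listen).

handle_connection(Socket) ->
    %% Real code would parse the request and honour keep-alive;
    %% here we just send a canned response and close.
    gen_tcp:send(Socket,
                 <<"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello">>),
    gen_tcp:close(Socket).
```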

The biggest difference between Elli and the other Erlang webservers, however, is the programming model. Mochiweb, Yaws, Misultin and Cowboy give you helper functions for writing a response on the socket. This makes it easy to send the body before you send the headers, send multiple bodies, etc. In fact, it makes it so easy that Cowboy tries really hard to help you avoid this, at the cost of higher complexity in the user code (you need to pass the return value from every helper function into the next call).

The programming model offered in Elli is similar to the "rack" model of request-response. You get a request and return a response which is serialized by Elli into the actual HTTP response. This makes it very very easy to reason about and test the controller logic by creating a fake request with your paths, body, etc, then checking the response, no sockets or processes involved. This model breaks when you want to do streamed and chunked responses, which is handled differently. At Wooga, we use the chunked responses to send real-time notifications.
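A callback in this model looks roughly like the following sketch. The `handle/2` and `handle_event/3` signatures and the `elli_request:get_arg/3` accessor follow Elli's example callback module, but treat the details as illustrative:

```erlang
-module(my_handler).
-export([handle/2, handle_event/3]).

%% Pure request -> response: no sockets, no processes. Elli
%% serializes the returned tuple into the actual HTTP response.
handle(Req, _Args) ->
    Name = elli_request:get_arg(<<"name">>, Req, <<"world">>),
    {200,
     [{<<"Content-Type">>, <<"text/plain">>}],
     <<"Hello, ", Name/binary, "!">>}.

%% Elli callbacks also receive events (request_complete, etc.);
%% ignore them here.
handle_event(_Event, _Data, _Args) -> ok.
```

Because `handle/2` is a pure function of the request, a unit test can call it directly with a fabricated request term and assert on the `{Status, Headers, Body}` tuple, exactly as described above.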

Another big upside of the request-response model is that you can write pluggable middlewares to extend and customize Elli. For example, you can add access logging, real-time stats dashboard (https://github.com/knutin/elli_stats), basic auth, compression, basic media serving and when I get around to it, even the "Date" header. If you don't want these features, you can simply turn them off. This might sound complex, but in practice it is very powerful. We are running out of CPU and being able to turn off features completely is a big win. You also don't need to deal with unused features causing problems on the critical path.
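Wiring up such a middleware chain might look like this sketch. The `{mods, ...}` config shape follows Elli's `elli_middleware` module; the port and the logging module name are assumptions for illustration:

```erlang
-module(middleware_sketch).
-export([start/0]).

%% elli_middleware runs each callback module in order; the first
%% one to return a response wins, so unused features can simply be
%% left out of the list.
start() ->
    Mods = [{elli_access_log, []},   % hypothetical logging middleware
            {my_handler, []}],       % the application's own handler
    elli:start_link([{callback, elli_middleware},
                     {callback_args, [{mods, Mods}]},
                     {port, 8080}]).
```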

Starting from scratch allowed me to make some tough choices in the name of robustness and performance, at the cost of sacrificing features considered essential in a more complete server.


According to my simple 'hello world' AB test, elli is about 40% faster than cowboy.

Cowboy developer here. A good part of that is the lack of the Date header. The other main difference is the lack of routing (that you are going to do in any real-world application anyway). Also depends on your Cowboy version, the one that uses Ranch (which I'm about to push) got an increase in performance due to the removal of a bottleneck.

Hi Loïc,

I feel I need to address this, as your two points about why there is a performance difference are false.

I just implemented and released a middleware to add the Date header, it's available here: https://github.com/knutin/elli_date. When running the "Hello World!" micro-benchmark where I'm only testing the performance of the webserver itself, there is no significant difference in performance. I used the same approach as in Cowboy and Yaws and cache the date string in an ETS-table and read it on every request.
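The caching scheme described can be sketched like this, assuming a seconds-resolution timer process; module and table names are illustrative, not elli_date's actual code:

```erlang
-module(date_cache_sketch).
-export([start/0, rfc1123/0]).

%% One process re-formats the date once per second into a public ETS
%% table; request handlers do a single ets:lookup/2 instead of
%% formatting the date string on every request.
start() ->
    ets:new(cached_date, [named_table, public,
                          {read_concurrency, true}]),
    {ok, spawn_link(fun loop/0)}.

loop() ->
    Date = httpd_util:rfc1123_date(),
    ets:insert(cached_date, {date, list_to_binary(Date)}),
    timer:sleep(1000),
    loop().

%% Called from the request path.
rfc1123() ->
    [{date, Date}] = ets:lookup(cached_date, date),
    Date.
```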

As for routing, Cowboy offers very nice routing that makes writing applications easier. Elli does not offer any explicit facility to do this, but pushes it to the user, which in our case typically means function clauses matching on the url as can be seen in this example: https://github.com/knutin/elli/blob/master/src/elli_example_... The Erlang VM can nicely optimize matching on these clauses especially with HiPE. Claiming that Elli and the benchmark does not do any routing is false.
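Dispatch by function-clause matching looks roughly like this; the routes are made up, and the `elli_request:method/1` and `elli_request:path/1` accessors follow Elli's example handler:

```erlang
-module(routing_sketch).
-export([handle/2, handle_event/3]).

handle(Req, _Args) ->
    handle(elli_request:method(Req), elli_request:path(Req), Req).

%% One clause per route; the compiler turns these clauses into an
%% efficient pattern-matching dispatch.
handle('GET', [<<"hello">>], Req) ->
    Name = elli_request:get_arg(<<"name">>, Req, <<"world">>),
    {200, [], <<"Hello, ", Name/binary, "!">>};
handle('GET', [<<"status">>], _Req) ->
    {200, [], <<"ok">>};
handle(_Method, _Path, _Req) ->
    {404, [], <<"Not Found">>}.

handle_event(_Event, _Data, _Args) -> ok.
```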

I have studied Cowboy closely and taken ideas from it. I'm very thankful of everybody in the community and you in particular who offers up their projects and ideas for general consumption. It makes the community richer. Building on the shoulders of giants makes projects like Elli easier.

I'm happy that with Ranch, Cowboy will see a performance improvement. I hope that there are some ideas in Elli that can be used by other projects to improve performance and robustness.


Why is it so much faster? Is it really that much better code, or are there some tradeoffs in the other ones that Elli approaches in a different way?

Fewer features... it still isn't doing dates (which is a known slowdown in Erlang)... and the connection process doubles as the handler process...

Still neat... what impressed me were some of the linked "helper" projects for doing stuff like stats.

He wrote a custom webserver based on his very specific needs. It's not a general purpose solution, so not too surprising it's faster for his particular use cases.

It'd be interesting to hear about that in more detail - what tradeoffs he made to get that speedup.

What's the difference in architecture compared to the other "big" Erlang webservers (Yaws, Misultin and Cowboy)?

I'm interested in this as well. So far I have only read parts of Mochiweb (I needed some of the functionality they implemented for my hobby project, mainly json2.erl IIRC, but ended up reading much more because it was fun :)) and didn't have time to read through the other servers.

Does anyone know if there is a comprehensive comparison of the different HTTP solutions for Erlang somewhere? Thanks.

Misultin development has been discontinued.

You may have a problem in your math. A mean of 4.3us with a stdev of 10.9us should mean that your 99th percentile is over 26us, unless the distribution is super smooth with a few huge outliers.

Also, are you sure you didn't mean ms and not us? Seems more likely for an HTTP service.

Thanks for your feedback. Always good to have an extra set of eyes!

To check if the numbers make sense, I dumped the raw data used to compute these stats and compared elli_stats with the same functions in R and they match up.

Color me disinterested.

Show me a real benchmark. I don't really care if it's written in Erlang. I can likely blow that 'Hello World' away by writing directly to a socket.


Author of elli here.

It is very hard to create real and meaningful benchmarks. The "Hello world" benchmarks are very useful when you are writing a webserver and want to know where to optimize, if necessary. Even though they are very superficial, they can also help you compare two webservers.

It is very easy to run the benchmarks on your own hardware. Get elli, then run "elli:start_link()" and hit "/hello?name=john" with apachebench or your tool of choice.
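Concretely, that might look like the following; the port number and the `elli_example_callback` module name are assumptions based on elli's repository, so check its README for the exact invocation:

```erlang
%% In an Erlang shell with elli on the code path:
{ok, _Pid} = elli:start_link([{callback, elli_example_callback},
                              {port, 8080}]).
%% Then, from another terminal:
%%   ab -n 100000 -c 100 "http://127.0.0.1:8080/hello?name=john"
```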

Elli is only useful if you want to write an Erlang application that exposes an HTTP API. If you want raw performance, Haskell has some servers that do 300k+ rps.

Here is a very good article from Steve Vinoski who is overall a very smart and experienced guy on the topic of benchmarking Erlang webservers: http://steve.vinoski.net/blog/2011/05/09/erlang-web-server-b...


It's attitudes like this that make HN comments so dismal. Ugh.

This is an interesting tool, developed by a group of people doing interesting things, and it's been shared with the world for others to use and/or improve on.

While I'm personally not an Erlang developer, I'm grateful for others who create interesting solutions to their scaling problems. Plus, we utilize Erlang extensively at Whoosh Traffic -- anything that can inspire our developers is always welcome.

I'm sorry, but when the title says 'Performance oriented' I expect to see more than 'Hello world' as the benchmark.

1) If you aren't going to be sincere, don't apologize.

2) If your expectations aren't being met, ask for what is missing. Don't come out as an antagonist belittling the author.

Thank you for your consideration.

I can recreate your "writing directly to a socket" in a couple of lines of code in any language. So it means nothing to me.

The post, however, is about a whole new server. It's also written in an interesting language. This I might use. This I want to learn more about. I'll go out on a limb here and say that more HN people want to hear about the new webserver than about how "one can do it as fast by writing to a socket".
