
GitHub's Unicorn Setup - mqt
http://github.com/blog/517-unicorn
======
cakeface
It's great to see companies sharing what they've learned from experience about
system architecture. This sort of thing is often very difficult to plan out,
and the only real way to get it right is through experimentation. First-hand
descriptions like this are a great resource if you're setting something up for
the first time.

I like that they posted their unicorn config file too!

~~~
defunkt
Also it doubles as documentation for the members of our team who aren't
familiar with this part of the system :)

------
dschobel
_When the Unicorn master starts, it loads our app into memory. As soon as it’s
ready to serve requests it forks 16 workers. Those workers then select() on
the socket, only serving requests they’re capable of handling. In this way the
kernel handles the load balancing for us._

Wasn't there a heated debate here just the other day about the prefork model?

Guess it's at least back en vogue @github.
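The mechanism in the quoted paragraph can be sketched in a few lines of Ruby (a toy sketch, not Unicorn's actual source; the worker count, port, and canned response are made up):

```ruby
require 'socket'

# Toy prefork sketch: the master binds one listening socket, then forks
# workers that all block in accept() on it. The kernel wakes exactly one
# worker per incoming connection -- the "load balancing" described above.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]

workers = 2.times.map do
  fork do
    loop do
      client = server.accept   # shared listening socket, inherited from master
      client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
      client.close
    end
  end
end

# Hit the shared socket once from the master process.
sock = TCPSocket.new('127.0.0.1', port)
sock.write("GET / HTTP/1.1\r\nHost: x\r\n\r\n")
response = sock.read
sock.close
workers.each { |pid| Process.kill('TERM', pid); Process.wait(pid) }
puts response.lines.first
```

Which worker answers is entirely up to the kernel; the master never routes requests itself.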

~~~
seiji
It's connection pooling, not a fork-per-accept web server. Each worker is
select/epoll/kqueue'ing for individual requests.

~~~
tptacek
Has any mainstream web server ever been pure demand-forked? Apache has been
connection pooled since the '90s; the second edition of Unix Network
Programming used it as a case study.

~~~
mrshoe
Exactly. Hence the "pre" in prefork. A "forking" web server (one fork per
request) would be incredibly inefficient, whereas prefork is only... slightly
inefficient. :-)

~~~
tlrobinson
How's this for incredibly inefficient?
<http://github.com/tlrobinson/wwwoosh/blob/master/wwwoosh.sh> ;)

------
blasdel
I don't understand why people insist on architectures where
otherwise-independent processes share a single socket.

You're already running a reverse proxy in front of them! There's no reason
each Unicorn couldn't be listening on a different port. Does that third layer
of local load-balancing between the HTTP proxy and the event-driven app server
actually get you anything?
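For contrast, here's roughly what the two styles look like as nginx upstream blocks (hypothetical ports and socket path, not GitHub's actual config):

```nginx
# One backend per port, proxy balances between them:
upstream mongrels {
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
}

# One shared Unicorn socket, kernel balances between workers:
upstream unicorn {
    server unix:/tmp/unicorn.sock fail_timeout=0;
}
```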

~~~
wmf
Replacing N ports with one simplifies configuration.

I never understood the complex HAProxy in front of Apache in front of Nginx in
front of Mongrel type setups that seem to be popular in the Rails world. Why
not just use Unicorn? What value is GitHub getting from having Nginx in front?

~~~
defunkt
Unicorn is not for slow clients or static assets. That's what nginx is for.
See <http://unicorn.bogomips.org/PHILOSOPHY.html> for info on Unicorn and slow
clients.

nginx also has features like ESI, serving from memcached, and rate limiting
which Unicorn does not (and doesn't need).
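As a rough illustration (a hypothetical nginx fragment, not GitHub's actual config), proxy buffering is what shields the Unicorn workers from slow clients:

```nginx
location / {
    # nginx spoon-feeds the slow client; the Unicorn worker is
    # only occupied for the fast local proxied request.
    proxy_buffering on;
    proxy_pass http://unicorn;
}
```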

------
atambo
Is there any benefit in using unicorn over passenger?

~~~
fizx
Unicorn doesn't make your site very slow for the 5 seconds to 2 minutes
(YMMV) after you deploy.

~~~
chadr
It seems to me like the Passenger guys could easily add an option so that on a
"touch tmp/restart.txt" a set of new worker processes is started before the
old ones are killed off. I imagine this would make this slowness a thing of
the past. For the record, my apps experience this momentary queueing and
slowness on restart (5 seconds max).

------
boundlessdreamz
How does Passenger handle restarts? Does it also allow a zero-downtime
restart?

~~~
joevandyk
You run "touch #{RAILS_ROOT}/tmp/restart.txt".

That will restart the rails processes. No connections are dropped, and I've
not seen any downtime.

~~~
latortuga
To clarify, my understanding is that while rails is restarting, Passenger will
queue all the requests that come in and begin processing them as soon as rails
is ready. But yeah, zero downtime, it's pretty awesome.

~~~
fizx
Sites do tend to lag for at least several seconds after the restart though.

~~~
caseyf
Yeah. We load balance across 5 Apache/Passengers and I do rolling deploys (all
in Capistrano) by removing a Passenger from load balancing, updating the app,
restarting Apache, and adding it back into load balancing with a 10 second
delay between each. We tried the Passenger touch restart.txt and that didn't
go well _at all_ when we were under load.
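That rolling scheme can be modeled as a toy loop (a pure-Ruby simulation, not caseyf's actual Capistrano recipe; host names and pool size are made up):

```ruby
# Simulate a rolling deploy across 5 backends: pull one out of the
# balancer at a time, "restart" it, and put it back before touching
# the next. Track the minimum number of backends in service.
pool = %w[app1 app2 app3 app4 app5].map { |h| [h, :up] }.to_h
min_in_service = pool.size

pool.keys.each do |host|
  pool[host] = :draining   # removed from load balancing
  min_in_service = [min_in_service, pool.values.count(:up)].min
  pool[host] = :up         # app updated, Apache restarted, back in rotation
end

puts "never fewer than #{min_in_service} of #{pool.size} backends in service"
```

The point of the rolling approach is that at most one backend is ever out of rotation, so capacity never drops by more than one server.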

~~~
boundlessdreamz
Can you share your setup please?

~~~
caseyf
I wrote a short post about our setup in March and it hasn't really changed
since then:
<http://codemonkey.ravelry.com/2009/03/10/quick-update-ravelry-runs-on/>

If you have any questions or are looking for details or something, ask away -
just stick a comment on the blog post.

------
Cornify
The Grand Unicorn is proud!

