

Puma vs Phusion Passenger - FooBarWidget
https://github.com/FooBarWidget/passenger/wiki/Puma-vs-Phusion-Passenger

======
evanphx
Some great points about the tooling, especially the control server. For 2.3,
the control server is going to get some much needed love that should
hopefully address the major issues.

A couple of notes: Puma has a dynamic pool, but it's common to see people
configure the pool as 16:16 in production, meaning it's statically configured
to 16 threads.

Some people have asked about using Puma to manage multiple apps recently. I'm
currently considering what that would look like. If it's done, it would likely
be a separate gem rather than being wired directly into Puma as it is now.

On the topic of time limiting requests, this is a conscious choice. Aborting a
thread or process externally is really problematic and I don't personally
think it's something that should be done casually. If a user wants to do time
limiting, they can easily use a Rack middleware that uses timeout.rb.
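As a sketch of what that could look like (the class name and the 5-second
default are invented for illustration; this is not an official Puma
middleware), a Rack middleware built on stdlib timeout.rb might be:

```ruby
require 'timeout'

# Illustrative time-limiting Rack middleware using stdlib timeout.rb.
# Wraps each request; if it exceeds the limit, returns a 503.
class RequestTimeout
  def initialize(app, seconds = 5)
    @app = app
    @seconds = seconds
  end

  def call(env)
    Timeout.timeout(@seconds) { @app.call(env) }
  rescue Timeout::Error
    [503, { 'Content-Type' => 'text/plain' }, ['Request timed out']]
  end
end

# config.ru:
#   use RequestTimeout, 10
#   run MyApp
```

Note that the caveat from the comment above still applies: Timeout aborts the
request thread mid-flight, which is exactly the behavior evanphx calls
problematic, so this is opt-in for a reason.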

The same goes for OobGC. Performing it in a multithreaded environment makes
little sense because the only out-of-band time is when all threads are idle. I
have thought of some ways to implement it in Puma, but I see OobGC as a
kludge. Better to tune your interpreter to handle your garbage load better
(something 1.9/2.0/Rubinius/JRuby can all do).

Thanks for the great comparison! I look forward to future back and forth as
we all work to raise the level of technology used in Ruby web servers.

~~~
cmer
Hey Evan,

I keep reading how Puma is meant to be run multi-threaded on JRuby or
Rubinius.

Would it be nonsense to run it in "cluster" mode (multi-process) under MRI
2.0, similar to how we'd run Unicorn, for example? Any benefits to doing that
versus just running Unicorn?

EDIT: we have plenty of RAM so that's not really a concern.

~~~
evanphx
Yeah, you can certainly run Puma in the same operational mode as Unicorn with
clustering. But what I'd suggest you do instead is create 1.5x as many workers
as you have cores in your machine, and perhaps 8 threads per worker. With that
configuration you'll get much more even performance than Unicorn, because you
won't get as much CPU thrashing from context switches between processes, and
the threads will allow a high number of concurrent requests to all make
progress.
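On a hypothetical 4-core machine, that suggestion might translate into a
`config/puma.rb` roughly like the following (a sketch, not a recommendation;
the numbers just apply the 1.5x-cores and 8-threads figures from above):

```ruby
# config/puma.rb -- sketch of the suggested cluster-mode settings
# for a 4-core machine.
workers 6        # 4 cores * 1.5
threads 8, 8     # min and max threads per worker
```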

~~~
cmer
8 threads per worker on MRI? Isn't that a recipe for disaster? Or maybe I'm
missing something?

~~~
evanphx
Why would that be a recipe for disaster? What do you think would happen?

~~~
cmer
Aren't threads on MRI evil?

~~~
evanphx
No? I don't know what you're referencing.

------
jrochkind1
I posted a showdown of various open source servers that can be run on heroku a
few days ago, including puma, which some people interested in this may be
interested in:

[https://github.com/jrochkind/fake_work_app/blob/master/READM...](https://github.com/jrochkind/fake_work_app/blob/master/README.md)

For some reason it didn't get much HN traction. But either it inspired these
recent HN posts involving puma, or it's just a coincidence -- either way I'm
glad to see puma getting more attention.

I think multi-threaded request dispatch (from either Puma or Passenger
Enterprise) is often the best way to maximize throughput in a web server doing
I/O-bound work, as the Java community has been demonstrating for a while. Any
implementation details are minor tweaks compared to doing multi-threaded
concurrent request dispatch in the first place -- so it's no surprise to see
Passenger Enterprise and Puma performing similarly.

I agree with the Phusion essay that Passenger currently has more robust
admin/management/supervision features than Puma. I hope Puma continues to
evolve, inspired by Passenger.

These features are not trivial to implement well -- I am very impressed by
Passenger's feature set, and by its robust and reliable performance.

But if you want a free/open-source solution, or one that makes sense to run
on Heroku, Puma is the only solid concurrent-request-dispatch solution right
now.

~~~
FooBarWidget
Thank you for that benchmark (even though it didn't cover Phusion Passenger
Enterprise ;) It looks very thorough.

~~~
jrochkind1
Thanks. I am personally interested in Passenger and Passenger Enterprise -- I
do run Passenger (free version) on several self-managed servers, and am happy
with it.

There are two reasons I did not include Passenger Enterprise in my benchmarks:

1. As far as I know there is no way to run it on Heroku, and I was
intentionally benchmarking on Heroku.

2. I wasn't going to pay for it just for the purpose of running benchmarks on
it.

------
jasdeepsingh
Pardon the self-promotion, but just last night I wrote a small blog post on
deploying Rails apps with Puma. The deployment part is really quite naive and
I'm working on part 2 of the post.

[http://jasdeep.ca/2013/07/deploying-rails-apps-with-puma-and...](http://jasdeep.ca/2013/07/deploying-rails-apps-with-puma-and-nginx/)

~~~
FooBarWidget
What was naive about [your deployment section], and what needs updating?

~~~
johnbellone
I think that he meant the deployment section in his post was naive, not yours
:).

I appreciate the compare and contrast.

~~~
FooBarWidget
I know, that's what I meant too. :) I meant to ask: what was naive about his
deployment section?

Perhaps he meant putting Puma directly on port 80. Although I usually wouldn't
do that, I am not entirely sure whether one _shouldn't_ do that. For example,
Unicorn is multi-process single-threaded, so it should never be used without a
buffering web server, but Puma is different. We'll need the Puma author to
make a statement about this.

I'm glad the comparison was useful to you. :)

------
trustfundbaby
For developers running a small vps or any kind of server in which memory is a
scarce resource, I highly recommend puma.

I needed about 5-6 Passenger processes in my app to service requests in a
timely fashion. Moving to Puma essentially cut the processes I needed down to
1, saving me about 300MB of RAM. And not only did it save me memory;
performance was better than with Passenger.

Granted, I miss the ease of just typing

touch tmp/restart

to reboot my app, and I had to set up Apache (and later Nginx) proxying to
get it going, but it was well worth it.

PS: here is my writeup on how you can get puma up and running with your
Apache/Passenger setup in less than an hour to try it out
[http://www.concept47.com/austin_web_developer_blog/rails/how...](http://www.concept47.com/austin_web_developer_blog/rails/how-to-try-out-puma-with-apache-proxy-right-now/)

~~~
FooBarWidget
If memory is your main concern, then Phusion Passenger Enterprise uses even
less memory than Puma. :) We have a discount for cash-strapped startups,
students and educational institutions.

~~~
trustfundbaby
yeah, but puma is free :)

------
nahname
I don't understand how people are using multithreaded ruby. Are these people
doing something clever that I am not aware of? Or is it just insane amounts of
testing?

[http://stackoverflow.com/questions/15184338/how-to-know-what...](http://stackoverflow.com/questions/15184338/how-to-know-what-is-not-thread-safe-in-ruby)

~~~
FooBarWidget
Web stuff tends to be embarrassingly parallel:
[http://en.wikipedia.org/wiki/Embarrassingly_parallel](http://en.wikipedia.org/wiki/Embarrassingly_parallel).
Which means that it's extremely easy to make a framework that's reentrant:
[http://en.wikipedia.org/wiki/Reentrancy_(computing)](http://en.wikipedia.org/wiki/Reentrancy_\(computing\)).
Reentrancy gives you thread-safety by default.

Yeah, Ruby core data structures are not thread-safe. But that doesn't matter:
you're not supposed to share many data structures between requests anyway.
The few that are shared (e.g. the Rails cache object) are explicitly
engineered to be thread-safe.

There's also the story that Ruby cannot use multi-core. That's partially
correct: on the MRI implementation, it doesn't matter how many threads you
have; only one can be active at a time. However, what you _can_ have is
multiple _processes_, each with multiple threads, and each process can use a
different core. Furthermore, the JRuby and Rubinius implementations can handle
multi-core with multithreading just fine. Both Phusion Passenger and Puma
support JRuby and Rubinius.

Maybe I'm just too familiar with Ruby and multithreading, but I don't
understand why people have trouble with multithreaded Ruby. In my eyes it's
pretty easy. If you have a library that's not thread-safe, then just don't
share that library's objects between threads, or ensure that you grab a lock
before using that object, and you'll be fine. In my opinion, the situation is
not much different in Python, Java or C++.
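A minimal sketch of the "grab a lock before using that object" approach
described above (the counter class is invented for illustration; any
non-thread-safe library object could stand in its place):

```ruby
# A Mutex serializes access to an object that is not itself
# thread-safe, so concurrent updates are never interleaved.
class SharedCounter
  def initialize
    @count = 0
    @lock = Mutex.new
  end

  def increment
    @lock.synchronize { @count += 1 }
  end

  def value
    @lock.synchronize { @count }
  end
end

counter = SharedCounter.new
8.times.map { Thread.new { 1_000.times { counter.increment } } }.each(&:join)
counter.value  # 8000: no increments lost, even on JRuby/Rubinius
```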

~~~
dasil003
> _Maybe I 'm just too familiar with Ruby and multithreading, but I don't
> understand why people have trouble with multithreaded Ruby. [...] In my
> opinion, the situation is not much different in Python, Java or C++._

I think the problem is a mix of language features and culture. Ruby does not
have a history of good thread-safe practices, and the language has some
extremely convenient features which are death for thread-safety (e.g. class
instance variables). Combine that with prolific meta-programming and a lack
of static analysis tools, and it can become very difficult to be certain that
a given application is thread-safe. As long as all developers are well-versed
and keep thread-safety front and center from the beginning of application
development, I agree it's not hard per se, but in practice that is so rarely
the case that if I knew I needed heavy multi-threading for memory efficiency
and CPU utilization, I might disqualify Ruby on cultural grounds alone (and I
say this as a full-time Rubyist who knows no language better).
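The class-instance-variable hazard mentioned above can be sketched like this
(the class is hypothetical; the point is that the memoized Array is one
mutable object shared by every thread in the process):

```ruby
# @results is a *class* instance variable: a single Array shared by
# all threads, silently, for the life of the process.
class ReportCache
  def self.results
    @results ||= []
  end
end

ReportCache.results << :request_a   # appended by one request thread
ReportCache.results << :request_b   # appended by another
# Both threads mutate the same Array. Array#<< is not atomic on
# JRuby/Rubinius, and on any interpreter the state leaks across
# requests -- per-request data belongs in instances instead.
```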

~~~
jrochkind1
I know what you mean and don't disagree in general... but for a _web
application_ specifically, the picture is quite a bit rosier.

A web application doesn't usually share much in-memory state _between
requests_ -- especially if we're talking about app-specific code and not
framework code. If it does (modifying class variables or other global state,
etc.), you've got to eliminate that or make it thread-safe, but that's not
_that_ hard to identify. A web application is very rarely going to be doing
class-modifying meta-programming _as part of request handling_ (as opposed to
on boot, where it won't be a problem).

Your actual app-specific code in a web application is highly likely to be MT-
request-dispatch safe already, and if it isn't, it's not too hard to get it
there -- and worth the effort because of the extreme throughput increases you
can get with an MT-request-dispatch app server.

Now, what can definitely be trickier is framework and gem code. ActiveRecord,
historically, has had quite a few problems with thread-safety under MT-
request-dispatch, off and on (the database connections themselves are the
shared state that's tricky to deal with, among other things). But it's gotten
a LOT better and should be fairly robust now.

I know what you mean about 'cultural reasons', but I think the Ruby culture
has been gradually changing for a while (JRuby has a lot to do with it), and
some of us hope it is becoming more MT-friendly.

I'm not saying it's trivial or guaranteed problem-free, but the potential
gains are worth it.

(I do run a rails app that uses multiple threads and ActiveRecord.)

------
wpeterson
I'm surprised no one mentioned the Rainbows app server, a multi-threaded
version of unicorn.

Unless you're running jRuby, you're limited by Ruby's green threads. If you
want to saturate all of the cores on your server, you need 1-2 processes per
core and then a thread pool within each to keep it well utilized.
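A 4x10 process/thread setup like the one described might be expressed in a
Rainbows config roughly as follows (a sketch only; Rainbows extends the
Unicorn config format, and the exact directives should be checked against the
Rainbows documentation for your version):

```ruby
# rainbows.conf.rb -- sketch of a 4x10 process/thread setup.
Rainbows! do
  use :ThreadPool          # pool of native threads per worker
  worker_connections 10    # threads per worker process
end
worker_processes 4         # ~1-2 workers per core on a dual-core box
```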

I've been running a 4x10 rainbows process/thread setup in production with good
results on AWS dual core systems.

~~~
randomdata
_you 're limited by Ruby's green threads_

Ruby (MRI) started using native threads in 1.9, but you are still bound by the
GIL.

------
VeejayRampay
Just so I understand what's going on in this article: the people at Phusion
Passenger are comparing Puma with their own offering, only to come to the
conclusion that if you've got money you should go for their paid offering,
and if you don't then it's a tie?

Not that they can't be trusted to be unbiased in their judgment, but I'd put
more trust in a more independent study.

~~~
caboteria
It certainly would have been more useful if it had a disclaimer at the
beginning indicating that it was written by a Passenger developer. It's OK to
have biases as long as you're up-front about them.

~~~
FooBarWidget
You mean the URL (github.com/FooBarWidget/passenger) and the fact that the
Github page says that the wiki belongs to the "passenger" repository, aren't
enough of a disclaimer?

The "License and price" section at the beginning is also worded in such a way
that it should be clear that the text came from the Phusion Passenger authors.

~~~
rschmitty
To be fair, I only associated the name FooBarWidget with passenger after
figuring out you are the author on the project.

Any reason why you do not keep it with the company repo
[https://github.com/phusion](https://github.com/phusion) ?

~~~
FooBarWidget
Historical. The Phusion Passenger repository started in 2008, when Github did
not yet support teams. We never moved it because there are so many links to
FooBarWidget/passenger.

~~~
mtarnovan
I think Github will automatically set redirects for you if you move a
repository around.

~~~
FooBarWidget
Really? A few months ago when I tried to move a repo, Github showed a big
warning explicitly telling me that they do not set up redirects. Maybe that
has changed now.

~~~
evanphx
Redirects are new, so probably.

------
FooBarWidget
A new section has just been added which compares memory usage.

~~~
pkmiec
I've been using passenger for quite some time and really appreciate its focus
on operational tools.

Thanks for the detailed comparison. Good to know how it stacks up against the
other options out there.

