> It’s less work for the user. You don’t have to set up Nginx.
If you’re not familiar with Nginx, then using Raptor means you’ll have one tool less to worry about.
> For example, our builtin HTTP server doesn’t handle static file serving at all, nor gzip compression.
Sounds like I would need Nginx (or another frontend server) anyway?
> By default, Raptor uses the multi-process blocking I/O model, just like Unicorn.
> When we said that Raptor’s builtin HTTP server is evented, we were not telling the entire truth. It is actually hybrid multithreaded and evented.
So, which is it? I assume the default is multi-process + evented, while a paid version offers multithreaded + evented? If so, isn't Unicorn's model of multi-process + blocking I/O pretty good as well, since the OS becomes the load balancer in that case?
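For context, the "OS becomes the load balancer" point refers to the prefork pattern: several forked workers all block in accept() on a shared listening socket, and the kernel hands each incoming connection to one idle worker. A toy sketch (not Unicorn's actual implementation; the one-request-then-exit workers are just for illustration):

```ruby
require 'socket'

server = TCPServer.new('127.0.0.1', 0)   # shared listening socket
port = server.addr[1]

# Two workers inherit the listening socket and block in accept;
# the kernel picks which idle worker gets each connection.
pids = 2.times.map do
  fork do
    client = server.accept
    client.write("ok from #{Process.pid}")
    client.close
    exit!(0)   # handle one request then exit, for the sketch
  end
end

replies = 2.times.map do
  TCPSocket.open('127.0.0.1', port) { |s| s.read }
end
pids.each { |pid| Process.wait(pid) }

puts replies.all? { |r| r.start_with?('ok') }   # => true
```

No userspace dispatching code exists here at all — the kernel's accept queue does the balancing, which is exactly why the model is so simple and robust for CPU-bound apps.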
Overall it seems they wrote a very fast web server. Kudos to that! But I don't think the web server was ever the problem for Rack/Ruby apps? Still on the fence with this one until more details emerge. :-)
I don't mean to be negative; other posters have that angle covered. But I would comment that this ongoing proliferation in prefork backends is hardly disruptive to organizations who have already made significant commitments to Ruby web apps. Our Apache/Passenger servers aren't going away anytime soon.
Quibble: most multi-process web servers use fork() for child processes, which means they can share identical memory pages.
I'll chalk this one up to the PR/marketing person probably not taking an OS course.
Still it would be nice if they really did go back and read a little W. Richard Stevens.
 - http://en.wikipedia.org/wiki/Copy-on-write
 - http://en.wikipedia.org/wiki/W._Richard_Stevens
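The copy-on-write semantics are easy to observe from Ruby itself (illustrative sketch — the page-level sharing is invisible at this level, but the write-isolation semantics are what fork() guarantees):

```ruby
# Sketch: fork() gives the child a copy-on-write view of the parent's
# memory -- pages are shared until either side writes to them.
big = Array.new(1_000_000, 0)   # allocated once, shared with the child at fork

pid = fork do
  big[0] = 42                   # this write copies only the touched page(s)
  exit!(big[0] == 42 ? 0 : 1)
end
Process.wait(pid)

puts $?.exitstatus              # => 0  (child saw its own write)
puts big[0]                     # => 0  (parent's copy is untouched)
```

Worth noting that MRI's mark phase historically wrote into object headers during GC, touching nearly every page and defeating much of the sharing — which is why Ruby 2.0 moved the mark bits out into a separate bitmap.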
This entire web server has been marketing hype since day one. I imagine they are trying to build a pro product and support company out of this.
It's a web server with event loops and some fancy memory allocation. Shouldn't Node.js have taught us all by now the perils of event loops and insanely tweaked HTTP parsers? Sure, it looks great for "Hello World" benchmarks but falls right on its face as soon as you have an app of significant size spending real time on CPU.
I also wonder how their hybrid evented/threading/process model works in the presence of a GIL (which, last I checked, Ruby still has) and in the presence of blocking socket calls (which, last I checked, both the MySQL and Postgres client APIs use).
a.) they could achieve that much more simply by bundling nginx, Unicorn, Rails, and a pre-vetted set of config files and shell scripts to tie the whole thing together, and
b.) that's the value proposition of PaaS offerings like Heroku. Heroku is pretty damn simple already - just git push your code - and you'd outgrow it around the same time as you'd outgrow the bundled slow-client spoonfeeding, so what's the value proposition of this?
Actually, it's all explained on this page: http://nikhilm.github.io/uvbook/introduction.html
1 - http://talloc.samba.org/talloc/doc/html/index.html
On Twitter, some Ruby heroes say: "Raptor is 4x faster than existing Ruby web servers for hello world applications" :)
The strong proclamations in favour of an open source project are a little strange when the code is not yet released.
However, I hope all the graphs on the home page are real, for Ruby programmers' happiness.
Are there any giveaways in the blog that wouldn't allow Raptor to run on Rubinius or JRuby?
% ping rubyraptor.org
PING rubyraptor.org (22.214.171.124) 56(84) bytes of data.
64 bytes from shell.phusion.nl (126.96.36.199): icmp_seq=1 ttl=50 time=98.3 ms
According to that page, it doesn't support the most recent Ruby versions. That might not be accurate, though.
Nginx (and the general class of highly concurrent servers) is good at handling lots of connections largely because it tries to minimize the resources (memory, process scheduler time, etc) required to manage each connection as it slowly feeds the result down the wire.
The application server generally wants an instance per CPU so that it can hurry up and crank through a memory-, cpu-, or database-hungry calculation in as few microseconds as possible, hand the resulting data back to the webserver and proceed to put the memory, DB, and CPU to the task of processing the next request.
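Concretely, that "instance per CPU" idea is a one-liner in a Unicorn-style config (hypothetical sketch — the socket path and timeout are made-up values), with Nginx in front buffering slow clients:

```ruby
# unicorn.rb -- hypothetical config sketch, not a drop-in production file
require 'etc'

worker_processes Etc.nprocessors   # one app instance per CPU core
listen '/tmp/app.sock'             # Nginx proxies here and spoonfeeds slow clients
timeout 30                         # reap workers stuck on a single request
```

The division of labor is the whole point: Nginx soaks up thousands of cheap, slow connections while each Unicorn worker monopolizes a core for a few milliseconds at a time.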
This is in contrast to the (simplified here) old-school CGI way: ancient Apache would receive a request, then fork off a copy of PHP or Perl for each one, letting the app get blocked writing to the stdio pipe to Apache, and Apache to the requesting socket — all the while maintaining a full OS process for each request in play.
Although to be fair, the PHP model doesn't require a _persistent_ process between requests (I think?). But most other platforms do.
Of course, this may be mitigated by the fact that any reasonable production environment will have a web server layer in front of the app server(s) anyway, for load balancing, fail-over, and exploit detection/prevention.