

Why Rails 4 Live Streaming is a Big Deal - ninh
http://blog.phusion.nl/2012/08/03/why-rails-4-live-streaming-is-a-big-deal/

======
amix
Rails is turning into a framework that includes everything and the kitchen
sink. Personally, I prefer to use the best tool for the job, and node.js
seems to be a much saner choice for realtime communication, since everything
in node.js is non-blocking. There are so many ways to shoot yourself in the
foot if you develop large realtime systems in Ruby (or any other language
with a lot of blocking libraries).

~~~
sjtgraham
Actually _not_ everything in Node is non-blocking. IO is largely non-blocking
by default, but there is blocking IO in Node too (the synchronous file system
functions). Not to mention you will definitely block the event loop by doing
something computationally intensive in a single tick of the reactor.

Have you ever written a non-trivial "real time" app in Ruby? I have
(<https://github.com/stevegraham/slanger>). I think Ruby is actually very well
suited to event-driven apps. EventMachine is a very mature library providing
asynchronous I/O based on the same pattern as Node. Ruby also has fibers as a
native language feature, allowing you to write asynchronous code that looks
synchronous, i.e. no nested callback hell, which in turn makes it a lot
easier to write tests.
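
The fiber trick can be shown with a toy, gem-free sketch (EventMachine's real
reactor works on the same idea, this just fakes the event loop with a timer
queue): the fiber suspends at the "blocking-looking" call, and the loop
resumes it when the result is ready. No nested callbacks in sight.

```ruby
require 'fiber'   # needed for Fiber.current on older Rubies

timers = []   # pending [fire_at, callback] pairs -- our stand-in reactor queue
log    = []

# Looks like a blocking sleep to the caller, but actually yields control.
async_sleep = lambda do |seconds|
  f = Fiber.current
  timers << [Time.now + seconds, -> { f.resume(:done) }]
  Fiber.yield   # suspend here; the loop below resumes us with the result
end

Fiber.new do
  log << "before"
  result = async_sleep.call(0.01)  # reads like synchronous code
  log << "after: #{result}"
end.resume

# The "reactor": fire callbacks as their timers come due.
until timers.empty?
  now = Time.now
  ready, pending = timers.partition { |at, _| now >= at }
  timers.replace(pending)
  ready.each { |_, cb| cb.call }
end
```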

Comparing Node to Rails is also absolute nonsense. Rails is a web framework
and Node is much lower level than that. Rails is essentially a suite of DSLs
for building web applications. Of course there are costs associated with that
amount of abstraction.

~~~
sunkencity
One thing that keeps me from considering EventMachine mature is that the
built-in HTTP clients are very crude and undocumented. Example: to get working
error handling, one needs to use an external HTTP client library like
igrigorik's em-http-request instead of the defaults. In this regard Node comes
out ahead, with better core utilities to boot. Stuff like that is very
confusing for new users and puts the whole framework into question (shipping
with an HTTP client that is not suitable for production use).

------
alexyoung
"Can Rails compete with Node.js?"

For the perplexed: Node isn't a web framework.

~~~
FooBarWidget
It isn't, but in the context of the article I was talking about competing on
the ability to support certain I/O use cases, not comparing features.

------
bascule
"Cons: If a thread crashes, the entire process goes down."

I wrote this thing called Celluloid and I can assure you this isn't true. Ruby
has "abort_on_exception" for threads, but the default is most assuredly false.

"Good luck debugging concurrency bugs."

Good luck debugging concurrency bugs in a callback-driven system!

~~~
FooBarWidget
> I wrote this thing called Celluloid and I can assure you this isn't true.
> Ruby has "abort_on_exception" for threads, but the default is most assuredly
> false.

I'm talking about CPU instruction level crashes, not language level crashes.
Things like writing to an invalid memory address or heap corruption.

> Good luck debugging concurrency bugs in a callback-driven system!

Actually I already mentioned concurrency bugs in evented systems in the
article.

~~~
bascule
"Things like writing to an invalid memory address or heap corruption."

So what you're trying to say is if the entire virtual machine crashes, you
lose all running threads.

------
edwinnathaniel
It's becoming more like... _GASP_ JavaEE _GASP_

~~~
bascule
Rails: reinventing Java one feature at a time (and that's not necessarily a
bad thing)

------
aoe
So these changes won't be available in the free version of Phusion Passenger
4?

------
parfe
> _Several days ago Rails introduced Live Streaming: the ability to send
> partial responses to the client immediately._

Would this be analogous to what PHP does if you begin writing a response
without output buffering?

~~~
FooBarWidget
Yes, you can do the same in PHP by disabling output buffering. You're limited
by the web server's concurrency model, however. Apache's mod_php only works on
the prefork MPM so your concurrency is limited by the number of Apache
processes you can spawn (which can be quite bloated because you run the PHP
interpreter inside Apache). Another less commonly used but still notable setup
is PHP via FastCGI (e.g. when using PHP through Nginx). Here you are limited
by the number of PHP-FastCGI processes you spawn.
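
For reference, the Rails 4 side of the comparison is `ActionController::Live`.
A rough sketch (API as of Rails 4.0; not runnable outside a Rails app, and the
headers must be set before the first write):

```ruby
class TicksController < ApplicationController
  include ActionController::Live   # enables response.stream

  def index
    response.headers['Content-Type'] = 'text/event-stream'
    3.times do |i|
      response.stream.write "data: tick #{i}\n\n"  # sent to the client immediately
      sleep 1
    end
  ensure
    response.stream.close  # otherwise the connection is held open
  end
end
```

Like the PHP case, each open stream still occupies a worker for its duration;
the controller just gains the ability to flush early.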

------
why-el
Pardon the ignorance, but can't this be achieved by simple Ajax requests
provided by any of the js frameworks? How is this better?

~~~
jherdman
Ajax requires long polling; this PUSHES the response to the server, thus
obviating the need for long polling.

~~~
masklinn
> this PUSHES the response to the server

Erm... it pushes the response to the client, not the server, and only pushes
after a normal HTTP request.

And of course, it also ties up a huge amount of server resources (total number
of clients = total number of workers, since each client holds its connection
open indefinitely). Phusion Passenger's docs recommend 8 workers per GB of
RAM, so hope you're not expecting many users.
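
Back-of-the-envelope math for the worker-per-connection model, using the
8-workers-per-GB guideline quoted above (illustrative numbers only):

```ruby
workers_per_gb        = 8
ram_gb                = 4
max_streaming_clients = workers_per_gb * ram_gb
# With long-lived streams, 4 GB of RAM caps you at 32 concurrent clients,
# because each connected client pins a worker for its entire lifetime.
```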

~~~
mapgrep
1. Long polling can tie up server resources too. There is a process on the
other end of that long AJAX request. The mechanism for delivering the
streaming connection to the client is orthogonal to the mechanism for handling
that connection on the server.

2. You say a streaming connection "ties up a huge amount of server
resources," but the whole point of the linked article is that this does not
have to be the case; Node.js can (when used correctly) handle loads of
connections in a single process, and Phusion Passenger is clearly trying to
evolve their model to achieve similar if not fully comparable results.

~~~
masklinn
> Node.js can (when used correctly) handle loads of connections in a single
> process

Node uses an evented IO layer, that is completely orthogonal (and thus
irrelevant) to streaming responses. You can stream responses with blocking IO,
and you can buffer responses with evented IO.
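
The first point can be demonstrated with nothing but blocking stdlib sockets:
a sketch of an HTTP chunked response streamed piece by piece, no event loop
anywhere (toy server -- it handles exactly one request and exits):

```ruby
require 'socket'

server = TCPServer.new(0)        # bind to a free ephemeral port
port   = server.addr[1]

writer = Thread.new do
  client = server.accept
  client.readpartial(4096)       # consume the request; we don't parse it
  client.write "HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n"
  3.times do |i|
    body = "part #{i}\n"
    # One chunk: hex length, CRLF, payload, CRLF -- written (streamed) now.
    client.write "#{body.bytesize.to_s(16)}\r\n#{body}\r\n"
  end
  client.write "0\r\n\r\n"       # terminating chunk
  client.close
end

sock = TCPSocket.new('127.0.0.1', port)
sock.write "GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
response = sock.read             # blocking read until the server closes
writer.join
sock.close
```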

> Phusion Passenger is clearly trying to evolve their model to achieve similar
> if not fully comparable results.

If you think that can happen, you're deluding yourself. Ruby+Rails's model
means you need one worker (OS-level; whether it's a process or a thread
doesn't matter) per connection. With "infinite" streaming responses, this
means each client ties up a worker forever. OS threads may be cheaper than OS
processes (when you need to load Ruby + Rails into your process), but that
doesn't mean they're actually _cheap_ when you need a thousand or two.

------
sergiotapia
Is this any different than what SignalR provides for ASP.Net Web Applications?

~~~
masklinn
It's got essentially no relation with SignalR. It's equivalent to using
`response.OutputStream.Write` in your HttpHandler.

If you're looking for a SignalR equivalent in Ruby, you need EventMachine.

