

Why HTTP Streaming? - ab9
http://weblog.rubyonrails.org/2011/4/18/why-http-streaming

======
judofyr
I've yet to see my concerns properly discussed (how to handle exceptions):
<http://news.ycombinator.com/item?id=1671437>

~~~
tenderlove
We won't do anything to handle exceptions in 3.1. For 3.1, this is definitely
a "use at your own risk" feature. Besides handling exceptions, there are a
bunch of caveats for using this. You need to use the right web server (Unicorn
or specially configured Passenger), along with the right proxy (nginx or
Apache with special configuration) and Ruby 1.9.

I've got ideas for handling exceptions, but I just can't make it happen for
3.1. :-(
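For reference, the 3.1 feature is opt-in per action via `render stream: true`. A minimal sketch, with a stand-in `ApplicationController` stub so it runs outside Rails (the stub and `PostsController` are hypothetical, not the real Rails internals):

```ruby
# Stand-in so the sketch runs outside Rails (hypothetical, not the real API).
class ApplicationController
  def render(options = {})
    @render_options = options
  end
  attr_reader :render_options
end

class PostsController < ApplicationController
  def index
    # In Rails 3.1, streaming is opt-in per action: the layout is sent
    # first, and the body is flushed to the client as the template renders.
    render stream: true
  end
end

controller = PostsController.new
controller.index
controller.render_options  # => {:stream=>true}
```

In a real app the queries in the action stay lazy (e.g. an unevaluated relation) so they execute during rendering, after the head of the page has already gone out.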

------
technomancy
It's funny; some of the last work I did in Ruby was getting streaming support
into Rack nearly three years ago. <http://technomancy.us/109> It's a shame to
see it took so long to make it to Rails.

------
callmeed
Besides asset fetching, do any browsers start _rendering_ pages as chunks come
in?

EDIT TO ADD: ok, so how does this work if your page has js/jquery code
executing on load? (say for js-based navigation and layouts) ... are people
gonna see funky stuff before the page is fully loaded and the js is executed?

~~~
jrockway
Absolutely.

They won't do it for application/xhtml+xml, so if you want to see this in
action, try sending your page as text/html and then application/xhtml+xml.
You'll notice that the XML-based page seems glacial.

(Why does this happen? Who knows. I have been using streaming XML parsers
since like 1990...)

~~~
true_religion
I believe it's because, thanks to the structure of XHTML, you cannot validate
a page until all the content has been received.

~~~
jrockway
There are streaming XML validators.

But who cares? The browser isn't validating HTML as it's rendering it,
otherwise the internet would be one big blank page.

~~~
chc
I think that's the point: Browsers don't bother to validate HTML, but they
have to validate XHTML to know whether to pitch a fit about errors as required
by the spec.

~~~
bct
Please note that there is an important difference between "well-formed" and
"valid".

------
justincormack
Has anyone tested how much of a performance benefit this is likely to give? It
seems that it would benefit you most if you have a slow page composition layer
but generate a lot of static includes like js and css. Arguably in that case
you should be using ajax to retrieve the slower bits, or server-side caching
instead. But maybe I am missing a use case or some measurements that show it
is more generally applicable.

------
Erwin
Can anyone comment on omitting Content-Length in the response, versus using
chunked encoding? Will chunked encoding simply give you more control over when
the browser executes the content, or perhaps be more compatible with any
proxies in between? Or is it just a matter of being able to reuse the
connection afterwards, rather than having to close it?

Using a simple CGI script, both methods achieve the same result and work in FF
and Chrome - chunks sent have their script statements executed, so you can
e.g. update a progress bar as you render partial content. However, I had
trouble getting it to work with gzip; I had to turn off gzip (SetEnv no-gzip
in .htaccess) or the whole output was sent at once (this possibly has to do
with a default compression buffer size setting).
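One practical difference: a chunked body is self-delimiting, so the connection can be reused afterwards, whereas omitting Content-Length forces the server to signal the end of the body by closing the connection. The wire format is simple - each chunk is its length in hex, CRLF, the bytes, CRLF, and a zero-length chunk ends the body. A minimal encoder sketch (the `chunked` helper is hypothetical):

```ruby
# Encode a sequence of body pieces as an HTTP/1.1 chunked transfer body:
# each chunk is "<hex length>\r\n<bytes>\r\n", ended by a zero-length chunk.
def chunked(pieces)
  body = pieces.map { |p| "#{p.bytesize.to_s(16)}\r\n#{p}\r\n" }.join
  body + "0\r\n\r\n"  # zero-length chunk marks the end of the body
end

chunked(["Hello, ", "world"])
# => "7\r\nHello, \r\n5\r\nworld\r\n0\r\n\r\n"
```

Whether gzip interferes is then up to the proxy or server module sitting in front: if it buffers the whole body before compressing, the chunks arrive all at once regardless of how they were sent.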

------
guruz
How did it work before with RoR? The whole response was buffered and then sent
at once with a Content-Length header?

I wonder why "HTTP Streaming" (known for years as "Chunked Encoding") is
such a big deal now.

~~~
jrockway
I don't think they're doing chunked encoding. I think they're making their
template engine emit a stream instead of emitting the rendered page. Then,
each time the template system has some text to send, it's written to the
network rather than accumulated in a buffer that's all written at once.

The innovation is not what headers to send, it's to produce your data
incrementally instead of all at once.

~~~
jrockway
What I meant to write was that I don't think the encoding is what matters;
incremental writes to TCP sockets are easy, and HTTP is not much harder.
What's hard is making your application code work well with this.

------
warrenwilkinson
Quick question: Is this suitable for streaming video? Just wondering, thanks.

~~~
cagenut
Yes, this is how Move Networks' streaming works, how Apple's streaming works,
and Adobe just announced official support for streaming this way last week[1]
(edit: oh, and Microsoft built it into IIS too).

What they do is parallel-encode their video feed at many different bitrates;
then, as the client keeps up or falls behind on the incoming chunks, they move
up or down the scale of which chunks to send. The best part is, it works via
existing HTTP CDNs like Level 3 and Akamai, so just about anybody can stream
live video this way to as many people on the internet as they can get to
watch it, for the already-commodity cost of CDN bandwidth.

[1] - http://arstechnica.com/apple/news/2011/04/adobe-throws-in-towel-adopts-http-live-streaming-for-ios.ars
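The move-up-or-down-the-scale behavior can be sketched as picking the highest bitrate the measured throughput can sustain, with some headroom so playback doesn't stall (the tier values, headroom factor, and `pick_bitrate` helper are made up for illustration):

```ruby
# Available variant bitrates in bits/sec (hypothetical ladder).
TIERS = [400_000, 800_000, 1_500_000, 3_000_000]

# Pick the highest tier that fits within the measured throughput,
# keeping some headroom; fall back to the lowest tier if none fit.
def pick_bitrate(throughput_bps, headroom = 0.8)
  usable = throughput_bps * headroom
  TIERS.select { |t| t <= usable }.last || TIERS.first
end

pick_bitrate(2_000_000)  # usable 1.6 Mbps => 1_500_000
pick_bitrate(300_000)    # below the whole ladder => 400_000
```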

~~~
spjwebster
This is a common misconception. In fact, HTTP Live Streaming as proposed by
Apple has nothing to do with chunked encoding. The former splits the video
into multiple "chunk" files, using an m3u file as a playlist. Stream variants
are supported by having multiple sets of video and m3u files, but dynamically
switching between them based on bandwidth is left to the client.

See:

http://tools.ietf.org/html/draft-pantos-http-live-streaming-06
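For illustration, a master playlist in that draft's format just enumerates the variant streams with their bandwidths; choosing and switching between them is entirely up to the client (URIs and bandwidth values below are hypothetical):

```
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000
low/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000
mid/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3000000
high/index.m3u8
```

Each variant playlist then lists ordinary media segment files served over plain HTTP - no chunked transfer encoding involved.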

