

Pushing Files to the Browser Using Delivery.js, Socket.IO and Node.js - liamk
http://liamkaufman.com/blog/2012/02/11/12-pushing-files-to-the-browser-using-deliveryjs-socketio-and-nodejs/

======
alexhaefner
This is something we looked into a while ago. There are a number of
disadvantages to this approach that were not highlighted.

(1) Base64-encoding files makes them larger, which almost always eliminates
the advantage of smaller headers. Specifically:

"Very roughly, the final size of Base64-encoded binary data is equal to 1.37
times the original data size + 814 bytes (for headers)."

Source: <http://en.wikipedia.org/wiki/Base64#MIME>
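
The 1.37x figure is easy to verify: Base64 turns every 3 input bytes into 4 output characters (a 4/3 expansion), and MIME adds a CRLF line break every 76 characters. A quick Node.js sanity check:

```javascript
// Base64 maps every 3 input bytes to 4 output characters, so the
// encoded payload alone is 4/3 (~1.33x) the original size; MIME
// additionally inserts a CRLF every 76 characters, giving ~1.37x.
const original = Buffer.alloc(30000); // 30 KB of binary data

const base64 = original.toString('base64');
const mimeLength = base64.length + 2 * Math.ceil(base64.length / 76);

console.log(base64.length / original.length); // 1.333...
console.log(mimeLength / original.length);    // ~1.368
```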

If you want to send or receive binary data and manipulate it on the client
side, XHR requests can handle binary data, which can be placed into JavaScript
typed arrays.
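
A minimal browser-side sketch of that approach (the URL is made up for illustration):

```javascript
// Request raw bytes instead of text by setting responseType;
// the response arrives as an ArrayBuffer we can wrap in a typed array.
// '/assets/texture.bin' is a hypothetical URL.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/assets/texture.bin', true);
xhr.responseType = 'arraybuffer';

xhr.onload = function () {
  var bytes = new Uint8Array(xhr.response); // zero-copy view over the buffer
  console.log('received ' + bytes.length + ' bytes');
};
xhr.send();
```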

(2) Concurrency is limited with WebSocket requests. You can only push one file
per socket at a time, so pushing multiple files concurrently means opening
more WebSocket connections. You can push files one after another through the
same socket, but that's not concurrency. On the back end, the infrastructure
to send different files through different WebSocket connections and manage the
concurrency can get really messy really quickly. With HTTP requests you can
usually make two requests concurrently to any one domain, and then you can
load balance across a set of domains.

(3) When Socket.io falls back to HTTP polling, you may end up consuming a lot
of bandwidth on headers alone.

(4) If you're working with something that has cross-domain restrictions, e.g.
a WebGL application, base64-encoded URLs will not work. Resources have to come
from a CORS-accepted domain.

In the end it's simpler, and usually better, to just push your files through
HTTP requests with their built-in concurrency.

~~~
liamk
Those are all excellent points. I think the fact that it actually increases
the file size is probably its biggest drawback. However, I wonder if zip.js
would reduce the base64 inflation factor?

~~~
nbclark
I would argue that the lack of caching is the biggest drawback, though you
could work around that by using offline storage and doing the caching
yourself.
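
One hedged sketch of that workaround, assuming the pushed payload is the base64 string; the `file:` key scheme and quota handling are illustrative, not part of delivery.js:

```javascript
// Cache pushed base64 payloads in localStorage so repeat visits can
// skip the transfer entirely. Key naming is a made-up convention.
function cachePushedFile(name, base64Data) {
  try {
    localStorage.setItem('file:' + name, base64Data);
  } catch (e) {
    // quota exceeded (commonly ~5 MB): evict old entries or skip caching
  }
}

function loadPushedFile(name) {
  return localStorage.getItem('file:' + name); // null on a cache miss
}
```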

------
rhplus
_The most apparent disadvantage would be the fact that it bypasses traditional
caching methods._

And this should be considered a fairly big disadvantage if what you're pushing
is publicly cacheable. Consider the places where an HTTP URI might be cached:
client memory, client disk, forward proxy, CDN/reverse proxy, server memory,
server disk. A WebSocket delivery mechanism would miss out on half of these
caches. Of course, if the files are private and requested infrequently by the
client, then a push mechanism might well be preferred.

 _In-browser file zipping could have a positive impact on transfer speeds_

Yes, you really should be compressing your content on the wire (although
compressed images, even in base64, might not benefit much), but I'm skeptical
that a JS gzip library could compete with the browser's native decompression
code. Has anyone done any profiling of JS gzip libraries?

------
iamleppert
Some problems with this approach:

1. Concurrency. Regular HTTP and pipelining allow the transfer of more than
one resource at a time, often many more.

2. Caching. The author mentions this, but fails to mention browser-side
caching, which is the most significant form of caching: getting a resource
without having to go out over the network at all. This could perhaps be
addressed with local storage, but his node module doesn't take that into
consideration.

3. Compression. Regular HTTP requests support gzip; I'm not sure whether
WebSockets do. An initial Google search wasn't promising, and support seems to
be experimental. He mentioned in-browser unzipping, which is interesting, but
a more standards-based approach would probably be via a content-accept header
on the WebSocket connection.

------
kwamenum86
"With zip.js files could be deflated, or inflated, within the browser."

That's what gzip is for.

~~~
CWIZO
I'm not 100% sure, but I think that WebSockets (for now) do not support
compression.

~~~
rhizome
What aspect of it _isn't_ regressive?

------
ammmir
Base64-encoding binary files is just wrong when you could just as well "push"
the files' URLs over a WebSocket connection. That way you can take advantage
of gzip compression and caching on the client.
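
A sketch of that alternative, assuming a socket.io connection and a made-up `resource` event name: the server pushes only the URL, and the browser fetches it through the normal HTTP pipeline, so gzip and caching still apply.

```javascript
// Server: announce the file's URL instead of its contents.
// 'resource' is a hypothetical event name.
io.sockets.on('connection', function (socket) {
  socket.emit('resource', { url: '/images/photo.jpg' });
});

// Client: fetch through the normal HTTP stack, so the browser cache
// and Content-Encoding: gzip handling both apply.
socket.on('resource', function (msg) {
  var img = new Image();
  img.src = msg.url; // ordinary HTTP GET; cacheable
  document.body.appendChild(img);
});
```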

It also looks like the server-side code uses the blocking fs.readFileSync()
call to read the entire file into memory... but maybe it could have some uses
for small, dynamically generated data. If so, I'm not sure a "file"
abstraction is needed :)

------
wavephorm
This is an anti-pattern. The HTTP spec already sends files to web browsers,
and it can send many files in parallel.

~~~
firefoxman1
But does it support file _pushing_? I think that's the only special thing
about this library.

~~~
shaka881
It's not pushing, the browser still has to poll - it's just enqueueing files
for transfer, right?

Even if that use case has some utility (of which I'm dubious), it still
needlessly breaks the addressability semantics of URLs. It would be much
better, in my opinion, to have the backend issue a redirect to an unambiguous
URL that refers to the static resource, then let the web server do the thing
it's good at.

~~~
firefoxman1
It only uses polling if the browser doesn't support WebSockets, in which case
socket.io would fall back to another method. Otherwise it is truly pushing.

From looking at the source, it looks like it encodes each file as a base64
data URL before sending it.

