Nginx now supports Websockets (nginx.com)
358 points by Jhsto on Feb 19, 2013 | hide | past | favorite | 65 comments


What nginx needs to become the ultimate platform, in my opinion, is to provide an API for interacting with WebSockets by transient processes (such as PHP). That way it acts as a surrogate pub-sub handler, whether or not your instance stays as a running process.

That'd be a dream for what I want to do. At best, we have to use Node.js as a WebSocket provider and tie it in with the PHP sessions, etc. Not as simple as I'd like.



Interesting. Any info on how well this works with PHP?


That does look intriguing. I'll have to do some research to see if it's suitable for my use case. Thanks Martin.


Ratchet (http://socketo.me/) has started to make this easier.

Regardless of the naysaying people will always do about PHP and long-running processes, an Nginx WebSockets proxy/backend protocol would be a huge hit for PHP developers, and as demonstrated by Ratchet, can certainly be made to work just fine.


I recently did a presentation on Ratchet with working demos. You can view all of the source code via the github repository:

https://github.com/cballou/php-websockets-demos

The three demos are of increasing complexity, with the third being a WTF example of adding WebSockets to an existing PHP application (in this case, a basic CRUD todo app).


Cool, browsing through your demos now!


Ratchet does look nice, but has several deployment dependencies and architectural constraints. Hopefully with nginx now adding the features to be a WebSocket proxy you could get HAProxy out of the way (unless you wanted it).


PHP simply is not suited for long-lived processes. If you only want the ability to push something to clients without polling, then maybe you could set up something very simple with Python and communicate over Redis or some other MQ.
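A hedged sketch of what's being suggested: a small long-running process holds the push connections while short-lived PHP requests publish into a shared channel. This toy in-memory hub stands in for Redis or another MQ; the class name and channel names are illustrative, not any real library's API.

```python
import queue
import threading

class PubSubHub:
    """Minimal in-memory pub-sub hub. In a real deployment this role
    would be played by Redis or an MQ, so that transient PHP requests
    can publish without holding a connection open themselves."""

    def __init__(self):
        self._lock = threading.Lock()
        self._subscribers = {}  # channel name -> list of Queues

    def subscribe(self, channel):
        """Register a subscriber; returns a Queue to read messages from."""
        q = queue.Queue()
        with self._lock:
            self._subscribers.setdefault(channel, []).append(q)
        return q

    def publish(self, channel, message):
        """Fan a message out to every subscriber on the channel.
        Returns the number of subscribers reached."""
        with self._lock:
            targets = list(self._subscribers.get(channel, []))
        for q in targets:
            q.put(message)
        return len(targets)
```

The long-running Python process would subscribe and relay messages down its open WebSocket connections; the PHP side only ever publishes and exits.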


What makes you say that? I've been running several PHP processes for weeks at a time with no problems, feeding on busy TCP connections for the whole time.


I didn't say that you can't do it.


But why do you find it unsuitable?


It is built around a model of process: take input, spit output, exit; which is not what you want for long-lived processes. There are many articles about its flaws; my favourite is this one: http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-de...


It has ugly syntax and bad standard function names, and some people code ugly things with it.

/sarcasm


A few years ago I would have agreed, but I have several PHP daemons consuming messages from RabbitMQ and their uptime is in the range of months. Throw a little supervisord into the mix to keep an eye on things and you'd be surprised how far it has come.
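For anyone curious what "a little supervisord" looks like in practice, a stanza along these lines keeps a consumer alive and restarts it if it dies. The program name and script path here are made-up placeholders, not from the original comment:

```ini
; Illustrative supervisord stanza; program name and paths are assumptions.
[program:amqp-consumer]
command=php /var/www/worker/consume.php
autostart=true
autorestart=true
startretries=10
stderr_logfile=/var/log/amqp-consumer.err.log
```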


Now that Nginx supports WebSockets, Phusion Passenger can start supporting WebSockets as well and allow all hosted apps access to them. Node.js support is experimental, and there are plans to support PHP and other languages as well.


This is great. At dotCloud we've reluctantly ported our distributed routing layer (http://github.com/dotcloud/hipache) from Nginx to Node + node-http-proxy, because we needed 1) high-volume dynamic configuration and 2) websocket support.

Problem 1 can already be addressed by combining a Redis backend and Nginx's awesome Lua scripting engine (https://github.com/samalba/hipache-nginx). Now that problem 2 is solved as well, we might be able to port Hipache back to Nginx in the future :)

Go Nginx!



This announcement is a little light on details. What does Nginx supporting WebSockets even mean? Isn't it just looking for ws:// and proxy-passing that on to your application?


As pkulak said, Nginx didn't understand the Upgrade header so you couldn't proxy http and ws connections on the same port.

To get around that, you had to either add another reverse proxy that offloaded ws:// connections, or run them on separate ports and deal with issues like corporate firewalls.


It's looking for the upgrade header in HTTP, and then, yes, proxying that connection.


Yup, nice to have this built in rather than relying on an additional layer (Varnish, HAProxy, 3rd party module, etc.)


Wow, realtime upvoting/downvoting for everyone now :-)


So in practical terms, how does one set up backends to use WebSockets? For example, suppose you have nginx in front of a bunch of Unicorn or Rainbow processes and there are two clients that have made websocket connections. Consider a chat application. One of them sends a websocket message, and the backend that receives it needs to push the message along the other websocket connection. But how does it know which backend to forward to? What is the intended idiomatic way of maintaining the necessary state?


Think of it like an HTTP request, except two-way and very long-lived. Not unlike Comet, but with a secure-by-design approach. http://en.wikipedia.org/wiki/Comet_(programming)


I've never dealt with Comet. I need the long detailed answer for complete noobs.


The old version of that wikipedia article had a bit more background explanation: http://en.wikipedia.org/w/index.php?title=Comet_%28programmi...


If I am understanding correctly the suggestion is to have a distributed hash table that any backend can lookup to find the other backend it should forward to. And since the distributed hash table is critically important persistent data I'm assuming that using something like Memcached is not a good idea? What would be advisable instead?


So hopefully heroku will soon be able to support Web Sockets instead of only xhr-polling

https://devcenter.heroku.com/articles/http-routing#websocket...

https://devcenter.heroku.com/articles/using-socket-io-with-n...


For now, they are busy just trying to support Rails apps.


Both cloudbees and apcera co-funded this feature for that reason. Heroku do have a routing mesh with more moving parts than just nginx though - so it will likely be a much harder change for them.


This. I really hope they're able to get that integrated soon. That would be awesome.


Does the backend which nginx talks to have to speak the Websocket protocol?

If this is the case, and you're running a pure TCP application like IRC, you still need a separate Websocket-to-TCP bridge application running on the server to sit between nginx and your IRC server. How is this an improvement from the status quo? ("IRC" is just an example, you can feel free to replace it with your favorite protocol.)

Granted, this change makes life a little easier for users behind outbound-restricting firewalls, since you can now multiplex both HTTP and IRC on port 80. But IMHO it would be more logical to just have nginx directly proxy the IRC server to the client-side JS over Websocket.

Then again, maybe this patch is as close as it's possible to get without major revisions to the Websocket protocol: With Websocket's non-optional framing "feature," you might need IRC-specific knowledge to translate an IRC stream into frames in a way that won't break anything.

Any Websocket experts are welcome to weigh in!


> Does the backend which nginx talks to have to speak the Websocket protocol?

Isn't that what the "supporting websockets for the proxying layer" statement means? I don't even know why you'd expect anything else to happen.

> it would be more logical to just have nginx directly proxy the IRC server to the client-side JS over Websocket.

Again, I don't see the logic there at all. There is a TCP proxy external module you can compile in (and that was how you'd get it to proxy websocket connections before). But you want nginx to proxy and arbitrarily translate between some random protocol and transport of your choice.

> With Websocket's non-optional framing "feature," you might need IRC-specific knowledge to translate an IRC stream into frames in a way that won't break anything.

Yes, framing is a feature. Why are you putting it in quotes, sarcastically implying it is a bad one?

It sounds like you just need a firewall with some rules. If you just want plain TCP connections forwarded to your backend, why involve nginx at all?


I was thinking that a prominent use-case for JS websockets would be writing a JS client for $PROTOCOL which runs in the browser, where $PROTOCOL is some TCP-based protocol like IRC that was developed years before websockets existed.

> Why are you including in quotes sarcastically

Because framing means the websocket spec can't easily support this use case.

> if you just want plain TCP connections to be forwarded to your backend

That's exactly what I want -- from a Javascript client in an unmodified browser.

AFAIK I can't get such connections in JS clients; I can only get the Websocket protocol.

I was thinking some cool hacks become possible if you could talk to an arbitrary TCP server from JS, using nginx as the middleman (with all its scalable non-blocking goodness).

Too bad Java is going the way of the dodo; Java applets could do TCP connections to the same origin back in the '90s.


> writing a JS client for $PROTOCOL which runs in the browser, where $PROTOCOL is some TCP-based protocol like IRC that was developed years before websockets existed.

Yes, that would be cool. Websockets make it doable. Framing is not as big of an issue: most TCP-based protocols (save for file streaming) already have some messaging at a higher level. In IRC you can think of the line as a message.

Almost every protocol I built on top of the TCP transport had to have framing, and I had to mess with buffers, message headers, terminators, and partially filled messages.
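That hand-rolled framing usually boils down to something like this minimal line framer: buffer partial reads, split on the terminator, keep the tail. This is an illustrative sketch assuming an IRC-style \r\n terminator:

```python
class LineFramer:
    """Reassembles a TCP byte stream into \r\n-terminated messages,
    holding back any partially received line until more data arrives."""

    def __init__(self):
        self._buf = b""

    def feed(self, data):
        """Feed raw bytes from the socket; returns complete lines."""
        self._buf += data
        # split() always leaves the trailing partial line (possibly
        # empty) as the last element, which becomes the new buffer.
        *lines, self._buf = self._buf.split(b"\r\n")
        return lines
```

With WebSocket's built-in framing, each frame already arrives as one complete message, so this whole layer disappears.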

> I can only get the Websocket protocol.

One cool new feature is WebRTC: it supports data channel connections and peer-to-peer (now you can make the web server a peer as well). So there, it seems, would be another way of streaming binary data to the server.

> I was thinking some cool hacks become possible if you could talk to an arbitrary TCP server from JS,

Yeah, that would be cool. JS code would instead have to deal with websockets or WebRTC data channels; it wouldn't open a TCP socket, bind, listen, connect, all those things. On the server side you can do anything you want. For example, I like the Web-Stomp adapter the RabbitMQ people have: you can effectively send and receive MQ (STOMP) messages from a browser to an exchange. That is cool. There are VNC viewers built with websockets and canvas. So it is doable, but I don't think its place is in nginx as a standard compiled-in feature. These can all be plugins.


The changes to nginx appear to be to support HTTP Upgrade in general, nothing specific for websockets. But I can't think of any other upgrade protocols that general purpose web browsers would give your JS code access to.

But for those of us who are writing websockets-enabled web applications and trying to run them in nginx-heavy environments, just being able to share a port is a significant improvement over the status quo!


What's the difference between this new support and the nginx-push-stream module? https://github.com/wandenberg/nginx-push-stream-module


nginx-push-stream handles the websocket connections itself and exposes a pubsub channel, so that your backend app doesn't have to worry about holding connections open. Whereas I believe the new websocket functionality allows you to proxy to websocket-enabled backend apps.


Good comparison. Thanks.


Is there any documentation on usage?


Not really, but I used this link: http://trac.nginx.org/nginx/changeset/5073/nginx to set up this config: https://gist.github.com/octaflop/4991052 for this server: https://gist.github.com/octaflop/4991187 hosted on this site: https://legionofevil.org/

Hope that helps! (IRC is also quite handy today)


Excellent!

Now all we need is WebRTC.


And for most corporations like mine to stop blocking websockets. What do you mean let some odd port through the firewall?


Websockets run on port 80 natively. It's all done through an HTTP Upgrade header that turns the connection into a websocket.

Up until this release, anyone running both HTTP and WebSockets behind nginx has had to run them on separate ports (e.g. HTTP = 80, WS = 8080) and then use TCP proxying to load balance the websocket connections. Nginx didn't natively understand the Upgrade header, but now it does, so you're free to use port 80 for everything.
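The resulting setup can be sketched with a config along these lines. This is a minimal illustration assuming nginx 1.3.13+; the backend address and location paths are placeholders:

```nginx
# Assumes nginx >= 1.3.13; addresses and paths are illustrative.
server {
    listen 80;

    location /ws/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        # Pass the client's Upgrade handshake through to the backend.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```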


That's the beauty of this nginx feature. Up until now, the easiest way to add websockets to an existing application was to put them on a separate (weird) port.

But now nginx lets you multiplex those websocket connections over good old port 80 or 443, while still supporting regular HTTP(S) on those ports at the same time.


How about serving it using SSL on port 443?


WebSockets over SSL (wss://) work just fine. Faris Chebib provided an nginx config file in #nginx earlier today. https://gist.github.com/octaflop/4991052
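The wss:// case is the same Upgrade proxying, just on an SSL listener. A hedged sketch; the certificate paths and backend address are assumptions:

```nginx
# Illustrative wss:// variant; cert paths and backend are placeholders.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location /ws/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```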


Now we need docs on how to use this support, and hopefully some tutorials on integrating it with the multitude of different languages.


This has nothing to do with languages. Nginx can now proxy WebSockets to arbitrary servers.


What were people using previously? Just plopping their node.js or whatever straight on the network?


I use python's Tornado webserver directly on the network. Don't know if it's the best idea, but seems to work fine for my purposes.


I've had good results so far with node-http-proxy:

https://github.com/nodejitsu/node-http-proxy/

Easy to set up, and has the benefit of reducing the number of processes I need to maintain (my node server and my proxy server are the same).


Hey... saw your comment here http://news.ycombinator.com/item?id=5172775 about the work you did on unrolling arrays in mongo_fdw. Any chance you can email me about it (that thread is closed). username @ gmail.com


We are using HAproxy to direct traffic and nginx only as a web server. The config looks something like this: http://stackoverflow.com/a/8640394/218413


HAProxy has had Websocket support for a while now. But other than that, you can run on the same domain without a proxy by just using a different port.


Something like varnish or haproxy. In my setup I had varnish proxying "/websocket" directly to my app server, and everything else to nginx.


There were two main options:

1. Run nginx with something like HAProxy/Varnish in front.

2. Run nginx alone, with the app server on port 80/443 and the websocket server running on a special port. The app was then proxied using the built-in http_proxy, while the websocket was handled by tcp_proxy.


I've been using HAProxy in the interim.


Varnish and HAproxy


Any ETA on SPDY support being included (instead of requiring a patch)?


Looks like the next release.

http://trac.nginx.org/nginx/roadmap

It's only SPDY/2 for now, though SPDY/3 will come eventually.


This day has finally come!


Great news!


Good news, finally! Now we just need to wait a couple of months until they iron out the initial bugs before migrating our existing apps! ;)



