You can do this in the root location "/" with:
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
It's also good to remember to add another header for the forwarded protocol (if you're terminating an SSL tunnel at the balancer).
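For example, a minimal sketch (the header name is the common convention, but check what your application actually reads):

```nginx
# Alongside the other proxy_set_header lines in location "/":
proxy_set_header X-Forwarded-Proto $scheme;
```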
There is nothing to stop a malicious client from adding the header themselves, and if you rely on IP lookup (i.e. Dev Mode active for 127.0.0.1) for access control you can leave yourself wide open. While I can't find the article at the moment, Stack Overflow accidentally gave admin-level access to the site because of this oversight.
So you can choose to trust only the rightmost entry, if there are several in the list.
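On the backend, nginx's realip module lets you restrict which upstream addresses are trusted to set the client IP at all. A sketch, assuming the balancer's address is 10.0.0.1 (hypothetical) and nginx was built with ngx_http_realip_module:

```nginx
# Only believe X-Real-IP when the request arrives from the
# load balancer itself; anything a client sends directly is ignored.
set_real_ip_from 10.0.0.1;
real_ip_header   X-Real-IP;
```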
proxy_set_header Host $host;
Would like to hear people's thoughts on using Nginx in "real life" for load balancing rather than Haproxy.
Unfortunately we had to switch off of it due to PCI compliance concerns, but I'd use it again in a heartbeat.
Not because there were actual issues, but because other solutions were fully audited out of the box. I'm hardly surprised that we've had more issues with those solutions than we ever had with nginx, including the time when we barely knew how to configure the thing. One of the unavoidable hazards of PCI Level 1 :( We still use it for the actual web requests quite happily.
Was the lack of a web ui (like haproxy has) ever a concern? How did you keep track of dead servers behind the LB?
The configs are pretty straightforward, but might get a little nuts if you're dealing with hundreds of servers behind the thing. I don't have to wear a sysadmin hat too frequently (thank god) but when I did it was pretty easy to deal with.
Huge fan of the fact that reloading the config would perform a configtest automatically before trying to apply the new settings. I don't know why all software doesn't do this.
I'm also doing SSL termination at it, so I don't really have any metrics on the balancing in isolation, but for moving 50-100 concurrent connections around it hasn't blinked.
I do really like HAProxy's more flexible up/down monitoring, though. In the past, we've done the trick with separate control connections that we can bring up & down with iptables to shuffle traffic around without any broken connections.
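A sketch of that trick (addresses and ports hypothetical): have HAProxy health-check a dedicated control port rather than the service port itself, so blocking the control port marks the node down while established connections keep draining.

```
# haproxy.cfg: health-check a separate control port
backend web
    server app1 10.0.0.2:8080 check port 9000
```

Then something like `iptables -A INPUT -p tcp --dport 9000 -j REJECT` on app1 takes it out of rotation without killing live connections; removing the rule brings it back.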
Did you miss Haproxy's web ui? Does nginx have any way of reporting if a server is down?
You can also use it for your application tier with Passenger, [U]WSGI, FPM, FCGI...
We also run another HAProxy instance for rate limiting for attacked sites that feeds back into the main load balancer. And this is Layer 7 load balancing including inspecting headers. Never breaks a sweat. 1.5 supports SPDY, which is the last big thing for us (though I need it in the opposite direction from other mentions, used alongside stunnel).
You can set up stunnel to terminate SSL, then append this line to the request that's sent to HAProxy, which will then add an X-Forwarded-For header from that info. This may be relevant to your interests, though: http://www.igvita.com/2012/10/31/simple-spdy-and-npn-negotia...
HAProxy works correctly with websocket backends today.
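A minimal sketch of routing websocket upgrades to their own backend (names, ports, and timeouts are hypothetical; tune to your traffic):

```
frontend www
    mode http
    bind *:80
    # Send Upgrade: websocket requests to a dedicated backend
    acl is_ws hdr(Upgrade) -i websocket
    use_backend ws if is_ws
    default_backend web

backend ws
    mode http
    # Long timeout so idle websocket connections aren't dropped
    timeout server 1h
    server ws1 127.0.0.1:8081

backend web
    mode http
    server web1 127.0.0.1:8080
```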
The VMs are each running a websocket server running as the user that will be connecting. This makes the security aspects very easy to handle. Each user can only modify their own environment and write to their own files (backed by unix permissions). Even if they root the VM (excluding hypervisor vulnerabilities) they won't be able to access any private data.
If I want to be able to hot migrate VMs between physical machines, I need some way of dynamically proxying the connections. If I had lots of IPs, I could simply let each VM have an IP address and the SSL terminator would route properly no matter where I move the VM.
Does that make sense?
It's based on NodeJS, and it's really good. I've been using it in front of three web servers serving around 800 small-to-medium business websites for the last six months and it's been fantastic.
It pulls configuration data from Redis, so you can easily do things like automating deployments, etc.
nginx addserver upstream-name 127.0.0.1:8025
EDIT: Added upstream name
Beyond that, it depends on how you're using it (HTTP load balancing, TCP only, etc.). Got any specific questions? We've been running it in production for over a year.
Generally I would say that if you're proxying web connections and need caching or the ability to do lots of complicated rewriting on the proxy side, use nginx. If you're proxying database, mail or similar traffic... HAProxy. If you don't need any caching or similar, either nginx or HAProxy depending on your application.
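For the nginx case, the whole thing is a few lines. A minimal sketch (upstream name and addresses hypothetical):

```nginx
upstream app {
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```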
HAProxy also allows you to modify balanced nodes while the server is running, and has fantastic logging once you get used to looking at it.