This feels overly complex. The tutorial lets HAProxy do the SSL and Nginx the HTTP/2 (without SSL). But Nginx is still set up for HTTPS (which will never be used). Especially since Nginx can perfectly well do both. You might as well replace the HAProxy server you're using with just another Nginx server (with OpenSSL 1.0.2 statically linked) and remove the SSL steps from your application server configuration. That reduces the number of 'different' technologies and configuration standards you have to keep up with and gives you more flexibility.
What we do internally where I work is the following: our application servers all listen only on a secure internal network and accept incoming connections only from our 'gateway' server. All simple HTTP over port 80. The 'gateway' servers run just Nginx and nothing else. They staple SSL and HTTP/2 on top of the forwarded requests to application servers.
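A minimal sketch of such a gateway vhost, assuming hypothetical hostnames, certificate paths and an internal app server address (Nginx needs to be built against OpenSSL 1.0.2+ for HTTP/2 over ALPN):

```nginx
# Gateway: terminate TLS and HTTP/2 here, forward plain HTTP to the app server.
server {
    listen 443 ssl http2;
    server_name example.com;                      # hypothetical hostname

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;   # hypothetical paths
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.10:80;           # app server on the internal network
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The `X-Forwarded-Proto` header lets the application know the original request was HTTPS even though it only ever sees plain HTTP.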
Since our gateway servers are stateless and behind a floating IP we can easily swap them out. And because their task is simple we can take more risks with those servers (cutting-edge things needed for HTTP/2). Currently our gateway servers all run Debian Stretch (unstable) since we get Nginx with OpenSSL 1.0.2 for free. Our application servers run whatever stable software they require.
HAProxy is typically used as a load balancer, much like ELB. But you can also use HAProxy like Nginx as a web server. I don't do that, but I am willing to experiment.
> What we do internally where I work is the following: our application servers all listen only on a secure internal network and accept incoming connections only from our 'gateway' server. All simple HTTP over port 80. The 'gateway' servers run just Nginx and nothing else. They staple SSL and HTTP/2 on top of the forwarded requests to application servers.
Very typical setup. Although in some companies they encrypt traffic all the way through (bad if you want to make sense of your tcpdump).
Did you encounter any issues after switching to HTTP/2 or during the process of switching over?
I do see the point in using HAProxy as a load balancer, we use it too (in a different setup without SSL), but in the context of this tutorial its use doesn't seem warranted when you're already running Nginx.
We've had literally zero issues after switching to HTTP/2, and we had none when enabling SPDY in early 2015 either. Since we have integrated Let's Encrypt with automatic renewal in our provisioning, we've even gone back to old websites and retroactively enabled HTTPS just to benefit from HTTP/2, and the response from clients and users has been either positive or absent.
The most annoying part was finding a good solution for the HTTP/2 / ALPN / NPN problem. With OpenSSL 1.0.1 Nginx will negotiate HTTP/2 just fine, but over NPN only (which Chrome is dropping soon). We also wanted to avoid compiling our own Nginx with every OpenSSL update. In the end, settling on Debian unstable proved easy, and it is by far stable enough to simply run Nginx.
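A quick way to check which side of the ALPN/NPN divide a machine is on is its OpenSSL build; `s_client` only understands the `-alpn` option from 1.0.2 onwards, so the flag being recognised is itself a usable check (no connection needed):

```shell
# ALPN support requires OpenSSL 1.0.2+; first check the local build.
openssl version

# s_client only grew the -alpn flag in 1.0.2, so grep the help text for it:
openssl s_client -help 2>&1 | grep -c alpn

# Against a live server you could then verify negotiation, e.g.:
#   echo | openssl s_client -alpn h2 -connect example.com:443 | grep -i ALPN
```

If the negotiated protocol line reports `h2`, ALPN worked end to end.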
I didn't find any repositories with Nginx 1.9.5+ and OpenSSL 1.0.2 statically linked that I considered trustworthy enough. Going with Debian Stretch/unstable was possible without making any changes to our provisioning scripts (all targeted at Debian Jessie) and has proven to be more than stable enough to function as a gateway.
No issues so far with prototypebrewery.io, but it's just been a couple of days since we switched. We don't have massive traffic there and 99.9% of it comes from the newest modern browsers. Still, I hope that with the fallback to HTTP/1.1 we're safe, don't you think?
Wasn't nginx just on HN a few days ago getting completely slammed for being a less than stellar proxy? When I hear almost completely different experiences and advice from people working with a product in production, it makes me wonder how to properly assess it.
I guess people paint with a broad brush based on their experience. nginx is a pretty fantastic piece of software - it runs a significant proportion of the web. nginx retries PUT/POST requests if the application server times out. This quirk can be dangerous if you don't know about it, and nginx really should let you configure it, but it can be patched, and it only affects you if you have multiple application servers.
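The retry behaviour is governed by the `proxy_next_upstream` directive; a hedged sketch of tightening it so timed-out requests are not replayed against a second backend (upstream addresses are hypothetical):

```nginx
upstream app {
    server 10.0.0.10;   # hypothetical app servers
    server 10.0.0.11;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
        # The default is "error timeout". Dropping "timeout" means a request
        # that timed out - and may already have been processed by the first
        # backend - is not replayed, so a POST/PUT can't execute twice.
        proxy_next_upstream error;
    }
}
```

For what it's worth, later nginx releases changed the default so that non-idempotent requests (POST, LOCK, PATCH) are no longer retried unless explicitly enabled.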
HAProxy makes sense only if you need a load balancer in front of several Nginx back-end servers. With just one Nginx instance you wouldn't use this configuration. I'll actually make an amendment to the article to make this clear.
Nginx is still set up for HTTPS in the presented configuration. And why not - it doesn't do any damage and has a couple of benefits. You might still want to access Nginx nodes without the HAProxy layer, e.g. for monitoring/debugging purposes. Also, if your developers use this setup on their laptops (with Docker images), they can run a smaller stack, without HAProxy, while developing the app. IMO a very useful feature.
Mozilla has great recommendations in three categories (depending on what browsers you have to support). The intermediate recommendation will do for almost everyone. Combined with HSTS and a well-configured certificate it'll score you an easy A+ [1] on SSL Labs.
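A hedged sketch of the relevant Nginx lines for an intermediate-style profile - the `ssl_ciphers` value here is a truncated placeholder, take the exact string from Mozilla's generator for your profile:

```nginx
# Intermediate-style TLS profile (sketch); the full cipher list comes
# from Mozilla's SSL Configuration Generator, not from this placeholder.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:...';
ssl_prefer_server_ciphers on;

# HSTS with a long max-age is what pushes an A up to an A+ on SSL Labs.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```

Be careful with HSTS: once browsers have seen the header, you can't easily go back to plain HTTP for that domain until `max-age` expires.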
Since when does some random website on the internet dictate the "proper" configuration of TLS?
Hint: "A+" will ensure that a lot of old clients which don't support fancy new encryption schemes won't get your content. If you don't care about that, that's fine, but don't call the alternative "improper". It simply isn't.
With the Mozilla Intermediate configuration you lose clients that use IE on Windows XP and Android 2.3. We mostly target a Belgian audience, where those users pretty much don't exist anymore.
And still, we don't even target those browsers with our development anymore. Early this year we decided that supporting anything older than IE11 / latest Chrome / Firefox ESR / latest Safari just isn't worth it, so we don't do that anymore either.
For simple setups Caddy [1] is great. It supports HTTP/2 out of the box, and even provisions the certs for you from LE automagically. The defaults are much saner than nginx.
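A minimal Caddyfile sketch, assuming a hypothetical hostname and backend address, and using the `proxy` directive from Caddy's original Caddyfile syntax; Caddy obtains and renews the Let's Encrypt certificate on its own:

```
example.com

proxy / localhost:8080
```

That's the whole configuration: TLS, certificate provisioning and HTTP/2 are on by default, which is what makes the defaults feel so much saner than nginx's.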
I took a stab at building it from source on Ubuntu 14.04 + Let's Encrypt for an experimental site (i.e. following the config best practices mentioned elsewhere here).
A simplified but functional version of our Nginx configuration: https://gist.github.com/Ambroos/1552515b0dd2b755fe1a