Let's Encrypt and Nginx – State of the art secure web deployment (letsecure.me)
290 points by llambiel on Mar 31, 2016 | 84 comments



It scared me to see that the author recommended running

  curl http://nginx.org/keys/nginx_signing.key | sudo apt-key add -
(This adds a key or keys downloaded over an unauthenticated http connection to one's Debian keyring, allowing whatever keys the network sends back to authenticate any future package updates.) I wrote to the author with a note expressing my concern.


I agree with your concern. As a temporary "workaround" I've updated the article with a SHA256 checksum of the key. I'll chase the Nginx team about serving this key over HTTPS.
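
For anyone following along, the check looks something like this (the expected digest comes from the article itself, not from here):

  curl -fsS http://nginx.org/keys/nginx_signing.key | sha256sum
Compare the output against the checksum published in the article before piping anything to apt-key add.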


We should really be verifying the fingerprint of the key itself, even if it is served over HTTPS.


Thanks for the update!


Note that it also does not limit the key to just nginx packages - if I took control of that repo, I could trick you into installing my version of, say, base-files or bash. :(


Unfortunately, it seems there's no secure way to fetch the key. The nginx team recommends checking the "web of trust" to see if the key is signed by others.


At the least, though, it could be served over https.


I also suggested that in the meantime the author of the article can provide a SHA256 checksum, so you can see if you get a different key than he does.


Stick it on a keyserver, and then ask gpg to fetch it from that keyserver with the full fingerprint. Assuming that your instructions that include the fingerprint are secure (which they have to be, else the instructions could root your box anyway), then that should be reasonable.

This does assume that gpg verifies that the key retrieved matches the ID requested, which I assume it does. Otherwise that'd be quite a serious bug.
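
A minimal sketch of that flow (the fingerprint below is a placeholder; substitute the real one from your trusted instructions):

  # fetch the key by its full fingerprint from a keyserver
  gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys 0123456789ABCDEF0123456789ABCDEF01234567
  # confirm what was actually retrieved
  gpg --fingerprint 0123456789ABCDEF0123456789ABCDEF01234567
  # only then hand it to apt
  gpg --export 0123456789ABCDEF0123456789ABCDEF01234567 | sudo apt-key add -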


The question is how to ensure you're getting the right fingerprint. If you have that, you can just as easily fetch the key using HTTP and verify it.


I covered that when I talked about the security of the instructions. The real question is how to ensure you're getting the right instructions, since they could direct you to download from a different source entirely.

If you have ensured that you're getting the right instructions, and those instructions supply the right fingerprint, then you can be sure that you have the right fingerprint.


Based on the PGP pathfinder here[0], it is likely this is a valid key. I'm only a few signatures away from this nginx signing key.

[0] http://pgp.cs.uu.nl/


The problem is that "this" changes depending on who runs your network. You see the correct key, but I might not.


Also, while apt-key might handle random input fine, I'm concerned with anything that goes "curl http://example.com | sudo ...". In this special case "apt-key add -" should avoid most problems - but I'd still prefer verifying the (possibly untrusted) gpg key as a normal user, and only then elevating to add what appears to be the correct, valid gpg key via apt-key add.
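
A sketch of that split, as an unprivileged user first:

  # download, then inspect before trusting anything
  curl -fsSO http://nginx.org/keys/nginx_signing.key
  gpg --with-fingerprint nginx_signing.key
  sha256sum nginx_signing.key   # compare against a checksum from a second channel
  # only elevate once the fingerprint/checksum check out
  sudo apt-key add nginx_signing.key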


What bugs me is how prevalent that has become.

I'm looking at you Jenkins![0]

[0] https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenki...


Jenkins at least serves the key over HTTPS. Would you instead prefer they did not offer packages for your package manager at all? I sure wouldn't; I really appreciate those packages for easy upgrades.


I would prefer if they made it easier for people to verify the gpg key fingerprint for those of us who want that extra level of security. You don't have to verify the fingerprint if you don't want to, but at least give me the option.


The Rust project also advocates this method of installing software on their download page. In their defense, however, they do offer gpg signatures for their tarballs, even if you need to dig around for a bit to find them.

Also, anyone suggesting that this method of downloading and installing software is secure due to its use of HTTPS is incredibly reckless.


There's a ton of important software that people are installing over HTTP, so using HTTPS is unfortunately already super-substantial progress. Chris Palmer gave the sad example of PuTTY a couple of years ago:

https://noncombatant.org/2014/03/03/downloading-software-saf...

(after what I think was a long time, the actual download links themselves are now HTTPS, although they're all still served -- along with the signatures -- from an HTTP page)

I'm certainly not going to defend the idea that HTTPS is enough authentication for software installations (I'm writing an article related to software transparency), but there's a pretty big bootstrap problem and infrastructure gap right now.


Just using caddy server seems a lot simpler...


+1 for Caddy. I was using NGINX for a long time, and wrote some similar scripts to make certificates for my web apps. After I switched to Caddy I had a 6x smaller config file, no more ln -s, and HTTPS without ever having to think about it!


Neat, I like that when started with no arguments/config it just serves the files in the current directory, but then you can customize it from there.

I have "alias webserver='python -m SimpleHTTPServer'" in my shell config, but I think I'll switch to Caddy.


For local development, consider looking into devd (https://github.com/cortesi/devd). It's a single binary that supports things like livereload, network throttling, routing, and reverse proxying.


Ooh, that looks pretty nice. I was using Caddy to serve the current directory, but this seems even nicer.


I'd never heard of Caddy, and it looks great! Thanks for giving me something to tinker with this weekend! :-)


Agreed, Caddy is great.


You do need to restart Caddy in order to renew your cert, FWIW


Not true - Caddy has always renewed certificates automatically, and the latest version (0.8.2) renews without restarting. Relevant change: https://github.com/mholt/caddy/commit/11103bd8d68ed9d8dcd2fc...


I have been happy with https://github.com/lukas2511/letsencrypt.sh. I am trying to get it packaged for Debian/Ubuntu and either get it into Debian proper or at least host the repo myself, to make it easier to use for the common case. Since nginx reloads the cert on a SIGHUP, zero-downtime renewals are really easy.

As for getting notified if something goes wrong I use the following in my crontab:

    10 5 * * *  root    test -e /usr/local/bin/letsencrypt.sh && /usr/local/bin/letsencrypt.sh -c > /dev/null
letsencrypt.sh outputs errors to stderr, so any errors will be sent to the root account. To get that working, do:

    apt-get install postfix
    echo 'postmaster:     root' > /etc/aliases
    echo 'root:           igor@example.com' >> /etc/aliases
    newaliases
Problem solved.


Ewwww, that renewCerts.sh is pretty crappy. Who the hell is going to check /var/log/letsencrypt/renew.log every day to see if renewing failed?

Couldn't they do something nicer with systemd and email?


The default behaviour of cron is to email the user if a job produces any output (such as errors on stderr), which seems to apply here in case of renewal failure.


But isn't the default that it would email root? I run Debian and almost never log in as root. Would all admin sudoers receive the email?


Without judgment intended, as a Linux sysadmin you should absolutely be monitoring mail to root. That is the standard place to deliver error output from unattended processes. You can easily /etc/aliases it to something else if that's more convenient.


Sysadmin/Devops here. I send all root mail to Graylog.


That does look nice. Thanks for the Graylog reference.


Graylog looks fantastic, thanks for the mention.


You'll love it. I'm pushing tens of thousands of messages per second into a cluster, and it works like a champ.



So the default behavior is to email only root unless the crontab is edited, meaning most people would never receive an email (in case of renewal failure) if they only followed the instructions given.

Otherwise mail is sent to the owner of the crontab.


If your server isn't set up to forward root's cron email to you, you have bigger problems than your Let's Encrypt certs not renewing.


A properly administered Linux system would be forwarding root's mail to a real email address unless it is monitored by another system. I've never worked in a professional environment where root mail was left unread at any point. Root aliases (excluding environments with other monitoring) are on the checklist for any basic image (server) deployment. It's a standard, well-adopted practice.


cron error reporting via email is an established solution. Why reinvent the wheel?

I'd agree that a hint regarding MAILTO= in the crontab file would be neat.
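
Something like this, with a placeholder address:

  # in the crontab: mail any job output here instead of the local root account
  MAILTO=ops@example.com
  10 5 * * *  root    test -e /usr/local/bin/letsencrypt.sh && /usr/local/bin/letsencrypt.sh -c > /dev/null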


Getting off-topic here, but whenever I do a new Debian build one of the items on my checklist is to edit /etc/aliases to add either my actual login user or a real email address (depending on the server setup) as an alias for root.


You are doing your Debian installs with the debconf priority set way too high. I always set it to "low", but I am 99% positive this is a default question during installation.


Interesting. I'll try that next time. Thanks!


This sounds like a job for Dead Man's Snitch.

https://deadmanssnitch.com/


It's a nice tutorial. The title tripped me out, though: a common webserver + HTTPS + a free certificate on Windows/Linux is "state of the art secure web deployment"? I'd hate to see what passes for average or (shudders) ancient.

In my mind, I'm seeing "state of the art" being more like a combo of Ur/Web for apps, robust implementation of OP2 web browser for client, lighttpd rewritten in Haskell, HTTPS component written in SPARK or Rust, all running on GenodeOS or CheriBSD in isolated partitions, C parts compiled with CompCert extended with Softbound + CETS, anti-fuse FPGA doing I/O offloading/mediation, and hardware done in Bluespec. That is state of the art with probably badass results. This submission is... more run of the mill. Immediately useful, though. :)


Thanks for the info on the headers. I can't believe they've issued certs for over a million domains!

Here are my notes on setting up LE on IIS, if anyone is interested; it's done using PowerShell / the package manager.

  # 1. Install (you will get some security prompts)
  Install-Module -Name ACMESharp
  Import-Module ACMESharp
  Initialize-ACMEVault
  New-ACMERegistration -Contacts mailto:somebody@example.org -AcceptTos

  # 2. Request the challenge; this is for a website currently running on IIS.
  #    'WebSiteRef' refers to the name of the site within IIS.
  New-ACMEIdentifier -Dns demo.velox.io -Alias demo
  Complete-ACMEChallenge demo -ChallengeType http-01 -Handler iis -HandlerParameters @{ WebSiteRef = 'Demo' }
  Submit-ACMEChallenge demo -ChallengeType http-01

  # 3. Create & download the certificate
  New-ACMECertificate demo -Generate -Alias demoCert
  Submit-ACMECertificate demoCert
  Update-ACMECertificate demoCert
  Get-ACMECertificate demoCert -ExportPkcs12 "C:\Users\USER\desktop\demoCert.pfx"

You can now install this on your server.


I just use https://github.com/lukas2511/letsencrypt.sh/ - a single bash script.

Add a config.sh and set up an nginx alias, then just add domains to domains.txt and have the script run via cron daily.

Finished.
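
Roughly, the two files look like this (paths and domains are illustrative; WELLKNOWN has to match whatever directory the nginx alias serves):

  # config.sh -- where challenge responses get written
  WELLKNOWN="/var/www/letsencrypt"

  # domains.txt -- one certificate per line; extra names become SANs
  example.com www.example.com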


This is the script I use too. I have a hook that automatically restarts nginx, which fires only if a cert has changed. Very simple. Works very well.
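
For anyone curious, a sketch of such a hook, assuming letsencrypt.sh invokes it with the operation name as its first argument:

  #!/bin/bash
  # hook.sh: deploy_cert only fires when a new certificate was actually issued
  if [ "$1" = "deploy_cert" ]; then
      service nginx reload
  fi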


  > sed -i 's|PasswordAuthentication yes|PasswordAuthentication no|g' /etc/ssh/sshd_config
This will not work if the string is commented out:

  > grep PasswordAuthentication /etc/ssh/sshd_config
  # PasswordAuthentication yes
  > sed -i 's|PasswordAuthentication yes|PasswordAuthentication no|g' /etc/ssh/sshd_config
  > grep PasswordAuthentication /etc/ssh/sshd_config
  # PasswordAuthentication no
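
A variant that also catches the commented-out form (GNU sed; still worth eyeballing the result):

  > sed -i 's|^#\?\s*PasswordAuthentication .*|PasswordAuthentication no|' /etc/ssh/sshd_config
  > grep PasswordAuthentication /etc/ssh/sshd_config
  PasswordAuthentication no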


The --webroot option doesn't work for my setup, so I need to shut down nginx for 2-3 seconds and use the --standalone option. I set this up as a cron job that runs every two months. It's not elegant, but it gets the job done.

Here's the modified script using certonly and the --force-renew flag.

    #!/bin/bash
    # Force-renew the "Let's Encrypt" certificates for a given domain
    # Run this as root as a BI-MONTHLY cron job
    export DOMAINS="yourdomain.com,www.yourdomain.com"
    export LOGFILE="/var/log/letsencrypt/renewal_yourdomain.log"

    echo "Stopping nginx temporarily to renvew certificates for $DOMAINS ..."
    service nginx stop

    echo "Calling /opt/letsencrypt/letsencrypt-auto certonly --standalone --force-renew -d $DOMAINS"
    if ! /opt/letsencrypt/letsencrypt-auto certonly --standalone --force-renew -d $DOMAINS > $LOGFILE 2>&1 ; then
        echo "certonly call failed, restarting nginx"
        service nginx start
        echo "LOG info:"
        cat $LOGFILE
        # TODO: email administrator...
        exit 1
    fi

    echo "certonly call succeeded, restarting nginx"
    service nginx start
Note: don't run this as a daily cron job, since it passes --force-renew...


Do you ever get problems with the socket still being in use after nginx is shut down?


Not on the N=1 times I've run the script, but will look out for this in the future.


I'm curious: why doesn't webroot work for your setup?


A dynamic script is handling all requests, so there is no "webroot" directory where you can put stuff for them to appear under /


You could quite easily add a location /.well-known rule to the server, right?


Oh yeah, I didn't know about this option. A static dir for /.well-known is a much more elegant solution than shutting down nginx... Thx for the pointer.
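
For reference, a minimal sketch of such a rule (the directory path is an assumption):

  # nginx: serve ACME http-01 challenges statically, everything else stays dynamic
  location ^~ /.well-known/acme-challenge/ {
      alias /var/www/letsencrypt/.well-known/acme-challenge/;
      default_type text/plain;
  }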


Let's Encrypt fixes the encryption problem, sure, but does anyone else feel that all we really needed was really great documentation on what to do, instead of an intrusive set of scripts?


No, because you need the automation to make the short expiration times bearable, and having those short expiration times is more secure.

Besides, the official script is just one part of the project; the others are (1) free certs and (2) a standard protocol, which you can use with other tools.


Yes, the API documentation is lacking, especially compared with what we've gotten used to from Swagger markup and Stripe's API-doc style. I have scoured for such an easy breakdown and found none. As a result, I actually just implemented a new, clear client for Let's Encrypt and have been documenting as I go.

It's made me think we should have a Swagger or API Blueprint of the spec on github that everyone can keep up to date. What do you think?


Are you referring to the server-side API the client is communicating with, or the internal API the client exposes?

The former is documented in the ACME specification[1], currently being worked on by the IETF. There are many low-level ACME libraries for basically every language[2], and a pretty decent guide on writing your own client as well[3].

[1]: https://ietf-wg-acme.github.io/acme/

[2]: https://github.com/letsencrypt/letsencrypt/wiki/Links#librar...

[3]: https://github.com/alexpeattie/letsencrypt-fromscratch


Just gonna mention that the IETF RFC viewer [0] puts you one click away from a diff between the current and previous revision of a document [1], which can be quite handy when implementing a WIP protocol. For non-draft documents, you also get a link to the RFC's Errata page at the top of the page.

[0] https://tools.ietf.org/html/draft-ietf-acme-acme-02

[1] https://tools.ietf.org/rfcdiff?url2=draft-ietf-acme-acme-02....


Thank you very much for these links. [3] is the closest to what I'm looking for, and really good! But it is still an implementation, not a spec. [1] is fine as a spec for an internet committee that has to delve into every detail for standardization. But if a client can be written in 150 lines of code, there should be a much shorter version of the spec (only a couple of pages) in a standard format. I should be able to easily write a client from a 3-page spec without looking at all the implementations.

All due respect to the client authors, but only a few clients are good. Many are very poorly written, and I do not trust them for security. I believe the cause is not having a clear, short, standard, modern spec.


> But if a client can be written in 150 lines of code, there should be a much shorter version of the spec (only a couple pages) in a standard format. I should be able to easily write a client from a 3-page spec without looking at all the implementations.

Right, but even though the protocol is simple, there are pretty much always subtleties and potential ambiguities that need to be resolved by the spec so that one can write good implementations.

> I believe the [problem] is not having a clear, short, standard modern spec.

The IETF ACME draft spec is (like many IETF specs) clear, short, standard, and modern. The entire document is only 50 pages (fewer if you reduce the font size), and (from skimming the ToC) the last ~10 of those pages are largely optional material for someone who's just reading to implement the protocol. That document shouldn't take you more than an hour to read and digest.

If you've never actually read an IETF spec, they can be intimidating, but (if you're a programmer, network guy, or backend web dev) you really, REALLY owe it to yourself to learn how to read them:

* Use the IETF's HTML RFC viewer rather than the plain text viewer.

* Until you become familiar with the way IETF standards documents are written, don't skim! They're generally information-dense documents that do NOT repeat themselves.

* Start from the beginning of the document and read through the end.

* If the spec references another document, and then starts to talk about things from that document that you don't understand and can't figure out, go read the relevant parts of the referenced document.

* If the spec starts presuming knowledge of things that you're sure it hasn't mentioned yet, backtrack a bit... you probably overlooked something.

* The ASCII-art diagrams present in some specs aren't there for fun; they're important information.

In regards to shorter documents, I'm not sure what you're looking for... just a listing of the HTTP conversations and their payloads?


My point is, the clients are generally pretty bad despite the IETF spec: lots of edge cases ignored, poor security practices. I understand the intention, but the effect is that the clients are just as opaque as the spec, and often more incorrect. Who would try to launch a startup API for wide use without a Stripe-style spec these days?

Thanks for the advice, but it's not that I don't understand how to read it; it's that I can tell other devs don't understand it, despite the good intentions of the authors.

Plenty of good clients are written for plenty of other tools, based on a much more straightforward call-and-response API spec. For example, the Hashicorp tools have a simple spec and proper clients in many languages.


The spec contains sample payloads for pretty much every resource. In fact, you can build a functional client for http-01 just by looking at the examples.

On top of that, if you're using a programming language that's at least close to mainstream, there's a very good chance someone has already written a library which handles most of the nitty-gritty details of ACME. As an example, this is all the code you need with the acme-client ruby gem in order to solve a http-01 challenge and get a cert (slightly abbreviated):

    require 'acme/client'
    client = Acme::Client.new(private_key: private_key, endpoint: endpoint)
    registration = client.register(contact: 'mailto:contact@example.com')
    registration.agree_terms
    authorization = client.authorize(domain: 'example.org')
    challenge = authorization.http01
    # serve challenge.filename with content challenge.file_content
    challenge.request_verification
    # loop/sleep until challenge.verify_status == 'valid'
    csr = Acme::Client::CertificateRequest.new(names: ['example.org'])
    certificate = client.new_certificate(csr)
    # certificate.to_pem contains your signed cert. done!


Great write-up and advice! :)


I'm really not a fan of this domain grab to write a single article with no(t a lot of?) new information, aimed at selling services from a single host. You're not the only person guilty of this, but it feels quite misleading, like the article is coming from a 3rd party.


You can create a more hardened setup by using a 4096-bit RSA key:

  /opt/letsencrypt/letsencrypt-auto certonly --rsa-key-size 4096 --server https://acme-v01.api.letsencrypt.org/directory -a webroot --webroot-path=$DIR -d $DOMAINS 
...and using the secp384r1 curve for ECDHE key exchange:

  # in your nginx.conf
  ssl_ecdh_curve secp384r1;
Arguably, the real state of the art is to use an ECDSA certificate. Let's Encrypt recently started supporting them; they offer an equivalent level of security to RSA at much lower bit lengths (a 384-bit ECDSA key is considered equivalent to a 7680-bit RSA key), and a few recent TLS vulnerabilities (like DROWN) have targeted implementation details of RSA.
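
A sketch of requesting one, assuming you generate the EC key and CSR yourself and hand the CSR to the client (with --csr, the domains are taken from the CSR):

  # generate a P-384 key and a CSR for it, then have the client sign the CSR
  openssl ecparam -genkey -name secp384r1 -out ecdsa.key
  openssl req -new -key ecdsa.key -subj "/CN=yourdomain.com" -out ecdsa.csr
  /opt/letsencrypt/letsencrypt-auto certonly -a webroot --webroot-path=$DIR --csr ecdsa.csr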


4096-bit RSA keys offer very little additional security (2048 is plenty for at least the next few years, and with a certificate that's valid for 90 days there's practically no risk - you can rotate the key rather easily if something bad comes along), but they have a fairly big impact on performance and battery life, especially on mobile devices.


Here is also my config if anyone is interested (also A+ on ssllabs.com): https://gist.github.com/alex-min/158f35f604b24e163ae9 - feel free to copy it (or suggest improvements!).

I also recommend https://sslcatch.com, which sends you a warning email if your certificate is about to expire. I have a crontab to renew it, but this can also be helpful just in case.


Isn't running this as a @daily cron job too much? I thought Let's Encrypt certs were good for 3 months. Why not @monthly, or months 0,2,4,6,8,10?


If something breaks, you might as well find out about it as soon as possible. That way you have the full 90 days to figure it out at your leisure, instead of 60 or 30.


Right. Also, I just saw that letsencrypt-auto renew will only issue new certs if there are fewer than 30 days left on the current cert.
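
Which makes a daily entry safe; something like this is a no-op until renewal is actually due:

  # /etc/cron.d/letsencrypt
  30 4 * * * root /opt/letsencrypt/letsencrypt-auto renew --quiet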


Recently went through a similar setup, but used Docker and some existing h2-friendly images. Think it's a nice way forward for deploying to production environments.

Wrote about the process here: https://clay.fail/posts/hip-http2-using-docker/


I recently built docker-gen-letsencrypt[1]. It's the same concept as what you're using, but fully automated for getting certs.

[1]: https://github.com/mikew/docker-gen-letsencrypt


That's awesome, will be updating the site to use your image this weekend. Like the support for docker-compose and the staging servers, too.


Look at how many lines we need to secure TLS connections in nginx. We need better defaults.
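
For a sense of scale, the hardening boilerplate usually looks something like this (illustrative values, not a recommendation):

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:!aNULL:!MD5';
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 10m;
  ssl_stapling on;
  ssl_stapling_verify on;
  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";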


Interesting choice of ciphers. My latest client isn't letting us use anything besides GCM right now.


"State of the art" and "cron" should probably never be in the same article.


https://github.com/containous/traefik now has native Let's Encrypt support ;)


Can someone tell Medium? They're still buying Comodo certs for their custom domains.


They probably don't want to deal with LE's rate limiting and shorter renewal periods.



