You may also want to try an alternative client: https://github.com/kuba/simp_le. It can easily be dropped into a crontab and will renew certificates when necessary.
Disclaimer: I'm the author of simp_le and developer of the official client :)
The biggest usability challenge for me is that each crypto tool uses its own key store: kgpg, ssh, yubikey, openvpn, etc. Do you foresee allowing your client to pick up the private key from a smartcard like a yubikey? My goal is to centralize all the certs in a place that I will always have access to, and not have a different passphrase for each cert (which I will forget) or one passphrase for all certs (even worse): one passphrase for my cert store (the yubikey), which contains all my certs.
Right, I get that. I've been looking through their code and the draft RFC here and there over the last few days (I'm considering using it for internal applications). He mentioned yubikey, pgp, and openvpn, and it started to sound more like he was talking about supporting client use cases. I can see the convenience in using a common client key, but it seems more secure to keep things compartmentalized, particularly when it comes to mixing client and server stuff.
I could see wanting to store the Account Key on one of these devices.
edit: Was mostly thinking about smartcards, etc. Honestly, since most of the crypto operations can be performed by OpenSSL, I guess the account key can already live on any device with PKCS11 support.
The primary difference is in the underlying philosophy. I encourage everyone to read the manifesto on the main project site [0]. Most of its points counteract bad design decisions (IMO) made in the official client.
In general, the official client tries to do too many things at once. As a result, the code base is huge -- too big to be easily audited -- and a code base that large is likely to have a lot more bugs. On a similar note, letsencrypt-auto, the default installation method, pulls in the Apache plugin with all of its dependencies. And so on, and so forth...
At the same time, standard modes of operation are well hidden and/or difficult to configure. For instance, running the official client without root is technically possible, but let me know how much time it took you to figure that out ;). Likewise, scripting is pretty much impossible, because the CLI tends to ask interactive questions... The "webroot" plugin is probably the only one that could be made non-interactive, and again you have to be knowledgeable enough to figure out all the necessary flags.
simp_le tries to take the best from the official client (the "webroot" plugin -- not stealing, I authored that plugin myself) and adds some missing features that IMO should be standard. E.g. you can just put `simp_le --some --flags && this_command_will_be_run_only_if_cert_was_renewed_eg_restart_nginx` into your crontab -- this mode of operation is simply not possible with the official client.
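For the curious, that crontab line might look something like this (a sketch only -- the flags are from memory of simp_le's README and may have changed, so check `simp_le --help`; paths and the domain are placeholders):

    # simp_le exits 0 only when the cert was actually (re)issued,
    # so the reload after && runs only on renewal.
    0 4 * * 6 cd /etc/simp_le && /opt/simp_le/venv/bin/simp_le -d example.com --default_root /var/www/html -f account_key.json -f key.pem -f cert.pem -f fullchain.pem && service nginx reload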
NB: You can also turn simp_le into a standalone binary (~8MB) using the built-in PyInstaller setup [1] and distribute it to your machines without having to install any dependencies (not even Python) :)
I hope that answers your question. You can also catch me on #letsencrypt, or better yet #simp_le (both on Freenode).
Thank you for your work on this project. It is exactly what I need. I don't want the main LE package, which is bloated and annoying to use. I love LetsEncrypt for what they're providing as a free service, but I think they were a little too ambitious with their client from the start. Start small.
Nice job! I found LE's client irritating to use as well. Yours sounds clean and correct.
(I'm just amazed that LE decided to try modifying webserver config files automatically. There are soooo many ways for that to fail. It's going to take an immense amount of programmer effort for very little gain.)
It's probably motivated in no small part by wanting to help the type of person who currently isn't encrypting their application/server and should be. TLS, certificates, and PKIs are something a lot of people have a hard time with.
We're at a stage where people are capable of developing a web application, but creating a (suitable) private key, generating a CSR, and then using the resulting certificate in Apache is beyond their capabilities. Which is kind of cool in one sense but kind of scary in another.
People who have no idea how to add certs are also unlikely to know how to login to their web server using SSH or how to install a package.
For people who can SSH and install packages, I think they're better served by being shown how to add a couple of lines to make SSL work than by some magic that could screw up their web server config. After all, these people chose to learn how to login to the web server instead of just managing their site from a CMS. They would probably like to learn how to configure something like SSL.
>People who have no idea how to add certs are also unlikely to know how to login to their web server using SSH or how to install a package.
You'd be surprised I think.
>They would probably like to learn how to configure something like SSL.
I think you'd be surprised again. It's not even always about aptitude; quite often it's just laziness. The reason many of these people aren't encrypting today is that they don't want to invest the time and effort. "My copy-pasted openssl commands didn't work? Eff this."
Operations is as much a mindset as it is a skill. Some developers have it, some don't.
Webserver config modification has indeed taken an immense amount of programmer effort. It's actually pretty magical when it works well, and it's surprisingly robust, but it's still filled with horrible edge cases everywhere.
I don't understand the point of "not specifying output files". First: I just used your client and it worked -- great work, thanks for that! But the one thing I didn't understand was the -f options. Why not implement the (in my eyes much easier and more common) interface of --output-key path and --output-cert path?
Having to script that with symlinks is nothing I can't do, but it does suck a bit. Otherwise the crontab wouldn't even need to call a bash script wrapping that stuff; it would just be your client with the proper parameters. I'd much prefer that.
Got an example of how to drop it into crontab? I'm not really a Python guy, and it seems to only run under the venv, which I'm not sure how to set up from a cron job.
EDIT: Never mind, figured it out, I can just run the simp_le script directly from the venv directory.
Ehhhh, so I get why you made this, but at the same time, LetsEncrypt intentionally chose 90 days for the duration of the certs. Presumably because renewing a cert is a positive confirmation that you still retain ownership. You actively performed an action that declared: hey I still retain control of this and it hasn't been compromised (as opposed to other states your cert could be in -- abandoned, compromised, password forgotten and you were too lazy to get the cert revoked -- trust me, engineers made cron because we were lazy, especially when it comes to mundane things).
This is a roundabout way of saying that 90 days was a well-thought-out duration to issue a cert for, and the process of renewing it manually is there as a positive (rather than passive) confirmation that you retain control. These guys are doing a public service, on par with archive.org and the EFF. I'm not sure, but even though I've been using cron since before I was a teenager, I'm not throwing that entry in.
Actually, no: the default 90-day certificate lifetime was chosen such that it wouldn't be too inconvenient for manual renewal if necessary, but was intended to encourage automatic renewal: https://letsencrypt.org/2015/11/09/why-90-days.html
Let's Encrypt gives encouraging automation as a reason why they chose 90 days:
> They encourage automation, which is absolutely essential for ease-of-use. If we're going to move the entire Web to HTTPS, we can't continue to expect system administrators to manually handle renewals. Once issuance and renewal are automated, shorter lifetimes won't be any less convenient than longer ones.
That philosophy is perplexing, because system admins already have to manually handle things like software upgrades. Unlike SSL renewals, upgrades occur at an arbitrary frequency, yet admins seem to be able to cope.
I'd also contend that a 90-day expiry makes end users less sensitive to certificate change notifications; at present, if a website renews its cert after a year, I receive a pop-up and I pay attention. Receiving one every 60 days will just condition people to start ignoring cert changes, whether valid or not.
The end users should ideally not be prompted when they see a certificate renewed by the same CA. It should also ideally be a /renewal/ of the old one, and not an entirely new private key generated each time as well. Of course everything could be tuneable.
I believe these are sane defaults.
* Prompt on CA change? (Default Yes)
* Prompt on private key change? (Default No IF the cached certificate is on the old CA's revocation list.)
* Prompt on CA renewal? (Default No)
> It should also ideally be a /renewal/ of the old one, and not an entirely new private key generated each time as well
Are you sure? A new key provides quite a bit of security benefit: even if the key leaked without you noticing, three months later it won't be usable any more, once the new cert is made for a new key.
Users should be able to safely ignore certificate changes. No one should need to install Certificate Patrol to have a safe experience.
Of course, if the certificate changes to an untrusted one, the browser should flag it -- and if the server specifies HSTS, then the browser should block the user from clicking through anyway (increasing the pressure to make sure the cert is always trusted and unexpired).
> LetsEncrypt intentionally chose 90 days for the duration of the certs. Presumably because renewing a cert is a positive confirmation that you still retain ownership.
LetsEncrypt does domain validation only, so there is zero check on who owns the server. So no, that makes no sense. You can already run the official client from a crontab anyway.
>hey I still retain control of this and it hasn't been compromised.
For the most part, certificates have never provided this guarantee. What they do guarantee is that (A) the entity you're communicating with is in control of the private key for the signed certificate you expect, and (B) your communication with that entity is secure from a man-in-the-middle attack. And mostly just B if you're visiting the site for the first time.
FYI, if you don't want to install anything to try it out, you can use https://gethttpsforfree.com which is a browser-based ACME client. It doesn't ask for private keys, so you don't need to trust it.
This is awesome! Way easier than installing anything that tries to do everything for you, especially if you have a non-traditional setup.
One caveat: AWS CloudFront requires the domain key to be 2048 bits instead of 4096. Other than that, worked flawlessly for me and now www.a1k0n.net is finally SSL capable. Thank you!
I got to step 5, but nothing seems to be happening.
When I opened the developer console on Chrome, it has these errors:
Failed to load resource: the server responded with a status of 400 (OK)
index.js:834 Uncaught InvalidStateError: Failed to read the 'responseText' property from 'XMLHttpRequest': The value is only accessible if the object's 'responseType' is '' or 'text' (was 'arraybuffer').
I opened the page, then needed to close my laptop for a while before I had done the step where the domain serves a certain hash at a particular URL. Then I came back and provided the URL, and it validated, but step 5 doesn't work.
Thanks for providing the service!
edit: If I now try to do the 4th step again, I get:
Error: Domain challenge failed. Please start back at Step 1. {"type":"urn:acme:error:malformed","detail":"Unable to read/verify body :: JWS has invalid anti-replay nonce","status":400}
Ok, the error in the api call seems to be "Certificate public key must be different than account key".
I'm a layman when it comes to secure websites and certificates. I remember reading that the "free https" provided by Cloudflare meant Cloudflare owned the private keys to something and was able to see all traffic. Is that the case with Let's Encrypt? Because if it is, I don't see it as an improvement at all.
You create your own private key (or rather, the client does that for you, if you use the official one) and send them a Certificate Signing Request (CSR) which only includes your public key. The private key stays on your server.
At least, it should. StartSSL (and probably other CAs) has a prominent feature to create your key pair for you, which you need to skip in order to use a signing request. Always use the latter!
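For reference, that's two openssl commands (a minimal sketch; example.com and the 4096-bit key size are placeholders):

    # Generate the private key yourself -- it never leaves your machine.
    openssl genrsa 4096 > domain.key
    # The CSR contains only your public key and domain name;
    # this is the only thing you hand to the CA.
    openssl req -new -sha256 -key domain.key -subj "/CN=example.com" > domain.csr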
Personally, I would avoid changing SSL processes close to the expiry of your current certificate if uptime is a significant consideration[1]. If it's a toy or personal use server, then sure, learn a new experimental solution and save some money.
If you have problems with letsencrypt.org, there are cheaper options for a domain-validation certificate than $150. namecheap.com generally has competitive prices on various types of certs from various issuers and is pretty easy to use; startssl.com has free domain-validation certs, but the experience can be a bit harrowing the first time around (make sure you keep a copy of everything in a safe place -- you don't want to screw up the idiosyncratic process that StartSSL uses).
On production servers, it's always better to use a tested process rather than winging it. This is doubly true when you have a hard deadline like a certificate expiration hanging over your head.
[1] in fact, I don't even like to let certs get close to expiry even when I'm not changing processes, since missing the deadline can cause so much unnecessary grief
edit: A bunch of little tweaks, nothing that changed the main thrust of the comment.
This website is agnostic to the server setup. All it does is ask you to host a file at a specific endpoint on your domain on port 80. It has examples of how to do this with Python, or file-based on nginx and Apache, but there's nothing stopping you from doing your own configuration to host the file in your own way (see the sketch below). Hope that helps!
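The file-based approach boils down to something like this (a sketch; TOKEN and THUMBPRINT stand in for the values the site gives you in the earlier steps, and the webroot path is a placeholder):

    # Put the challenge response where the CA will look for it.
    mkdir -p /var/www/challenges/.well-known/acme-challenge
    echo "TOKEN.THUMBPRINT" > /var/www/challenges/.well-known/acme-challenge/TOKEN
    # Any web server on port 80 will do, e.g. Python's built-in one
    # (root is needed to bind ports below 1024).
    cd /var/www/challenges && sudo python -m SimpleHTTPServer 80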
Have you used this recently? I just tried it 3 times, and every step succeeds until after I've successfully completed step 4, then step 5 displays a failure message that tells me to go back to Step 1 again.
I didn't realize you were the author! I figured out that I made a mistake by using the same key pair for Steps 1 and 2. The LetsEncrypt API returned an error that told me to use a different key for the CSR, and once I did that it worked.
Reading back through your instructions, I don't know how you could be more clear that ACCOUNT.KEY and DOMAIN.KEY should be different. It's just my fault for not reading slowly enough. :)
Thanks a bunch for making this tool, it made everything simple.
I don't get it. What is the domain key here?
I tried to read the instructions, but all of them said that the signing should be done with the account private key.
Ok, I got confused because I tried to simultaneously read how to make the certificate using the Amazon AWS documentation, and it seems I skipped one crucial part.
Did you figure it out? You need two different keys: the account.key and the domain.key. For most of the steps you will sign with the account.key, but the Certificate Signing Request is signed with the domain.key.
this was super easy! i just got HTTPS set up through your site for an API backend I'm working on, and the hardest part was moving nginx off of dokku so that I could get it to serve static files. thanks!
Happy to see this project hit public beta! I've deployed Let's Encrypt on a couple of side projects during the last month or so, and my experience has been mostly positive.
The official client still needs some work, especially in terms of auto-configuration on apache, nginx and others, but it's getting there. Some say it's become a bit bloated, which is true to a certain degree, but probably necessary to achieve the goals they have set for it.
Luckily, Let's Encrypt is based on an open specification (ACME) and it's really easy to implement a custom client. There are already more than 10 client implementations out there[1], all created with different goals in mind - anything from a Ruby gem to simple scripts that get your own CSR signed. If you're not running your typical LAMP or LEMP stack, and don't want to run the official client (which is more of a certificate manager requiring root access), that's definitely something to look into.
Note that if Windows XP support is relevant for your use-case, you might want to hold off. There's currently a problem with how XP deals with name constraints, which means any application using Windows XP's SSL API (I believe it's called schannel?) won't work - for example Internet Explorer and Chrome. This might get fixed in the future[2]. Hopefully, that's not relevant to you. :)
Except on my system the letsencrypt command did not work. It failed with an "Operation not permitted". So I edited the webroot.py file, and commented out line 108 that said:
    # Remove execution bit (not needed for this file)
    os.chmod(path, filemode & ~stat.S_IEXEC)
> They keep trying to push the idea that letsencrypt should be ran as root.
When you say it that way, it sounds like there's something untoward going on. ;)
From what I understand, the official client can bind to port 80 to do Basic HTTP verification. This requires root privs. The official client can also update many HTTP server config files. I guess you don't need to be root to do this, but it does remove a command line flag. LE is designed to be stupidly simple, but -as you've discovered- it does let more technical users run it in safer modes of operation.
> Except on my system the letsencrypt command did not work. It failed with an "Operation not permitted".
Odd. If I'm reading the code correctly, it looks like you have to have write and create privs on 'path', so it's odd that you wouldn't also be able to remove the execute bit.
Regardless, would you file a bug about this or -at least- bring it up on the mailing list? It's possible that this is user error, but if it's not, I expect that it's something the LE guys would like to hear about.
I'm the nth author of an ACME (Let's Encrypt) client. It's a single-binary Go client which you can build and upload to your server. It's designed to work like "make"; you tell it what hostnames you want certificates for, and it tries to satisfy those requirements.
It can install a cronjob automatically for autorenewal, and the authorization process doesn't require downtime.
The official Let's Encrypt client has an extremely large dependency graph, and using the client requires server downtime, since it takes over port 80. Can either of these things be improved?
Yes, this greatly annoyed me too, especially given the 3-month expiry time. https://github.com/diafygi/acme-tiny is a tiny alternative client without any dependencies (except Python itself), and you can use any web server, including your usual running one, to serve up the challenge responses.
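Usage is roughly as follows (paraphrased from the acme-tiny README; paths are placeholders, and --acme-dir is a directory your running web server already exposes at /.well-known/acme-challenge/):

    # Signs an existing CSR; challenge files are written to --acme-dir and
    # served by your normal web server, so nothing has to be taken down.
    python acme_tiny.py --account-key ./account.key --csr ./domain.csr --acme-dir /var/www/challenges/ > ./signed.crt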
I was just clarifying for other readers. I looked through it because I've been working on some Python crypto stuff for a while, so I was curious what it used since the statement above was that it only required Python.
I think you should double check the taking over port 80 thing. There are a number of authenticators that do not require taking down your existing webserver. I've used the webroot authenticator to get several certificates without taking down anything.
Not supported on all platforms though, on my FreeBSD box the only option it gave me was to take down the web server. Maybe it's possible with manual config?
I can understand some of the need for this. On the other hand you could make it trust all privileged ports, which implies root access - but this is a separate concern for other users.
You installed py27-letsencrypt from ports? I posted a command in here that I used on FreeBSD, and it worked fine to register a certificate without disturbing the running web server. The webroot authenticator does work on FreeBSD.
Yeah, this gets really tedious though, because you have to do special configuration for every vhost you want SSL on; nginx doesn't appear to offer a way to do global location directives.
Maybe I'll just write a firewall rule to redirect traffic from letsencrypt IPs over to the standalone client.
I think you can create an include file with all the SSL configuration (cipher lists, Let's Encrypt support, etc.) and just include it once for every vhost that uses SSL, as sketched below. It probably won't save time initially, but it should make changes like this faster in the future.
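A sketch of that approach (the snippet path and webroot are hypothetical):

    # /etc/nginx/snippets/letsencrypt.conf
    # Shared ACME challenge location, included from every vhost.
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
        default_type text/plain;
    }

Each server block then only needs one extra line: include snippets/letsencrypt.conf;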
Yep. The option is called 'webroot', and using letsencrypt-auto (the official client) I deployed this on a few servers and set up cron to renew automatically. It's not very well documented (at least it wasn't when I last checked).
For those concerned with the official client requiring `sudo`: there are already many alternative clients that are compatible with the Let's Encrypt server, mine included.[1]
I made my own client because I wanted to know exactly what goes on during the certificate issuance process. I tried to make the code as simple as possible, so take a look if you have time![2] It's a simple single-file script.
Yes, correct. This is using the simp_le client (as mentioned in https://news.ycombinator.com/item?id=10672006). The client is easier to use and developed by the same author as the official client. Works wonderfully.
Looks like Certificate Transparency isn't on the roadmap? What's the holdup on that? Seems like a perfect match for something automated like Let's Encrypt.
I have seen many howtos recommending a monthly cronjob for the certificate renewal on the first day of the month at midnight (0 0 1 * * or @monthly). It is probably better to renew the certificate on a random day/time (e.g. 30 4 5 * *) to avoid putting excessive load on their servers.
PS: I also made a cron-callable script which checks the expiry time of the cert before telling letsencrypt to renew. It checks whether the cert was actually renewed afterwards, and echoes to stderr if the renewal didn't take.
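The expiry check itself can be done with openssl alone. A sketch (not the parent's actual script; the renewal command is a placeholder, and 2592000 seconds is 30 days):

    #!/bin/bash
    CERT=/etc/ssl/example.com/cert.pem
    # -checkend exits 0 if the cert is still valid that many seconds from now.
    if openssl x509 -checkend 2592000 -noout -in "$CERT"; then
        exit 0  # more than 30 days left, nothing to do
    fi
    /usr/local/bin/renew-cert.sh  # placeholder for your client invocation
    # Verify the renewal actually took; complain on stderr if not.
    openssl x509 -checkend 2592000 -noout -in "$CERT" || echo "renewal of $CERT did not take" >&2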
This is among the most exciting things going on in the web world, for me. It's a pretty dramatic change that now every website can be encrypted, by default, and in a secure(ish) fashion (it doesn't really do much for proving identity, but SSL has been broken for that for years anyway).
I suspect integrating this has been the most requested feature for Virtualmin for the past several months (and we're about to roll it out, probably next week). For whatever reason, SSL is just always intimidating for people...even when it's been almost entirely automated, the back and forth between the CA and the server and dealing with private keys is a deal-breaker for a lot of non-technical users, so many of our users who are new to web server management have problems with SSL. It follows close behind DNS in terms of how much confusion it causes.
Anyway, I love that Mozilla and others took the initiative to pull this together, and used their not insignificant clout to push it to completion.
What else is exciting? This is the most exciting thing for me, which is actually pretty sad. It's 2015, and HTTPS still isn't widespread, and something like this wasn't done before? It's a good little new thing, but it doesn't really push anything forward. It feels like we're still where we were a decade ago.
PHP 7 is a pretty big deal for our users, though not particularly to me (though our websites run on Drupal, WordPress, and Mediawiki so we will get some performance benefit from the change).
It does seem ridiculous that something like Let's Encrypt didn't happen sooner. But, now that it's finally here, I'm excited about it. I like that we can also expect mail to get more widespread encryption because of this, as well.
It still might get annoying (for both sides) if you request thousands of them, + you might not want to publish a list of all valid ones. Example: The issue came up in relation to sandstorm.io, which uses (for security reasons) a subdomain for every document that exists on a server.
Perhaps I'm ignorant, but I don't see how the LE guys would be annoyed about thousands of requests for TLS certs from a single user. The system is automated, after all. :)
> + you might not want to publish a list of all valid ones.
I assume that you mention this to illustrate a scenario where certs with a bunch of SANs is not a solution to the problem? If you weren't, does LE do something like publishing a list of all of the domains for which they have issued certs?
> but I don't see how the LE guys would be annoyed about thousands of requests for TLS certs from a single user. The system is automated, after all. :)
We have to actually run a complicated server that does things with an external Hardware Security Module. CPU time, disk space, and bandwidth all cost money, and there's a finite amount of money we can spend on resources :)
Thus, rate limits. That also helps keep latency low for most users, and prevents DDoSing.
I looked into it a while ago (it would be fine for me if it worked), but I think the problem I came up against was that, using virtual hosts in Apache, I wasn't sure I could use two different certs for a.mydomain.com and b.mydomain.com. If I'm wrong, please tell me!
Yes you can, even on the same IP address with a technique called SNI. It's built into Apache and all common web browsers so you don't even have to worry about it.
Sounds good; do I need to change my Apache config? Currently I have a bunch of port-80 virtual hosts that permanently redirect to their https/443 counterparts, which are also virtual hosts, and those all offer the same wildcard *.mydomain.com certificate. Or will Apache just do SNI if I use a bunch of different certs?
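From what I understand, you mostly just give each 443 vhost its own certificate directives and Apache handles the rest via SNI. A sketch (names and paths are placeholders):

    <VirtualHost *:443>
        ServerName a.mydomain.com
        SSLEngine on
        # Apache selects this vhost -- and therefore this cert -- by the SNI hostname.
        SSLCertificateFile    /etc/ssl/a.mydomain.com/cert.pem
        SSLCertificateKeyFile /etc/ssl/a.mydomain.com/privkey.pem
    </VirtualHost>
    <VirtualHost *:443>
        ServerName b.mydomain.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/b.mydomain.com/cert.pem
        SSLCertificateKeyFile /etc/ssl/b.mydomain.com/privkey.pem
    </VirtualHost>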
According to this comment, [0] it looks like wildcard certs haven't been ruled out, but have been deferred due to rather thorny issues regarding getting automated validation right.
*each host, not site. But yeah, I agree with you; I think just registering for all active hosts should be the standard (because if you own example.com, you don't necessarily own waikiki.example.com).
How does Let's Encrypt handle possible phishing domains?
Even if there's zero mitigation I think the benefits will outweigh the downsides, but I wonder if there's anything that stops a criminal from registering a domain that is very similar to, say, that of a bank?
I know from experience (ethical hack) that the traditional authorities won't easily let you register 'suspicious' names like: <bank>-<name>.com where the original domain is <bankname>.com. Or something like that.
You are misunderstanding the purpose of SSL certificates. An SSL certificate certifies that a specific private key belongs to the owner/operator of a specific domain (or so it is intended), not that the domain is trustworthy. It's only about identity, not about trustworthiness, which is how it should be: it's not up to some private business to decide who is trustworthy and who is not.
I'd like to know more about this. When you buy a domain-validated SSL certificate (one that costs $5-10), for which the process is completely automated, does the issuing authority really check or care about which domain it is being used on? Does a human (or a program) check the "suspicious factor" of the domain?
No. It's currently easy to register an SSL certificate for any domain, even if that domain is similar to the name of another. The main reason this was a "deterrent" to phishers is that generating tons of these was expensive.
The phishers still have to front the cost for the domain itself, so this really isn't going to increase the number of phishing domains. It may increase the number of phishing domains with SSL, but the purpose of Lets Encrypt is to encrypt everything -- not just "official domains"
No, you are wrong. Perhaps it's not the case everywhere. But like I said, from personal experience I know that certain types of domains are checked. I tried and failed to register a certificate for a phishing domain that masqueraded as a banking website.
Whether or not this was originally the point of SSL, this is how many non-technical people decide whether to trust a page: by looking at the lock in their browser.
> No, you are wrong. Perhaps it's not the case everywhere. But like I said, from personal experience I know that certain types of domains are checked. I tried and failed to register a certificate for a phishing domain that masqueraded as a banking website.
I never said it's the case everywhere. I said it's easy to register an SSL certificate for basically any domain you actually own, which is true. Basic SSL certificates are not designed to provide extended validation (there are EV certificates for that); they are designed to identify that domain.
That's very overblown. Any CA in any place on the globe is in danger of being grabbed by the balls by some shady state agency. That's the CA problem.
Doing this at huge scale is not possible though without people noticing. Also one can pin certificates in some situations. Let's Encrypt makes it easy for us people to put an end to mass surveillance.
Unless I'm mistaken they're also fundamentally misunderstanding SSL/TLS.
As far as I can tell, LE never sees your private keys. A Certificate Authority signs your public key, so no, the NSA can't coerce LE to give up your private key because LE never sees it to begin with. Could the NSA coerce LE into signing one of the NSA's public keys under your Common Name (that is, coerce them into issuing rogue certificates for "national security" use)? Certainly, but they could do this before, with any already existing CA.
What I would really like is wildcard certs for internal-only use. For now, as I understand it, the only way would be to temporarily port-forward so the validation can reach the internal server, and close it up afterwards... Certs are not just for internet websites; they are important for intranets too.
Dumb question time: why would IdenTrust, part of whose business is selling SSL certificates, cross-sign for Let's Encrypt, whose business is giving them away for free?
I know this is late, but FYI: it seems that certificates issued by Let's Encrypt are not as widely accepted as those from other commercial providers.
For example, by trial and error I found that the webhook APIs of both Mandrill and SendGrid did not recognise the Let's Encrypt certificate (although Google Chrome did recognise it). When I switched to a certificate issued by Namecheap, both Mandrill and SendGrid worked.
Haven't hit the limit yet, but I have a small bash script which will revalidate them.
I have a TODO to revalidate manually once when it is near expiration. If the script works, I will probably throw it in a cronjob.
Anyhow, we can always switch the Sphinx theme, and your comment sounds more like a complaint about Sphinx in general (which I don't happen to agree with, but whatever).
For those already using Let's Encrypt since the closed beta: do not forget to remove the `agree-dev-preview` flag, as newer client versions seem to throw up if it's still set. I had `agree-dev-preview = True` in a config file and got an error about True being an invalid value.
Ignorant question: If they are making getting a certificate easy for everyone, what is to stop "bad guys" from getting certificates for their sketchy sites? I usually look to the green "https" in my uri bar for reassurance when I'm on an unusual site.
Nothing will. Checking for https to validate whether the author of a website has malicious intent is wrong; SSL is not intended for that purpose. That's the purpose of an EV cert, because it requires a company to prove its identity, so the very least you can do is look for the green bar / company name in the address bar.
So then why is it important? What is the argument for encrypting all web traffic? Does it act as a sort of camouflage for the actually important encrypted traffic?
    /root/.local/share/letsencrypt/bin/letsencrypt certonly --webroot -w /var/www/example.com/public -d www.example.com -d example.com

(uses the public directory for the ownership check, and creates a cert for www.example.com + example.com)
Then in your /etc/nginx/sites-enabled/example.com:
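Something along these lines (a sketch; /etc/letsencrypt/live/... is where the official client stores issued certs, with the directory named after the first -d domain):

    server {
        listen 443 ssl;
        server_name example.com www.example.com;

        ssl_certificate     /etc/letsencrypt/live/www.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;

        root /var/www/example.com/public;
    }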
I'm having trouble finding where it specifies what permissions I need to use Let's Encrypt. Can I get a certificate for my subdomain even if I don't control the full domain?
Yes, thank god I don't have to manually contact anyone like StartSSL or even provide real contact information for this. It's just what it says - domain validation, nothing more.
Interestingly, StartSSL did far more verification for their free certificates than other providers I used for paid certificates, Comodo, GlobalSign, AlphaSSL, etc.
Sounds fine for shopping, online banking, user authorization. But for every website? If I'm a blogger/publisher or have a brochure type of website, I don't see the point of the extra overhead.
Update: Thanks to those who answered my question. You pointed out some things I hadn't considered. If blocking the injection of invisible trackers, JavaScript, and ads is what this is about for websites without user logins, then it would help to explicitly spell that out in marketing communications to promote adoption of this technology. The free-speech argument is not as compelling to me, though, but that's just my opinion.
Without HTTPS, any MitM could inject ads, malware, or simply manipulate any content on your blog. TLS isn't just useful to encrypt private data, it also makes sure what you see is what the site owner wanted you to see. With http/2, the overhead is minimal and with TLS 1.3 it might soon be gone completely (since it's probably going to add a mode that avoids multiple round trips for the initial TLS handshake; encryption itself isn't really an issue nowadays with AES-NI, etc.)
And this isn't a theoretical threat either, actual ISPs have been injecting adverts, trackers, and other content into third party websites. Even in the US.
They still do that; they haven't been sued, and they haven't lost. One of those articles is from May of this year. And as far as I know, they're still injecting trackers into the HTTP headers of mobile traffic to this day. It is opt-out.
I'm not going to waste my Friday night looking up old lawsuits to save a few HN reputation points. But I can tell you, I remember first reading about this stuff ~1999 when ISPs wanted to get their content in front of the Internet, but I don't remember the exact details. I've been following Boardwatch, WIRED, Techdirt, Digg, Slashdot, TechMeme and TechCrunch since then and consider myself relatively informed. I thought we were beyond this by now, 15 years later, but apparently I was wrong!
Well if you're blogging about something important, you don't want the government to MITM your page and edit it without your permission. Imagine if the government did that for a major news site - lots of control.
Nope, but there is no reason to think that a sufficiently motivated and well funded party wouldn't be able to pull it off. If governments can build things like stuxnet then a project like this should be relatively straightforward.
It should be the default because if only those people who need the protection use crypto, they stick out like a sore thumb, negating much of the protection, if not actually increasing the risk. If only political activists (say) use GPG, then the easiest way to get rid of the opposition in a country is to round up everyone that's been seen by deep packet inspection machines sending a GPG-encrypted message, which is rather trivial to automate.
Also, you just don't have a clue who is watching your traffic or what they are using it for, and machines are only getting more powerful, enabling ever more advanced analysis of your communication. Think of someone intelligent, with a brain, watching everything you do -- or rather watching everything everyone does, but with enough intelligence to pay as much attention to you as a single person watching a single person could. That's probably not an accurate model (yet), but still probably closer than what you imagined. Imagine a representative of your internet provider or the government ringing at your door: if you wouldn't let them in to sit next to you and follow you wherever you go around the clock, you would probably also prefer encrypted communication, if you understood what one can do with your internet traffic.
Well, there is very little cost for what it offers. It takes developers a few days, or more likely now a few hours, to set up, and it only serves the visitor better. It affords the visitor some level of trust that the site hasn't been tampered with and that their login credentials aren't being siphoned off, for example.
If you are the author of a blog (with comments disabled) and you don't care whether your message is manipulated, then that's your choice. But before you know it, vanilla HTTP will be blocked in browsers, and you'll need to make the change anyway.
Personally, I do every project through HTTPS only because a client had an atrocious corporate HTTP proxy cache that ignored expiry headers and assumed two files were equal because they had the same name or some shenanigans. Dealing with the endpoints (server and browser) is enough for me, thanks.
For a while, my ISP was using some sort of transparent proxy cache, which for some reason tended to stall HTTP connections (stuck in SYN-SENT IIRC). There were days when the only sites I could access were the ones using HTTPS, since the transparent proxy cache didn't touch them.
The most advertised gain from using TLS is security, but to me an equally important gain is not having to deal with broken middleboxes.
The big thing for non-login websites is the threat of the information being tampered with on the line. Someone could easily MitM your http connection and insert whatever they wanted into the website, making you think that website had said it.
Sure but what's the intent? If I own a lawncare business, or any business, seriously--what can the hijacker do? Change the phone number? Change the map?
Introduce arbitrary code and data for your User Agent to execute and decode.
This can be something as simple as data alteration to mislead the target or disrupt his communications. However, if the attacker has some Sweet 0-Day Sploit, (or some old-and-busted 'sploit that works on the target's old-and-busted User Agent) they can MitM any HTTP session and use that sploit to do $SOMETHING_NEFARIOUS.
This isn't theoretical. The NSA slides spoke of active attacks against older versions of Firefox shipped in the Tor Browser Bundle. Similar attacks making use of WebRTC to leak data were proposed and fixed, posthaste.
An additional benefit of HTTPS is the reduction of metadata provided to passive attackers. (HTTPS sessions encrypt the names of the resources requested from the remote server. There are still ways to get an idea of what's being requested, but all an adversary knows for sure is that you're talking HTTPS to a particular web server.)
Ignoring all the protections that HTTPS-by-default gets you and everyone else, bear in mind that most browsers will support http/2 only over TLS. So if you want the advantages that gets you (significant for a brochure type of website), you will need https anyway.
I'm wondering about that too, since I'm running some very low-power web servers (ESP8266) and I don't know if HTTPS is even feasible on these devices. I hope HTTPS becomes the preferred default but is not made mandatory by browser vendors.