2. There is currently only one free cert provider. If there are ever issues with it, your users will see a scary error message which will make them think there are security issues with your website.
3. Downloading and running code from a 4th or 5th party and giving it access to your config files is not "more secure".
4. The culture of fear around HTTPS means only the "most secure" or "newest" protocols and cipher suites are to be used. This prevents older clients from working, where HTTP works just fine.
5. HTTPS is needlessly complex, making it hard to implement. There have been several security vulnerabilities introduced simply by its use.
6. If you can't comply with the OpenSSL license, implementing it yourself is a hopeless endeavour.
SSL was developed by corporations, for corporations. If you want some security feature to be applicable to the wider Internet, it needs to be community driven and community focused. Logging in to my server over SSH has far more security implications than accessing the website running on it over HTTPS. Yet, somehow, we managed to get SSH out there and accepted by the community without the need for Certificate Authorities.
Genuinely curious - what alternatives do you have in mind? Are there any WoT models that interest you more?
> There is currently only one free cert provider, if there are ever issues with it, your users will see a scary error message
Isn't this the point?
> Downloading and running code from a 4th, or 5th party and giving it access to your config files is not "more secure".
Could you elaborate? Have you written your whole stack from scratch? You are running millions of lines of code that you will never read but have been implemented by other parties.
> HTTPS is needlessly complex making it hard to implement.
Isn't this done with robust battle-tested libraries and built-in support in modern languages?
Mainly I'm just wondering why you're letting perfect be the enemy of good. There's always room for improvement in everything, but I don't think user privacy is a reasonable sacrifice to make.
> Giving in ends the hope that it will ever get changed.
Abstaining from HTTPS won't be seen by anyone as a protest, but as incompetency, whether you find that justifiable or not.
We don't have a robust understanding of who exactly operates PKI, but we do know that it's de facto governed by a company on Charleston Road, since CAs only have their root keys listed in things like web browsers at their pleasure. We also know that Charleston Road rewards CAs for their loyalty by red-zoning and down-ranking the folks who don't buy their products. Products which should ideally be deprecated, since SSL with PKI is much less secure.
Can anyone guess who's stymied progress in Internet security, by knuckle-dragging on DNSSEC interoperation? It reminds me of the days of Microsoft refusing to implement W3C standards. Shame on you, folks who work on Charleston Road and don't speak up. You can dominate the Internet all you like, but at least let it be free at its foundation.
Obviously, you can't replace "SSL with PKI" (you mean TLS, and/or the WebPKI) with DNSSEC, because DNSSEC doesn't encrypt anything. Whether or not you enact the ritual of adding signature records to your DNS zone, you will still need the TLS protocol to actually do anything securely, and the TLS protocol will still not need the DNS in order to authenticate connections.
Instead, what DNSSEC (DANE, really) hopes to do is replace LetsEncrypt, which is not "basically" but instead "actually" free, with CAs run by TLD owners. Who owns the most important TLDs on the Internet? The Five Eyes governments and China. Good plan!
Right now you need to ping Google's servers each time you visit a website to ask if it's safe. We love Google but they're a private company that can do anything they want. If you feel comfortable with them being the source of truth for names on the Internet, then the problem is solved.
Most of us would prefer it be controlled by ICANN, which is a non-profit, not controlled by any one government, that lets anyone from around the world who cares enough show up and take part in Internet governance. Controlling names was the purpose they were founded to serve. I say let them.
DNSSEC is in fact controlled by world governments, who have de facto authority over the most important TLDs. When a CA misbehaves, Google and Mozilla can revoke them, as they've done with some of the largest and most popular CAs. You can't revoke .COM or .IO.
> Isn't this the point?
The point is to secure the communication between client and server, and warn/stop it, if it is insecure (MITM et al.). It is counter-productive to stop the communication because an unrelated party (CA) is having issues.
This is important. I have several devices at home that cannot display many web sites because they don't have the ability to use latest ciphers.
If a device can "display" webpages then it's extremely likely that it can handle TLS.
> Moreover, they don't run Linux
That has never stopped people before.
If you don't get the difference in scale between the two, you might have an issue understanding the real problem.
TLS is public key encryption... a 3rd party attesting to the provenance of public keys is inherent to its design.
HTTPS is cargo-cult'ish in this aspect. One obviously should not accept or serve personal data over HTTP, but why encrypt public information? (Having said that, I'm guilty here too, as I blindly followed the instructions given to me by my hosting company and my plain open site redirects to HTTPS.)
Sort of similar to how linux package managers employ GPG and package mirrors.
Or maybe we can provide caching based on signed-exchange.
If I want to quickly host my page and use encryption, then I have to go through all that hassle to make it work. Perhaps allow the use of self-signed certificates on the same level as HTTP instead of blocking my website.
On the other hand, a no-cert (unencrypted) connection can be distinguished from an attack on an encrypted connection: the browser knows a priori (through the protocol in the URL) that the connection is supposed to be unencrypted.
It's fair enough to argue that a self-signed cert could be an attack, but so could any HTTP request.
> a no-cert (unencrypted) connection can be distinguished from an attack on an encrypted connection: the browser knows a priori (through the protocol in the URL) that the connection is supposed to be unencrypted.
I don't understand how that allows one to distinguish it from an attack. Knowing that a connection is supposed to be unencrypted is just equivalent to knowing that a connection could be under attack.
This means that with eSNI and at least one CA-signed cert on the IP, any attacker runs the risk of having to spoof the CA-signed certificate.
All in all, I'd say that the browser should still throw up a full-page warning because of the implications of TOFU, but it can be one where the "continue to site" option is clearly shown even to a naïve user, and not hidden behind a spoiler.
> ...disable secure cookies ... for self-signed certs. ... the user ... enable[s] them.
So you make a self-signed cert for your website which needs secure login, and you tell your users to turn on secure cookies so that you can safely store their credentials in the browser. Then your website gets MITM'ed with another self-signed cert, which either
1. can access the same cookies, because the domain is the same
2. can't, because the cert is different
Using a self-signed cert. isn't secure. What's being discussed is whether it's worse than HTTP. It isn't.
>What's being discussed is whether it's worse than HTTP. It isn't.
I disagree. The self-signed cert approach tries to carry with it the trappings of proper HTTPS, but it results in a bigger attack surface. Every additional bit of complexity that can be added to describing the "safe browsing experience" to the end user is an additional chink in the public armour. This is why I originally called it "open users up to social engineering attacks to make my web-dev life easier".
Since the self-signed cert is not secure, admins should have no reason not to simply use HTTP. In fact, this is where the discussion has gotten to now: self-signed certs can't even do safe login. What makes the self-signed cert worse is that for some reason people are insisting on using it anyways.
I manage 100+ servers, hosting a significantly larger number of domains, on a variety of linux and FreeBSD operating systems. Under both Apache & Nginx.
"..all of that hustle.." to initially setup is under 2 minutes with LetsEncrypt.
The renewal (via a cron job) is completely out-of-sight/out-of-mind.
The execution is shockingly simple. If you think it's "all that hassle" I guarantee you haven't even tried.
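For anyone curious, the whole renewal setup is often a single crontab line; a sketch, assuming certbot and nginx (the client, schedule and reload hook are whatever fits your stack):

0 3 * * 1 certbot renew --quiet --post-hook "systemctl reload nginx"

certbot only renews certs that are within 30 days of expiry, so running the check weekly (or even daily) costs essentially nothing.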
Otherwise you need some infrastructure: logging, monitoring, some way to manage upgrades, backups, testing recovery, oh, and those private keys had better not be leaked anywhere, so you need encryption for backups, which brings key management and so on.
I actually evaluated a bunch of acme clients, wasn't satisfied with the code of any of them and wrote my own. But even from those I looked at certbot was always the worst choice, it's ridiculous letsencrypt is promoting it, better choices were POSIX shell clients or statically linked clients, like those written in Go and other compiled languages.
I wouldn't be surprised if that's how he's managing 100 servers, or something similar.
I suppose I could have made that known in my post.
Actually, it is. Some reverse proxies such as Traefik handle TLS certs automatically. You practically need to explicitly not want to do it.
A practical example I ran into lately: we had a small system running on GKE and Google Cloud Load Balancer and struggled to automate the certificate renewal process. Because the cluster/project was for an internal tool, this automation was given a low priority and we still have to "manually" swap a certificate every few months (and if we forget, we get an angry Slack DM).
TL;DR: there are still many combinations of networked services that do not ~easily~ support certificate automation, even ones you'd expect really should by now.
Based on this:
Now, in our setup, everything is running on a different port, so it is easy to set up additional services all coming from the same hostname with the same IP address.
If you think something is set-it-and-forget-it, you haven't been around long enough.
Only if you're blindly running shell commands the effects of which you don't understand.
So, if you can't trust the certificate (not when it is invalid), just show the same level of protection as HTTP.
If a CA issues a rogue cert and _does_ add it to the CT log, it's discoverable, at least in retrospect.
If a CA issues a rogue cert and _doesn't_ add it to the log, some browsers will refuse the connection when presented with that cert. https://www.agwa.name/blog/post/how_will_certificate_transpa... has more details about Chrome's implementation.
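For what it's worth, the logs are easy to query yourself; crt.sh exposes a simple search endpoint (the domain below is a placeholder):

curl -s 'https://crt.sh/?q=%.example.com&output=json'

That returns every logged certificate covering example.com and its subdomains, which is also why merely issuing a cert puts the hostname on the public record.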
CAs and DNS are two parts of the internet that have become way too centralized in my opinion.
For development purposes, I imagine the approach akin to cross-origin support in browsers for loopback networks might work (i.e. don't enforce checks on them).
Without previous proof that the calling code was not eg. modified in-flight (eg. over HTTP or over HTTPS without a valid certificate), allowing it to use TLS and to either modify the trust root or pin a new certificate would severely reduce the security of the communication, and would be completely against what HTTPS is designed to solve in the first place (mainly MITM snooping and attacks).
And if there is "previous proof" of genuineness (eg. by serving through properly encrypted HTTPS), what is the benefit of allowing those clients to pin certs? I.e. they'll still need the existing "proper" HTTPS for all the other first-time visitors (and return visitors using new browsers/OSes/devices)?
Imagine e.g. combining this with an SPA bootloader contained in a data-url (like a bookmarklet), which the user scans via a QR-code or receives via text-based messaging.
CORS would still be in-play, and maybe the insecure nature of the caller is communicated to the API.
The benefit of this pinning would be e.g. allowing direct communication with IoT hardware, or even just preventing passive content analysis.
You could talk to IPs directly and still use TLS without weird wildcards like *.deviceid.servicedevices.com where the dns just has these zone entries:
deviceid.servicedevices.com DNAME has-a.name
Because it's taking time to build enough acceptance to flag http as insecure, whereas bad https connections that can't guarantee the expected security properties have been flagged as insecure from the beginning.
At this point, though, modern browsers show http sites as various flavors of "not secure" in the address bar, and limit what those sites can do. Browsers will increase the restrictions on insecure http over time, and hopefully get to the point where insecure http outside the local network gets treated much like bad https.
So like 3-5 minutes of work with Let's Encrypt?
Chrome's eventual goal is to mark all not-secure pages as not-secure: https://www.chromium.org/Home/chromium-security/marking-http...
"HTTP is known to cause cancer in the state of California"
This is easily testable. I view the website in both Chrome and Firefox, and it's http, not https.
Sure googletagmanager.com is in the preload list, but it doesn't have "mode": "force-https". It just has certificate pinning, not HSTS.
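For context, entries in Chromium's preload list (transport_security_state_static.json) come in two flavours; roughly like this, with placeholder names, and the exact field set has changed over the years:

{ "name": "hsts-example.test", "mode": "force-https", "include_subdomains": true },
{ "name": "pins-example.test", "include_subdomains": true, "pins": "google" },

Only an entry with "mode": "force-https" makes the browser refuse plain HTTP; a pins-only entry just constrains which CAs may issue for that name.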
Having the DNS credentials lying around on the server is not a good idea. So creating wildcard certs via letsencrypt is a huge pain in the ass.
If a webmaster has control over somedomain.com I think that is enough to assume he has control over *.somedomain.com. So I think letsencrypt should allow wildcards to the owner of somedomain.com without dabbling with the DNS.
The way things are now, I don't use ssl for my smaller projects at smallproject123.mydomain.com because I don't want the hassle of yet another cronjob and I sometimes don't want the subdomain to go into a public registry (where all certificates go these days).
That's absolutely unnecessary
Set an NS record for _acme-challenge.domain.tld to your own nameservers, e.g. ns1.myowndomain.tld
And have your own name servers only serve the _acme-challenge.domain.tld zone.
Now you can just use standard (RFC 2136) dynamic DNS updates with your ACME client without any need for credentials for the actual domain.tld zone.
I use this currently with my own kuschku.de domain, you can check it out.
dig +trace @22.214.171.124 _acme-challenge.kuschku.de ANY
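As a sketch using the names from above, the delegation in the parent zone is a single record:

_acme-challenge.domain.tld.    IN  NS  ns1.myowndomain.tld.

ns1.myowndomain.tld then serves only that tiny zone, and the ACME client pushes TXT challenge records into it via RFC 2136 dynamic updates with a key scoped to that zone alone, so a compromised web host never exposes credentials for domain.tld itself.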
So if you’re using AWS you get it for free. Or you can slap CloudFront or Cloudflare in front of your origin.
I think the barrier is low enough that I SSL all the things (including my small side projects).
Used to be everyone complained about CF putting SSL in front of HTTP origins.
However, CF can also issue a CF-signed certificate with a stupid long expiration for your origins and validate it. This is how I fully SSL many of the things while avoiding potential headaches with LE / ACME. Combine with Authenticated Origin Pulls and firewalling to CF's IP ranges for further security.
Of course, that still leaves CF doing a MITM on all my things.
Static hosts like Netlify & GitHub also enable free SSLs. The barrier is so low most people trip over it.
I am sure there are still very unique edge cases though. If I had one of those edge cases I would sit down & really weigh the pros & cons though of not using HTTPS. I would not take it lightly.
"Free", but you can only use them on AWS stuff. AWS makes it nice and easy (and does a bunch behind the scenes for you). Part of that behind-the-scenes is that they have control of the private key on their side. You want to use the AWS generated cert locally, or on another provider, too bad.
Someone else mentioned Azure having a similar offering (I’ve never played with Azure so I can’t speak to it). And if 2/3 of the providers offer it, I’d imagine GCP will at some point as well.
I love how easy it’s becoming to launch SSL. LetsEncrypt did a lot to make it mainstream. I’ve never used LE but I am grateful for their impact on our industry.
Same here. If you have a domain then you should have a cert, it's not that hard today.
My wife wanted a website that's pictures of our dog as a joke, right now it's a single img tag. The second thing I did after that was getting an HTTPS cert and forcing redirection.
Would that work for multiple domains? So I CNAME the _acme-challenge subdomain for all my domains to _acme-challenge.cheapthrowaway.com?
So, as long as the challenge taking is serialised you can get away with just giving a single TXT answer at a time.
And then have your acme client auth against that one.
No need for a new domain.
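Concretely, the records end up looking something like this (domain names are placeholders), since the DNS-01 validation follows CNAMEs when it looks up the TXT record:

_acme-challenge.firstdomain.example.    IN  CNAME  _acme-challenge.maindomain.example.
_acme-challenge.seconddomain.example.   IN  CNAME  _acme-challenge.maindomain.example.

The ACME client then only needs update credentials for the one zone the CNAMEs point at.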
I used it on a previous post to test it out and it seemed to be fine: https://github.com/benjojo/you-cant-curl-under-pressure/comm...
> Buypass Go SSL
> It is free! Issued in Scandinavia based on the industry standard ACME.
I posted a diff showing the patch you can use to switch go's crypto/acme/autocert to use it.
The CA does sell paid SSL products, but they also have a free ACME endpoint that issues 6 month certs.
Here is an example of what one of the certs look like: https://crt.sh/?id=2075589060
It is free!
You can run it yourself locally, or trust (why?) the upstream's service.
I think you still need a steady hostname pointing to it, right?
Sure there is Let's Encrypt and if you are facing Internet you are probably good to go.
If you are on an internal network, then good luck. You need to build a PKI, and then put into your devices the right certificate so that it is trusted.
If it was simpler, Apache would sing out its "It works!" in HTTPS and not HTTP.
Next you need to use an ACME client or Caddy (I use the latter) and tell it to do the Let's Encrypt DNS challenge using DuckDNS. It looks like this for Caddy:
# in the Caddyfile
# in the CaddyEnvfile
That's it, now I can go to https://myRaspberryPi.duckdns.org and I've got HTTPS on my local network without anything exposed on the internet EXCEPT my device's internal IP. You've got to evaluate how much of a threat that is.
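A minimal sketch of what that pair of files can look like, assuming Caddy v1 built with the DuckDNS DNS provider plugin (directive names and the token variable differ between Caddy versions, so treat it as illustrative):

# Caddyfile
myRaspberryPi.duckdns.org {
    tls {
        dns duckdns
    }
}

# CaddyEnvfile
DUCKDNS_TOKEN=your-duckdns-token

Caddy then solves the dns-01 challenge through DuckDNS's API, so nothing on the device ever needs to be reachable from the internet.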
Fun fact: TLS doesn't require certificates, and some browsers even used to support HTTPS in these TLS modes many moons ago. See eg https://security.stackexchange.com/questions/23024/can-diffi...
How do you set this up on a domain which is not connected to the Internet? How is the check done?
Realistically you can't entirely deconflict these names. So you always have a risk of shadowing names from the public Internet.
The public CAs spent years in denial over this (yes they used to sell publicly trusted certs for "private" names, this is now prohibited). Create internal.example.com and things get easier. To the extent security by obscurity is worth trying it's just as available this way (split horizon DNS etcetera)
It's totally safe and legitimate for ycombinator to use secret.ycombinator.com on their intranet without telling anything about it to the outside internet.
The grandparent was, as I understand it, talking about names they don't own, for which you've no assurance somebody else won't own them (on the public Internet) tomorrow. This used to be very common, decades ago Microsoft even advised corporations to do it for their AD, but it's a bad idea.
.invalid and .local are reserved domains and guaranteed to never be in use on the public internet - yet I can't get certificates for them
For all local IP and domain space - that is 192.168.0.0/16, 10.0.0.0/8 and so on - it should automatically treat them as if they were safe anyway.
My point was that HTTPS is (much) more complicated than bare HTTP and this is probably one of the reasons it is not taking over the web in a storm (though progress is undoubtedly there)
Whereas the same server could tank 40k rps HTTP requests.
I have a 1 vCPU / 2 GB server that terminates TLS with a dual Prime256v1/curve25519 + RSA 2048 setup and a 10 minute keepalive time, running AES 128, AES 256 (the CPU has AES-NI) and CHACHA20-POLY1305, comfortably handling several million requests a day with CPU load hovering at 10-20%.
The number of ECC handshakes is surprisingly high, and CHACHA20 works wonders with today's user agents too.
Given the threats from passive attacks today, this is a cost that must be paid. It just looks quite affordable with modern protocols.
Parent suggested that at 172 million requests per day (2000 rps), there would be trouble.
Assuming "several million" is <= 17 million (or even up to 34 million, given the 10-20% range stated), then your stats would tend to agree.
If a 16bit 200Mhz microprocessor can handle a few thousand connections/second, then a modern processor should definitely be able to stay upright fairly easily.
I am still skeptical TLS handshake on site visit is actually bogging down anyone’s computer.
For the average case it probably doesn't matter, and you can optimize it, but I think it is totally understandable that the average novice could end up with bad https performance if only because the defaults are bad or they made a mistake. If hardware assist for the handshake and/or transfer crypto is shut off (or unavailable, on lower-spec CPUs) your perf is going to tank real hard.
I ended up using ssh configured to use the weakest (fastest) crypto possible, because disabling crypto entirely was no longer an option. I controlled the entire network end to end so no real risk there - but obviously a dangerous tool to provide for insecure links.
Also worth keeping in mind that there are production scenarios now where people are pushing 1gb+ of data to all their servers on every deploy - iirc among others when Facebook compiles their entire site the executable is something like a gigabyte that needs to be pushed to thousands of frontends. If you're doing that over encrypted ssh you're wasting cycles which means wasting power and you're wasting that power on thousands of machines at once. Same would apply if the nodes pull the new executable down over HTTPS.
openssl speed ecdh
gatling -V -n -p 80 -u nobody
Feel free to try.
Assume the worst way to attack without being clearly obvious: handshake CPU grinding.
So you are being forced to either not serve HTTP, or to condition users to trust a MITM-able redirect. How many people will notice a typoed redirect to an https page with a good certificate?
The solution is simple: browsers should default to https, and fall back to http if unavailable. Sure, some sites have broken https endpoints, but browsers have enforced crazier shit recently.
And going further, you can enable HSTS preloading, meaning the next release of browsers is going to hardcode your website as always and only ever going to be used with HTTPS.
See for example my domain https://hstspreload.org/?domain=kuschku.de, which is currently in the preload lists of all major browsers including Chrome, Firefox, Edge and even Internet Explorer.
I also deploy the same for mail submission with forced STS, and several other protocols.
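For reference, getting onto the preload list means serving a header along these lines over HTTPS (hstspreload.org requires a max-age of at least a year, includeSubDomains and the preload token, plus an HTTP-to-HTTPS redirect):

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload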
Or, as I stated, for preload, you have to either not have HTTP at all, or have a redirect to HTTPS: it should be clear from my above post why I think a redirect is a bad idea. I also dislike turning off HTTP for those that don't have any other option.
To me it seems that browsers just switching to https-by-default and http-as-fallback is a much simpler, better, backwards-compatible change that should just work. What am I missing and why do you feel HSTS is a good idea compared to that?
The preload list allows you to specifically say that for your own website clients should always use HTTPS, which is a good solution, as it means no one is ever going to visit kuschku.de on port 80, except for curl and dev tools, for which the redirect is useful.
But to each their own.
Browsers can't set 443 as the default, because other websites are broken; websites I can't fix and the browsers can't fix either.
As for what browsers can or cannot do: they also "can't" introduce DNS-over-HTTPS, introduce stricter cookie policies that break a bunch of web sites, reduce the effectiveness of ad-blockers, drop Flash, or... Sure, defaulting to HTTPS is too high a bar. (Not expressing an opinion on any of those; e.g. good riddance to Flash :) But browsers can and have done stuff that's just as bad, forcing web site creators to adapt their web sites.)
(Exception being if you use the dns challenge)
Exactly. DNS challenges don't suffer from this issue.
nature.com is marked as Chinese, as are nginx.org and ntp.org.
example.com is Indian in the list as is the now defunct dmoz.org.
I don't understand the methodology behind the country assignments at all…
% curl -I senate.gov
HTTP/1.1 301 Moved Permanently
Date: Tue, 17 Dec 2019 10:37:04 GMT
% curl -I www.senate.gov
HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=iso-8859-1
Date: Tue, 17 Dec 2019 10:37:08 GMT
It seems to meet the requirement for exclusion from the list. Data updated 16 Dec 2019, so I don't think it's stale.
I've also checked from Australian and a European connection, so I don't think it's a regional thing. The other genuis.com doesn't work for me, the other sites redirect and set a cookie.
Maybe their tester applies the same criteria - although to me that feels a bit unfair...
I was also wrong to say that w3.org never redirects to HTTPS. If the browser sends an Upgrade-Insecure-Requests HTTP header, then it redirects. That allows it to support all browsers as securely as possible.
Sites like whynohttps.com and observatory.mozilla.org should really test for this pattern.
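As a sketch of the pattern in nginx syntax (this would go in the port-80 server block; $http_upgrade_insecure_requests is just nginx's automatic variable for that request header):

# redirect to HTTPS only for clients that advertise they can handle it
if ($http_upgrade_insecure_requests = "1") {
    return 307 https://$host$request_uri;
}

Everyone else keeps getting plain HTTP, which is the w3.org behaviour described above.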
Must be a bug.
>an expectation that a site responds to an HTTP request over the insecure scheme with either a 301 or 302
Doing things this way is the final nail in the coffin for Internet Explorer 6, since IE6 does not use any version of SSL which is considered secure here in 2019. And, yes, I have seen people in the real world still using ancient Internet Explorer 6 as recently as 2015, and Windows XP as recently as 2017.
(No, I do not make any real attempt to have my HTML or CSS be compatible with IE6, except with https://samiam.org/resume/ and I am glad the nonsense about “pixel perfect” and Flash websites is a thing of the past with mobile everywhere)
If you don't do this to get SHA-1 then you're relying on the users somehow having applied enough updates to not need SHA-1 but for some reason insisting on IE6 anyway. That's a narrower set of users. At some point you have to cut your losses.
The task is not as simple as using DNS to store strict-HTTPS flags (as DNS can be manipulated by an intermediary), but hardcoding the lists in the browsers and keeping the lists in Chrome's code is definitely not a solution.
e.g. in the past it was just domains and subdomains.
Today there are already some TLDs on the list themselves.
A lot of websites just don't serve over HTTPS, or serve them with certificates whose CN or SAN don't match the host.
Many that do support https have links that downgrade you back to http on the same domain.
If nothing else works, temporarily disabling the firewall is a couple clicks away, barely takes any time or effort at all.
I don't know why people are making such a fuss out of this.
(And also the redirection thing.)
macOS has a background daemon which automatically hits captive.apple.com on connection to a WiFi network, to detect if it's behind a captive portal (and opens up a browser window to let you complete the flow, if it gets a 302). So that much should work even if you block egress port 80 but whitelist captive.apple.com.
...that is, assuming the portal to which you get redirected would be served over https, but I guess that isn't a given either.
The redirects are also hard: I have a static site using Google storage, and I have to create a server instance and redirect from there because it's not possible to do an automatic redirect. I don't know why the big cloud hosting providers aren't cooperating to make full HTTPS implementation easier.
PKI is technically the best practice for these systems, but it's also the most fragile and complicated. At a certain point, if the security model is so complex that it becomes hard to reason about, it's arguable that it's no longer a secure model, to say nothing of operational reliability.
I also have a whole rant about how some business models and government regulations literally require inspecting TLS certs of critical transport streams, and how the protocols are designed only to prevent this, and all the many problems this presents as a result, but I don't think most people care about those concerns.
Oh, and gentle reminder that there are still 100% effective attacks that allow automated generation of valid certs for domains you don't control. It doesn't happen frequently (that we know of) but it has happened multiple times in the past decade, so just having a secure connection to a website doesn't mean it's actually secure.
Security of the data transfer layer does not mean you can or should trust the website you are visiting.
Just because a website has a padlock does not mean it is trustworthy and you can hand over your CC details.
https://www.amazon.somethiing.other.co/greatDiscount may look great to some!
It's already effectively how password form submissions work in many browsers.
Same with my IoT cameras and all the various local apps I run that can start a web server. Heck, my iPhone has tons of apps that start webservers for uploading data since iPhone's file sync sucks so bad.
We need a solution to HTTPS for devices inside home networks.
In the meantime, having big warnings when connecting to these ad-hoc web interfaces makes sense I think, since they can effectively easily be spoofed and MitM'd (LANs are not always secure in the first place, so it makes sense to warn the user not to reuse a sensitive password, for instance). It's annoying for us embedded devs but I think it's for the greater good.
I've seen TV adverts from banks for example (Here in the UK) telling people to look for the padlock! This is not a verifiable method of safety.
http: insecure and https: secure
probably only when http ceases to exist can we start differentiating between trustworthy and untrustworty.
how we actually do that is something we still need to figure out.
for now we have a check against sites that are known to distribute malware. maybe we need to somehow track which sites are known to be trustworthy.
different factors can go into that. their privacy statement, past incidents and their response. etc...
MITM can do anything to your site, so your totally-static site may not be static any more at the victim's end. It may be a site collecting private details, attacking the browser, or using the victim to attack other sites.
Your static HTTP site is a network vulnerability and a blank slate for the attacker.
TL;DR: Secure websites can make the web less accessible for those who rely on metered satellite internet (and I'm sure plenty of other cases).
Providing access to Wikipedia over http to people in third world countries may be worth the risk of someone MITMing the site with propaganda.
The suggestion is only to give some users the option.
The fact is, as an ecosystem develops, complexity increases. Lifeforms in that ecosystem have to spend more time and effort protecting themselves from outside attacks as time progresses.
That casual dismissal of davidmurdoch's counterargument comes across tone-deaf to people stuck on crappy connections.
And "Let's Encrypt" is not an answer to "HTTPS is not free". It's not. We all are going to see our projects outlive Let's Encrypt (or their free tier).
In the end, nothing is secure. A dedicated attacker will find a way, given enough resources. Any security measure is just a deterrent.
My deterrent is that it's not worth MITM'ing my personal website with, like, 10 monthly visitors. (The reader might gasp that I lock my bicycle with a chain that can be snapped in a second, and that a strong enough human can probably bash my home door in).
Anyway. It's almost 2020, and if you are still advocating on moving the entirety of the Web to reliance on Big Centrally Good Guys, I really don't know what else to say to you.
- If you are hosting a simple static page or blog, your hosting provider probably has Let's Encrypt plugin.
- If you have your own VPS, Caddy has you covered with file serving, fastcgi support for PHP, and proxying to (g)unicorn/nodejs/Go/.NET, and has HTTPS enabled by default.
- If you have more advanced setup (e.g. containers), traefik supports HTTPS with just a few lines of configuration.
- If you are big enough to afford cloud, it takes a few lines of Terraform code to provision a certificate for load balancers (speaking for AWS, and assuming others have similar solutions).
For other cases (e.g. lots of traffic with custom haproxy/nginx/etc. setup), you are probably smart enough to find out how to enable Let's Encrypt support.
2) Some services require wildcards, like proxies.
3) Some organizations have, due to someone far away making strange decisions, policies about certificate authorities, and people to audit for compliance. Therefore, a cert costs money and, for a site which is purely informational, that's a hard sell.
4) Because we're not running on a hosting provider, a VPS, containers, or cloud.
5) Because not everyone wants to deal with some combination of the above every three months due to Let's Encrypt's expiration policy.
It's very, very nearly maintenance-free.
 There's lots of tooling. My current preference is for https://github.com/lukas2511/dehydrated
 If something breaks you have to pay attention, otherwise... Not so much.
- Setup: https://github.com/susam/susam.in/blob/master/Makefile#L30-L...
- Renewal: https://github.com/susam/susam.in/blob/master/etc/crontab#L1
Sure, depending on your setup it's easy, but for a lot of setups it isn't. Instead of trying to say HTTPS is easy and shame everybody who isn't doing it more efforts should be diverted into creating an actual fully encrypted network that doesn't need CAs.
I'm guessing people aren't as lucky as I am to be running on newer machines and such.
I mean it even edits your nginx files to redirect http to https if you agree. It's not hard.
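For the simple single-server case it really is one command (the nginx plugin has to be installed, and the domain below is a placeholder):

sudo certbot --nginx -d example.com -d www.example.com

It obtains the cert, edits the matching server block, and asks whether to add the HTTP-to-HTTPS redirect.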
Up to date documentation was near-impossible to find, and the scripts that came out of the box on the recommended client needed some fixing. The whole thing took about half a day, plus some hours a few weeks later once the unforgiving anti-abuse thresholds I accidentally triggered during end-to-end testing finally expired. Definitely wasn't a pleasant experience.
It suddenly becomes really, really complicated if you have multiple servers, multiple domains, nginx configurations that the tool does not expect (but insists on rewriting).
For my part, I had to write around a thousand lines of script and alter various existing code in order to switch from manual ssl (whenever the client paid for it) to automatic ssl (everywhere), because there was no way I was going to manually buy hundreds of certificates a year when I took over this role. Nowadays we're 100% ssl but it was harder for an existing person already accustomed to the existing system than doing nothing. I'm just too lazy to check a site every week and renew many certificates manually and copy around stupid files and generally go crazy. Plus, if it's automated, I think there's less chance of the keys being copied. So in my mind it was worth the effort, but it was surely effort.
It's easy to set up a standard cert through Azure, but if you want to use Let's Encrypt there's a whole dance you have to go through to get there, and for many people it's not worth the time and they'll happily pay a bit of money to make it a few-clicks thing.
Though hopefully they simplify it for cases such as yours.
Putting Cloudflare in front is also another cheap option.
I don't know how it could possibly be any simpler.
Most websites nowadays are over-engineered.
> Most websites nowadays are over-engineered.
That's awesome! Mind sharing some more details? (hosting plan/CDN/etc). Or even the URL?
Most of our traffic goes to our wiki: we are the most active open source video game on GitHub. Most ss13 servers run their own codebase, forked from ours, but will still frequently point their players to our wiki rather than set one up on their own.
A Cloudflare caching layer was added back in March when we got a 4x spike in web traffic from a youtuber talking about the game.
Then your load balancers pull the current cert from the sidecar every day with NFS/Gluster/Ceph/HTTP/whatever-you-want and reload the web server if it changed.
Assuming that you can catch a failure of your sidecar server in 89 days or so you don't need much more redundancy.
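A sketch of the pulling side, assuming rsync over SSH and nginx (hostname and paths are placeholders):

# daily: sync certs from the sidecar, reload nginx only if something actually changed
0 4 * * * rsync -ai acme-sidecar:/etc/ssl/current/ /etc/nginx/certs/ | grep -q . && systemctl reload nginx

With -i, rsync prints a line per updated file and nothing otherwise, so the reload only fires after a real renewal.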
I found that the certs behind a load balancer were enough of a problem that a solution was needed.
For instance Heroku still doesn't provide straightforward support for wildcard domains under SSL:
There are a myriad of other cases; basically every time you diverge a bit from the 80% path, you're in for a treat and will deal with all the intricacies of SSL management.
The end game is first-party support for automatic HTTPS in all web (and other) servers. It is happening (e.g. mod_md), it's just going to take time. For example, to get it packaged for all distributions.
For shared hosting, if you ignore the few providers at the top who are either CAs (e.g. GoDaddy) or are in contracts with CAs (e.g. Namecheap), the overwhelming majority of them are already providing free and automatic SSL for all hosted domains.
There's still a need for certbot et al when you have multiple services (e.g. web and mail and XMPP) running on a single domain name. In fact, I actively avoid servers that insist on doing ACME themselves because it breaks my unified ACME process.
If the middlebox can't see inside your flow because it's encrypted it can't object to whatever new thing it's scared of this time whether that's HD video or a new HTML tag.
For example, I have a simple web app hosted on Heroku free plan, and I have to use CloudFlare SSL to get it served over https on my custom domain. But it actually is half encrypted as the connection between CloudFlare and Heroku is plain http.
My employer won't use Let's Encrypt because they (LE) want unlimited indemnity and that's a deal breaker for them (employer).
What I cannot stand is people who can do it, but refuse to out of laziness. Or because they want their content to be insecure on purpose.
This applies mostly to big orgs, so indie devs can have some leeway if it's too hard to implement.
(Raises guilty hand)
I run a couple of sites on my hosted server that are still http. They both sit behind a varnish setup and to be honest I just have not found the time to get it done. Usually when I mess with my configurations I lose a week to troubleshooting stupid stuff and I just can't bring myself to do it.
Again this really only applies to people in a comfortable position to do this and choose not to. The average developer is not my target here, it's the big guys.
I currently use a mini CDN (content delivery network) of three different OpenVZ servers in the cloud to host my content, so getting things to work with Let's Encrypt took about two or three days of writing Bash and Ansible scripts which get the challenge-response from Let's Encrypt, upload it to all my cloud nodes, have Let's Encrypt verify it got a good response, upload the new cert to all of the cloud nodes, then use Ansible to log in to all the nodes, put the new cert where the web server can see it, and restart the web server.
Point being, the amount of effort needed to get things to work with Let’s Encrypt varies, and can be non-trivial.
Still, the stand-alone mode is pretty dang easy. I've also considered the /.well-known mode but there was some tiny snag.