Still Why No HTTPS? (troyhunt.com)
224 points by andimm 6 months ago | hide | past | favorite | 336 comments



1. The requirement to involve a 3rd party certificate authority is a needless power grab. Giving in ends the hope that it will ever get changed.

2. There is currently only one free cert provider; if there are ever issues with it, your users will see a scary error message which will make them think there are security issues with your website.

3. Downloading and running code from a 4th, or 5th party and giving it access to your config files is not "more secure".

4. The culture of fear around HTTPS means only the "most secure" or "newest" protocols and cipher suites are to be used. This prevents older clients from working, where HTTP works just fine.

5. HTTPS is needlessly complex, making it hard to implement. There have been several security vulnerabilities introduced simply by its use.

6. If you can't comply with the OpenSSL license, implementing it yourself is a hopeless endeavour.

SSL was developed by corporations, for corporations. If you want some security feature to be applicable to the wider Internet, it needs to be community driven and community focused. Logging in to my server over SSH has far more security implications than accessing the website running on it over HTTPS. Yet, somehow, we managed to get SSH out there and accepted by the community without the need for Certificate Authorities.


> The requirement to involve a 3rd party certificate authority is a needless power grab. Giving in ends the hope that it will ever get changed.

Genuinely curious - what alternatives do you have in mind? Are there any WoT models that interest you more?

> There is currently only one free cert provider, if there are ever issues with it, your users will see a scary error message

Isn't this the point?

> Downloading and running code from a 4th, or 5th party and giving it access to your config files is not "more secure".

Could you elaborate? Have you written your whole stack from scratch? You are running millions of lines of code that you will never read but have been implemented by other parties.

> HTTPS is needlessly complex making it hard to implement.

Isn't this done with robust battle-tested libraries and built-in support in modern languages?
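It is. As a sketch of how little ceremony is involved, Python's standard library alone gives you verified TLS with sane defaults (the `fetch` helper and the example URL are illustrative, not from the article):

```python
import ssl
import urllib.request

# A default context enables certificate verification and hostname
# checking out of the box -- no hand-rolled crypto required.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

def fetch(url: str) -> bytes:
    """Fetch a URL over verified HTTPS (hypothetical helper)."""
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()

# fetch("https://example.com/")  # needs network access
```

The point being: the dangerous parts (cipher choice, chain validation, hostname matching) are decided by the library, not the application author.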

---

Mainly I'm just wondering why you're letting perfect be the enemy of good. There's always room for improvement in everything, but I don't think user privacy is a reasonable sacrifice to make.

> Giving in ends the hope that it will ever get changed.

Abstaining from HTTPS won't be seen by anyone as a protest, but as incompetence, whether you find that justifiable or not.


DNSSEC is superior to both PKI and WOT. It's basically free. It makes chains of accountability transparent (hint: it's the dots in the URL). It provides the benefits of hierarchical trust except with democratic control, and is operated on film in public ceremonies.

We don't have a robust understanding of who exactly operates PKI, but we do know that it's de facto governed by a company on Charleston Road, since CAs only have their root keys listed in things like web browsers at their pleasure. We also know that Charleston Road rewards CAs for their loyalty by red-zoning and down-ranking the folks who don't buy their products. Products which should ideally be deprecated, since SSL with PKI is much less secure.

Can anyone guess who's stymied progress in Internet security, by knuckle-dragging on DNSSEC interoperation? It reminds me of the days of Microsoft refusing to implement W3C standards. Shame on you, folks who work on Charleston Road and don't speak up. You can dominate the Internet all you like, but at least let it be free at its foundation.


Who's knuckle-dragging on DNSSEC interop? The entire Internet community. It's been almost 25 years, and 3 major revisions of the protocol, and still it has almost no adoption --- virtually none of the most commonly queried zones are signed. Why is that? Because DNSSEC is awful.

Obviously, you can't replace "SSL with PKI" (you mean TLS, and/or the WebPKI) with DNSSEC, because DNSSEC doesn't encrypt anything. Whether or not you enact the ritual of adding signature records to your DNS zone, you will still need the TLS protocol to actually do anything securely, and the TLS protocol will still not need the DNS in order to authenticate connections.

Instead, what DNSSEC (DANE, really) hopes to do is replace LetsEncrypt, which is not "basically" but instead "actually" free, with CAs run by TLD owners. Who owns the most important TLDs on the Internet? The Five Eyes governments and China. Good plan!


What we mean by DNS security is that when you visit your bank's website, you know it's actually your bank. We're less concerned about concealing DNS queries from routers and more concerned about preventing them from forging responses. Eavesdropping won't empty your bank account. Spoofing can, and encryption doesn't matter if the remote endpoint isn't authentic.

Right now you need to ping Google's servers each time you visit a website to ask if it's safe. We love Google but they're a private company that can do anything they want. If you feel comfortable with them being the source of truth for names on the Internet, then the problem is solved.

Most of us would prefer it be controlled by ICANN, which is a non-profit, not controlled by any one government, that lets anyone from around the world who cares enough show up and take part in Internet governance. Controlling names was the purpose they were founded to serve. I say let them.


DNSSEC doesn't protect your bank account. Your bank uses TLS to establish connections with you, and TLS is authenticated, and does not rely on the DNS when establishing connections.

DNSSEC is in fact controlled by world governments, who have de facto authority over the most important TLDs. When a CA misbehaves, Google and Mozilla can revoke them, as they've done with some of the largest and most popular CAs. You can't revoke .COM or .IO.


>> There is currently only one free cert provider, if there are ever issues with it, your users will see a scary error message

> Isn't this the point?

The point is to secure the communication between client and server, and warn/stop it, if it is insecure (MITM et al.). It is counter-productive to stop the communication because an unrelated party (CA) is having issues.


The CA is not an unrelated party. If the client cannot verify the validity of the cert against the CA, then it should throw up a warning message. If the server cannot get a cert signed by the CA, then it too should throw up a warning message, because it does not have the trust of clients by itself.


> 4. The culture of fear around HTTPS, meaning only the "most secure" or "newest" protocols and cipher suites are to be used. This prevents older clients from working, where HTTP works just fine.

This is important. I have several devices at home that cannot display many web sites because they don't have the ability to use latest ciphers.


Don't buy computing devices that don't give you root.


Rooting won't help most of these devices. They just don't have the horsepower. Moreover, they don't run Linux.


> Rooting won't help most of these devices. They just don't have the horsepower.

If a device can "display" webpages then it's extremely likely that it can handle tls.

> Moreover, they don't run Linux

That has never stopped people before.


Um, the number of people connecting to my ssh server I can count on my fingers, and I have generally communicated with them beforehand. The number of people communicating with my https server is one larger than I could ever count to monotonically.

If you don't get the difference in scale between the two, you might have an issue understanding the real problem.


> The requirement to involve a 3rd party certificate authority is a needless power grab.

TLS is public key encryption... a 3rd party attesting to the provenance of public keys is inherent to its design.


7. it breaks caching proxies


I remember a discussion here on HN about how it makes life very hard for organizations like a school in Africa where the Internet connection is slow and expensive. Although many requests go to the same pages many times (e.g. Wikipedia), HTTPS makes it impossible to cache them with a cheap local proxy.

HTTPS is cargo-cult'ish in this aspect. One obviously should not accept or serve personal data over HTTP, but why encrypt public information? (Having said that, I'm guilty here too, as I blindly followed the instructions given to me by my hosting company and my plain open site redirects to HTTPS.)


Soon we can properly sign HTTP requests using DNS for the PKI. Stuff like SRI inside HTML is paving the way toward verifying hashes transmitted via a header for the main page request, including a signature over that hash and URL.

Sort of similar to how linux package managers employ GPG and package mirrors.

Or maybe we can provide caching based on signed-exchange.
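For reference, the SRI values mentioned above are just base64-encoded digests of the resource body; computing one takes a few lines (a sketch, with `sri_hash` being a hypothetical helper name):

```python
import base64
import hashlib

def sri_hash(body: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value of the form 'sha384-...',
    suitable for a <script integrity="..."> attribute."""
    digest = hashlib.new(algo, body).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# e.g. for an inline script body:
print(sri_hash(b"alert('hi');"))
```

A signed-exchange or header-based scheme would sign such a digest rather than relying on the enclosing page to carry it.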


One benefit of encrypting public information is that ISPs can't mess with it by inserting ads and such.


Why do browsers punish non-verified certs much harder than no-cert?

If I want to quickly host my page and use encryption, then I have to go through all that hassle to make it work. Perhaps allow use of self-signed certificates on the same level as http instead of blocking my website.


Since there's no way to distinguish a non-verified (self-signed or not) certificate from an attack, browsers have to treat them identically to an attack (otherwise an attacker would simply pretend to be a non-verified certificate, to get the more lenient treatment).

On the other hand, a no-cert (unencrypted) connection can be distinguished from an attack on an encrypted connection: the browser knows a priori (through the protocol in the URL) that the connection is supposed to be unencrypted.


I think the point here is that there's also no way to distinguish an http request from an attack.

It's fair enough to argue that a self-signed cert could be an attack, but so could any http request.

> a no-cert (unencrypted) connection can be distinguished from an attack on an encrypted connection: the browser knows a priori (through the protocol in the URL) that the connection is supposed to be unencrypted.

I don't understand how that allows one to distinguish it from an attack. Knowing that a connection is supposed to be unencrypted is just equivalent to knowing that a connection could be under attack.


Rightly punishing the connection for having the trappings of security when it actually lacks them doesn't mean we need to punish openly insecure traffic. End users have been told time and again that http is insecure, so it's fine to leave it. End users should also be able to trust that https means secure, without having to distinguish between "secure" and "secure unless I'm being mitm'd", and without needing to understand what any of that means.


To echo @mrob's comment (not sure why they've been downvoted), relying on user understanding of HTTP -vs- HTTPS is considered a failed experiment, and actively discouraged. Chrome in particular is moving to bring this into the browser UI by marking HTTP sites as insecure (rather than relying on users understanding that HTTPS is secure, which they don't).


Most end users have no idea what HTTPS is. They've just been (incorrectly) taught that the padlock means it's secure. Disable the padlock for self-signed HTTPS, and disable the CA-signed HTTPS-only features, and it becomes strictly better than HTTP.


Especially because there is no way to MITM only those connections that end up serving a self-signed certificate: with perfect forward secrecy, the connection first negotiates an ephemeral key with which everything, including the certificate, is encrypted.

This means that with eSNI and at least one CA-signed cert on the IP, any attacker runs the risk of having to spoof the CA-signed certificate.


A sophisticated attacker might know that you were going to connect to a self-signed site, though. Interestingly though, private DNS (DoH, etc.) might help further shroud this fact from the attacker.

All in all, I'd say that the browser should still throw up a full-page warning because of the implications of TOFU, but it can be one where the "continue to site" option is clearly shown even to a naïve user, and not hidden behind a spoiler.


Then maybe fall back to DANE and thus restrict this to zones signed with more than 1024bit RSA?


The http version can't access secure cookies; https with the wrong cert can use the secure cookies of the real https site.


So disable secure cookies by default for self-signed certs. The scary warnings can be shown when the user tries to enable them.


In other words, "open users up to social engineering attacks to make my web-dev life easier".


You misinterpreted the above commenter. The suggestion is to disallow self-signed contexts from accessing cookies set in authoritative contexts.


I don't see that as the biggest problem. If we repeat what was said in the above comment:

> ...disable secure cookies ... for self-signed certs. ... the user ... enable[s] them.

So you make a self-signed cert for your website which needs secure login, and you tell your users to turn on secure cookies so that you can safely store their credentials in the browser. Then your website gets MITM'ed with another self-signed cert, which either

  1. can access the same cookies, because the domain is the same
  2. can't, because the cert is different
But in the second case, you've already conditioned users to log in to your website with the cert being self-signed, so they'll just log in again. If the browser complains that the attacker's cert isn't the same as the old cert, or makes the user re-enable secure cookies with a warning, then the user has been conditioned to do that too - and an extra message of "we changed the cert, ignore your security warnings" will convince lots of users with doubts.


The convoluted and unlikely scenario you describe is currently possible with HTTP and non-secure cookies (the website admin is setting the cookies and can choose to define them as secure or not).

Using a self-signed cert. isn't secure. What's being discussed is whether it's worse than HTTP. It isn't.


Browsers now try to detect and warn about credential forms which are submitted over HTTP. Any website admin who tries to convince users to ignore security warnings about HTTP is somewhere between seriously negligent and evil.

>What's being discussed is whether it's worse than HTTP. It isn't.

I disagree. The self-signed cert approach tries to carry with it the trappings of proper HTTPS, but it results in a bigger attack surface. Every additional bit of complexity that can be added to describing the "safe browsing experience" to the end user is an additional chink in the public armour. This is why I originally called it "open users up to social engineering attacks to make my web-dev life easier".

Since the self-signed cert is not secure, admins should have no reason not to simply use HTTP. In fact, this is where the discussion has gotten to now: self-signed certs can't even do safe login. What makes the self-signed cert worse is that for some reason people are insisting on using it anyways.


A known phishing message in gmail gets a red banner. An expired cert gets a full page block and buries the actual page link. It does seem disproportionate.


Yes, also the browser can know if a site should be HTTPS through a https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/St... header, which can be preloaded in some browsers.


>go through all that hassle....

I manage 100+ servers, hosting a significantly larger number of domains, on a variety of Linux and FreeBSD operating systems, under both Apache & Nginx. "..all of that hassle.." to set up initially takes under 2 minutes with LetsEncrypt. The renewal (via a cron job) is completely out-of-sight/out-of-mind.

The execution is shockingly simple. If you think it's "all that hassle" I guarantee you haven't even tried.
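For what it's worth, the happy path really is short — roughly the following (the domain is a placeholder, and exact flags can vary with the certbot version and web server plugin):

```shell
# Obtain a certificate and configure the nginx vhost in one step
sudo certbot --nginx -d example.com -d www.example.com

# Confirm that automated renewal will work before relying on cron
sudo certbot renew --dry-run
```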


You're a professional plumber working on hundreds of households saying it's shockingly simple and should take no time at all for a first time home owner to fix their own plumbing. You've already got the knowledge, experience, and tools/parts in the van - of course you don't think it's a hassle!


And he's not even right. There is no hassle only if you accept plenty of risk and rely on a random crappy acme client to do it well, on its dependencies always working, on disks, OSes, and servers not failing, and on the acme protocol not changing and not deprecating anything.

Otherwise you need some infrastructure: logging, monitoring, some way to manage upgrades, backups, testing recovery, oh and those private keys are better not be leaked anywhere, so you need encryption for backups, which brings key management and so on.


Everything in your comment has to do with general server maintenance, and is not specific to automating certificate renewal with certbot or a similar tool which is what is being discussed. Adding HTTPS to your site and setting up automatic renewal is literally three steps on an Ubuntu system and you can copy and paste it from the certbot documentation [1].

[1] https://certbot.eff.org/lets-encrypt/ubuntubionic-nginx


Dealing with certificates is more critical than "general server maintenance", things people often neglect doing suddenly become required. It might take from a few months to even a couple of years to get from neglected infrastructure to infrastructure ready for reliable automated issuance of certificates.

I actually evaluated a bunch of acme clients, wasn't satisfied with the code of any of them and wrote my own. But even from those I looked at certbot was always the worst choice, it's ridiculous letsencrypt is promoting it, better choices were POSIX shell clients or statically linked clients, like those written in Go and other compiled languages.


It sounds like you are super critical about any potential security issues (because what else could it be, other than that it just works or it doesn't). If given machine security is super important (oh it's running a web server..), then why not just run certbot elsewhere and sync the files in a manner that satisfies your security needs?


To be fair, Let's Encrypt makes it easy. I even have a $5 a year shared hosting account that gives me Let's Encrypt SSL certs through cPanel. I can't imagine this feature is unique to this one random shared hosting provider.


Your $5 instance with cPanel management isn't really quite the same as someone manually managing 100 servers hosting these services on disparate configurations and environments. In other words, what your tooling of choice automates in a specific situation doesn't necessarily apply to any one else's usage, and scalability-wise, would be even more detached.


Certbot works with multiple configurations and servers / Operating Systems:

https://certbot.eff.org/

I wouldn't be surprised if that's how he's managing 100 servers, or something similar.


Yes, Certbot.

I suppose I could have made that known in my post.


> Your $5 instance with cPanel management isn't really quite the same as someone manually managing 100 servers hosting these services on disparate configurations and environments.

Actually, it is. Some reverse proxies such as Traefik handle TLS certs automatically. You practically need to explicitly not want to do it.


Extended with: "and all his customers are based in a country with strong plumbing standards and regulatory guidelines" - rendering his advice less valid for every country which doesn't.

A practical example I ran into lately, we had a small system run on GKE and Google Cloud Loadbalancer and struggled to automate the certificate renewal process. Because the cluster/project was for an internal tool this automation was given a low priority and we still have to "manually" swap a certificate every few months (and if we forget to we get an angry slack DM).

TL;DR: there are still many combinations of networked services that do not ~easily~ support certificate automation, even ones you would expect really should by now.


I find LetsEncrypt to be a hassle. About 1/3 of the time the auto-renew doesn't renew because some python dependency changed, etc.


I found this to be a problem too and created Certera (shameless plug). It helps simplify a lot of things and fixes some of the pain points of the typical ACME clients.

https://docs.certera.io


There's more than just Certbot out there:

https://letsencrypt.org/docs/client-options/


We've recently switched to a scheme where we run a Nginx reverse-proxy with LetsEncrypt. Everything is in docker containers.

Based on this:

https://medium.com/@pentacent/nginx-and-lets-encrypt-with-do...

Now, in our setup, everything is running on a different port, so it is easy to set up additional services all coming from the same hostname with the same IP address.


If you think this is "shockingly simple", I'd like to hear from you again in 10 years as your environment has grown, as your number of operating systems explodes, as you have to deal with restrictive network policies, as LetsEncrypt has been replaced a few times with new up-and-coming latest-and-greatest solutions, as bugs have been found, as clocks have skewed, as domain ownership rules have changed, as domain ownership verification policies have changed a half dozen times...

If you think something is set-it-and-forget-it, you haven't been around long enough.


>The execution is shockingly simple.

Only if you're blindly running shell commands the effects of which you don't understand.


Have you audited the source code of everything running on your computer? If not then you've had to trust that people aren't being evil or that someone is doing that checking for you. Why is this any different?


One reason that comes to mind immediately: self-signed certificates offer no protection against MITM attacks. It's worse than without a cert, since it gives a false sense of security.


You can't assume protection, whereas with http you assume no protection.

So, if you can't trust the certificate (as opposed to it being invalid), just show the same level of protection as for http.


The problem is that a http: protocol specifier implies no protection; the moment you follow the link, you know that the connection is not secured. Whereas a self-signed https: connection could be due to someone MITM'ing a site that generally uses CA's, in which case "no warning message" implies that the site is secured. The browser message has to make it clear to the user that something possibly unexpected is going on.


Which CAs have never been subject to a National Security Letter? Can't say cuz you can't know? Let's talk about that false sense of security indeed.


This is one of the threats addressed by https://www.certificate-transparency.org/ .

If a CA issues a rogue cert and _does_ add it to the CT log, it's discoverable, at least in retrospect.

If a CA issues a rogue cert and _doesn't_ add it to the log, some browsers will refuse the connection when presented with that cert. https://www.agwa.name/blog/post/how_will_certificate_transpa... has more details about Chrome's implementation.


Yep, for years when everyone was talking about NSL's and other corporate strong-arming by the gov, I started saying I suspect most major CA's are compromised. At least you know your threat model though, because only the nation states are going to have that.

CA's and DNS are two parts of the internet that have become way too centralized in my opinion.


Do they? IME, they just ask you if you want to trust the self-signed certificate and allow you to optionally store that "trust" indefinitely, ending up with something like Trust On First Use. The warnings have to be scary initially because the security model is so radically different from the usual case of CA's; specifically, getting that "first use" validation correct is critically important.
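The trust-on-first-use model described here is simple to sketch. A minimal version, assuming an in-memory pin store (a real browser would need persistent storage and a recovery UX for the mismatch case; `check_tofu` is a hypothetical name):

```python
import hashlib

# Hypothetical TOFU store: host -> fingerprint of the first-seen cert.
_pins: dict[str, str] = {}

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def check_tofu(host: str, cert_der: bytes) -> str:
    """Return 'first-use', 'ok', or 'MISMATCH' for a presented cert."""
    fp = fingerprint(cert_der)
    pinned = _pins.get(host)
    if pinned is None:
        _pins[host] = fp  # the critical first-use decision
        return "first-use"
    return "ok" if pinned == fp else "MISMATCH"
```

The entire security of the scheme rides on the "first-use" branch: if the attacker is present for that very first connection, everything after it validates cleanly — which is exactly why the initial warning has to be scary.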


They show a big red error, and the prompt to continue is hidden behind a spoiler. Also, XHR requests are just cut off, which is painful when developing and testing.


Well, an XHR cannot programmatically decide whether a self-signed cert should be trusted. Perhaps browsers should pop up a warning bar in such cases, explaining that some site functionality is being blocked for security reasons. Clicking it would take the user to the big scary warning page, where they would be allowed to indicate that they trust the self-signed cert (permanently or not) and reload the original page.


The XHR api could allow specifying a trust root and/or cert-pinning though.


How do you imagine this to work? XHR caller and XHR endpoint are both coming from untrusted sources at that point — if you allow either side to define a trust root, you are fully opening up to MITM attacks.

For development purposes, I imagine the approach akin to cross-origin support in browsers for loopback networks might work (i.e. don't enforce checks on them).


Well, the caller is already allowed to run code. Allowing it to use TLS with cert-pinning doesn't make that any less secure.


I apologise, I still don't understand your claims.

Without previous proof that the calling code was not e.g. modified in-flight (e.g. served over HTTP, or over HTTPS without a valid certificate), allowing it to use TLS and to either modify the trust root or pin a new certificate would severely reduce the security of the communication, and would be completely against what HTTPS is designed to solve in the first place (mainly MITM snooping and attacks).

And if there is "previous proof" of genuineness (e.g. by serving through properly encrypted HTTPS), what is the benefit of allowing those clients to pin certs? I.e., they'll still need the existing "proper" HTTPS for all the other first-time visitors (and return visitors using new browsers/OSes/devices)?


Don't worry.

I don't mean we should allow the fetch API to mess with the browser's trust configuration. It should only allow a temporary override of trust rules, similar to DANE TLSA RRs, but provided by JavaScript instead of DNSSEC-verified DNS lookups.

Imagine e.g. combining this with an SPA bootloader contained in a data-url (like a bookmarklet), which the user scans via a QR-code or receives via text-based messaging.

CORS would still be in-play, and maybe the insecure nature of the caller is communicated to the API.

The benefit of this pinning would be e.g. allowing direct communication with IoT hardware, or even just preventing passive content analysis.

You could talk to IPs directly and still use TLS without weird wildcards like *.deviceid.servicedevices.com where the dns just has these zone entries:

  deviceid.servicedevices.com  DNAME  has-a.name
, but that's ugly and leaks the device's IP through a DNS lookup.


Ah, enabling TLS for access using only an IP address is a great point, thanks.


It already does, put your self-CA in the browser trust store.


The "trust this site" option can be disabled, and usually is by many corporate policies. Additionally, browsers use words like "unsafe" and "not trusted" for self-signed certs, but never use those words, nor big red screens, to warn against plain HTTP requests.


> Why do browsers punish non-verified certs much harder than no-cert?

Because it's taking time to build enough acceptance to flag http as insecure, whereas bad https connections that can't guarantee the expected security properties have been flagged as insecure from the beginning.

At this point, though, modern browsers show http sites as various flavors of "not secure" in the address bar, and limit what those sites can do. Browsers will increase the restrictions on insecure http over time, and hopefully get to the point where insecure http outside the local network gets treated much like bad https.


> then I have to go through all that hassle to make it work

So like 3-5 minutes of work with Let's Encrypt?


because warning fatigue is real and http usage is too high to put scary warnings on all those sites.

Chrome's eventual goal is to mark all not-secure pages as not-secure: https://www.chromium.org/Home/chromium-security/marking-http...


> because warning fatigue is real

"HTTP is known to cause cancer in the state of California"


A non-verified cert can steal Secure cookies; no-cert cannot.


The article says googletagmanager.com has HSTS preloading. But it doesn't.

This is easily testable. I view the website in both Chrome and Firefox, and it's http, not https.

Sure googletagmanager.com is in the preload list, but it doesn't have "mode": "force-https". It just has certificate pinning, not HSTS.
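Whether a site actually forces HTTPS comes down to its Strict-Transport-Security response header; a minimal parser shows the directives browsers look for when deciding this (a stdlib-only sketch; `parse_hsts` is a hypothetical helper):

```python
def parse_hsts(header: str) -> dict:
    """Parse a Strict-Transport-Security header value, e.g.
    'max-age=31536000; includeSubDomains; preload'."""
    result = {"max_age": None, "include_subdomains": False, "preload": False}
    for part in header.split(";"):
        part = part.strip().lower()
        if part.startswith("max-age="):
            result["max_age"] = int(part.split("=", 1)[1])
        elif part == "includesubdomains":
            result["include_subdomains"] = True
        elif part == "preload":
            result["preload"] = True
    return result
```

A site that serves no such header (or one with `max-age=0`) won't be upgraded, regardless of what a preload-list entry contains for other purposes like pinning.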


Because there is only one free certificate provider (Let's Encrypt) and it does not allow wildcard certificates via server authentication.

Having the DNS credentials laying around on the server is not a good idea. So creating wildcard certs via letsencrypt is a huge pain in the ass.

If a webmaster has control over somedomain.com I think that is enough to assume he has control over *.somedomain.com. So I think letsencrypt should allow wildcards to the owner of somedomain.com without dabbling with the DNS.

The way things are now, I don't use ssl for my smaller projects at smallproject123.mydomain.com because I don't want the hassle of yet another cronjob and I sometimes don't want the subdomain to go into a public registry (where all certificates go these days).


> Having the DNS credentials laying around on the server is not a good idea. So creating wildcard certs via letsencrypt is a huge pain in the ass.

That's absolutely unnecessary.

Set an NS record for _acme-challenge.domain.tld pointing to your own nameservers, e.g. ns1.myowndomain.tld

And have your own name servers only serve the _acme-challenge.domain.tld zone.

Now you can just use the RFC DNS updater with your ACME client without any need for credentials for the actual domain.tld zone.

I use this currently with my own kuschku.de domain, you can check it out.

dig +trace @8.8.8.8 _acme-challenge.kuschku.de ANY
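In zone-file terms, the delegation described above looks roughly like this (names are placeholders matching the comment, not a literal configuration):

```
; In the domain.tld zone: delegate only the challenge label
_acme-challenge.domain.tld.  IN  NS   ns1.myowndomain.tld.

; ns1.myowndomain.tld then serves a tiny zone containing only the
; TXT records the ACME client writes during dns-01 validation:
_acme-challenge.domain.tld.  IN  TXT  "<token-from-acme-client>"
```

The ACME client only ever needs update credentials for the delegated challenge zone, never for domain.tld itself.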


There's also ACME-DNS, which is a DNS server designed specifically for that use case: https://github.com/joohoi/acme-dns


AWS certificates are free. Cloudflare will also put SSL in front of your origin for free.

So if you’re using AWS you get it for free. Or you can slap CloudFront or Cloudflare in front of your origin.

I think the barrier is low enough that I SSL all the things (including my small side projects).


> Cloudflare will also put SSL in front of your origin for free.

Used to be everyone complained about CF putting SSL in front of HTTP origins.

However, CF can also issue a CF-signed certificate with a stupid long expiration for your origins[1] and validate it. This is how I fully SSL many of the things while avoiding potential headaches with LE / ACME. Combine with Authenticated Origin Pulls[2] and firewalling to CF's IP ranges[3] for further security.

Of course, that still leaves CF doing a MITM on all my things.

[1] https://blog.cloudflare.com/cloudflare-ca-encryption-origin/

[2] https://blog.cloudflare.com/protecting-the-origin-with-tls-a...

[3] https://www.cloudflare.com/ips/


Azure just released free SSLs as well after years of feedback - https://docs.microsoft.com/en-us/azure/app-service/configure...

Static hosts like Netlify & GitHub also enable free SSLs. The barrier is so low most people trip over it.

I am sure there are still very unique edge cases though. If I had one of those edge cases I would sit down & really weigh the pros & cons though of not using HTTPS. I would not take it lightly.


> AWS certificates are free.

"Free", but you can only use them on AWS stuff. AWS makes it nice and easy (and does a bunch behind the scenes for you). Part of that behind-the-scenes is that they have control of the private key on their side. You want to use the AWS generated cert locally, or on another provider, too bad.


You’re right, but it’s pretty simple to slap CloudFront (or Cloudflare) ahead of those origins if you need to in a pinch. I don’t work for Amazon (and have no dog in the fight) but I am a fan of AWS. And if you’re ever using AWS for anything, there’s no reason to _not_ use their free certs.

Someone else mentioned Azure having a similar offering (I’ve never played with Azure so I can’t speak to it). And if 2/3 of the providers offer it, I’d imagine GCP will at some point as well.

I love how easy it’s becoming to launch SSL. LetsEncrypt did a lot to make it mainstream. I’ve never used LE but I am grateful for their impact on our industry.


> I think the barrier is low enough that I SSL all the things (including my small side projects).

Same here. If you have a domain then you should have a cert, it's not that hard today.

My wife wanted a website that's pictures of our dog as a joke, right now it's a single img tag. The second thing I did after that was getting an HTTPS cert and forcing redirection.


Maybe you saw this, but you can make _acme-challenge.domainA.tld a CNAME to _acme-challenge.domainB.tld, where domainB is a throwaway domain used only for validation. There are some TLDs that are pretty cheap per year.
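In zone-file terms the delegation is a single record; a minimal sketch, with hypothetical domain names:

```
; Delegate DNS-01 validation for example.com to a throwaway domain.
; The CA follows the CNAME, so the TXT record only ever lives on the
; throwaway zone, whose API credentials you can afford to expose.
_acme-challenge.example.com.  300  IN  CNAME  _acme-challenge.cheapthrowaway.com.
```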


That might be a step forward. Still a bit complex, but maybe worth considering.

Would that work for multiple domains? So I CNAME the _acme-challenge subdomain for all my domains to _acme-challenge.cheapthrowaway.com?


It's supposed to work as long as your DNS provider can return multiple TXT records. Some can't, due to a lousy UI in the admin panel.


Certbot might not do this out of the box but ACME lets you pass one challenge at a time, collect a new one, repeat. The tokens which show you passed a challenge will "keep" for at least hours and it might even be days (when Let's Encrypt was new it was weeks!) so you can collect them up to get your cert over a time period.

So, as long as the challenge taking is serialised you can get away with just giving a single TXT answer at a time.
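The serialised flow described above can be sketched as a loop; the helper functions here (`setTxt`, `notify`, `clearTxt`) are hypothetical stand-ins for whatever your ACME client and DNS API actually provide:

```javascript
// Hedged sketch: satisfy ACME DNS-01 authorizations one at a time, for
// DNS hosts that only allow a single TXT record per name. Relies on the
// CA keeping passed authorizations valid for a while, as described above.
async function satisfySerially(authorizations, setTxt, notify, clearTxt) {
  for (const authz of authorizations) {
    await setTxt(authz.name, authz.token); // publish the one TXT record
    await notify(authz);                   // ask the CA to validate it
    await clearTxt(authz.name);            // make room for the next one
  }
}
```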


You can even just set NS records for _acme-challenge subdomain to your own DNS server.

And then have your acme client auth against that one.

No need for a new domain.


True, though running your own DNS server or paying for another DNS provider may be similar in effort or expense...as compared to a throwaway cheap TLD domain that comes with DNS.


As it's a DNS server that only ever serves certificate validation requests, and doesn't need 100% uptime, a normal simple BIND or knot is good enough.


I'd expect it to be built in to certbot like serverauth.


There is a 2nd ACME free CA these days based in Norway: https://www.buypass.com/ssl/products/acme

I used it on a previous post to test it out and it seemed to be fine: https://github.com/benjojo/you-cant-curl-under-pressure/comm...


Did you miss the "free" in my comment or am I missing something?


To quote the link in my reply to you:

> Buypass Go SSL

> It is free! Issued in Scandinavia based on the industry standard ACME.

I posted a diff showing the patch you can use to switch go's crypto/acme/autocert to use it.

The CA does sell paid SSL product, but they also have a free ACME endpoint that issues 6 month certs.

Here is an example of what one of the certs look like: https://crt.sh/?id=2075589060


The wildcard cert from them isn't free, or even inexpensive.


    It is free!
Something might be free there. But not the wildcard certs we are talking about.


Your post said several things, but one was "there is only one free certificate provider (lets encrypt)". Pointing out that there's actually a second ACME one is a useful response, at least to me, since I think a lot of us still thought LE was the only option.


StartCom/StartSSL used to issue free certificates even before LetsEncrypt appeared, and it was a much bigger hassle to get verified, but at least they were valid for a full year. Not sure if they still do, and they didn't allow for multiple servernames in one cert.


I run https://github.com/joohoi/acme-dns to solve the wildcard domain problem.

You can run it yourself locally, or trust (why?) the upstream's service.


Can you run it locally on your laptop?

I think you still need a steady hostname pointing to it, right?


Would duckdns.org provide the steady hostname needed?


Because HTTPS is not as easy as HTTP.

Sure there is Let's Encrypt and if you are facing Internet you are probably good to go.

If you are on an internal network, then good luck. You need to build a PKI, and then put into your devices the right certificate so that it is trusted.

If it was simpler, Apache would sing out its "It works!" in HTTPS and not HTTP.


So here's how I do it for internal network devices. I have a RaspberryPi running on 192.168.100.1 on my local network. On https://www.duckdns.org/ or whatever your favorite DNS provider is, I signed up for a free account and created myRaspberryPi.duckdns.org and pointed it to 192.168.100.1. While you're logged in, grab the DuckDNS API key.

Next you need to use ACME or Caddy (I use the latter) and tell it to do the Let's Encrypt DNS challenge using DuckDNS. It looks like this for Caddy:

    # in the Caddyfile
    tls {
        dns duckdns
    }

    # in the CaddyEnvfile
    DUCKDNS_TOKEN=your-api-key-goes-here
Then you start it like this:

    nohup caddy -http-port 80 -conf /etc/caddy/Caddyfile -envfile /etc/caddy/CaddyEnvFile -agree -email you@email.com &

That's it, now I can go to https://myRaspberryPi.duckdns.org and I've got HTTPS on my local network without anything exposed on the internet EXCEPT my device's internal IP. You've got to evaluate how much of a threat that is.


Wouldn't this be subject to Let's Encrypt's rate limit of 50 certs per week for duckdns.org? Do they have an exception or are not enough people using this trick for it be a problem (yet)?


That is a really good point that I didn't consider.


Let's Encrypt works on internal networks too.

Fun fact: TLS doesn't require certificates, and some browsers even used to support HTTPS in these TLS modes many moons ago. See eg https://security.stackexchange.com/questions/23024/can-diffi...


Let's Encrypt only works on public domains that happen to not route externally. I can never (or at least, should never) get LE certificates for *.pikachu.local, but that's a perfectly valid hostname for a local machine.


Ah? That's good to know!

How to set this up on a domain which is not connected to Internet? How is the check done?


It's not easy but iirc you can do it with a DNS-01 challenge, if your internal domain name is valid (doesn't have to resolve to anything though).


The problem is that I also have domains which are completely internal, not known/resolvable outside


This is probably a bad idea and I'd recommend migrating off such names as a background task.

Realistically you can't entirely deconflict these names. So you always have a risk of shadowing names from the public Internet.

The public CAs spent years in denial over this (yes they used to sell publicly trusted certs for "private" names, this is now prohibited). Create internal.example.com and things get easier. To the extent security by obscurity is worth trying it's just as available this way (split horizon DNS etcetera)


> Realistically you can't entirely deconflict these names. So you always have a risk of shadowing names from the public Internet.

It's totally safe and legitimate for ycombinator to use secret.ycombinator.com on their intranet without telling anything about it to the outside internet.


Those are names you own, and a CA will happily issue you certs for those names (but Let's Encrypt won't without a DNS record saying the name at least exists)

The grandparent was, as I understand it, talking about names they don't own, for which you've no assurance somebody else won't own them (on the public Internet) tomorrow. This used to be very common, decades ago Microsoft even advised corporations to do it for their AD, but it's a bad idea.


What about domains such as blubb.mysystem.local or foo.invalid?

.invalid and .local are reserved domains and guaranteed to never be in use on the public internet - yet I can't get certificates for them


If you could get certificates for them, so could anyone else including your adversaries, since there is no system of ownership for them. It would be like issuing certs for https://192.168.1.1


Actually that's why browsers already treat http://127.0.0.1/ and certain other local IPs as if they were served via https.

For all local IP and domain space - that is 192.168/16, 10/8 and so on - it should automatically treat them as if they were safe anyway.
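The ranges named above are the RFC 1918 private blocks; a minimal sketch of classifying an IPv4 address against them (illustrative only, not how any browser actually decides):

```javascript
// Returns true if the dotted-quad IPv4 address falls in RFC 1918
// private space: 10/8, 172.16/12, or 192.168/16.
function isPrivateIPv4(ip) {
  const [a, b] = ip.split(".").map(Number);
  return a === 10 ||
         (a === 172 && b >= 16 && b <= 31) ||
         (a === 192 && b === 168);
}
```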


They're likely to be part of a cafe/hotel/guest wlan or a poorly managed "intranet" full of vulnerable stuff that needs to be shielded from CSRF. That's in addition to having ambiguous addresses. So they should definitely be treated as less safe.


Could you run an internal CA server instead of self signing? At least then you reduce your attack surface if you’re compromised internally.


Yes, this is what is being done today.

My point was that HTTPS is (much) more complicated than bare HTTP and this is probably one of the reasons it is not taking over the web in a storm (though progress is undoubtedly there)


It works over DNS. There is a lot written about it on the net, I don't have any specific recommended article.


I know how to do the check over DNS when the name is known outside - the problem is that I have my own internal domain not visible on Internet


You need a registered domain for this. It's a good idea for other reasons too.


There is one "good" reason against https: handshakes take enormous amounts of CPU, relatively speaking. It's quite easy to DoS a server by skipping the expensive part on your end. You can load a core with 10~30Mbit@2k rps if you're not even optimized.

Whereas the same server could tank 40k rps HTTP requests.


This is an argument I hear often, but I have yet to see an effective L7 DoS with the TLS handshake being the bottleneck. It's almost always the application code that gives up, rather than the CPU spikes due to TLS.

I have a 1 vCPU 2GB server that terminates TLS with dual Prime256v1/curve25519 + RSA 2048 setup with a 10 minute keepalive time, running AES 128, 256 (CPU has AES-NI), and CHACHA20-POLY1305 comfortably handling several millions of requests a day and CPU load hovering 10-20%.

The amount of ECC handshakes are surprisingly high, and CHACHA works wonders too with user agents today.

Given the threats from passive attacks today, this is a cost that must be paid. It just looks quite affordable with modern protocols.


> comfortably handling several millions of requests a day and CPU load hovering 10-20%

Parent suggested that at 172 million requests per day (2000 rps), there would be trouble.

Assuming "several million" is <= 17 million (or even up to 34 million, given the 10-20% range stated), then your stats would tend to agree.
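A quick sanity check on the numbers being compared here (assuming uniform load across the day):

```javascript
// 2000 requests/second sustained for a full day.
const rps = 2000;
const secondsPerDay = 24 * 60 * 60;   // 86400
const perDay = rps * secondsPerDay;
console.log(perDay);                  // 172800000, i.e. ~172.8M requests/day
```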


The only place I've had to care about this was on an embedded hardware server. Even then, if the handshakes were too much, it'd just drop the connections and continue to serve those it could. It wasn't enough to knock the whole thing offline.

If a 16bit 200Mhz microprocessor can handle a few thousand connections/second, then a modern processor should definitely be able to stay upright fairly easily.


It’s not exactly apples to apples... but my 64Mhz embedded processor is doing way more than 10,000 chacha20-poly1305 encodes of 64 bytes with another 64 bytes of additional data for the AEAD per second. Granted, it has some hardware crypto functions.

I am still skeptical TLS handshake on site visit is actually bogging down anyone’s computer.


The stream cryptography is not the issue here. Neither is "TLS handshake on site visit". The issue is that you have to spend the handshake cost before you can look into the request at all.


Do you have a source on that? Quite a few people seem to disagree: https://istlsfastyet.com/


In my testing for high-throughput scenarios like copies over ssh/rsync/https/smb (i tried them all) in every case encryption was a big hit to throughput. hardware assistance (built into the CPU) helped a lot but it was still a massive boost to shut off encryption - saving literal minutes on every bulk transfer, multiple transfers per day.

For the average case it probably doesn't matter, and you can optimize it, but I think it is totally understandable that the average novice could end up with bad https performance if only because the defaults are bad or they made a mistake. If hardware assist for the handshake and/or transfer crypto is shut off (or unavailable, on lower-spec CPUs) your perf is going to tank real hard.

I ended up using ssh configured to use the weakest (fastest) crypto possible, because disabling crypto entirely was no longer an option. I controlled the entire network end to end so no real risk there - but obviously a dangerous tool to provide for insecure links.

Also worth keeping in mind that there are production scenarios now where people are pushing 1gb+ of data to all their servers on every deploy - iirc among others when Facebook compiles their entire site the executable is something like a gigabyte that needs to be pushed to thousands of frontends. If you're doing that over encrypted ssh you're wasting cycles which means wasting power and you're wasting that power on thousands of machines at once. Same would apply if the nodes pull the new executable down over HTTPS.


How long ago was this — and how fast was your network? On hardware less than a decade old you shouldn’t be seeing that unless you’re talking about 10+Gb networking.


A year ago in my development VMs, it was the difference between like 40MB/s throughput and 200+


Oh, yeah, for good clients it's totally fine. But e.g. a machine I'll try an http benchmark on in a couple hours (2 cores; 4780 BogoMIPS each) only managed 4177 ops/s using the fastest-available curve X25519 with

  openssl speed ecdh
  
  gatling -V -n -p 80 -u nobody
I know this is somewhat extreme, but on a cpu that was about 30% faster I got 40k rps for small files using the kernel's loopback, which is where the cpu spent most of its time.

Feel free to try.


This should happen only during the handshake though.


If this is true, how do you explain the lack of any notable L7 TLS DoS attacks?


People throwing up CDNs and dumb bandwidth blasting often being easier?


Depends on the stack used. If you have persistent connections you'll incur far fewer handshakes than requests. If you use an elliptic curve scheme key exchange costs are negligible. But sure, if you do one 4096 bit RSA exchange for every request it will be costly.


I speak of a L7 DDoS.

Assume the worst way to attack without being clearly obvious: handshake CPU grinding.


Assuming this is true, 2000 rps per CPU core seems pretty reasonable. That would only be a bottleneck when serving static files. Only the most basic apps are going to be able to serve that much traffic per core.


My biggest gripe with the current de facto recommended approach (even mandated in HSTS) is that you need to redirect to https from untrusted http.

So you are being forced to either not serve http, or to condition users to trust MITM-able redirect. How many people will notice a typoed redirect to an https page with a good certificate?

The solution is simple: browsers should default to https, and fall back to http if unavailable. Sure, some sites have broken https endpoints, but browsers have enforced crazier shit recently.
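The proposed fallback behavior can be sketched as a tiny function; `fetchUrl` is an injected, hypothetical fetcher standing in for whatever the browser's network stack would do:

```javascript
// Hedged sketch of "https by default, http as fallback": try the
// https:// form of a host first, and only fall back to http:// if the
// secure attempt fails outright.
async function loadPreferringHttps(host, fetchUrl) {
  try {
    return await fetchUrl("https://" + host);
  } catch (err) {
    // Secure endpoint unavailable or broken: fall back to plaintext.
    return await fetchUrl("http://" + host);
  }
}
```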


That's what HSTS is for - you set a HSTS policy, and the browser will remember this site for a certain time you can set (usually 1-2 years).

And going further, you can enable HSTS preloading, meaning the next release of browsers is going to hardcode your website as always and only ever going to be used with HTTPS.

See for example my domain https://hstspreload.org/?domain=kuschku.de, which is currently in the preload lists of all major browsers including Chrome, Firefox, Edge and even Internet Explorer.

I also deploy the same for mail submission with forced STS, and several other protocols.
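For reference, the policy described above is set with a single response header; the max-age value here is illustrative (roughly two years), and `preload` is what makes the site eligible for the hardcoded browser lists:

```
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
```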


Right, so HSTS will protect a visitor who has visited your web site at most max-age ago using that particular browser and device.

Or, as I stated, for preload, you have to either not have HTTP at all, or have a redirect to HTTPS: it should be clear from my above post why I think a redirect is a bad idea. I also dislike turning off HTTP for those that don't have any other option.

To me it seems that browsers just switching to https-by-default and http-as-fallback is a much simpler, better, backwards-compatible change that should just work. What am I missing and why do you feel HSTS is a good idea compared to that?


Because some websites serve something different on 443 and 80, and you won’t get the right result by visiting 443.

The preload list allows you to specifically say that for your own website clients should always use HTTPS, which is a good solution, as it means no one is ever going to visit kuschku.de on port 80, except for curl and dev tools, for which the redirect is useful.


I disagree with the claim that it's better for a web site to implement HSTS than to fix whatever they are serving on 443.

But to each their own.


It’s possible for me, today, to implement HSTS, and have my site served securely everywhere, today.

Browsers can’t set 443 as default, because other websites are broken, other websites I can’t fix and the browsers can’t fix either.


We have differing views of "everywhere, today": you acknowledged yourself there are cases where it won't happen, it's just how much we think that's important where we differ. That's ok, I appreciate your point and thanks for spending the time to explain.

As for what browsers can or cannot do, they also can't introduce DNS-over-http, introduce stricter cookie policies breaking a bunch of web sites, or reduce effectiveness of ad-blockers, drop flash, or... Sure, defaulting to https is too high a bar (not expressing an opinion on any of those — eg. good riddance to Flash :) — but browsers can and have done stuff that's just as bad, forcing web site creators to adapt their web sites).


Annoyingly, if you want to get a let's encrypt cert you have to serve http. Back when I was manually purchasing & installing certs I didn't even listen on 80 for several services.

(Exception being if you use the dns challenge)


>(Exception being if you use the dns challenge)

Exactly. DNS challenges don't suffer from this issue.


"gnu.org" is on the list marked as a Chinese website...


There are some other confusing ones as well.

nature.com is marked as Chinese, as are nginx.org and ntp.org.

example.com is Indian in the list as is the now defunct dmoz.org.

I don't understand the methodology behind the country assignments at all…


Weirdly nature.com seems to actually redirect to https, as does zara.com, lenovo.com, genuis.com, and senate.gov. Is this list stale, or did no one spot-check this?


Yes, senate.gov in particular:

    % curl -I senate.gov
    HTTP/1.1 301 Moved Permanently
    Server: AkamaiGHost
    Content-Length: 0
    Location: http://www.senate.gov/
    Date: Tue, 17 Dec 2019 10:37:04 GMT
    Connection: keep-alive

    % curl -I www.senate.gov
    HTTP/1.1 301 Moved Permanently
    Server: Apache
    Location: https://www.senate.gov/
    Content-Length: 231
    Content-Type: text/html; charset=iso-8859-1
    Date: Tue, 17 Dec 2019 10:37:08 GMT
    Connection: keep-alive

It seems to meet the requirement for exclusion from the list. Data updated 16 Dec 2019, so I don't think it's stale.

I've also checked from Australian and a European connection, so I don't think it's a regional thing. The other genuis.com doesn't work for me, the other sites redirect and set a cookie.


If you're trying to get senate.gov onto the HSTS preload list, you have to redirect http://senate.gov to https://senate.gov before https://www.senate.gov

Maybe their tester applies the same criteria - although to me that feels a bit unfair...


It takes multiple redirects to reach https for several of those. It may just be looking at the first hop - which makes a certain sort of sense.


Article states they allow multiple 301 or 302 redirects. What is not allowed are JS based redirects. There might also be a limit to the number of redirects followed, but that isn't mentioned in the article.


Same with w3.org, which is fifth on the list, and ebay-kleinanzeigen.de. Seems like quite a few entries are off.


w3.org redirect to www.w3.org, but not HTTPS. This makes sense for the standards org that defines HTTP, and needs to maintain backwards compatibility.


Except the standards org that defines HTTP is the IETF, not the W3C...


Oops! You're right, the W3C only helped author it.

I was also wrong to say that w3.org never redirects to HTTPS. If the browser sends an Upgrade-Insecure-Requests HTTP header, then it redirects. That allows it to support all browsers as securely as possible.

Sites like whynohttps.com and observatory.mozilla.org should really test for this pattern.


I noticed it as well. I first thought it was a result of using CDN services or recycled IP addresses, but gnu.org doesn't use a CDN, and its IPv4 and IPv6 are both served by Hurricane Electric, which never did any business in mainland China.

Must be a bug.


One annoyance with this system, from the linked webpage:

>an expectation that a site responds to an HTTP request over the insecure scheme with either a 301 or 302

Doing things this way is the final nail in the coffin for Internet Explorer 6, since IE6 does not use any version of SSL which is considered secure here in 2019. And, yes, I have seen people in the real world still using ancient Internet Explorer 6 as recently as 2015, and Windows XP as recently as 2017.

Which is why I instead do the http → https redirection with Javascript: I make sure the client isn’t using an ancient version of Internet Explorer, then use Javascript to move them to the https version of my website. This way, anyone using a modern secure browser gets redirected to the https site, while people using ancient IE can still use my site over http.

(No, I do not make any real attempt to have my HTML or CSS be compatible with IE6, except with https://samiam.org/resume/ and I am glad the nonsense about “pixel perfect” and Flash websites is a thing of the past with mobile everywhere)
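The client-side pattern described could look something like this; the UA check is an assumption about which agents to exempt, and the function is written as a pure helper so the redirect decision is separate from the browser globals:

```javascript
// Hedged sketch: compute a https redirect target, but leave ancient IE
// (which lacks modern TLS support) on plain http. Returns null when no
// redirect should happen.
function httpsRedirectTarget(userAgent, protocol, href) {
  if (protocol !== "http:") return null;           // already on https
  if (/MSIE [1-6]\./.test(userAgent)) return null; // keep old IE on http
  return href.replace(/^http:/, "https:");
}

// In a page, roughly:
//   var t = httpsRedirectTarget(navigator.userAgent, location.protocol, location.href);
//   if (t) location.replace(t);
```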


Be aware that blocking scripts from insecure connections is something you'd usually want to do...


“usually” being the operative word. I’m not quite ready to throw IE6 (Internet Explorer 6) and all http-only browsers completely under a bus yet.


Why not look at the User-Agent header and 301 to https if you don't see IE6?
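A server-side version of that check is a few lines in, say, nginx; a rough sketch, not tested against real IE6 user agents:

```
# Inside a server { } block listening on port 80: 301 to HTTPS unless
# the User-Agent looks like old Internet Explorer.
if ($http_user_agent !~ "MSIE [1-6]\.") {
    return 301 https://$host$request_uri;
}
```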


That’s actually a good idea. It was simpler to set up the Javascript redirect. If I were to go that way, I would probably redirect IE6 to a “neverssl” subdomain (which also would be useful for dealing with WiFi captive portals).


Can you use old crypto for IE6 using some kind of agent detection while using new crypto for modern browsers? I thought Cloudflare does something like that. But there's a danger of MITM downgrade attack with this approach...


Most people in this space want to do SHA-1 which is prohibited so you need a deal with a CA that uses a "pulled root" to do this. That means they told the trust stores this CA root will not comply with the SHA-1 prohibition and so it's untrusted in a modern browser, but IE6 doesn't know that so it trusts the SHA-1 cert. The CA obviously wants actual money for sorting this out for you. In fact I don't even know if this idea ended up successful enough to be commercially available at all.

If you don't do this to get SHA-1 then you're relying on the users somehow having applied enough updates to not need SHA-1 but for some reason insisting on IE6 anyway. That's a narrower set of users. At some point you have to cut your losses.


The preload list is an absolute kludge that does not and will never scale; it creates a great deal of problems and works only for specific browsers.

The task is not as simple as using DNS to store strict-https flags (as DNS can be manipulated by an intermediary), but hardcoding the lists in browsers and keeping them in Chrome's code is definitely not a solution.


The goal is to slowly move higher levels into that list.

e.g. in the past it was just domains and subdomains.

Today there are already some TLDs on the list themselves.


The solution is to make the default connection port 443 HTTPS and allow people to drop listening on port 80.


I mostly have port 80 egress traffic blocked on Little Snitch. The web is painful to use like that but gives you an idea of the sorry state of websites.

A lot of websites just don't serve over HTTPS, or serve them with domains whose CN or SAN don't match the host.

Many that do support https have links that downgrade you back to http on the same domain.


Same here. The browser extension HTTPS Everywhere can sometimes help, but I still have to turn off Little Snitch when some links are posted on HN


How do you use public Wi-Fi with captive portals?


As ninkendo said, whitelist captive.apple.com.

If nothing else works, temporarily disabling the firewall is a couple clicks away, barely takes any time or effort at all.

I don't know why people are making such a fuss out of this.


Allowing http://captive.apple.com should make macOS’s captive portal auth window work.


Most captive portals I've seen use HTTP redirection to the actual domain of the captive portal, so it would still fail as soon as it follows the redirected URL.


If you block port 80, you'll never get to the part where you do URL filtering in the first place.

(And also the redirection thing.)


I mean whitelist port 80 for captive.apple.com. Sorry if that wasn't clear.

macOS has a background daemon which automatically hits captive.apple.com on connection to a WiFi network, to detect if it's behind a captive portal (and opens up a browser window to let you complete the flow, if it gets a 302). So that much should work even if you block egress port 80 but whitelist captive.apple.com.

...that is, assuming the portal to which you get redirected would be served over https, but I guess that isn't a given either.


You just don't.


Seems impractical, but ok.


With LTE I rarely have the need to use wlan.


One thing that surprised me was how hard it was to set up http to https redirects for websites on AWS and Google Cloud. I needed to set up a load balancer to do https.

The redirects are also hard, I have a static site using Google storage and I have to create a server instance and redirect from there because it's not possible to do an automatic redirect. I don't know why the big cloud hosting providers aren't cooperating to make full https implementation easier.


Recently an OpenShift cluster I admin went down because of long-lived certs not being rotated in time. There are many clients, servers, nodes, services, and configs involved, so rotating is non-trivial, so of course it's automated, and of course because it's not tested regularly, the automation just doesn't work after a while. Using the automation only seems to make things worse, and getting everything working again ends up taking days.

PKI is technically the best practice for these systems, but it's also the most fragile and complicated. At a certain point, if the security model is so complex that it becomes hard to reason about, it's arguable that it's no longer a secure model, to say nothing of operational reliability.

I also have a whole rant about how some business models and government regulations literally require inspecting TLS certs of critical transport streams, and how the protocols are designed only to prevent this, and all the many problems this presents as a result, but I don't think most people care about those concerns.

Oh, and gentle reminder that there are still 100% effective attacks that allow automated generation of valid certs for domains you don't control. It doesn't happen frequently (that we know of) but it has happened multiple times in the past decade, so just having a secure connection to a website doesn't mean it's actually secure.


Is it still the case that when you think you connect in https to a website, only the segment to cloudflare is encrypted and the segment cloudflare to the web server might not be?


Yes, that's SSL termination. Generally this happens at the CDN, load balancer or proxy (e.g. nginx used as a cache) layer and is pretty common since the fleet of servers handling the request after being routed are in a private network. With CF, the request from CF to the origin is over a public network and it will depend on how the user has configured their CF setup as to whether or not that hand-off is then encrypted. If they are doing SSL termination in CF, then it won't be encrypted from CF to the origin server.


Depends on the website's Cloudflare configuration. Cloudflare supports both methods - CF to website can be HTTP or HTTPS.


Yes, it's called "flexible SSL".


The biggest problem with forcing everything HTTPS is the false sense of security & trust that this gives to non-techie users.

Security of the data transfer layer does not mean can or should trust the website you are visiting.

Just because a website has a padlock does not mean it is trustworthy and you can hand over your CC details.

https://www.amazon.somethiing.other.co/greatDiscount may look great to some!


If we migrate to HTTPS everywhere we can get rid of HTTP for general use and switch to a different UI, where HTTPS websites don't have any special icon but HTTP ones get a warning icon.

It's already effectively how password form submissions work in many browsers.


You can't have HTTPS everywhere until we can get HTTPS for IoT devices. My router doesn't serve its configuration screen via HTTPS. How could it? I have to connect to it to configure it before it's on the internet.

Same with my IoT cameras and all the various local apps I run that can start a web server. Heck, my iPhone has tons of apps that start webservers for uploading data since iPhone's file sync sucks so bad.

We need a solution to HTTPS for devices inside home networks.


I agree that having an elegant and secure solution to enable HTTPS on non-internet-facing equipment would be nice. I work mainly on embedded devices and all my admin interfaces are over HTTP because there's simply no way to ship a certificate that would work anywhere. It would be nice if you could easily deploy self-signed certificates that would only work for local addresses and only for specific devices, although of course doing that securely and with good UI would be tricky.

In the meantime having big warnings when connecting to these ad-hoc web interfaces makes sense I think, since they can effectively easily be spoofed and MitM'd (LANs are not always secure in the first place so it makes sense to warn the user not to reuse a sensitive password for instance). It's annoying for us embedded devs but I think it's for the greater good.


We could already do that — just do away with the padlock icon now. In my browser, anyway, http:// gives a big "non secure" warning.


The problem is that, for better or worse, generations of internet users have been taught to look for the padlock before sharing any sensitive info (especially banking credentials and the like). Suddenly removing this prompt is probably going to confuse and worry many people.


That's a big if though... and as stated below it can't happen everywhere.

I've seen TV adverts from banks for example (Here in the UK) telling people to look for the padlock! This is not a verifiable method of safety.


This is exactly the opposite of a forcing HTTPS problem. When HTTPS isn’t everywhere, HTTPS gives a false sense of security. When it is, browsers can stop emphasizing it. We’re already well down that path, with padlocks no longer being green/being hidden entirely, EV certificates losing their confusion vector, insecure pages being assigned the icon that sticks out…


I think that was the main reason why browser vendors moved away from the green padlock symbol.


true but the only way to get there is to force http into oblivion. right now people think:

http: insecure and https: secure

probably only when http ceases to exist can we start differentiating between trustworthy and untrustworthy.

how we actually do that is something we still need to figure out.

for now we have a check against sites that are known to distribute malware. maybe we need to somehow track which sites are known to be trustworthy.

different factors can go into that. their privacy statement, past incidents and their response. etc...


> The biggest problem with forcing everything HTTPS

No it isn't. HTTPS not being 100% bulletproof is unrelated to using it everywhere. And it's light years away from its biggest problem.


Maybe I’m wrong, but I feel SSL has a downside of relying on more centralization. If a visitor to my totally-static webpage wants to bypass that layer and request the http version directly, I’m going to let them. (Obviously not excited about the idea of being mitm’d but it’s not a security risk, so leave that tradeoff up to the visitor).


https://doesmysiteneedhttps.com/

MITM can do anything to your site, so your totally-static site may not be static any more at the victim's end. It may be a site collecting private details, attacking the browser, or using the victim to attack other sites.

Your static HTTP site is a network vulnerability and a blank slate for the attacker.


Thanks for the reply. I've seen that site but it seems to be aimed at people who don't offer any https at all. At this point I'm still more comfortable offering visitors the decision. (Not many people visit my site by the way.)


So then disable javascript for http sites


That won't do anything. If someone can Man-in-the-Middle you, then they can easily forge a 302 redirection to a malicious web page that could be HTTPS.


Ok, cool, I found a new numerical overflow in your browser's image rendering library. Now I can shove an <img> tag into the insecure stream and exploit you.


One potentially good reason to not force SSL: https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites...

TL;DR: Secure websites can make the web less accessible for those who rely on metered satellite internet (and I'm sure plenty of other cases).


Trading security for convenience is rarely a good idea. The rest of the world should not have to conform to the failures of certain areas to provide internet access.


I see your point. But we trade security for convenience 24/7/365. We could all have bulletproof glass in our homes, personal security cameras everywhere, backup generators, panic rooms, etc, but we don't, because it's not convenient (and I know the expense is primarily what makes it not convenient, but I think it's still a valid argument).

Providing access to Wikipedia over http to people in third world countries may be worth the risk of someone MITMing the site with propaganda.

The suggestion is only to give some users the option.


Mitm with propaganda is the least of the worries. Full on exploit code is.

The fact is, as an ecosystem develops, complexity increases. Lifeforms in that ecosystem have to spend more time and effort protecting themselves from outside attacks as time progresses.


Unfortunately convenience usually does tradeoff against security. Thoughtful UX can deliver both, but it's rare to find in practice.

That casual dismissal of davidmurdoch's counterargument comes across tone-deaf to people stuck on crappy connections.


No, but it's a good idea to have the option, so long as people are aware of the implications.


The caching problem can be fixed through other means. Signed HTTP Exchanges[0] seem like a promising solution.

[0]: https://wicg.github.io/webpackage/draft-yasskin-http-origin-...


TLS 1.3 with 1-RTT should improve this situation at least somewhat. I suspect HTTP/3 will help in high packet loss situations, but it's going to be a while until that's deployed. Also, Wikipedia is still perfectly reachable via HTTP if you disable the HSTS preloading in your browser.


I'm not an expert, but would this be fixable by installing a new root certificate on the computers who want to use the caching server, and then having the caching server sign the pages it transmits using the new root certificate?


I consider myself young, but I've been around long enough not to rely on One True Service Provider for anything.

And "Let's Encrypt" is not an answer to "HTTPS is not free". It's not. We all are going to see our projects outlive Let's Encrypt (or their free tier).

In the end, nothing is secure. A dedicated attacker will find a way, given enough resources. Any security measure is just a deterrent.

My deterrent is that it's not worth MITM'ing my personal website with, like, 10 monthly visitors. (The reader might gasp that I lock my bicycle with a chain that can be snapped in a second, and that a strong enough human can probably bash my home door in).

Anyway. It's almost 2020, and if you are still advocating on moving the entirety of the Web to reliance on Big Centrally Good Guys, I really don't know what else to say to you.


Because it's always a pain in the ass to set it up and then renew?


How exactly is it a pain in the ass?

- If you are hosting a simple static page or blog, your hosting provider probably has Let's Encrypt plugin.

- If you have your own VPS, Caddy has you covered with file serving, fastcgi support for PHP, and proxying to (g)unicorn/nodejs/Go/.NET, and has HTTPS enabled by default.

- If you have more advanced setup (e.g. containers), traefik supports HTTPS with just a few lines of configuration.

- If you are big enough to afford cloud, it takes a few lines of Terraform code to provision certificate for load balancers (speaking for AWS, and assuming others have similar solutions).

For other cases (e.g. lots of traffic with custom haproxy/nginx/etc. setup), you are probably smart enough to find out how to enable Let's Encrypt support.
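To illustrate the Caddy case above: the whole config for a static site can be this small (example.com and the root path are placeholders; Caddy obtains and renews the Let's Encrypt certificate on its own for any real domain here):

```
example.com {
    root * /var/www/html
    file_server
}
```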


1) Not everything is running bare Apache. In fact, some services might have some rather strange web-driven GUI (or, more interestingly, curses-like) that requires you to carefully load a certificate, a CSR, and so forth in a somewhat arcane manner. Some pretty niche serving exists out there and I have had to deal with a bunch of them, to the point where I had to write extensive documentation on keeping the certificates up to date on each separate weird service. Many of these services have "no user-serviceable parts inside, your warranty will be voided ..." clauses in the service contract which deter spelunking.

2) Some services require wildcards, like proxies.

3) Some organizations have, due to someone far away making strange decisions, policies about certificate authorities, and people to audit for compliance. Therefore, a cert costs money and, for a site which is purely informational, that's a hard sell.

4) Because we're not running on a hosting provider, a VPS, containers, or cloud.

5) Because not everyone wants to deal with some combination of the above every three months due to Let's Encrypt's expiration policy.


I generally run apache/nginx in front of most things for SSL termination — this allows you to simplify SSL setup significantly.
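A minimal sketch of that termination setup, with a placeholder domain, Let's Encrypt cert paths, and an app assumed to be listening on local port 8080:

```
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The app behind it never has to know TLS exists.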


Set up once and then a cron-task? [0] It doesn't have to be a pain. The tooling around this all exists.

It's very very nearly maintenance free [1].

[0] There's lots of tooling. My current preference is for https://github.com/lukas2511/dehydrated

[1] If something breaks you have to pay attention, otherwise... Not so much.
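The cron task can be as small as one line (path and schedule are assumptions; dehydrated only renews certs that are actually close to expiry, so running it daily is cheap):

```
# /etc/cron.d sketch: daily run; reload nginx so it picks up renewals
17 3 * * * root /usr/local/bin/dehydrated --cron && systemctl reload nginx
```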


I have found that Let's Encrypt certbot makes it really simple to set up HTTPS and renew the certificate for simple websites. Examples for Nginx:

- Setup: https://github.com/susam/susam.in/blob/master/Makefile#L30-L...

- Renewal: https://github.com/susam/susam.in/blob/master/etc/crontab#L1


Because it's hard and a pain.

Sure, depending on your setup it's easy, but for a lot of setups it isn't. Instead of trying to say HTTPS is easy and shame everybody who isn't doing it more efforts should be diverted into creating an actual fully encrypted network that doesn't need CAs.


What actually happens when you try to force HTTPS over the internet: you centralize it, you make it harder for the small player, hobbyist, personal homepage guy, and make it easier for the big corporation.


It isn't just web sites. Many software repos still use http or native rsync. Some would argue that you validate the packages with GPG, but you would be amazed if you saw how many people install the GPG public key from the same mirror they download software from.


Gradle, granted they're fixing it.

https://blog.gradle.org/decommissioning-http


Had to access an EOL device and couldn't browse the web on it because all the certificates had expired...


I don't get it. With Lets Encrypt, it's like one or two lines to get everything set up.

I'm guessing people aren't as lucky as I am to be running on newer machines and such.

I mean it even edits your nginx files to redirect http to https if you agree. It's not hard.
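For reference, the "one or two lines" on a Debian-ish box running nginx look roughly like this (package names vary by distro, and it needs root plus a publicly reachable domain, so treat it as a sketch rather than something to paste blindly):

```
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com
```

The second command issues the cert, edits the nginx config, and offers to set up the HTTP-to-HTTPS redirect.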


I set up Let's Encrypt for an older Exchange server a while ago. While I love the result, it was NOT a simple, one-line exercise.

Up to date documentation was near-impossible to find, and the scripts that came out of the box on the recommended client needed some fixing. The whole thing took about half a day, plus some hours a few weeks later once the unforgiving anti-abuse thresholds I accidentally triggered during end-to-end testing finally expired. Definitely wasn't a pleasant experience.


Nope, it is not straight-forward and still a confusing process.


It is relatively straightforward if you have a single site hosted on a well-supported operating system and web server.

It suddenly becomes really, really complicated if you have multiple servers, multiple domains, nginx configurations that the tool does not expect (but insists on rewriting).


The rewrite is optional, it's also fairly trivial to let certbot create certificates and adapt nginx afterwards.
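A sketch of that workflow, assuming an nginx webroot at /var/www/html: certbot only issues the certificate, and you wire it into nginx yourself:

```
# Issue the certificate without touching the nginx config:
sudo certbot certonly --webroot -w /var/www/html -d example.com
# Files land under /etc/letsencrypt/live/example.com/ for you to
# reference from your own ssl_certificate directives.
```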


Yes, but at that point it's not two lines with let's encrypt any more.

For my part, I had to write around a thousand lines of script and alter various existing code in order to switch from manual ssl (whenever the client paid for it) to automatic ssl (everywhere), because there was no way I was going to manually buy hundreds of certificates a year when I took over this role. Nowadays we're 100% ssl but it was harder for an existing person already accustomed to the existing system than doing nothing. I'm just too lazy to check a site every week and renew many certificates manually and copy around stupid files and generally go crazy. Plus, if it's automated, I think there's less chance of the keys being copied. So in my mind it was worth the effort, but it was surely effort.


So true. Even on hosting that fully supports let's encrypt thru an web based admin like cpanel or directadmin, the process can be confusing and error prone.


If we're purely talking about Let's Encrypt, it's not straightforward to set up on Azure either.

It's easy to set up a standard cert through Azure, but if you want to use Let's Encrypt there's a whole dance you have to go through to get there, and for many people it's not worth the time and they'll happily pay a bit of money to make it a few-clicks thing.


When I looked at doing it, I'd have to bump up my hosting plan for my vanity blog to somewhere in the neighborhood of $100/month to apply an SSL cert for my custom domain, which is just stupid for a site that gets a couple thousand visits a month and maybe earns me $5 in referral fees.


I believe it's free now - https://docs.microsoft.com/en-us/azure/app-service/configure...

Though hopefully they simplify it for cases such as yours.

Putting Cloudflare in front is also another cheap option.


It looks like you have to go up to at least a B1 app service, which at $50/month doesn't make a lot of sense for me, unless I can figure how to get my MSDN credits associated with that Azure subscription instead of one of the other two accounts I don't use, but that's a whole other can of worms...


Instructions are here: https://certbot.eff.org/

I don't know how it could possibly be any simpler.


It is simple for a one-server website. When you're on Alexa 1M, you certainly have a load balancer, multiple servers for redundancy, etc. It makes things not straightforward, and you certainly don't want to use the default certbot which overwrites your config.


I am on alexa 1m (50k even). I do not have a load balancer, and I do not have multiple servers for redundancy. This isn't even a static site: most of our page views are the wiki, and the server running all of this has 8 cores, 4 of which are constantly maxed out by a non-website-related process.

Most websites nowadays are over-engineered.


Checked my old site's rank. ~250000. One VPS, €4/month. Mostly static, but a decent part is served with a not so light Perl CGI script (!). I'm sure I wouldn't get away with that in top 1k websites, but 1m?


> I am on alexa 1m (50k even). I do not have a load balancer, I do not have multiple servers for redundancy. This isn't even a static site, most of our page views are the wiki, the server running all of this has 8 cores and 4 are constantly maxed out by a non-website related process. Most websites nowadays are over-engineered.

That's awesome! Mind sharing some more details? (hosting plan/CDN/etc). Or even the URL?


Rented dedicated server running a 9900k. A Windows hypervisor runs the VMs: a database VM, a website VM, and 3 game server VMs on this machine. Each game server VM runs 2 instances of the game server, but only 1 ever has high pop.

https://tgstation13.org

Most of our traffic goes to our wiki: we are the most active open source video game on GitHub. Most SS13 servers run their own codebase, forked from ours, but will still frequently point their players to our wiki rather than set one up on their own.

A Cloudflare caching layer was added back in march when we got a 4x spike in web traffic from a youtuber talking about the game.


When you're in that league you probably have expensive people who can deal with the additional complexity.


I mean the next more complicated case isn't that bad either. You set up a sidecar VM/container/machine/whatever-you-want that either instruments your DNS or gets the traffic from .well-known/acme-challenge and just renews your certs every day.

Then your load balancers pull the current cert from the sidecar every day with NFS/Gluster/Ceph/HTTP/whatver-you-want and reload the web server if it changed.

Assuming that you can catch a failure of your sidecar server in 89 days or so you don't need much more redundancy.
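The "pull the cert and reload if it changed" step can be sketched as a small shell function; the paths and reload command in the example are placeholders for whatever your setup uses:

```shell
# Sketch: install a freshly pulled cert and reload the web server,
# but only when the cert actually changed.
refresh_cert() {
    src=$1 dst=$2 reload=$3
    # cmp -s is silent and returns non-zero when files differ
    if ! cmp -s "$src" "$dst"; then
        cp "$src" "$dst" && chmod 600 "$dst" && $reload
    fi
}

# e.g. refresh_cert /mnt/sidecar/fullchain.pem \
#        /etc/nginx/ssl/fullchain.pem "nginx -s reload"
```

Run from cron on each load balancer; reloads only happen on the handful of days per quarter when the cert actually rotates.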


That's why you set custom NS records for _acme-challenge.domain.tld to your own NS servers and use the DNS challenge to get your certs.
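In zone-file terms that delegation is a single record (BIND syntax; the names are placeholders); your own NS servers then answer the dns-01 TXT queries during issuance:

```
; Delegate only the ACME challenge label to NS servers you control
_acme-challenge.example.com.  IN  NS  ns1.acme.example.net.
```

This keeps the ACME automation off the main zone entirely, which also works for wildcard certs.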


Shameless plug for Certera for these more complex scenarios: https://docs.certera.io

I found that the certs behind a load balancer were enough of a problem that a solution was needed.


IMHO it is easier to set up SSL on the LB: you don't need to set up servers one by one, and all of them (HTTP, SMTP, POP, IMAP and others) are protected by the same SSL certificate and cipher suite with an SSL-terminating LB. Also, many LBs support auto-renewal.


While I appreciate the efforts of certbot to make it as user-friendly as possible, I still find this state of things unforgivable. I don't know where it went wrong so that today a developer must spend time learning and tweaking low-level encryption tools. I'm just saying HTTPS will never be at 100% unless it becomes a baked-in feature of any hosting.


Developers don't need to, unless they're the ones hosting your website. In which case, yes, I expect them to be able to configure web hosting software.


This is sadly not the case yet in many not so edge cases.

For instance Heroku still doesn't provide straightforward support for wildcard domains under SSL: https://devcenter.heroku.com/articles/understanding-ssl-on-h...

There is a myriad of other cases; basically every time you diverge a bit from the 80% path, you're in for a treat and will deal with all the intricacies of SSL management.


Certbot, and most other standalone ACME clients, are just stop-gaps.

The end game is first-party support for automatic HTTPS in all web (and other) servers. It is happening (e.g. mod_md), it's just going to take time. For example, to get it packaged for all distributions.

For shared hosting, if you ignore the few providers at the top who are either CAs (e.g. GoDaddy) or are in contracts with CAs (e.g. Namecheap), the overwhelming majority of them are already providing free and automatic SSL for all hosted domains.


> The end game is first-party support for automatic HTTPS in all web (and other) servers.

There's still a need for certbot et al when you have multiple services (e.g. web and mail and XMPP) running on a single domain name. In fact, I actively avoid servers that insist on doing ACME themselves because it breaks my unified ACME process.


A management fad called DevOps is what went wrong; before, you could count on your sysadmin to take care of that :) Apart from that, not everything always makes sense to use in production without a good level of understanding, and might otherwise lead to, for example, a false sense of security.


Starting with baking ACMEv2 in the major webservers (apache, IIS, etc).


If Microsoft baked auto-cert-install into IIS and allowed you to cherry-pick a provider, and/or just select their own free CA, that'd really solve the problem for Windows-based web servers. In my experience CertBot/ACME-type renewal doesn't work reliably for Windows/IIS.


What is the value in HTTPS being 100%? That seems silly to me. Many many things do not have any need for encryption.


Most things would benefit from encryption. Even if you don't need integrity protection, and you don't have any need of privacy, and you don't care about authenticating your peers you still want encryption because otherwise middleboxes ossify everything.

If the middlebox can't see inside your flow because it's encrypted it can't object to whatever new thing it's scared of this time whether that's HD video or a new HTML tag.


Not a significant issue in practice as far as I can tell. I deliver text over the internet, and sometimes binaries over the internet, and it happens very fast because there is no useless cruft in the process to satisfy some security twonk's paranoid delusions.


It’s for ops. Not dev


Maybe not everyone host website on a platform where you can easily install these things.

For example, I have a simple web app hosted on Heroku's free plan, and I have to use CloudFlare SSL to get it served over HTTPS on my custom domain. But it is actually only half encrypted, as the connection between CloudFlare and Heroku is plain HTTP.


Right. Another example is that you cannot use https with subdomains on GitHub Pages.

https://github.community/t5/GitHub-Pages/Does-GitHub-Pages-S...


I'd appreciate it if the instructions explained why they need sudo, rather than explain what sudo is...


> I don't get it. With Lets Encrypt, it's like one or two lines to get everything set up.

My employer won't use Let's Encrypt because they (LE) want unlimited indemnity and that's a deal breaker for them (employer).


To add to your point, a lot of insurers only provide cyber insurance with a certificate from a specific range of CAs, and LetsEncrypt is not one of them. Frustratingly, Symantec is allowed.


I'm fine with people who think it's too hard...

What i cannot stand is people who can do it, but refuse to out of laziness. Or because they want their content to be insecure on purpose.

This applies mostly to big orgs, so indie devs can have some leeway if it's too hard to implement.


> i cannot stand is people who can do it, but refuse to out of laziness

(Raises guilty hand)

I run a couple of sites on my hosted server that are still http. They both sit behind a varnish setup and to be honest I just have not found the time to get it done. Usually when I mess with my configurations I lose a week to troubleshooting stupid stuff and I just can't bring myself to do it.


Hey at least you aren't running a fortune 500 with millions of users (You aren't right?)


haha, right. They're really just hobby projects, almost entirely read-only.


Assuming you are talking about software developers, you can't expect people to do extra work out of virtue. They will do it only if there is an economic incentive. Setting up transport layer security is not in a software developer's interest or competence.


This is about managers and executives who call the shots on implementing these features. It is not your responsibility as a software dev working for a big company to implement something they do not pay you for.


I'm curious what your opinion is on people who don't, to make a point.


I mean, if you don't value your users privacy of course i'm not going to think you're a very swell person.

Again this really only applies to people in a comfortable position to do this and choose not to. The average developer is not my target here, it's the big guys.


I don't do it on my own site. I'm capable of doing it, and certainly did it for my job. But my own site... It's free with HTTP, but they charge for every level that includes HTTPS. I'm its major user (so far) so \/\/


Depends on your setup.

I currently use a mini CDN (content delivery network) of three different OpenVZ servers in the cloud to host my content, so getting things to work with Let's Encrypt took about two or three days of writing Bash and Ansible scripts: they get the challenge-response from Let's Encrypt, upload it to all my cloud nodes, have Let's Encrypt verify it got a good response, upload the new cert to all of the cloud nodes, then use Ansible to log in to each node, put the new cert where the web server can see it, and restart the web server.

Point being, the amount of effort needed to get things to work with Let’s Encrypt varies, and can be non-trivial.


I started using lets-encrypt before it supported Nginx (using standalone mode). I recently tried the Nginx-based mode, and it wrecked my reverse proxy config pretty thoroughly.

Still, the stand-alone mode is pretty dang easy. I've also considered the /.well-known mode but there was some tiny snag.

