Chromium and Mozilla to enforce 1 year validity for TLS certificates (googlesource.com)
203 points by vld 15 days ago | hide | past | favorite | 363 comments

With the tightening of certificate trust, demise of self-signed certificates, etc., is there any remaining way to establish a consumer-oriented HTTPS server on a local network? Thinking of things like routers, printers, and self-hosted IoT devices here. Some of the label printers we support at work have simply atrocious workarounds to get them to work, and I'm wondering if it's the manufacturer's fault or if that use case has been completely abandoned in the push for tighter security on the Internet.

It’s a glaring security hole, IMHO. I create such devices, and the only way I know of is self-signed certs, but browsers complain loudly about those. Ideally there’d be a way to sign .local domains, with browsers handling them gracefully while letting people know to verify the identity of their local devices/services themselves, since that identity isn’t verified the way it is for most HTTPS sites.

The issue lies with the browsers and the HTTPS trust model. SSH can do encryption without requiring third-party identity verification. It handles it by asking "Do you want to trust this new server?", and then informs you if the key changes. Browsers could easily implement that for .local with self-signed certs.

Of course browser developers assume everyone has internet all the time and you only access servers with signed domains. I’ve wondered what it’d take to get an ITEF/W3C RFQ published for .local self-signed behavior.

(Edit: RFQ, not my autocomplete’s RTF)

> Ideally there’d be a way to sign .local domains with browsers handling it while letting people know to verify the identity of their local devices/services and that the identity isn’t verified by https like most sites.

For these types of sites we run a local CA and sign regular certificates for these domains, then distribute the CA certificate to our Windows clients through a GPO. When put into the correct store, all our "locally-signed" certificates show as valid.
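For anyone who hasn't done it, the local-CA part is only a few openssl commands; a minimal sketch (names and lifetimes are placeholders, and a real deployment should protect ca.key far more carefully):

```shell
# 1. Create the CA: a key plus a self-signed root certificate.
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=Example Local CA"

# 2. Create a key and signing request for an internal service.
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout device.key -out device.csr -subj "/CN=printer.internal.example.com"

# 3. Sign it with the CA, adding the SAN that browsers require.
printf "subjectAltName=DNS:printer.internal.example.com\n" > san.ext
openssl x509 -req -in device.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -sha256 -out device.crt -extfile san.ext

# Any client that trusts ca.crt (e.g. pushed via GPO) now accepts device.crt.
openssl verify -CAfile ca.crt device.crt
```

The GPO distribution step is the part that makes this workable at scale; the signing itself is the easy bit.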

In other instances where I haven't been able to do that, like for disparate VPN clients and such, I will generally assign an RFC1918 address to it, with service.vpn.ourdomain.com resolving to that private address. As long as I can respond to a DNS challenge, I can still get a Let's Encrypt certificate for that domain.

> In other instances where I haven't been able to do that, like for disparate VPN clients and such, I will generally assign an RFC1918 address to it, with service.vpn.ourdomain.com resolving to that private address. As long as I can respond to a DNS challenge, I can still get a Let's Encrypt certificate for that domain.

This is basically what I've been doing lately as well. I'll create a wildcard Let's Encrypt cert for *.vpn.ourdomain.com and then point the subdomains to internal IPs. You can even set up split DNS, where the public side answers the Let's Encrypt challenge TXT records but only the internal side responds to requests under .vpn.ourdomain.com.
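For reference, the internal half of such a split-DNS setup can be a single resolver rule; a hypothetical dnsmasq snippet (names and addresses made up):

```
# dnsmasq on the LAN: answer names under vpn.ourdomain.com with private
# addresses. Public DNS only ever carries the _acme-challenge TXT records
# that Let's Encrypt checks during issuance.
address=/service.vpn.ourdomain.com/10.8.0.10
```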

That works if you have control over the device/network. IoT devices usually don’t work that way. It’d be nice to be able to ensure the passcode to the device isn’t broadcast in clear http.

It handles it by asking "Do you want to trust this new server?"

Asking the end user to accept downgraded security is a huge security antipattern.

Also, if I’m operating an evil wifi AP at a coffee shop and I intercept your web request for bankofamerica.com with a redirect to bankofamerica.local, would HSTS prevent the redirect? Or could I then serve you a bad cert and trick you into accepting it?

Also, what sokoloff said makes a lot of sense. Encryption without authentication is worthless, and that cert chain only works insofar as someone at the top vouches for someone’s identity. If that’s your print server, then you are the one vouching for its identity. It makes more sense for you to be the certificate authority and just build your own cert chain.

If the browser correctly explained what you were doing, and warned you that this is an attack unless you are in control of the entire network and the machines on it, I don't see the problem.

What would it say?

“You’re connecting to an IoT device that has a worthless certificate. Would you like me to open up a completely pointless AES256 session with it and pretend that you have a secure connection?”

Just use HTTP.

The identity isn’t trustworthy, but as mentioned below there are ways to handle that with device IDs. Also, it’s not pointless to encrypt communication with a device whose identity you’ve verified in the past: it prevents attackers from later hijacking the device without the user knowing it, just like the nastygram you get when your SSH server’s host key changes.

You’re effectively claiming SSH’s encryption is pointless and/or useless because it doesn’t use certificate chains to verify the host’s identity. By that argument, any devops engineer SSHing into a new local server is wasting their time and should just use telnet.

(Disclaimer: I'm not a security expert).

Ideally, I think, something like this: "You're trying to connect to a new device on your local network. To make sure this is secure, please check that the device has a display or a printed label that says 'HTTPS certificate ID: correct horse battery staple couple more random words'." (Mobile devices might offer to scan a QR code instead.)

I'm pretty sure that if at least one major browser vendor implemented something like this (denoted by a special OID on the certificate), IoT vendors would be happy to follow. Verifying a phrase or scanning a code is not a big burden, and it resolves the trust issues.

The fingerprint could come either from a private key generated on the device (for devices that have a display and can show dynamic content), or from a vendor's self-signed "CA" with special critical restrictions (no trust for any signature unless individually verified, and signed certs only valid on what clients consider a local network) whose private keys are not on the device itself (for devices with printed labels, to avoid having the same private key on all devices).
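As a sketch, the device side of this is tiny; the display/label value could just be a digest of the self-signed cert (the curve, digest, and word encoding are all assumptions here, not a spec):

```shell
# Generate the device's self-signed certificate on first boot.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout dev.key -out dev.crt -days 3650 -subj "/CN=my-device-23ed.local"

# The SHA-256 fingerprint is what the device would show on its display
# (mapping the hex digest to a word phrase is left out of this sketch).
openssl x509 -in dev.crt -noout -fingerprint -sha256
```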

It certainly sounds a lot better than simply asking browser vendors to give .local a pass on cert validity.

I’m still wary of any flow that would have browser users “accepting” a device as secure - could I impersonate that device on the local network? Could I convince someone to accept a site on the wider internet as their IoT device? Someone smarter than me needs to think hard about these questions.

Maybe another approach would be to build infrastructure (like protocols and client software) to make building a home cert chain easy? A windows client that would let you create a root cert, install it in your cert store, and then give you server certs to hand out to devices? Give it a consumer friendly brand name or something and get IoT vendors to add a front-and-centre option to adopt a new server cert.

Authentication isn’t a tricky problem; it’s the trickiest.

> I’m still wary of any flow that would have browser users “accepting” a device as secure -

It’s about accepting that communication with the device is secure, not guaranteeing that the device itself is secure. In reality you don’t know if your bank’s servers are secure or if they encrypt passwords properly, etc, but you do know it’s them and your communication isn’t tampered with.

> could I impersonate that device on the local network?

Not readily, with a device-specific ID check and TOFU (trust on first use) similar to SSH. If the device certificate were stored permanently for .local URLs like `my-device-23ed.local`, then anyone who tried intercepting or MITM'ing that device would cause the user to receive a warning: "device identity has changed, please check that your device is secure", etc.

Not having any browser support .local certificate or identity "pinning" means that anyone who compromises your network (WiFi PSK cracking, anyone?) can impersonate a device and you'd never know it. Browsers forget self-signed certs regularly, if they let you "pin" the certificate at all. A hacker can intercept the .local URL (trivial) and use another self-signed cert, and the user's only real option is to blindly accept it whenever that happens. Then an intruder can MITM the connection to the device all they want. Is your router's config page really your router? Who knows.

> Could I convince someone to accept a site on the wider internet as their IoT device? Someone smarter than me needs to think hard about these questions.

No `.local` domain is allowed to have a normal cert or global DNS name. They could trick them on first use, but again, a device-specific ID on first use would make that harder to do. After trust-on-first-use, any access afterward couldn't be tricked without an explicit warning to the user that the device identity has changed and something funny might be happening.

If browsers implemented entering a device-specific code as part of the "do you accept this device" prompt on first use, that'd make it a much more usable and secure pattern. It'd standardize the pattern and encourage IoT shops to set up the device ID checking properly.

To impersonate a device using .local certificate/identity pinning, a hacker would need physical access to the device to get its device ID code, then hijack the mDNS request with the correct device-specific .local address (on first use!), then set up a permanent MITM in order to impersonate the device. Otherwise the user would get a warning. Possible, but it requires serious resources. And with physical access you can modify hardware, possibly install a false cert on the user's machine, etc., so security in that scenario is largely compromised already.

Perhaps some IoT devices use custom apps and certificates, but many just use HTTP or self-signed HTTPS. In my experience, IoT device makers have little experience with something like creating a CA, and getting users to install one would be a headache. Time is money on those projects, and if an entire factory goes down because they can't figure out how to install a certificate chain on Windows 7, users will complain loudly. Currently IoT is split between IT departments installing certificate chains and no security at all, and many therefore go with none.

Therefore the current status quo with browser certificates on .local domains encourages far more security gaps and effectively makes it difficult for non-internet-connected devices to operate securely without a fairly expensive and complicated IT setup.

I like this idea, as we already add a device serial ID to the AP-hosted mode, partly for this reason. Chromecasts show a random code and users seem to be able to handle that fine.

That works fine with a brand new device you just unboxed. But what should happen 3 years later? Should the IoT device have a certificate that is valid forever?

I would say yes, as long as certificates are unique per-device. The harm of my light bulb's cert being compromised is less than the harm of my light bulb no longer working if the manufacturer's CA goes offline and it can't renew its cert.

HTTP traffic would be unencrypted, so everyone (esp. on Wifi) could record passwords etc. flying around. With HTTPS, you at least need to MITM the connection to do that. If you establish trust in some other way (cert ID printed on the device?), the connection is secure.

Not just intercept: Using HTTPS prevents messing with the connection in-flight. So an attacker won't be able to inject their own payload into that web page you just requested.

Except you’re using garbage certificates so anyone could MITM you and inject whatever they like.

In what way are self-signed certs garbage? They're essentially the same as SSH host keys.

That's a separate kind of attack, not intrinsic to the protocol. If the keypair got leaked or a CA misbehaves and issues multiple certificates that can be used for a host, then yes, it can happen.

It's like calling someone you don't know, but over a secure line.

I love this idea. There are enough influential tech people who read HN, can we make this happen please?

Thirded. This sounds like a sound solution.

How is that different than how self signed certs work now?

My browser warns me, I can accept the warning for that particular certificate, and it warns me again if it changes.

At least for Chrome and Firefox, I can't easily accept a self-signed cert permanently. It asks again after I exit the browser.

Do you have them configured to clear those settings on exit? Is the certificate actually the same when you visit the site again?

Chrome and Firefox remember the acceptance of self-signed certs for a long time on my PC.

Why do you need to use self-signed certs? (As contrasted with CA-signed certs that happen to be signed by a CA that you own and trust as suggested by arwineap?)

It took me a little over an hour one evening to figure out how to create my own CA, trust it, and sign certs for all my local devices (except my UniFi cloud controller, which I admit I gave up on due to time).

Because 99.99%+ of users don't have the technical skill to do this, but still need to be able to access local devices and it would sure be good if they could do so in a secure manner?

So the answer is to subvert the global certificate infrastructure that protects web traffic? No, it isn’t. Your IoT device has no security at all if a non-technical user is setting it up or if it doesn’t have a way of accepting a user configured certificate, and you shouldn’t pretend otherwise by dressing it up in bad certificates and worthless encrypted tunnels.

Just use HTTP.

I mean if they’re worthless tunnels then so is every SSH tunnel. Should we just go back to telnet?

On your own network? Does it really matter whether you use telnet or ssh? And if it’s on a shared network, don’t you have an IT department that can set up the local key infrastructure and push out certificates?

The argument here is that we should enable lots of shitty IoT devices to masquerade as secure, and inure browser users to clicking ‘yes’ to accept a broken certificate.

If it’s on a managed network, IT can set up a certificate and push that out to client machines. If it’s on your home network, you can do that yourself (unless your IoT device can’t take a user-configured cert, in which case it’s rubbish anyway), and if you can’t, then you might as well use HTTP.

Ok we can always use HTTP instead. I personally hate how using HTTPS gets harder and harder every single year.

A lot of the devices mentioned ("routers, printers, and self-hosted IoT devices") don't give ways to change the certificate. Besides, not every individual or company wants to or is knowledgeable enough to manage their own CA. For the audience in HN it might be an hour one evening, to others the instructions read like black magic.

Routing is scary magic. DNS is scary magic. Wifi is scary magic. Everything in computing is scary magic until someone writes an app with good UX for it. It's not a fundamental problem.

It is a fundamental problem, because you need specialized knowledge to understand what is being presented and asked of you in the UI.

The average user knows absolutely nothing about routing, most will throw their hands up or their eyes will swim if you so much as mention something like IP address.

They also know nothing about DNS and don't have to: because we always give them defaults that they never see and they go along with their lives.

As for wifi, once again, largely automated. Most people never change the default SSID and password. There's some manufacturers that will make a good UI, but it stops at the SSID and passwords because that's the extent of most users' understanding. Some users have a vague understanding that 2.4GHz and 5GHz is different, but don't know the significance of the difference. Channel, authentication type, and other options aren't given to users in those UIs because they simply wouldn't know what to do with it and people don't read manuals anyways.

> SSH can do encryption without requiring identity verification. It handles it by asking "Do you want to trust this new server?".

The problem is to figure out whether to trust the server you need to get its fingerprint through another channel. Is there an HTTPS equivalent of that?

You don't need to get the fingerprint through another channel. Getting the fingerprint through another channel prevents some classes of attacks. Blindly storing the first fingerprint offered also prevents a variety of attacks.

> It handles it by asking "Do you want to trust this new server?"

That's basically how it works, though; your OS packages a group of trusted CA certs. You can add additional trusted CA certs, even ones minted by you, to ensure your apps trust the connection.

There are two options:

* Manually install a root certificate, which is a confusing process for most end users and a non-starter for anyone who cares about security. (Imagine walking your parents through the process.)

* Trust a self-signed certificate, which is an increasingly difficult and counterintuitive process since Chrome and Firefox started competing to see who could destroy their usefulness faster. I'm not even sure if it's possible anymore.

Neither of these are acceptable.

I mean, I’m not sure there’s a solution that will make everyone happy then. Making trusting self-signed certs easy and not scary has real security implications, because users just click through warnings.

Making casual users create their own root certificates sounds like an even worse problem. Now an attacker isn't restricted to impersonating your lightbulbs: they can impersonate any domain if they can get your CA's private key. Now imagine an IoT vendor engaging in questionable practices like creating the CA for you, so the user only has to download an exe that automatically installs the root certificate. The benefit for the vendor would be that all devices you order from their website could ship with correctly signed HTTPS certificates. Later, a hacker dumps the database of root CA keys and uses it to impersonate your bank.

That’s why self-signed and "locally signed" should be distinct concepts, IMHO. The .local domain is already special-cased, and could provide a different UI path, more akin to how SSH works. AFAICT you can’t get an HTTPS cert for a .local domain, so it wouldn’t break the existing HTTPS security model. It’d also provide a more secure way for apps like Syncthing to offer a secure local UI. Getting browsers to accept my self-signed certificate is a pain and makes people just use HTTP.

Honestly, I don't trust most end users to install root certificates

If you are doing something for an end user, I think it makes a lot of sense just to get a certificate; it's just not a large barrier anymore.

The mechanism SSH uses is called Trust on First Use ("TOFU") and is closer to what HTTPS certificate pinning used to be. In this scheme, certificates never expire, and clients warn about any unexpected change in certificate.

It is different from the CA PKI system, where the client trusts any certificate signed by a trusted CA without prompting the user at all, and doesn't prompt the user if the certificate for a site changes.
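The TOFU bookkeeping itself is almost trivial; a rough sketch of the known-certs check a client would do (the file name, host, and stand-in self-signed cert are all made up for illustration):

```shell
# Stand-in for the device's self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout dev.key -out dev.crt -subj "/CN=my-device-23ed.local" 2>/dev/null

KNOWN=known_certs.txt
HOST=my-device-23ed.local
FP=$(openssl x509 -in dev.crt -noout -fingerprint -sha256 | cut -d= -f2)

touch "$KNOWN"
STORED=$(awk -v h="$HOST" '$1 == h {print $2}' "$KNOWN")
if [ -z "$STORED" ]; then
  echo "$HOST $FP" >> "$KNOWN"   # trust on first use
  echo "trusted on first use"
elif [ "$STORED" = "$FP" ]; then
  echo "identity OK"
else
  echo "WARNING: device identity has changed!"
fi
```

This is exactly the shape of an SSH known_hosts check: first contact records the fingerprint, later contacts compare against it.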

Self-signed certificates give you basically this. It's a bit of a hassle to mark them as trusted, but you only have to do it once.

That has not been my experience with current versions of Chrome.

Crazy idea: Why not serve an initial page over HTTP, and then implement encryption in JS using webcrypto for all subsequent calls.

I'm not sure self-signed HTTPS can do much better than this anyways.

(Yes, yes, it's a crazy idea, hehe)

You can no longer do webcrypto because the initial page is compromised.

Self signed HTTPS works for this case as long as you know the fingerprint/cert to accept.

Oh, yeah... webcrypto only works on HTTPS.

So you would need to ship a crypto library in JS, hehe :)

Self-signed certs probably do work, if you install the certificate root on your machine. It's just not something you would advise end users to do.

The other problem is that if the entire page and script are not served over HTTPS, shipping a crypto library is no longer useful, because the crypto library itself can be compromised in transit.

> (Edit: RFQ, not my autocomplete’s RTF)

Sorry for the mostly insubstantial comment but it may help you in the future: it’s RFC (Request For Comments) not RFQ.

And it’s IETF (Internet Engineering Task Force) not ITEF.

Plex uses a combination of wildcard certificates and a custom DNS resolver to offer HTTPS on local networks, but it does require a working internet connection. [1]

You can also get a certificate through the Let's Encrypt DNS challenge without having to expose a server to the Internet, but you'll still need ownership of a domain name and either an internet connection or a local DNS server to support HTTPS using that certificate.

There is always the option of creating a local certificate authority for your devices, but this is kind of a pain. There are some new applications that aim to make this easier [2], but there is no easy way around having to install the root certificate on each device.

[1] https://blog.filippo.io/how-plex-is-doing-https-for-all-its-... [2] https://github.com/smallstep/certificates

If you just want the green lock to show up, the device can get a certificate from Let's Encrypt. The manufacturer would need to provide an API that lets the device pass the DNS challenge.

For example, you could have serialnumber.manufacturer-homedevices.net, and each device would get a cert for its serial's host name. Ideally, you should properly secure that API with some form of attestation key included on the device. Alternatively, the host name could be e.g. the hash of the device's generated key (that way you could ship the devices without placing individual keys on them, but the host name would change after a factory reset).
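The hash-of-key variant might look something like this on the device (the digest and the 16-character label length are arbitrary choices for this sketch, and the domain is hypothetical):

```shell
# Key generated on first boot (and regenerated after a factory reset,
# which is why the derived host name changes then).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
  -out device.key 2>/dev/null

# Derive a stable host name label from the SHA-256 of the public key.
LABEL=$(openssl pkey -in device.key -pubout -outform DER 2>/dev/null \
  | openssl dgst -sha256 | awk '{print substr($NF, 1, 16)}')
echo "${LABEL}.manufacturer-homedevices.net"
```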

Making this actually secure is hard, though, because you need the user to visit the URL for his device. If an attacker can simply get a cert for differentserial.manufacturer-homedevices.net and direct the victim there, you don't win much actual security.

I'm not a huge fan of it, but it seems like the way things are going is to simply run a service which is basically a large proxy.

Your device connects out with some kind of persistent connection to their central service then requests to your device go to their server, which does AAA and routes to your local device. Fixes the SSL issue, avoids any NAT headaches, enables fully remote access and most importantly for PMs it makes the device useless without your server-side components. If there is any local accessibility at all, it can be neutered or reduced.

I don't entirely hate this model. It's not my favorite, but it's the way things are going.

I would be happy if my router supported letsencrypt.

Why would I even bother copying and distributing self-signed certificates if I can just properly get a certificate for my own personal router?

It’s idiotic that people still trust pure HTTP and have no option of switching.

If your router needs configuring before it can access the Internet, then it can’t use certificates that require the internet to generate or validate.

Or if you change ISP and need to change your router internet connection configuration, your router cannot be accessed.

When was the last time you needed to configure your router to access the internet?

I understand if that router is something industrial, but then you can probably figure out how to do that over SSH anyway (which is secure).

I think this is the only practical answer, unfortunately. Everything else might possibly be made to work for a personal project, but definitely isn't an option at scale.

The way plex does it works great at scale.

It requires people to care more about "self hosted" than "PM says this will centralize user access and allow us to collect data and better monetize"

I'm not knocking either model. They both work (technically) but you need to understand your market and what works better for them.

Buy a domain, create a subdomain for local use, and issue ACME certs with Let's Encrypt every 60 days.

If your vendor device or software doesn't support automated certificate rotation, put nginx/haproxy/envoy in front of it.
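A hypothetical nginx front for such a device could be as small as this (paths, names, and the device address are placeholders):

```
server {
    listen 443 ssl;
    server_name device1.vpn.ourdomain.com;

    # Rotated by the ACME client on the proxy host; the device behind
    # nginx never has to see or renew a certificate.
    ssl_certificate     /etc/letsencrypt/live/vpn.ourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vpn.ourdomain.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.50;  # the device's plain-HTTP interface
    }
}
```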

This won't work either, btw: You'd have to request from Let's Encrypt a new certificate for each individual device. LE has several rate limits that will prevent that from working for anything more than a trivial number of devices: https://letsencrypt.org/docs/rate-limits/

The only way I see this working is if you purchase not just a domain but also an internet-facing server and do the renewal and certificate management centrally for all devices - at which point, your device is definitely not standalone anymore.

This will work fine. Let's Encrypt will raise rate limits for you. I've done it for a commercial CDN and they were very accommodating and helpful.

Plex does this, for example, though they use DigiCert's free certificates: https://www.plex.tv/blog/its-not-easy-being-green-secure-com...

The LE rate limits are (mostly?) for new cert issuance. I’ve never run into a rate limit on automated renewals and seem to recall it was either non-existent or comically far away from anything any individual would hit.

You can do wildcard certs with LE, I run hundreds of k8s services all secured with LE and wildcard certs.
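With cert-manager, such a wildcard cert is one small resource; a hypothetical manifest (issuer name and domain are placeholders, and the referenced issuer must use a DNS-01 solver, since wildcards require the DNS challenge):

```
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-internal
spec:
  secretName: wildcard-internal-tls
  dnsNames:
    - "*.internal.example.com"
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer
```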

We're talking about customer hardware. If someone looks at the insides of the device and finds, of course, the private key for your one shared wildcard certificate, the issuer is required to revoke it immediately.

You can, but that wouldn't quite work for the prosumer router manufacturer case the OP mentioned: LE would revoke the cert once you distributed it.

You can, but, you can't (by policy) distribute keys across multiple customers.

I have a nasty habit of requesting revocation of such compromised keys whenever I find them. CAs are required to revoke within 24 hours, I think, though unfortunately revocation is surprisingly ineffective.

Do you actually find those often? I've actually never seen one. I will admit I've also never specifically looked very hard.

I'd say one every couple of years.

https://letsencrypt.org/docs/certificates-for-localhost/ has great documentation on that topic, including more examples.

This is a ridiculous requirement that is not at all practical.

How is it not practical? It's really not hard to set up and there is great documentation out there

I get the feeling you have no real experience either running a company network or dealing with end users and home networks. Any of these solutions work fine for a majority of people who just use their laptop in Starbucks, but they really break down when you need to start doing anything more complicated than that.

Please, educate me as to what I am overlooking. The requirements of buying a domain name and getting a LE wildcard cert should be trivial to someone with the experience you seem to have.

Why would the average customer of IoT products have the same expertise as the person you are replying to?

All of these juvenile "It's easy! Just implement ${SUPER COMPLICATED INFRASTRUCTURE WITH SPECIFIC REQUIREMENTS AND LIMITATIONS I'M GOING TO PRETEND AWAY}" replies from eager-idiot hacker tweens are just trolling.

You can also run your own CA.

> This is a ridiculous requirement that is not at all practical.

Really. This: https://jamielinux.com/docs/openssl-certificate-authority/ gives you a CA in about an hour. HashiCorp Vault will give you a CA in 5 minutes. certstrap will give you a CA in 15 seconds. It’s 2020, it ain’t voodoo anymore.

Just spinning up a CA is a couple of commands. Running one sanely (to include security of the private keys, availability and auditability of the signing machine, keeping backups, publishing a CRL, setting up ACME if you want any kind of automation) is significantly more involved.

But this is silly. If this isn’t completely trivial to add to your app then something has gone horribly wrong.

* Every machine in your infra already has backups, right? Nothing about your signing boxes are special in this regard.

* All your services are already HA, right? The API servers that now have to run some glorified OpenSSL commands aren’t any different than your normal API endpoints.

* You already have to protect secrets on your machines. DB passwords, API keys. What’s one more?

* You don’t have to implement ACME. These are your devices talking to your devices.

Nothing different than any long lived component in any infrastructure. There is no reason to look for reasons not to use a CA.

For a company? Absolutely not. In private? Probably not worth the effort, just skip the cert warning.

Most hardware in this category really needs to be set-and-forget, whether online or not. You can't have every random sound system and light controller having to dial out to a third party every month. You need to be able to come back five years later and still be able to configure the hardware.

Without buying a domain (and continuously spending money to keep owning it).

Run your own CA internally and handle the CA distribution problem with MDM tools.

I admit, that's a solution, even if a very unpleasant one: installing a custom root CA is intentionally complicated, so this is hardly doable as an onboarding experience. And the setup must be repeated for every single client device that should access the server.

There remains the question how I would get the CA certificate onto client devices in the first place.

Lastly, with asking consumers to install a CA certificate, I ask for a significantly more powerful permission than if I could just have them trust my certificate. This seems like a step backwards security-wise.

> Lastly, with asking consumers to install a CA certificate, I ask for a significantly more powerful permission than if I could just have them trust my certificate.

CA certificates can be constrained. https://tools.ietf.org/html/rfc5280#section-

Are common certificate validation libraries honoring these constraints?

When I tried to use this many moons ago, most things ignored the constraints, although I could mark the extension critical, and then some (but not all, yay) of the things that didn't understand it would refuse the CA.

IDK NSS seems to have code to verify it:


As does webpki:


But haven't tested it (or checked other libraries).

Update: tested it with openssl and webpki. both claim to have support but it only works with openssl. For webpki I had to file two bugs:



Can you name and shame those that ignored the critical extension? Sounds CVE-worthy. A date to guess the versions you used would also help.

No, it was on the order of 5 years ago; everybody was garbage back then. But, if this had become usable, I would expect to have seen articles about using it since then.

How do you actually generate a constrained CA certificate? I have tried to do this for a long time but openssl is inscrutable.

There seems to be a guide for openssl here [0] but it seems kinda complicated. This discussion inspired me to add name constraints support to rcgen [2]. If you aren't afraid to write Rust, you should give using it a try.

[0] https://www.marcanoonline.com/post/2016/09/restrict-certific...

[1] https://tools.ietf.org/html/rfc5280#page-41

[2] https://github.com/est31/rcgen/commit/059cc19fcd1b8bb57feed5...
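If you'd rather stay with openssl, here is a minimal sketch of a name-constrained CA (domain names are placeholders); it relies on openssl enforcing name constraints during `verify`:

```shell
# A root CA that is only valid for names under .home.example.com.
cat > nc-ca.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3_ca
prompt = no
[dn]
CN = Constrained Home CA
[v3_ca]
basicConstraints = critical,CA:TRUE
keyUsage = critical,keyCertSign,cRLSign
nameConstraints = critical,permitted;DNS:.home.example.com
EOF
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 3650 \
  -keyout nc-ca.key -out nc-ca.crt -config nc-ca.cnf

# A leaf inside the permitted subtree verifies fine...
openssl req -newkey rsa:2048 -nodes -keyout ok.key -out ok.csr \
  -subj "/CN=printer.home.example.com"
printf "subjectAltName=DNS:printer.home.example.com\n" > ok.ext
openssl x509 -req -in ok.csr -CA nc-ca.crt -CAkey nc-ca.key -CAcreateserial \
  -days 365 -sha256 -out ok.crt -extfile ok.ext
openssl verify -CAfile nc-ca.crt ok.crt

# ...while one outside it is rejected (permitted subtree violation).
openssl req -newkey rsa:2048 -nodes -keyout bad.key -out bad.csr \
  -subj "/CN=bank.example.org"
printf "subjectAltName=DNS:bank.example.org\n" > bad.ext
openssl x509 -req -in bad.csr -CA nc-ca.crt -CAkey nc-ca.key -CAcreateserial \
  -days 365 -sha256 -out bad.crt -extfile bad.ext
openssl verify -CAfile nc-ca.crt bad.crt || echo "rejected, as intended"
```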


It is no more complicated than a self signed certificate. Two clicks in Firefox, 4 taps in iOS

Every time, and no tracking of whether the device identity changes.

That is also an insane and unrealistic suggestion.

~$5/year (US) for a domain, and a one time investment of setting up a few scripts might save you a lot of time in the long run.

Honestly, my main issue is not even the price, it's that devices cannot be stand-alone anymore. Even if my device is purely for LAN use and wouldn't need the internet at all, I now need to ensure it has an internet connection and I have to keep a domain owned that must be constantly renewed.

The device will also only be accessible if an internet connection is present, even if both the device and the client are in the same LAN - because the client has to access the device through the domain.

This means, should I ever lose the capacity to support the device and renew the domain, the device will become useless, even if technically, it is still completely functional.

> Honestly, my main issue is not even the price, it's that devices cannot be stand-alone anymore.

That’s not true at all. I’ve created a CA and a script to generate and sign server certificates, and I’ve generated them left, right, and centre for my very standalone, local-network-only services with no access to the internet whatsoever. I added my CA to my browsers and my iPhone and everything works perfectly.

Will you also add it to the iPhones of other people that would want to use the device? (Or more realistically, would they let you add it?)

Depends on them I guess. If it's a corporate phone then it's no problem. The rest can either add it or get used to cert warnings.

If you're in a context where you can personally install it on phones of friends and relatives, that will work, I agree.

I'm thinking of an example to illustrate what I mean. (Sorry if this appears to be moving the goalposts)

Imagine some small business is selling a home surveillance camera, or a network printer, or whatever else. The thing is that it's a product intended for private, layman consumers and intended for LAN use.

With HTTP, you could add a local web server as a simple way to manage the device pretty easily: Just open a server, communicate the IP address to the user, done. No internet connection required, no continuing support from the company required. Even if the company went bust, the existing units continued to work and the web interface stayed accessible.

There seems to be no good way to replicate this with HTTPS. The closest seems indeed to be a custom root CA - however, then you need to communicate to your users how to install the CA certificate on their own devices, clicking through all kinds of scary warnings and dismissing "this section is for admins only" notices. I predict that not a lot of people would do that.

This also leaves you with the challenge of safely getting the certificate to your users. You could serve the certificate from the device over HTTP - however, then you'll require that your customers download a root certificate over an unencrypted connection, without any integrity checks, and install it on their device. This seems like ripping open a major security hole.

Meanwhile, even if the company purchases a domain and attempts to get a certificate from a public CA, deployment will be difficult as described in all the other branches of this thread.

In short, I think you can pick any three of the following four conditions, but I see no way to achieve all four at the same time.

(1) use modern web features (all recently added and all future features require https)

(2) have your site usable on a client device that does not belong to you

(3) present a non-confusing user experience (no cert warnings, etc)

(4) have the device stay accessible even after you stop actively supporting it (by purchasing domains, running cloud services, having deals with CAs, etc etc)

> This also leaves you with the challenge of safely getting the certificate to your users.

Because the hardware vendor does not own nor configure the private network, they are not able to certify to the network’s users that a particular network node is the device it’s supposed to be, and not an impersonator. Only the network administrators can do that, and so it is the network administrators that must generate the certificate and install it on the device. In this way the admins bestow a programmatic declaration of trust on the network node.

The device manufacturers can only provide tools for showing that the device was not tampered with. TLS/SSL certificates are not for that purpose.

This only applies if they want to access internal services without cert warnings, so asking them to install a cert seems reasonable?

> Honestly, my main issue is not even the price, it's that devices cannot be stand-alone anymore.

I'm wondering where the impression of "not anymore" comes from. Really, the situation hasn't changed much. You can have your HTTP web interface. You can have HTTPS with a self-signed cert and click away the warning. The only thing that has really changed is that for your HTTP connection you will get a warning that the connection is not secure.

I don't think the ability of browsers to load HTTP pages will go away any time soon.

Aren't browsers preventing submission of form data over http now?

No. Your DNS can also be local, so you have no Internet dependence.

A .net is 83 cents a month.

That's usually a limited special offer. Not everyone wants to change domains every year.

There’s always the .local TLD, which is reserved for this use case:


That works, but you can't get public certs for it, because you can't prove you own that domain (indeed, you don't :))

Please remind me where I said anything about public certificates. ;-)

xg15 is going to have to run a self-hosted Certificate Authority (CA) and generate certificates himself.

That article goes on to state that .local is reserved by RFC 6762 (multicast DNS), which, if you use that domain on your network, will cause problems with any devices using mDNS, usually Macs or iPhones.

> This document specifies that the DNS top-level domain ".local." is a special domain with special semantics, namely that any fully qualified name ending in ".local." is link-local, and names within this domain are meaningful only on the link where they originate. [...] Any DNS query for a name ending with ".local." MUST be sent to the mDNS IPv4 link-local multicast address (or its IPv6 equivalent FF02::FB).

I'd recommend using something like .lan instead.

I think you’re talking about running DNS locally (not sure) and resolving .local addresses by DNS. In that case, yes, the devices that do lookups by mDNS will experience a delay caused by first querying mDNS before falling back onto DNS. The solution is to set up mDNS for the internal resources.

Using an unregistered domain like .lan has serious security implications. See here: https://serverfault.com/a/17566

.lan is called out in appendix G of the MDNS RFC as "not recommended, but many people do this".

Personally speaking, I'm not too worried about .lan getting registered as a gTLD anytime soon. I'm a lot more worried about forgetting to renew my domain and having things horrifically break if/when that domain gets picked up by someone else. This is a lot more likely...

I’m not sure I understand... What would break on your local network if a public domain you own and use only for internal resources is registered by someone else? How is this different from making up a domain name? In both cases you have to set up something to resolve the names to IP addresses on your local network, be it a hosts file or DNS. I would expect that to keep working regardless of the ownership of the domain name.

I would have said the same about .dev, and did, until Google came along and registered it.

Why not just use mDNS too and stick with .local?

I haven't heard about that yet. This sounds interesting indeed. But how would I get a valid certificate for a .local domain?

You need to set up your own “chain of trust” to verify your self-generated certificates. You can run your own Certificate Authority for example. (There are other approaches too.)

If only we had NameConstraints: we could have a CA limited to *.clientdevices.manufacturer.com, installed in everyone's trust root.

Installed? Everyone?

It would be enough to send it as an intermediate CA cert, no need to install.

Going the self-signed DNS name restricted CA way would likely still not fly with browsers, because there's no way to securely deploy the trust root. (Because if it requires user interaction to install that can be exploited by malicious actors.)

You can always issue the TLS certificate on an internet-facing system that can do the corresponding ACME challenge, then give it to the internal service and have all clients of that service resolve the domain name for the certificate via a static configuration in /etc/hosts. That's how I do TLS for my intranet-only LDAP server.
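The client-side half of that setup is just a static hosts entry; a sketch (the name and address are placeholders, not the commenter's actual LDAP host):

```
# /etc/hosts on each intranet client: resolve the certificate's
# public name to the internal address, so TLS validation succeeds
# even though the service is never reachable from the internet
10.0.0.5    ldap.example.com
```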

It would be nice if there were a way to be a CA for a subdomain. Then each manufacturer could sign *.mydevice.com.

I've just resigned myself to accepting the fact that I am not the browser vendor's target market anymore and I'll have to keep an older copy of FF around for the sole purpose of accessing devices I own that will never see an update to modern crypto/certificate standards.

Do self-signed certs not work? Yes, you have to tell your browser to permanently accept them the first time you connect, but after that, they work.

For some reason, iOS Safari won't do like all the other browsers, show a warning and then let you access. No, it outright rejects self-signed certs. You have to go through the trouble of installing the root CA into the phone, which is not practical.

Why, as a vendor, would you use a self-signed certificate that causes the browser to scream at the customer when you could just not use TLS and serve plain old HTTP?

Some features require Secure Context. Browsers just don't enable those features in a context that isn't secure (HTTP is only considered "secure" on the loopback network to your own machine). If it's a Javascript API it returns an error, if it's an HTML or HTTP feature it doesn't work. "Here's a nickel kid, get yourself a secure context".

Both Chrome / Chromium and Firefox have explicitly set policy that new features (as opposed to tidier ways to do things that already exist like DOM improvements) will require Secure Context, and there's already a weak assumption that even some tidying up will go into secure context when the rationale for not doing so is shaky (e.g. some of the web crypto features that needn't technically require Secure Context do anyway).

I mean yes, browsers have amply demonstrated they don't care about secure local device communication at all. Mostly from ignorance and disinterest. It's their loss, just means everyone has to install an app or that smart lightbulb only talks to the vendor cloud.

Because the alternative is to embed a TLS private key that would allow you to MITM every other one of those devices. Someone extracted it? Looks like you have to either (a) bury your head in the sand or (b) rollout an expensive recall to change certs on those devices.

Why use slightly compromised HTTPS versus plaintext HTTP? Same reason they have those super cheap locks on diaries from the 90s: it's a deterrent. Makes it a little harder to do a bad thing.

You have already answered why no one in their right mind would embed a shared certificate across all devices. I don't think you are being realistic with yourself when you believe people use self-signed certificates; they don't.

You are missing what happens instead. There is just simply no web management interface on the device anymore. You need to download the vendor's app to configure and use the device. Maybe, if the vendor cares, they use their own CA to secure a local connection to the device. Much more likely, the app and device exclusively talk to their cloud and use that as a middleman to exchange information.

But it also makes it a little harder for the user to do what they want too because they have to click through a (correctly) scary-looking security warning.

Chrome seems to intentionally forget you accepted self-signed certificates after some period of time.

I feel like Chrome has generally become much more amnesic over the past few months. I've had to sign in to various services a lot more, which isn't a bad thing; I'm just not sure what (if anything) changed.

Unfortunately the answer is no.

Backdate your self-signed cert; so far that works around any validity-length restrictions.

The answer to that may be to stop using the web browser to do damn near everything. Your computer will load an app, distributed by a trusted app store or downloaded directly from the device, that talks only to the device on the local network, and to a whitelist of allowed internet hosts, and nothing else. The app will have a client certificate so the device won't blindly trust the computer either. Firefox will only communicate over the local network in a limited way, enough to download the app from the device or get a link to the manufacturer's app store profile. Or that discovery can become part of the operating system and browsers will stop talking over the local network at all.

This CCADB vote provides the context missing from this link to a Chromium patch. After the CAs rejected 2017 and 2019 proposals (Ballot 185, Ballot SC22) to reduce certificate lifetimes to ~1 year, Apple announced enforcement of the rejected 398-day limit across all platforms on 01 Sep 2020, the CAs reversed their position while complaining that they were being forced to, and Chromium is now implementing the policy as well.




> SUB ITEM 3.1: Limit TLS Certificates to 398-day validity Last year there was a CA/Browser Forum ballot to set a 398-day maximum validity for TLS certificates. Mozilla voted in favor, but the ballot failed due to a lack of support from CAs. Since then, Apple announced they plan to require that TLS certificates issued on or after September 1, 2020 must not have a validity period greater than 398 days, treating certificates longer than that as a Root Policy violation as well as technically enforcing that they are not accepted. We would like to take your CA’s current situation into account regarding the earliest date when your CA will be able to implement changes to limit new TLS certificates to a maximum 398-day validity period.

Nit: It's CA/B Forum (Certificate Authority / Browser Forum, a standing meeting between the major browser vendors - which are also roughly the set of major OS vendors, except Mozilla stands in for the Free Unixes - and the major publicly trusted Certificate Authorities). The original purpose of this meeting was to find common ground between these two groups, and this has borne considerable fruit over the years in the form of the Baseline Requirements.

CCADB is a totally different service run by Mozilla and Microsoft (using Salesforce, I presume because they both agree this is terrible but neither can accuse the other of using their preferred pet technologies?) notionally open to other trust stores to track lots of tedious paperwork for the relationship with trusted CAs. Audit documents, huge lists of what was issued by who and to do what, when it expires, blah blah blah. Like a public records office it's simultaneously fascinating and a total snooze fest. Mozilla is using it in this case to conduct their routine survey of CAs to check they understand what they're obliged to do, they're not asleep at the wheel and so on.

Sounds like CAs will be forced to keep shrinking cert length until everyone standardizes on 1 month. They no longer have any real power.

A less labor-intensive approach would be to require CAs to revalidate the 'proof of ownership' basis of issued certificates monthly, and publish a revocation via CRL if the validation times out or fails for 1 month + 1 day. This would further encourage automation of the ecosystem without requiring redeployment in the cases where automated verification passes each month.

Less labour intensive for whom?

Anyone using email validation now needs to click a link every month, or their cert goes away.

I used to have the unfortunate task of managing a massive SAN cert used for white-label hosting with a bunch of our customer's domains.

Getting every single customer to get their tech person to look at the mailbox and click a link was often a multi-month process.

Less labor intensive than requiring validation and deploying a new signed certificate every month.

>and publish a revocation via CRL if the validation times out or fails for 1 month + 1 day.

If you're in a position to MITM using a stolen certificate, you're probably also in a position to block the CRL response from going through. Since failing to get an updated CRL doesn't result in a security warning, your CRL proposal is essentially useless.

> you're probably also in a position to block the CRL response from going through

Not if the certificate is OCSP-Must-Staple.

One of the arguments that I've seen for shorter-lived certs is that revocations aren't honored particularly well. If we could fix that, then your proposal would make sense (but I'm not sure that's doable)

Misses the point. The concern is all historic traffic being vulnerable to a single encryption failure.

Short cert lives make certain decloaking much, much more difficult.

It looks like 84% of sites [1] use forward secrecy with modern browsers, which should mean historic traffic is not vulnerable to a leaked key.

It seems like driving this number up is a better way of dealing with historic traffic than quickly expiring certs. Limiting the duration of leaks of future traffic seems like the right justification for short lived certs.

[1] https://www.ssllabs.com/ssl-pulse/

However in TLS 1.2 and earlier in most cases there is also a potentially long-lived key inside the server to enable faster (1-RTT) resumption. Bad guys who obtain this key get to decrypt all TLS sessions protected with that key, even if the client never used resumption at all. This is fixed in TLS 1.3, where having that long term key only lets you see inside subsequent resumptions that don't redo the DH key exchange.

That recent GnuTLS bug resulted in bad guys not even needing to steal that resumption key for any servers using affected versions of GnuTLS because GnuTLS was just initialising it to zero...

I heard perfect forward secrecy is intended to prevent decrypting past traffic.


CAs are resisting any and every change because it's easier; it makes sense.

CAs are resisting because the only person to buy from them is someone who can't set up certbot and lets-encrypt. As soon as they can't issue for longer than a year, their market is being whittled away.

> the only person to buy from [CAs] is someone who can't set up certbot and lets-encrypt

Digicert is in the process of migrating their customers to ACME (the issuance protocol used by Let's Encrypt and certbot). Where's your god now? :)

And that's two out of how many?

Will browsers start allowing self signed certificates though?

As long as you first create a root certificate, you can create as many certificates as you want.

Assuming non-chained root CAs remain trusted.

I can foresee the browsers eventually treating self-created CAs like they currently treat self-signed certs. If they're not traceable to a trusted root CA, then there's no accountability, from a browser perspective, in the event of abuse or breach.

Then people will create their own root CA and use it to sign the existing root CAs. Whatever it takes. Corporate users need internal certificates.

Self-signed certificates are insecure, so, no.

Aren't they allowed already, with a click-thru warning screen? And you can also choose to trust them permanently, aka trust on first use.

No... web of trust is an important aspect to https.

s/web of trust/centralization/

s/centralization/validating ownership

Without centralization I can MITM at the coffee shop and steal passwords.

WoT would fix that, unless the other coffee shop patrons have (directly or indirectly) trusted you.

They should at least allow for local addresses

Remember the good old times when it was not an almighty cartel of browsers that controlled your internet?

This is such an arbitrary decision and so much of a pain in the ass. Again, a limited number of people used their corporate interests to decide for the whole world with almost no discussion.

The worst is that the "security" argument for this change is quite weak. Yes, we can think that shorter certificates are a little bit better to trust for the user, but that should be the choice of the website that you visit.

Now, you as a user are so stupid that browsers will decide for you what website is deemed safe for you to visit, the same as with appstores. Compared to the good old time, like traditional pc software installation, where it was you, the user that was free to decide the websites that you wanted to trust: google.com vs myshaddyfraudyweb.com

I'm surprised to see this as the highest comment on this post.

This is a clear security win, and thus good for users. And no, I don't trust websites to have my best interests in mind, not remotely. Hell, if browsers hadn't started warning about insecure connections then I suspect that even to this day most websites would still be insecure. We used to leave it up to the choice of each website, and that was a clear failure, and now they're being forced to provide better security, which is a clear win.

I agree with you about publicly available websites, but I'm not convinced this policy makes sense for IoT devices, especially for ones that aren't connected to the internet.

CA/B isn't a cartel, indeed it jumps through a bunch of hoops to ensure it isn't a cartel. Cartels are illegal in many countries (the one you're most likely thinking of right now, OPEC, doesn't need to care that cartels are illegal because its members are sovereign entities, and thus they decide what the law is)

Moreover, this didn't come from CA/B anyway, it was rejected there. CA/B agreed the previous 825 day limit, and the 39 month limit before that, but this new rule did not get support at CA/B so Apple imposed it unilaterally (and with some really poor communication but whatever).

Google and Mozilla have just decided that since they wanted this limit, and Apple has effectively imposed it anyway, they might as well go along for the ride.

I'm pretty sure Google, Mozilla, and Apple are who they meant in the first place, not CA/B.

>Compared to the good old time, like traditional pc software installation, where it was you, the user that was free to decide the websites that you wanted to trust: google.com vs myshaddyfraudyweb.com

People can barely tell whether it's really microsoft calling them saying their computer is infected. What makes you think they'll be able to tell the difference between google.com and google-secure-login.com, or whether they should download the "codec pack" that their shady streaming site is offering?

Corporations think they have to protect people from themselves now because people are now required, even encouraged, to blindly run all remote code they're sent. It's because browsers have become the OS. And now it's standard to metaphorically open every email attachment you receive.

> Yes, we can think that shorter certificates are a little bit better to trust for the user, but that should be the choice of the website that you visit.

That sounds like a disagreement; it benefits the user, so let the website opt out? Because websites are known to have users' well-being in mind?

“Yes, we can think that shorter certificates are a little bit better to trust for the user, but that should be the choice of the website that you visit.”

I would think the choice on how long to trust a certificate should be on the user, possibly using the hint that the creator of the certificate gave. You wouldn’t trust a certificate from evil-empire.com, no matter its expiration date, would you?

The discussion should be about whether the browser should make that decision on behalf of the user. I’m not sure I’m in favor of that. On the other hand, browsers already do a lot in that domain, for example by their choice of trusted root certificates (and changes to that list)

Yes, maybe it was not clear, but that was what I wanted to say: it should be the job of the website to decide the expiration date for its certificate. So they decide if they want to look shady and careless and use 10-year certs, or look trustable and serious and use 6-month ones. And indeed users would be able to use that to determine the trust they give to a website.

So in the end, websites determine their 'trust value' without the browsers playing police, which leaves the possibility open for special cases.

For example, if I do a device that is to be used out of internet for 3 years, logically the user will not see an issue with a 5 years certificate.

You mean the good old days of IE 5.5?

No, I don't. Browsers have always controlled the internet, since the web became the dominant way the internet is used. And I'm really quite happy for them to do this, because lord only knows helping my various relatives with their computers has proven to me that someone needs to.

Yeah those good old times when Comodo was hacked and issued certificates for gmail.com and nobody really cared. Or when some shady CAs sold intermediate certificates in devices so you could man in the middle all your network connections (and everyone else's, too).

Too bad those times are over and we have this browser cartel enforcing some basic security standards for TLS. Screw them!

Shortening the validity duration does not stop any of those issues. It just shortens the duration of a potential attack to one year.

The validity time is part of a process where browser vendors have tightened rules for CAs over time.

We got plenty of gradual improvements over time. Validity time does not stop incidents, but it makes the impact smaller and allows ecosystem improvements to propagate faster.

Take for example Certificate Transparency, which is one of the most important ecosystem improvements. It was required for new certificates in 2018. But we still can't rely on Certificate Transparency logging for all certificates, as the certificate lifetimes were so long.

In the future, such improvements will take a maximum of 1 year until all certificates have them.

Eliminate, no. But the goal of security is generally not to make breaches impossible, but to mitigate them / make them harder to achieve.

It's an uphill battle but I'm glad browser vendors are fighting it.

From the source code:


  // For certificates issued on-or-after the BR effective date of 1 July 2012:
  // 60 months.
  // For certificates issued on-or-after 1 April 2015: 39 months.
  // For certificates issued on-or-after 1 March 2018: 825 days.
  // For certificates issued on-or-after 1 September 2020: 398 days.
The source code also requires certificates issued before 1 July 2012 to expire on Jul 1st, 2019 at the latest.

On 30 April 2018 it became a requirement (in Chrome) for all certificates issued after that date to be recorded in a public Certificate Transparency log[0]. A certificate issued on 28 February 2018 could therefore be issued without being logged, while having a validity period of 39 months. Such a certificate would be valid until 28 May 2021.

Does that mean that next May, for the first time ever, the domains of all HTTPS sites on the web will be recorded in a public log? I think the only caveat to that is wildcard certificates.

[0] https://www.feistyduck.com/bulletproof-tls-newsletter/issue_...

In practice it's probably already true, or very close to true, that names from certificates in the Web PKI that are intended to be publicly accessible are all logged. As you observe, if the name listed is a wildcard, this doesn't tell you which (if any) of the names implied by that wildcard actually exist; indeed, no names for which certificates were issued need necessarily exist. The rule is only that if they did exist, they'd belong to the subscriber.

Although the Chrome mandate only technically kicked in on 30 April, in practice most CAs were considerably ahead of that date. In addition, some of the logs are open to third parties uploading old certificates; Google even operates logs that deliberately accept certain untrustworthy certificates, just because it's interesting to collect them.

If you're excited to know what names exist, the Passive DNS suppliers can give you that information for a price today, their records will tell you about names that aren't associated with any type of certificate, and lots of other potentially valuable Business Intelligence. They aren't cheap though, whereas harvesting all of CT is fairly cheap, you can spin up a few k8s workers that collect it all and store it wherever (this is one of the tasks I did in my last job).

This is Google and Mozilla aligning with Apple's earlier announcement (https://support.apple.com/en-us/HT211025).

The CABF has talked about doing this before, most recently in SC22 (https://cabforum.org/2019/09/10/ballot-sc22-reduce-certifica...). In that case all browsers supported it, but it wasn't passed by the CA side.

This may be good for security, but it is extra burden for small web developers and individuals. Big players will have cert renewals automated.

It's possible and free for small players to use Let's Encrypt, but that still takes some time to set up, manage and maintain over time.

Without automation, you've got an annual chore to do or your site goes offline.
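For what it's worth, once certbot is installed the automation reduces to a scheduled renew; a typical sketch (the schedule and path are illustrative):

```
# /etc/cron.d/certbot - attempt renewal twice a day; certbot only
# actually renews certificates that are close to expiry, so this
# is cheap and safe to run often
0 */12 * * * root certbot renew --quiet
```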

I think some hosts are already starting to offer free and easy SSL certs to their small customers, but I do expect automated SSL management to be generally available for the masses before this takes effect.

Check out Caddy Server. It was only a few days ago that I was still managing my own certs and renewing them with a cron job. Caddy now acts as my proxy for my various web domains and it handles certs automatically. Literally, you fill out a few lines in a config called a Caddyfile, you do `caddy run`, and it gets the certs itself. And as long as it's running, it renews them automatically.
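A minimal Caddyfile along those lines might look like this (the domains and backend are placeholders):

```
# Caddy obtains a certificate for each site block on first run
# and keeps renewing it for as long as the server is running
example.com {
    reverse_proxy localhost:8080
}

static.example.com {
    root * /var/www/static
    file_server
}
```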

Giving an Internet-connected program autonomous write-access to system-critical filesystems is not considered good practice in production environments.

Much better to have a separate central cert management system that handles renewals and pushes the certs outwards to the DMZ systems.

This is true for enterprise, but for small business Caddy or Traefik is totally fine.

Caddy is good for enterprise use too because of its configurable, pluggable storage backends (doesn't have to be a file system). You can achieve the permission segmenting your company requires.

You can also use it as a certificate manager independently of a web server if you want.

Responsibilities have changed a bit. If you're going to host a website you are going to have to put a modicum of effort into ensuring that you are not harming others by doing so.

>are not harming others

How is HTTP harmful when you visit my website about amateur radio? An expired cert is no more harmful than bare HTTP in this non-commercial, non-institutional, personal context. It's the one being discussed in this sub-thread, in case you missed it and assumed the normal HN business context.

The burden is real and completely unnecessary for personal websites. This makes the web more commercial by imposing commercial requirements on everyone.

It's what killed off self-signing as a speed bump against massive surveillance and centralized everyone into the benign dictatorship of letsencrypt. But centralization will lead to problems when money is involved. Just look at dot org.

The real harm comes from this fetishism of commercial/institutional security models.

> How is HTTP harmful when you visit my website about amateur radio?

"Unharmful" HTTP sites are used to silently hack people's computers and keep them under observation for months. Every unsecured site contributes its small piece to keeping the web unsafe for people who need it to be safe.


This is exactly the same as blaming getting shot at a neighbor's BBQ on the neighbor for not hiring private security to deal with the government army specifically attacking you.

If your threat model includes nation state attacks you're gonna have problems no matter what. Change your personal behavior accordingly. Don't tell everyone else they need to wear bullet proof vests around the house and hire corporate security goons. They don't and doing so is burdensome.

You're right, today. But as everything, stuff that starts as very advanced tools only at the disposal of big agencies, with time ends up being reachable for more mundane users, or in this case, criminals.

So, in a way, it's probably just a matter of time before the kind of silent hack depicted in the Amnesty article is used for attacks on more general victims. I don't look forward to the day when just reading an unprotected HTTP site is enough to get my phone compromised as part of a widespread scamming effort by someone trying to get credit card details or banking credentials... but it will probably come if we don't all move together toward a more secure WWW.

I know complaining about it won't change the future course of things but these are all problems coming from treating the browser like the OS and exposing more and more low level functionality.

They aren't problems with security in HTTP versus HTTPS for a personal or small business static website.

Totally agree. The current situation with the complexity of browsers is crazy. Its implications are spoiling all around other technologies and causing all sorts of issues, like this one.

The problem with your analogy is that it downplays certain aspects and plays up others.

Attacks on HTTP sites are known threats that we have evidence for; they aren't ridiculous or unheard of. The defense is not "everyone wear a bulletproof vest", it's "get a certificate and set up HTTPS" - a one-time cost that will protect thousands of people.

You're making the choice for your users, who may not be as informed as you are, to not protect them. That's very different from asking them to wear a bulletproof vest.

Agreed. I'm sick to the back teeth of fscking with HTTPS/SSL on all the client static sites I manage. Certbot-apache was so flaky I had to switch every client to Nginx so that I could use certbot-nginx. The web has become a no-go zone for do-it-yourselfers. If I didn't set up my clients on VPSs I don't know how we would manage all the mailserver blacklisting and endless HTTPS/SSL requirements.

dehydrated is nice and painless.

These days, it's harder to set up a website _without_ SSL/TLS. If you're buying a domain name, they'll likely offer free HTTPS. If you're setting up a site via Shopify / Wix / etc, it'll use HTTPS. From what I've seen, sites without a valid certificate are either ancient and no longer maintained, or are built by devs learning the ropes of web development and haven't bothered to set up certbot on their server just yet.

There was already a fetishism of commercial/institutional security, and LetsEncrypt gave it quite the blow. Now companies that you used to have to pay a yearly fee for a certificate are offering their certificates for free.

It does stink that corporations have to be in the middle in the first place, but that's due to the difficult problem of "trust." I'm not sure it's possible to decentralize it, besides some sort of blockchain solution that would be unworkable in the real world.

> It's the one being discussed in this sub-thread in case you missed it and assumed the normal HN business context.

I did miss that but I did not assume a business context.

> The burden is real and completely unecessary for personal websites.

Users who visit your website are still at risk of having their connection hijacked - they could be phished, exploited, etc. This is maybe not something you consider important, and it is certainly a sort of "boil the ocean" approach, but given the efforts put in up to this point, I think most users are probably not visiting HTTP sites on an average day. Continuing that effort seems reasonable.

> This makes the web more commercial by imposing commercial requirements on everyone.

I'm not sure what you mean.

We have had this conversation to death: https://doesmysiteneedhttps.com/

I find some of the arguments on that page, uh ... unhelpful at best. Circular, even.

I guess I could see that for a small subset of the arguments presented, but that leaves all the rest. Honestly, "There's nothing sensitive on my site anyway." covers 90% of the arguments I've seen against HTTPS, and the page's answer to it is strong. The presence of weak arguments doesn't undermine the strong ones.

There are injection tools that don't seek to steal data, but to weaponize your client.

It’s free to own a certificate today, so doesn’t matter.

This is silly to use as a blanket statement. There is nothing harmful about hosting a website. Especially personal sites, internal sites, or small businesses who use it as little more than a brochure that serves static content. My roof repair guy is not harming anyone by posting his information on a basic website.

What if I visit your roof repair guy's site and content is injected, informing me that they now take payments online? Or that I can download their special Roof Repair App to manage my bookings? Or it contains an exploit payload?

It is extremely uncommon for me to actually visit an HTTP website - I even have HTTPSEverywhere block them by default, so I'd know if I were. That means I am relatively protected from such avenues until I visit your roof repair guy's website.

If I were the bad actor I would simply purchase Google ads in the name of the target business, sending traffic to my own site with a wonderful green padlock - it's cheaper and has bigger reach than trying to hijack TCP/IP traffic.

More to the point - if I am running a collection of Karl Marx's works, it is highly unlikely that he would request payments.

You're describing a completely different attack vector, which is the entire point - to push attackers to different attacks. If we eliminate HTTP, we can focus more effort on the attacks you're describing.

Regardless of the content, hijacking is a danger to users.

It's worrisome that you injected yourself into the conversation between me and my users. How is this any of your business?

I don't really understand your point. You're upset because I am advocating for your users despite not being one? I... don't care at all.

It isn't my business so I've done nothing to reach out to your users or interfere in your website. We're having a discussion about technology on a technical forum.

It is the browser developers' business though since they are tasked with protecting users from these specific threats.

Because he might be a user too.

And are you blocking all Web traffic except from people who are your users, somehow? If not, then everyone is your user.

One reason for these proposals was to put pressure on the SSL certificate ecosystem to provide (CAs) and adopt (hosting) automated SSL renewal practices. Businesses have had three years since Let's Encrypt first went live to adopt such practices, but many chose not to — not just hosting providers, but e.g. bigcorp load balancers too.

Guess what - not all websites are businesses. In fact not all websites are dynamically-generated so why the fsck do we all have to put up with this madness? Make HTTPS/SSL necessary for transactional sites but for simple static sites give me a break.

did you know there are ISPs out there that inject garbage code into the html of unprotected sites?

get out of here with this HTTPS is unnecessary tedium.

Better not do business with shady companies then.

or you could follow established best practices and secure your site with TLS.

Both. I am not responsible if you chose a shitty ISP.

But this makes no sense because the user has basically zero control of how their traffic is routed to/from you.

If we could trust the entire network we wouldn’t need TLS for any site.

There are lots of good options for low-maintenance SSL certificates, from self-hosted (Let's Encrypt) to CDNs (CloudFlare) to hosting platforms (Netlify).

Even so, this doesn't actually change much. I've never bought a certificate valid for more than a year. I'm not aware of any major player that sells certificates valid for more than a year. So this rule has existed for a long time in practice, but is only now being codified.

I genuinely believe that in 2020 the myriad ways of getting automated TLS are easier than logging into a website, uploading a CSR, and then placing that certificate somewhere.

Check out Certera https://docs.certera.io

It's PKI for Let's Encrypt certificates. It helps you issue, renew, and revoke certs from a central place. You also get alerts so you know when things have changed, expired, or failed to renew.

While a lot of places give you certs built in, there's a whole world of places where you still need certs: FTP, mail, behind load balancers, disparate environments and systems, etc.

In the future, I'm planning to create a way to automate the certificate exchange process. This should help with using and exchanging certs for client authentication and things like SAML SSO. If expirations get down to a month or less, I see a need for a system that helps do all of these things and more.

This looks interesting as a log of Let's Encrypt certificate operations, but is it more than that, and why would I want to use it?

To centrally manage all of your LE certificates, keys, alerting, etc. You can also more easily use LE certs in a wider array of scenarios too. Check out the docs to learn more.

I did have a look at the docs, but they more explained the how, rather than the why - I missed some kind of intro/overview explaining the value proposition.

I'm still a bit fuzzy on this - why would I want alerting, for example? Automation is a big part of LE, and my certs are configured to auto-renew. If that was to fail for some reason, then LE will send me an email - is it this part where this tool comes in, providing improved alerts where automation has failed?

That's great feedback. I'll update the docs to better explain the why.

To elaborate on the why for alerting, there are many situations that I've seen where things change and subsequently fail silently. Perhaps some dependencies, or maybe configuration changes, caused things to break. Also, alerting doesn't only have to be for your certificates. You can point to any endpoint to monitor as well. There are three aspects of alerting: changes to the cert (perhaps you care about a 3rd party certificate and its underlying key changing), failure to renew, and expirations. Each comes with its own benefits and use cases.

To expand on the why a bit further for the project as a whole, it's really as a way to help consolidate and centralize things. I've seen many disparate ways of using Let's Encrypt. From various clients to some hacks to better support more complicated scenarios. By separating obtaining the certificate from applying, it helps facilitate many things, like using LE certs behind load balancers & proxies, non-standard ports, things that don't speak HTTP, etc.

If certificate expiration continues to decrease in time, we'll need some capabilities to exchange certificates in an automated fashion as well. I'd also like to incorporate Certificate Transparency logs so you can be sure no one has issued certs for your domain(s). There are many cool and interesting scenarios but mostly the challenges come when managing things at scale. So, it's not really all that useful if you're only managing one or two certs.

You've got it backward. It is a boon for small players and trouble for large companies.

Small players can easily get certificates manually or automate. The platforms/tools they use often give certificates out of the box (cloudflare, heroku, wordpress, etc...).

Large players can't manage certificates. Developers/sysadmins can't use let's encrypt because it's prohibited by higher up and blocked. Even if they could use it, it's not supported by their older tools and devices. The last large company I worked for had no automation around certificates and the department that handled certificate requests was sabotaging attempts to automate, possibly out of fear of losing their jobs.

> that still takes some time to set up, manage and maintain over time.

I’d say it takes less time than going through a single paid certificate store… Assuming you already have a tool. If you don’t, then maybe it’s the same or 5 minutes more.

Can you describe the kind of person who hosts their own website but cannot easily set up Let's Encrypt automatic renewal?

There's no cert because there's no need for one in the first place. Mentioning that is pretty silly - it's obvious that there's nothing wrong with a static site with no cert, and no one is arguing against that.

> no need for one in the first place ... it's obvious that there's nothing wrong with a static site with no cert

Oh yes, there is.



> Your site needs HTTPS.

> there's nothing wrong with a static site with no cert

Not really. Google says "switch to HTTPS or lose ranking":


Good to note. But I think you're distracting from the article's talking point.

I disagree with "switch to HTTPS or lose ranking", but that's an HTTP vs. HTTPS issue with Google's search ranking, not about Chromium or Mozilla. This article is about Chromium & Mozilla making stricter rules for HTTPS certificates. That's not a bad thing, to hold HTTPS sites to a better standard.

The whole "Let's Encrypt should solve all your problems" attitude is arrogant and short-sighted.

1) In my experience the user experience even for technical admins is still flaky on at least some popular platforms. In other words, it's not as incredible as you think.

2) It's not available to a host that doesn't connect to the internet but does occasionally get connected to by a local browser (e.g. IoT firewalled inside my LAN is one obvious case; I'm sure there are others).

And most importantly:

3) You'd have to be insane or naive to accept an architecture that leaves you dependent on a single vendor (especially if you need that vendor more than they need you!).

How fortunate, then, that LE isn't the only vendor. Not even the only ACME vendor, nor the only free vendor (https://zerossl.com/features/acme/).

If your device never connects to the internet then how would any public cert work? It would expire like any other?

Me. I use shared hosting on a server that runs an nginx reverse proxy in front of my nginx server. I don't have root on the server. I have an LE cert that I have to manually fiddle with DNS settings every 3 months to renew. If you know how to automate it I'd love to hear about it.

Why doesn't their nginx proxy /.well-known/ requests for your domain to your nginx? Then you could just use `certbot certonly --webroot --webroot-path /path/to/webroot/for/your/domain -d your.domain.name -d www.your.domain.name` once and put `certbot renew` and nginx reload in crontab weekly, and you're good to go.
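To sketch the "crontab weekly" part (the schedule and the nginx reload command here are assumptions about a typical setup, not your host's specifics):

```shell
# Weekly renewal attempt. `certbot renew` only actually renews certs
# that are close to expiry, and --deploy-hook runs only for certs
# that really changed, reloading nginx to pick up the new files.
0 3 * * 1  certbot renew --quiet --deploy-hook "nginx -s reload"
```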

If you can't use HTTP-01 and must use DNS-01 challenge, I would check whether the software that runs your host's DNS management panel has an API in addition to manual mode. If not, I would check for ability to automate HTTP requests to that tool (parse the HTML, submit the forms, basically). My hope would be that the tool is popular and someone already did the work and code exists to operate it as if it had an API.

If you can do that, you can write (or find already written) a certbot plugin that performs the DNS challenge using your credentials for the host-provided DNS settings. certbot has a number of plugins for the big hosting providers: https://github.com/certbot/certbot

certbot is the most popular Let's Encrypt client, but it's not the only one. Maybe another client has support for your situation. I would also ask your hosting provider's support; maybe they know something.
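To illustrate the hook approach: a minimal `--manual-auth-hook` for the DNS-01 challenge might look like the sketch below. The actual API call to your host's DNS panel is the part you'd have to script yourself; this version only prints the record that must exist.

```shell
#!/bin/sh
# Hypothetical manual auth hook for certbot's DNS-01 challenge.
# certbot exports CERTBOT_DOMAIN and CERTBOT_VALIDATION before
# running the hook; a real hook would create the TXT record via
# your DNS panel's API (or by driving its web forms).
print_challenge() {
    echo "Create TXT record: _acme-challenge.$1 = $2"
}

print_challenge "$CERTBOT_DOMAIN" "$CERTBOT_VALIDATION"
```

You'd then point certbot at it with something like `certbot certonly --manual --preferred-challenges dns --manual-auth-hook ./dns-hook.sh -d your.domain.name`.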

Let's Encrypt is broken, or an incredible pain, in so many different setups it's not even funny.

What are setups where it’s broken? Sincerely.

If you can’t accept inbound http traffic then you use DNS verification and if you never contact the internet then no public cert could work for you.

Devices with web based interface (KVM over IP, IPMI, etc).

iDRAC has a CLI tool that can be instrumented to install new certs regularly. I’m sure other vendors do as well.

That's me! I'm technical enough to self-sign certs for my sites (that and Tor are what I do instead), but I run on lots of old hardware and old (>5 years) OSes. The tools for constantly renewing Let's Encrypt certs simply don't work there, and all the containerization options didn't exist yet. I've tried nearly a dozen Let's Encrypt renewal solutions - compiled from source, from debs, "standalone" bash-only solutions, etc. - and there's always a catch that prevents them from working.

Are those >5 year OSes receiving security patches?

They probably receive more security patches than CentOS 8, and by that I mean CentOS 8 is lagging behind.

Shared hosting

Unless they set up LE for their customers

(And as much as I like LE, I think it's complicated to depend on only one issuer)

Semaphor asks "can you describe the kind of person". Since when is "shared hosting" a person?

People who know how to set up a website on a shared hosting platform probably also know how to renew a LE certificate, I think.

Lots of people. Such arrogance from those who post on hackernews.

it takes absolutely no time at all to set up for an individual on their VPS, compared to the faff of going through the OpenSSL CSR process and buying from a CA

"On their VPS". What are you smoking?

It is not much harder to do this every year as opposed to doing this every few years. It’s just an incentive to streamline the process.

> It's possible and free for small players to use letsencrypt, that still takes some time to set up, manage and maintain over time.

If you want to run a webserver but are unable to set up a cronjob that does

  certbot renew
you don't deserve external users. Full stop.

If it's just you and you don't care about your own security, then do whatever you want in your own browser.

I have tried to look at the documentation for certbot, and the amount of effort they put into optimizing the fast path makes it incredibly difficult to do things manually. The documentation is absolutely awful. Certbot uses .pem files, which are practically useless to any JVM-based application. So now you've got to add your --deploy-hook and a custom script to convert everything. Don't use any of the blessed DNS providers? Again, write your own authentication and cleanup hooks. Suddenly your simple certbot setup involves 3 different scripts tailored to your specific situation. Sure, there are nice blog posts that go through the entire thing, but the official documentation basically pretends your use case doesn't even exist, because everyone is running Apache and Nginx, right guys?
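For what it's worth, the PEM-to-JVM step can be a single openssl conversion in the deploy hook. A sketch, assuming certbot's usual layout where RENEWED_LINEAGE points at the live/<domain> directory (the keystore name and password below are placeholders):

```shell
# Sketch of a certbot deploy-hook step: bundle the renewed PEM files
# into a PKCS#12 keystore that a JVM application can load directly.
pem_to_p12() {
    # $1 = directory holding fullchain.pem and privkey.pem
    openssl pkcs12 -export \
        -in "$1/fullchain.pem" \
        -inkey "$1/privkey.pem" \
        -out "$1/keystore.p12" \
        -name server \
        -passout pass:changeit  # placeholder password, not a recommendation
}

# certbot would invoke this with RENEWED_LINEAGE set, e.g.:
#   pem_to_p12 "$RENEWED_LINEAGE"
```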

> you don't deserve external users. Full stop.

It’s shit attitudes like this that killed the old internet we all loved

Don't feed the 5 hour old troll account.

It's not wrong to build whatever you want for yourself.

But if you have external users on your site sending data to your site, you have a responsibility to not treat your users' data as meaningless.

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact