Safari will no longer trust certs valid for more than 13 months (theregister.co.uk)
209 points by nimbius 3 months ago | 175 comments



There are two mutually exclusive views of the web: the web as a set of protocols that allow individual humans to share information about things they love, and the web as a set of protocols to make a living.

There are real reasons for the for-profit web to want to limit cert lifetimes, since revocation doesn't really work in practice. In terms of browser dev the two views are mutually exclusive and the one that funds the coders gets its way. Especially with the W3C marginalized and a mix of corporations and corporate-centric standards groups running things now. Expect to eventually be unable to host a visitable or indexable website without relying on at least one third party service in the near future.


With Let's Encrypt it's cheaper than ever to host a personal website over HTTPS with a certificate that updates itself.

Due to Let's Encrypt, free hosting services like Netlify or GitHub Pages now provide HTTPS certificates, and installing one on your own server is pretty painless, if you're into managing your own server. And if your hosting provider doesn't support Let's Encrypt, you can always put Cloudflare in front of it.

So I don't really understand what you're talking about, when the price and ongoing maintenance burden of HTTPS certificates have gone down significantly for hobbyists.


> Let's Encrypt [...] Netlify [...] GitHub Pages [...] your hosting provider [...] Cloudflare

Those are exactly the kind of third-party services the GP was talking about.

> if you're into managing your own server

One of the core advantages of having a personal server has always been that you can keep it entirely off the internet and run it without any involvement of third-party services. That is no longer possible. Like, I now need to purchase a domain name and set up infrastructure to fulfill Let's Encrypt's challenges, just so I can serve a page on my LAN.


Not to mention people who can't obtain certs due to geopolitics or the inability to enter a legal agreement with the ISRG.

I understand why letsencrypt is needed and what problem it's trying to solve, but forcing international dependencies on a small number of select entities is not the way to go about solving them.

I can't help but feel saddened when I see one of the greatest monuments to human ingenuity, a global decentralized network, get dismantled like this.


You can use HTTP instead of HTTPS or you can use a self-signed cert.


Often, this doesn't even work for small projects:

- HTTP has no access to a significant part of the web APIs added in recent years - and it will be blocked from all APIs that are added in the future.

- Self-signed certs show security warnings that are deliberately confusing and discouraging to click through, and will likely become even more so in the future. Showing those to other people is not an option.


In my opinion this doesn't fall under the "it's only on my LAN and a super small project" category. If your LAN belongs to a company, then you should be able to deploy a custom CA to your clients and sign your certs. If it's only a small side project you personally work on, then just trusting the cert locally works out too. If people don't want to use third party providers, they have to do some of the work on their own. That's nothing new (at least to me).


A small, personal project does not mean that the developer is the only person that uses it. Servers can also be used by friends, family, roommates, etc., for whom installing and managing custom CAs is a hassle.

I do agree that using self-signed certs and clicking through security warnings is possible - however, it is being made deliberately tedious (e.g. Chrome will forget that you accepted the cert after a while). It also seems to me that this path is actively discouraged by browser vendors, so I'm honestly not sure how long it will stay open.

Self-signed certificates are also unpredictable when making API requests to them, because no accept UI is shown for such requests.

> That's nothing new (at least to me).

It absolutely is. With HTTP, you could simply run a local web server and have everyone interested point their browser at it - and everything worked. This is not possible anymore unless you want to make recurring payments for a domain and accept that you need an internet connection.


And this is exactly why SSL everywhere is a really, really bad idea. (Plus the problems of IoT server certs, mentioned above...)


I understand the rationale behind https-everywhere and I believe it's absolutely necessary for the web at large. The problem of network attackers is certainly real.

However, a side-effect (intentional or not) is that the web is turned into a sort of app store: Either you belong to the platform or you don't, and whether or not you do is decided by third parties. (Who, btw, are not even bound by any kind of public mandate - they are simply private, profit-driven companies)

I also don't think the stated security advantages always make sense: Let's Encrypt will serve network attackers just as easily as legitimate customers. Meanwhile it leads to a lot more stuff being exposed on the internet than would be necessary otherwise. We also force devices that should simply expose a local web interface to have a cloud service. I don't see how this makes anything more secure.

I guess what I'd want is simply a way to designate a device as "trusted" locally, without depending on third-party services, internet connectivity or anything else and without anything expiring. A way that should be encouraged to be used by non-technical users as well.


Yeah, right - and risk the wrath of your clients when they ask you why their shiny new website can't be accessed because it doesn't implement HTTPS. Face it, folks, the golden age of the internet is long past. Now we have to jump through a bewildering array of "security" hoops just to host a web page.


I was replying to someone who was complaining about what to do on "my LAN," no mention was made of clients. HTTPS isn't "security" with sarcasm quotes, it's actual security, it really does what it's supposed to do.


>So I don't really understand what you're talking about,

That's because you didn't finish reading my post.

>Expect to eventually be unable to host a visitable or indexable website without relying on at least one third party service in the near future.


You mean like how it has been for decades?

You have always needed at minimum hosting, SSL and domain service providers.


You definitely didn’t need SSL a decade ago, and most sites didn’t use it.


DNS you always needed.


1) no.

2) There are multiple organizations running DNS and the general consensus is that there shouldn’t be a single organization running it (except ICANN which is generally careful to delegate the root and TLDs to other organizations.)

3) Just because you have one bad thing doesn’t mean it’s ok to have two.


Not strictly necessary, but convenient. All browsers I know of allow you to browse to an IP address.


Well, I tried to respond to GP with a tongue-in-cheek link to an IP address for lmgtfy or the Wikipedia article on DNS or something, but every site I tried checked the Host header (I assume), so it didn't work.


This shouldn't be downvoted. I distinctly remember browsing websites by their addresses.


You mean besides the fact that you need an ISP?


NO! Two big reasons, among many others: 1) These are unnecessary third-party entanglements, allowing anyone to be silenced within a year simply by refusing to renew their certificates. 2) It's the 21st century: very soon, most servers will not have administrators - smart IoT devices, if we want them to use SSL/TLS, need certificates. If you think updating these certs on regular servers is a PITA, try it on an embedded device that has very limited or intermittent Internet access, if it's even connected to the public internet at all!


It’s not the price that’s concerning; it’s having external dependencies.


> Expect to eventually be unable to host a visitable or indexable website without relying on at least one third party service in the near future.

I mean... How do you expect to have a valid, trusted TLS certificate without a third party? Nobody says it has to be Lets Encrypt or Cloudflare or Amazon load balancers or ... so forth. Some certificate authorities even already have APIs....


No third party is strictly necessary.

You have to know what the server's public key is in a reliable way. That doesn't necessarily mean a third party has to be involved. Maybe you call them on the phone and check the fingerprint. Maybe it's your server and you sneakernet the key to your client. And to appease the dumb protocol you wrap it up as a certificate (self-signed) and accept yourself as trustworthy (what a concept!).

Even if you use a third party, it doesn't have to be some big company that everybody is functionally obligated to trust. It could be a mutual friend making an introduction.

And certificates are not the only or necessarily the right solution to this problem in all domains. For example, if you have to go and check OCSP every time to see if the certificate might have been revoked, then you are relying on an online trusted source; but if you have an online trusted source, why don't they just tell you the current key and dispense with the certificate mechanism altogether?

X.509 was designed for X.500 and concepts that were relevant for X.500 continue to plague us all. I deployed corporate PKIs back in the 1990s for Sun and Pfizer and worked with other clients. What I remember best from those times was how ridiculous the whole thing is, over-engineered by parties who clearly didn't keep focus on the fundamental problem that public key cryptography is meant to solve.


For the more concrete goal of having a valid TLS certificate that is trusted by browsers, however, you absolutely do need a third party.


My point is that this news item does not change reliance on a third party at all. Not that it’s impossible to exist.


A lot of other apps that use asymmetric crypto to authenticate the other end allow for “trust on first contact” for establishing trust.

It’s a little disappointing that many browsers have no support for this whatsoever and most have very poor support for it.


TOFU has its problems too, but it's neither here nor there; CAs have been a part of HTTPS and SSL since the 90s. This news item, to me, does not signify any shift in whether a third party is needed or not.


I am telling you there is a conspiracy going on to end the open and free web.

We should fiercely start to fight back, otherwise those who made the internet great (the open source hackers working on LAMP) will lose everything.


I'll agree that modern mainstream browsers (and devices!) have an increasingly centralized security model that isn't at all conducive to an open web. But given the current state of the FOSS ecosystem, I'm not terribly concerned in the long term so long as the network layer remains open to all.

(Consider: QtWebEngine, PinePhone, SiFive, OpenTitan, IPFS, DAT, Solid, ...)


I think the web is already a lost battle. It'll just take time (a lot of it) for the control to fully sink in. But I think that once this control becomes mainstream then there will be something else with more freedom. Some form of an alternate internet that will have barriers to entry. Something like tor. It'll probably be based on p2p.


Gab's rework of Mastodon is starting to look like this.


How is the network layer open when legit SMTP servers with proper SPF records are blacklisted by Google and Microsoft with no accountability? The internet is owned by these organisations. It's a fait accompli. It's only a matter of time before sites not hosted on cloud platforms become blacklisted due to the pitfalls of self-hosted certification. Jeeezus.


Refusal of large email providers to properly federate is certainly frustrating, but I was actually referring to the network layer in the OSI model. So long as the raw IP traffic isn't subject to broad and pervasive restrictions by the authorities, new software stacks can be freely invented and used on a worldwide basis (IPFS, for example).

(Speaking of other software stacks, a few Gopher servers remain online today [1] and there's even a DECnet in operation! [2])

[1] http://gopher.floodgap.com/gopher/gw?gopher/0/v2/vstat

[2] http://www.update.uu.se/~bqt/hecnet.html


Does Apple try to kill the web? Does Google try to take it over?

Why doesn't Apple kill a million sh.tty apps just trying to grab attention doing nothing useful?

It's time to get rid of Apple/Google browsers really


CAs have always been third-party services, though. Their APIs just used to involve humans talking to humans, rather than machines talking to machines. But that's no less of an operational dependency.

In a systems engineering plan (e.g. a NASA long-term project), both kinds of dependencies are considered liabilities that must be engineered for long-term reliability, fault-tolerance, etc.


In the past browsers did not scaremonger so much about self-signed certs, and search engines would index sites that were just http and this didn't affect their ranking.


>In the past browsers did not scaremonger so much about self-signed certs

That's necessary because people tend to be fixated on whatever task they're doing, and that causes them to click past any warnings regardless of risk or comprehension.


I can't tell if you are serious here.

You want self-signed certs to be trusted and the web to move back to the days of HTTP where anybody could just intercept your traffic?


That's not what he said.


Search engines have stopped listing http sites? Since when? I do not have https for my site (unclear if SourceForge allows https for custom domains).


It's different if you can "buy" a cert and be "safe" for a few years vs needing to renew it every 90 days. If they choose not to renew your cert, you're out after whatever time is left on your cert.

Let's lower that to 60 ... 45 ... 15 .. ah, let's just renew all certs every day, that's the only way to keep control centralized.


You already have to rely on a third party to get a domain (and to get an IP address)


Every additional third party is an extra problem. It's an extra relationship to track.


Sorry if this sounds dumb but this is a non sequitur for me. How does the for-profit web benefit from being able to revoke certificates? And why does revocation not work in practice? I thought all major browsers check their own lists to see if certs are revoked


This paper:

https://cseweb.ucsd.edu/~schulman/docs/imc15-revocation.pdf

is slightly dated but gives a very good rundown of how revocation checks fail in practice along with different attempts to improve them.


Fsck certificates, period. Make an exception for ecommerce and banking sites but leave the rest of us alone. Certificates have become the equivalent of the UK's national policy of building road humps 50 metres apart to thwart speeding drivers.


You can always run your own company-internal or limited-to-your-home CA and add it as custom root CA to every browser in that domain. These maximum certificate validity rules IIRC don't affect custom root CAs you added.
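
If you want to see how little is involved, here is a minimal sketch of minting such a private root with Python's third-party cryptography package (the names and lifetime below are placeholders); the resulting PEM file is what you would import as a trusted root on your machines:

    # Sketch: create a private root CA certificate (needs the "cryptography" package).
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Home Lab Root CA")])
    now = datetime.datetime.utcnow()

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-issued: subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))  # private roots can be long-lived
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    with open("homelab-root-ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))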

I agree with you that the web is being commercialized, but limiting certificate lifetimes isn't the problem. It's rather stuff like the .org domain sale, google amp, Chrome killing the file:// protocol (after all, html files don't have ads inside them, can't be indexed and they don't make any money for Google), FLOSS projects adopting discord, etc.


What about people who want to share information about things they love with privacy and security? This change seems likely to enhance my privacy and security.


Anybody know why Safari on iOS rejects self-signed certificates for WebSocket connections? (Doesn't happen on desktop.)

This is a major annoyance since it is the only browser that does this and won't work with "wss://" URIs even after accepting the mandatory certificate exception. Accessing the page rightfully shows a warning on all browsers, to which the user can click on "Continue" or similar, to ignore this and start the demo, except if they try with iOS (this is for simple one-off tutorials or demos [1], so maintaining good certs or requesting users to install a custom root CA in their devices is out of the question).

I know, this seems like a StackOverflow question... but the only 100% relevant one I found [2] didn't get much love, so I thought maybe someone at HN knows more about this

[1]: https://github.com/Kurento/kurento-demos-js/tree/master/play...

[2]: https://stackoverflow.com/questions/36741972/using-a-self-si...


Apple recently (last fall) introduced a change that rejects all certificates that are valid for more than two years, whether they're public or private. I would double-check whether the self-signed certificates are valid for longer (many default self-sign processes make ten-year certs, for instance). Failing that, you may have to create a proper private PKI and get users to install the CA, or else use something like Let's Encrypt for certs.
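
A quick way to check what you already have, as a sketch with Python's third-party cryptography package (the file name is a placeholder):

    # Sketch: print the validity window of an existing certificate.
    from cryptography import x509

    with open("server-cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    lifetime = cert.not_valid_after - cert.not_valid_before
    print("not before:", cert.not_valid_before)
    print("not after: ", cert.not_valid_after)
    print("lifetime:  ", lifetime.days, "days")  # over 825 days is what trips Apple's current rule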


Thanks! You gave me one thing to check. The certs are indeed made with 10 years validity, the idea being that they should just make the demo work for a long time, even if it is with a certificate warning.

I don't think it's worth the effort of setting up all the automatic renewal process with Let's Encrypt for what amounts to a quick static code example... so in principle I wanted to avoid having to set that up.

The contents of this article means that soon this 2-year maximum will become a 13-month maximum, right?

Is there any way to create these specific-purpose certs with some kind of "private use" or "development use" flag that allows not having to re-create them so soon? 13 months looks too short of a time period, I'm afraid nobody will remember to refresh the fake certs and the demos will break... for really no good reason (related to the code itself)


No, Apple was clear that this change (the 13-month limit) only affects public certificates, so internal private PKIs and self-signed certificates retain the two-year limit from before. I think that that two-year limit extends to every certificate used for TLS in the system, so there's no way to get around it by manually trusting the certificate or CA chain (there's no 'development use' flag on certs), but that bears testing.


My guess is you're required to install your own private root cert on iOS for doing websocket dev.

https://www.howtogeek.com/253325/how-to-create-an-ios-config...


Apple is the reason I just switched a handful of certificates from Let’s Encrypt to basic 2 year DV.

To support Apple Pay on the web, you have to go to your developer account on the Apple website, and under certificates generate a custom ASN.1 authentication file and upload it to the `/.well-known/` folder on your domain. Once uploaded, you have to click a “Verify” link, to check the file and mark the domain as Apple Pay approved. The issue is the verification only lasts as long as the certificate expiration, and if a new certificate is installed, you have to re-verify the domain with a newly generated authentication file, for each domain. A fresh certificate with an old authentication file does not work.

This means if using Let’s Encrypt, you have to manually step through the verification process for each of your production, staging and development environments every three months. There doesn’t appear to be any automated way to handle this.

After two cycles of this I opted to purchase 2-year certificates just to save the hassle of re-authenticating my web environments on a rolling basis by hand.

This announcement just means more frequent manual processes once again. What a pain.


There is some argument for this kind of certificate pinning (though I'm honestly not sold on the idea), but I think that this example is a further argument for scriptable certificate renewal. Most ACME clients allow you to run scripts after the certificate is renewed, so you could (in principle) trigger a script that does this Apple-specific verification process for you (maybe you could even trigger the "verify" button click by messing around with cURL -- though it'd be pretty annoying for there to be no API for this process).
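
For example, something along these lines could run as a certbot deploy hook (a sketch only: the addresses are placeholders, and since there's no public Apple API for the "Verify" step, all it can really do is flag that a fresh association file and a manual re-verification are needed):

    #!/usr/bin/env python3
    # Sketch of a post-renewal hook, e.g. wired up via:
    #   certbot renew --deploy-hook /usr/local/bin/applepay-reverify-reminder.py
    import os
    import smtplib
    from email.message import EmailMessage

    # certbot sets RENEWED_DOMAINS in the environment for deploy hooks.
    domains = os.environ.get("RENEWED_DOMAINS", "unknown domains")

    msg = EmailMessage()
    msg["Subject"] = f"Cert renewed for {domains}: re-verify Apple Pay domain"
    msg["From"] = "certbot@example.com"   # placeholder addresses
    msg["To"] = "ops@example.com"
    msg.set_content(
        "Generate a fresh domain association file in the Apple developer portal, "
        "upload it under /.well-known/, and click Verify."
    )
    with smtplib.SMTP("localhost") as s:  # assumes a local MTA
        s.send_message(msg)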

By manually getting 2-year certificates you're setting yourself up to forget part of the renewal process. This was the main argument behind Let's Encrypt having such short expiration windows -- it encourages people to script their entire deployments.


I think this is an argument that while a certain group of server administrators and security professionals love ACME, it still has a number of kinks to work out.

In my case I’m not really setting myself up to forget about the renewal process as I have a script to generate the key and CSR and an ansible playbook to update the cert once sent to me. Certificate vendors are more than happy to email you to remind you of an upcoming renewal, in just the same way Let’s Encrypt does.

On a side note, trying to script out the proper setup/migration of Let’s Encrypt is WAY more involved and fraught with mistakes than a simple certificate upload. The failure case is that the initial certificate issuance succeeds but since the initial setup needs to happen before SSL is configured and working with Nginx, you can’t use the same config before and after the initial setup. Thus you need to have two separate Nginx configs and switch them, or you have to use standalone for the initial issuance and webroot for renewals. Both of these are far easier to mess up than uploading a certificate.

I’ve set up more than 10 different servers with Let’s Encrypt and I don’t think a single one has just worked. I think in every case something got messed up along the way, and you only find out about it 70 days later with the renewal email, IF you are the admin email on the LE account.

Don’t get me wrong, I think there are great things about Let’s Encrypt, but it has plenty of thorns to deal with. I’m glad we haven’t all been forced into three month renewals by the CAB forum (since the certificate vendors have a say and they got feedback from customers before agreeing). I am fairly annoyed that Apple decided to unilaterally change the rules when they were already part of an organization that deals with this topic. I can only imagine browser vendors moving forward will have little to no concern for site/server administrators and how their changes are affecting things.

As it stands now, we are at a 400-day certificate lifetime, which means a bad actor can only impersonate for a year, in the name of revocation being performance prohibitive. This is effectively the same as three years from a user's perspective. The only meaningful change would be a lifetime of something like a week or a day, but I shudder to think of all the ways that will fail spectacularly.


> maybe you could even trigger the "verify" button click by messing around with cURL

Good luck with that—some time ago Apple locked down pretty much everything related to developer accounts with 2FA requirement that, of course, works on their proprietary platform via sending codes to logged in Apple devices. Maybe you can snatch it via some Automator kung-fu but I seriously doubt that.


I have my own mini-CA for internal stuff, built using the xca[0] tool with certificates and private keys distributed manually. I usually make the keys valid for two years so that I don't have to renew and redistribute very often. Most of this started as a way to learn how this stuff works, but it's now turned into a "production" thing as I've started using this to issue user certificates for VPN authentication.

Is there any tool that I can use to help automate this in a reasonable manner?

Ideally, I'd love to see a web version of xca that supports ACME with some controls on how ACME certificates get issued. Bonus points for supporting OCSP as distribution of CRLs is another upcoming pain point.

[0] https://hohnstaedt.de/xca/


Step CA (https://smallstep.com/certificates/) is a small, manageable CA designed for private PKI uses, and it now fully supports using ACME to hand out certificates to your endpoints. That means anything like certbot that can speak ACME can pull private certs from a Step-CA instance.


If you're moving Private Keys you are Doing It Wrong. This is very common in VPN setups (and S/MIME) but still a terrible idea and worth taking the time to figure out how you'll make sure you don't do this.


I'm definitely open to learning how to do things better, and I have no doubt that some of what I'm doing is wrong - after all, this whole thing started as a homelab-turned-homeprod learning experience.

I think the proper way to do certs is to have the server (Web, VPN, whatever) create a certificate signing request and private key on the server, send the signing request to the CA to sign it, and then install the resulting signed cert on the server. Is this correct?

What I'm finding in some cases is that there are cases where this just won't work. For example, my QNAP NAS allows me to either create a self-signed certificate (I don't want this, I want it signed by my CA), get one from Let's Encrypt (same issue), or upload certificate, private key, and optional intermediate CA certificate files (and we're back to moving private keys). This is a limitation of QNAP's GUI for sure, but it's not unique to QNAP.

Similarly, I'm not sure how I'd generate the certificate plus private key on an iOS device and submit it for signing (the VPN scenario). This one particularly bothers me because the .mobileconfig file ends up being the key to the castle. Ideally I'd like the client to be authenticated with both a user-specific certificate and EAP, but I don't think iOS supports this. I haven't quite gone very far down this rabbit hole, so it's possible that I'm missing something.

When I finally secure my internal web server (which acts as a reverse proxy for all of my internal services), I'll try the CSR approach for the learning experience. This approach should also work fine on my VPN server.


> I think the proper way to do certs is to have the server (Web, VPN, whatever) create a certificate signing request and private key on the server, send the signing request to the CA to sign it, and then install the resulting signed cert on the server. Is this correct?

Yes. The CSR is public information so it's fine to send that somewhere. "Sign it" [the CSR] is a phrase that doesn't entirely reflect what's happening with the relationship between a CSR and Certificate, experts know it's technically wrong but they say it anyway, so don't sweat it but it's probably misleading.
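
For what it's worth, here is that flow as a sketch with Python's third-party cryptography package (the hostname and file names are placeholders); the private key is written locally and only the CSR is handed to the CA:

    # Sketch: generate a private key and CSR locally; only the CSR leaves this machine.
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"nas.home.example")]))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName(u"nas.home.example")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    # The private key stays here.
    with open("nas.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))

    # Give this CSR to the CA (xca, step-ca, ...) and get a certificate back.
    with open("nas.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))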

I will try to circle back to see if I have any suggestions for your specific scenarios later.


Thanks!

I'm going to try the CSR route the next time I have to do this and see how that works out. xca seems to handle CSRs easily, so doing this less-wrong should work out fine.

I'm not too concerned about the QNAP scenario - it's consumer gear, so I expect it to lean more towards doing things easily over doing things correctly. The iOS scenario is much more interesting to me since this is something that's more applicable in the real world.


Well, you are mostly correct.

There are ways to distribute keys safely. You can use key derivation or you can set up a transport key (yet another key, just to transport session keys). Both methods are allowed by PCI if done correctly.

Unfortunately that's not what typically happens. Setting up infrastructure to transport keys safely is such a pain in the ass that, for SSL/TLS, I always recommend just generating the key at the site of its intended use.


If you're doing it from the machine where your CA is, and using secure transport, then it shouldn't be less secure than doing the CSR/Sign/Certificate dance.

I mean, it would be nice, especially if you are distributing to other people, but I don't really see a difference if we are talking about uploading cert/keys to servers where you have root access.


My desire to learn how this works still applies.

But if I'm dealing with this sort of thing outside of my homelab it'd be nice to see how it all works. I figure that learning how the CSR dance works in my homelab will help me understand how it works in the real world.


Check out Cloudflare's CFSSL https://github.com/cloudflare/cfssl to manage the CA side, its API, and its OCSP responder.

Then check out Netflix's Lemur for issuing and tracking certs automatically. https://github.com/Netflix/lemur


CFSSL was developed with Cloudflare-specific use cases in mind. If you're looking for a general-purpose personal CA, check out smallstep/certificates: https://github.com/smallstep/certificates. Full disclosure, I work for smallstep, but we're not exactly competitors. Regardless, here’s an external analysis of the two: https://jite.eu/2020/2/17/step-ca/


This looks very nice too! I'll have to look at this one closely, it seems to give me most of what I'm looking for. Thanks!


These both look really good! I'll have to check them out in more detail.


Use a web browser that respects your choices. Your software should work for you, not the other way around.


Users can choose to manually trust expired certs in macOS/iOS, I don't believe that's what is at issue here.


I develop for Apple platforms, and it's absolutely mind-boggling how frequently and regularly they break your code for no tangible benefit.

Hopefully their influence doesn't spread to the web too.


There is a tangible benefit. For users.

64-bit only, Project Catalyst, Bitcode, HTTPS-only connections, etc. are examples of initiatives which have definitely caused pain to developers but have immensely benefited users as a whole. And if you don't passionately care about users then frankly find another platform to develop on.


64-bit only certainly doesn't benefit me as a user on the balance. There's a lot of software that's useful that has zero need for yearly updates.


How does it benefit the user for half their software to break, or for them to have to pay a vendor (rightfully) for a massive update/overhaul just so the software keeps working?

64-bit Only benefits Apple because they don't have to maintain the 32-bit stack anymore. Any perf-sensitive software that needed to be 64-bit transitioned a long time ago.


>64-bit only

how has that benefitted the user? Directly and measurably, no "it's 0.3% faster now" excuses.


Not having to load both 32-bit and 64-bit libraries into memory is an efficiency (and battery life) improvement on the system as a whole.


That would fall into the "it's 0.3% faster now" category.


No joint announcement with other industry 'leaders' like Google and Microsoft? What is their stance on this? Will they be making similar changes to Chrome/Edge? And Mozilla? And maybe I am wrong, but compromised certs are game over very soon, aren't they? Reducing their lifetime to 'just' a year still seemingly leaves plenty of time to do enough damage, so what exactly does this high-handed change bring to the table? The other stated reason, that it will keep the cert management folks busy and alert, also sounds like grasping at straws to prop up the decision. I wonder what the full reasons are...


The goal is to promote automation and continue lowering certificate lifetimes as operations get better. This ultimately will allow for lifetimes short enough to be useful.

As for the other browsers, Google originally proposed SC22 (https://cabforum.org/pipermail/servercert-wg/2019-August/000...) last year and all the browsers voted for it. CAs voted it down at the time but there were rumblings via various back channels that several major CAs actually wanted the ballot to pass but for political reasons could not publicly support it.

So while Apple is acting “unilaterally” here, there is universal support among browser makers and tepid support from CAs. You should expect Google and Mozilla to follow suit in the next 6-12 months.


How will that automation verify that a certificate is issued to the legal owner of the web site and not a hacker? Are the challenges used by Let's Encrypt secure? For me, automating certificate issuance will lead to less and less verification, to the point where having a valid certificate will become meaningless.

EDIT: to clarify - there are two bad things about Let's Encrypt:

1. It's automated

2. It's free

The fact that it's automated results in less human intervention along the way, which on one hand lowers costs, but on the other hand makes detecting scams harder (unless they deploy some really sophisticated machine learning that detects fraud).

The fact that it's free means that there's no credit card number or other info that would help identify the actual person that requested certificate issuance.

Together those things make things less secure, not more.

EDIT 2: Both types of Let's Encrypt challenges look like pushing down the responsibility to either web server owner or DNS service. Maybe that's a good thing, since at least there's one fewer party that can screw things up.


> The fact that it's automated results in less human intervention along the way...

Let's Encrypt is far from unique in being heavily automated. If anything, its verifications are more stringent than those used by other major CAs.

> The fact that it's free means that there's no credit card number or other info that would help identify the actual person that requested certificate issuance

Payment details are basically useless in terms of tracking abuse. Attackers have no lack of access to untraceable or fraudulent payment methods.


In regards to your edits, let me also mention that Let’s Encrypt does not offer (and indeed can not offer) “Extended Validation” certificates, due to things related to the two “bad things” you mention. However, extended validation certificates don’t in fact prevent fraud the way they are intended to — indeed, a security researcher managed to get ahold of a certificate for Stripe, Inc. simply by legally registering a corporation under that entity name elsewhere and purchasing a certificate for that legal company.[1] This issue with EV certificates — the fact that they are issued to a company as opposed to just a domain name — causes them to potentially pose a greater risk to web users.

[1] https://arstechnica.com/information-technology/2017/12/nope-...


I think you may not be very familiar with Let’s Encrypt’s challenges. Allow me to briefly explain the gist of them:

The two most common challenges are an http challenge, and a DNS challenge. The http challenge gives you a response code to host as a file on the domain during the validation period. This challenge is, for all practical purposes, random, and cannot be guessed. Then, after your script tells Let’s Encrypt that the response to its challenge is available and up-to-date, Let’s Encrypt performs an http GET request to retrieve that response, and checks to ensure it is exactly what the script provided. Only then does it proceed with signing your CSR and giving you a valid certificate.

This requires (at least temporarily) a web server running on port 80 at the domain in question, and in order to break it you would need to be able to effectively either hijack the A record for the domain as read by Let’s Encrypt, or to break into the web server to properly issue a certificate that one then steals. Impossible? Probably not. Impractical? Very.
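
Roughly, the CA-side check boils down to something like this (a simplified sketch; the real ACME protocol wraps the token in a signed key authorization, but the well-known path is the real one from the spec):

    # Simplified sketch of an HTTP-01 style check from the CA's point of view.
    import urllib.request

    def http01_check(domain: str, token: str, expected: str) -> bool:
        # The CA fetches the well-known challenge path over plain HTTP on port 80...
        url = f"http://{domain}/.well-known/acme-challenge/{token}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode().strip()
        # ...and only proceeds to sign the CSR if the server echoed the expected value.
        return body == expected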

DNS challenge is even more secure, in my opinion, as it works the same but the response code is stored in a TXT record for Let’s Encrypt to validate. In order to break this you would need control of the DNS servers.

So, to put it rather simply, >Are the challenges used by Let’s Encrypt secure? Yes, so long as you trust your DNS and web servers not to be compromised. And if they are, it’s frankly game over anyway.

Now let’s contrast this with, for instance, getting a multi-year certificate from the likes of Verisign or similar: this (as far as I am aware) requires manual interaction, which can at least theoretically allow for human error, of which there are many chances.

Additionally, many more traditional CAs will let an inexperienced user have the CA generate the private key and then transmit it to the user. This opens up a LOT of dangerous possibilities, as now this private key is being saved and moved around, and could easily be missed and left on the workstation used to perform the work. Or a MitM attack could even snatch it in transit.

Honestly, I don’t think there is much (if any) point in still using manual verification. The human aspect of it also opens up chances for forgery, and so on.

Let’s Encrypt’s challenges are specifically designed to be difficult or impossible to hijack, and so far as I understand it the private key should never leave the server it will remain on.

So again, to answer your question succinctly and to the best of my knowledge: yes, the challenges used by Let’s Encrypt are most certainly secure.


> Additionally, many more traditional CAs will let an inexperienced user have the CA generate the private key

This should not be true for any CA in the Web PKI. If you have evidence that a CA trusted by Mozilla offers this service you should give that evidence to m.d.s.policy (or me and I'll see it gets passed on with attribution)

There have been resellers who offer this. These are independent businesses from the CAs, and it's even crazier to let them (basically middlemen with no oversight) pick your private keys or know what they are. But as separate businesses it's hard for us to effectively stop them.


> DNS challenge is even more secure, in my opinion, as it works the same but the response code is stored in a TXT record for Let’s Encrypt to validate. In order to break this you would need control of the DNS servers.

> Now let’s contrast this with, for instance, getting a multi-year certificate from the likes of Verisign or similar: this (as far as I am aware) requires manual interaction, which can at least theoretically allow for human error, of which there are many chances.

What I've never understood is how this doesn't ultimately just shift the security risk to my DNS registrar.

Instead of social-engineering the CA to give me a cert, I have to social-engineer the registrar to store a TXT record. I don't see why one should be significantly harder than the other.

> Additionally, many more traditional CAs will let an inexperienced user have the CA generate the private key and then transmit it to the user. This opens up a LOT of dangerous possibilities, as now this private key is being saved and moved around, and could easily be missed and left on the workstation used to perform the work. Or a MitM attack could even snatch it in transit.

Again, it's harder for a rogue CA to abuse my certificate - but instead, a rogue registrar could now easily manipulate my DNS record and receive a valid cert of its own.


> What I've never understood is how this doesn't ultimately just shift the security risk to my DNS registrar.

> Instead of social-engineering the CA to give me a cert, I have to social-engineer the registrar to store a TXT record. I don't see why one should be significantly harder than the other.

Rogue registrars aren't necessary: one can simply attack the registrar or interfere with the DNS traffic. These attacks have already been seen in the wild and are continuing today: check out info on DNSpionage (https://blog.talosintelligence.com/2019/04/dnspionage-brings...) and Sea Turtle (https://blog.talosintelligence.com/2019/04/seaturtle.html) attacks.

Meanwhile, Let's Encrypt has been making some interesting changes. For instance, they just introduced multi-perspective challenges (https://letsencrypt.org/2020/02/19/multi-perspective-validat...) in which they submit multiple challenges to the user from different network paths. Attackers hijacking network paths to interfere with challenges must then intercept all possible paths to a client, which is much harder.

That said, I'm not a fan of devolving our certificate validation to DNS--it's like building a castle on Jello. It wasn't designed to be a security-first protocol, and it's definitely showing its age.


This is correct. But perhaps even more correct would be to say it _consolidates_ the risk at your DNS registrar, since of course bad guys who seize control of your DNS records could anyway deny all service and capture anything unencrypted.

You should definitely not use an untrustworthy DNS registrar or registry for important things, but that was true regardless and it hasn't stopped the .com TLD (which is run very badly indeed) making a tremendous amount of money.


I think there's an underlying assumption that your server, your DNS registrar, and all the CAs present in your root store are trustworthy. It may not be a particularly safe assumption, but it's presumably better than unverified TLS or trust on first use (for the majority of present day usecases).

If anything, I would expect universal adoption of automated verification methods to improve security. Instead of only needing to trick a single CA out of an entire root store into issuing a certificate, you would instead need to hijack the DNS listing without being noticed by _anyone_ (and hopefully all CAs, as well as everyone else, would be on the lookout for this).


The challenges used to verify ownership and whether the task is automated or not are mostly orthogonal issues. I don't see how automating the renewal makes it less safe, if anything it removes human error from these tedious tasks.


>The fact that it's automated results in less human intervention along the way, which on one hand lowers costs, but on the other hand makes detecting scams harder (unless they deploy some really sophisticated machine learning that detects fraud).

You think that other CAs manually issue their DV certificates?

>The fact that it's free means that there's no credit card number or other info that would help identify the actual person that requested certificate issuance.

How do you feel about CAs that accept cryptocurrency, or accept prepaid credit cards?


Setting aside the question of Let's Encrypt challenges... even for OV or EV certificates, the CA does not verify the organization at the issuance of every certificate.

Instead, there is a verification process up front when you establish your company's account with the CA, and then each employee's account in the CA web application is associated with the verified company identity.

To be more specific, once the initial company verification is done, the CA relies on their authentication scheme to ensure that the person logging in and requesting a new EV certificate is authorized to do so for that company. The importance of authentication is why both MarkMonitor and CSC require 2FA, for example.

I can't think of a reason that such certificate issuance could not be automated. You would just need to have a machine-to-machine authentication scheme that you can trust and monitor. That is easier than Let's Encrypt's challenges because you can manually establish shared secrets in advance (i.e. API keys).


CAs are free to come up with more complicated schemes for customers that think they need them, including systems that include humans in the loop if that's what is requested. It's kind of embarrassing that hasn't happened yet, but apparently "just buy long-term certificates instead of bothering to improve things" wins out unless there is external pressure.

That said, many CA verification processes are just less-standardized variants of the things LE does. If you can get LE to give you a fraudulent cert, you will also find another CA you can fool.


A customer can (and some high value customers do) proceed as follows:

* Pick a CA you can do business with. Let's say it's Sectigo as an example here

* Arrange a deal with Sectigo whereby they'll use an agreed process such as phoning a specific (confidential) contact number and speaking with Dave your Head of IT Security to confirm it's as expected before each certificate is issued for your names. Maybe this is a minimum volume deal like you'll pay them $2000 for the first up to 100 certificates per year and then $10 for each additional certificate.

* Set the CAA resource in your DNS for your names to require Sectigo as the only authorised CA.

Now when bad guys try to trick Sectigo it doesn't work because Sectigo calls Dave who shuts it down and you're onto them. If they instead try to trick say, Let's Encrypt the CAA resource says only Sectigo is allowed and the attack fails immediately.

The Ten Blessed Methods (of which Let's Encrypt offers three) are obligatory though, you can't make a deal with a CA to just skip it, they must use one of those methods. However if minimum friction is your goal you could find a CA that also operates as a DNS registrar for your names, whereupon one of the methods (3.2.2.4.12) means they only need to confirm this fact internally, no work for you.
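
For reference, CAA is just an ordinary DNS record, so the policy above is also easy to check from the outside; a sketch (assuming the third-party dnspython package, with Sectigo only as the example from above):

    # Sketch: look up a domain's CAA policy (needs the "dnspython" package).
    # A record like:  example.com.  IN  CAA  0 issue "sectigo.com"
    # tells every CA other than Sectigo to refuse issuance for that name.
    import dns.resolver

    for rr in dns.resolver.resolve("example.com", "CAA"):
        print(rr.to_text())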


How does doing it manually verify that a certificate is issued to the legal owner of the web site and not a hacker?


It doesn’t. It actually opens up more opportunities for error, intentional or otherwise, that can possibly be exploited by a malicious actor.


Humans are actually detrimental to preventing abuse.


Google and Mozilla are both major backers of LetsEncrypt which is tackling this problem from the other side (issuing short-lived certificates and putting in a system to automatically update them).


IMHO, expiration dates on certificates are and have always been the Wrong Thing. The Right Thing is to have the certificate contain a time stamp of when it was issued. The client should decide whether the certificate is still trustworthy. The cert can contain a recommended expiration date, but the dispositive information should be the issue date.
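
Concretely, a client could then ignore notAfter entirely and apply its own cutoff based on the issue date; a sketch with Python's third-party cryptography package (the 398-day figure is just borrowed from the article):

    # Sketch: a client-chosen trust window keyed off the issue date, not the expiry.
    import datetime
    from cryptography import x509

    MAX_AGE = datetime.timedelta(days=398)  # every client picks its own policy

    def still_trusted(pem_bytes: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        age = datetime.datetime.utcnow() - cert.not_valid_before  # time since issuance
        return age <= MAX_AGE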


> The client should decide whether the certificate is still trustworthy.

Great, now you need to constantly go to some site like caniuse.com to see what % of browsers thinks your certificate is valid.


I mean, in a world with more than an oligopoly of browser makers, that'd already be true given that browsers each ship with their own trusted CA store.

In the original conception of X.509, it was supposed to be an enterprise's system administrator (e.g. a university's IT department) that would define the set of root CAs that the enterprise's computers would trust. So there'd be literally no way to know whether a given user device would trust your certificate. That was supposed to be a choice that is up to them, not something you can pre-guarantee by making deals with third parties.


You have to do that anyway to check if the certificate is revoked. Or, well, you should. I'm not sure if there are any production TLS stacks that actually handle revocation well.


”The client should decide whether the certificate is still trustworthy”

That’s what is happening now. Except for common sense, nothing stops a browser from trusting a certificate with an expiration date in the past.

The expiration date is more a statement from the issuer: “I wouldn’t trust this after this timestamp”.

Having said that, I think Apple should say “our OSes will stop trusting certificates issued over two years ago” instead of ”our OSes will stop trusting certificates that expire in over two years”

If they did that, one could still use certificates with longer expiration periods, allowing other parties to cache them for longer periods.

Now they force everybody to follow them, so that, if this change breaks sites on iOS, it also breaks sites everywhere else.


Everyone would just issue certificates for two years longer.

This is designed to fix the fact that revocation is utterly broken.


That is... a very absurd idea.

You really don't want a situation where webpages may work or not work based on some completely unclear criteria.


This is exactly what Safari deciding to arbitrarily not trust valid certs is.

The de facto standard will be that you will need to accommodate the least tolerant user agent if you want your certs to work for all browsers.


But Safari has laid out clear criteria as to under what conditions the certs will be invalid.

If I'm understanding correctly, the GP is saying browsers should be making decisions based on a much wider variety of criteria (for example, who issued the cert, how popular is the website using the cert, etc), which would be much more difficult to troubleshoot.


The likely result is that two year certs will just cease to be available, even if other clients don't decide to make an equivalent rule.

And yes, the de facto standard is that for the general purpose web you can't sell certificates based just on, for example, getting an OK from Microsoft (whose trust store includes a tonne of governments and government agencies whose inclusion a cynic might assume is not coincidental to those governments agreeing to purchase Microsoft's products...) because those certificates don't work on an iPhone or in Firefox or...

The relationship that is symbolised by the CA/Browser Forum has always been asymmetrical like this, it was this way before the CA/B Forum was created, and if the two groups aren't able to reach common ground the CA/B will just gradually go away.


It's basically what has been announced here, though, no? There's no technical impediment to a CA continuing to hand out 2+ year certs. The only reason they would not do so is that webpages will start breaking if they're accessed from Safari.


The criteria are very clear:

- Safari

- max 398 days


To the owner of the cert, I suppose. But how many browser users are even aware of the existence of certificates, let alone CAs and expiration dates?


> The criteria are very clear:

today. Tomorrow they may change to be 397 days, or 39.8 days, or "until the next iPhone is released".


Isn't that a bit like arguing about the 825 days? The CA/B could change that any day they want and there's nothing you can do. Others will follow Apple and the CA/B will follow too, like they should have in 2017, IMHO.

The 398 days isn't something they came up with right before they presented it. It's already 3 years old...


Nope. Today, tomorrow and for 6 more months is 825 days. As of September 1, 2020, it will be 398 days - a change people knew was coming for a long time.


The point is that they can change it any time ("tomorrow"), not that they will literally change it tomorrow. How much time they will give you to prepare doesn't really matter, it's still unilaterally forcing everyone to adapt.


Yes but what we have now is also absurd. One day I can visit a site just fine and the very next I can’t and the browser won’t even let me override the “invalid” certificate.


How about grading the validity of the certificate on a continuous scale? P(site loads) = 1 - R*(days since issue) where browsers can choose R as they wish. This gets rid of this discontinuity.
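
(Taken literally, that would look something like this sketch, with R as the per-browser knob:)

    # Sketch: P(site loads) = 1 - R * (days since issue), clamped at zero.
    import random

    def site_loads(days_since_issue: int, R: float) -> bool:
        p = max(0.0, 1.0 - R * days_since_issue)
        return random.random() < p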


Can someone chime in with the original intended purpose of the expiration date?

The one that I can imagine (without research) is that the issuer knows the quality of their own security practices, and if they say that a certificate will expire by X date, they are saying that they can't guarantee that they will still be the only person with the secret key after that point.

Any other explanation?


The issuer shouldn't know the private (not secret) key corresponding to the public key in the certificate anyway. In the Web PKI they're explicitly forbidden from knowing this key and good ones will react to being shown a private key by revoking any associated certificates -- that's what happened in the Trustico incident.

They do know a private key for their entire CA certificate which they're using to sign things, but that's not what the expiry date in the leaf certificates they issue you is about.

Nor is it intended that the lifetime of a certificate reflects the expected time to break the private key the user knows, otherwise Apple would let you have say a two year certificate if you went to P-384 certificates because they're harder to break in theory. There is obviously a threat over time that your keys leak ("Oops, the backup tapes I put in the trash have our current keys") but that's hard to assign a specific time interval to, it just means you should definitely change keys sometimes.

The main thing that we're always worried about on a specific timescale is the name information. The same people control google.com today as two years ago, and likewise for my.lovely.horse but how about bobs-exhausts.example ?

But the main impetus for desiring shorter lived certificates is that the certificate lifetime in practice constrains our ability to make technical changes like the SHA-1 deprecation. We'd like to make changes faster, and if you've just bought an 825 day certificate realistically our changes can't be effective in less than 825 days unless we're willing to be so disruptive your apparently valid cert ceases to work in popular browsers, which will be a hard thing to sell for merely precautionary changes.


I agree with your overall point and appreciate the clear explanation. However, I'd like to suggest that perhaps what you describe as "merely precautionary changes" should never occur on anything less than a ~2 year timescale anyhow. If it's urgent, go right ahead and break things. But the internet is _big_, and modifications can be _expensive_, so precautionary changes should probably occur quite slowly at this point.


If a certificate never expires, revocation lists have to grow without bound forever.

If a certificate expires, revocation lists only have to hold revoked certificates that haven't expired.


Without an expiration date the only way to retire a certificate would be to explicitly revoke it. That'd be a bit heavy handed IMO.

Forcing certificates to expire means that domain ownership gets re-validated every time. Nobody would risk buying 2nd-hand domains for anything sensitive if the previous owners could have a permanently valid certificate stashed somewhere. Or you'd have to contact all the CAs that could've issued a certificate for that domain and ask them to revoke it if they did.


Is there not already a standard procedure for certificate revocation on domain ownership transfer? It seems like there really should be - even a 13 month expiry is hardly enough to prevent intentional abuse.


"Standard" is a big ask. If this is a concern then you can proceed as follows:

1. Wait 24 hours or so after securing control over the names.

2. Use the Certificate Transparency logs to determine which certificates, if any, exist for names you now control and want revoked (see the query sketch after this list).

3. Discover the revocation process for the issuer of each cert.

4. Use each process you discovered. All of them should be willing to revoke if you can prove you now control these names. Some (Like Let's Encrypt) offer an automated way to do this, for others you may end up talking to a Customer services person or using an email ticket queue.
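
For step 2, the crt.sh front-end to the CT logs is the usual shortcut; its (unofficial) JSON output makes this easy to script, e.g. a sketch:

    # Sketch: list certificates logged in CT for a name you now control,
    # via crt.sh's unofficial JSON interface.
    import json
    import urllib.request

    domain = "example.com"  # placeholder
    url = f"https://crt.sh/?q={domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)

    for e in entries:
        print(e.get("issuer_name"), e.get("not_before"), e.get("not_after"))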


Thanks for another clear explanation of process! But... the fact that there isn't some standardized automated revocation procedure that all CAs are required to support at this point seems somewhat absurd to me.

At a minimum, it seems like such processes based on both proof of domain ownership and proof of private key possession should exist. Moreover, there ought to be a way to specify via DNS which CAs can issue certificates for your domain (I guess DNSSEC DANE will provide this?).


Getting the CAs to all agree on some particular process is like herding cats. In practice the situation where you'd want this is relatively uncommon, so it's as though you demanded US States all come up with a single unified set of traffic regulations - most Americans do not routinely drive in more than one or two states and so they'd support their home state in insisting that whatever happens it should be spared the expense of changing. It would go nowhere.

You can use CAA to tell CAs whether they're authorised to issue for names in your domain, and since CAA is a DNS resource you can secure it with DNSSEC. But CAA isn't retrospective, it won't magically revoke certificates which already exist, although it could prevent issuance based on stale authorisations because it's supposed to be checked live.

Using DANE here would be pretty fraught even if it was widely deployed which is far from the truth.


You're thinking of CAA, which in theory works even if only the CAs look at the records, which is important because no browser will; all mainstream browsers have abandoned DNSSEC, which is moribund.


> the issuer [...] are saying that they can't guarantee that they will still be the only person with the secret key after that point.

It's not quite clear what you mean here.

The issuer should never have the private key for the end-entity cert (the one representing the domain or whatever). Only the owner should have the private key.

EDIT: Hmm, according to another comment here, some CAs will offer to generate both halves of the keypair for the end user. Yikes.

If you are talking about the CA's own private key...if that ever leaks, then no certificates signed by it should ever be trusted again; expiration date has no relevance here.


> EDIT: Hmm, according to another comment here, some CAs will offer to generate both halves of the keypair for the end user. Yikes.

I wasn't able to find this comment, but perhaps I didn't look hard enough? EDIT: Wait, I think I found it. I will reply there.

In the Web PKI this is prohibited, but there are, or at least were, resellers who'd offer this service to their customers. You should not use this service, of course, and some CAs have pledged to tell their resellers not to offer it.

In S/MIME it's more common because often the end user is both technically unsophisticated and not given real control over their client in order to mint their own keys anyway. But S/MIME is... probably not important, certainly it isn't what Apple's change is about.


The latter did already happen: DigiNotar, in 2011. The Dutch government tried to restrain browser makers from revoking DigiNotar's certificates, since many of their websites and services used them. It's a good thing certificates don't have an eternal life span.


One effect of expiration dates is shorter revocation lists: if a revoked certificate were originally valid forever, the revocation list would have to carry it from the moment of revocation until the end of time. Short-lived certificates result in shorter CRLs.

Lengthy CRLs have a performance cost: they need to be downloaded and processed by clients.

There is also a growing risk correlated with certificate validity length. If a CRL is unavailable to a client (the CRL server might be offline or unreachable, or the client's access might be maliciously blocked), the client has no way of knowing the reason and must decide whether or not to trust the certificate without a CRL check. If clients were configured not to trust in such cases, the path to DoS is clear: block access to the CRL server and you effectively block traffic to all services of that CA. If clients are configured to trust certificates when CRLs are unavailable, which I believe is the default on all OSs currently, blocking access to CRLs allows an attacker to fool clients into trusting revoked certificates. With short-lived certificates, the opportunity window for such attacks is smaller.
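
To make the size point concrete, a hedged sketch with the third-party 'cryptography' package; the CRL URL is a placeholder (real distribution points are listed inside each certificate), and it assumes the CRL is DER-encoded:

    import urllib.request
    from cryptography import x509

    crl_url = "http://crl.example-ca.invalid/intermediate.crl"  # placeholder URL
    der = urllib.request.urlopen(crl_url).read()
    crl = x509.load_der_x509_crl(der)

    # Every still-unexpired revoked serial has to be shipped to every client.
    print(len(der), "bytes covering", len(list(crl)), "revoked certificates")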


Well, it's the one that works in practice. The issuer is the one that knows when it will renew the certificate. If every client decided for itself, half of the internet would be broken at any given time.


Millions of dollars in IT consulting work. Target in 2014 and 2019 (it was a five-year certificate), Microsoft Teams in 2020, etc.


1. As an analogue for password expiration (with all the commentary that entails).

2. Recurring revenue models for CAs.


> The Right Thing is to have the certificate contain a time stamp of when it was issued

Well, of course they should; that's why it's right there, called "not before".

> The client should decide whether the certificate is still trustworthy. The cert can contain a recommended expiration date, but the dispositive information should be the issue date.

Well, that is exactly the case. Most clients decided to honor the recommended expiration date, and Apple just announced they won't honor anything longer than 13 months.
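
For what it's worth, both fields are trivially visible from any client; a minimal sketch using Python's stdlib (the host name is illustrative):

    import socket
    import ssl

    host = "example.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()

    print("notBefore:", cert["notBefore"])  # the issue timestamp
    print("notAfter: ", cert["notAfter"])   # the expiry most clients enforce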


And how are the CAs supposed to squeeze you of money?


While I agree that it is better from a security perspective not to issue long-lived certificates, and that automation would be best, I dislike the fact that the subdomain on any certificate I issue becomes public due to Google's certificate transparency project.

By making every subdomain public, it makes the job easier for any attacker wanting to find smaller servers to target. It's not that I believe in security through obscurity, but besides making sure all servers are as secure as they can be, I do believe in not making the job easier for adversaries.

So instead I use wildcard certificates, and for those automation gets much more annoying: you need to use DNS validation (Route 53 or similar does provide an API that can be used), and then I'm not sure if I'm comfortable having each server generate its own wildcard certificate, leading to hundreds of wildcard certificates...

This is why I currently use an old-style one-year wildcard certificate which gets updated through Chef, but I'm really not sure if this is the best solution or not.


I think a lot of places do this for convenience, and don't realize the risk it puts them under (not that you are unaware of this, just that others may not be). Whether they have everything generate its own wildcard certificate, or create one wildcard and share it to everything, they've created a significant security risk. Namely, any time a wildcard cert and private key leaks, the attacker can now impersonate literally everything behind that name. By using wildcards everywhere, they're creating tons of leakage points, and only one of them has to leak the private key and the attacker wins, particularly if they can grab it without the user being aware.


Without certificate transparency how do you know nobody has issued a certificate for your server? Surely that's a far higher risk than knowing a domain?


Oh, I completely agree that certificate transparency is beneficial for that, and we have an alert set up. But that still leaves me not wanting to leak every single subdomain we use internally, hence using wildcard certificates.


CT didn't change this for bad guys. If you're a bad guy (or a neutral researcher with a budget for the data) you can buy what's called "Passive DNS". Several suppliers will give you a list of DNS requests and their answers; the identifying information about who made the requests is elided, so it's not PII, but it has the same effect of making the existence of servername.example.com effectively public information.

Even if you are unusual in actually having machines named cy23hdc9.example.com rather than exchange2016.example.com, the existence of this service means you need to stop assuming nobody knows these names. Anybody who cares knows them.


Yes, which is why you used to be able to have fully internal domains that are served by an internal DNS server and are never seen on the public internet.


You can still have this: it's called 'split-horizon' or 'split-brain' DNS. It can be tough to set up and maintain for large DNS environments, but not /that/ difficult even at that level. If you do this and you don't want those names in the public CT records, though, you have to implement a private internal PKI for those servers so they're not getting public certs. Or use a wildcard, which carries its own significant security risks.


> you have to implement a private internal PKI for those servers so they're not getting public certs

This would also involve setting up a custom CA and distributing your CA cert to all machines that should be able to access the page. Good luck with that!

All of this is a ridiculous amount of effort to set up and maintain. It's a lot easier to just make the internal domains accessible to the internet and be done with it.

The effect being that networks are being nudged into exposing a lot more surface to the internet - and this somehow in the name of more security.


It's not terribly painful to do if you have central management (e.g. you can push them out through a GPO or MDM system). For small businesses, however, I totally agree it's a pain in the neck: the school my wife teaches at would have a devil of a time doing it with their current IT staff workload, and they'd still have issues with unmanaged devices. And all that's to say nothing of the ticking time-bomb that occurs when you set up your own PKI--good luck remembering to replace that root CA ten years down the line when it explodes.

The cynical part of me says that certain companies might be very interested in the search possibilities gained from exposing internal networks to the Internet, and the increasing lock-in that occurs when you make your systems dependent on their public CA instead of your own private one. But perhaps that's just the tinfoil-hat talking. :)


You could create your own CA, limited to your domain and install it on the respective devices.
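
If anyone wants to see what "limited to your domain" can look like in practice, here's a rough sketch with the third-party 'cryptography' package that mints a self-signed root carrying a name constraint. All names and lifetimes are illustrative, and note that client enforcement of name constraints varies:

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal Root")])
    now = datetime.datetime.utcnow()

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                        # self-signed root
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .add_extension(                           # only valid for this namespace
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("internal.example.com")],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(key, hashes.SHA256())
    )

    with open("internal-root.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

You'd still have to distribute internal-root.pem to every device, which is the painful part discussed above.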


I had never heard about Passive DNS before. Thanks.

That said, shouldn't that be illegal? Where does this data come from, anyway? I'm guessing spyware - on the phone, in browsers, and spyware browsers like Chrome.


Unless the people answering DNS queries for you have explicitly contracted (with you, or with all users) not to do this, you should assume they contribute to one or more passive DNS services, because it's free revenue for them.

I'd be very surprised if any of the suppliers use spyware, seems like it'd be far less effective and also more expensive to do.


Wouldn't it be a better idea to use alternate domains instead of subdomains? You could use something like [any subdomain].atdie8e73bhdbdie93ruhe.[any tld] for obfuscation?


Those cost money.


You could automate the generation of the wildcard certificate and push it out with chef to all your servers. Instead of replacing it once a year, you replace it every 90 days.


I understand the reasons behind wanting to shorten certificate validity periods, but CA or root certificates often have expiration periods far into the future. What’s the argument for this? Ease of use? Historical reasons?


The two big reasons for short certificate validity periods are to manage revocation issues (expiry is the one guaranteed way to revoke a certificate), and to reduce the risk of forgetting to renew certificates (by having to rotate them more frequently, your IT is more likely to have better processes in place). There's a secondary concern of reducing the ability of CAs to backdate certificates (which happened when SHA-1 was prohibited).

Neither of those really applies to the root certificate store. Updating the root store (to add or remove CAs) requires an OS/browser update instead of relying on CRL or OCSP queries, which are flakier--and browsers have a variety of mechanisms to punish misbehaving CAs that aren't "drop you from the root store effective immediately."

Meanwhile, the process for updating root stores is entirely different from the one for regular certificates, and it takes months to years to add a new root certificate--and I believe an existing CA doesn't get a pass merely for rolling over to a new root certificate.

The other risks with overly long validity times are ameliorated in large degree by the fact that CAs are regularly audited and have much greater scrutiny placed on their issuance.


It's not easy (at least this was the case until a few years ago) to ship updates to an old device (think Android 4.x or Windows XP; it's even worse for embedded systems). Hence, to avoid the devices becoming useless bricks even when otherwise fully functional, root certs need to have more than a decade of validity at minimum. (That's my personal theory; I'm not in the industry.)


I think the industry is very much trying to push for the exact opposite of that, with things like planned obsolescence to ensure you keep consuming their latest products.


Root certificates are special. They are shipped to the device out-of-band, e.g. with OS updates. This makes them much more difficult to replace than other certificates, especially on devices that do not receive updates often like embedded devices and mobile phones.

For this reason root certificates have a very long validity and very stringent requirements on their security parameters (e.g. large key sizes) and on how they are used. Usually the corresponding private key sits in a well-secured, off-network physical device (an HSM) and is only used a few times during its lifetime to sign sub-certificates.


Because the corresponding private keys are much better guarded (in HSMs), rather than being an unencrypted .pem file sitting on a server or someone's computer.


Just look at the state of the "modern" internet with its certification wars, 2-level auth, Google SMTP gatekeeping, mandatory HTTPS, Let's Encrypt v2 and certbot renewal fsck-ups. What hope in hell does the ordinary user have of navigating all this? Do we all now need to be experienced sysadmins just to use the internet?


Apple Drops SSL/HTTPS Bomb - Forget Long Certificates

https://keychest.net/stories/apple-drops-sslhttps-bomb-forge...


Google doesn't trust my password for a Gmail account that I last used 6 months ago (I had to also provide one of my previous passwords to be able to log in). It is getting ridiculous...

Now I gotta keep a history of passwords, just for Google...


Apple's browser is already called "the New Internet Explorer":

https://fabiofranchino.com/blog/css-height-parent-flex-safar...

https://dev.to/nektro/safari-is-the-new-internet-explorer-1d...

https://arstechnica.com/information-technology/2015/06/op-ed...

... and millions of similar articles

I developed widgets for web developers and had to change them to stop using 'fixed' positioning anywhere, because iOS Safari is the only browser that interprets it differently on touch-enabled devices. They have many other issues as well. I have to spend half of my time fixing their buggy browser! Not even the old Internet Explorer caused as many issues as Safari does.

They torpedoed progressive web apps and many important web standards. Why? Just because they want you to use their App Store, where they can take 30% from every transaction without doing anything.

I hope Apple will go bankrupt and stop hurting the web community as they successfully did in the past years.


Safari market share ... so who cares.

I am not talking about mobile usage.


I posted about this a couple of days ago (https://news.ycombinator.com/item?id=22373673) but it didn't get much traction then :-(


Is this an early April Fools' article?


What do you find strange about this?


That the CAs are happily still selling 2yr TTL certs?


I don't think certificates are a good solution for most websites. Instead have the browser store the public key first time you visit the site. Then ask the user every time it changes, like with SSH. Also browser should send their public key to the server! So that we don't have to come up with a new password for every damn site.
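
Roughly what I mean, as a sketch in Python (the pin store is hypothetical, and it pins the whole certificate rather than just the public key for brevity):

    import hashlib
    import json
    import os
    import socket
    import ssl

    PIN_FILE = "pins.json"  # hypothetical local pin store

    def server_cert_fingerprint(host, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # in this model we trust the pin, not a CA
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def trust_on_first_use(host):
        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                pins = json.load(f)
        fingerprint = server_cert_fingerprint(host)
        if host not in pins:
            pins[host] = fingerprint             # first visit: remember the key
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return True
        return pins[host] == fingerprint         # False -> ask the user, like SSH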


>Instead have the browser store the public key first time you visit the site. Then ask the user every time it changes, like with SSH.

The vast majority of users will not understand this. Also let's say you go to a site one day, and you get a prompt that their certificate has changed. Is this legit? How can you know?

>browser should send their public key to the server!

This already exists (HTTPS Client Certificates) but it's a huge pain in the ass so it's barely ever used. When it is used, it's generally within a controlled corporate environment.


There could still be a trust chain. But most websites don't need it; just like most websites only use the basic level of certification today, all they really want is encryption (or the green padlock :P).

Client certificates have kinda been deprecated by browsers, which is why they're such a PITA. Handling certificate signing by yourself is also a PITA. But most websites wouldn't need signing, just encryption.

Automatic SSL certificate signing is not that secure: if an attacker were able to mess with DNS, or to access the HTTP server, they could also obtain a fraudulent certificate via Let's Encrypt. Or if the attacker has access to the client, they could sneak in a root certificate. Most nation states and ISPs (those who would like to spy on their citizens) already have root certificates. All that SSL certificates do is create extra work for site maintainers. We can have encryption without certificates. If you would, for example, side-load a list of public keys for the main sites you visit, that would be much more secure than SSL certificates are today.


Browsers toyed with the idea of using DNSSEC + DANE which would allow a website owner to use self-signed certificates and have them trusted just like commercial certificates.

A DNS record (TLSA) would tell the browser how to deal with trusting the certificate, its public key or to validate the chain of certificates, etc: https://www.rfc-editor.org/rfc/rfc7671.html.
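
As a hedged sketch of the lookup a DANE-aware client would do, using the third-party dnspython package (the host name is illustrative):

    import dns.resolver  # pip install dnspython

    # _port._proto prefix per the TLSA convention
    for rdata in dns.resolver.resolve("_443._tcp.www.example.com", "TLSA"):
        print(rdata)  # e.g. usage/selector/matching-type plus the pinned hash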

This would have radically changed the landscape of certificate authorities and browser vendors.

Google and Mozilla implemented it back in the day but changed directions. Among other issues, doing additional DNS lookups when you're trying to get the page on the screen as quickly as possible put the kibosh on things. Plus politics.

Mind you, using DANE + TLSA is the de facto standard for Mail Transfer Agents to do secure email with each other, but SMTP and HTTP have different security requirements.

We may eventually get there, but there are political and technical hurdles to overcome when it comes to DNSSEC, which is required for DANE and all the rest. It's really taken off in parts of the European Union, South America, and Asia, but not so much in the US, where only about 25% of internet users are behind a DNSSEC-aware resolver: https://stats.dnssec-tools.org.

The article DNSSEC, DANE and the failure of X.509 is a pretty good summary: https://blog.hansenpartnership.com/dnssec-dane-and-the-failu....


> Also browser should send their public key to the server!

That would make fingerprinting even easier?


The browser could generate a unique key pair for every website that requests a public key. Or there could be a dialog like "news.ycombinator.com wants to know your identity; which profile do you want to use?" (with an option to generate a new profile), kinda like password managers work today.

The advantage of reusing "profiles" is that you could use the same profile on many web apps like Facebook and Instagram; then you could allow your Facebook friends to access your Instagram photos. Or your "contact list" could live offline (e.g. managed by your browser, not by web apps) and you'd manage access rights via your contact list, rather than in each and every app.



