There are real reasons for the for-profit web to want to limit cert lifetimes, since revocation doesn't really work in practice. In terms of browser development the two views are mutually exclusive, and the one that funds the coders gets its way. Especially with the W3C marginalized and a mix of corporations and corporate-centric standards groups running things now. Expect to be unable to host a visitable or indexable website without relying on at least one third-party service in the near future.
Thanks to Let's Encrypt, free hosting services like Netlify or GitHub Pages now provide HTTPS certificates, and installing one on your own server is pretty painless, if you're into managing your own server. And if your hosting provider doesn't support Let's Encrypt, you can always put Cloudflare in front of it.
So I don't really understand what you're talking about, when the price and ongoing maintenance cost of HTTPS certificates have gone down significantly for hobbyists.
Those are exactly the kind of third-party services the GP was talking about.
> if you're into managing your own server
One of the core advantages of having a personal server has always been that you can keep it entirely off the internet and run it without any involvement of third-party services. That is no longer possible. Like, I now need to purchase a domain name and set up infrastructure to fulfill Let's Encrypt's challenges, just so I can serve a page on my LAN.
I understand why Let's Encrypt is needed and what problem it's trying to solve, but forcing global dependencies on a small number of select entities is not the way to go about solving it.
I can't help but feel saddened when I see one of the greatest monuments to human ingenuity, a global decentralized network, get dismantled like this.
- HTTP sites have no access to a significant part of the web APIs added in recent years - and they will be blocked from all APIs added in the future.
- Self-signed certs show security warnings that are deliberately confusing and discouraging to click through, and will likely become even more so in the future. Showing those to other people is not an option.
I do agree that using self-signed certs and clicking through security warnings is possible - however, it is being made deliberately tedious (e.g. Chrome will forget that you accepted the cert after a while). It also seems to me that this path is actively discouraged by browser vendors, so I'm honestly not sure how long it will stay open.
Self-signed certificates are also unreliable for API requests, because no accept UI is shown for such requests.
> That's nothing new (at least to me).
It absolutely is. With HTTP, you could simply run a local web server and have everyone interested point their browser towards it - and everything worked. This is no longer possible unless you want to make recurring payments for a domain and accept that you need an internet connection.
However, a side-effect (intentional or not) is that the web is turned into a sort of app store: Either you belong to the platform or you don't, and whether or not you do is decided by third parties. (Who, btw, are not even bound by any kind of public mandate - they are simply private, profit-driven companies)
I also don't think the stated security advantages always make sense: Let's Encrypt will serve network attackers just as readily as legitimate customers. Meanwhile, it leads to a lot of stuff being exposed on the internet that wouldn't otherwise need to be. We also force devices that should simply expose a local web interface to depend on a cloud service. I don't see how this makes anything more secure.
I guess what I'd want is simply a way to designate a device as "trusted" locally, without depending on third-party services, internet connectivity or anything else and without anything expiring.
A way that non-technical users should be encouraged to use as well.
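Pending such a mechanism, the closest approximation today is a small private CA whose root you import into each device's trust store. A rough openssl sketch (all names and lifetimes are made-up placeholders, not a recommendation):

```shell
# Create a throwaway local CA (names and lifetimes are examples).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key \
  -subj "/CN=My Home CA" -days 3650 -out ca.crt

# Issue a cert for a LAN host, signed by that CA.
openssl req -newkey rsa:2048 -nodes -keyout host.key \
  -subj "/CN=printer.lan" -out host.csr
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out host.crt

# Check the chain. Note: modern browsers additionally require a
# subjectAltName extension, omitted here for brevity.
openssl verify -CAfile ca.crt host.crt
```

The remaining chore is importing ca.crt on every device that should trust it, which is exactly the kind of friction the parent is complaining about.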
That's because you didn't finish reading my post.
>Expect to eventually be unable to host a visitable or indexable website without relying on at least one third party service in the near future.
You have always needed at minimum hosting, SSL and domain service providers.
2) There are multiple organizations running DNS and the general consensus is that there shouldn’t be a single organization running it (except ICANN which is generally careful to delegate the root and TLDs to other organizations.)
3) Just because you have one bad thing doesn’t mean it’s ok to have two.
I mean... How do you expect to have a valid, trusted TLS certificate without a third party? Nobody says it has to be Lets Encrypt or Cloudflare or Amazon load balancers or ... so forth. Some certificate authorities even already have APIs....
You have to know what the servers public key is in a reliable way. That doesn't necessarily mean a third party has to be involved. Maybe you call them on the phone and check the fingerprint. Maybe it's your server and you sneakernet the key to your client. And to appease the dumb protocol you wrap it up as a certificate (self-signed) and accept yourself as trustworthy (what a concept!).
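The self-signed-plus-phone-call approach above amounts to a couple of openssl invocations (the hostname is a placeholder):

```shell
# Generate a key and wrap its public half as a self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key \
  -subj "/CN=my.server.lan" -days 365 -out server.crt

# The fingerprint you'd read out over the phone for out-of-band verification.
openssl x509 -in server.crt -noout -fingerprint -sha256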
Even if you use a third party, it doesn't have to be some big company that everybody is functionally obligated to trust. It could be a mutual friend making an introduction.
And certificates are not the only or necessarily the right solution to this problem in all domains. For example, if you have to go and check OCSP every time to see if the certificate might have been revoked, then you are relying on an online trusted source; but if you have an online trusted source, why don't they just tell you the current key and dispense with the certificate mechanism altogether?
X.509 was designed for X.500, and concepts that were relevant for X.500 continue to plague us all. I deployed corporate PKIs back in the 1990s for Sun and Pfizer and worked with other clients. What I remember best from those times was how ridiculous the whole thing is, over-engineered by parties who clearly didn't keep focus on the fundamental problem that public key cryptography is meant to solve.
It’s a little disappointing that many browsers have no support for this whatsoever and most have very poor support for it.
We should start to fight back fiercely, otherwise those who made the internet great (the open source hackers working on LAMP) will lose everything.
(Consider: QtWebEngine, PinePhone, SiFive, OpenTitan, IPFS, DAT, Solid, ...)
(Speaking of other software stacks, a few Gopher servers remain online today, and there's even a DECnet in operation!)
Why doesn't Apple kill a million sh.tty apps just trying to grab attention doing nothing useful?
It's time to get rid of Apple/Google browsers really
In a systems engineering plan (e.g. a NASA long-term project), both kinds of dependencies are considered liabilities that must be engineered for long-term reliability, fault-tolerance, etc.
That's necessary because people tend to be fixated on whatever task they're doing, and that causes them to click past any warnings regardless of risk or comprehension.
You want self-signed certs to be trusted, and the web to move back to the days of HTTP where anybody could just intercept your traffic?
Let's lower that to 60 ... 45 ... 15 .. ah, let's just renew all certs every day, that's the only way to keep control centralized.
is slightly dated but gives a very good rundown of how revocation checks fail in practice along with different attempts to improve them.
I agree with you that the web is being commercialized, but limiting certificate lifetimes isn't the problem. It's rather stuff like the .org domain sale, google amp, Chrome killing the file:// protocol (after all, html files don't have ads inside them, can't be indexed and they don't make any money for Google), FLOSS projects adopting discord, etc.
This is a major annoyance since it is the only browser that does this, and it won't work with "wss://" URIs even after accepting the mandatory certificate exception. Accessing the page rightfully shows a warning on all browsers, which the user can ignore by clicking "Continue" or similar to start the demo - except on iOS (this is for simple one-off tutorials or demos, so maintaining good certs or asking users to install a custom root CA on their devices is out of the question).
I know, this seems like a StackOverflow question... but the only 100% relevant one I found didn't get much love, so I thought maybe someone at HN knows more about this.
I don't think it's worth the effort of setting up all the automatic renewal process with Let's Encrypt for what amounts to a quick static code example... so in principle I wanted to avoid having to set that up.
The content of this article means that soon this 2-year maximum will become a 13-month maximum, right?
Is there any way to create these specific-purpose certs with some kind of "private use" or "development use" flag that allows not having to re-create them so often? 13 months looks like too short a time period; I'm afraid nobody will remember to refresh the fake certs and the demos will break... for no good reason (related to the code itself).
To support Apple Pay on the web, you have to go to your developer account on the Apple website, and under certificates generate a custom ASN.1 authentication file and upload it to the `/.well-known/` folder on your domain. Once uploaded, you have to click a “Verify” link, to check the file and mark the domain as Apple Pay approved. The issue is the verification only lasts as long as the certificate expiration, and if a new certificate is installed, you have to re-verify the domain with a newly generated authentication file, for each domain. A fresh certificate with an old authentication file does not work.
This means if using Let’s Encrypt, you have to manually step through the verification process for each of your production, staging and development environments every three months. There doesn’t appear to be any automated way to handle this.
After two cycles of this I opted to purchase 2-year certificates just to save the hassle of re-authenticating my web environments on a rolling basis by hand.
This announcement just means more frequent manual processes once again. What a pain.
By manually getting 2-year certificates you're setting yourself up to forget part of the renewal process. This was the main argument behind Let's Encrypt having such short expiration windows -- it encourages people to script their entire deployments.
In my case I’m not really setting myself up to forget about the renewal process as I have a script to generate the key and CSR and an ansible playbook to update the cert once sent to me. Certificate vendors are more than happy to email you to remind you of an upcoming renewal, in just the same way Let’s Encrypt does.
On a side note, trying to script out the proper setup/migration of Let’s Encrypt is WAY more involved and fraught with mistakes than a simple certificate upload. The failure case is that the initial certificate issuance succeeds but since the initial setup needs to happen before SSL is configured and working with Nginx, you can’t use the same config before and after the initial setup. Thus you need to have two separate Nginx configs and switch them, or you have to use standalone for the initial issuance and webroot for renewals. Both of these are far easier to mess up than uploading a certificate.
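One common way around the two-config dance described above is a plain-HTTP server block that always serves the ACME webroot, so the same config works both before and after any certificate exists. A hedged sketch (domain and paths are examples):

```nginx
server {
    listen 80;
    server_name example.com;

    # Always answer ACME HTTP-01 challenges over plain HTTP,
    # whether or not TLS is set up yet.
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```

With this, both the initial issuance and renewals can use the webroot method against /var/www/letsencrypt, avoiding the standalone-then-webroot switch.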
I’ve set up more than 10 different servers with Let’s Encrypt and I don’t think a single one has just worked. I think in every case something got messed up along the way, and you only find out about it 70 days later with the renewal email, IF you are the admin email on the LE account.
Don’t get me wrong, I think there are great things about Let’s Encrypt, but it has plenty of thorns to deal with. I’m glad we haven’t all been forced into three month renewals by the CAB forum (since the certificate vendors have a say and they got feedback from customers before agreeing). I am fairly annoyed that Apple decided to unilaterally change the rules when they were already part of an organization that deals with this topic. I can only imagine browser vendors moving forward will have little to no concern for site/server administrators and how their changes are affecting things.
As it stands now, we are at a 400-day certificate lifetime, which means a bad actor can only impersonate for a year, in the name of revocation being performance-prohibitive. From a user's perspective this is effectively the same as three years. The only meaningful change would be a lifetime of something like a week or a day, but I shudder to think of all the ways that would fail spectacularly.
Good luck with that—some time ago Apple locked down pretty much everything related to developer accounts with 2FA requirement that, of course, works on their proprietary platform via sending codes to logged in Apple devices. Maybe you can snatch it via some Automator kung-fu but I seriously doubt that.
Is there any tool that I can use to help automate this in a reasonable manner?
Ideally, I'd love to see a web version of xca that supports ACME with some controls on how ACME certificates get issued. Bonus points for supporting OCSP as distribution of CRLs is another upcoming pain point.
I think the proper way to do certs is to have the server (Web, VPN, whatever) create a certificate signing request and private key on the server, send the signing request to the CA to sign it, and then install the resulting signed cert on the server. Is this correct?
What I'm finding in some cases is that there are cases where this just won't work. For example, my QNAP NAS allows me to either create a self-signed certificate (I don't want this, I want it signed by my CA), get one from Let's Encrypt (same issue), or upload certificate, private key, and optional intermediate CA certificate files (and we're back to moving private keys). This is a limitation of QNAP's GUI for sure, but it's not unique to QNAP.
Similarly, I'm not sure how I'd generate the certificate plus private key on an iOS device and submit it for signing (the VPN scenario). This one particularly bothers me because the .mobileconfig file ends up being the key to the castle. Ideally I'd like the client to be authenticated with both a user-specific certificate and EAP, but I don't think iOS supports this. I haven't quite gone very far down this rabbit hole, so it's possible that I'm missing something.
When I finally secure my internal web server (which acts as a reverse proxy for all of my internal services), I'll try the CSR approach for the learning experience. This approach should also work fine on my VPN server.
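The flow described above, sketched with openssl (subject name is an example):

```shell
# 1. The private key is generated where it will be used; it never leaves.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out server.key

# 2. The CSR carries the public key plus the requested subject to the CA.
openssl req -new -key server.key -subj "/CN=nas.example.lan" -out server.csr

# 3. Inspect what you're about to send. The CA returns a signed cert,
#    which you then install alongside server.key.
openssl req -in server.csr -noout -subject
```

Only server.csr travels to the CA; server.key stays put, which is the whole point of the exercise.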
Yes. The CSR is public information, so it's fine to send that somewhere. "Sign it" [the CSR] is a phrase that doesn't entirely reflect the relationship between a CSR and a certificate; experts know it's technically wrong but say it anyway, so don't sweat it, even though it's arguably misleading.
I will try to circle back to see if I have any suggestions for your specific scenarios later.
I'm going to try the CSR route the next time I have to do this and see how that works out. xca seems to handle CSRs easily, so doing this less-wrong should work out fine.
I'm not too concerned about the QNAP scenario - it's consumer gear, so I expect it to lean more towards doing things easily over doing things correctly. The iOS scenario is much more interesting to me since this is something that's more applicable in the real world.
There are ways to distribute keys safely. You can use key derivation, or you can set up a transport key (yet another key, just to transport session keys). Both methods are allowed by PCI if done correctly.
Unfortunately that's not what typically happens. Setting up infrastructure to transport keys safely is such a pain in the ass that, for SSL/TLS, I always recommend to just generate the key at the site of its intended use.
I mean, it would be nice, especially if you are distributing to other people, but I don't really see a difference if we are talking about uploading cert/keys to servers where you have root access.
But if I'm dealing with this sort of thing outside of my homelab, it'd be nice to see how it all works. I figure that learning how the CSR dance works in my homelab will help me understand how it works in the real world.
then check out Netflix Lemur for issuing and tracking certs automatically.
Hopefully their influence doesn't spread to the web too.
64-bit only, Project Catalyst, Bitcode, HTTPS-only connections etc. are examples of initiatives which have definitely caused pain to developers but have immensely benefited users as a whole. And if you don't passionately care about users then frankly find another platform to develop on.
Going 64-bit-only benefits Apple because they don't have to maintain the 32-bit stack anymore. Any perf-sensitive software that needed to be 64-bit transitioned a long time ago.
How has that benefited the user? Directly and measurably, no "it's 0.3% faster now" excuses.
As for the other browsers, Google originally proposed SC22 (https://cabforum.org/pipermail/servercert-wg/2019-August/000...) last year and all the browsers voted for it. CAs voted it down at the time but there were rumblings via various back channels that several major CAs actually wanted the ballot to pass but for political reasons could not publicly support it.
So while Apple is acting “unilaterally” here, there is universal support among browser makers and tepid support from CAs. You should expect Google and Mozilla to follow suit in the next 6-12 months.
EDIT: to clarify - there are two bad things about Let's Encrypt:
1. It's automated
2. It's free
The fact that it's automated results in less human intervention along the way, which on one hand lowers costs, but on the other hand makes detecting scams harder (unless they deploy some really good machine learning that detects fraud).
The fact that it's free means that there's no credit card number or other info that would help identify the actual person who requested certificate issuance.
Together, these make things less secure, not more.
EDIT 2: Both types of Let's Encrypt challenges look like they push the responsibility down to either the web server owner or the DNS service. Maybe that's a good thing, since at least there's one fewer party that can screw things up.
Let's Encrypt is far from unique in being heavily automated. If anything, its verifications are more stringent than those used by other major CAs.
> The fact that it's free means that there's no credit card number or other info that would help identify actual person that requested certificate issuance
Payment details are basically useless in terms of tracking abuse. Attackers have no lack of access to untraceable or fraudulent payment methods.
The two most common challenges are an http challenge, and a DNS challenge. The http challenge gives you a response code to host as a file on the domain during the validation period. This challenge is, for all practical purposes, random, and cannot be guessed. Then, after your script tells Let’s Encrypt that the response to its challenge is available and up-to-date, Let’s Encrypt performs an http GET request to retrieve that response, and checks to ensure it is exactly what the script provided. Only then does it proceed with signing your CSR and giving you a valid certificate.
This requires (at least temporarily) a web server running on port 80 at the domain in question, and in order to break it you would need to be able to effectively either hijack the A record for the domain as read by Let’s Encrypt, or to break into the web server to properly issue a certificate that one then steals. Impossible? Probably not. Impractical? Very.
DNS challenge is even more secure, in my opinion, as it works the same but the response code is stored in a TXT record for Let’s Encrypt to validate. In order to break this you would need control of the DNS servers.
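Concretely, both challenges boil down to publishing a key authorization string somewhere only the domain's controller can write (the token and account thumbprint below are made-up placeholders; the DNS-01 digest construction follows RFC 8555):

```shell
# HTTP-01: the key authorization is served from a well-known path.
KEYAUTH="EXAMPLE_TOKEN.EXAMPLE_ACCOUNT_THUMBPRINT"
mkdir -p webroot/.well-known/acme-challenge
printf '%s' "$KEYAUTH" > webroot/.well-known/acme-challenge/EXAMPLE_TOKEN

# DNS-01: base64url(SHA-256(key authorization)) is published as the
# value of a TXT record at _acme-challenge.<domain>.
printf '%s' "$KEYAUTH" | openssl dgst -sha256 -binary \
  | openssl base64 | tr '+/' '-_' | tr -d '='
```

Either way, the CA only signs the CSR after fetching the published value and checking it matches what it expects.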
So, to put it rather simply,
>Are the challenges used by Let’s Encrypt secure?
Yes, so long as you trust your DNS and web servers not to be compromised. And if they are, it’s frankly game over anyway.
Now let’s contrast this with, for instance, getting a multi-year certificate from the likes of Verisign or similar: this (as far as I am aware) requires manual interaction, which can at least theoretically allow for human error, of which there are many chances.
Additionally, many more traditional CAs will let an inexperienced user have the CA generate the private key and then transmit it to the user. This opens up a LOT of dangerous possibilities, as now this private key is being saved and moved around, and could easily be missed and left on the workstation used to perform the work. Or a MitM attack could even snatch it in transit.
Honestly, I don’t think there is much (if any) point in still using manual verification. The human aspect of it also opens up chances for forgery, and so on.
Let’s Encrypt’s challenges are specifically designed to be difficult or impossible to hijack, and so far as I understand it, the private key never needs to leave the server it will be used on.
So again, to answer your question succinctly and to the best of my knowledge: yes, the challenges used by Let’s Encrypt are most certainly secure.
This should not be true for any CA in the Web PKI. If you have evidence that a CA trusted by Mozilla offers this service you should give that evidence to m.d.s.policy (or me and I'll see it gets passed on with attribution)
There have been resellers who offer this. These are independent businesses from the CAs, and it's even crazier to let them (basically middlemen with no oversight) pick your private keys or know what they are. But as separate businesses it's hard for us to effectively stop them.
> Now let’s contrast this with, for instance, getting a multi-year certificate from the likes of Verisign or similar: this (as far as I am aware) requires manual interaction, which can at least theoretically allow for human error, of which there are many chances.
What I've never understood is how this doesn't ultimately just shift the security risk to my DNS registrar.
Instead of social-engineering the CA to give me a cert, I have to social-engineer the registrar to store a TXT record. I don't see why one should be significantly harder than the other.
> Additionally, many more traditional CAs will let an inexperienced user have the CA generate the private key and then transmit it to the user. This opens up a LOT of dangerous possibilities, as now this private key is being saved and moved around, and could easily be missed and left on the workstation used to perform the work. Or a MitM attack could even snatch it in transit.
Again, it's harder for a rogue CA to abuse my certificate - but instead, a rogue registrar could now easily manipulate my DNS record and receive a valid cert of its own.
Rogue registrars aren't necessary: one can simply attack the registrar or interfere with the DNS traffic. These attacks have already been seen in the wild and are continuing today: check out info on DNSpionage (https://blog.talosintelligence.com/2019/04/dnspionage-brings...) and Sea Turtle (https://blog.talosintelligence.com/2019/04/seaturtle.html) attacks.
Meanwhile, Let's Encrypt has been making some interesting changes. For instance, they just introduced multi-perspective challenges (https://letsencrypt.org/2020/02/19/multi-perspective-validat...) in which they submit multiple challenges to the user from different network paths. Attackers hijacking network paths to interfere with challenges must then intercept all possible paths to a client, which is much harder.
That said, I'm not a fan of devolving our certificate validation to DNS--it's like building a castle on Jello. It wasn't designed to be a security-first protocol, and it's definitely showing its age.
You should definitely not use an untrustworthy DNS registrar or registry for important things, but that was true regardless and it hasn't stopped the .com TLD (which is run very badly indeed) making a tremendous amount of money.
If anything, I would expect universal adoption of automated verification methods to improve security. Instead of only needing to trick a single CA out of an entire root store into issuing a certificate, you would instead need to hijack the DNS listing without being noticed by _anyone_ (and hopefully all CAs, as well as everyone else, would be on the lookout for this).
You think that other CAs manually issue their DV certificates?
>The fact that it's free means that there's no credit card number or other info that would help identify actual person that requested certificate issuance.
How do you feel about CAs that accept cryptocurrency, or accept prepaid credit cards?
Instead, there is a verification process up front when you establish your company's account with the CA, and then each employee's account in the CA web application is associated with the verified company identity.
To be more specific, once the initial company verification is done, the CA relies on their authentication scheme to ensure that the person logging in and requesting a new EV certificate is authorized to do so for that company. The importance of authentication is why both MarkMonitor and CSC require 2FA, for example.
I can't think of a reason that such certificate issuance could not be automated. You would just need to have a machine-to-machine authentication scheme that you can trust and monitor. That is easier than Let's Encrypt's challenges because you can manually establish shared secrets in advance (i.e. API keys).
That said, many CA verification processes are just less-standardized variants of the things LE does. If you can get LE to give you a fraudulent cert, you will also find another CA you can fool.
* Pick a CA you can do business with. Let's say it's Sectigo as an example here
* Arrange a deal with Sectigo whereby they'll use an agreed process such as phoning a specific (confidential) contact number and speaking with Dave your Head of IT Security to confirm it's as expected before each certificate is issued for your names. Maybe this is a minimum volume deal like you'll pay them $2000 for the first up to 100 certificates per year and then $10 for each additional certificate.
* Set the CAA resource in your DNS for your names to require Sectigo as the only authorised CA.
Now when bad guys try to trick Sectigo it doesn't work because Sectigo calls Dave who shuts it down and you're onto them. If they instead try to trick say, Let's Encrypt the CAA resource says only Sectigo is allowed and the attack fails immediately.
The Ten Blessed Methods (of which Let's Encrypt offers three) are obligatory though; you can't make a deal with a CA to just skip them, they must use one of those methods. However, if minimum friction is your goal, you could find a CA that also operates as a DNS registrar for your names, whereupon one of the methods (3.2.2.4.12) means they only need to confirm this fact internally, no work for you.
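For reference, a CAA restriction like the one described is a single DNS record; in zone-file syntax it might look like this (domain and CA value are examples):

```
example.com.  3600  IN  CAA  0 issue "sectigo.com"
example.com.  3600  IN  CAA  0 issuewild ";"
```

The first line restricts ordinary issuance to the named CA; the second, optionally, forbids wildcard issuance by any CA at all.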
Great, now you need to constantly go to some site like caniuse.com to see what % of browsers think your certificate is valid.
In the original conception of X.509, it was supposed to be an enterprise's system administrator (e.g. a university's IT department) that would define the set of root CAs that the enterprise's computers would trust. So there'd be literally no way to know whether a given user device would trust your certificate. That was supposed to be a choice that is up to them, not something you can pre-guarantee by making deals with third parties.
That’s what is happening now. Except for common sense, nothing stops a browser from trusting a certificate with an expiration date in the past.
The expiration date is more a statement from the issuer: “I wouldn’t trust this after this timestamp”.
Having said that, I think Apple should say “our OSes will stop trusting certificates issued over two years ago” instead of ”our OSes will stop trusting certificates that expire in over two years”
If they did that, one could still use certificates with longer expiration periods, allowing other parties to cache them for longer periods.
Now they force everybody to follow them, so that, if this change breaks sites on iOS, it also breaks sites everywhere else.
This is designed to fix the fact that revocation is utterly broken.
You really don't want a situation where webpages may work or not work based on some completely unclear criteria.
The de facto standard will be that you will need to accommodate the least tolerant user agent if you want your certs to work for all browsers.
If I'm understanding correctly, the GP is saying browsers should be making decisions based on a much wider variety of criteria (for example, who issued the cert, how popular is the website using the cert, etc), which would be much more difficult to troubleshoot.
And yes, the de facto standard is that for the general purpose web you can't sell certificates based just on, for example, getting an OK from Microsoft (whose trust stores includes a tonne of governments and government agencies whose inclusion a cynic might assume is not coincidental to those governments agreeing to purchase Microsoft's products...) because those certificates don't work on an iPhone or in Firefox or...
The relationship that is symbolised by the CA/Browser Forum has always been asymmetrical like this, it was this way before the CA/B Forum was created, and if the two groups aren't able to reach common ground the CA/B will just gradually go away.
- max 398 days
today. Tomorrow they may change it to 397 days, or 39.8 days, or "until the next iPhone is released".
The 398 days isn't something they came up with right before they presented it. It's already 3 years old...
The one that I can imagine (without research) is that the issuer knows the quality of their own security practices, and if they say that a certificate will expire by X date, they are saying that they can't guarantee that they will still be the only person with the secret key after that point.
Any other explanation?
They do know a private key for their entire CA certificate which they're using to sign things, but that's not what the expiry date in the leaf certificates they issue you is about.
Nor is it intended that the lifetime of a certificate reflects the expected time to break the private key the user knows; otherwise Apple would let you have, say, a two-year certificate if you went to P-384 certificates, because they're harder to break in theory. There is obviously a threat over time that your keys leak ("Oops, the backup tapes I put in the trash have our current keys") but that's hard to assign a specific time interval to; it just means you should definitely change keys sometimes.
The main thing that we're always worried about on a specific timescale is the name information. The same people control google.com today as two years ago, and likewise for my.lovely.horse but how about bobs-exhausts.example ?
But the main impetus for desiring shorter lived certificates is that the certificate lifetime in practice constrains our ability to make technical changes like the SHA-1 deprecation. We'd like to make changes faster, and if you've just bought an 825 day certificate realistically our changes can't be effective in less than 825 days unless we're willing to be so disruptive your apparently valid cert ceases to work in popular browsers, which will be a hard thing to sell for merely precautionary changes.
If a certificate expires, revocation lists only have to hold revoked certificates that haven't expired.
Forcing certificates to expire means that domain ownership gets re-validated every time. Nobody would risk buying 2nd-hand domains for anything sensitive if the previous owners could have a permanently valid certificate stashed somewhere. Or you'd have to contact all the CAs that could've issued a certificate for that domain and ask them to revoke it if they did.
1. Wait 24 hours or so after securing control over the names.
2. Use the Certificate Transparency logs to determine which certificates if any exist for names you now control and want revoked.
3. Discover the revocation process for the issuer of each cert.
4. Use each process you discovered. All of them should be willing to revoke if you can prove you now control these names. Some (Like Let's Encrypt) offer an automated way to do this, for others you may end up talking to a Customer services person or using an email ticket queue.
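Step 2 is scriptable: services like crt.sh expose CT log data as JSON. A minimal sketch (the response shape here is assumed and simplified; check the live API before relying on it):

```python
# Sketch: given crt.sh-style JSON for a name you now control, find which
# issuers still have unexpired certificates outstanding (step 3's contact list).
# The entries below are fabricated sample data in an assumed response shape.
import json
from datetime import datetime

sample = json.loads("""[
  {"issuer_name": "C=US, O=Let's Encrypt, CN=R3",
   "common_name": "bobs-exhausts.example",
   "not_after": "2025-01-01T00:00:00"},
  {"issuer_name": "C=US, O=ExampleCA",
   "common_name": "bobs-exhausts.example",
   "not_after": "2030-01-01T00:00:00"}
]""")

now = datetime(2026, 1, 1)

def issuers_to_contact(entries, now):
    """Issuers of certificates that have not yet expired."""
    return sorted({e["issuer_name"] for e in entries
                   if datetime.fromisoformat(e["not_after"]) > now})

print(issuers_to_contact(sample, now))
```

Anything already expired can be ignored, which is the previous commenter's point about expiry and revocation interacting.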
At a minimum, it seems like such processes based on both proof of domain ownership and proof of private key possession should exist. Moreover, there ought to be a way to specify via DNS which CAs can issue certificates for your domain (I guess DNSSEC DANE will provide this?).
You can use CAA to tell CAs whether they're authorised to issue for names in your domain, and since CAA is a DNS resource you can secure it with DNSSEC. But CAA isn't retrospective, it won't magically revoke certificates which already exist, although it could prevent issuance based on stale authorisations because it's supposed to be checked live.
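For concreteness, CAA records in a zone file look something like this (hypothetical zone and contact address):

```
; Only the named CA may issue for this domain; wildcard issuance is
; forbidden entirely; violation reports go to the iodef address.
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 issuewild ";"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

Conforming CAs are required to check these records at issuance time, which is what makes the live check meaningful even without DNSSEC.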
Using DANE here would be pretty fraught even if it were widely deployed, which it is far from being.
It's not quite clear what you mean here.
The issuer should never have the private key for the end-entity cert (the one representing the domain or whatever). Only the owner should have the private key.
EDIT: Hmm, according to another comment here, some CAs will offer to generate both halves of the keypair for the end user. Yikes.
If you are talking about the CA's own private key...if that ever leaks, then no certificates signed by it should ever be trusted again; expiration date has no relevance here.
I wasn't able to find this comment, but perhaps I didn't look hard enough?
EDIT: Wait, I think I found it. I will reply there.
In the Web PKI this is prohibited but there are or at least were resellers who'd offer this service to their customers. You should not use this service of course, and some CAs have pledged to tell their resellers not to offer it.
In S/MIME it's more common because often the end user is both technically unsophisticated and not given real control over their client in order to mint their own keys anyway. But S/MIME is... probably not important, certainly it isn't what Apple's change is about.
Lengthy CRLs have a performance cost: they need to be downloaded and processed by clients.
There is also a growing risk correlated with certificate validity length. If a CRL is unavailable to a client (the CRL server might be offline or unreachable, or the client's access maliciously blocked), the client has no way to know the reason and must decide whether to trust the certificate without a CRL check. If clients were configured not to trust in such cases, the path to DoS is clear: block access to the CRL server and you effectively block traffic to all services of that CA. If clients are configured to trust certificates when CRLs are unavailable, which I believe is the default on all OSs currently, blocking access to CRLs allows an attacker to fool clients into trusting revoked certificates. With short-lived certificates, the opportunity window for such attacks is smaller.
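The tradeoff can be stated as a tiny decision function (a sketch, not any real client's logic):

```python
# Sketch of the soft-fail vs hard-fail tradeoff: `crl` is the set of
# revoked serials, or None when the CRL server is unreachable
# (possibly because an attacker blocked it).

def accept_cert(serial, crl, hard_fail=False):
    """Decide whether to trust a certificate given the CRL state."""
    if crl is None:
        # Hard fail: an unreachable CRL blocks ALL of this CA's certs (DoS).
        # Soft fail: an unreachable CRL lets even revoked certs through.
        return not hard_fail
    return serial not in crl

# Attacker blocks the CRL server:
assert accept_cert("revoked-serial", crl=None, hard_fail=False) is True   # revoked cert slips through
assert accept_cert("good-serial", crl=None, hard_fail=True) is False      # legitimate cert rejected
```

Neither branch is good, which is why shrinking the window with short lifetimes is attractive.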
2. Recurring revenue models for CAs.
Well, of course they should; that's why it's right there, called "not before".
> The client should decide whether the certificate is still trustworthy. The cert can contain a recommended expiration date, but the dispositive information should be the issue date.
Well, that's exactly the case. Most clients decided to honor the recommended expiration date, and Apple just announced they won't.
By making every subdomain public, it makes the job easier for any attacker wanting to find smaller servers to target. It's not that I believe in security through obscurity, but besides making sure all servers are as secure as they can be, I do believe in not making the job easier for adversaries.
So instead I use wildcard certificates, and for this, automation gets much more annoying: you need to use DNS to validate (Route 53 or similar does provide an API that can be used), and then I'm not sure I'm comfortable having each server generate its own wildcard certificate, leading to hundreds of wildcard certificates...
This is why I currently use an old-style one-year wildcard certificate which gets updated through Chef, but I'm really not sure whether this is the best solution.
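For what it's worth, the DNS-01 flow can be driven from one place rather than per-server; a sketch assuming Route 53 and a made-up domain (requires the `certbot-dns-route53` plugin and AWS credentials):

```shell
# DNS-01 validation via the Route 53 plugin; certbot creates and
# removes the _acme-challenge TXT record itself, then the resulting
# wildcard cert can be pushed out by Chef like the current one-year cert.
certbot certonly --dns-route53 \
    -d "example.com" -d "*.example.com"
```

That keeps a single key/cert pair under central control instead of hundreds of per-server wildcards, at the cost of the distribution step you already have.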
Even if you are unusual in actually having machines named cy23hdc9.example.com not exchange2016.example.com then the existence of this service means you need to stop assuming nobody knows these names. Anybody who cares knows them.
This would also involve setting up a custom CA and distributing your CA cert to all machines that should be able to access the page. Good luck with that!
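The CA-creation half is genuinely only a few commands (hostnames and filenames here are made up); it's the distribution half that's painful:

```shell
# Create a private CA (key + self-signed root cert), entirely offline.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout ca.key -out ca.crt -subj "/CN=My LAN CA"

# Issue a leaf certificate for an internal host.
openssl req -newkey rsa:2048 -nodes \
    -keyout nas.key -out nas.csr -subj "/CN=nas.lan"
openssl x509 -req -in nas.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 90 -out nas.crt

# Confirm the leaf chains to the private root.
openssl verify -CAfile ca.crt nas.crt
```

Note this is a minimal sketch: modern browsers additionally want a subjectAltName on the leaf, and ca.crt still has to be imported into the trust store of every phone, laptop, and appliance on the LAN, which is the part that doesn't scale.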
All of this is a ridiculous amount of effort to set up and maintain. It's a lot easier to just make the internal domains accessible to the internet and be done with it.
The effect being that networks are being nudged into exposing a lot more surface to the internet - and this somehow in the name of more security.
The cynical part of me says that certain companies might be very interested in the search possibilities gained from exposing internal networks to the Internet, and the increasing lock-in that occurs when you make your systems dependent on their public CA instead of your own private one. But perhaps that's just the tinfoil-hat talking. :)
That said, shouldn't that be illegal? Where does this data come from anyway? I'm guessing spyware - on the phone, in browsers, and spyware browsers like Chrome.
I'd be very surprised if any of the suppliers use spyware, seems like it'd be far less effective and also more expensive to do.
Neither of those really apply to the root certificate store. Updating the root certificate (to add or remove CAs) requires an OS/browser update instead of relying on CRL or OCSP queries which are flakier--and browsers have a variety of mechanisms to punish misbehaving CAs that aren't "drop you from the root store effective immediately."
Meanwhile, the process for updating root stores is entirely different than regular certificates, and it takes months to years to add a new root certificate--and I believe you don't get a pass to merely update your new root certificate for an existing CA.
The other risks of overly long validity times are ameliorated to a large degree by the fact that CAs are regularly audited and much greater scrutiny is placed on their issuance.
For this reason root certificates have a very long validity and very stringent requirements in their security parameters, e.g. large key sizes, and how they are used. Usually the certificate sits in a well secured off-network physical device and is only used a few times during its lifetime to sign sub-certificates.
Now I gotta keep a history of passwords, just for Google...
... and millions of similar articles
I developed widgets for web developers and had to stop using 'fixed' positioning anywhere, because Safari on iOS is the only browser that interprets it differently on touch-enabled devices. They have many other issues as well. I have to spend half of my time fixing their buggy browser! Not even old Internet Explorer caused as many issues as Safari does.
They torpedoed progressive web apps and many important web standards. Why? Just because they want you to use their apple store, where they can take 30% from every transaction without doing anything.
I hope Apple goes bankrupt and stops hurting the web community the way it has in the past years.
I am not talking about mobile usage.
The vast majority of users will not understand this. Also let's say you go to a site one day, and you get a prompt that their certificate has changed. Is this legit? How can you know?
>browser should send their public key to the server!
This already exists (HTTPS Client Certificates) but it's a huge pain in the ass so it's barely ever used. When it is used, it's generally within a controlled corporate environment.
Client certificates have kinda been deprecated by browsers, which is why they're such a PITA. Handling signing certificates is a PITA by itself, too. But most websites wouldn't need signing, just encryption.
Automatic SSL certificate signing is not that secure: if an attacker can mess with DNS, or access the HTTP server, they could also obtain a fraudulent certificate via Let's Encrypt. Or if the attacker has access to the client, they could sneak in a root certificate. Most nation states and ISPs (those who would like to spy on their citizens) already have root certificates.
All that SSL certificates do is create extra work for site maintainers. We could have encryption without certificates. If you, for example, side-loaded a list of public keys for the main sites you visit, that would be much more secure than SSL certificates are today.
A DNS record (TLSA) would tell the browser how to deal with trusting the certificate, its public key or to validate the chain of certificates, etc: https://www.rfc-editor.org/rfc/rfc7671.html.
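An illustrative TLSA record (the hash below is made up): "3 1 1" means DANE-EE, pinning the SHA-256 of the server's own SubjectPublicKeyInfo, so the browser would trust that key directly with no CA chain at all.

```
_443._tcp.www.example.com. IN TLSA 3 1 1 (
    8755cdaa8fe24ef16cc0f2c918063185e433faaf1415664911d9e30a924138c4 )
```

Other usage values (0-2) instead constrain which CA or trust anchor must appear in the chain.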
This would have radically changed the landscape of certificate authorities and browser vendors.
Google and Mozilla implemented it back in the day but changed directions. Among other issues, doing additional DNS lookups when you're trying to get the page on the screen as quickly as possible put the kibosh on things. Plus politics.
Mind you, using DANE + TLSA is the de facto standard for Mail Transfer Agents doing secure email with each other, but SMTP and HTTP have different security requirements.
We may eventually get there, but there are political and technical hurdles to overcome when it comes to DNSSEC, which is required for DANE and all the rest. It's really taken off in parts of the European Union, South America, and Asia, but not so much in the US, where only about 25% of internet users are behind a DNSSEC-aware resolver: https://stats.dnssec-tools.org.
The article DNSSEC, DANE and the failure of X.509 is a pretty good summary: https://blog.hansenpartnership.com/dnssec-dane-and-the-failu....
That would make fingerprinting even easier?
The advantage of reusing "profiles" is that you could use the same profile on many web apps like Facebook and Instagram; then you could allow a Facebook friend to access your Instagram photos. Or your "contact list" could be offline (e.g. managed by your browser, not by web apps), and you'd manage access rights via your contact list rather than in each and every app.