Security concerns with the e-Tugra certificate authority (ian.sh)
159 points by jamespwilliams on Nov 17, 2022 | 94 comments



I understand why browsers are in a position where they don't want to remove CAs unless there are repeated and egregious issues, but I wish there were some third party that would rate CAs. I'd be willing to lose access to 1% of the web if it meant cutting 90% of these garbage CAs off my root certificates list.


I have a simple solution: allow multiple CAs to issue a cert, let a site use a CAA record or the like to pin them, and have browsers encourage and enforce CAA record upkeep. So you can say "only letsencrypt and digicert can issue a cert for my site." Or, if updating DNS is cumbersome, include a field in the cert listing alternative CAs. Browsers can then pin CAs to a site.

That way it would require multiple CAs to be compromised, or to act in concert, to affect a user who visited the site prior to a compromise. For new visitors, browsers can expand their normal CT check to include alternative CA declarations. The connection is secure only if all CAs certify all other CAs as valid and the certificate as authentic. If there is a valid cert issued in CT logs from a different CA, the user gets an insecure-connection warning -- which does mean random CAs can mis-issue certs and cause errors (for sites with multiple CAs, that is), but that will only help get rid of those CAs.
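
The scheme above can be sketched as pure logic (all CA names and the shape of the CT check here are illustrative, not an existing browser API):

```python
# Sketch of the multi-CA pinning idea described above. The site pins a set of
# CAs (e.g. via CAA records); the browser only treats a connection as secure
# when the presented chain's issuer is pinned AND no CT-logged cert from a
# non-pinned CA exists for the domain. All names are illustrative.

def connection_is_secure(pinned_cas, chain_issuer, ct_issuers):
    """pinned_cas: CAs the site declared (e.g. via CAA records).
    chain_issuer: CA that signed the cert presented in this handshake.
    ct_issuers: issuers of all valid, unexpired CT-logged certs for the domain."""
    if chain_issuer not in pinned_cas:
        return False  # cert came from a CA the site never pinned
    # Any CT-logged cert from an unpinned CA indicates mis-issuance:
    # surface a warning instead of a secure connection.
    return all(issuer in pinned_cas for issuer in ct_issuers)

print(connection_is_secure({"letsencrypt.org", "digicert.com"},
                           "letsencrypt.org",
                           {"letsencrypt.org"}))  # True
print(connection_is_secure({"letsencrypt.org", "digicert.com"},
                           "letsencrypt.org",
                           {"letsencrypt.org", "evil-ca.example"}))  # False
```

A mis-issuance by a random unpinned CA shows up in the CT check and downgrades the connection, which is exactly the "errors that get rid of bad CAs" behavior described.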


This. Browsers should mark sites without CAA as untrusted. (Also without DNSSEC maybe?)

I'm wondering why DANE didn't take off.


Because DNSSEC is just a second PKI that can't fix the problems with WebPKI without fixing those same problems with itself.


There are a whole plethora of practical reasons why DANE turned out to be unworkable, but this really is the core of it. It's worth keeping in mind that you can revoke an entire CA --- immensely popular and important CAs have been entirely removed from trust stores, quite recently --- but you can't do that with the DNS hierarchy. You can't do long term secrets without some form of revocation, and that's fundamentally what DANE represents.


> It's worth keeping in mind that you can revoke an entire CA

> ... but you can't do that with the DNS hierarchy.

DANE effectively revokes ALL CAs. Forever.

Once again, you are confused about the role of TLDs in DANE. They are not equivalent to CAs in WebPKI.

In a DANE-based PKI system, TLDs have neither less nor more power than they already have today with the WebPKI system.

So there is exactly as much need to revoke them in DANE as there is today with the WebPKI system, which is also susceptible to the exact same attacks from TLDs as the DANE-based PKI system.

> It's worth keeping in mind that you can revoke an entire CA --- immensely popular and important CAs have been entirely removed from trust stores, quite recently

You can, but browser vendors are reluctant to do it because removing a single CA from trust stores could break thousands or millions of websites.

And it would take months or years for that removal to be reflected in the millions of deployed systems out there that seldom or never update.

In DANE, there are no such problems.


Are you saying registrars are equivalent to CAs now and they can revoke domains just like they could reclaim them before? Right now, I trust TLDs to tell me the NS of the domain so I can query its IP against that NS, I don't trust them to validate that the true owner of the site currently controls the domain. So under DANE, who else if not TLDs does one ultimately trust to attest to that claim? Honestly asking, I don't know enough about DANE.


> Are you saying registrars are equivalent to CAs now and they can revoke domains just like they could reclaim them before?

I'm sorry, I don't understand the question. Could you rephrase it or expand on it? Specifically, I don't understand what "before" is in regards to -- before what?

> Right now, I trust TLDs to tell me the NS of the domain so I can query its IP against that NS, I don't trust them to validate that the true owner of the site currently controls the domain.

But you trust registrars for that, right? And registrars basically send that information to the TLD and coordinate with the TLD so that they point the NS record to whatever the owner of the domain wants.

Either way, yes, both your registrar and your TLD have the same technical ability to attack and perform a man-in-the-middle against your entire domain, either in the current WebPKI system or a DANE-based PKI system. So that is not a problem with DANE any more than it's a problem with the WebPKI system.

Of course, what is possible to do technically and what is legal can be entirely different matters. But it is technically possible, and usually illegal in most circumstances (except under judicial order or as a result of law enforcement I suppose), to do these kinds of attacks with both DANE and WebPKI.

> So under DANE, who else if not TLDs does one ultimately trust to attest to that claim? Honestly asking, I don't know enough about DANE.

Which claim, who is the owner of the site? It would be the same as it is now. As far as I understand, registrars do that and then coordinate with whoever manages the TLD.

In fact, that's what I've been saying, the criticism against DANE is equally valid with WebPKI, because the domain system would be the same. Except WebPKI is more fragile and has a lot more weak links in its chain.

tptacek says you can't revoke keys in a DANE system. Yes, you can, you can revoke them yourself! You don't need to wait for browser vendors to revoke an entire CA, potentially affecting thousands or millions of other websites. And they may actually decide not to revoke the CA, even if they issued rogue certificates, for several reasons.

What browsers can't do in DANE is revoke the TLD's keys (but the TLDs can do it themselves if they were attacked). But of course, it makes no sense to revoke the TLD's keys if they are the attacker.

Because if a TLD hijacks your domain (and they already do that today, whenever law enforcement and judges require them to do it), then neither DANE nor WebPKI can do squat about it.


You have confused discretionary revocation with misissuance.


> You have confused discretionary revocation with misissuance.

Ok, then please enlighten me. What is misissuance in DANE?

Because if misissuance in DANE is your TLD hijacking your domain... how does insecure DNS + WebPKI prevent that kind of attack?


DANE doesn't revoke any CAs. You can read AGL's post about why DANE just adds an nth new CA to the trust store (or, it would: no browser currently plans to adopt DANE; browsers actually removed their DANE code).

Meanwhile: it simply is the case that if LetsEncrypt starts misissuing certificates, Google and Mozilla can (and likely will) revoke it. Meanwhile: if the operators of the root zone or any of the FVEY TLDs misissue, nobody can revoke them.

You could try to argue that this was academic, except that the USG routinely and publicly manipulates the DNS for policy reasons.


> You can read AGL's post about why DANE just adds an nth new CA to the trust store

Who's AGL and where is that post?

I don't see why it would be like adding a new CA. If the client can do DNSSEC validation then it could just use DANE, bypass the whole WebPKI system and basically distrust all CAs (if the website has already migrated to DANE).
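
For concreteness, the "bypass WebPKI entirely" mode being described corresponds to TLSA certificate-usage 3 (DANE-EE), where DNS publishes a digest of the server's own key and no CA is involved. A minimal sketch of how such a record's digest is computed (dummy bytes stand in for a real DER-encoded key):

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """Certificate-usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo),
    matching-type 1 (SHA-256): the record data is just the SPKI's SHA-256."""
    return hashlib.sha256(spki_der).hexdigest()

# With DANE the server's key is published directly in DNS, e.g.:
#   _443._tcp.example.com. IN TLSA 3 1 1 <digest>
# A validating client compares the digest against the key presented in the
# TLS handshake. (Dummy bytes in place of a real DER-encoded SPKI.)
dummy_spki = b"\x30\x82\x01\x22dummy-spki"
print(f"_443._tcp.example.com. IN TLSA 3 1 1 {tlsa_3_1_1(dummy_spki)}")
```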

It's not the same as adding a new CA, because in that case you are trusting that CA and you are also trusting all other CAs at the same time.

> (or, it would: no browser currently plans to adopt DANE; browsers actually removed their DANE code).

Yeah, I wonder why they did that. So Firefox adds DNS-over-HTTPS but then it doesn't implement DANE?

How does that make any sense? If you're already doing DNS-over-HTTPS then you're basically already at the point where you can get full DANE protection for free.

> Meanwhile: it simply is the case that if LetsEncrypt starts misissuing certificates, Google and Mozilla can (and likely will) revoke it.

According to their own stats, Let's Encrypt is protecting around 300 million active domains.

Which means Google and Mozilla can't revoke Let's Encrypt without breaking the Internet. So don't count on that happening, ever.

> Meanwhile: if the operators of the root zone or any the FVEY TLDs misissue, nobody can revoke them.

If those operators were attacked, they can revoke themselves.

If they are the attacker (like when the government hijacks your website due to e.g. piracy), you are exactly as screwed as you would be with the WebPKI system.

> You could try to argue that this was academic, except that the USG routinely and publicly manipulates the DNS for policy reasons.

Yes. And how does the WebPKI secure your website when that happens? Answer: it doesn't.

You're moving the goal posts and you don't even notice.


Respectfully, it sounds like you haven't paid much attention to the last 10 years of browser DNSSEC integration. For instance: you seem shocked (disbelieving, maybe?) at the idea that browsers can't simply enable end-to-end DANE validation. Here's a thing: DNSSEC records don't even universally transit the Internet; very widely deployed middleboxes will block DNS packets that contain DNSSEC signatures or DANE records.

There's no point in arguing this stuff, because we're operating from different premises. You read about DANE and think it's great. I know the feeling! I was a huge HPKP booster back in the day; that didn't work either. Meanwhile, one of my premises is: we can only do things that actually work, not just things we wish would work.

(I happen to not wish for a single Internet-wide government-controlled top-down PKI rooted in 1990s cryptography, but that's not relevant to my argument.)

I understand if you feel like I'm moving the goalposts. You don't know where the goalposts are. That's OK. But I assure you: there is no need for me to move them at all; my argument just isn't far enough from the end zone to bother.


> Respectfully, it sounds like you haven't paid much attention to the last 10 years of browser DNSSEC integration.

You are correct. That could mean I'm wrong, but it doesn't necessarily mean so. I would be happy to know whether I'm wrong and why.

> For instance: you seem shocked (disbelieving, maybe?) at the idea that browsers can't simply enable end-to-end DANE validation. Here's a thing: DNSSEC records don't even universally transit the Internet; very widely deployed middleboxes will block DNS packets that contain DNSSEC signatures or DANE records.

Indeed. That is solved by DNS-over-HTTPS for which Firefox and Android already have an implementation.

> There's no point in arguing this stuff, because we're operating from different premises. You read about DANE and think it's great. I know the feeling! I was a huge HPKP booster back in the day; that didn't work either. Meanwhile, one of my premises is: we can only do things that actually work, not just things we wish would work.

Sure, if it wouldn't work I would accept it (although I'd like to know why). But you haven't given me a convincing argument about why it can't work.

As far as I can see, in other posts you're just arguing about things DANE should solve that WebPKI doesn't.

And in this post, you're only considering how things stand at present, not what organizations like Mozilla, Google, Apple and Microsoft could do to improve the situation, which, let's be clear, is absolutely unacceptable in terms of security considering we are living in the year 2022.

> I understand if you feel like I'm moving the goalposts. You don't know where the goalposts are. That's OK. But I assure you: there is no need for me to move them at all; my argument just isn't far enough from the end zone to bother.

Yes, I do. The goalposts are having a system that is more secure than insecure DNS and WebPKI while also being much simpler.

DNSSEC and DANE achieve that.

But you say that it's preferable to have a worse, much more complex, much more insecure system just because governments can subvert DNSSEC and DANE (even though they can subvert WebPKI even more today!).


The situation is far less "unacceptable" than you think it is, and DNSSEC would make things worse, not better.

https://sockpuppet.org/blog/2015/01/15/against-dnssec/


OK, I hadn't read that post before. Some of the arguments you had already made but some I hadn't read before.

But I mean, it doesn't bode well when even the first paragraph is incorrect:

> All secure crypto on the Internet assumes that the DNS lookup from names to IP addresses are insecure. Securing those DNS lookups therefore enables no meaningful security.

This is incorrect because if an attacker can make a DNS lookup (say for example.com) resolve to an attacker-controlled entry (and there are multiple ways to do this), the attacker can trick almost all CAs into signing a certificate for example.com, which the attacker can use to perform a MITM attack against example.com.
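
A toy model of why domain validation inherits DNS's weakness (the addresses are invented and real ACME validation is more involved, but the dependency is the same):

```python
# Toy model of domain-control validation (roughly the shape of HTTP-01):
# the CA "proves" control by resolving the name and fetching a token, so
# whoever controls the CA's view of DNS controls issuance.

def ca_validates(domain, token, resolve, fetch):
    ip = resolve(domain)  # attacker-influenced step
    return fetch(ip, f"/.well-known/acme-challenge/{token}") == token

honest_dns = lambda name: "203.0.113.10"    # the real server
spoofed_dns = lambda name: "198.51.100.66"  # the attacker's server

def fetch(ip, path):
    # Both the real server and the attacker's server serve the expected
    # token, so the CA cannot tell the two cases apart.
    return "tok123" if ip in ("203.0.113.10", "198.51.100.66") else None

print(ca_validates("example.com", "tok123", honest_dns, fetch))   # True
print(ca_validates("example.com", "tok123", spoofed_dns, fetch))  # True -- misissuance
```

Both calls succeed, which is the point: the validation step is only as trustworthy as the DNS answers it consumed.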

So WebPKI fails to protect against insecure DNS.

> DNSSEC is a Government-Controlled PKI

This is the same "moving the goal posts" argument. The government already controls the domain system, so you wouldn't be any worse with DNSSEC and DANE than with insecure DNS and WebPKI. You'd actually be much more secure.

I mean, sure, if you want to propose a security system that has all the benefits of DNSSEC and also subverts the government, feel free to do so (and good luck with that).

> In a world where users rely on DANE, those governments have much of the same cryptographic authority as the CAs do now

Governments would have the same authority as today. Nothing would change with that. I'm not sure why you think DNSSEC has to subvert the government when neither insecure DNS nor WebPKI does that.

In fact, insecure DNS and WebPKI allow a lot more attacks from governments, as any government can attack any website in the entire world.

> Had DNSSEC been deployed 5 years ago, Muammar Gaddafi would have controlled BIT.LY’s TLS keys.

Muammar Gaddafi could have controlled BIT.LY's TLS keys with the WebPKI system, if he wanted!

Or do you think insecure DNS and the WebPKI system protects against a government hijacking a domain? It doesn't, at all! Governments routinely hijack/seize (and block) domains with the current system.

> DNSSEC is Cryptographically Weak

> A modern PKI would almost certainly be based on modern elliptic curve signature schemes,

As far as I know DNSSEC supports elliptic curves.
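
For what it's worth, the IANA DNSSEC algorithm registry has included elliptic-curve schemes for years (numbers below are from RFC 6605 and RFC 8080):

```python
# DNSSEC signing algorithms from the IANA "DNS Security Algorithm Numbers"
# registry -- modern elliptic-curve schemes are standardized.
DNSSEC_EC_ALGORITHMS = {
    13: "ECDSAP256SHA256",  # RFC 6605
    14: "ECDSAP384SHA384",  # RFC 6605
    15: "ED25519",          # RFC 8080
    16: "ED448",            # RFC 8080
}

for number, name in DNSSEC_EC_ALGORITHMS.items():
    print(number, name)
```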

> But the code that handles lookups today is buried deep in the lowest levels of application network code, and is rarely if ever wired for user interface.

Browsers can definitely have such a UI, they already have their own DNS caching mechanisms and whatnot. And other applications already have the same problem with TLS, this is no different with DNSSEC.

> DNSSEC is harder to deploy than TLS.

Yeah, I'm not sure about that. Like you said before, DNSSEC was automatically enabled by registrars in Europe for millions of customers, without them doing anything.

And even when it's not automatically enabled, usually you just have to tick a box in your DNS provider and/or copy a key into your registrar.

Sure, more complicated deployments are possible but the same is true for TLS.

In fact, even the most basic TLS deployment requires significantly more steps than enabling DNSSEC.

> In fact, it does nothing for any of the “last mile” of DNS lookups: the link between software and DNS servers. It’s a server-to-server protocol.

It does if the last mile is DNS-over-HTTPS, or DNS-over-TLS, or the client does DNSSEC validation.

> Even if DNSSEC was deployed universally and every zone on the Internet was signed, a coffee-shop attacker would still be able to use DNS to man-in-the-middle browsers.

No, nowadays this wouldn't be a realistic scenario anymore.

Clients should do DNS-over-HTTPS and/or DNS-over-TLS and/or DNSSEC validation.

> DNSSEC is Unsafe

It's not, you mentioned NSEC3 yourself below, while dismissing it as "bizarro", which I don't see how it can be a valid argument.

> DNSSEC is Architecturally Unsound

> effective security is almost invariably application-level and receives no real support from the network itself.

Right, because applications do so much to enforce DNS security and handle DNS hijacks nowadays.

If DNSSEC was enabled by default then applications wouldn't have to worry about DNS security and they would be just fine.

But those that wanted it, could still handle it differently (like browsers). For example, by having a DANE version of openssl which would give them more information about failures.

> How could US government IT not love it?

Oh, why don't you ask that about the WebPKI? That's a government's dream come true, actually.

With WebPKI, countries like Russia / China / Ethiopia / Somalia / Whatever can perform MITM attacks on .COM domains. Or whatever TLD they want.

With DNSSEC and DANE they can't.

> If you’re running systems carefully today, no security problem you have gets solved by deploying DNSSEC.

Yes, because everyone is running systems so carefully today.

And if no security problem gets solved by deploying DNSSEC, will you let me MITM your DNS server and we'll see what happens?

Haven't CAs already demonstrated their incompetence, power abuse and inability to prevent governments from around the world from attacking domains in other countries?

I really mean no offense or disrespect, but this post honestly just reads like a bunch of FUD to me...


> This is incorrect because if an attacker can make a DNS lookup (say for example.com) resolve to an attacker-controlled entry (and there are multiple ways to do this),

That'd be annoying to achieve, unless the authoritative name server is compromised... oops.

> This is the same "moving the goal posts" argument. The government already controls the domain system, so you wouldn't be any worse with DNSSEC and DANE than with insecure DNS and WebPKI. You'd actually be much more secure.

Compromising both nameserver operators and public CAs is much more difficult and much more visible. There is *no transparency* with DNSSEC alone.

> Yeah, I'm not sure about that.

DNSSEC deployment percentage by TLD should confirm it for you. If it's not that hard, why do some ccTLDs still not support it? It's ridiculous that website owners' security should depend on that.

> It does if the last mile is DNS-over-HTTPS, or DNS-over-TLS, or the client does DNSSEC validation.

DNS-over-DANE-HTTPS sounds a bit tautological, doesn't it?

> In fact, even the most basic TLS deployment requires significantly more steps than enabling DNSSEC.

It really doesn't if we're speaking of DANE vs. WebPKI.

> It's not, you mentioned NSEC3 yourself below, while dismissing it as "bizarro", which I don't see how it can be a valid argument.

NSEC3, which is considered barely a hindrance to zone walking.

> If DNSSEC was enabled by default then applications wouldn't have to worry about DNS security and they would be just fine.

Assuming operators enable it.

> Oh, why don't you ask that about the WebPKI? That's a government's dream come true, actually.

And DNSSEC would be even more so.

> With DNSSEC and DANE they can't.

Ehh? That's a leap.

> Yes, because everyone is running systems so carefully today.

And they'll be more diligent with DNSSEC which has far fewer safeguards?

> Haven't CAs already demonstrated their incompetence, power abuse and inability to prevent governments from around the world from attacking domains in other countries?

And haven't registrars demonstrated the same? Oh, that's right, we wouldn't know, because there's no oversight, there's barely any deployment, there's barely any good tooling.

> I really mean no offense or disrespect, but this post honestly just reads like a bunch of FUD to me...

I'd really say the same about claiming DNSSEC+DANE would "fix" WebPKI.


> That'd be annoying to achieve, unless the authoritative name server is compromised... oops.

We weren't discussing how annoying it is to achieve, just whether WebPKI protects against insecure DNS or not. And it doesn't (in some scenarios), because it depends on DNS to generate valid certificates but has no way to validate DNS responses (which could have been compromised in transit, by multiple kinds of attacks).

> Compromising both nameserver operators and public CA's is much more difficult and much more visible. There is no transparency with DNSSEC alone.

How is it much more difficult? It's exactly as difficult as compromising nameserver operators in DANE. If you compromise the nameserver then the CA would be automatically compromised as well.

There is no transparency with WebPKI either, unless the target is a website rather than some other kind of TLS-based service.

Either way, tptacek and I have already discussed in another post how a DANE-based transparency scheme would work and it would be much easier and less resource intensive to do than certificate transparency.

> DNSSEC deployment percentage by TLD should confirm it for you. If it's not that hard, why do some ccTLD's still not support it? It's ridiculous that website owners' website security should depend on that.

Because there's not much value in doing so yet. The vast majority of clients aren't validating DNSSEC by default yet.

> DNS-over-DANE-HTTPS sounds a bit tautological, doesn't it?

I don't see a problem, although some special code might have to be written to handle that. You'd just need to validate the certificate of the DNS server before declaring the DNS-over-HTTPS connection as authenticated. The DNS server could provide the signed responses upon establishing the TCP connection, which the client would validate.

> And DNSSEC would be even more so.

No, it would be less so. Have you even read any of my arguments?

As I said, governments can attack any TLS-based service in the world right now.

With DNSSEC and DANE, they couldn't -- they could only attack domains over which they already have control.

> Ehh? That's a leap.

Please explain how so.

Explain how they would be able to do a MITM attack if they have no control over an unrelated TLD, which would be needed to sign a valid certificate.

> And haven't registrars demonstrated the same? Oh, that's right, we wouldn't know, because there's no oversight, there's barely any deployment, there's barely any good tooling.

Right, there isn't, which is why we should change that.

> I'd really say the same about claiming DNSSEC+DANE would "fix" WebPKI.

I didn't claim that, so your statement is invalid.

I only claimed that DNSSEC+DANE is simpler and much more secure than insecure DNS+WebPKI. Which makes it worth transitioning to.

The two systems would need to coexist during the transition period, but for any already-transitioned domain, the benefits would be clear and substantial.

Arguments about slow adoption are invalid because almost no effort (DNS-over-HTTPS being the exception) is currently being made by those who actually have the means to drive adoption (i.e. Mozilla, Google, Apple, Microsoft).


> We weren't discussing how annoying it is to achieve, just whether WebPKI completely protects against insecure DNS or not. And it doesn't.

Security of trust services basically always boils down to how "annoying" they are to compromise.

> How is it much more difficult? It's exactly as difficult as compromising nameserver operators in DANE.

Because there's actual oversight of WebPKI CAs and their compliance, unlike with DNSSEC.

> Either way, tptacek and I have already discussed in another post how a DANE-based transparency scheme would work and it would be much easier and less resource intensive to do than certificate transparency.

No standard, no proof.

> Because there's not much value in doing so yet. The vast majority of clients aren't validating DNSSEC by default yet.

Because it's more difficult than TLS, while not being better than it.

> With DNSSEC and DANE, they couldn't -- they could only attack domains over which they already have control.

Says you?

> No, it would be less so. Have you even read any of my arguments?

You haven't provided arguments why that would be the case.

> Please explain how so.

Please explain how so. I shouldn't be trying to prove the negative of your initial claim that is without support.

> I only claimed that DNSSEC+DANE is simpler and much more secure than insecure DNS+WebPKI.

And that's the FUD part.


> Security of trust services basically always boil down to how "annoying" they are to compromise.

No, real security is achieved by mathematical / cryptographic impossibility. "Annoyance" is hardly a deterrent, except for the least determined attackers.

> Because there's actual oversight of the compliance and oversight of WebPKI CAs, unlike DNSSEC.

That doesn't matter at all, because if you compromise the domain's nameserver (like you would have to do in DANE to attack it), then CAs will just happily sign a rogue certificate, which can be used to do a MITM.

> No standard, no proof.

What a good argument. Your argument is basically like saying in 2002: "electric vehicles aren't viable because no infrastructure for them exists right now".

> Says you?

Well, if I'm wrong why don't you tell me why I'm wrong? How is that an argument?

> You haven't provided arguments why that would be the case.

I have, please read my comments.

I will say it again briefly so you can understand it: governments currently can attack any domain in the world, regardless of TLD.

In DNSSEC+DANE they could only attack their own TLDs.

Insecure DNS and WebPKI is therefore a government's dream come true, not DNSSEC+DANE.

> Please explain how so. I shouldn't be trying to prove the negative of your initial claim that is without support.

In DNSSEC+DANE governments can't compromise CAs, because they wouldn't exist.

That's why DNSSEC+DANE prevents the governments of Russia and China from doing MITM attacks against TLS-based services in .COM domains.

Do you understand now?

> > I only claimed that DNSSEC+DANE is simpler and much more secure than insecure DNS+WebPKI.

> And that's the FUD part.

Well, if you think it's FUD, are you going to actually provide valid technical arguments about why it's FUD like what I did with tptacek's post, or are you just going to nitpick my arguments, completely miss the points I was making and then accuse me of FUD?


> No, real security is achieved by mathematical / cryptographic impossibility.

You can't do trust that way. It's an entirely human concept; we use cryptography only to make it scale better.

> That doesn't matter at all

Says you? Again. You're ignoring half of the rationale behind CT (which I will *not* be rephrasing here, it's freely available to read).

> What a good argument.

It is. Unless you can actually show a working DNSSEC transparency standard, it doesn't exist. Some hypothetical future feature to remedy flaws is *really* not a strong argument towards DNSSEC.

> Well, if I'm wrong why don't you tell me why I'm wrong? How is that an argument?

There's no argument, just your opinion. It's starting to become funny how vague this discussion is becoming with you avoiding providing anything of substance that could be properly refuted.

> In DNSSEC+DANE they could only attack their own TLDs.

Huh? They'd attack any of the myriad registries, just like they could attack CAs. It's just that CAs are held to a much higher standard of supervision. You can't just ignore that part.

> In DNSSEC+DANE governments can't compromise CAs, because they wouldn't exist.

This is just nominative pedantry, they're functionally still trust authorities that can be compromised. Call them CAs or Key-As, it doesn't matter.

> Well, if you think it's FUD, are you going to actually provide valid technical arguments about why it's FUD

It's a very asymmetric effort for me to counter non-technical opinions and hypotheticals with technical arguments. It's not fair to my time, and I'm not going to.

> completely miss the points I was making and then accuse me of FUD?

You're the one calling it "insecure WebPKI" while dismissing everything wrong with DNSSEC, so it's not even an accusation, it's an astute observation.


> Because DNSSEC is just a second PKI that can't fix the problems with WebPKI without fixing those same problems with itself.

Oh, yes it can.

With DANE, if you have a .COM website, there is no chance that some obscure company in Russia, Turkey, China, or one of a hundred other countries can issue a valid certificate for your website or any other TLS-based service you're hosting.

In WebPKI that can happen at any time, regardless of whether it would have subsequent repercussions... or not. Maybe that company issued a certificate because it was attacked or exploited, say, because some employee inserted a USB key into his computer (or got paid by a third party to do it).

So even when you are attacked in the WebPKI system, not only is there nothing you can do about it, but the CA that issued the certificate may not even suffer any consequences for it (except having to implement "more security controls" or "fix some flaw", presumably).

And of course, each country's national security agency can always bypass such security controls, as long as it can find some plausible cover story for the result of that CA's "investigation" of itself.


You're discussing a hypothetical world in which DANE was universally deployed and all Internet paths were clear for it. We can only asymptotically approach that world (and at present we are nowhere close). Until we get there, browsers have to be able to fall back to CA certificates. In addition to fallback being one of the most famous rat's nests of security vulnerabilities in all of practical cryptography, this situation dictates that browsers won't be able to eliminate X.509 with DANE; they'll merely be adding a new trust anchor to the root store. The worst of both worlds.


> We can only asymptotically approach that world (and at present we are nowhere close).

Well, we're nowhere close because neither browsers nor operating systems are currently doing DNSSEC validation by default.

This could change (figuratively speaking) tomorrow if vendors decided to do so (along with implementing DNS-over-HTTPS if they haven't already).

> In addition to fallback being one of the most famous rats nets of security vulnerabilities in all of practical cryptography, this situation dictates that browsers won't be able to eliminate X.509 with DANE;

I think you are failing to see a future where the vast majority of these obstacles have been solved and therefore websites could afford to disable support for the remaining tiny (but long) tail of WebPKI-only clients.

Even despite the fallback problem, I think a DANE-based PKI system would be much simpler than WebPKI currently is, by orders of magnitude. So I don't think we would be worse than where we are now, where we just keep adding layers upon layers of complexity without actually solving the real problem at the root.

> they'll merely be adding a new trust anchor to the root store. The worst of both worlds.

No, it's not merely adding a new trust anchor, because that implies the old anchors would still be accepted even though the new anchor is working.

The idea is that if the client can do DNSSEC validation then it can disable all the other anchors and just use DANE (if the website has already migrated).


You are right. I am failing to see that future.


Anyone who can MITM a site can MITM a CAA record.


That's also true today, without CAA records.

Anyone who can MITM a site can also get a valid certificate for it.


I think you are talking about MITM all traffic to a site as opposed to MITM all traffic from specific clients (e.g.: hostile webhost vs compromised router).


Oh I see what you mean. So the parent poster is saying that if some router gets compromised, then it can perform a MITM against its clients, faking the CAA record of some website which would trick the clients into believing the correct CA for that website is different from the real one.

Indeed, that would be a problem with the CAA approach, currently. To be immune against that attack, clients would either need to 1) use DNS-over-HTTPS, 2) use DNS-over-TLS or 3) perform DNSSEC validation of the CAA record.

Either 1, 2 or 3 would be enough to thwart the attack, but of course, it would be better if they did either (1 and 3) or (2 and 3).
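To make the CAA mechanics concrete, here is a rough Python sketch of the check a CA is supposed to perform before issuance (simplified from RFC 8659; a real implementation also climbs the DNS tree toward the root, handles critical flags and iodef records, and should validate the records with DNSSEC):

```python
# Simplified sketch of the CAA issuance check (RFC 8659 semantics,
# minus tree climbing, critical-flag handling, and DNSSEC validation).

def caa_permits(caa_records, ca_identifier):
    """caa_records: list of (flags, tag, value) tuples for the domain."""
    issue_values = [v for (_flags, tag, v) in caa_records if tag == "issue"]
    if not issue_values:
        # No 'issue' records at all: any CA may issue.
        return True
    # A bare ';' value means "no CA may issue for this domain".
    return any(v.split(";")[0].strip() == ca_identifier for v in issue_values)

records = [(0, "issue", "letsencrypt.org"), (0, "issue", "digicert.com")]
print(caa_permits(records, "letsencrypt.org"))  # True
print(caa_permits(records, "globalsign.com"))   # False
```

Note that this check runs at issuance time inside the CA, which is exactly why a MITM between the CA's resolver and the authoritative DNS server can defeat it without DNSSEC.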


I once did sorta this for a long period, as a daily-driver exercise: un-trusted all the CAs in the browser bundle except a few that I recognized as very popular.

Then, on occasion of a site with untrusted CA, figured out which one it was, and then decided whether I wanted to trust it. It was a headache, but it happened only very infrequently.

This exercise was inspired by my unease with the current CA scheme, and seeing some of the ones that were trusted by default.


I did that for a while too, but on my phone. It was a big hassle as apps and services do not handle this gracefully at all. Sometimes you get a failure for something to connect, sometimes it just stops working silently. Rarely I got a clear message about the untrusted cert.


Be great if you could open your certificate store, look through the list of roots, and see how often you have used each one, and a list of sites they have approved.

Then you could say "hmm, these I've never used -- set it to manual approval for each site they try to authenticate"

Or "For this one, that's great, I'll use it to authenticate .ae domains, but nothing more, or I'll allow .bankofchina.com, but nothing more"

Would only apply for 0.001% of browser users though, and they are the ones who are more likely to push for a better solution which will apply to the rest of the world, so I can see why they don't want that.


Certificate Transparency is even better, it allows you to see all certificates these CAs issued. This also means the site operator can monitor for certificates issued for their site and notice if a CA they aren't using issues one.

Unfortunately, Firefox doesn't seem to enforce CT https://developer.mozilla.org/en-US/docs/Web/Security/Certif...
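As a sketch of what such monitoring looks like, here is a hedged Python example of the filtering logic a site operator might run over CT entries. The entry format mimics crt.sh's JSON output, and the expected-issuer names are placeholders, not a recommendation:

```python
# Hypothetical CT-monitoring filter: flag certificates for our domain
# that were issued by a CA we don't actually use.

EXPECTED_ISSUERS = {"Let's Encrypt", "DigiCert Inc"}  # placeholder allowlist

def unexpected_certs(entries, domain):
    """entries: list of dicts with 'name_value' and 'issuer_name' keys."""
    return [
        e for e in entries
        if domain in e["name_value"]
        and not any(ca in e["issuer_name"] for ca in EXPECTED_ISSUERS)
    ]

log = [
    {"name_value": "example.com", "issuer_name": "C=US, O=Let's Encrypt, CN=R3"},
    {"name_value": "example.com", "issuer_name": "C=TR, O=Some Rogue CA"},
]
print(unexpected_certs(log, "example.com"))  # flags only the rogue entry
```

In practice you would feed this from a CT log monitor or crt.sh's JSON endpoint on a schedule, and alert rather than print.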


> Certificate Transparency is even better, it allows you to see all certificates these CAs issued.

What OP suggested would potentially have prevented rogue behavior from a hacked CA from causing any damage. With Certificate Transparency you would just have a nice log about it afterwards.

> This also means the site operator can monitor for certificates issued for their site and notice if a CA they aren't using issues one.

All they could do is turn off the site and report the incident. It would still take hours before that certificate was revoked and that information propagated to browsers. I'm also fairly confident 99.5% of website operators do not watch any cert monitors.


> With Certificate Transparency you would just have a nice log about it afterwards.

Fear of consequences is a powerful thing. A rogue cert issuance basically kills the CA.


Yeah, but in some cases it's worth it.


Why is that? The link in the section explaining it isn't very helpful.


There are laws in Turkey that were passed during the 2016 state of emergency and later became permanent. If the government demands anything from a Turkish company and the demand is not complied with quickly, the government temporarily takes control of the company (replacing the boss, changing banking passwords) in order to force compliance. This process does not involve a judicial authority, only an administrative one. It wouldn't matter if it involved judicial authorities, because the justice system is the worst kind of joke.

I know this because they took control of our company in 2016. The reason given in the decision: "the inspector found no evidence of tax evasion, which is suspicious for a Turkish company, therefore we take control of the company." (not joking)


I wish the TLS Name Constraints extension were widely supported. Then browser vendors could just say that until that law gets repealed, they won't accept root CAs from any Turkish entities without a Name Constraints extension limiting them to only sign within the .tr TLD.


X.509 Name Constraints are widely supported in browsers at this point - ref. https://bettertls.com - at least for DNS SANs and for the common cases.
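For illustration, here is a rough Python sketch of the dNSName matching that such a constraint implies (heavily simplified from RFC 5280: no excluded subtrees, IP addresses, or email names, and constraint syntax is treated loosely):

```python
# Simplified sketch of checking a certificate's DNS SAN against a
# permitted-subtrees name constraint such as ".tr".

def permitted(san, permitted_suffixes):
    san = san.lower().rstrip(".")
    for suffix in permitted_suffixes:
        s = suffix.lower().lstrip(".")
        # A constraint matches the name itself and any subdomain of it.
        if san == s or san.endswith("." + s):
            return True
    return False

print(permitted("www.example.com.tr", [".tr"]))  # True
print(permitted("www.google.com", [".tr"]))      # False
```

A root constrained this way could still mis-issue within .tr, but a leaked or coerced key would be useless against google.com, twitter.com, etc.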


This is why certificate transparency is so important. They can sign fraudulent certificates and MitM websites for a short while, but the CA will probably be permanently blacklisted if any browser in the wild encounters these certificates.

Turkey can eliminate their trust based companies one by one if they deem it necessary, but for a government seemingly trying to focus on export the distrust would probably hurt more than it would yield. See DigiNotar and WoSign/StartCom.


It is very easy for the Turkish government to issue fake certificates via a Turkish company. They did it with Turktrust once (a CA certificate was issued to EGM, and EGM issued a cert for *.google.com; EGM is the Turkish Police), and they can do it again.


Any government that can seize the domain can issue a fake cert for that domain, so no matter what is put in place, the Turkish government could always issue a fake cert for .tr - or any other domain owned by a Turkish company.

The *.google.com stuff is the more dangerous, but that can be detected pretty quickly if widely deployed - the intelligent way would be to only do so in very targeted situations and very, very rarely.

(Google added certificate pinning and other things to try to protect against this in the future)


No government can do it as easily as the Turkish government. In many countries there are laws and mechanisms to ensure they are followed - and if not, there are punishments. As of 2022, Turkey does not have functioning laws (they exist only on paper and no one cares). If Turkey does this, there won't be any punishment for harming the CA company, and any journalist reporting the incident will be thrown in jail, if not killed, for exposing Turkish Intelligence secrets.

The probabilities talk: a 0.00001% chance of this happening in Europe (which would end with punishment for the liable parties) vs. a >50% chance of this happening in Turkey (with punishment of the journalists who expose it, etc.).


If a country is corrupt from top to bottom then it doesn't really matter what the laws are.

But in the US the same thing can happen completely legally, via a National Security Letter, with no real oversight or appeal. And much of Europe is starting to follow the same path.


Sure, Turkey is way more likely to do it than other countries, but it is done in various places and various ways - the US even has a default page for "this domain has been seized" and they've been known to run "illegal" domains for quite a while collecting data.


Which is not the same thing as issuing rogue certificates to MITM you, especially for political purposes. For example, the Twitter account of the winner of a local talent TV show (Atalay Demirci) was hacked and his messages published online, and just because of his ordinary messages with a former Turkish deputy (Hakan Sukur), who is now in exile in the U.S., he was jailed for political reasons.

So a rogue certificate for *.twitter.com can really ruin an ordinary person's life in Turkey. We are talking about a human life here.


One of the best ways to get issues like this in front of people is through Mozilla's dev-security-policy list (which was originally the mozilla.dev.security.policy Usenet group). People from other browser manufacturers monitor that list.

The list archives are at https://groups.google.com/a/mozilla.org/g/dev-security-polic... this issue is covered at https://groups.google.com/a/mozilla.org/g/dev-security-polic...


Oh, this reminds me of the TurkTrust scandal[0], when they granted a local govt. corporation the ability to generate fake certificates and were caught when it generated fake Google certs. Considering the identity of the mayor, I'm still not convinced that no people were harmed.

[0] https://nakedsecurity.sophos.com/2013/01/08/the-turktrust-ss...


I wish I could set my browser into a mode where CAs are an opt-in.

I enable this setting and at first I can't go to Google unless I enable Google's CA. Then I go to Amazon, get the warning, see exactly what I would be enabling, and can use Google (which I've already trusted) to search and decide whether to trust it. So I enable DigiCert Global CA in order to use Amazon, and ask myself why the hell I need to trust that third party.

And so on. This way I would never have Hong Kong Post Office's CA enabled in my browser.


You can't do this directly in your browser, but you can do it with your system certificate store. Lots of people disable these small, low-volume CAs.

(I wonder if there's a tool out there that does so automatically? Something like Filippo's mkcert but for automatically pruning the CA store down to just the ~dozen major operators?)
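Short of a dedicated tool, here is a hedged sketch of the idea using Python's stdlib: build a TLS client context that loads only an explicit allowlist of root bundles instead of the full system store. The file names below are hypothetical; you would point them at PEM roots you exported yourself:

```python
import ssl

# Hypothetical allowlist of root CA bundle files (PEM format).
ALLOWED_ROOTS = ["isrg-root-x1.pem", "digicert-global-root-ca.pem"]

def restricted_context(root_files):
    # PROTOCOL_TLS_CLIENT enables CERT_REQUIRED and hostname checking
    # by default, but starts with an empty trust store.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    for path in root_files:
        ctx.load_verify_locations(cafile=path)
    return ctx

# Usage sketch: any site not chaining to an allowed root fails the handshake.
# import urllib.request
# urllib.request.urlopen("https://example.com",
#                        context=restricted_context(ALLOWED_ROOTS))
```

This only covers your own tooling, of course; browsers and apps each keep their own stores, which is exactly the hassle the sibling comments describe.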


Exactly this. CA curation. These sort of trade-offs are probably beyond a single entity to assess (arguments about fundamental PKI limitations aside), but I'm certain if there was an API, even just some simplified file format, "the web" could come up with a way to crowd-source better intel on what CAs should and shouldn't be in the list, all the way down to explicitly using only ones you want. A bit like bittorrent blocklists, ye olde spam list, etc.


The CA system and WebPKI is more or less fundamentally flawed.

- Too few CAs and you lose out on competition.

- Too many CAs and the probability of vulnerabilities approaches 100%.


"Competition" is not a desirable property in a cryptosystem. The fact that it's a property of WebPKI at all is a historical quirk of CA organizational practices and the (entirely artificial) market for certificates.

In other words: we should be seriously cutting down the number of trusted CAs.


The problem is that CAs aren't really part of the cryptosystem. They hold the roots of trust but charge for their services and are for-profit. Without any competition we would have a terrible monopoly.

I agree that there are too many CAs. But the ideal number is also significantly more than 1.

...well unless you include zero. If we can replace this with something better that would be much more desirable.


How so? The cryptosystem in question is WebPKI. CAs form the roots of trust in WebPKI, as you mentioned.

> They hold the roots of trust but charge for their services and are for-profit. Without any competition we would have a terrible monopoly.

This hasn't been true for years! Let's Encrypt provides free certificates to anyone with a domain. They issue millions of certs a day and secure a massive chunk of the web[1].

I think everybody is in broad agreement that the CA system is brittle, vulnerable to mismanagement, bad actors, etc. But "something better" is very hard to identify and implement. Churchill's saying about the "worst system except for all the others"[2] applies equally well here, IMO.

[1]: https://www.linuxfoundation.org/resources/case-studies/lets-...

[2]: https://blogs.fcdo.gov.uk/petermillett/2014/03/05/the-worst-...


> This hasn't been true for years! Let's Encrypt provides free certificates to anyone with a domain.

Because they are the competition.


The point being that the monopoly's back has been broken. We're no longer in a situation where there needs to be >100 CAs for market competition purposes; there's now a free, universally available, public service.


Is LE really universally available? I'm not aware of them making any public commitment to serve unpopular websites (thinking of e.g. KiwiFarms), and they used to check an opaque Google blacklist before issuing certificates.


They list no stipulations about prohibited certificate uses[1]. Their restrictions on domain uses seem to be mostly tied to legal requirements (not issuing for sanctioned countries, for example). It's all also pretty transparent, from what I can tell[2][3]. Certainly more so than a normal CA.

[1]: https://letsencrypt.org/documents/isrg-cp-v3.3/#1.4.2-prohib...

[2]: https://letsdebug.net/

[3]: https://github.com/letsdebug/letsdebug


This is the correct take. There's a bunch of policies and practices but essentially dozens of shady providers have complete ability to impersonate anyone on the web, and the behavior of those policies and practices is set by a might-makes-right of the most popular browser vendors.

The entire system is equal parts insecure and absolutely impractical, such that 80% of organizations have had an outage caused by certificate issues, yet certificates aren't meaningfully providing security.


FWIW, a Tughra is the great seal of the Ottoman emperors. Suleiman the Magnificent's is well-known:

https://www.dailyartmagazine.com/tughra-of-sultan-suleiman-t...


Wow. That's absolutely beautiful. Thanks for sharing that.


Disappointed that a daily art magazine doesn't allow zooming in on photos from mobile!


Aren't CAs supposed to go through vetting and audits before getting trusted in operating systems and browsers? We even went through a SOC audit just for our own internal CA!


While adding a new CA to the trusted set requires some kind of rigor, there's a quite large selection of CAs that were just "always there".

Since many browser vendors are reluctant to remove CAs (because it breaks lots of websites for users), they don't have too much leverage enforcing security practices.


Here's the audit: https://bugzilla.mozilla.org/show_bug.cgi?id=1628720#c14

The requirements they are supposed to check against: https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-... - section 5 addresses security, referencing "The CA/Browser Forum’s Network and Certificate System Security Requirements" - https://cabforum.org/network-security-requirements/, which at least in the version 1.3 that I saw required pentests and security scans.


I'd imagine CA/B forum adopting new rules to avoid situations like this. The very least just a requirement to publish security contact information.


And who is auditing them ? CIA ? Goldman Sachs ? KGB ? EFF ?


> In many regards, certificate authorities are audited comprehensively against industry-specific audit standards. Certificate authorities also routinely get hacked. Despite this, not a single certificate authority runs a bug bounty program, and of the major CAs, only GlobalSign and Let’s Encrypt even offer a security.txt to help disclose issues. Only an annual penetration test is generally required of CAs.

These feel like the wrong metrics: the attackers who compromise CAs don't generally overlap in skillsets with people who engage in bug bounty programs, and (AFAIK) `security.txt` has had no significant adoption in the broader community.


Seems like you are saying that bug bounty researchers focus more on application security issues and not other types of security, which is certainly true, but [some % of] breaches occur via appsec issues and they’re what bit e-Tugra here.


Even rogue states have their own CA's and we are trusting them. Thát is a real security concern.


[Citations needed]

Last time I checked we don't have any CA's from North Korea or Afghanistan...

Happy to be proven wrong.


Last time it was KZ's MITM cert that was rejected: https://blog.mozilla.org/security/2019/08/21/protecting-our-....

Although you can argue that US' FPKI is "rogue" for the purposes of WebPKI (there's no control, no transparency, no audit etc.) and cross signature over FPKI Bridge was one of the reasons contributing to Symantec forcible retirement.


Those aren't the only two "rogue states".


At least certificate transparency means it’s possible to check if malicious certificates were issued.


I think that interpreting the results can be confusing. Companies can certainly check their own domains regularly, but as an ordinary web user, it's difficult to determine from CT logs whether or not example.com that you are about to visit has a fraudulently issued cert or not. Maybe they happened to legitimately switch providers that day. You weren't in the meeting, you don't know.

(I know that CAA records exist in DNS, which are a good signal. But there isn't DNS change transparency; every DNS reply can be unique, so someone can easily serve a faulty CAA record to an evil/hacked CA, and you wouldn't notice. DNSSEC exists to solve this particular problem, but is its own new and exciting set of problems that I have yet to fully understand.)


This is a huge security issue according to Turkish law (called KVKK). They should report this security breach here https://www.kvkk.gov.tr/veri-ihlali-bildirimi/ but I haven't seen anything reported from them.


> there were many log lines referencing [EJBCA]

Is this link supposed to go somewhere specific, or just to the EJBCA homepage? It currently just points back to https://ian.sh/etugra


I’m still surprised that in the year 2022 people still pay enough money for certs that CAs manage to exist providing SSL certs. EV and OV is effectively dead (and was never a good idea anyways), so who exactly is paying for their certs?


Lots and lots and lots of systems are not set up to handle Let's Encrypt auto-renewing certs, and so people keep plugging away with the ones they've always been using.

Dropping them down to one year duration may help push people towards LE and friends.


How much web does removal of these CAs actually impact? I’m an English speaking westerner* and most of the certs I see signed are either letsencrypt or digicert.

* Relevance: I assume other languages / countries use more localised certs.


You can pull a full list from crt.sh: https://crt.sh/?caid=163319, then for each child CA search for % (there are possibly also other roots, rinse and repeat).


Any chance you’d know if full signing logs are available for download anywhere?


Are there any CAs that are wholly owned and based in the EU, and who do strictly offline signing of certificates?


> wholly owned and based in the EU

I believe anf.es is

> strictly offline signing of certificates

CSRs, signed certs, certificate transparency log entries, revocations etc. have to get between the signing device and the public somehow. You can create a gap, but then you have to bridge it somehow for this data.

Do you want an organization that prints out and types in every request going across that gap? You're probably not going to find that.


Is there any of this that could not be done on a daily basis?

Requests go on a memory card at the end of the day, onto the signing device which signs them for a year, back on the card and onto the network-connected device from where they are mailed back to the customer.

What functions of a CA have to be run more frequently than this?


CA Baseline Requirements say certificate revocation must happen within 24h, so a daily process may miss the deadline; a manual process run on demand or every 12 hours would comply.

I suspect it's more about competitiveness though. Manual processes are expensive and slow, which likely will push customers to choose other CAs. I don't think there is a market for manually operated certificates. I believe this for two reasons:

1. Because any CA can issue a certificate, you are exposed to the risk of some other CA getting hacked regardless of how secure the CA you choose is. (Although CT and CAA may mitigate this).

2. Air-gapping is neither necessary nor sufficient to be secure. Not sufficient because even an air-gapped system needs controls against insider threats, tampering of data before it gets transferred across the air-gap, attacks that breach the air-gap [1] etc. Not necessary because well-run systems can be reasonably secure even without an air-gap.

[1] https://en.wikipedia.org/wiki/Air-gap_malware


What is the threat model that results in this being your requirement?


There is no threat model, just curiosity, but obviously if the signing device is online then its signing key could potentially be retrieved by an attacker on the Internet.


The CA/B baseline requirements include storage of the private key on a FIPS 140 Level 3 cryptographic device (i.e. an HSM) so there's a certain minimum degree of assurance there.


> I ended up taking a deep look into e-Tugra, a Turkey-based certificate authority trusted by Apple, Google, Mozilla, and other clients

He should look at American CAs. Trusting CAs based on pedigree is ... funny.


aaand episode #352381 of "woah, all these people who I thought were sophisticated are actually just a bunch of lazy/ignorant sysadmins". Wouldn't want one obscure CA you never heard of to get hacked - that would compromise every other website (or maybe not, with this X.509 name constraints gimmick people are talking about in this thread).


Why do we need these obscure CAs?



