Hacker News

The flipside of the same technical points is https://sslmate.com/blog/post/entrust_distrust_more_disrupti... where some non-browser clients don't handle this, or worse, handle it incorrectly.


Right; it's imperfect, as everything is. But of course, it's also a huge bit of leverage for the root programs (more accurately, a loss of leverage for CAs) in killing misbehaving CAs; those programs can't be blackmailed with huge numbers of angry users anymore, only a much smaller subset of users. Seems like a good thing, right?


Only insofar as you trust the root programs to use their leverage responsibly, both today and in the medium-to-long-term future.


The operators of the root programs control the browsers (and, in some cases, the operating systems!), so this doesn't make much sense.


Any leverage against the root programs in the form of angry users is also leverage against the browser devs, is it not? Either way you look at it, the root programs/browser devs receive less pushback and gain more autonomy.

My biggest concern is long-term censorship risk. I can imagine the list of trusted CAs getting whittled down over the next several decades (since it's a privilege, not a right!), until it's small enough for all of them to collude or be pressured into blocking some particular person or organization.


Your concern is that there will be... too few CAs?


Yes. It would add another potential point of failure to the process of publishing content on the Web, if such a scenario came to pass. (Of course, the best-case scenario is that we retain a healthy number of CAs who can act independently, but also maintain compliance indefinitely.)


I don't know what to say; I think that's the first time I've heard that concern on HN. Thanks!


To add to this, the EU's 2021 eIDAS (the one with the mandatory trust list) was a response to a similar lack of availability. Contrary to what most HNers instinctively thought, it wasn't about interception: the EC was annoyed that none of the root programs are based in the EU, so 100% of trust decisions are made on the other side of the Big Water. The EC felt a need to do something about it; given that TLS certificates are needed for modern business, healthcare, finance etc., it saw this as an economic sovereignty issue.

My point is, a lack of options, aka availability, is (or may be perceived as) dangerous on multiple layers of the WebPKI.


No, eIDAS 2.0 was an attempt to address the fact that the EU is not one market in ecommerce, because EU citizens don't like making cross-border orders. The approach to solving this was to attach identity information to sites, ala EV certificates. The idea for this model came from the trust model for digital document signatures in PDFs.

There are already plenty of CAs across the pond.


That's an orthogonal problem. eIDAS had to solve many problems to create a full solution. You're right that we have many TSPs (aka CAs), and NABs too. The EU has experience running a continent-wide PKI for e-signatures that are accepted in other countries. But there were no WebPKI root programs, which were essentially unaccountable to the EU, yet a critical link in the chain of the end-to-end solution. There was no guarantee that browser vendors wouldn't establish capricious requirements for joining root programs (i.e. ones that would be incompatible with EU law and would exclude European TSPs). Therefore the initial draft stated that browsers need to import the EU trust list wholesale and are prohibited from adding their own requirements.

(That of course had to be amended, because some of those additional requirements were actually good ideas, like CT; there should be room for legitimate innovation like shorter certs; and it's also OK for browsers to exercise sufficient oversight of TSPs breaking the rules, like the ongoing delrev problem.)


Serious question: if the EU wants a root program they control, shouldn't step one be building a browser that anybody wants to use?


1) From a eurocrat pov, why build a browser when you can regulate the existing ones instead? The EU's core competence is regulating, not building, and they know it.

2) You don't actually need to build a browser to achieve this goal, you just need a root program, and a viable (to some extent) substitute already exists; cf. all the "Qualified" stuff in EU lingo. So again, why do the work and risk spectacular failure if you don't need to?

3) Building an alternative browser for EU commerce that you'd have to use for the single market, but which likely wouldn't work for webpages from other countries, would be a bad user experience. I know what I'm saying; I use Qubes and I've got different VMs with separate browser instances for banking etc. I'm pretty sure most people wouldn't like a similar setup even with a working clipboard.

There are things you can't achieve by regulation, e.g. Galileo, the GPS replacement, which you can't regulate into existence. Or national clouds: GDPR, DSA, et al. won't magically spawn a fully loaded colo. Those surely need to be built, but another Chromium derivative would serve no purpose.


I feel like if you can make an Airbus, you can make a browser and a search engine.


If you're talking about technical capability, yeah, no contest here.

But if the EC can legislate e-signatures into existence, then it follows that they can also legislate browsers into accepting Q certs, can they not?

Mind you, they did exactly that with document signing. They made a piece of paper say three things: 1) e-signatures made by private keys matching Qualified™ Certificates are legally equivalent to written signatures, 2) all authorities are required to accept e-signatures, 3) here's how to get qualified certificates.

Upon reading this enchanted scroll, 3) magically spawned and took on a life of its own. ID cards issued here to every citizen are smartcards preloaded with private keys, for which you can download an X.509 cert good for everyday use. The hard part was 2), because we needed to equip and retrain every single civil servant, a good number of them older people not happy to change the way they work. But it happened.

So if the hard part is building and the easy part is regulating, and they have prior art already exercised, then why bother competing with Google, on a loss leader, with taxpayer funds? And with a non-technical, regulatory feature, which would most likely cause technical aspects like performance and plugin availability to be neglected.


If there were just one CA then there would be no CABforum and users would have no leverage. This is the situation in DNSSEC. I don't think it's that bad, as one can always run one's own root (.) and use QNAME minimization, but still, com. and such TLDs would be very powerful intermediate CAs themselves. And yet I still like DNSSEC/DANE, as you know, except maybe I'm liking the DANE+WebPKI combo more. And I don't fear "too few CAs" either, because the way I figure it, if the TLAs compromise one CA, they can and will compromise all CAs.


Well, I will give you this: this is a novel take. The WebPKI and DANE, because, heck, it's all compromised anyways.

Personally: I'm for anything that takes leverage away from the CAs.


Well, it's u/LegionMammal978's novel take, I just riffed on it.

> Personally: I'm for anything that takes leverage away from the CAs.

You can automate trusted third parties all you want, but in the end you'll have trusted third parties one way or another (trust meshes still have third parties), and there. will. be. humans. involved.


Yep. Too many CAs is a failure mode of the CA system, and too few CAs is also a failure mode of the CA system.

In fact, if just Letsencrypt turned bad for some reason, it's already enough to break the CA system, whether browsers remove it or not.


/cough/ VeriSign /cough/


> Right; it's imperfect, as everything is.

Is this tautology helpful? For sure it's commonly used, but I honestly have a hard time seeing what information it conveys in cases like this.


Non-browser clients shouldn't be expected to crib browser trust decisions. Also, the (presumably?) default behavior for a non-browser client that consumes a browser root store but is unaware of the constraint is to not enforce it. So they would effectively continue to trust the CA until it is fully removed, which is probably the correct decision anyway.


To me that's an odd position to take, ultimately if the user is using Mozilla's root CA list then they're trusting Mozilla to determine which certs should be valid. If non-browser programs using the list are trusting certs that Mozilla says shouldn't be trusted then that's not a good result.

Now of course the issue is that the information can't be encoded into the bundle, but I'm saying that's a bug and not a feature.


Mozilla’s list is built to reflect the needs of Firefox users, which are not the same as the needs of most non-browser programs. The availability/compatibility vs security tradeoff is not the same.


> the information can't be encoded into the bundle

Can it not? It seems like this SCTNotAfter constraint is effectively an API change of the root CA list that downstream users have to in some way incorporate if they want their behavior to remain consistent with upstream browsers.

That doesn't necessarily mean full CT support – they might just as well choose to completely distrust anything tagged SCTNotAfter, or to ignore it.

That said, it might be better to intentionally break backwards compatibility as a forcing function to force downstream clients to make that decision intentionally, as failing open doesn't seem safe here. But I'm not sure if the Mozilla root program list ever intended to be consumed by non-browser clients in the first place.
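To make that decision concrete, here is a minimal Python sketch of the three options a downstream consumer has for SCTNotAfter-tagged roots; the dict schema and field names are hypothetical, not Mozilla's actual certdata format:

```python
from datetime import datetime, timezone

# Hypothetical root-store entries; schema and names are illustrative only.
ROOTS = [
    {"name": "Tagged CA", "sct_not_after": datetime(2024, 11, 1, tzinfo=timezone.utc)},
    {"name": "Untagged CA", "sct_not_after": None},
]

def filter_roots(roots, policy):
    """Apply one downstream policy to SCTNotAfter-tagged roots.

    'distrust' - drop tagged roots entirely (the "more disruptive" option)
    'ignore'   - keep tagged roots but strip the constraint (fails open)
    'enforce'  - keep the tag for a CT-aware verifier to check SCTs against
    """
    out = []
    for root in roots:
        if root["sct_not_after"] is None or policy == "enforce":
            out.append(root)
        elif policy == "ignore":
            out.append({**root, "sct_not_after": None})
        # policy == "distrust": tagged roots are skipped entirely
    return out

print([r["name"] for r in filter_roots(ROOTS, "distrust")])  # ['Untagged CA']
```

Only the "enforce" path keeps behavior consistent with the browsers; the other two diverge, each in a different direction.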


> That doesn't necessarily mean full CT support – they might just as well choose to completely distrust anything tagged SCTNotAfter, or to ignore it.

That's what the blog post I linked in the top comment suggests is the "more disruptive than intended" approach. I don't think it's a good idea. Removing the root at `SCTNotAfter + max cert lifetime` is the appropriate thing.

There's an extra issue of not-often-updated systems too, since now you need to coordinate a system update at the right moment to remove the root.
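As a sketch of that schedule, assuming the current 398-day CA/Browser Forum maximum TLS certificate lifetime (the cutoff date below is illustrative, and the ceiling may change):

```python
from datetime import datetime, timedelta, timezone

# Keep the constrained root until no certificate logged before the cutoff
# can still be within its validity period. 398 days is the current maximum
# TLS certificate lifetime; adjust if that ceiling changes.
MAX_CERT_LIFETIME = timedelta(days=398)

def safe_removal_date(sct_not_after: datetime) -> datetime:
    return sct_not_after + MAX_CERT_LIFETIME

cutoff = datetime(2024, 11, 12, tzinfo=timezone.utc)  # illustrative SCTNotAfter date
print(safe_removal_date(cutoff).date())  # 2025-12-15
```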


> Removing the root at `SCTNotAfter + max cert lifetime` is the appropriate thing.

Note that Mozilla supports not SCTNotAfter but DistrustAfter, which relies on the certificate's Not Before date. Since this provides no defense against backdating, it would presumably not be used with a seriously dangerous CA (e.g. DigiNotar). This makes it easy to justify removing roots at `DistrustAfter + max cert lifetime`.

On the other hand, SCTNotAfter provides meaningful security against a dangerous CA. If Mozilla begins using SCTNotAfter, I think non-browser consumers of the Mozilla root store will need to evaluate what to do with SCTNotAfter-tagged roots on a case-by-case basis.
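A rough Python sketch of the difference between the two constraints (the cutoff date is illustrative, and real CT policies involve a more elaborate SCT quorum than "any one SCT"):

```python
from datetime import datetime, timezone

CUTOFF = datetime(2024, 11, 1, tzinfo=timezone.utc)  # illustrative distrust date

def passes_distrust_after(not_before: datetime) -> bool:
    # DistrustAfter keys off the certificate's own Not Before field,
    # which a malicious CA could simply backdate past the cutoff.
    return not_before <= CUTOFF

def passes_sct_not_after(sct_timestamps: list[datetime]) -> bool:
    # SCTNotAfter keys off timestamps embedded by independent CT logs,
    # which the CA cannot forge by backdating its certificates.
    return any(ts <= CUTOFF for ts in sct_timestamps)
```

That asymmetry is why the former is only justifiable for CAs that aren't actively malicious, while the latter holds up even against one that is.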


> But I'm not sure if the Mozilla root program list ever intended to be consumed by non-browser clients in the first place.

Yet that is the thing that goes around under the name "ca-certificates" and practically all non-browser TLS on Linux everywhere is rooted in it! Regardless of what the intent was, that is the role of the Mozilla CA bundle now.


Hmmmm, speaking of distrust and Mozilla...

I wonder how much I should be concerned about Mozilla's trust store's trustworthiness, given their data grab with Firefox? I've switched to LibreWolf over that (more in protest than thinking I'm personally being targeted). But I'm pretty sure LibreWolf will still be using the Mozilla trust store?

I haven't thought through enough to understand the implications of the moneygrubbing AI grifters in senior management positions at Mozilla being in charge of my TLS trust store, but I'm not filled with joy at the idea.


What actual risk are you worried about here? Mozilla changed their data policy, therefore the root store might do what...?


WebPKI is a maze of fiefdoms controlled by a small group of power tripping little Napoleons.

Certificate trust really should be centralized at the OS level (like it used to be) and not every browser having its own, incompatible trusted roots. It's arrogance at its worst and it helps nobody.


> (like it used to be)

When are you imagining this "used to be" true? This technology was invented about thirty years ago, by Netscape, which no longer exists but in effect continues as Mozilla. They don't write an operating system (then or now) so it's hard to see how this is "centralized at the OS level".


It was true for at least Chrome until around 2020: Chrome used to not ship with any trusted CA list and default to the OS for that.

Firefox has their own trusted list, but still supports administrator-installed OS CA certificates by default, as far as I know (but importantly not OS-provided ones).


Ah, so you're referring to Chrome, which shipped Chrome 1.0 at the tail end of 2008, and was in effect exercising normal root programme behaviour by the time I was on the scene in 2015 (but presumably also before).

Here's an example of them doing just that:

https://security.googleblog.com/2017/09/chromes-plan-to-dist...

I think this is just a lack of perspective on your part.


So your presumptions (no, Chrome did not have its own root program in 2015 or before; they announced [1] it in 2020, as I wrote, and blocking individual CAs on a case-by-case basis does not quite make a full root program) and your unwillingness to spend one minute to fact-check them are a lack of perspective on my side?

My point was that Chrome (arguably not an insignificant browser) did use the OS’s root CA lists for 12 years (or 7, if you really want to count individual CA bans as a program; arguably both not an insignificant time span).

[1] https://groups.google.com/g/mozilla.dev.security.policy/c/3Q...


I think this is, at best, a lack of understanding of what's really going on, and at worst just obstinacy.

As Ryan explains this was basically the same thing with new paint on it, it reminds me of the UK's "Supreme Court". Let me digress briefly to explain:

On paper historically the UK didn't have an independent final court of appeal, significant appeals of the law would end up at the Lords of Appeal in Ordinary ("Law Lords" for short) who were actual Lords, in principle unelected legislators from the UK's upper chamber. Hundreds of years ago they really were just lords, not really judges at all. The US in contrast notionally has an independent final court of appeal, completely independent from the rest of the US government, much better. Except in reality the Law Lords were completely independent, chosen among the country's judges by independent hiring process while the US Supreme Court are just appointed hacks for the President's party, not especially competent or effective jurists but loyal to party values.

So the fresh coat of paint gave the UK a "Supreme Court" in 2009 by designating a building and sending exactly the same factually independent jurists to go work in that building with their independent final court of appeal and stop requiring them to notionally be Lords (although in practice they are all still granted the title "Lord") who met in some committee room in the Palace of Westminster. The thin appearance changed to match the ideals the Americans never met, the reality was exactly the same as before.

And that's what Ryan is writing about in that post. In theory there was now a Chrome root programme; in practice there already was. In principle you now need a sign-off from Ryan's team; in practice you already did. In theory you're now discussing inclusion on m.d.s.policy because you want Chrome trust programme approval; in practice that's already a big part of why you're there.

When Chrome 1.0 shipped it was important to deliver compatibility to gain market share. Soon it didn't matter, they could and did choose to depart from that compatibility to distinguish Chrome as better than the alternatives.

7 years for one browser compared to 30 years for all browsers does I'm afraid seem a long way from "like it used to be" and much more "how I wrongly thought it should work".


Why should it be centralized at the os level?

Https certificate trust is basically the last thing I think about when I choose an os. (And for certain OSes I use, I actively don't trust its authors/owners)


It is genuinely weird to think Microsoft should get a veto over your browser if that browser stops trusting a CA, right?


The only thing that is genuinely weird is having four different certificate stores on a system, each with different trusted roots, because the cabals of man-children that control the WebPKI can't set aside their petty disagreements and reach consensus on anything.

Which makes sense, because that would require them all to relinquish some power to their little corner of the Internet, which they are all unwilling to do.

This fuckery started with Google, dissatisfied with not having total control over the entire Internet, deciding they're going to rewrite the book for certificate trust in Chrome only (turns out after having captured the majority browser market share and having a de-facto monopoly, you can do whatever you want).

I don't blame Mozilla for having their own roots, because that is probably just incompetence on their part. It's more likely they traded figuring out interfacing with OS crypto APIs for upkeep of 30-year-old Netscape cruft. Anyone who has had to maintain large-scale deployments of Firefox understands this lament and knows what a pain in the ass it is.


that’s not what he meant, and you know it. he means use the OS store (the one the user has control over), instead of having each app do its own thing (where the user may or may not have control, and even if he does have it, now has to tweak settings in a dozen places instead of one). they try to pull the same mess with DNS (i.e. Mozilla’s DoH implementation)


I don't understand, because the user has control over the browser store too.

(As an erstwhile pentester, btw, fuck the OS certificate store; makes testing sites a colossal pain).


> I don't understand, because the user has control over the browser store too.

i already mentioned that ("may or may not"). former or latter, per-app CA management is an abomination from security and administrative perspectives. from the security perspective, abandonware (i.e. months old software at the rate things change in this business) will become effectively "bricked" by out-of-date CAs and out-of-date revocation lists, forcing the users to either migrate (more $$$), roll with broken TLS, or even bypass it entirely (more likely); from the administrative perspective, IT admins and devops guys will have to wrangle each application individually. it raises the hurdle from "keep your OS up-to-date" to "keep all of your applications up-to-date".

> As an erstwhile pentester

exactly. you're trying to get in. per-app config makes your life easier. as an erstwhile server-herder, i prefer the os store, which makes it easier for me to ensure everything is up-to-date, manage which 3rd-party CAs i trust & which i don't, and cut 3rd-parties out-of-the-loop entirely for in-house-only applications (protected by my own CA).


It's baffling to me that anyone would expect browsers to make root store decisions optimized for server-herders. You're not their userbase!


neither are pentesters


Right, I don't think the pentester use case here is at all dispositive; in fact, it's approximately as meaningful as the server-herders.


> (As an erstwhile pentester, btw, fuck the OS certificate store; makes testing sites a colossal pain)

Can you please explain? I'm just curious, not arguing.


It's a good question! When you're testing websites, you've generally got a browser set up with a fake root cert so you can bypass TLS. In that situation, you want one of your browsers to have a different configuration than your daily driver.


Unfortunately, OS vendors like microsoft are quite incompetent at running root stores https://github.com/golang/go/issues/65085#issuecomment-25699...



