Hacker News | sleevi's comments

All the time. Many CA distrust events involved some degree of “amateurs” reporting issues. While I hesitate to call commenters like agwa an amateur, it certainly was not professionally sponsored work by root programs or CAs. This is a key thing that Certificate Transparency enables: amateurs, academics, and the public at large can report CA issues.

At the same time, it sounds like the issues you describe aren’t CA/issuance issues, but rather simple misconfigurations. Those aren’t incidents for the ecosystem, although they can definitely be disruptive to the site, but I also wouldn’t expect them to call trust or identity into disrepute. That’d be like arguing my driver’s license is invalid if I handed you my passport; giving you the wrong doc doesn’t invalidate the claims of either, it just doesn’t address your need.


It seems more ad hoc and bounty-driven, rather than systematic. Is that a fair perspective?


I wish there were bounties :-)

There is systematic checking - e.g. crt.sh continuously runs linters on certificates found in CT logs, I continuously monitor domains which are likely to be used in test certificates (e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=1496088), and it appears the Chrome root program has started doing some continuous compliance monitoring based on CT as well.

But there is certainly a lot of ad-hoc checking by community members and academics, which as Sleevi said is one of the great things that CT enables.


Thanks for highlighting that, and for the efforts to assemble this project. Honestly, before this post I hadn’t been aware that systematic auditing of the CT logs was being done.


Because it wasn’t actually a server misconfiguration, nor was it, as others have speculated, about Postel’s Law.

The way X.509 was designed - from the very first version - was around the notion that you have your set of CAs you trust, I have my set, and they’re different. Instead of using The Directory to resolve the path from your cert to someone I trust, PKIX (RFC 2459 et al.) defined AIA.

So the intent here was that there’s no “one right chain to rule them all”: there’s _your_ chain to your root, _my_ chain to my root, all for the same cert, using cross-certificates.

Browsers adopted X.509 before PKIX existed, and they assumed just enough of the model to get things to work. The standards were developed after, and the major vendors didn’t all update their code to match them. Microsoft, Sun, and many government-focused customers did (and used NIST’s PKITS tests to prove it); Netscape/later Mozilla and OpenSSL did not: they kept their existing “works for me” implementations.

https://medium.com/@sleevi_/path-building-vs-path-verifying-... discusses this a bit more. In modern times, the TLS RFCs better reflect that there’s no “one right chain to rule them all”. Even if you or I aren’t running our own roots that we use to cross-sign CAs we trust, we still have different browsers/trust stores taking different paths, and even in the same browser, different versions of the trust store necessitating different intermediates.
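The “no one right chain” point can be pictured as a graph search. A minimal sketch (all certificate names are made up for illustration; real path building per RFC 4158 also checks signatures, validity periods, and constraints, which are omitted here): a cross-signed intermediate appears with two different issuers, so two clients with different trust stores each find a different, equally valid chain to the same leaf.

```python
from collections import defaultdict

# Certificates modeled as (subject, issuer) pairs. A cross-sign means the
# same subject appears with more than one issuer. Names are hypothetical.
certs = [
    ("leaf.example",   "Intermediate A"),
    ("Intermediate A", "Root 1"),   # path used by older trust stores
    ("Intermediate A", "Root 2"),   # cross-sign chaining to a newer root
    ("Root 1",         "Root 1"),   # self-signed roots
    ("Root 2",         "Root 2"),
]

by_subject = defaultdict(list)
for subj, issuer in certs:
    by_subject[subj].append(issuer)

def build_paths(subject, trust_store, path=()):
    """Yield every chain from `subject` up to a root in `trust_store`."""
    path = path + (subject,)
    if subject in trust_store:
        yield path
        return
    for issuer in by_subject.get(subject, []):
        if issuer != subject:  # ignore self-signed loops
            yield from build_paths(issuer, trust_store, path)

# Two clients, two trust stores, two different "right" chains for one leaf:
print(list(build_paths("leaf.example", {"Root 1"})))
print(list(build_paths("leaf.example", {"Root 2"})))
```

A server that hardcodes the “Root 1” chain serves the wrong intermediates to the second client, which is exactly why path *building* (searching) beats path *verifying* (checking one fixed chain).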

TLS has no way of negotiating what the _client’s_ trust store is in a performant, privacy-preserving way. https://datatracker.ietf.org/doc/draft-kampanakis-tls-scas-l... or https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-t... are explorations of the problem space above, though: how to have the server understand what the client will trust, so it can send the right certificate (… and omit the chain)


TLS implementations for linux IMAP email back in the day would fail-over to unencrypted credentials if the TLS handshake was unsuccessful. Not sure if that was somebody's Postellian interpretation or if it was just the spec. We had to actually block the unencrypted ports in the firewall because there was no way to tell from the client side whether you had automatically been downgraded to in-the-clear or not.
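A modern client avoids that fail-open downgrade by treating a failed upgrade as fatal rather than continuing in the clear. A minimal sketch (hypothetical helper, not any specific historical client; assumes Python’s stdlib `ssl`, with real usage via `imaplib` shown in the trailing comment):

```python
import ssl

def upgrade_or_abort(start_tls):
    """Attempt the STARTTLS upgrade via the supplied callable; abort the
    session on failure instead of silently continuing in plaintext (the
    historical fail-open behavior described above)."""
    try:
        start_tls()
        return "encrypted"
    except (OSError, ssl.SSLError):
        # Fail closed: better no mail than credentials sent in the clear.
        raise ConnectionAbortedError("STARTTLS failed; refusing plaintext login")

# Real usage would pass a bound call on an established connection, e.g.:
#   conn = imaplib.IMAP4("mail.example.org")
#   upgrade_or_abort(lambda: conn.starttls(ssl.create_default_context()))
#   conn.login(user, password)   # only ever reached over TLS
```

Injecting the upgrade step as a callable keeps the fail-closed policy testable without a live server; blocking the plaintext ports at the firewall, as described above, enforces the same policy when you don’t control the clients.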


Because this is not the “start of drafting” but roughly “final text that is largely a rubber-stamp approval.” This is the output of having been through the trilogue process - where the Parliament would/has shut things down before - and not been shut down.

[1] https://en.m.wikipedia.org/wiki/Formal_trilogue_meeting


> and the recent legislation coming from there seems very inspired by the great firewall.

https://www.europarl.europa.eu/RegData/etudes/STUD/2020/6487...


The EU is currently proposing to mandate the inclusion of roots that have been government approved, and to limit browsers from removing/distrusting them without notice/approval.

https://www.eff.org/deeplinks/2022/12/eidas-20-sets-dangerou...


The proposed regulation requires that QWACs MUST be accepted and recognized as such, e.g. by using the European List of Trusted Lists as part of the root store.

That is, if a QWAC is issued by a CA that is not part of the browser root store, it must not be rejected (as any other untrusted certificate would be).


One element that results in less security is that certificates become more difficult to replace.

For example, QWACs cannot legally be automated (e.g. via ACME), because of certain restrictions requiring validation of the natural or legal person making the certificate request. This actually was an issue for one CA (Buypass) that tried to support ACME but ran afoul of the framework.

While QWACs were originally proposed as optional, regulation such as PSD2 attempts to make them mandatory for (financial services) servers to obtain. If one of those keys is compromised, then the server operator wishing to obtain a replacement certificate may have to wait weeks, or make an in-person visit to the CA (e.g. the post office).

A considerable number of compromised or misissued certificates have failed to be revoked on the industry-agreed timelines (24 hours or 5 days, depending), because of challenges CAs have faced because their customers haven’t (or legally can’t) automate replacement, and because the additional information in the certificate requires manual validation, despite having no technical impact on the TLS connection.


Not being able to automatically renew certificates seems like a rather minor point in the bigger picture.

I get that QWACs go against the trend of phasing out EV certs. But isn’t the real issue that the browsers don’t trust TSP audits carried out for EU member states?


It’s actually a huge issue - look at how eliminating a key difficulty in obtaining certificates massively increased HTTPS adoption (via Let’s Encrypt and others).

Similarly, automation affects how easy or hard it is to replace a CA, for example, if moving to distrust a CA. If you rely on QWAC attributes, you can only use QWAC CAs, and changing CAs becomes significantly more complex.

The audit issue is definitely an issue: the audits used are fundamentally different than what browsers try to achieve, and so having to adopt the lower standard definitely impacts user security. However, my point was that in addition to those concerns, the technical design itself results in less robust and less agile systems, and that makes things less secure.


Yes. It requires the EU Trustmark, a logo designed through a secondary-school competition, to be displayed with certain colors and sizing, as directed through Implementing Acts (which have the force of law, but decided at the Commission level).


The draft revisions actually propose making such authentication mandatory for service providers to implement if their users would like to use it.

That is, it specifically requires that websites (particularly Very Large Online Platforms) MUST accept such ID in lieu of an email and password, at the user’s request. This was part of the original motivation for the revisions: to target “Sign in with Facebook” or “Sign in with Google” and require such sites to also offer a “Login with EU” option.

Source: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM%3A20...


So $VLOP is compelled to accept QWAC user-certificates, if one user requests it? And QWAC user-certificates are issued by TSPs whose CA cert must appear in the root-store unconditionally?

That means there is nothing preventing $TSP from forging my certificate, and giving it to criminals/government-agents, and nothing to keep the TSP in line, because the single audit constraint is "Keep the Minister satisfied".

I personally don't have a problem with the idea of replacing passwords with user-certs, provided I get to generate my own cert with my own private key. But the evidence is that general users can't learn how to use certificates.

I hate passwords, but I'd rather use passwords than a user-cert issued by an unreliable CA.


The "unreliable CA" you are talking about here happens to be banks and similar. Do you trust that your bank doesn't just steal your money? Yes, you basically can't function in modern society if you don't. These e-IDs just piggyback on that trust to also work for online sign-ins. Most people worry more about their bank account being compromised than their GitHub account, so if these CAs (i.e. banks) start to abuse their position, we would have way bigger troubles than someone stealing your GitHub account.


I see, QWACs are to be issued by banks. And websites are required to trust them.

So if the bank gets hacked, then presumably the EU will indemnify the relying website against any legal action for trusting an unreliable CA? Even if that website is in China/Russia/Belarus?

You seem to have read the proposed regulation, Jensson; the information you've given is not in the position paper. Any chance of a summary?


The QWACs can be issued by anyone who meets the minimum requirements, which are substantially less than those required for TLS server CAs in browsers. So while it’s true that banks can issue these, in practice there are many small companies with fewer than a thousand or so certs out there which have the same requirement that they must be accepted.

The eID certificates do come with probative (legal) effect, but this is where it gets complicated.

If the CA is hacked or screws up, yes, the CA is liable. But only if you did everything you were supposed to, such as checking every element of the certificate. These certificates have a variety of fields, such as “liability only up to XX euros”, and you (the site or user) are liable if you use it for more than that.

PSD2 has shown that the standards are a nightmare to fully implement. https://wso2.com/blogs/thesource/all-you-need-to-know-about-... gives a useful overview of how it’s worked for PSD2, and the new Digital Identity Framework/eIDAS Revisions proposes to make that the approach the standard everywhere.

In practice, this means that the server accepting your certificate needs to implement all of this correctly (spoiler: they don’t), or they bear the liability if the CA gets hacked - and they can’t distrust that CA. It also means the CA potentially learns every site you visit, because the sites have to check with the CA (if using OCSP).

Of course, if the government themselves directed the CA to misissue - e.g. at the direction of law enforcement - no such liability would be presumed, because it was a presumably lawful issuance.


Thanks. Your explanation is miles more informative about that than the original article.


I've worked on identity infrastructure in an EU country, so I know a lot of details about how it works; the EU proposal is just an extension and merger of the local ones. I can only explain how the local ones work - I don't know the exact details of the EU proposal, as I no longer work in that industry.


I'm saying it'll go even further than that though. If you want to use the service you will have to authenticate through this method. This is pretty much as perfect as it gets for any company trying to vacuum up data, because they will be able to uniquely identify every user. It's effectively the end of privacy by obfuscation, because you will have to identify yourself.


Yes, the current regulation is targeted at government sites authenticating citizens, but the goal with these revisions is to require VLOPs to support this, along with allowing them the ability to require this for all websites. The original roadmap called out by the European Union Agency for Cybersecurity (ENISA) suggests a long-term goal of making this mandatory, effectively reviving the idea of the “Internet driver’s license” (for users) and “authorized domestic website” (for servers).

Source: https://www.enisa.europa.eu/publications/qualified-website-a...


They can already do that though, nothing is stopping them from adding this to their sites right now. EU already has e-id for people and companies can use that if they want.


