In what sense is this true? They're effective because browsers only accept certs which can first prove that they were submitted to a CT log, and it's easy to search the CT logs for e.g. gmail.com to find potential misissuance events.
This is the first time I've heard CT described as ineffective. What gives?
That's because the owner of example.com wasn't monitoring CT. Had they been monitoring, they would have been alerted quite quickly. As a case in point, when Certinomis misissued a certificate for test.com, my monitor notified me 16 minutes after issuance, I filed a report with Mozilla two hours after issuance, and a representative of Mozilla responded less than 3.5 hours after issuance. That's pretty fast.
And now Mozilla is kicking Certinomis out, providing yet another example of how CT is improving the Web PKI. CT works.
I've written my own Certificate Transparency monitor called Cert Spotter. I use both the standalone open source version (https://github.com/SSLMate/certspotter) and the hosted service (https://sslmate.com/certspotter) to monitor my own domains as well as several test/example domains (example.com, test.com, etc.).
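The core job of a monitor like this is simple: watch the logs for certificates naming your domains and flag any issued by a CA you didn't expect. A minimal sketch of that check, using the field names from crt.sh's JSON output (`issuer_name`, `name_value`, `id`); the sample records below are fabricated for illustration, not real log entries:

```python
import json

# A CT monitor's core check: flag certificates for a watched domain
# whose issuer is not on the owner's expected list. The record shape
# mirrors crt.sh's JSON output; the sample data is made up.

EXPECTED_ISSUERS = {"Let's Encrypt"}

def unexpected_certs(records, expected=EXPECTED_ISSUERS):
    """Return records whose issuer matches none of the expected CA names."""
    return [rec for rec in records
            if not any(ca in rec["issuer_name"] for ca in expected)]

sample = json.loads("""[
  {"id": 1, "issuer_name": "C=US, O=Let's Encrypt, CN=R3", "name_value": "example.com"},
  {"id": 2, "issuer_name": "C=FR, O=Certinomis, CN=Certinomis Root CA", "name_value": "example.com"}
]""")

flagged = unexpected_certs(sample)
print([r["id"] for r in flagged])  # only the unexpected issuer is flagged
```

A real monitor would fetch entries continuously from the logs (or a service like crt.sh) and alert on anything flagged, which is how the 16-minute detection above is possible.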
There are Google Groups (because a lot of this is a Google project or spins out of Google projects) for a lot of this, including mozilla.dev.security.policy (the public discussion of Mozilla's root trust store policy), certificate-transparency (general discussion of CT) and crtsh (discussion of Rob's crt.sh monitor database and site). They're all usually quiet enough that you could read back a few months and/or subscribe and follow them in the background.
Thank you very much for the links.
Yeah, it's technically true that CT isn't designed to directly prevent most attacks.
The largest effect of CT is that CAs get pressured to get their stuff in order. It has been a major milestone in improving the ecosystem, particularly because it also led to many real distrusts, including of one of the largest CAs of all (Symantec).
Yes, CT doesn't try to prevent most attacks directly. It tries to create an ecosystem where CAs fear getting distrusted when they don't prevent attacks in the first place. And it works very well.
Another point is that CT alone won't prevent attacks or even inform you about a problem - it needs a monitoring system.
(yes I know that's not the problem with CT - it's great, just trying to justify a strong opinion of "it doesn't work")
> Efforts to improve this mess, such as Extended Validation (EV) certificates, have gained no traction with users, as they are largely immune to subtle changes in the content and colours of the browser’s navigation bars, and certificate transparency logs appear to be completely ineffective in catching CA-related name hijack events in real time.
That’s a pretty extreme opinion. I am surprised it appeared on the official apnic.net site. I do wonder if it has anything to do with the fact that the major browser vendors, who actually decide who gets to be a CA, are US-based.
How much do you trust:
GUANG DONG CERTIFICATE AUTHORITY
Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK
Unizeto Sp. z o.o.
XRamp Security Services Inc 
AC RAIZ FNMT-RCM
etc., not to have been compromised, or to be selectively delivering certs to "interesting" targets? Because any of them can tell you practically any host on the internet has a valid SSL certificate.
Stuff like DNS CAA records, and certificate transparency help somewhat, but are either trivially stripped by MITM attacks or not implemented at the CA or software level.
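For reference, the CAA mechanism mentioned here is simple enough to sketch: a domain publishes `issue` records naming the CAs allowed to issue for it, and a conforming CA checks them before issuing. A simplified version of the RFC 8659 decision logic (ignoring flags, `issuewild`, parameters, and the climb to parent domains; the record values are illustrative):

```python
def caa_permits(caa_records, ca_id):
    """Simplified RFC 8659 check for a non-wildcard certificate.
    caa_records: list of (tag, value) tuples as found in DNS.
    Ignores flags, issuewild, parameters, and parent-domain climbing."""
    issue = [v.strip() for tag, v in caa_records if tag == "issue"]
    if not issue:
        return True  # no CAA restriction published: any CA may issue
    # A bare ";" value forbids all issuance, since it matches no CA id.
    return any(v.split(";")[0].strip() == ca_id for v in issue)

records = [("issue", "letsencrypt.org"),
           ("iodef", "mailto:security@example.com")]
print(caa_permits(records, "letsencrypt.org"))  # True
print(caa_permits(records, "evil-ca.example"))  # False
```

Note that the check runs at the CA, not the client, which is exactly the limitation the comment points at: a CA that skips or botches the lookup isn't stopped by anything on the wire.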
Appears to have gone out of business entirely; how safe is that SSL cert now?
I trust them no less than I trust the domain resellers who administer DNS.
It used to be that all you had to do to be a CA was knock on some browsers' doors and say "I'm a CA, plz add me, kthxbai." But that hasn't been true for quite some time; now you have to go through audits and several other hoops to be added to the lists, and miscompliance results in browsers dropping often fatal ban hammers, as DigiNotar and WoSign can attest. Browsers have also developed fairly stringent policies for enforcing minimum cryptographic rules and audit mechanisms that allow for strong policies.
The goal of many DANE proponents is to replace the entire CA infrastructure and PKI system, but they haven't really put in the work to actually demonstrate the ability of clients to enforce strong policies in practice. The root keys are no longer unacceptably weak, but there is still a proliferation of weak keys. And it's not like DNS providers have done a good job of actually enforcing policies around things like name spoofing or enforcing UTS#46 validity.
Instead, we're just supposed to take it at their word that DNS providers make better CAs than our current CAs, and drop all of the auditing and the like that happens today. And if the Libyan government decides it wants to MITM a popular link shortener, we're supposed to just shrug our shoulders and say "nothing we can do," I guess.
> The sole criterion for a domain validated certificate is proof of control over whois records, DNS records file, email or web hosting account of a domain.
Meanwhile, CT has done more to protect the Web PKI in a couple years than DNSSEC ever has, and without forklifting out infrastructure. In fact, if rumors are true and at least some of the CA misissuance was detected due to broken pins, you can say the same thing about HTTP certificate pinning, and that technology wasn't even successful.
The fact that DNSSEC is a true PKI, and the only true Internet-scale PKI, and one that is 100% aligned with the domainnames that users understand and which are embedded in URIs and email addresses, and the fact that DANE gives one a way to opt out of the WebPKI mess -- that all makes DNSSEC/DANE superior.
How many years have to pass before we give up on this boondoggle? We're coming up, within the next 12 months, on twenty-five. Do we wait for year thirty to finally, formally declare time of death? I'm fine either way. But I'd say we passed the mark where people can casually pretend that DANE is something anyone should expect to get value out of at least five years ago.
So I take umbrage at the tone of that comment above, that "it's OK that DANE adds one more CA to the mess of CAs we already have because some people will lock their sites down to DANE-only". Well, no, no they won't, because that's akin to taking your site off the Internet, and that will remain the state of play indefinitely (probably forever).
† and that assumes that after the IETF figures out how to specify stapled DANE that the major browser vendors will adopt it; the evidence points the other way.
I keep repeating the following and people ignore it, but it's a guaranteed way to stop mis-issuance cold - not detect it after a few hours, not make it less likely when a domain and website opt to jump through five separate hoops.
The registrar should coordinate between domain owners and certificate authorities to provide a second factor, second party, cryptographic verification of the intent to issue a cert. Rather than say "ok, this organization says who owns this domain", and then "ok, this other organization controls who can make domain certs", you actually validate the second thing based on the first thing. A hacker can control a domain, but that doesn't mean they own it, or that they control the private key(s) used to purchase or transfer the domain.
And you can automate this a dozen different ways for large organizations, it's not difficult. The difficult part is getting the registrars, CAs, and browsers to accept it.
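To make the proposal concrete, here is a hypothetical sketch of that flow: the CA refuses to issue unless it sees an authorization token bound to a secret the domain owner established with the registrar at purchase or transfer time. Nothing here is an existing protocol, and HMAC stands in for a real public-key signature purely to keep the sketch stdlib-only:

```python
import hashlib
import hmac

# Hypothetical registrar-coordinated issuance authorization.
# The registrar-held secret is established when the domain is bought
# or transferred, so a hacker who merely controls the website or DNS
# cannot produce a valid token.

def make_authorization(registrar_key: bytes, domain: str, ca: str,
                       not_after: int) -> str:
    """Domain owner authorizes one CA to issue, until a deadline."""
    msg = f"{domain}|{ca}|{not_after}".encode()
    return hmac.new(registrar_key, msg, hashlib.sha256).hexdigest()

def ca_verifies(registrar_key: bytes, domain: str, ca: str,
                not_after: int, token: str, now: int) -> bool:
    """CA checks the token before issuing; expired or forged tokens fail."""
    if now > not_after:
        return False
    expected = make_authorization(registrar_key, domain, ca, not_after)
    return hmac.compare_digest(expected, token)

key = b"secret-established-at-domain-transfer"
tok = make_authorization(key, "example.com", "some-ca.example",
                         not_after=2_000_000_000)
print(ca_verifies(key, "example.com", "some-ca.example",
                  2_000_000_000, tok, now=1_700_000_000))  # True
```

In a real deployment the token would be a signature verifiable against a key published by the registrar, so the CA never holds the owner's secret; the hard part remains, as the comment says, getting registrars, CAs, and browsers to agree on any of it.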
Entering this list costs hundreds of thousands of dollars; an attack using a different channel is going to be more economical.
I’m curious what you mean by this. CT is required for a cert to be trusted by Chrome. It seems like most CAs would participate, considering the low value of a cert not trusted by Chrome.
Some of the large public CAs offer, either explicitly in their ordinary "purchase" journey or by special arrangement, not to log pre-certificates. And a few in-house CAs at very big outfits like Google deliberately don't log pre-certificates.
You do NOT need to log pre-certificates to be trusted by Chrome; the reason to do that is to produce a drop-in experience for non-technical subscribers. If you have technical staff (which obviously Google does), you can log certificates retrospectively and staple the SCTs to the TLS setup as needed, allowing you to log on a "just in time" basis at the cost of needing a bunch more smart people to run your web servers. If those servers serve google.com and gmail.com, you already needed a large international team anyway. So these latter certs do get logged, but it may be literally seconds before they're used (and inevitably Google has screwed up and not logged one, briefly breaking some of their sites in Chrome... closed course, professional drivers, do not attempt).
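The SCTs being stapled here are small fixed-layout structures defined in RFC 6962: a version byte, a 32-byte log ID, a millisecond timestamp, length-prefixed extensions, and a signature. A minimal parser for that wire format, run against fabricated sample bytes (the log ID, timestamp, and signature below are made up):

```python
import struct

# Minimal parser for an RFC 6962 SignedCertificateTimestamp as it
# appears on the wire. The sample bytes are fabricated for the demo.

def parse_sct(data: bytes) -> dict:
    version = data[0]                       # 1 byte: SCT version (0 = v1)
    log_id = data[1:33]                     # 32 bytes: SHA-256 of log key
    (timestamp,) = struct.unpack_from(">Q", data, 33)   # ms since epoch
    (ext_len,) = struct.unpack_from(">H", data, 41)     # extensions length
    extensions = data[43:43 + ext_len]
    signature = data[43 + ext_len:]         # hash alg, sig alg, len, sig
    return {"version": version, "log_id": log_id,
            "timestamp_ms": timestamp, "extensions": extensions,
            "signature": signature}

fake = (bytes([0]) + b"\x11" * 32
        + struct.pack(">Q", 1554000000000)  # made-up timestamp
        + b"\x00\x00"                       # no extensions
        + b"\x04\x03\x00\x02\xab\xcd")      # made-up signature blob
sct = parse_sct(fake)
print(sct["timestamp_ms"])  # 1554000000000
```

Whether the structure arrives embedded in the certificate (pre-certificate logging) or stapled in the TLS handshake (the "just in time" route above), the client parses and verifies the same thing.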
I think it's possible that the average web developer underestimates the degree to which the web PKI is monitored by private and government organizations. And public cert logging only makes that easier.
Yes, we have too many CAs, and some of them are pretty suspicious. It would be great to be able to give them TLD limits. But becoming a CA is still very slow and expensive, and certificate transparency will catch bad behavior. So there is a very strong incentive for each CA to be good.
Compare this to random server exploit, which is deployed anonymously and has no direct monetary harm to the maker company.
No wonder there has been very few cases of CA-based compromise compared to good old software exploits.
Anything that doesn't have a strong hierarchical binding to the names users understand (domainnames) is simply not going to have better security than WebPKI.
The web is absolutely built on naming of services that users can mostly understand -- that is, domainnames. Users cannot understand x.500 naming. So the web absolutely needs a reliable way to authenticate domainnames. WebPKI is not reliable, no way, not even with CT. That leaves DNSSEC as the only remaining alternative.
It's DNSSEC or WebPKI, and WebPKI is no good. So it's DNSSEC or nothing.
Users don't actually understand domain names. If Google were to make the result of searching for "facebook" return a facebook.google.com that MITM'd Facebook, they could capture ¼-⅓ of Facebook's traffic without any software complaining about it.
Your comments have a strong sense of "if we just made sure that the server is truly using the domain name, then everything would be hunky-dory." In fact, as far as I can tell from actual security problems, fully securing the domain name (or even authenticating the server properly in the first place) wouldn't improve security meaningfully for most people.
Hint: Because ISPs are in a position to intercept and even MitM traffic, but generally not in a position to decide what URLs you go to.
We have the technology to actually ensure that https://news.ycombinator.com/ is really https://news.ycombinator.com/ but we do not have technology that could ensure this is where you get when you type "Hacker News" into Bing.
Part of what has to happen to civilisations around technology is that they adjust to what is possible. Different adjustments are possible, the egg washing thing is an example: in some countries you purchase eggs in the state they're laid (which means sometimes there is dirt on the shell), and you don't keep them in a refrigerator; in other countries you purchase eggs "washed" clean and they're refrigerated. Either of these approaches works fine, but mixing them up doesn't work.
So, for example, teaching people to bookmark sites is an adjustment that works with our technology, whereas teaching them to get everywhere by following sponsored links on Google not so much.
WebAuthn is another example. A Security Key doesn't care that you thought mybnak.example was your bank, doesn't care if you got there from a phishing email, from a bogus advert or you just can't spell when typing. It will resolutely refuse to provide mybnak.example with your mybank.example credentials because those strings are not the same. You can press the button over and over, swear, throw things, but because it's based on the URL it won't let you screw yourself.
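The property being described is just exact-match scoping: credentials are stored under the relying party ID, so a lookalike domain has nothing to retrieve. A toy model of a security key's credential store (class and field names are invented for the sketch; real WebAuthn also signs a challenge bound to the origin):

```python
# Toy model of the origin binding described above: a credential is
# keyed by the relying party ID, so a phishing domain simply has no
# credential to ask for. Names here are invented for illustration.

class SecurityKey:
    def __init__(self):
        self._creds = {}  # rp_id -> credential

    def register(self, rp_id: str, credential: str):
        self._creds[rp_id] = credential

    def get_assertion(self, rp_id: str):
        # Exact string comparison on the RP ID: "mybnak.example"
        # never retrieves the "mybank.example" credential, no matter
        # how convinced the user is that they're on the right site.
        return self._creds.get(rp_id)

key = SecurityKey()
key.register("mybank.example", "credential-abc")
print(key.get_assertion("mybank.example"))  # credential-abc
print(key.get_assertion("mybnak.example"))  # None
```

The human never gets a chance to approve the wrong comparison, which is the whole point.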
Privacy is valuable and relevant, and you need some modicum of authentication to get privacy. The main threat here is "someone who's watching all the traffic go by on the coffee shop wifi".
If you spoofed an IDN that looked like Façebook.com, you'd get damn near 100%.
The "bad news" here is that the TLS dnssec-chain-extension was dropped by tls-wg. Chain-extension staples DNSSEC records to TLS handshakes, so that you don't need to do extra DNS lookups to get them. This is important because a pretty big fraction of Internet users are on networks that can't reliably look up DNSSEC records using the actual DNS protocol.
If you're hungry for the details (of course you are!), I believe they can be summed up as follows:
1. The point of dnssec-chain-extension is to enable browsers to use DANE, either as an alternative or an enhancement to the X.509 CA hierarchy.
2. That means the threat model for dnssec-chain-extension has to include attackers with valid certificates; define them out of the threat model and you've defined DANE out of having a point.
3. An attacker with a valid certificate can strip dnssec-chain-extension out of a TLS handshake.
4. So they had to reinvent certificate pinning to make dnssec-chain-extension make sense.
5. But certificate pinning is already a dead letter as a browser standard, because of operational concerns around abuse and also consequences of misconfiguration.
If you're just here for another dose of my DNSSEC snark, I will observe for you this: Geoff Huston, a giant in the Internet/DNS operations research community, concedes in this post that DANE is the whole "reason DNSSEC is worth the effort". He also believes that dnssec-chain-extension was vital to getting it deployed in browsers (I'm skeptical Google, Microsoft, or Apple were going to dip their toes in this swamp-water again, but whatever). And Huston apparently didn't notice until today that the tls-wg killed this draft last year.
If you're keeping score, the DNSSEC "stick-a-fork-in-it-ometer" is currently registering:
* The elimination of DNSSEC from macOS, Mozilla, and Chrome (in that last case accompanied by a statement about why the team doesn't believe DANE is workable).
* The success of DNS-over-TLS and DNS-over-HTTPS, both of which accomplish 96% of the bottom-up, fuck-DNSSEC goals of DNSCrypt (or whatever it was Dan Bernstein called it).
* The success of Certificate Transparency and, more broadly, Google's success at cracking down on CAs and getting CT deployed, which further deflates the impetus for DANE.
* LetsEncrypt (and, I guess, Amazon's Certificate Manager), which took money out of this whole contest (I don't think money was ever the big deal in the real world that IETF people thought it was, but it sure drove a lot of dumb mailing list posts).
* MTA-STS, the SMTP strict transport security system that locks down TLS between MTAs, which was standardized by all the major email providers, specifically (stated in the draft!) to avoid the need for DNSSEC, and which is now being rolled out at GMail.
* And now I guess we'll pretend that DANE in browsers was somehow on the table and that it just died. I'm just happy we agree it's not happening.
I don't think I'm quite ready to stick a fork in it, but it's getting close.
My favorite detail from this writeup, by the way, is the fact that Sweden and the Netherlands got their DNSSEC adoption by paying people to use the protocol.
All of this friction would lead to many sites not using HTTPS because it's just too much of a hassle. And it's the visitors to those sites that suffer most.
"I don't think money was ever the big deal in the real world that IETF people thought it was, but it sure drove a lot of dumb mailing list posts"
I am pointing out that I disagree, it was a very big deal and it's why I created Let's Encrypt.
With DoH we get DNS push, and a browser could do DANE without incurring the associated latency penalty. It could peek ahead in the stream and see if a DANE (or SRV) record exists. No latency, no middlebox, no additional secure connection, no dnssec-chain-extension, no certificate pinning. DANE support in the browser would then be purely an optimistic code path that improves security if the DoH server happens to send the record.
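However the TLSA data arrives, the check the browser would then run is the RFC 6698 matching step: hash the presented certificate and compare it to the record's association data. A sketch covering only the most common combination (selector 0 = full certificate, matching type 1 = SHA-256); real DANE also handles SPKI hashes, exact matches, and the four usage modes:

```python
import hashlib

# Sketch of the DANE/TLSA matching step (RFC 6698), restricted to
# selector 0 (full cert) and matching type 1 (SHA-256). The "cert"
# below is a stand-in byte string, not real DER.

def tlsa_matches(cert_der: bytes, selector: int, matching_type: int,
                 assoc_data: bytes) -> bool:
    if selector != 0 or matching_type != 1:
        raise NotImplementedError("sketch handles selector=0, type=1 only")
    return hashlib.sha256(cert_der).digest() == assoc_data

cert = b"stand-in DER certificate bytes"   # what the server presented
record = hashlib.sha256(cert).digest()     # what the TLSA RRset carries
print(tlsa_matches(cert, 0, 1, record))    # True
```

The open question the thread is arguing about isn't this step, which is trivial, but how the client gets the TLSA record authenticated in the first place.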
It's a "pinning mechanism" like HSTS, so it doesn't work as a ransom tool, and the risk of a foot-gun is much more manageable. You're apparently very enthusiastic about MTA-STS, which has roughly the same pinning.
The idea that TLS should be used everywhere, which spread after Snowden, is a bizarre antipattern. HTTP cache appliances have been rendered obsolete, and domain-validated certificates only provide assurances that you are connecting to the same server that LetsEncrypt had connected to.