Understanding the first imperative of a commercial Certificate Authority (utoronto.ca)

> relatively long duration certificates are one area where they have something that Let's Encrypt doesn't.

For some time recently, AlwaysOnSSL[1] was issuing automated, free, trusted certificates with 12 months of validity via HTTP or DNS domain validation. They've since disabled new registrations (who knows why), but I'm still able to get these on my existing account.

Comodo is issuing 12-month certificates for cPanel servers, and 3-month certificates for all domains on any licensed cPanel server (without the rate limits imposed by Let's Encrypt). In fact, the cPanel Comodo CA is currently the second largest CA, with some ~13-14% of all currently trusted certificates, ahead of Comodo with 8% and behind Let's Encrypt with ~66% [2].

So, I'm not sure that it's strictly true that commercial CAs will refuse to do anything that cannibalizes their own sales. I think there are some mixed models that can work (RIP StartCom), and I hope we see more of them for the sake of the WebPKI. Overwhelming reliance on a free CA that is funded purely by donations doesn't seem like a good thing to me.

--

1. https://alwaysonssl.com/issue.php

2. https://censys.io/certificates/report?q=tags%3Atrusted&field...


> So, I'm not sure that it's strictly true that commercial CAs will refuse to do anything that cannibalizes their own sales.

Someone (i.e. some CA) had to cross-sign Let's Encrypt's root certificate (or its direct derivative). Turns out that CA is IdenTrust, a bank consortium. [1]

I'm not disagreeing that it's a good rule of thumb though.

[1] https://en.wikipedia.org/wiki/Let's_Encrypt#Technology


IdenTrust was initially part of a bank consortium, but by the time we (Let's Encrypt) started talking to them they were just an independent company. Now IdenTrust is a division of HID Global via acquisition.

Alternatively they could have tried to get the browser vendors to include them as an authority. That may have been hard but now that they are so large they would probably succeed.

The strategy is/was to do both. We started getting our root into browsers immediately and we sought out a cross-sign to let us operate while the ~7-10 year root inclusion and propagation process happens.

Well, they are directly included in most root programs now (ISRG Root X1). The cross-signing was a bootstrapping and compatibility solution (e.g. Windows XP).

It seems likely that at some point they will drop the cross-signing.


But still not Microsoft's programme. It would be interesting to know (but we'll probably never be told) if there's any particular reason for it taking so long...

https://crt.sh/?caid=7394

The top row of "Valid¹" items, with one "No" and one "Defer to OS", is the trust you care about: whether ISRG is trusted for the Web PKI. "Defer to OS" is what Chrome says by default about almost everything - basically Google prefers to let your OS vendor pick what's trusted unless they feel the need to step in and override.

I have no idea why Apple trusts ISRG for things ISRG doesn't do and has no intent of starting to do; my instinct is to label it "incompetence", but I've never been an Apple fan, so I'm sure they have some excuse.

On the other hand, at least Microsoft has a comprehensive and reliable update process, so there's a good chance most of their population of systems can be upgraded in months rather than years once they do make a decision.

All but one of the programmes listed are closed-door operations; we have no idea how they decide what to do, and only a cynic like me would suggest that it's because they mostly react to events after the fact...

The exception is Mozilla, which makes all these decisions in public in the newsgroup m.d.s.policy.


But that only works for new browser releases.

This way, LE works on ancient Android devices as well as on god knows what strange Internet-enabled devices.


I went down the rabbit hole one day when discussing how certificates work. To me, trust was a vague concept considering what was at stake. I ended up at auditors, but even then it feels odd that so much trust comes down to a few people. And even worse, those auditors use computers that use certificates of the companies they audit.

I used to work in a casino, and Compliance/Regulator (a type of audit) would check the settings I input. After much employee churn it got to the point where the regulator was just a warm body with a clipboard. I'd enter a setting, say a jackpot limit of $1,200; the auditor would ask me to show him, then check it off. But the regulator used my copy of the data, so they were telling me what I had input was right while looking at the exact same data. The regulator should have used a separate source of data (they used to); what if my data was wrong? It's happened many times before.

Anyway that's what CA auditors reminded me of.


Audit ought correctly to be a last resort: we should only audit things when the superior options are impractical (or outright impossible). Auditors are invariably conflicted (not giving a clean audit is bad for business), and other actors must work to mitigate this problem.

Part of the failure of Symantec (causing them to exit the CA business last year as a result of financial pressure) was that, rather than having their own effective oversight in place, they had increasingly relied upon audits from the Big Four to confirm that things were being done correctly at their "partner" companies, even though the wider PKI community had seen repeatedly that audit is not very effective and should not be a substitute for oversight.

As a result, when problems with CrossCert were identified, Symantec found they didn't have the records that ought to have existed, and ended up asking CrossCert for them; of course CrossCert did not have such records either. This situation couldn't have happened if Symantec had effective oversight, since that oversight role would necessarily have examined the (non-existent) records.

Individual audit companies may prove either incompetent or corrupt too. Mozilla concluded that the Hong Kong branch of EY (once "Ernst & Young", but short names are apparently cool now) was either incompetent or corrupt, and told its parent that it would no longer accept audits from this outfit. This was part of the problem with StartCom: their audits didn't mention things that should have been evident at the time to a competent auditor.

A central purpose of Certificate Transparency is to reduce the need to rely on auditors. With all the signed certificates made public, independent researchers can interrogate data once only accessible to auditors. For example, the question "Were any certificates issued for Debian Weak Keys?" is something we might once have resolved with audit - let's ask the CA "Do you check for weak Debian keys?" and if they say "Yes" that's a pass. But today researchers can simply download all the public keys, and memcmp() them against Debian Weak Keys. Upon finding one, they raise it as a Problem Report. Sure enough the CA discovers that although of course they believed they were checking for these weak keys, for a brief period the check was disabled and they issued a certificate. Oops. Observatory tools like Censys might have helped, but Certificate Transparency gives us guaranteed coverage.
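To make that concrete, a rough Python sketch of that kind of check might look like the following (the weak_key_fingerprints.txt file and the certs/ directory are hypothetical stand-ins for, say, the openssl-blacklist fingerprint lists and a batch of certificates pulled from a CT log or Censys):

    # Compare the public keys of publicly logged certificates against a
    # blocklist of known-weak key fingerprints (hypothetical local inputs).
    import glob
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    weak = {line.strip() for line in open("weak_key_fingerprints.txt")}

    for path in glob.glob("certs/*.pem"):
        cert = x509.load_pem_x509_certificate(open(path, "rb").read())
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        if hashlib.sha256(spki).hexdigest() in weak:
            print("Problem report candidate: %s has a known-weak key" % path)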

Nevertheless, audit is a necessity for some practical questions. Auditors physically visiting a facility can verify that it is indeed physically secure (are the data centre fire doors left open onto a shopping centre car park?) and that documentation physically exists (are there, in fact, binders full of procedures? It's funny: when they know nobody follows the procedures, humans are often too lazy to even bother writing them down...). Likewise, auditors must attend certain ceremonies, e.g. to create a new private key. It would be very expensive for representatives from the major Public Trust Stores to be sent around the world to do this, and it's clearly not acceptable to just insist that all CAs must be physically located in Northern California.


>Individual audit companies may prove either incompetent or corrupt too...

I agree. I've experienced both situations. KPMG did an audit of my former workplace, and I was amazed at what they found, and how fast. It was a single receipt that a cashier rang in after the supervisor had forgotten to log off the POS. The cashier didn't notice and rang in that one item under the supervisor's login. The KPMG auditor caught it on paper, not electronically; they had gone through four years of paperwork for one department and spotted it (on the first day of the audit too!).

After that, Deloitte or KPMG seemed to be mailing it in. I knew some departments in the place where I worked had greatly slipped in following protocol, and none of it was discovered by the auditors. Or I should say, I wasn't aware of any discoveries, since I was not directly involved in the process.


The central thesis of the article rather naively assumes (or asserts) that CAs' profits come primarily and directly from selling certs, rather than e.g. value-add services. It might very well be the case, but before going any further I'd like to see the claim backed up by something.

The central thesis of the article seems to reject all market forces that aren’t directly controlled by the CA. For starters, CAs have to compete to maintain the trust of the browsers. The author glosses over this as if it’s barely relevant, but the browsers essentially decide whether CAs are allowed to conduct business at all. CAs also have to compete for the trust of their customers. The author seems to think that CAs publishing a bit of FUD somehow entirely negates the customers’ ability to assess their trustworthiness. I’d say that any actor who makes decisions about trust based only on marketing content probably doesn’t care very much about the integrity of the CA system. The existence of commercial incentive doesn’t intrinsically make a system corrupt. The CA’s product is essentially trust; if their product isn’t good enough, then they won’t be able to sell it, especially since there are a number of different supervising bodies to ensure they’re doing a sufficient job.

I disagree. A CA's actual product (what you pay money for) is some magic bits that put a padlock in the browser URL bar or, alternatively, allow you to do HTTPS with browsers without your visitors getting scary warnings from the browser (and attackers being able to MITM your visitors). Since it is extremely rare that CAs are de-trusted by browsers, especially on short notice, any CA that can create this good today is basically as good as any other CA. 'Trust' doesn't come into it as such, and to the extent that you must trust your CA, you cannot feasibly verify that it actually is trustworthy and will remain trustworthy, and thus that certificates it has signed for you will remain trusted by browsers for their duration.

(DigiNotar was trustworthy right up until people discovered that it had been compromised. StartCom actively hid that it had been bought by WoSign; it took people some digging to turn this up and verify it. Other CAs have routinely denied that they had problems when they did.)

Even if you could, picking an especially trustworthy CA doesn't add any extra security for your website, because you are actually dependent on the security of all of the CAs out there, any of which can in theory mis-issue a TLS certificate for your site. Sure, there are CAA records, but they only help if the CA actually follows the rules properly; CAs can fail here at both a technical and a business level. (One CA recently proposed that it should be allowed to ignore CAA records if the domain owner told it to.)
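For reference, a CAA record is just a DNS record naming the CAs allowed to issue for a domain; a quick sketch of looking one up (assuming the dnspython package, with example.com as a placeholder) looks roughly like:

    # Query a domain's CAA records and collect the CAs authorised to issue
    # for it. The record only restricts issuance if CAs actually check and
    # honour it, which is exactly the failure mode described above.
    import dns.resolver

    def allowed_issuers(domain):
        try:
            answers = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return set()  # no CAA records: any trusted CA may issue
        return {r.value.decode() for r in answers if r.tag == b"issue"}

    print(allowed_issuers("example.com"))  # e.g. {'letsencrypt.org'}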

Even attempting to do basic checking of a CA probably doesn't make sense at a business level unless you're buying certificates in bulk. Suppose that you're buying four TLS certificates and the best price you can get is $30 per certificate per year, or $120 in total for a year. How much engineer time does it make sense to spend in order to pick the most 'trustworthy' CA (i.e. one that is likely to remain trusted by browsers for a year), given that CA de-trusting is an extremely rare event and, if it happens, you're only out another $120 to buy certificates from the next CA?

(I'm the author of the linked-to entry and as you can tell, I have opinions on CAs and the TLS ecosystem. One of them is that Certificate Transparency is a really big deal because it removes a lot of the 'trust me' from the whole CA ecosystem.)


I'm not sure if you intended to or not, but you've perfectly proven my point. DigiNotar and StartCom were both forced out of business because people couldn't trust them.

I can generate a set of magic bits on my own computer without any input from a CA, and I'll be able to use them to encrypt a TLS session. What I can't generate on my computer is trust, which is what I get when I buy a cert from a CA. Also, as a customer of CAs, I have a selection of different companies to choose from, and I base that selection mostly on trustworthiness. It's not so difficult for me to determine which CAs have had the fewest trust-undermining incidents.
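To illustrate the split: the self-made magic bits are only a few lines of code (a minimal sketch using the Python cryptography package; example.com is just a placeholder name), but browsers will still warn on the result because no root store vouches for it:

    # Generate a key and a self-signed certificate: enough to encrypt a TLS
    # session, but carrying no third-party trust, so browsers will warn.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    with open("selfsigned.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))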

To your point about the likelihood of CAs being de-trusted: something that happens even less often (as far as I'm aware) is relying parties claiming on the relying party warranty, so I don't think appealing to likelihood is an especially valid point here.


My apologies for not being clear about my DigiNotar and StartCom examples. What I intended to point out here is that in neither case were these failures something which one could have discovered in advance by reasonable amounts of checking how trustworthy the CA was. DigiNotar's compromise was (as far as I know) completely unpredictable and appears to have come from a state-level attack; StartCom actively hid what had happened, and it took significant work to uncover. Both CAs would have passed most people's checks done beforehand.

If advance warning is nonexistent, there's no point in even trying to check. If advance warning is hidden, you have a cost/benefit tradeoff to make, and I believe that in most cases (i.e. with low amounts of money involved), it's not a good use of scarce and expensive engineer time to try to assess the state of a CA.

(This is especially the case if your company doesn't already have an engineer who is relatively expert in the CA ecosystem and knows where to even start looking. Without that expertise, would something like StartCom's quiet sale to WoSign have raised any alarms even if you discovered it?)

As far as magic bits go, the one thing that self-generated magic bits can't do is ensure that a complete first-time visitor can't be MITM'd by an active attacker. Whether this matters (or matters more than the scary browser warnings) depends on circumstances.


I think I can boil down an example of why I prefer not to talk about trust by itself as it relates to CAs. Imagine a hypothetical version of Let's Encrypt that has all of its operational excellence and security, but that uses root certificates that are not cross-signed and not included in any browsers. I would argue that this CA is exactly as trustable as LE is (by hypothesis its procedures and technologies are the same, and we trust LE's), but clearly it is not as useful a CA as LE is, because it is not included in the root set of any browser (which we call 'trusted', and which generally implies that the people behind the browser believe that the CA will not issue certificates improperly).

If we say that this CA is not 'trustworthy' here, what we really mean is 'this CA is not in browser root sets and so the TLS certificates it issues provoke browser warnings'. This is useful in one sense (it is what most people care about), but I prefer to be explicit about what we mean (partly because 'trust' is a loaded term with tangled implications).


I disagree that they compete on trust. A CA is either trusted or not; they have to stick to the rules to stay in the browsers' lists, but, as we see all the time, they do the bare minimum to stay in the game.

But as a user of certificates, there are no levels of trust to compare: either the padlock is green or it's not.

So I look only at price, and maybe more abstract things like customer service.



