How I Tricked Symantec with a Fake Private Key (hboeck.de)
200 points by hannob 12 months ago | 38 comments

Symantec is surely testing the patience of Google/Mozilla now. Illegitimate revocation seems almost on the same level as illegitimate issuance of certificates. Imagine the impact on an HPKP site.

It's not. A CA can revoke only certificates that they themselves issued, so the harm that a CA can do through inappropriate revocations is limited to its own customers and their users. If a CA gets a reputation for doing this, its customers can simply take their business elsewhere (this may be costly and inconvenient, but it's always an option, though in the HPKP case it requires them to have planned ahead). This gives CAs a clear incentive not to revoke certificates inappropriately, so the system works.

By contrast, illegitimate issuance by a CA that browsers trust is a threat to everyone's security. Furthermore, the parties directly harmed—the owners of the domains of the illegitimate certificates and their users—typically have no relationship at all with the offending CA, and consequently no direct recourse against it. That's why browser vendors—the only parties that CAs truly have to answer to—have to get involved in such cases.

I agree with your summary but does it not demonstrate a lack of thorough process control?

I wondered where Apple and Microsoft were in this whole thing and I found this from one of the Chromium trust discussions:

"Assessing the compatibility risk with both Edge and Safari is difficult, because neither Microsoft nor Apple communicate publicly about their changes in trust prior to enacting them."

Why or why not would you want to withhold this kind of information?

To some degree, I believe Microsoft and Apple want to avoid any risk of being seen to collude with other vendors to essentially destroy the business of another company.

Both have company cultures built upon secrecy as the base value, I believe. "Why would you want to withhold this kind of information?" then answers with "Because you want to withhold any information, except on a need-to-know basis."

You should always have a plan for the case that your key is compromised. It's recommended to announce two keys via HPKP: a primary and a backup key. If your primary key is compromised, you can always get another certificate issued for the backup key and update your HPKP header.

I think it's even possible to get another certificate issued by a different CA using the old private key. I'm not sure that all CAs communicate with each other about revoked keys.
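For reference, an HPKP header announcing a primary and a backup pin looks roughly like this (the base64 values are placeholders, not real pins):

```
Public-Key-Pins: pin-sha256="PRIMARY_PIN_BASE64="; pin-sha256="BACKUP_PIN_BASE64="; max-age=5184000
```

Each pin-sha256 value is the base64-encoded SHA-256 hash of a public key (SPKI), and RFC 7469 requires at least one backup pin that is not in the current certificate chain.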

>You should always have a plan if your key is compromised.

But the point here is that there was no compromise. None at all: the author simply forged the whole thing and submitted it as part of a legitimate bundle for added plausibility, in case there was a human in the loop (which there shouldn't be, and anyway that part could be trivially forged too: just register a bunch of domains over time, get certs for them, then leak those genuinely legitimate private keys on purpose). So having a primary/backup/backup-backup/.../backup^n key is useless in this scenario, because only the public component was needed to make a fake good enough to fool Symantec's incompetent systems.

>It's recommended to announce two keys via HPKP

It's not just recommended, it's required. HPKP can certainly lock you out of your own site if done wrong, but there are safeguards against that. A shockingly high number of sites that try to use HPKP don't actually gain anything at all, because every browser out there ignores their HPKP headers, which are malformed in some way.

That you would even consider trying HPKP without running the SSLLabs server test or Hardenize against it (which would identify these defects) is also shocking in itself.

> Imagine the impact on an HPKP site.

I don't see why the impact on a site using HPKP would be different than for any other site. In both cases the site would have to install a new cert.

The only difference with the HPKP site is that they'd need to make sure their new cert uses the same key as the old one. (Or they could use a backup key/cert, which I'd expect them to have anyway if they're using HPKP.)

Fairly sure that they'd be blocked from issuing a new cert with the same key?

You'd think so given that Symantec believes the key is compromised, but that's actually not the case. I actually saw a fairly interesting discussion about this over on the mozilla.dev.security.policy mailing list just the other day: https://groups.google.com/forum/#!topic/mozilla.dev.security...

However in that case it was Comodo that didn't block the compromised private key.

Right. If you read the rest of the thread though you'll see that that's because they're not actually required to check. (Or at least, it's certainly arguable that they're not.) Any other CA could have done the same thing and that would be considered perfectly acceptable behavior per the Baseline Requirements.

I started the thread, I have read it :-)

Doesn't the max-age parameter[1] restrict a browser to accepting only the previously specified keys for a certain time frame? Therefore newly issued certificates should throw a warning. Otherwise it would be trivial for a MitM'ed server to deliver its own key hash via an HPKP header. Or am I incorrectly understanding the value of the pinned hash?

1: https://developer.mozilla.org/en-US/docs/Web/HTTP/Public_Key...

You're correct about the header. The part you're missing is that it's entirely possible for the site operator to get a new, unrevoked certificate that uses the same underlying private key issued to themselves by a different (or even the same) CA. Such a certificate would be accepted just fine by browsers which have that key pinned. HPKP pins public keys, not individual certificates.

Gotcha, I appreciate the explanation.

I agree with the author that one of the most annoying parts of the process is that Symantec doesn't tell the legitimate owner any details to actually help them.

If I got that type of error/reissue email, I would assume it was something fairly normal going on, when in reality I really, really need to know that the private key had been leaked.

https://support.comodo.com/index.php?/Knowledgebase/Article/... currently has instructions for checking consistency. Did they change it after this post was published?

Edit: Jul 9 snapshot from https://archive.fo/Zdmck didn't include those instructions, so they only added that recently.

I imagine submitting a fake Symantec key, having them revoke their own cert, and the confusion it would create, like a prank. But it might actually cause real damage.

That makes me wonder if CAs can be revoked this way. I mean, do browsers even check root and intermediate CA certificates via OCSP?

Theoretically not; but as we have seen, there is often a gap between theory and practice.

That's amazing. It's so simple most people wouldn't even think to try it.

In fact, a lot of the really good hacking stories are. The recent .IO takeover for example.

> For example a private key contains values p and q that are the prime factors of the public modulus N. If you multiply them and compare them to N you can be sure that you have a legitimate private key.

But make sure that neither p nor q is 1.
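The check quoted above can be sketched in a few lines of Python. The numbers below are tiny toy values purely for illustration; a real check would parse the key and certificate with openssl or a crypto library:

```python
# Consistency check described above: a legitimate RSA private key's
# prime factors p and q must multiply to the certificate's public
# modulus N -- and neither factor may be the trivial value 1.

def key_matches_cert(p: int, q: int, n: int) -> bool:
    """Return True only if (p, q) plausibly factor the cert's modulus n."""
    if p <= 1 or q <= 1:       # reject the p = 1, q = N forgery trick
        return False
    return p * q == n

# Toy example: N = 3233 = 61 * 53
assert key_matches_cert(61, 53, 3233)        # legitimate key material
assert not key_matches_cert(1, 3233, 3233)   # forged "key" with p = 1
assert not key_matches_cert(7, 11, 3233)     # unrelated private key
```

Note that the p = 1 case is exactly why the caveat above matters: 1 * N == N, so a naive multiply-and-compare would accept a "key" anyone can construct from the public certificate alone.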

It is absolutely unacceptable that anyone can revoke any domain's cert by presenting a fake private key that doesn't cryptographically match the cert.

This makes me wonder how easy it would be to social engineer getting Symantec to revoke a third party's cert, even if you don't have the private key, just by having a talented person on a phone with faked outgoing caller ID via SIP trunk. See: Mitnick's Art of Deception book for examples.

Apparently, it's exactly as easy as the blog author showed. That is exactly what they did: they reported a private key leak with a fraudulent private key file (unrelated key material with forged cert details).

This seems trivial to check: couldn't you extract the public key from the private key, encrypt something with the extracted key, and see whether the decrypted message matches?

In the post I recommended to sign a test message with the private key and check it with the public key - which has the same effect, it's just the other way round. Whether that's "trivial" is another question, given how arcane the usage of the tools doing these things is.

There are multiple ways to properly check this. However the point is: Symantec didn't do it and the vast majority of the guides you find on the Internet about this are wrong.
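A minimal sketch of that sign-then-verify round trip, using textbook RSA with toy parameters (illustrative only; a real check would use openssl or a proper crypto library with hashing and padding):

```python
# Sign a test value with the private key, then verify the signature
# with the certificate's public key. If the private key is a forgery,
# verification fails. Toy parameters, textbook RSA.

n, e = 3233, 17   # toy public key: N = 61 * 53
d = 413           # matching private exponent: 17 * 413 == 1 (mod lcm(60, 52))

def sign(digest: int, priv_d: int) -> int:
    """'Sign' a message digest with a private exponent."""
    return pow(digest % n, priv_d, n)

def verify(digest: int, sig: int) -> bool:
    """Verify a signature against the public key (e, n)."""
    return pow(sig, e, n) == digest % n

digest = 1234  # stand-in for a real message hash

assert verify(digest, sign(digest, d))      # genuine key pair: passes
assert not verify(digest, sign(digest, 7))  # unrelated private key: fails
```

With the openssl tool the same idea is the `openssl dgst -sign` / `-verify` pair, which is exactly the arcane incantation the comment above is alluding to.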

> openssl x509 -in [certificate] -noout -pubkey > /tmp/pubkey.pem

I cry a little every time someone uses /tmp this way.

This is insecure.

I cry a little when someone says something is wrong, without explaining how to do it right. What's insecure with this, and how should it be done instead?

The default mode of /tmp is usually world-writable, so files created like this could be manipulated by bad actors on the same system. Not so critical in this case, but it can be seen as a "code smell".

This of course assumes you are running an actual multiuser system without a 1777 /tmp and a 077 umask, like it's 1989.

All the attacker has to do is:

  touch /tmp/pubkey.pem
  chmod a+w /tmp/pubkey.pem
before the victim runs the code.

No sticky bit, no restrictive umask, also no protected_hardlinks/protected_symlinks is going to save you.

Proof-of-concept code like this can't anticipate everything about the environment it's going to run in, without risking distracting from its point. I guess it would be better to just put the files in the current directory, to not encourage the use of /tmp. The script[0] looks safer.

While we're on the subject of the openssl tool and file permissions, I've been disappointed... On my system, this command creates key.pem world-readable: "openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -nodes". I've been meaning to set my umask to fix issues like this, as I rarely need to let another user account access my files. On the other hand, there aren't supposed to be any other users on my systems, and it's more likely my browser will get pwned and have access to my self-readable private keys anyway.

[0] https://github.com/hannob/tlshelpers/blob/46c50b27adf79476ae...

rrix explained why it is bad. The safe way is to create the file using mkstemp (to avoid accidental collisions) in a directory owned by you (to avoid malicious collisions).
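For reference, a minimal sketch of that pattern in Python (stdlib only, hypothetical file contents): tempfile.mkstemp atomically creates the file with an unpredictable name and mode 0600, owned by the caller, so another local user can neither predict nor pre-create it.

```python
# Safe temp-file pattern: mkstemp creates a uniquely named file with
# mode 0600 in one atomic step, avoiding both accidental collisions
# and the malicious pre-creation attack against fixed /tmp paths.
import os
import tempfile

fd, path = tempfile.mkstemp(prefix="pubkey-", suffix=".pem")
try:
    with os.fdopen(fd, "w") as f:
        f.write("-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n")
    assert os.stat(path).st_mode & 0o777 == 0o600  # private to the creator
finally:
    os.unlink(path)
```

The shell-level equivalent is the mktemp(1) utility, which wraps the same mkstemp(3) call.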

FYI: I changed this now.
