Hacker News
Public Key Pinning Being Removed from Chrome (groups.google.com)
261 points by ejcx on Oct 27, 2017 | 107 comments

I can't support this at all, and ironically this is partially my fault.

My and @eganist's Black Hat / DEF CON talk "Abusing Bleeding Edge Web Standards for AppSec Glory" demoed an exploit concept that we called "RansomPKP", which was essentially a pattern of hostile pinning that could theoretically enable pivoting from a web server compromise to holding a site for ransom. Hostile pinning was by no means a new concept, and even has some discussion in the IETF spec itself, but we found this to be a fun novel application and used it to spur some minor security improvements to browsers' HPKP implementations.

However, this talk also led to concerns being vocalized about the viability of HPKP in general (https://news.ycombinator.com/item?id=12434585), ultimately leading to this. This was not our intention at all, and I don't see hostile pinning alone as a reason to give up on HPKP.

I would much rather see some discussion around improving the usability of HPKP before jumping straight to putting it on the chopping block — both from a site operator's end and a user end. For example, off the top of my head, why not make it possible for users to click past the HPKP error screen like they can with any other TLS error screen?

I think this has very little to do with you.

HPKP was championed by a lot of security people, who in turn got a lot of people to foot-gun themselves (Scott Helme even admitted that he is often one of the first people called when someone foot-guns themselves with HPKP).

There were only a handful of sites that actually needed HPKP-level security, and ransom-HPKP was the least of people's worries. HPKP was more dangerous to the people deploying it on purpose than any mass header injection or similar attack ever was :/

HPKP has been doomed from the beginning. Here is sleevi saying he regrets it in 2/2016: https://twitter.com/sleevi_/status/696171562383224832

I agree that RansomPKP itself isn't that big a real-world concern (which was part of my point), but it did motivate the first wide discussions that I'd seen questioning whether HPKP should exist.

The linked Qualys blog post / HN thread was shortly after our talk, which (along with our conversation with Scott Helme around that time) led to Scott's post "Using security features to do bad things"[1]. RansomPKP and related follow-ups are directly highlighted by Scott's recent post "I'm giving up on HPKP"[2] in which he announced his decision to remove HPKP from the Security Headers tool[3], and Scott himself is cited in this post by Chris Palmer.

Note that I'm not suggesting that Scott himself is responsible for this, or that anything he's said has been in bad faith. My point is simply that my talk was one part of the chain of events that started the ball rolling on this conversation.

I'm also not saying that RansomPKP / hostile pinning is the most important reason that people have for not liking HPKP — in this case Chris lists it as only one of three motivations. Clearly, the usability issues with its implementation have been a much bigger problem, which is what I would like to see serious attempts to improve on before throwing out all the work that's been done up until now.


Edit: re: sleevi edit: the tweet you linked doesn't say anything about regretting the concept of TLS key pinning entirely, just that it's done as a header. I'll admit it's ambiguous, but that sounds to me like he would rather have kept the feature but changed the API. I would be all for deprecating the HPKP header if it were replaced with a better / more usable interface to the feature.


1: https://scotthelme.co.uk/using-security-features-to-do-bad-t...

2: https://scotthelme.co.uk/im-giving-up-on-hpkp

3: https://securityheaders.io

I will be VERY upfront that I DO blame Scott Helme for this. I mentioned this in 2/2016 as well[0]

It's fair. There was a lot of buzz about Ransom HPKP. The whole thing was doomed from the start, and I was pretty upset every time I saw anyone publicly push for it.

0: https://twitter.com/ejcx_/status/698227927390023681

I worked with Scott on the HPKP components of that initial blog post (I'm sure he can confirm) and I won't blame him at all for what took place in hindsight. Google actually denied a bounty on disclosures surrounding RansomPKP, so there was nothing to suggest this was the path they would eventually follow.

Again, I think this decision has nothing to do with Ransom HPKP and everything to do with how it's not a usable standard, and people who try to use it correctly fail.

Huh, well that's an interesting point. I don't know exactly how popular Security Headers is, but given that it's at least partially targeted at novices, I can see how this backlash might have been avoided if HPKP had been omitted there from the start.

Well in fairness, our talk probably wouldn't have been accepted by either Black Hat or DEF CON if we omitted RansomPKP. Or any of the HPKP suicide stuff.

I'm curious about that followup message (is that you?) about how Cyph (seems to be some sort of encrypted messaging thing) relies on "HPKP Suicide" for... in-browser code-signing? I didn't find any resources laying out exactly how this works.

The first comment on that thread is from @eganist (not me, but my colleague). This is how Cyph's HPKP-based code signing works: https://cyph.team/websigndoc

tl;dr: the same idea that we showed how to apply maliciously via RansomPKP is also applied for defensive purposes, in this case to persistently pin a client-side page with logic that validates and runs signed packages.

Here's a potentially easier to read (doesn't require JavaScript) document explaining WebSign:


It's a really smart idea, although it did have some odd edge cases, and required you to trust that they really were throwing away the keys as promised.

There is some talk in the W3C of extending the SRI standard to let a website declare that all (or just certain) included resources have been signed by an (offline) PGP key:


so we might one day reach a point where running a webapp at least has the small security guarantee that a TOFU policy gives you. If this could be combined with versioned releases of webapps, and the signature appearing in something like a Binary Transparency log, then the security guarantee could actually be quite meaningful:


Thanks Dane! I'd actually just published that copy on cyph.com to edit into the above comment, and missed the edit cutoff by a couple minutes.

The rest of that is very interesting! I wasn't aware of that PGP signing discussion, but it would be very exciting if it panned out.

From the post it would seem the very low usage numbers had more to do with it than anything you did.

"Scott Helme found in August 2016 that very few of the Alex [sic] Top 1 Million sites were using HPKP (375) or the Report-Only (RO) variant (76):"

See my reply above to @ejcx.

Removing dynamic pins was inevitable given the associated risk for all sites. Some ideas to fix those exist[1], but I'm not sure it's worth the effort in a fully CT-enforced web. That's probably time better spent somewhere else (such as improving CT itself and the gossip mechanism.)

I'm not convinced that static pins need to go too. There are something like 10 sites on that list currently, and all of them are valuable targets and should have the resources to ensure their pins don't fail. Even increasing that number to something like 100 should be manageable for browser vendors and would cover a large percentage of all page views (rather than just guarantee discovery after the fact).

[1]: https://blog.qualys.com/ssllabs/2017/09/05/fixing-hpkp-with-...

This is especially funny to me, as our PCI DSS network scan just started flagging the absence of an HPKP header as something that's necessary to remediate. I've had to waste half a day on the phone and then write a Risk Mitigation Plan explaining how we mitigate the risk of an MITM attack in case our CA gets breached...

It is deeply fucked up if a scanning checklist demands PKP, since most sites --- including most commerce sites --- shouldn't pin.

Well, try explaining that to the HSTS and HPKP folks. They already have answers littered across Stack Overflow and HN advising people to enable it for anything and everything, with exactly zero consideration for the potential to backfire.

It's only a matter of time before an intern or an auditor has it deployed on majorcompany.com, resulting in a disaster. Symptoms include none of the clients ever being able to access the site again.

Last year I was contracting at an NGO where somebody decided to add our domain to an HSTS preload list. This worked great for the public facing site, which was already served over HTTPS, but we also had a load of internal apps on the same domain.

They were only accessible from our network (ethernet or VPN), so we hadn’t bothered with HTTPS before. Oh and we only found out about the issue when Chrome updated and everything broke - it was a fun few days!

HSTS should never result in "none of the clients ever being able to access the site again".

HPKP can, if you pin a key and then lose the key you pinned.

I know. But you shouldn't lump the dangerousness of HPKP on HSTS.

Also with HPKP you shouldn't just pin one key. You should pin several keys you own as well as several root and intermediate keys. But I agree it's very difficult to do right and there's still a risk of it failing.
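A deployment along those lines would look something like the header below (shown wrapped for readability; it's one header line). The hashes are placeholders for base64 SHA-256 digests of each key's SubjectPublicKeyInfo, and the spec requires at least one backup pin that isn't in the current chain:

```http
Public-Key-Pins: pin-sha256="<current-key-hash>";
                 pin-sha256="<offline-backup-key-hash>";
                 pin-sha256="<intermediate-CA-key-hash>";
                 max-age=5184000; includeSubDomains
```

Every one of those directives is a way to lock yourself out if you get it wrong, which is the whole problem.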

does HSTS have potential to backfire?

Unless you're a static content site that is using TLS just to be an Internet Good Citizen to prevent passive traffic analysis, you absolutely should have HSTS enabled; it's not really a judgement call. Without HSTS, you almost might as well not do TLS at all; HSTS prevents a serious, effective, and easy attack.

By way of comparison, it has never been a good idea to default to HPKP. Privacy-sensitive sites should be pinned, and if you can't safely manage pinning, that's a pretty good sign that you're not mature enough to engineer privacy for the site either, so I don't have that much sympathy for the argument that it's a foot cannon (this is, of course, very easy for me to say). But if you're just selling coffee beans or scheduling laundry pickups, PKP has always been a very bad idea for you.
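For comparison, a typical HSTS deployment is a single header with essentially one failure mode (committing to HTTPS before you're ready):

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

The `preload` token is what gets a domain baked into browsers' built-in lists, which is effectively irreversible on the timescale of a release cycle, as the NGO story upthread shows.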

Why does being a static content site mean you should not enable HSTS? If anything, HSTS is easiest to roll out for static content sites.

One primary example would be sites like software repository mirrors, where the majority of content is already signed, and serving it over HTTPS provides negligible benefit to your users (other than the slight confidentiality increase, in that an adversary wouldn't know exactly what you downloaded). Contrast that with a site serving active content like JavaScript and CSS, which can have disastrous results for users if an adversary tampers with it.

The latter example is where HSTS becomes an invaluable tool, since then the only way those resources can be delivered is through a trusted channel, verified by the PKI. The same value is not there for a software mirror, because the other security safeguards already in place remove the need to trust the delivery channel. That said, most still serve their content over HTTPS as well.

The browser will completely block access to the site on any certificate issue. In practical terms, HSTS removes the "ignore certificate error" button that everyone got used to clicking.

It will backfire when the certificate expires, when some clients don't recognize the CA chain, or when domains or subdomains don't match the certificate.

Are you using Qualys? It seems they have been having major issues with header checks starting around Oct 1st. Tenable, however, still passes sites without HPKP. Getting our vendors, like Fortinet, to implement all the security headers Qualys now demands will be impossible, so we are checking other ASVs to see how they respond to our sites. Just for jollies, you should go check pci.qualys.com's security headers.


Glad there are more reports of this showing up.

... isn't the risk that HPKP mitigates the risk of a MITM attack in case some other CA gets breached?

HPKP also protects against your own CA if you pin the actual keys your servers use, but I bet it's quite annoying to come up with a good plan for how you are not going to lose/delete all the private keys you have pinned...

Right. I'm curious which approach the auditors wanted - either one would be a weird thing to mandate!

In theory, pinning your servers' private keys is actually kind of reasonable, if you generate like two or three sets of backup private keys and put them in off-site storage. And I've long been an advocate of buying at least one backup certificate from another CA just in case your current one gets distrusted.

(And it makes sense from a technical perspective why HPKP supports both of these approaches, but the ambiguity probably didn't help it from a policy perspective).
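Computing the pin for a backup key doesn't even require a certificate; the pin value is just the base64 SHA-256 of the key's SubjectPublicKeyInfo. Here's a stdlib-only sketch, roughly equivalent to the usual `openssl rsa -in key.pem -pubout -outform der | openssl dgst -sha256 -binary | openssl enc -base64` pipeline (the demo PEM body below is fake, not a real key):

```python
import base64
import hashlib

def spki_pin_from_public_key_pem(pem: str) -> str:
    """HPKP pin-sha256 value for a PEM-encoded public key.

    The base64 body of a "PUBLIC KEY" PEM block is already the DER
    SubjectPublicKeyInfo that HPKP hashes, so no ASN.1 parsing is needed.
    """
    body = "".join(line for line in pem.strip().splitlines()
                   if "-----" not in line)
    der = base64.b64decode(body)
    return base64.b64encode(hashlib.sha256(der).digest()).decode("ascii")

# Illustrative only: a fake key body, not a real SPKI.
fake_pem = ("-----BEGIN PUBLIC KEY-----\n"
            + base64.b64encode(b"not a real key").decode() + "\n"
            "-----END PUBLIC KEY-----\n")
pin = spki_pin_from_public_key_pem(fake_pem)
assert len(pin) == 44 and pin.endswith("=")  # base64 of a 32-byte digest
```

You'd run this over each backup key before it goes into off-site storage, so the pins can be published long before the keys are ever used.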

Interesting HN-discussion about the future of HPKP from a little over a year ago [1]. Reading it, I think this move was predictable.

The article suggests the Expect-CT header as a safer alternative. Scott Helme has a short but informative write-up on how this works[2].

[1] https://news.ycombinator.com/item?id=12434585

[2] https://scotthelme.co.uk/a-new-security-header-expect-ct/
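For reference, the Expect-CT header described in [2] looks roughly like this (`enforce` makes the browser hard-fail rather than just report; the report-uri host is a placeholder):

```http
Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-reports"
```

Unlike HPKP, failing here just means falling back to the normal CA trust decision, so there's no self-inflicted lockout.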

Crikeys. By that point the damage is done. How about a read-CAA-via-DNSSEC-and-confirm-that-it's-the-right-CA header?

(The certificate could embed a DNSSEC assertion about the CAA header or lack thereof, for that matter.)
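For context, CAA records already exist (RFC 6844), though today they're only checked by CAs at issuance time, not by browsers, which is why a browser-side check like this would need DNSSEC to be trustworthy. Example records (hostnames and mailbox are placeholders):

```dns
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```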

Good riddance. It had low adoption and pales in comparison to what will be achieved with Certificate Transparency.

DNS redirect attacks (common/easy due to social engr) combined with malicious HPKP could result in some nasty ransoming ("many of your users can't access your site unless you pay me for the key"). I've heard many surprised it hasn't happened yet. Particularly considering the lack of recourse options for victims.

What would be the fix? I'm asking sincerely as someone who is only surface level familiar with HPKP, and have never implemented it (but my boss did...)

If someone ransomed you, would you need to pay them for the key, and then use the key on your site from then on? So, you could pay the ransom and they'd be able to decrypt all of your traffic from then on?

(I'm sure I just don't know how HPKP works, like there's some solution where the ransomer's key/the compromised key can be used to sign another key, and then HPKP pinners that cached the bogus key can now accept it as the new key... but then couldn't you use a compromised key to do the same attack again in the future?)

You would indeed have to use the attacker's key until the pins of all visitors who visited the site during the attack expire. I believe the maximum HPKP max-age in Chrome is 2 months, it might be longer in Firefox.

There is no mechanism that would allow you to use the attacker's key to sign another key: HPKP requires that the pinned key is present in the actual trust chain selected for your connection.

You could switch to a non-compromised key a bit earlier by setting max-age to zero and waiting for a certain percentage of the affected visitors to return, if you're willing to accept that the remaining affected visitors cannot access the site for a few months.

> What would be the fix? I'm asking sincerely as someone who is only surface level familiar with HPKP, and have never implemented it (but my boss did...)

The fix would be to embed the expected key fingerprint in DNS and have the browser either issue a second request for it, or have the DNS server return it as additional data, just like when requesting a CNAME record and the A record comes back too. Then, to prevent DNS MITM attacks, have the whole zone and the domain's zonefile signed.

On the other hand, given that DNS runs over UDP, this opens up the possibility of an MITM attacker simply suppressing the second request for the HTTPS key, or corp firewalls/MITM boxes/crappy provider DNS servers simply filtering out the responses...
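A sketch of what such a record could look like, borrowing the existing TLSA syntax (RFC 6698); the digest is a placeholder:

```dns
; usage 3 = pin the server's own certificate/key (DANE-EE),
; selector 1 = match on the SubjectPublicKeyInfo,
; matching type 1 = SHA-256 of it
_443._tcp.example.com. IN TLSA 3 1 1 <sha256-hex-of-server-spki>
```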

As the other commenter said, this sounds a lot like DANE.

As such, it suffers from the same issue: it relies on DNSSEC. If you look at the trust chain for DNSSEC on the .com domain, you are trusting the US government and your registrar. The US government is the bigger issue here, as the NSA is also a part of them.

You might argue that this is 'good enough', but considering the momentum these kinds of systems have, a wrong decision here could really enable NSA spying for a long time. Besides, CT logs seem like a much better solution than key pinning anyway.

This has always seemed like a really silly argument. You're already trusting the US government, VeriSign, and a multitude of other organizations that control CAs, so DANE doesn't make this worse.

It's kind of a moot point, though, since DNSSEC is garbage for other reasons. Certificate transparency logs are the current best effort in this area.

The point is not that DANE doesn't make things worse. The point is that it is not a solution. Originally, DANE was meant as a method to restrict rogue CAs from issuing certificates. The fact that state actors can still do that after DANE makes DANE a bad solution.

The US Government is and should be the root of trust for US domains (certainly for .us, and de facto that's become the use of .com too). Since the US government can compel any US entity to follow secret orders, if you don't trust the US government you already couldn't use any US sites. DANE improves things compared to not, since it means you don't have to trust the US government if you're not using US sites, you don't have to trust the Chinese government if you're not using Chinese sites, you don't have to trust the government of Kazakhstan if you're not using Kazakh sites...

I am a Dutch citizen and have a .nl domain. Yet that does not mean I am OK with the Dutch government issuing invalid certificates for my website.

True, it's an improvement that only the Dutch government can do this, and not the Hong Kong post office. On the other hand, it is a major downside that we are encoding the possibility of government dragnet surveillance.

In the end, certificate transparency logs will let me notice whenever anyone issues a certificate for my website.

> it is a major downside that we are encoding the possibility of government dragnet surveillance.

Quite the opposite; DANE makes it possible to have a TLD that opts out of giving national governments access to it. Most existing TLDs are controlled by governments, but that doesn't have to be how it is.

I don't understand this US government nonsense and why it continues to persist. Will someone please explain to me like I'm 5 why you are trusting the US government if you deploy DNSSEC? Where in the trust chain of the .com zone does the USG come into play?

The .com domain is controlled by the USG. If they want to serve a false DNS response, they can access the key used to sign the .com zone. That key can be used to sign a new key for the relevant domain, which can then be used to sign the response.

That the .com domain is under USG control follows from: "The domain was originally administered by the United States Department of Defense, but is today operated by Verisign, and remains under ultimate jurisdiction of U.S. law."[1] That said, since control was transferred away from the DoD, it is much less direct.

A similar argument still holds for country level TLDs. Any government that administers its own TLD can use that with DNSSEC to forge DNS responses.

[1] https://en.wikipedia.org/wiki/.com

Verisign is just a company located in the USA. Does it then follow that any company located in the USA is 'controlled by the USG'? That seems like a bit of a stretch, to say the least. There is nothing else there. Your reference to history is irrelevant today.

Also, I really don't understand what this has to do with DNSSEC? Without DNSSEC anyone can forge responses. With DNSSEC you limit that to the zone administrator.

> Without DNSSEC anyone can forge responses. With DNSSEC you limit that to the zone administrator.

Whilst that is an improvement, it is still bad. Specifically, I'd say it is not good enough to build a secure system on. There is an argument to be made that it is nice for defense in depth, but it should not be stand-alone security.

There are other practical concerns regarding DNSSEC at the moment with failure handling.

> Verisign is just a company located in the USA. Does it then follow that any company located in the USA is 'controlled by the USG'?

Verisign doesn't own .com -- the US government does. Verisign operates .com under contract with the US Department of Commerce.

Sounds a lot like DANE/TLSA.

Oh cool, haven't read about this one before. Wonder why it didn't get picked up despite being a standard :(

The basic reason is that people really don't like DNSSEC. tptacek (https://news.ycombinator.com/user?id=tptacek) has plenty of comments really supporting that argument.

I won't say he's wrong, but he presents a very one-sided view. It's worth reading the responses to criticisms like his:


DANE is used a bit for SMTP (where it also forms a clear "only talk to our mail servers over TLS" signal, which as far as I know didn't exist before)

As a replacement for HPKP it doesn't solve the big attack scenario of someone taking over your DNS and obtaining a certificate to impersonate your site: if they can do that, they can also put false certificate details in DNS.

It's also tied to DNSSEC, which isn't universally deployed for domains, would (if I understand correctly) require putting a full DNSSEC resolver in the browser to protect against local MITM (which isn't practical in many situations), and is often criticized.

> Wonder why it didn't get picked up despite being a standard :(

It doesn't work on the real internet, only on some fantasy internet that merely exists in the head of DNSSEC advocates.

Basically you can only do DNSSEC if you can receive arbitrary DNS records. That's not the case for a non-negligible portion of Internet accesses, where those queries get filtered. Adam Langley has pointed this out years ago: https://www.imperialviolet.org/2015/01/17/notdane.html

So if you want to deploy DANE you can choose between falling back to insecure (so it's entirely pointless) or breaking the Internet for a large fraction of users. Neither is a very good plan.

The same could have been said about HTTPS 10 or even 5 years ago - a significant portion of internet users couldn't access it for one reason or another. And yet we're managing to transition to an HTTPS-only web. No reason DANE couldn't be progressively deployed the same way.

> DNS redirect attacks (common/easy due to social engr) combined with malicious HPKP could result in some nasty ransoming

How? Someone can hijack your resolver but they still need a valid certificate for the name before they can install the pin?

Once an attacker controls a domain it is trivial to get a DV certificate signed with their own key.

Hijacking an authoritative DNS server is a different kettle of fish to hijacking a resolver used by a client. You need to either infiltrate a CA or do the former for a DNS attack to work and get you a DV certificate.

Once a site is breached, the attacker is usually in control of the domain and can order any legitimate certificate for it.

Seems reasonable to remove HPKP.

In my experience the use case that HPKP addresses the best is winning arguments with people who like ssh and think WebPKI and browsers are wrong. HPKP can be used to establish TOFU trust in the leaf key (but you need to pin your future key, too).

Winning that argument isn't worth the risks of HPKP, though.

As someone who's completely unfamiliar with the Chrome ecosystem I wonder what Blink has anything to do with this (why is this posted in blink-dev@googlegroups.com)? Isn't Blink just the rendering engine for Chromium that does DOM/CSS stuff?

They used to coordinate security stuff on mozilla.dev.security.policy but they switched to blink-dev, maybe to indicate this is Chrome's/Google's position only. The first big use of blink-dev for security that I remember was the Symantec thing.

Kazakhstan and probably Russia as well require all TLS traffic to be opened by MITM devices. https://m.habrahabr.ru/post/303736/ https://news.ycombinator.com/item?id=10663843 https://www.google.co.il/amp/s/www.rbth.com/document/1033000...

I wonder if other governments are enacting similar rules in one form or another...

That sounds hard to implement.

If it's done by issuing a new certificate for a different key then won't it trigger red flags when certificate transparency becomes mandatory?

Resulting in the CA getting the kick.

IIRC the plan was to force users to manually install a root certificate (controlled by the government) on their devices. Local roots are exempt from any CT enforcement. Naturally, you can just not install the root certificate, but if all traffic is intercepted, I'd expect most users to do so to get around the warnings.

That probably only works if the root cert is mandated in all OEM installs...

And still you probably won't find any Linux distros with this support.

Note: intercepting all traffic sounds expensive and very dangerous, i.e. the risk of leaking grows as you scale. It's probably better to only use it for select users.

Curious, why future versions of chrome would not force CT for official TLDs?

> Curious, why future versions of chrome would not force CT for official TLDs?

This would cause all corporate MitM proxies to fail. Certificates generated by these devices cannot be logged to the CT log servers accepted by browsers (they only accept certificates chaining back to a trusted root). Local roots were exempt from HPKP pins as well, so this is just keeping with existing policy.

why not provide an advanced feature that alerts you any time a cert changes, similar to what we get with SSH?

at least then security conscious users could make decisions for themselves.

> why not provide an advanced feature that alerts you any time a cert changes

Because certificates change ... all ... the ... time. Again ... and ... again ... and ... again.

Years ago I tried using a Firefox addon called Certificate Patrol. I spent half my time approving changes. Here's a Stack Exchange question on exactly that topic. It's a few years old; I don't know if things have gotten better:


> Because certificates change ... all ... the ... time.

Not OP, but I do see potential there; I've thought about it before. Try looking at it from a solution perspective rather than "why don't we already" and "what would the issues be": certs change, yeah, but usually because they (almost) expired. We should check when Let's Encrypt renews by default (is that 14 days before expiry?) and what common practice is, and go from there in triggering a warning.

And if there is some uncommon reason to roll over (e.g. suspected compromise), a header could be set either in advance or one could be set that signs the new fingerprint with the old key. The new one shouldn't be pinned right away since an attacker might have misused a compromised key, and a warning symbol could be displayed similar to the mixed-content warning. If someone is suspicious and it can't be delayed, they can call their bank (or whatever it is) and they'd know about it and be able to confirm things out of band.

I'm just conceptualizing but I don't see anything that's not easily solved. I think it could be a good addition.
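To make the bookkeeping concrete, here's a rough trust-on-first-use sketch (all names are made up for illustration; a real implementation would persist the store across sessions and surface "CHANGED" as the warning described above):

```python
import hashlib

def check_cert(store: dict, host: str, cert_der: bytes) -> str:
    """Trust-on-first-use check over raw certificate bytes,
    in the spirit of ssh's known_hosts.

    `store` maps host -> hex fingerprint of the last-seen cert.
    Returns "new", "unchanged", or "CHANGED".
    """
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    previous = store.get(host)
    store[host] = fingerprint
    if previous is None:
        return "new"        # first visit: nothing to compare against
    if previous == fingerprint:
        return "unchanged"  # same cert as last time
    return "CHANGED"        # rolled over -- or a man in the middle

# Usage: three visits to a hypothetical bank.example
seen: dict = {}
assert check_cert(seen, "bank.example", b"cert-v1") == "new"
assert check_cert(seen, "bank.example", b"cert-v1") == "unchanged"
assert check_cert(seen, "bank.example", b"cert-v2") == "CHANGED"
```

The hard part isn't this logic, of course; it's deciding which "CHANGED" events are worth bothering the user about, which is exactly where Certificate Patrol fell down.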

The big sites have multiple certificates for a single domain, and you will get one randomly depending on which server you happen to hit.

> We should check when Let'sEncrypt renews by default (is that 14 days before expiry?)

30 days in Certbot defaults, and I think that's an official Let's Encrypt recommendation for authors of other clients.

I agree that the utility of this feature would be limited to a small percent of users for a small percent of cases.

The SSH example comes to mind because that is a system that does something similar.

In my experience every single SSH error on a key change has been a false positive but everyone accepts the high false positive rate because it is worth it to detect that one case where you are actually compromised.

But you don't detect that one case where you are actually compromised. You dismiss it like you do all the false positives. At best, when you get pwned you think back to having dismissed the key change warning and know what happened, but how does that actually help you?

I actually caught a semi-real one once. My new employer was MITMing all ssh traffic.

This does not follow at all. Any time a cert has changed, I either know the cause or verify it before using the server.

>Years ago I tried using a Firefox addon called Certificate Patrol.

yeah I too gave up on using that addon for the same reason

I imagine things have only gotten "worse" with Let's Encrypt issuing 90-day certificates.

With HPKP you pin a public key (the leaf certificate's own key, or that of the sub-CA or root CA above it), not the certificate itself. With the default settings, Let's Encrypt reuses the key pair for a new certificate.

Why not have it approve an ultimate CA instead? Or at least one further up the chain.

The user has no realistic way to determine whether a certificate change is legitimate or due to a man-in-the-middle, even if they are technical, unless they are told out-of-band. Normal users are likely to just click through any warnings, because they don't care about security as much as getting their work done, particularly since a warning is almost certainly a false alarm.

I agree. I wouldn't see this as being for average anonymous users using some mass market service.

It would only be useful in cases where people are willing to pay a price (in convenience) for security, understand the tech well enough to use it, and have some relationship with the domain owner that allows out-of-band verification of cert changes.

You can always monitor for any time a cert is issued on your own domains. Couple that with Expect-CT and you have something as good as HPKP, without the downside.

(Disclaimer: I run an open source monitoring project)

Can someone explain all this like I'm 5? I've always wondered what it was all about.

On the internet, when you send and receive data, your data gets handled by a lot of different people. In the old days, anybody who handled your data could tamper with it or impersonate anybody else. Cryptography to the rescue.

Suppose "Alice" and "Bob" want to send secret messages to each other, without allowing "Eve" the eavesdropper to read them, even if Eve can intercept the messages.

Traditional cryptography is "symmetric," where both Alice and Bob must share a secret before they can communicate. Symmetric cryptography won't suffice over the internet, because if Alice and Bob had a secure way of sharing secrets, they wouldn't need internet cryptography in the first place.

So the internet relies on public-key cryptography, where Alice and Bob each have a pair of keys (a "key pair"), one "public" key that everyone can see, even Eve, and one "private" key that has to be kept secret. Alice can encrypt a message using Bob's public key that can only be decrypted using Bob's private key.

At first, it might seem like public-key crypto solves the problem completely, but it creates a new problem: how will Alice get Bob's public key? If she asks Bob for his public key over an unencrypted public channel, Eve can intercept it and offer her own public key, acting as a "man in the middle" (MITM).

Luckily, public-key cryptography has one more trick up its sleeve. If you "encrypt" a message using a private key, it can be "decrypted" using the public key. Only Bob (the owner of Bob's private key) can encrypt messages that can be decrypted with Bob's public key, so anything Bob encrypts that way is effectively "signed" by Bob.
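The "signing is encrypting with the private key" symmetry can be seen in a toy RSA example (textbook numbers, wildly insecure, just to make the two directions concrete):

```python
# Textbook RSA with tiny primes -- for intuition only, never for real use.
p, q = 61, 53
n = p * q          # 3233: the modulus, part of Bob's public key
e = 17             # public exponent (Bob's public key is (n, e))
d = 2753           # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

message = 65

# Alice encrypts with Bob's PUBLIC key; only Bob's private key undoes it.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# Signing is the mirror image: Bob "encrypts" with his PRIVATE key...
signature = pow(message, d, n)
# ...and anyone holding the public key can check it really was Bob.
assert pow(signature, e, n) == message
```

Real systems add padding, hashing, and 2048-bit-plus keys on top, but the trapdoor symmetry is the same.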

If Alice and Bob trust a third party, Charlie, Charlie can sign a message saying: "This is Bob's public key: 12345" and another message saying "This is Alice's public key: 23456". Eve can't impersonate Charlie without his private key. We call Charlie a "certificate authority." (CA)

When you visit an HTTPS website, the site presents a certificate signed by a CA. Your browser trusts a ton of CAs all over the world, many of them run by governments that you may not really want to trust; any of them can use their private keys to impersonate any site on the internet. This is a hard social problem as much as a technical problem.

High-value websites like Gmail, Facebook, or banks may want to say "Here's our certificate, but don't just trust any certificate authority about that. You should only trust Charlie's signature." That's called "pinning" the public key to a certificate authority.

It's a nice idea, but how will Gmail convey that message to its users? If Eve is a hostile government who intercepts messages and owns a trusted CA, they can impersonate Gmail, saying "Oh, you don't need to trust Charlie exclusively. You can trust any CA, even me."

Chrome comes with a static, hard-coded list of pinned keys for high-value sites, but that can't scale. So they had the idea of allowing anybody on the internet to pin their keys: "dynamic" pinned keys, or HTTP Public Key Pinning (HPKP).
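Mechanically, a dynamic pin was just an HTTP response header: a base64-encoded SHA-256 hash of the certificate's DER-encoded public key info, plus a lifetime. A minimal sketch of building one (the key bytes here are placeholders, not real DER):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """pin-sha256 value: base64 of the SHA-256 hash of the DER-encoded
    SubjectPublicKeyInfo (the public key, not the whole certificate)."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder bytes standing in for real DER-encoded SPKI structures.
current_spki = b"stand-in for the current key's SPKI"
backup_spki = b"stand-in for an offline backup key's SPKI"

# The spec requires at least one backup pin, precisely because losing the
# only pinned key would lock browsers out of the site until max-age expires.
header = (
    'Public-Key-Pins: '
    f'pin-sha256="{spki_pin(current_spki)}"; '
    f'pin-sha256="{spki_pin(backup_spki)}"; '
    'max-age=5184000; includeSubDomains'
)
print(header)
```

Browsers that saw this header would refuse any future cert chain for the site that didn't contain one of the pinned keys, for the full max-age (60 days here).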

The problem is, if you pin your public key and you need to change it for some reason, or if you need to switch certificate authorities for any reason, you're in big trouble. People have used HPKP and brought their site down, unable to bring it back up again, because browsers don't trust their new valid key.

As a result, very few sites used HPKP, so the Chrome team is planning to remove it.

Surprisingly to me, they even plan to remove the static list of pinned keys, in favor of "Certificate Transparency" where it's publicly obvious which CAs are signing which certificates. Rogue CAs would then have to reveal that they've gone rogue, at which point browsers could revoke their automatic trust in them.

That's an amazingly AWESOME answer, thank you. So... what's an example of a CA that the highest-value targets on the internet (like Google, Facebook, Amazon, and various banks) trust? Is there one very trustworthy company that handles most of the big companies?

There's a list of pinned CAs at the top of the HSTS preload list [1], which gives you an idea of who might be trusted.

Cert pinning is pretty nasty if you get it wrong. If you don't pin, there's a large number of CAs in most clients' default trust stores. If you do pin, and the CA you pinned turns out to be bad, it didn't help. If you pin, but the CA stops issuing from the intermediate or the root that you pinned, you can't get a new cert (hope you had other options); note that CAs don't give much guidance about what to pin. If you pinned a CA that gets delisted, that's no good either. If you pinned two different CAs (a smart choice), but they merge, you no longer have a backup. So, you should pin a public key that you haven't gotten a cert with yet, and keep it safe, but also readily available for emergencies. But you only get one emergency -- hope your next emergency comes after you've had time to figure out new pins and get them bundled everywhere; and in the meantime you have one key for everything, which isn't great.

(Based only on the name) Expect-CT doesn't provide nearly as much protection: any CA cert will work, but only if it was publicly recorded. If you monitor for certs issued on your domains, at least you know to raise a fuss if a CA you didn't authorize issues on your domains. That's probably enough to keep CAs in line, unless Let's Encrypt drives the net present value of a well-distributed CA below the value of illicit certificates.

[1] https://chromium.googlesource.com/chromium/src/net/+/master/...

Maybe this is a dumb question... but do larger companies like Google attempt to set up their own CA so they can be sure of the longevity and security, and not rely on a 3rd party?

Google is the only one I'm aware of. Microsoft has an intermediate CA, but it's not clear if that's actually independent of the CA that signed it.

It's likely different if you only serve requests for clients you distribute, and can bundle a CA cert. But if you're serving browsers, you need your CA in the default trust stores, which means passing audits -- that takes time and money and requires a fairly rigorous setup. Even then, you still need to get your CA cross-signed by an existing CA until the root is widely distributed; if you support mobile browsers, it's a long wait until you're really distributed. I don't know how much a CA charges to cross-sign, but I would guess it's very expensive; and using the cross-signed cert means sending an extra cert during the TLS handshake. There's a TLS extension for clients to indicate which CAs they support, but it's not really used, and I'm not sure it's very sensibly designed -- in any case there's no good way to detect clients that don't know your CA and serve them a different cert.

Not a dumb question at all, and the answer is yes: they do set up their own CAs for exactly the reason you mentioned.

Google's is called Google Trust Services:


(Note, Google was already running a subordinate CA which was issued by a third party, which is why you might have seen them issue their 'own' certificates previously, but those were actually 'subordinate' to the Root CA that issued them to Google initially).

As part of the process of setting up a new trusted Root CA, it's necessary to get your trusted Root certificates into all products that may need to use your services. This is a trust system and has to start somewhere, so there is a list of trusted Root CAs and certificates pre-installed in every device. As you can imagine, this can take time. Sometimes you can cross-sign your Root certificates using another pre-existing trusted CA (I think this is what Let's Encrypt did?) so that all products that trust that third party will now trust your Root CA (well, its certificate).

That could potentially still leave you/devices that don't have your own Root CA's specific Root certificate open to tampering if the CA you got to cross-sign your certificates went rogue.

In mitigation of that last point above you could just buy out a pre-existing CA so they couldn't go rogue and make them part of your new Root CA collection, which is what Google/Alphabet did:


Even so, as you can see from that post, the process can still be lengthy to reach all products, particularly those that might not be updated with new trusted Root CAs (embedded devices etc). So Google Trust Services still plans (and has done by now, I think?) to cross-sign its certificates with two other third parties it trusts never to go rogue, allowing products which already trust those third parties to trust Google's new Root CA, in order to reach some of those products that don't (and perhaps never will) have its newly acquired Root CAs installed.

It's a question of risk management and as you mention, the only way to fully protect yourself is to get everyone to trust you (your CA) as a Root CA, the rest just leaves you open to meddling.

You can see overall statistics for CAs at [0] or [1]. Google recently created its own CA for its properties [2].

[0]: https://trends.builtwith.com/ssl/root-authority

[1]: https://en.wikipedia.org/wiki/Certificate_authority#Provider...

[2]: https://pki.google.com/

An attacker (e.g. a criminal or foreign power) can pretend to be a website you know. Certificate pinning is a way for a website to alert you to a hostile change of certificate. HPKP cert pinning has some issues that create other problems, so Chrome is removing it and replacing it with something better called Expect-CT.
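For context, Expect-CT is itself just a response header a site can send; a sketch of its syntax, with a hypothetical report-uri:

```
Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-reports"
```

With `enforce`, browsers refuse certs for the site that aren't logged in Certificate Transparency; without it, violations are only reported to the report-uri.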

Is there an alternative to prevent people from doing MITM attacks on mobile apps? With HPKP running, intercepting an app's traffic with Charles requires a rooted device.

For mobile apps it's easier than for websites, because native apps have more control over what's going on than a website does, and can read the details of HTTP connections (so in the worst case, you can roll your own pinning).

For Android, there are built-in facilities for this in modern versions: https://developer.android.com/training/articles/security-con...
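The Android facility is a declarative network security config bundled with the app; a sketch of a pin-set per that documentation (the domain, expiration date, and pin values are placeholders):

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <pin-set expiration="2018-06-01">
            <!-- base64 SHA-256 of the SPKI; placeholder value -->
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
            <!-- backup pin for key rotation; placeholder value -->
            <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```

The app references this file from its manifest via `android:networkSecurityConfig`; because the app ships the pins itself, there's no trust-on-first-use problem like HPKP's.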

For iOS: not an expert, but this article seems good https://dzone.com/articles/ssl-certificate-pinning-in-ios-ap...

It might get a bit trickier when WebViews are involved though because, at least on Android, SSL in WebView is subject to different security rules than the java-initiated connections (AFAIU the problems due to https://www.chromium.org/developers/androidwebview/webview-c... could not have been avoided on the app side, for example, as it was a bug in Chromium).

> It might get a bit trickier when WebViews are involved though because, at least on Android, SSL in WebView is subject to different security rules than the java-initiated connections

They've started changing that:


Without key pinning, are there other options for site operators to protect their users from MITM by traffic-monitoring appliances?

HPKP doesn't protect against MITM via locally installed trust anchors. It explicitly permits that in both Chromium and Firefox.

The same goes for 301s: everyone recommends them until they buy a domain, or do some restructuring with new people some years down the line, and the 301s totally mess up their site with no reset button.

Can you explain this more?

HTTP provides a couple of different response codes for when one URL should redirect to another URL. The most common are 301 Moved Permanently and 302 Found, aka moved temporarily.

When designing a website, if you find the need to redirect one URL to another, you have to choose which HTTP response code to use for the redirect. You might naively think 301 Moved Permanently is the right choice whenever you perceive the redirection to be non-temporary. Unfortunately, HTTP 301 responses are cached very aggressively by web browsers by default, so if you install a 301 redirect on your website and later revert it, clients who have already seen the now-reverted 301 will just keep following the cached redirect.

Basically, unless the URL you're redirecting is receiving way more than a thousand hits per second (i.e. unless you're running a large-scale website with lots of traffic), you should always use the temporary 302 redirect, even though you might perceive the redirection to be non-temporary.
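A minimal sketch of the safe choice, using Python's stdlib http.server (the target path is a placeholder):

```python
import http.server

class TemporaryRedirect(http.server.BaseHTTPRequestHandler):
    """Redirect every request with 302 rather than 301."""

    def do_GET(self):
        # 302 is safe to revert later; a 301 can be cached by browsers
        # indefinitely, so visitors would keep following the old redirect
        # even after you remove it server-side.
        self.send_response(302)
        self.send_header("Location", "/new-location")
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # suppress per-request logging
```

Run it with `http.server.HTTPServer(("", 8000), TemporaryRedirect).serve_forever()`; if you later change where `/new-location` points, clients will pick up the change on their next request instead of replaying a cached 301.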

Thanks for doing what would have been my job. Appreciated.

What about supporting client side certificates in HTTP2?

It's less a Chrome policy thing and more a case of "nobody has suggested a solution that the working group making the standard likes enough, so there isn't even a defined way to do it". So unless I missed a recent development, don't expect it to happen.

You can use client certificates in h2, you just can't do TLS renegotiation. So you have to request a client cert in the initial handshake; you can't ask later (eg for a specific URL).

Ah, Google. The tech gods giveth standards, and the tech gods taketh standards away.

This is what happens when you let a single vendor define web standards, and have a majority share of the browser market. They can take their toys and go home, and websites won't support what they don't support.
