My and @eganist's Black Hat / DEF CON talk "Abusing Bleeding Edge Web Standards for AppSec Glory" demoed an exploit concept that we called "RansomPKP", which was essentially a pattern of hostile pinning that could theoretically enable pivoting from a web server compromise to holding a site for ransom. Hostile pinning was by no means a new concept, and even has some discussion in the IETF spec itself, but we found this to be a fun novel application and used it to spur some minor security improvements to browsers' HPKP implementations.
However, this talk also led to concerns being vocalized about the viability of HPKP in general (https://news.ycombinator.com/item?id=12434585), ultimately leading to this. This was not our intention at all, and I don't see hostile pinning alone as a reason to give up on HPKP.
I would much rather see some discussion around improving the usability of HPKP before jumping straight to putting it on the chopping block — both from the site operator's end and the user's end. For example, off the top of my head: why not let users click past the HPKP error screen the way they can with ordinary TLS error screens?
HPKP was championed by a lot of security people, who in turn got a lot of people to foot-gun themselves (Scott Helme has even admitted that he's often one of the first people called when someone foot-guns themselves with HPKP).
There were only a handful of sites that actually needed HPKP-level security, and ransom-HPKP was the least of people's worries. HPKP was more dangerous to the people deploying it on purpose than mass header injection or similar attacks ever were :/
HPKP has been doomed from the beginning. Here is sleevi saying he regrets it in 2/2016:
The linked Qualys blog post / HN thread was shortly after our talk, which (along with our conversation with Scott Helme around that time) led to Scott's post "Using security features to do bad things". RansomPKP and related follow-ups are directly highlighted by Scott's recent post "I'm giving up on HPKP" in which he announced his decision to remove HPKP from the Security Headers tool, and Scott himself is cited in this post by Chris Palmer.
Note that I'm not suggesting that Scott himself is responsible for this, or that anything he's said has been in bad faith. My point is simply that my talk was one part of the chain of events that started the ball rolling on this conversation.
I'm also not saying that RansomPKP / hostile pinning is the most important reason that people have for not liking HPKP — in this case Chris lists it as only one of three motivations. Clearly, the usability issues with its implementation have been a much bigger problem, which is what I would like to see serious attempts to improve on before throwing out all the work that's been done up until now.
Edit: re: sleevi edit: the tweet you linked doesn't say anything about regretting the concept of TLS key pinning entirely, just that it's done as a header. I'll admit it's ambiguous, but that sounds to me like he would rather have kept the feature but changed the API. I would be all for deprecating the HPKP header if it were replaced with a better / more usable interface to the feature.
It's fair. There was a lot of buzz about Ransom HPKP. The whole thing was doomed from the start, and I was pretty upset every time I saw anyone publicly push for it.
tl;dr: the same idea that we showed how to apply maliciously via RansomPKP is also applied for defensive purposes, in this case to persistently pin a client-side page with logic that validates and runs signed packages.
It's a really smart idea, although it did have some odd edge cases, and required you to trust that they really were throwing away the keys as promised.
There is some talk in the W3C of extending the SRI standard to let a website declare that all (or just certain) included resources have been signed by an (offline) PGP key:
so we might one day reach a point where running a webapp at least has the small security guarantee that a TOFU policy gives you. If this could be combined with versioned releases of webapps, and the signature appearing in something like a Binary Transparency log, then the security guarantee could actually be quite meaningful:
The rest of that is very interesting! I wasn't aware of that PGP signing discussion, but it would be very exciting if it panned out.
"Scott Helme found in August 2016 that very few of the Alex [sic] Top 1 Million sites were using HPKP (375) or the Report-Only (RO) variant (76):"
I'm not convinced that static pins need to go too. There are something like 10 sites on that list currently, and all of them are valuable targets and should have the resources to ensure their pins don't fail. Even increasing that number to something like 100 should be manageable for browser vendors and would cover a large percentage of all page views (rather than just guarantee discovery after the fact).
It's only a matter of time before an intern or an auditor deploys it on majorcompany.com and causes a disaster. Symptoms include none of the clients ever being able to access the site again.
They were only accessible from our network (ethernet or VPN), so we hadn’t bothered with HTTPS before. Oh and we only found out about the issue when Chrome updated and everything broke - it was a fun few days!
Also with HPKP you shouldn't just pin one key. You should pin several keys you own as well as several root and intermediate keys. But I agree it's very difficult to do right and there's still a risk of it failing.
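To make the multiple-pin idea concrete, here's a minimal Python sketch of how an HPKP pin is derived (the base64 of the SHA-256 digest of a key's DER-encoded SubjectPublicKeyInfo) and what a header with backup pins looks like. The SPKI byte strings here are made-up placeholders, not real keys:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # An HPKP pin is the base64 encoding of the SHA-256 digest
    # of a public key's DER-encoded SubjectPublicKeyInfo.
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Hypothetical DER blobs standing in for real keys: the key in use,
# an offline backup key, and a trusted intermediate CA's key.
current_key = b"current-key-spki-der"
backup_key = b"offline-backup-key-spki-der"
intermediate = b"intermediate-ca-spki-der"

header = (
    "Public-Key-Pins: "
    + "; ".join(f'pin-sha256="{spki_pin(k)}"'
                for k in (current_key, backup_key, intermediate))
    + "; max-age=5184000"
)
print(header)
```

The spec itself requires at least one backup pin that is not in the current chain, which is exactly the "several keys you own" advice above.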
By way of comparison, it has never been a good idea to default to HPKP. Privacy-sensitive sites should be pinned, and if you can't safely manage pinning, that's a pretty good sign that you're not mature enough to engineer privacy for the site either, so I don't have that much sympathy for the argument that it's a foot cannon (this is, of course, very easy for me to say). But if you're just selling coffee beans or scheduling laundry pickups, PKP has always been a very bad idea for you.
The latter example is where HSTS becomes an invaluable tool, since now the only way those resources could be delivered is through a trusted channel, verified by the PKI. The same value is not there for a software mirror, because the other security safeguards already implemented remove the need to trust the delivery channel. That said, most still do serve their content over HTTPS as well.
It will backfire when the certificate expires, when some clients don't recognize the CA chain, when domains or subdomains don't match the certificate.
Glad there are more reports of this showing up.
In theory, pinning your servers' private keys is actually kind of reasonable, if you generate like two or three sets of backup private keys and put them in off-site storage. And I've long been an advocate of buying at least one backup certificate from another CA just in case your current one gets distrusted.
(And it makes sense from a technical perspective why HPKP supports both of these approaches, but the ambiguity probably didn't help it from a policy perspective).
The article suggests the Expect-CT header as a safer alternative. Scott Helme has a short but informative write-up on how this works.
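For reference, an Expect-CT policy is just another response header. A hedged sketch (the report endpoint is a made-up example URL):

```python
# Expect-CT directives are comma-separated. `enforce` tells browsers to
# refuse connections whose certificates lack valid SCTs, and report-uri
# (a hypothetical endpoint here) receives JSON violation reports.
report_endpoint = "https://example.com/ct-report"  # made-up URL
expect_ct = f'Expect-CT: max-age=86400, enforce, report-uri="{report_endpoint}"'
print(expect_ct)
```

Unlike HPKP, misconfiguring this can't lock clients out of your site: the worst case is that you require what CAs are already obligated to do (log certificates publicly).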
(The certificate could embed a DNSSEC assertion about the CAA header or lack thereof, for that matter.)
DNS redirect attacks (common and easy thanks to social engineering) combined with malicious HPKP could result in some nasty ransoming ("many of your users can't access your site unless you pay me for the key"). I've heard many express surprise that it hasn't happened yet, particularly considering the lack of recourse options for victims.
If someone ransomed you, would you need to pay them for the key, and then use the key on your site from then on? So, you could pay the ransom and they'd be able to decrypt all of your traffic from then on?
(I'm sure I just don't know how HPKP works, like there's some solution where the ransomer's key/the compromised key can be used to sign another key, and then HPKP pinners that cached the bogus key can now accept it as the new key... but then couldn't you use a compromised key to do the same attack again in the future?)
There is no mechanism that would allow you to use the attacker's key to sign another key: HPKP requires that the pinned key is present in the actual trust chain selected for your connection.
You could switch to a non-compromised key a bit earlier by setting max-age to zero and waiting for a certain percentage of the affected visitors to return, if you're willing to accept that the remaining affected visitors cannot access the site for a few months.
The fix would be to embed the expected key fingerprint in DNS and have the browser issue either a 2nd request for it or have the DNS server return it as additional data just like when requesting a CNAME record and it returns the A record too. Then, to prevent DNS MITM attacks, have the whole zone and the domain's zonefile signed.
On the other hand, given that DNS is UDP, this opens up the possibility of an MITM attacker simply suppressing the 2nd request for the HTTPS key, or corp firewalls/MITM boxes/crappy provider DNS servers simply filtering out the responses...
As such, it suffers from the same issue: it relies on DNSSEC. If you look at the trust chain for DNSSEC on the .com domain, you are trusting the US government and your registrar. The US government is the bigger issue here, since the NSA is part of it.
You might argue that this is 'good enough' but considering the momentum that these kind of systems have, a wrong decision here could really enable NSA spying for a long time. Besides, CT logs seem like a much better solution than key-pinning anyway.
It's kind of a moot point, though, since DNSSEC is garbage for other reasons. Certificate transparency logs are the current best effort in this area.
True, it's an improvement that only the Dutch government can do this, and not the Hong Kong post office. On the other hand, it is a major downside that we are encoding the possibility of government dragnet surveillance.
In the end, certificate transparency logs will let me notice whenever anyone issues a certificate for my website.
Quite the opposite; DANE makes it possible to have a TLD that opts out of giving national governments access to it. Most existing TLDs are controlled by governments, but that doesn't have to be how it is.
That the .com domain is under USG control follows from:
"The domain was originally administered by the United States Department of Defense, but is today operated by Verisign, and remains under ultimate jurisdiction of U.S. law."
That said, since control was transferred away from the DoD, that control is much weaker.
A similar argument still holds for country level TLDs. Any government that administers its own TLD can use that with DNSSEC to forge DNS responses.
Also, I really don't understand what this has to do with DNSSEC? Without DNSSEC anyone can forge responses. With DNSSEC you limit that to the zone administrator.
Whilst that is an improvement, it is still bad. Specifically, I'd say it is not good enough to build a secure system on. There is an argument to be made that it is nice for defense in depth, but it should not be stand-alone security.
There are other practical concerns regarding DNSSEC at the moment with failure handling.
Verisign doesn't own .com -- the US government does. Verisign operates .com under contract with the US Department of Commerce.
As a replacement for HPKP it doesn't solve the big attack scenario of someone taking over your DNS and obtaining a certificate to impersonate your site: if they can do that, they can also put false certificate details in DNS.
It's also tied to DNSSEC, which isn't universally deployed for domains, would (if I understand correctly) require putting a full DNSSEC resolver in the browser to protect against local MITM (which isn't practical in many situations), and is often criticized.
It doesn't work on the real internet, only on some fantasy internet that merely exists in the head of DNSSEC advocates.
Basically, you can only do DNSSEC if you can receive arbitrary DNS records. That's not the case for a non-negligible portion of internet accesses, where those queries get filtered. Adam Langley pointed this out years ago:
So if you want to deploy DANE you can choose between falling back to insecure (so it's entirely pointless) or breaking the Internet for a large fraction of users. Neither is a very good plan.
How? Someone can hijack your resolver but they still need a valid certificate for the name before they can install the pin?
In my experience the use case that HPKP addresses the best is winning arguments with people who like ssh and think WebPKI and browsers are wrong. HPKP can be used to establish TOFU trust in the leaf key (but you need to pin your future key, too).
Winning that argument isn't worth the risks of HPKP, though.
I wonder if other governments are enacting similar rules in one form or another...
If it's done by issuing a new certificate for a different key then won't it trigger red flags when certificate transparency becomes mandatory?
Resulting in the CA getting kicked out of trust stores.
And still you probably won't find any Linux distros with this support.
Note: intercepting all traffic sounds expensive and very dangerous, i.e. the risk of leaking grows as you scale. It's probably better to only use it for select users.
Curious: why wouldn't future versions of Chrome force CT for official TLDs?
This would cause all corporate MitM proxies to fail. Certificates generated by these devices cannot be logged to the CT log servers accepted by browsers (they only accept certificates chaining back to a trusted root). Local roots were exempt from HPKP pins as well, so this is just keeping with existing policy.
At least then security-conscious users could make decisions for themselves.
Because certificates change ... all ... the ... time. Again ... and ... again ... and ... again.
Years ago I tried using a Firefox addon called Certificate Patrol. I spent half my time approving changes. Here's a Stack Exchange question on exactly that topic. It's a few years old; I don't know if things have gotten better:
Not OP but I do see potential there. I've thought about it before. Try looking at it from a solution perspective rather than from "why don't we already" and "what would the issues be": certs change, yeah, but usually because they (almost) expired. We should check when Let's Encrypt renews by default (is that 14 days before expiry?) and what common practice is, and go from there in triggering a warning.
And if there is some uncommon reason to roll over (e.g. suspected compromise), a header could be set either in advance or one could be set that signs the new fingerprint with the old key. The new one shouldn't be pinned right away since an attacker might have misused a compromised key, and a warning symbol could be displayed similar to the mixed-content warning. If someone is suspicious and it can't be delayed, they can call their bank (or whatever it is) and they'd know about it and be able to confirm things out of band.
I'm just conceptualizing but I don't see anything that's not easily solved. I think it could be a good addition.
30 days in Certbot defaults, and I think that's an official Let's Encrypt recommendation for authors of other clients.
The SSH example comes to mind because that is a system that does something similar.
In my experience every single SSH error on a key change has been a false positive but everyone accepts the high false positive rate because it is worth it to detect that one case where you are actually compromised.
Yeah, I too gave up on using that addon for the same reason.
It would only be useful in cases where people are willing to pay a price (in convenience) for security, understand the tech well enough to use it, and have some relationship with the domain owner that allows out-of-band verification of cert changes.
(Disclaimer: I run an open source monitoring project)
Suppose "Alice" and "Bob" want to send secret messages to each other, without allowing "Eve" the eavesdropper to read them, even if Eve can intercept the messages.
Traditional cryptography is "symmetric," where both Alice and Bob must share a secret before they can communicate. Symmetric cryptography won't suffice over the internet, because if Alice and Bob had a secure way of sharing secrets, they wouldn't need internet cryptography in the first place.
So the internet relies on public-key cryptography, where Alice and Bob each have a pair of keys (a "key pair"), one "public" key that everyone can see, even Eve, and one "private" key that has to be kept secret. Alice can encrypt a message using Bob's public key that can only be decrypted using Bob's private key.
At first, it might seem like public-key crypto solves the problem completely, but it creates a new problem: how will Alice get Bob's public key? If she asks Bob for his public key over an unencrypted public channel, Eve can intercept it and offer her own public key, acting as a "man in the middle" (MITM).
Luckily, public-key cryptography has one more trick up its sleeve. If you "encrypt" a message using a private key, it can be "decrypted" using the public key. Only Bob (the owner of Bob's private key) can encrypt messages that can be decrypted with Bob's public key, so anything Bob encrypts that way is effectively "signed" by Bob.
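The "sign = encrypt with the private key" idea can be sketched with textbook RSA in a few lines of Python (toy parameters only, wildly insecure, purely for illustration):

```python
# Textbook RSA with toy parameters (never use numbers this small in practice).
p, q = 61, 53
n = p * q      # public modulus (3233)
e = 17         # public exponent: Bob's public key is (n, e)
d = 2753       # private exponent: Bob's private key; e*d ≡ 1 (mod φ(n))

def sign(message: int, priv: int) -> int:
    # "Encrypting" with the private key produces the signature.
    return pow(message, priv, n)

def verify(message: int, signature: int, pub: int) -> bool:
    # Anyone holding the public key can "decrypt" the signature
    # and check that it matches the message.
    return pow(signature, pub, n) == message

sig = sign(42, d)
assert verify(42, sig, e)       # genuine signature checks out
assert not verify(43, sig, e)   # tampered message fails
```

Real systems sign a hash of the message rather than the message itself, and use padding schemes, but the asymmetry is the same: only the private-key holder can produce a value that the public key validates.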
If Alice and Bob trust a third party, Charlie, Charlie can sign a message saying: "This is Bob's public key: 12345" and another message saying "This is Alice's public key: 23456". Eve can't impersonate Charlie without his private key. We call Charlie a "certificate authority." (CA)
When you visit an HTTPS website, the site presents a certificate signed by a CA. Your browser trusts a ton of CAs all over the world, many of them run by governments that you may not really want to trust; any of them can use their private keys to impersonate any site on the internet. This is a hard social problem as much as a technical problem.
High-value websites like Gmail, Facebook, or banks may want to say "Here's our certificate, but don't just trust any certificate authority about that. You should only trust Charlie's signature." That's called "pinning" the public key to a certificate authority.
It's a nice idea, but how will Gmail convey that message to its users? If Eve is a hostile government who intercepts messages and owns a trusted CA, they can impersonate Gmail, saying "Oh, you don't need to trust Charlie exclusively. You can trust any CA, even me."
Chrome comes with a static, hard-coded list of pinned keys for high-value sites, but that can't scale. They had the idea of allowing anybody on the internet to pin their keys: "dynamic" pinned keys, or HTTP Public Key Pinning (HPKP).
The problem is, if you pin your public key and you need to change it for some reason, or if you need to switch certificate authorities for any reason, you're in big trouble. People have used HPKP and brought their site down, unable to bring it back up again, because browsers don't trust their new valid key.
As a result, very few sites used HPKP, so the Chrome team is planning to remove it.
Surprisingly to me, they even plan to remove the static list of pinned keys, in favor of "Certificate Transparency" where it's publicly obvious which CAs are signing which certificates. Rogue CAs would then have to reveal that they've gone rogue, at which point browsers could revoke their automatic trust in them.
Cert pinning is pretty nasty if you get it wrong. If you don't pin, there's a large number of CAs in most client's default trust stores. If you do pin, and the CA you pinned turns out to be bad, it didn't help. If you pin, but the CA stops issuing from the intermediate or the root that you pinned, you can't get a new cert (hope you had other options); note that CAs don't give much guidance about what to pin. If you pinned a CA that gets delisted, that's no good either. If you pinned two different CAs (a smart choice), but they merge, you no longer have a backup. So, you should pin a public key that you haven't gotten a cert with yet, and keep it safe, but also readily available for emergencies. But you only get one emergency -- hope your next emergency comes after you have time to figure out new pins and get them bundled everywhere; and in the meantime you have one key for everything, which isn't great.
(Based only on the name) Expect-CT doesn't provide nearly as much protection, any CA cert will work, but only if it was publicly recorded. If you monitor for certs issued on your domains, at least you know to raise a fuss if a CA you didn't authorize issues on your domains. That's probably enough to keep CAs in line, unless Let's Encrypt drives the net present value of a well distributed CA to below the value of illicit certificates.
It's likely different if you only serve requests for clients you distribute, and can bundle a CA cert. But if you're serving browsers, you need your CA in the default trust store, which means passing audits, which is time and money and requires a fairly rigorous setup. If you do that, you still need to get your CA cross-signed by an existing CA to use it until the root is widely distributed; if you support mobile browsers, it's a long wait until you're really distributed. I don't know how much a CA charges to cross-sign, but I would guess it's very expensive; and using the cross-signed cert means sending an extra cert during the TLS handshake. There's an extension for clients to indicate supported CAs, but it's not really used, and I'm not sure it's very sensibly designed -- anyway, there's no good way to know and then provide different certs for clients that don't know your CA.
Google's is called Google Trust Services:
(Note, Google was already running a subordinate CA issued by a third party, which is why you might have seen them issue their 'own' certificates previously; those certificates were actually 'subordinate' to the Root CA that issued them to Google initially.)
As part of the process of setting up a new trusted Root CA, it's necessary to get your trusted Root certificates into all products that may need to use your services. This is a trust system and has to start somewhere, so there is a list of trusted Root CAs and certificates pre-installed in every device. As you can imagine, this can take time. Sometimes you can cross-sign your Root certificates using another pre-existing trusted CA (I think this is what Let's Encrypt did?) so that all products that trust that third party will now trust your Root CA (well, its certificate).
That could potentially still leave you/devices that don't have your own Root CA's specific Root certificate open to tampering if the CA you got to cross-sign your certificates went rogue.
In mitigation of that last point above you could just buy out a pre-existing CA so they couldn't go rogue and make them part of your new Root CA collection, which is what Google/Alphabet did:
Even so, as you can see from that post, the process can be lengthy to reach all products, particularly those that might never be updated with new trusted Root CAs (embedded devices etc.). So Google Trust Services still plans (and has by now, I think?) to cross-sign its certificates with two other third parties it trusts never to go rogue, allowing products that already trust those third parties to trust Google's new Root CA, in order to reach products that don't (and perhaps never will) have its newly owned Root CA certificates installed.
It's a question of risk management and as you mention, the only way to fully protect yourself is to get everyone to trust you (your CA) as a Root CA, the rest just leaves you open to meddling.
For Android there are built-in facilities for that in modern versions: https://developer.android.com/training/articles/security-con...
For iOS: not an expert, but this article seems good https://dzone.com/articles/ssl-certificate-pinning-in-ios-ap...
It might get a bit trickier when WebViews are involved though because, at least on Android, SSL in WebView is subject to different security rules than the java-initiated connections (AFAIU the problems due to https://www.chromium.org/developers/androidwebview/webview-c... could not have been avoided on the app side, for example, as it was a bug in Chromium).
They've started changing that:
When designing a website and you find the need to redirect one URL to another, you have to choose which HTTP response code to use for the redirect. You might naively think 301 Moved Permanently is the right choice for when you perceive the redirection to be a non-temporary thing. Unfortunately HTTP 301 responses are cached very aggressively by web browsers by default, so if you install a 301 redirect in your website and choose to revert it, clients who have already seen the now-reverted 301 redirect will just keep following the cached redirect.
Basically, unless the URL you're redirecting is receiving way more than a thousand hits per second (i.e. unless you're running a large-scale website with lots of traffic), you should always use the temporary 302 redirect, even though you might perceive the redirection to be non-temporary.
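One defensive pattern, if you want to keep the option of reverting a redirect, is to send an explicit Cache-Control header along with it. A minimal WSGI sketch (the /old and /new paths are made-up examples):

```python
# Minimal WSGI app: a 302 redirect marked uncacheable, so reverting it
# later actually takes effect for returning clients. Paths are examples.
def app(environ, start_response):
    if environ.get("PATH_INFO") == "/old":
        start_response("302 Found", [
            ("Location", "/new"),
            ("Cache-Control", "no-store"),  # tell clients not to cache the redirect
        ])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]
```

This also works for 301s: explicit Cache-Control overrides the default behavior of caching permanent redirects indefinitely, so even a "permanent" redirect can be made revocable.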
This is what happens when you let a single vendor define web standards, and have a majority share of the browser market. They can take their toys and go home, and websites won't support what they don't support.