
Public Key Pinning Being Removed from Chrome - ejcx
https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/he9tr7p3rZ8/eNMwKPmUBAAJ?hn
======
buu700
I can't support this at all, and ironically this is partially my fault.

My and @eganist's Black Hat / DEF CON talk "Abusing Bleeding Edge Web
Standards for AppSec Glory" demoed an exploit concept that we called
"RansomPKP", which was essentially a pattern of hostile pinning that could
theoretically enable pivoting from a web server compromise to holding a site
for ransom. Hostile pinning was by no means a new concept, and even has some
discussion in the IETF spec itself, but we found this to be a fun novel
application and used it to spur some minor security improvements to browsers'
HPKP implementations.

However, this talk also led to concerns being vocalized about the viability of
HPKP in general
([https://news.ycombinator.com/item?id=12434585](https://news.ycombinator.com/item?id=12434585)),
ultimately leading to this. This was not our intention at all, and I don't see
hostile pinning alone as a reason to give up on HPKP.

I would much rather see some discussion around improving the usability of HPKP
before jumping straight to putting it on the chopping block — both from a site
operator's end and a user end. For example, off the top of my head, why not
make it possible for users to click past the HPKP error screen like they can
with any other TLS error screen?

~~~
ejcx
I think this has very little to do with you.

HPKP was championed by a lot of security people, who in turn got a lot of
people to foot-gun themselves (Scott Helme has even admitted that he is often
one of the first people called when a site foot-guns itself with HPKP).

There were only a handful of sites that actually needed HPKP-level security,
and ransom HPKP was the least of people's worries. HPKP was more dangerous to
the people rolling it out on purpose than any mass header injection or similar
attack ever was :/

HPKP has been doomed from the beginning. Here is sleevi saying he regrets it
in 2/2016:
[https://twitter.com/sleevi_/status/696171562383224832](https://twitter.com/sleevi_/status/696171562383224832)

~~~
buu700
I agree that RansomPKP itself isn't that big a real-world concern (which was
part of my point), but it did motivate the first wide discussions that I'd
seen questioning whether HPKP should exist.

The linked Qualys blog post / HN thread was shortly after our talk, which
(along with our conversation with Scott Helme around that time) led to Scott's
post "Using security features to do bad things"[1]. RansomPKP and related
follow-ups are directly highlighted by Scott's recent post "I'm giving up on
HPKP"[2] in which he announced his decision to remove HPKP from the Security
Headers tool[3], and Scott himself is cited in this post by Chris Palmer.

Note that I'm not suggesting that Scott himself is responsible for this, or
that anything he's said has been in bad faith. My point is simply that my talk
was one part of the chain of events that started the ball rolling on this
conversation.

I'm also not saying that RansomPKP / hostile pinning is the most important
reason that people have for not liking HPKP — in this case Chris lists it as
only one of three motivations. Clearly, the usability issues with its
implementation have been a much bigger problem, which is what I would like to
see serious attempts to improve on before throwing out all the work that's
been done up until now.

---

Edit, re: sleevi: the tweet you linked doesn't say anything about regretting
the concept of TLS key pinning entirely, just regretting that it's done as a
header. I'll admit it's ambiguous, but that sounds to me like he would rather
have kept the feature but changed the API. I would be all for deprecating the
HPKP header if it were replaced with a better / more usable interface to the
feature.

---

1: https://scotthelme.co.uk/using-security-features-to-do-bad-things

2: https://scotthelme.co.uk/im-giving-up-on-hpkp

3: https://securityheaders.io

~~~
ejcx
I will be VERY upfront that I DO blame Scott Helme for this. I mentioned this
in 2/2016 as well[0].

It's fair. There was a lot of buzz about Ransom HPKP. The whole thing was
doomed from the start, and I was pretty upset every time I saw anyone publicly
push for it.

0:
[https://twitter.com/ejcx_/status/698227927390023681](https://twitter.com/ejcx_/status/698227927390023681)

~~~
eganist
I worked with Scott on the HPKP components of that initial blog post (I'm sure
he can confirm) and I won't blame him at all for what took place in hindsight.
Google actually denied a bounty on disclosures surrounding RansomPKP, so there
was nothing to suggest this was the path they would eventually follow.

~~~
ejcx
Again, I think this decision has nothing to do with Ransom HPKP and everything
to do with how it's not a usable standard, and people who try to use it
correctly fail.

------
pfg
Removing dynamic pins was inevitable given the associated risk for _all_
sites. Some ideas to fix those exist[1], but I'm not sure it's worth the
effort in a fully CT-enforced web. That's probably time better spent somewhere
else (such as improving CT itself and the gossip mechanism.)

I'm not convinced that static pins need to go too. There are something like 10
sites on that list currently, and all of them are valuable targets and should
have the resources to ensure their pins don't fail. Even increasing that
number to something like 100 should be manageable for browser vendors and
would cover a large percentage of all page views (rather than just guarantee
discovery after the fact).

[1]: https://blog.qualys.com/ssllabs/2017/09/05/fixing-hpkp-with-pin-revocation

------
Roritharr
This is especially funny to me because our PCI DSS network scan just started
flagging the absence of an HPKP header as something that's necessary to
remediate. I've had to waste half a day on the phone and then write a Risk
Mitigation Plan that explains how we mitigate the risk of an MITM attack in
case our CA gets breached...

~~~
tptacek
It is deeply fucked up if a scanning checklist demands PKP, since most sites
--- including most commerce sites --- shouldn't pin.

~~~
user5994461
Well, try explaining that to the HSTS and HPKP folks. They have answers
littered across Stack Overflow and HN advising people to enable it for
anything and everything, with exactly zero consideration for the potential to
backfire.

It's only a matter of time before an intern or an auditor gets it deployed on
majorcompany.com and causes a disaster. Symptoms include none of the clients
ever being able to access the site again.

~~~
riffraff
Does HSTS have the potential to backfire?

~~~
tptacek
Unless you're a static content site that is using TLS just to be an Internet
Good Citizen and prevent passive traffic analysis, you absolutely should have
HSTS enabled; it's not really a judgement call. Without HSTS, you might almost
as well not do TLS at all; HSTS prevents a serious, effective, and easy
attack.

By way of comparison, it has never been a good idea to default to HPKP.
Privacy-sensitive sites should be pinned, and if you can't safely manage
pinning, that's a pretty good sign that you're not mature enough to engineer
privacy for the site either, so I don't have that much sympathy for the
argument that it's a foot cannon (this is, of course, very easy for me to
say). But if you're just selling coffee beans or scheduling laundry pickups,
PKP has always been a very bad idea for you.

~~~
majewsky
Why does being a static content site mean you should not enable HSTS? If
anything, HSTS is easiest to roll out for static content sites.

~~~
AfroThundr
One primary example would be sites like software repository mirrors, where the
majority of content is already signed, so serving it over HTTPS provides
negligible benefit to your users (other than the slight confidentiality gain
that an adversary wouldn't know exactly what you downloaded). Contrast that
with a site serving active content like JavaScript and CSS, where an adversary
tampering with the content can have disastrous results for users.

The latter example is where HSTS becomes an invaluable tool, since then the
only way those resources can be delivered is through a trusted channel,
verified by the PKI. The same value is not there for a software mirror,
because the other security safeguards already in place remove the need to
trust the delivery channel. That said, most still serve their content over
HTTPS as well.

------
weinzierl
Interesting HN discussion about the future of HPKP from a little over a year
ago [1]. Reading it, I think this move was predictable.

The article suggests the _Expect-CT_ header as a safer alternative. Scott
Helme has a short but informative write-up on how this works[2].
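
For a concrete sense of the mechanism: Expect-CT is a single response header,
with comma-separated directives. A representative policy (the reporting
endpoint here is hypothetical) would look roughly like:

    Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-report"

Without the "enforce" directive the browser only sends violation reports; with
it, connections whose certificates lack valid CT information are refused.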

[1] https://news.ycombinator.com/item?id=12434585

[2] https://scotthelme.co.uk/a-new-security-header-expect-ct/

~~~
amluto
Crikeys. By that point the damage is done. How about a read-CAA-via-DNSSEC-
and-confirm-that-it's-the-right-CA header?

(The certificate could embed a DNSSEC assertion about the CAA record or lack
thereof, for that matter.)

------
sigmar
Good riddance. It had low adoption and pales in comparison to what will be
achieved with Certificate Transparency.

DNS redirect attacks (common and easy thanks to social engineering) combined
with malicious HPKP could result in some nasty ransoming ("many of your users
can't access your site unless you pay me for the key"). I've heard many
express surprise that it hasn't happened yet, particularly considering the
lack of recourse for victims.

~~~
yebyen
What would be the fix? I'm asking sincerely as someone who is only
surface-level familiar with HPKP and has never implemented it (but my boss
did...)

If someone ransomed you, would you need to pay them for the key, and then use
the key on your site from then on? So, you could pay the ransom and they'd be
able to decrypt all of your traffic from then on?

(I'm sure I just don't know how HPKP works, like there's some solution where
the ransomer's key/the compromised key can be used to sign another key, and
then HPKP pinners that cached the bogus key can now accept it as the new
key... but then couldn't you use a compromised key to do the same attack again
in the future?)

~~~
mschuster91
> What would be the fix? I'm asking sincerely as someone who is only
> surface-level familiar with HPKP and has never implemented it (but my boss
> did...)

The fix would be to embed the expected key fingerprint in DNS and have the
browser either issue a second request for it, or have the DNS server return it
as additional data, just like when you request a CNAME record and it returns
the A record too. Then, to prevent DNS MITM attacks, have both the parent zone
and the domain's zone file signed.

On the other hand, given that DNS runs over UDP, this opens up the possibility
of an MITM attacker simply suppressing the second request for the HTTPS key,
or corp firewalls/MITM boxes/crappy provider DNS servers simply filtering out
the responses...
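
What's being described is essentially DANE's TLSA records. As a rough sketch
of the lookup half, in Python with the dnspython package (domain name
hypothetical; the DNSSEC-validation step, which is exactly the part worried
about above, is omitted):

    # Rough sketch of a DANE-style TLSA lookup (pip install dnspython).
    # The record for HTTPS on port 443 lives at _443._tcp.<domain>.
    import binascii
    import dns.resolver

    answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")
    for rr in answers:
        # usage/selector/mtype describe what 'cert' matches against,
        # e.g. 3 1 1 = SHA-256 of the end-entity certificate's SPKI.
        print(rr.usage, rr.selector, rr.mtype,
              binascii.hexlify(rr.cert).decode())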

~~~
rocqua
As the other commenter said, this sounds a lot like DANE.

As such, it suffers from the same issue: it relies on DNSSEC. If you look at
the trust chain for DNSSEC on the .com domain, you are trusting the US
government and your registrar. The US government is the bigger issue here,
since the NSA is part of it.

You might argue that this is 'good enough', but considering the momentum that
these kinds of systems have, a wrong decision here could really enable NSA
spying for a long time. Besides, CT logs seem like a much better solution than
key pinning anyway.

~~~
amalcon
This has always seemed like a really silly argument. You're already trusting
the US government, VeriSign, and a multitude of other organizations that
control CAs, so DANE doesn't make this worse.

It's kind of a moot point, though, since DNSSEC is garbage for other reasons.
Certificate transparency logs are the current best effort in this area.

~~~
rocqua
The point is not that DANE doesn't make things worse. The point is that it is
not a solution. Originally, DANE was meant as a method to restrict rogue CAs
from issuing certificates. The fact that state actors can still do that under
DANE makes DANE a bad solution.

~~~
lmm
The US Government is and should be the root of trust for US domains (certainly
for .us, and de facto that's become the use of .com too). Since the US
government can compel any US entity to follow secret orders, if you don't
trust the US government you already couldn't use any US sites. DANE improves
things compared to not, since it means you don't have to trust the US
government if you're not using US sites, you don't have to trust the Chinese
government if you're not using Chinese sites, you don't have to trust the
government of Kazakhstan if you're not using Kazakh sites...

~~~
rocqua
I am a Dutch citizen and have a .nl domain. Yet that does not mean I am OK
with the Dutch government issuing invalid certificates for my website.

True, it's an improvement that only the Dutch government can do this, and not
the Hong Kong post office. On the other hand, it is a major downside that we
are encoding the possibility of government dragnet surveillance.

In the end, certificate transparency logs will let me notice whenever anyone
issues a certificate for my website.

~~~
lmm
> it is a major downside that we are encoding the possibility of government
> dragnet surveillance.

Quite the opposite; DANE makes it possible to have a TLD that opts out of
giving national governments access to it. Most existing TLDs are controlled by
governments, but that doesn't have to be how it is.

------
hsivonen
Seems reasonable to remove HPKP.

In my experience the use case that HPKP addresses the best is winning
arguments with people who like ssh and think WebPKI and browsers are wrong.
HPKP can be used to establish TOFU trust in the leaf key (but you need to pin
your _future_ key, too).

Winning that argument isn't worth the risks of HPKP, though.

------
alpb
As someone who's completely unfamiliar with the Chrome ecosystem, I wonder
what Blink has to do with this (why is this posted on blink-dev@chromium.org)?
Isn't Blink just the rendering engine for Chromium that does DOM/CSS stuff?

~~~
wolf550e
They used to coordinate security stuff on mozilla.dev.security.policy but they
switched to blink-dev, maybe to indicate this is Chrome's/Google's position
only. The first big use of blink-dev for security that I remember was the
Symantec thing.

------
feelin_googley
non-javascript url:
https://groups.google.com/a/chromium.org/forum/?_escaped_fragment_=msg/blink-dev/he9tr7p3rZ8/eNMwKPmUBAAJ

------
MichaelMoser123
Kazakhstan and probably Russia as well require all TLS traffic to be opened by
MITM devices.
[https://m.habrahabr.ru/post/303736/](https://m.habrahabr.ru/post/303736/)
[https://news.ycombinator.com/item?id=10663843](https://news.ycombinator.com/item?id=10663843)
https://www.google.co.il/amp/s/www.rbth.com/document/103300000000001000150127/amp

I wonder if other governments are enacting similar rules in one form or
another...

~~~
jopsen
That sounds hard to implement.

If it's done by issuing a new certificate for a different key, then won't it
trigger red flags once certificate transparency becomes mandatory, resulting
in the CA getting the kick?

~~~
pfg
IIRC the plan was to force users to manually install a root certificate
(controlled by the government) on their devices. Local roots are exempt from
any CT enforcement. Naturally, you can just not install the root certificate,
but if all traffic is intercepted, I'd expect most users to do so to get
around the warnings.

~~~
jopsen
That probably only works if the root cert is mandated in all OEM installs...

And you still probably won't find any Linux distros with this support.

Note: intercepting all traffic sounds expensive and very dangerous, i.e. the
risk of leaking grows as you scale. It's probably better to only use it for
select users.

Curious why future versions of Chrome wouldn't force CT for official TLDs?

~~~
pfg
> Curious why future versions of Chrome wouldn't force CT for official TLDs?

This would cause all corporate MitM proxies to fail. Certificates generated by
these devices cannot be logged to the CT log servers accepted by browsers
(they only accept certificates chaining back to a trusted root). Local roots
were exempt from HPKP pins as well, so this is just keeping with existing
policy.

------
saas_co_de
Why not provide an advanced feature that alerts you any time a cert changes,
similar to what we get with SSH?

At least then security-conscious users could make decisions for themselves.

~~~
PhantomGremlin
_why not provide an advanced feature that alerts you any time a cert changes_

Because certificates change ... all ... the ... time. Again ... and ... again
... and ... again.

Years ago I tried using a Firefox addon called Certificate Patrol. I spent
half my time approving changes. Here's a Stack Exchange question on exactly
that topic. It's a few years old; I don't know if things have gotten better:

https://security.stackexchange.com/questions/41578/why-does-google-ssl-cert-change-so-frequently

~~~
lucb1e
> Because certificates change ... all ... the ... time.

Not OP, but I do see potential there; I've thought about it before. Try
looking at it from a solution perspective rather than from "why don't we
already" and "what would the issues be": certs change, yeah, but usually
because they (almost) expired. We should check when Let's Encrypt renews by
default (is that 14 days before expiry?) and what common practice is, and go
from there in triggering a warning.

And if there is some uncommon reason to roll over (e.g. suspected compromise),
a header could be set in advance, or one could be sent that signs the new
fingerprint with the old key. The new key shouldn't be pinned right away,
since an attacker might have misused a compromised key, and a warning symbol
could be displayed similar to the mixed-content warning. If someone is
suspicious and it can't be delayed, they can call their bank (or whatever it
is), who would know about it and be able to confirm things out of band.

I'm just conceptualizing but I don't see anything that's not easily solved. I
think it could be a good addition.
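
A minimal sketch of the sign-the-new-fingerprint-with-the-old-key idea, in
Python with the cryptography package (Ed25519 keys stand in for whatever the
pinned TLS keys would actually be; everything here is hypothetical):

    # Sketch: prove key continuity by signing the new key's fingerprint
    # with the old (currently pinned) key.
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    old_key = Ed25519PrivateKey.generate()  # stands in for the pinned key
    new_key = Ed25519PrivateKey.generate()  # the replacement key

    new_spki = new_key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    fingerprint = hashlib.sha256(new_spki).digest()
    proof = old_key.sign(fingerprint)  # would be shipped in a header

    # A client that pinned the old public key can check continuity;
    # verify() raises InvalidSignature on tampering.
    old_key.public_key().verify(proof, fingerprint)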

~~~
user5994461
The big sites have multiple certificates for a single domain, and you will get
one randomly depending on which server you happen to hit.

------
Exuma
Can someone explain all this like I'm 5? I've always wondered what all this
was about.

~~~
dfabulich
On the internet, when you send and receive data, your data gets handled by a
lot of different people. In the old days, anybody who handled your data could
tamper with it or impersonate anybody else. Cryptography to the rescue.

Suppose "Alice" and "Bob" want to send secret messages to each other, without
allowing "Eve" the eavesdropper to read them, even if Eve can intercept the
messages.

Traditional cryptography is "symmetric," where both Alice and Bob must share a
secret before they can communicate. Symmetric cryptography won't suffice over
the internet, because if Alice and Bob had a secure way of sharing secrets,
they wouldn't need internet cryptography in the first place.

So the internet relies on public-key cryptography, where Alice and Bob each
have a pair of keys (a "key pair"), one "public" key that everyone can see,
even Eve, and one "private" key that has to be kept secret. Alice can encrypt
a message using Bob's public key that can only be decrypted using Bob's
private key.

At first, it might seem like public-key crypto solves the problem completely,
but it creates a new problem: how will Alice get Bob's public key? If she asks
Bob for his public key over an unencrypted public channel, Eve can intercept
it and offer her own public key, acting as a "man in the middle" (MITM).

Luckily, public-key cryptography has one more trick up its sleeve. If you
"encrypt" a message using a private key, it can be "decrypted" using the
public key. Only Bob (the owner of Bob's private key) can encrypt messages
that can be decrypted with Bob's public key, so anything Bob encrypts that way
is effectively "signed" by Bob.

If Alice and Bob trust a third party, Charlie, Charlie can sign a message
saying "This is Bob's public key: 12345" and another message saying "This is
Alice's public key: 23456". Eve can't impersonate Charlie without his private
key. We call Charlie a "certificate authority" (CA).

When you visit an HTTPS website, the site presents a certificate signed by a
CA. Your browser trusts a ton of CAs all over the world, many of them run by
governments that you may not really want to trust; any of them can use their
private keys to impersonate any site on the internet. This is a hard social
problem as much as a technical problem.

High-value websites like Gmail, Facebook, or banks may want to say "Here's our
certificate, but don't just trust _any_ certificate authority about that. You
should _only_ trust Charlie's signature." That's called "pinning" the public
key to a certificate authority.

It's a nice idea, but how will Gmail convey that message to its users? If Eve
is a hostile government who intercepts messages and owns a trusted CA, they
can impersonate Gmail, saying "Oh, you don't need to trust Charlie
exclusively. You can trust any CA, even me."

Chrome comes with a static, hard-coded list of pinned keys for high-value
sites, but that can't scale. So they had the idea of allowing anybody on the
internet to pin their keys: "dynamic" pinned keys, or HTTP public key pinning
(HPKP).
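
Concretely, a pin is just the base64-encoded SHA-256 hash of the certificate's
public key (the SPKI). A sketch in Python with the cryptography package (the
certificate file name is hypothetical):

    # Sketch: compute an HPKP-style pin from a PEM certificate.
    import base64
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    cert = x509.load_pem_x509_certificate(open("example.com.pem", "rb").read())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
    print('Public-Key-Pins: pin-sha256="%s"; max-age=5184000' % pin)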

The problem is, if you pin your public key and you need to change it for some
reason, or if you need to switch certificate authorities for any reason,
you're in big trouble. People have used HPKP and brought their site down,
unable to bring it back up again, because browsers don't trust their new valid
key.

As a result, very few sites used HPKP, so the Chrome team is planning to
remove it.

Surprisingly to me, they even plan to remove the _static_ list of pinned keys,
in favor of "Certificate Transparency" where it's publicly obvious which CAs
are signing which certificates. Rogue CAs would then have to reveal that
they've gone rogue, at which point browsers could revoke their automatic trust
in them.

~~~
Exuma
That's an amazingly AWESOME answer, thank you. So... what's an example of a CA
that the highest-value targets on the internet trust (like Google, Facebook,
Amazon, and various banks)? Is there a very trustworthy company that handles
most of the big companies?

~~~
toast0
There's a list of pinned CAs at the top of the HSTS preload list [1], which
gives you an idea of who might be trusted.

Cert pinning is pretty nasty if you get it wrong. If you don't pin, there's a
large number of CAs in most clients' default trust stores. If you do pin, and
the CA you pinned turns out to be bad, it didn't help. If you pin, but the CA
stops issuing from the intermediate or the root that you pinned, you can't get
a new cert (hope you had other options); note that CAs don't give much
guidance about what to pin. If you pinned a CA that gets delisted, that's no
good either. If you pinned two different CAs (a smart choice), but they merge,
you no longer have a backup. So you should pin a public key that you haven't
gotten a cert with yet, and keep it safe, but also readily available for
emergencies. But you only get one emergency -- hope your next emergency comes
after you have time to figure out new pins and get them bundled everywhere;
and in the meantime you have one key for everything, which isn't great.
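
For what it's worth, RFC 7469 requires at least one backup pin for exactly
this reason; a policy with a spare, offline key would look roughly like this
(placeholder hashes, wrapped here for readability):

    Public-Key-Pins: pin-sha256="<hash-of-current-key>";
        pin-sha256="<hash-of-offline-backup-key>";
        max-age=5184000; includeSubDomains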

(Based only on the name) Expect-CT doesn't provide nearly as much protection:
any CA's cert will work, but only if it was publicly logged. If you monitor
for certs issued on your domains, at least you know to raise a fuss if a CA
you didn't authorize issues on your domains. That's probably enough to keep
CAs in line, unless Let's Encrypt drives the net present value of a
well-distributed CA to below the value of illicit certificates.

[1]
https://chromium.googlesource.com/chromium/src/net/+/master/http/transport_security_state_static.json

~~~
Exuma
Maybe this is a dumb question... but do larger companies like Google attempt
to set up their own CA so they can be sure of the longevity and security, and
not rely on a 3rd party?

~~~
toast0
Google is the only one I'm aware of. Microsoft has an intermediate CA, but
it's not clear if that's actually independent of the CA that signed it.

It's likely different if you only serve requests for clients you distribute
and can bundle a CA cert. But if you're serving browsers, you need your CA in
the default trust store, which means passing audits, which is time and money
and requires a fairly rigorous setup. Even then, you still need to get your CA
cross-signed by an existing CA until your root is widely distributed, and if
you support mobile browsers, it's a long wait until you're really distributed.
I don't know how much a CA charges to cross-sign, but I would guess it's very
expensive; and using the cross-signed cert means sending an extra cert during
the TLS handshake. There's an extension for clients to indicate supported CAs,
but it's not really used, and I'm not sure it's very sensibly designed --
anyway, there's not a good way to know which clients don't know your CA and
then provide them different certs.

------
jkooper
Is there an alternative to prevent people from doing MITM attacks on mobile
apps? With HPKP in place, MITMing apps with Charles requires a rooted device.

~~~
jakub_g
For mobile apps it's _easier_ than for websites, because native apps have more
control over what's going on than a website does, and can read the details of
HTTP connections (so in the worst case, you can roll your own pinning).

For Android there are built-in facilities for this in modern versions:
https://developer.android.com/training/articles/security-config.html

For iOS: not an expert, but this article seems good:
https://dzone.com/articles/ssl-certificate-pinning-in-ios-applications

It might get a bit trickier when WebViews are involved though because, at
least on Android, SSL in a WebView is subject to different security rules than
Java-initiated connections (AFAIU the problems due to
https://www.chromium.org/developers/androidwebview/webview-ct-bug could not
have been avoided on the app side, for example, as it was a bug in Chromium).

~~~
strcat
> It might get a bit trickier when WebViews are involved though because, at
> least on Android, SSL in a WebView is subject to different security rules
> than Java-initiated connections

They've started changing that:

https://developer.android.com/about/versions/oreo/android-8.0-changes.html#o-sec

------
ComodoHacker
Without key pinning, are there other options for site operators to protect
their users from MITM by traffic-monitoring appliances?

~~~
strcat
HPKP doesn't protect against MITM via locally installed trust anchors. It
explicitly permits that in both Chromium and Firefox.

------
_Codemonkeyism
The same as with 301s: everyone recommends them until they buy a domain or do
some restructuring with new people some years down the line, and the 301s
totally mess their site up without any reset button.

~~~
jakub_g
Can you explain this more?

~~~
rav
HTTP provides a couple of different response codes for when one URL should
redirect to another URL. The most common are 301 Moved Permanently and 302
Found, aka moved temporarily.

When designing a website and you find the need to redirect one URL to another,
you have to choose which HTTP response code to use for the redirect. You might
naively think 301 Moved Permanently is the right choice for when you perceive
the redirection to be a non-temporary thing. Unfortunately HTTP 301 responses
are cached _very_ aggressively by web browsers by default, so if you install a
301 redirect in your website and choose to revert it, clients who have already
seen the now-reverted 301 redirect will just keep following the cached
redirect.

Basically, unless the URL you're redirecting is receiving way more than a
thousand hits per second (i.e. unless you're running a large-scale website
with lots of traffic), you should _always_ use the temporary 302 redirect,
even though you might perceive the redirection to be non-temporary.
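
As a sketch of the difference (using Flask here purely as an example
framework; routes and URLs are hypothetical):

    # Minimal sketch: 302 vs 301 redirects in Flask.
    from flask import Flask, redirect

    app = Flask(__name__)

    @app.route("/old-page")
    def old_page():
        # 302 Found: browsers re-check this on every visit,
        # so the redirect can be safely reverted later.
        return redirect("/new-page", code=302)

    @app.route("/legacy")
    def legacy():
        # 301 Moved Permanently: browsers may cache this indefinitely;
        # reverting it won't reach clients that already saw it.
        return redirect("/current", code=301)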

~~~
_Codemonkeyism
Thanks for doing what would have been my job. Appreciated.

------
darkhorn
What about supporting client-side certificates in HTTP/2?

~~~
detaro
Less a Chrome policy thing and more a case of "nobody has suggested a solution
that the working group making the standard likes enough, so there isn't even a
defined way to do it". So unless I missed a recent development, don't expect
it to happen.

------
peterwwillis
Ah, Google. The tech gods giveth standards, and the tech gods taketh standards
away.

This is what happens when you let a single vendor define web standards, and
have a majority share of the browser market. They can take their toys and go
home, and websites won't support what they don't support.

