
Firefox 32 Supports Public Key Pinning - jonchang
http://monica-at-mozilla.blogspot.com/2014/08/firefox-32-supports-public-key-pinning.html
======
zdw
I wish that this sort of stuff would come down to API-level interfaces.

For example, for the longest time Python's SSL library wouldn't even verify
SSL certs:

[https://wiki.python.org/moin/SSL](https://wiki.python.org/moin/SSL)

And it would gladly connect to MITM'ed sites. I think this has since been
rectified, but the information I've found is conflicting.
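
A minimal sketch of what opting in looks like with the standard library
(assuming Python 3.4+; the host is just an example), since older
ssl.wrap_socket()-based code did neither certificate nor hostname checks by
default:

    import socket
    import ssl

    # create_default_context() loads the system CA roots and enables both
    # certificate verification (CERT_REQUIRED) and hostname checking.
    ctx = ssl.create_default_context()

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            # A MITM'ed connection now raises ssl.SSLError instead of
            # silently succeeding.
            print(tls.getpeercert()["subject"])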

It seems to me that securing API endpoints is even more important than
securing end users, as a much larger quantity of data can flow through an API
than through a browser.

~~~
giovannibajo1
Python's SSL state has been worse than in other languages because of the 2.x
-> 3.x transition, so 2.7 was basically left broken for longer than ideal.
Eventually, they decided to backport most of the network security improvements
([http://legacy.python.org/dev/peps/pep-0466/](http://legacy.python.org/dev/peps/pep-0466/));
see the timeline there.

Note that this applies to the standard library; many people use the requests
library, which not only offers a superior API but is also more secure by
default.
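
For illustration, a quick sketch of that safer default (the host is
hypothetical):

    import requests

    # requests verifies the server certificate by default; a bad chain
    # raises requests.exceptions.SSLError rather than silently connecting.
    resp = requests.get("https://example.com")  # verify=True is the default
    print(resp.status_code)

    # Opting out has to be explicit (and triggers a loud warning):
    # requests.get("https://example.com", verify=False)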

------
lucb1e
I wonder how this is going to work. I've been using an add-on to pin
certificates for half a year now and it's hell on some websites. It worked
nicely for my bank for a while, but they now employ the same technique as
Google, Twitter, Facebook, Akamai, etc., changing certificates and even their
CA seemingly at random. You'd think I'm being MITM'd, but I'm pretty sure
that's not actually the case.

Edit: I should read more closely, found it:

> the list of acceptable certificate authorities must be set at time of build
> for each pinned domain.

So it's hardcoded in Firefox's source code right now. Pretty much useless for
anyone but a few big sites.

And the pinning RFC doesn't sound much better. It makes the client store
something about the sites they've visited, which roughly translates to
supercookies.

~~~
Someone1234
> And the pinning RFC doesn't sound much better. It makes the client store
> something about sites they visited, which roughly translates to
> supercookies.

I don't follow. If the user visits a site for the first time (ever) over a
secure connection, they become much more resilient to MITM for all future
connections (including the ones where the pin is updated).

That's a win in my book. At least it is a win against the "hacker" MITM
threat. It won't be as useful against states/governments, since with those
there might never be a secure connection in the first place.

But I'd rather take a works-ish solution NOW than a flawless solution maybe
never. Defense in depth and all that jazz. The ultimate solution is some kind
of secure DNS infrastructure which delivers information about HTTPS
certificates (which I believe is in the works as well).
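
For what it's worth, a toy sketch of the trust-on-first-use idea behind
pinning (the file name and storage format are made up for illustration; this
is not HPKP itself):

    import hashlib
    import json

    PIN_FILE = "pins.json"  # hypothetical local pin store

    def check_pin(host: str, spki_der: bytes) -> bool:
        """Trust the first key seen for a host; reject changes later."""
        try:
            with open(PIN_FILE) as f:
                pins = json.load(f)
        except FileNotFoundError:
            pins = {}
        fingerprint = hashlib.sha256(spki_der).hexdigest()
        if host not in pins:
            pins[host] = fingerprint  # first visit: trust and record
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return True
        return pins[host] == fingerprint  # later visits: must match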

~~~
lucb1e
I'm not saying it's entirely bad, but it's something some users will want to
disable out of privacy concerns. The current model works well enough to allow
widespread online banking, and although this RFC will certainly make it
_more_ secure, there are also disadvantages.

I happen to know that Chrome hardcodes a list of EV certificates (or at least
I read so a while ago); it could do the same for CAs. Or the browser could ask
a central server which CA belongs to a certificate fingerprint. That's not
that different from OCSP, except that it would probably be run by the browser
vendor instead of the CA.

------
StavrosK
Does anyone know why they didn't go with TACK?

~~~
eplsaft
TACK is a TLS extension. It would have to be added to the TLS 1.3 RFC.

~~~
tptacek
TACK doesn't have to be added to any RFC in order for browsers to adopt it.
They could have integrated TACK, but chose not to. I wish they'd go the other
way on that.

~~~
danudey
Sure, unless you want a standardized interface that everyone can reliably
implement. Once it's in an RFC, people can start implementing it, but if you
implement it before there's a published standard, you're stuck with a broken
implementation or you break backwards compatibility.

~~~
eli
It wouldn't be the first RFC to begin life as an implementation instead of a
spec document.

~~~
tptacek
Which is the right way for an RFC to begin!

------
cornewut
Maybe FF and Google should just become CAs? It would remove the extra step, as
currently both pinning and registering with an existing CA are required.

~~~
bottled_poe
Why should we trust them?

~~~
belorn
With the current setup, people are trusting Mozilla/Google to: give you the
correct software, update it silently, determine which CA certificates to trust
by default, and determine which certificates are valid by pinning.

The CA is trusted only to determine which certificates are valid.

~~~
tokenizerrr
Not really. For Firefox, anyone can build from source (see also Iceweasel) and
disable automatic updates. For Chrome it's mostly the same, but with Chromium
instead.

~~~
unfamiliar
Can you verify that the binary download of Firefox is compiled from that
source unmodified?

~~~
gluxon
There's work in progress to allow this.
[https://bugzilla.mozilla.org/show_bug.cgi?id=885777](https://bugzilla.mozilla.org/show_bug.cgi?id=885777)

------
mkal_tsr
> Other mechanisms, such as a client-side pre-loaded Known Pinned Host list
> MAY also be used.

Fantastic addition, IMO. You could distribute/sync hash lists both online and
offline. Awesome.

------
cpeterso
The problem I've run into with public key pinning is captive portals. Mobile
operating systems or browsers need to provide a better user experience for
captive portals.

~~~
giovannibajo1
iOS has had a good user experience for captive portals for a long time (since
iOS 3, I think), and so has Mac OS X. They automatically detect when a captive
portal is present by making a background connection to an Apple-owned property
and trying to download a small text file; if that fails for any reason, they
bring up a modal panel showing the captive portal and let the user log in;
then they detect when Internet access is active and close the panel (or,
lately, let the user close the panel with a "Finish" button). What's more
important is that the operating system and applications are not given the
"network is on" signal until the whole process is finished, so you don't get a
dozen failures from applications that try to refresh in the background and
can't reach their backend servers.
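
The detection itself is simple. A rough sketch (the probe URL and expected
body are illustrative; Apple uses its own well-known endpoint):

    import urllib.request

    PROBE_URL = "http://captive.apple.com/hotspot-detect.html"  # illustrative
    EXPECTED = b"Success"  # body the unintercepted server returns

    def behind_captive_portal() -> bool:
        try:
            with urllib.request.urlopen(PROBE_URL, timeout=5) as resp:
                body = resp.read()
        except OSError:
            return True  # no connectivity at all; treat as captive/offline
        # A portal intercepts the request and answers with its login page
        # instead of the expected body.
        return EXPECTED not in body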

That said, I really hate the fact that the Wi-Fi Alliance has left us in this
very sad state. Like many similar committees, they move too slowly; captive
portals arose from a real-world problem that wasn't solved by the Wi-Fi
standard, so people had to come up with weird DNS/HTTP interceptions that fail
in so many regards it's not even funny. If there's somebody to blame, it's not
operating systems for adding weird heuristics like Apple did, but the Wi-Fi
Alliance for not bringing a good solution for handling hotspots to market soon
enough.

~~~
mrottenkolber
Apple customers, where phoning home on every network connection is good UX.
What is wrong with you people?

I once had this behaviour crash a captive portal daemon; it was fun tracking
down the owner of the device in question in the building. "CapDaemon crashed,
the request came from around room A2.011!", "Let's get him before he's gone!",
"Hey, mister, do you own an iPhone or an iPad? Yes? Please come with us. We
need to have a debug session with you."

~~~
giovannibajo1
It _is_ good UX. As for privacy concerns, the device is already phoning home
on every network connection for push notification registration (at the very
least), so your objection is moot to me.

To handle this less centrally, you would need a distributed list of URLs to be
used for captive portal checks, with the servers run by different entities and
each device selecting one at random from the list. It wouldn't change the UX,
though.

As for your other remark: if the captive portal crashes on a standard HTTP GET
to a normal URL, you can't really blame anybody but the captive portal
developers.

~~~
mrottenkolber
> so your objection is moot to me

"...because Apple already fucked it up elsewhere so double fuckup doesn't give
extra points."

> you can't really blame this on anybody else but the captive portal
> developers.

I don't think I did; that's just how I found out about this particular
Appleism. Also: this behaviour enabled us to physically track down that device
and its owner. Think about the sensibility of this "feature" in that light.
Every other device would have enjoyed relative anonymity amongst the other
devices in the building.

~~~
giovannibajo1
We're talking about a device where Apple can silently push kernel-level code
over the air at any moment. Surely an extra IP connection to an Apple server
doesn't change anything in that picture, but it does help millions of people
use their devices without getting weird error messages or certificate errors
every time they connect to a hotspot.

If you don't trust Apple with your IP, you shouldn't use a device that runs
their kernels; that's a no-brainer. If you do trust them, though, you might
appreciate the hoops they jump through to avoid handling too much of your
sensitive personal data; see for instance the design of iCloud Keychain:
[http://www.apple.com/ipad/business/docs/iOS_Security_Feb14.p...](http://www.apple.com/ipad/business/docs/iOS_Security_Feb14.pdf)

So it's not that the impact of an IP connection isn't taken into account in
design considerations; it's just that it doesn't look so important in the big
picture of the security and privacy implications of using such a device. For
personal passwords, different avenues are taken.

------
xroche
What about DANE ([http://en.wikipedia.org/wiki/DNS-based_Authentication_of_Nam...](http://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities))?
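
(For context, a rough sketch of what a DANE lookup involves, assuming the
dnspython library; the domain is illustrative and most domains publish no
TLSA record:)

    import dns.resolver  # pip install dnspython

    # DANE publishes certificate associations in DNS: TLSA records live
    # under _<port>._<proto>.<host> and are meant to be DNSSEC-signed, so
    # the domain owner vouches for the certificate instead of a CA.
    try:
        answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")
        for rr in answers:
            print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print("no TLSA record published")  # true of most domains today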

~~~
pilif
When you support DANE, you trust the various governments responsible for the
various country domains not to MITM the owners of said domains.

I'm sure everybody has a different subset of governments they would put in the
"trustworthy" bucket, and it's not up to the browser vendors to make a
political statement there.

Browser vendors would have to trust all governments equally, and AFAIK none of
them have publicly stated their policies regarding MITMing DNSSEC, nor how
well they protect their DNSSEC signing keys.

CAs have to follow quite rigorous protocols if they want to be included in the
browsers' default lists, and they have every financial incentive to comply.

Governments don't have to follow anything, and even if they did, they'd have
every incentive not to comply.

This is why DANE, while otherwise sounding like a really good idea, is
ultimately doomed to failure. No browser wants to take responsibility for
less-than-stellar-performing governments, and no browser wants to make a
political statement by supporting DANE for certain top-level domains but not
others.

~~~
exo762
The CA model is broken (>500 CAs, a race to the bottom on price, security is
not part of their business model). DANE is no better. What we really want is
to be able to withdraw trust. There is no point in me trusting some Iranian
CA, so why should I have to? Today I have to trust it, because you can't just
remove a CA without breaking a percentage of websites for yourself. And you
can't really erase a CA from existence, because many customers rely on a
single CA, and each of them would have a broken website.

Please read about Convergence by Moxie Marlinspike. It solves the
trust-withdrawal problem.

~~~
Nullabillity
With DANE it's immediately visible whom you have to trust, and that can't
easily be changed. If it's a .se domain, then you know that only the Swedish
government can MITM it; with the current CA model, any CA is able to authorize
a MITM.

~~~
tptacek
That would be great if looking at .COM wasn't an immediate guarantee that the
USG could MITM something.

~~~
xorcist
Don't use .com if you distrust them. It's still orders of magnitude better
than the CA model.

The web site owner gets to choose which top-level domain to use and trust. It
is not the end user who is supposed to evaluate how much trust to put in each
CA. That alone is the most important point right there.

~~~
tptacek
This makes absolutely no sense to me. DNSSEC is a forklift upgrade of a key
piece of the architecture of the Internet. We should incur that cost so that
_all of the most popular sites on the Internet_ will end up with the USG as
their CA? And that's "orders of magnitude" better than what we have now?

~~~
toast0
Today, for a .com, there are a large number of CAs (let's call it 100?) that
can sign a cert. Additionally, the registrar or the registry (VeriSign) can
change NS and DS records due to a US court order (or otherwise), and the new
destination could get a domain-control-validated certificate.

If DANE were adopted and the current CA system abolished, the registrar or the
registry could still change the NS and DS records to take over a domain, but
that takes us from 100+ parties capable of signing a cert down to 2 parties
that are already part of the system.

------
eplsaft
This sounds identical to Google's CRLSet: basically a list of pinned certs
inside the source code.

> In the future, we would like to support dynamic pinsets rather than relying
> on built-in ones. HTTP Public Key Pinning (HPKP) [1] is an HTTP header that
> allows sites to announce their pinset.

OK, cool. It requires one initial safe connection, like HSTS.
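
For reference, a sketch of how a site operator could derive such a pin: per
RFC 7469 the pin is the base64 of the SHA-256 hash of the certificate's
DER-encoded SubjectPublicKeyInfo. The host and max-age below are illustrative,
and the pyca/cryptography library is assumed:

    import base64
    import hashlib
    import ssl

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    # Pin the SPKI, not the whole certificate, so the same key can be
    # re-certified without breaking the pin.
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
    print(f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000')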

~~~
reedloden
This is nothing like Google's CRLSet. CRLSet is just a way of collecting the
CRLs from a ton of different CAs and pushing them out to Chrome browsers
easily, without users having to individually download them all from the CAs.

Chrome has its own TLS pinning implementation that basically works the same
way as Firefox's. See
[https://src.chromium.org/chrome/trunk/src/net/http/transport...](https://src.chromium.org/chrome/trunk/src/net/http/transport_security_state_static.json)

------
tete
Meanwhile, Chrome doesn't even support OCSP (certificate revocation checking)
for performance reasons, not even after Heartbleed.

I hope this doesn't sound like fanboyism, but not being able to properly
communicate a certificate revocation is worrisome.

~~~
tptacek
That's because OCSP doesn't work. The real-world Internet routinely breaks
OCSP queries, which results in Firefox (and other browsers) soft-failing them:
if OCSP doesn't work, the browser goes ahead with the connection. The security
problem here is trivially observed.

TLS certificate revocation is a mess. We know what the solution will look
like: it'll be something like HSTS, except that instead of caching "this site
must use HTTPS", we'll also cache "this site must use OCSP stapling", and the
OCSP data will be conveyed in-band in the HTTPS connection. It's hard to ding
Chrome for not supporting something that doesn't really exist yet.

So, no: _not_ "for performance reasons".

Incidentally: Chrome more or less invented certificate pinning.

~~~
tete
But OCSP is the only thing that's widely supported as of now. You can't on the
one hand say "don't blame Chrome for not supporting something that doesn't
exist" when at the same time it rejects something that's widely deployed and
even considered a requirement for CAs.

Being able to revoke your certificate, even if the mechanism has problems, is
better than not being able to.

OCSP is still the standard way of communicating certificate revocations, and
even with all the HTTP extensions you still need some way to revoke a
certificate.

Unlike most alternatives, OCSP is supported out of the box by IE, Firefox,
Opera, and Safari. Only Chrome has it disabled by default. Most people revoked
their old certificates after Heartbleed; this is an example of where you need
an alternative to just pinning a key.

So you're saying that because an attacker merely has to make sure the OCSP
connection isn't working, OCSP is worse than having no possibility of
certificate revocation at all?

I'm not saying it's absolutely secure. Hopefully everyone knows that there are
flaws in HTTPS/SSL; Zooko's triangle [1] even gives you a hint why.

Also, I'm curious: what better, more widely used way of doing certificate
revocation do you know of?

[1]
[https://en.wikipedia.org/wiki/Zooko%27s_triangle](https://en.wikipedia.org/wiki/Zooko%27s_triangle)

~~~
danudey
OCSP fails often enough that requiring a verified pass would break the
internet. Thus, browsers which support OCSP treat a failure to fetch OCSP data
as a soft fail (i.e. they ignore it).

The problem is that if you can MITM someone, you can deny them access to the
OCSP service and cause a soft fail, which makes OCSP worthless.
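
A self-contained sketch of that failure mode (the responder function is a
stand-in; real clients query the URL in the certificate's Authority
Information Access extension):

    def query_ocsp_responder(cert: str) -> str:
        # Stand-in: an on-path attacker can simply drop OCSP traffic.
        raise TimeoutError("OCSP responder unreachable")

    def is_acceptable(cert: str) -> bool:
        try:
            status = query_ocsp_responder(cert)
        except TimeoutError:
            # Soft fail: an unreachable responder counts as "not revoked",
            # so revocation goes unenforced exactly when an attacker is
            # on the path.
            return True
        return status != "revoked"

    print(is_acceptable("stolen-but-revoked.example"))  # True: proceeds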

~~~
Pxtl
Just because it doesn't stop a full MITM between the CA and the client doesn't
make it worthless. It still protects the user from trusting a server that is
no longer trustworthy.

If a certificate is worth issuing when a server is trustworthy, it's worth
revoking when the server loses that trust.

~~~
tptacek
We are talking about adversaries who control both secrets and connectivity.
Not because those are the adversaries we care most about, but because those
are the adversaries that key revocation contemplates. The notion of "full
MITM" versus "partial MITM" versus "passive-only" attacker does not apply.

------
jbb555
Yeah, makes sense, I guess. But I'm more and more worried about the web
"platform"; it just gets more complex every day.

