
Opportunistic Encryption for Firefox - cpeterso
http://bitsup.blogspot.com/2015/03/opportunistic-encryption-for-firefox.html
======
chimeracoder
I'm really glad to see this.

I really dislike that browsers seem to treat self-signed certificates as
_worse_ than plain HTTP (in that self-signed certificates cause a big scary
yellow warning that looks similar to the big scary red warning for invalid
certs).

Self-signed certs are bad insofar as you can't prove that someone else isn't
MITMing the connection and serving you an encrypted, but untrustworthy, proxy
page. But with plain HTTP, you already have no guarantees that that isn't
happening![0]

If I understand it correctly, this seems to combine the best of both:
encouraging the use of self-signed certificates over plain HTTP, while still
rewarding verified certs (ie, signed by a trusted third party) over self-
signed.

[0] If you really care, you should use a CA-signed cert. But every attack
that's possible when using self-signed certs is not only possible when using
plain HTTP, but much easier to execute, and also much easier to execute
_silently_.

~~~
tines
I won't defend it, but I can imagine an argument for why self-signed certs
could be worse than plain HTTP: if you're using HTTP, the communication isn't
meant to be secure/trustworthy, but the presence of a certificate signals that
your communication is meant to be one or both, and since the cert is
self-signed, it may be neither. This argument does presume that site
administrators know what does and doesn't need to be confidential, which of
course may not be the case.

Some argue that all communication should be encrypted, but that's another
issue.

~~~
vbezhenar
A self-signed certificate provides defense against a passive attacker, and
HTTP does not. That's an important difference IMO.

~~~
copsarebastards
That's good in that it requires a slightly more sophisticated attack, but I
don't really view that as a very effective defense.

_However_, the bigger thing a self-signed cert gives you is the ability to
verify later whether your communication has been snooped by an active
attacker--which is very significant. It's problematic that the communications
aren't authenticated before they occur, but that doesn't mean they can't be
authenticated later (via CAs or other means).

~~~
Wicher
> That's good in that it requires a slightly more sophisticated attack, but I
> don't really view that as a very effective defense.

A slightly more sophisticated attack, indeed — but one that doesn't scale
well. It ups the cost of mass surveillance tremendously.

~~~
copsarebastards
I hear this frequently but I don't buy it. What about the MITM scales worse
than a passive listener?

As far as I can tell, both insertion of a passive listener and a MITM are
algorithmically O(n) on the number of connections being surveilled. All you're
increasing for the MITM is the constant factor.

When you're on the order of billions of connections being surveilled, even
linear growth is hard, but we know that _the NSA has already done that_.
Increasing the difficulty by a constant factor is not much harder, and there's
no question that the NSA has the budget to do so. And in fact, the constant
factor isn't even large: it's whatever the resource cost of two connection
handshakes per connection is, plus decryption/encryption on the data flowing
across, all of which are highly optimized algorithms at this point.

~~~
Wicher
No. Passively listening only requires dumping whatever flies over the
interface of some router. Done! Very hard to detect. You can also just scan
for keywords and only start dumping traffic when triggered. With SSL, you
_cannot_ retroactively decide that you'd want to dump traffic. You have to
MITM _from the beginning_.

That's point 1.

Point 2: Detection.

Actively MITM-ing an SSL connection requires you to (if the CA Chain-O'-Trust
works as it's supposed to) give away the fact that you own a (valuable!)
compromised CA authoritative cert. You would want to use such a cert for a
targeted attack, not waste it on blanket surveillance and get found out (and
called out) within a couple of days.

If you don't own a root cert but rely on vulnerable implementations, or
implementations such as this one which do not rely on the CA infra, same
story. You do not want to waste that on blanket surveillance and get caught.
You'd save it for the /special occasions/.

SSL-MITMing _everyone_, _all the time_, as in blanket surveillance, is
infeasible even without CA chains. You'll get called out on it by people who
_do_ check cert fingerprints once in a while.

This is a different kind of scalability than the computational order-of-
complexity one that you seem to be thinking about.

------
Navarr
This is very interesting.

I think a better approach might be to separate encryption from trust.

I'm thinking back to Chrome's announcement that they were considering making
http:// show some sort of warning.

What if:

* We make HTTP show up as "insecure" in browsers

* We make HTTPS work with self-signed certificates, and display websites encrypted that way the same way we currently show http

* We make HTTPS with Trust show up the way we currently show HTTPS

* Keep EV Certs the same

~~~
agwa
> * We make HTTP show up as "insecure" in browsers

> * We make HTTPS work with self-signed certificates, and display websites
> encrypted that way the same way we currently show http

Clear text and opportunistic encryption should have exactly the same UI.
Security UIs are already too confusing, and we shouldn't introduce more
complexity. Besides, one should never make a decision based on a clear text vs
OE distinction, since OE is so easily defeated.

~~~
belorn
I guess this is why most administrators use telnet instead of ssh, since they
don't have time to check ssh fingerprints... Well, not really. As it is, many
people do make a decision based on the clear text vs OE distinction.

Now if Firefox would just store and check fingerprints of self-signed
certificates, we would get the exact same benefit we got when telnet was
replaced by ssh.
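
The ssh analogy here is trust-on-first-use (TOFU) pinning: remember the
fingerprint on first contact, warn if it ever changes. A minimal sketch of
such a fingerprint store, with hypothetical names (not Firefox's actual code):

```python
import hashlib

class PinStore:
    """Trust-on-first-use store: like ssh's known_hosts, but keyed on
    the SHA-256 fingerprint of the server's certificate."""

    def __init__(self):
        self.pins = {}  # hostname -> hex fingerprint

    def check(self, host, cert_der):
        """Record the cert on first contact; flag any later change."""
        fp = hashlib.sha256(cert_der).hexdigest()
        known = self.pins.get(host)
        if known is None:
            self.pins[host] = fp
            return "first-use"  # nothing to compare against yet
        return "ok" if known == fp else "mismatch"
```

On a "mismatch" the browser would warn, exactly as ssh does when a host key
changes.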

~~~
IgorPartola
You don't check ssh fingerprints?

------
peterwwillis
Opportunistic Encryption is harmful because people think it's actually useful.
Here's the problem with opportunistic encryption:

      OE provides unauthenticated encryption over TLS for data that would otherwise be
      carried via clear text. This creates some confidentiality in the face of passive
      eavesdropping,

You should never assume that 'eavesdropping' is passive. In almost every
practical context of traffic interception, if you can read the transmission,
you can write as well. If you're going to the trouble of installing some kind
of tap, it makes more sense to make it read-write so you can actually _do_
something with that intercepted connection. Collecting data is great, but
hijacking is even better.

      and also provides you much better integrity protection for your data than raw
      TCP does when dealing with random network noise

This is a red herring, and if you need _real integrity_ it's totally useless,
since it doesn't prevent an active attacker from corrupting the data once
they've hijacked your unauthenticated connection. For the majority of
plaintext traffic, it's far more efficient to tolerate a small amount of
corruption than to tear down and re-create a connection every time a single
bit gets flipped.

To use OE, you have to set up an SSL service in the first place, so just take
the extra 15 minutes and make a real signed certificate. There is no such
thing as "kind of" secure, after all. Encryption is intended to provide
security. OE is not secure.

~~~
y0ghur7_xxx
> Opportunistic Encryption is harmful because people think it's actually
> useful. [...] You should never assume that 'eavesdropping' is passive.

- Google driving by with its Street View cars and capturing all your wifi
traffic for later analysis is passive.

- Someone sitting in a coffee shop and capturing your wifi traffic is passive.

OE protects against these attack vectors. It does not protect against other
attack vectors, but that does not mean it's harmful or useless.

Moreover, with pins it could be a first step to get rid of CAs.

~~~
peterwwillis
These are both examples of how OE would _not_ help you.

With google street view you're barely even in range long enough to pick up a
couple packets, if the person was even using their computer while the car
drove by; this is not what OE is designed to protect against. If the car sat
outside their house, they'd still get owned, and OE would still not be useful.

Sitting on a cafe's wireless is literally the de facto example of how to
actively sniff or inject traffic on an unsuspecting victim. The only more
bluntly useless case for OE is a state-sponsored MITM using coercion-induced
signed certificates.

And you can't get rid of CAs.

~~~
y0ghur7_xxx
You just continue to state the same thing: that OE does not protect against an
active attacker. We know that. Nobody says that it does. What it does is
protect against a passive attacker. And that makes it useful for some use
cases.

------
13throwaway
The problem with allowing self-signed certificates has always been
distinguishing whether a site should be signed by a CA or not. Consider the
following situation:

Alice sends Bob a link: http://example.com

Bob trusts Alice and now knows that example.com is probably meant to be
accessed over HTTP. Now for the next example:

Alice sends Bob a link: https://example.com

With the current implementation of browsers Bob knows that example.com should
present a CA-signed certificate. But what if example.com wants to encrypt
their data, but for whatever reason uses a self-signed certificate? Some
people say that Bob's browser should not display a "big scary" warning, but
instead display a UI similar to when accessing an HTTP site. However, in this
situation HTTPS has lost some meaning. I think http2 should work as follows:

http2:// - encrypted, not verified

https2:// - encrypted and verified

This way the protocol still conveys the same level of information.

However, if it were completely up to me, I would say ditch the CAs and use
namecoin to verify certificates.

~~~
JonathonW
That's more or less what OE does. It allows the browser to use HTTP/2 (and
encryption) to connect to a site, but keeps the user experience the same as
unencrypted HTTP.

That's why self-signed certificates work in this context; the identity of the
server's not supposed to be validated (unencrypted HTTP can't validate server
identities), so the browser can accept a self-signed certificate without
warning.

There's no change to how certificates are authenticated when accessing a site
via an https:// URL.

------
pavpanchekha
This is a great step, one that I've been hoping for. Of course encryption
without authentication is much worse than true encrypted transport (as in,
with authentication), since it only prevents passive adversaries, but any sort
of encrypted transport is better than plaintext. I'm also hoping this will
ease the transition to TLS, since you can get it up and running without
worrying about mixed-resource problems, then fix those one at a time.

~~~
orthecreedence
I'd argue that the CA system is false authentication because it's fairly easy
for the right players to tamper with. In that case, unauthenticated/encrypted
transport is only a little less safe than "authenticated"/encrypted transport,
but with the latter giving a higher illusion of safety than the former.

The only real trust that would work is distributed trust. The CA system is
kind of a joke.

That said, yes, it does protect coffee shop HTTPS browsing better than a self-
signed cert.

------
y0ghur7_xxx
Does someone know how to set this up serverside? Does apache support this? I
skimmed through the mod_spdy docs, but found nothing about opportunistic
encryption.
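
For what it's worth, the mechanism behind Firefox's implementation is the
Alt-Svc response header (not something mod_spdy exposes): the plain-HTTP
origin advertises an equivalent TLS endpoint that the browser may
opportunistically upgrade to. A hypothetical Apache sketch, assuming
mod_headers and an HTTP/2-capable TLS listener on port 443 serving the same
content:

```apache
# Hypothetical sketch: the port-80 vhost advertises an alternative
# HTTP/2 TLS service; the origin stays the http:// URL.
<VirtualHost *:80>
    ServerName example.com
    # "ma" is how long (in seconds) the client may cache the mapping.
    Header always set Alt-Svc "h2=\":443\"; ma=86400"
</VirtualHost>
```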

This, together with TACK or Certificate Transparency could be a CA killer.

------
upofadown
This is a neat idea, but we really need to define a standard for self-signed
certificates. Something like certificate pinning should be mandatory. It
should be done in a way that avoids confusion with a connection that has
identity protection, _but_ at the same time it is essential that the user
knows they are at least getting the protection of the self-signed certificate.
Perhaps we need something like a httpq:// resource identifier.

For bonus points such a standard should incorporate a web-of-trust system that
cannot be overridden by a bogus certificate in the regular system. Ideally a
self-signed certificate provided by someone you can physically visit should be
_more_ secure than what we have now.

Added: I guess my point is that we are thinking about this backwards. A
verified self-signed certificate is the gold standard, not some inferior
alternative. I should be able to walk into my bank and then walk out with a
certificate on a USB key that cannot be messed with. If we are going to
change things we should strive to end up with the possibility of something
better than what we have now.

------
teddyh
This is only for HTTP/2. It _could_ have been for HTTP/1.1 also, _if_ there
had existed a registered ALPN name for “HTTP/1.1 with TLS”. As it is now,
HTTP/2 has both variants, but HTTP/1.1 stands alone¹.

¹ https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids

~~~
patrickmcmanus
fwiw the h1 barrier was the lack of a scheme indication in an h1 transaction -
not really the alpn ids (which can always be registered if need be). But
without a scheme the server has to infer http vs https from the port/address,
and that wouldn't work with alt-svc

~~~
teddyh
Since this is a backport of a new feature from a new protocol to its older
predecessor protocol, it is not necessary to be so wary of slightly uglier
features. Simply having “Alt-Svc: http/1.1:443” imply HTTPS would do fine to
solve this specific problem, and I doubt anyone would really have a problem
with it.

------
Dylan16807
> 443 is a good choice :)

If it's self signed, and going to throw massive warnings with a direct
connection, shouldn't you use anything other than 443?

Any subtleties I should be aware of?

~~~
opejn
The main reason I would think it's a good choice is because if you decide to
get a CA certificate later, you just drop it in and you're done; no additional
configuration required.

If you don't have a CA certificate, you're probably not advertising your
https:// URLs anyway, so unless search engines are aggressively looking
for/prioritizing https transport, it wouldn't seem to hurt anything to run a
self-signed certificate there.

~~~
Dylan16807
HTTPS everywhere. And I would not trust an entire site to go unindexed.

There's a lot more to change if I want real HTTPS support. Changing a single
port number is the least of my worries.

------
quonn
This is good, but still not as good as a service that requires at least
unauthenticated encryption. The attacker does have to be active, but very
little effort is needed to break this without the user noticing - it's enough
to inject some packets to disrupt the TLS connection.

However, for HTTP it's the best thing possible at this point.

------
jesrui
Since the request and response sizes can reveal what public page you are
browsing over https, OE in the proposed form would not prevent user tracking:
http://sysd.org/stas/node/220
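
The linked attack is plain traffic analysis: even over TLS, ciphertext length
closely tracks plaintext length, so response sizes can fingerprint public
pages. A toy sketch with made-up sizes (no padding assumed):

```python
# Toy traffic-analysis sketch: match an observed (encrypted) response
# size against a pre-built catalog of public pages. Sizes are made up.
CATALOG = {
    "/index.html": 14_302,
    "/about.html": 8_117,
    "/contact.html": 3_940,
}

def guess_page(observed_size, tolerance=64):
    """Return the catalog page whose size is closest to the observed
    ciphertext size, if within `tolerance` bytes (TLS overhead varies)."""
    page, size = min(CATALOG.items(), key=lambda kv: abs(kv[1] - observed_size))
    return page if abs(size - observed_size) <= tolerance else None
```

Padding or length-hiding would defeat this naive version, but the proposed OE
form includes neither.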

------
zaroth
Please, please, can we have the same for WiFi?! Trade a key with the AP when
you first associate, and be done with it. The entire concept of a WiFi
password is a *%^$ waste of time.

