

On Mozilla's forced SSL - joepie91_
http://cryto.net/~joepie91/blog/2015/05/01/on-mozillas-forced-ssl/

======
Someone1234
Someone in the other thread had a genius solution to this:

\- All domains should now come free with a domain certificate.

Seriously, this entire problem set is solved by that one single change. It is
one of those ideas that, once you hear it, you cannot un-hear it. We already
know (as this blog post says) that domain certificates have zero cost
associated with creating them, so just bundle one with domain registration;
competition in that space will soon force the additional cost to near zero
(particularly if it is required).

This is a much better solution than "Let's Encrypt" because it scales better,
and we don't have all of our eggs in one basket.

So who do we lobby to make this happen?

PS - Domain registrars won't fight this too hard, as their customers will be
"forced" to buy a "bundled" domain cert. I am just hopeful that even if
initial prices are higher, the additional cost will eventually be absorbed
into the domain registration itself.

~~~
dragonwriter
> All domains should now come free with a domain certificate.

So, every domain registrar should also be a trusted CA?

~~~
Someone1234
If they meet the requirements other CAs meet, sure, why not? But otherwise
they'd have to find a CA to do the signing on their behalf.

------
tptacek
This post has an incomplete and somewhat dismissive take on certificate
pinning.

It's true that pinning involves a degree of trust on the certificate presented
on the first connection, and that is a weakness.

But that weakness is mitigated. Browsers also rely on the CA signatures for
that certificate (pinning _augments_ CAs, but doesn't replace them).

The potency of pinning is subtle, because nobody trusts CAs (and shouldn't!).
You have to think beyond just your browser, and you have to grok that CAs are
a finite resource for your adversaries. _You_ trust the CA-signed pinned cert
for an HPKP site on the first connection. But other browsers have had that pin
cached, and when the pin is tampered with, they _don't_ trust it. When they
see the broken pin, they can do more than just not trust the connection: they
can also relay the evidence that a CA is implicated in signing a certificate
that breaks a pin.

Google won't say so specifically, but it's not unlikely that some of the last
few CAs to have been burned by trying to sign Google sites were caught because
of pinning.

Pinning protects more than individual browsers; it also uses the installed
base of pinning browsers to protect _everyone_, not just those using pinning
browsers, by turning them into a global surveillance system for compromised
CAs.
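The mechanics behind that reporting channel can be sketched roughly as follows (field names per RFC 7469; the helper function and the placeholder SPKI blobs below are illustrative assumptions, not anything from this thread). Each pin is just a base64-encoded SHA-256 hash of a certificate's SubjectPublicKeyInfo, and the optional `report-uri` directive is where browsers POST evidence when a cached pin breaks:

```python
import base64
import hashlib

def hpkp_header(spki_blobs, max_age=5184000, report_uri=None):
    """Build a Public-Key-Pins header value (RFC 7469) from DER-encoded
    SubjectPublicKeyInfo blobs. The spec requires at least two pins,
    one of which must be a backup key."""
    pins = [
        'pin-sha256="%s"' % base64.b64encode(hashlib.sha256(blob).digest()).decode("ascii")
        for blob in spki_blobs
    ]
    directives = pins + ["max-age=%d" % max_age]
    if report_uri is not None:
        # Browsers that observe a pin violation POST a report here --
        # the "global surveillance system for compromised CAs" in action.
        directives.append('report-uri="%s"' % report_uri)
    return "; ".join(directives)

# Placeholder SPKI bytes for illustration only -- real pins hash actual keys.
print(hpkp_header([b"fake-spki-1", b"fake-spki-2"],
                  report_uri="https://example.com/hpkp-report"))
```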

------
simonhamp
"problems that absolutely need solving _before_ a forced global deployment of
TLS can happen"

I'm all for this switch to SSL. But there's no way Mozilla's announcement will
effect global deployment of TLS... not with 11.7% market share
(NetMarketShare.com, [https://www.netmarketshare.com/browser-market-
share.aspx?qpr...](https://www.netmarketshare.com/browser-market-
share.aspx?qprid=0&qpcustomd=0)).

The realist in me says this will just frustrate developers: staunch advocates
of Firefox will pester for working services, while higher-ups refuse to
justify the cost of catering to a possible minority userbase - forcing these
users to either switch browsers or move service providers.

~~~
kazazes
Chromium/Chrome have already proposed a step in the same direction, pushing
your market share figure up to 37.4%. [1]

[1] [https://www.chromium.org/Home/chromium-security/marking-
http...](https://www.chromium.org/Home/chromium-security/marking-http-as-non-
secure)

~~~
mbrubeck
Another related step from the Chromium developers:

[https://groups.google.com/a/chromium.org/forum/#!topic/blink...](https://groups.google.com/a/chromium.org/forum/#!topic/blink-
dev/2LXKVWYkOus/discussion)

------
jkire
> I do not believe that there is data that is "not important enough to
> encrypt".

I _do_ believe this. When I visit a static web page over SSL people sniffing
my connection are almost always going to have a good idea of which domain I'm
looking at (either by IP, or the DNS requests I just fired off, etc.), so the
attackers know what content I'm seeing. So why encrypt it? The benefit is that
we can trust that static content really did come from that domain and wasn't
changed by a MitM, but this can be solved by simply including a signature of
the content rather than encrypting everything.

SSL is really not cheap CPU wise; it can limit the usefulness of a small,
personal, cheap VPS for hosting even moderately popular content. Signing
static content is essentially free, since the signatures can be pre-computed.
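To illustrate that last point (a hedged sketch - the function name and the SRI-style digest format are assumptions, not something from the comment): pre-computing a digest for a static asset is a one-off hash at deploy time, so serving the file later involves no per-request cryptography at all.

```python
import base64
import hashlib

def asset_digest(data: bytes) -> str:
    """Pre-compute a Subresource-Integrity-style digest for a static asset.

    This runs once at deploy time; the resulting string can be embedded in
    pages or headers, and serving the file afterwards costs nothing extra.
    """
    raw = hashlib.sha256(data).digest()
    return "sha256-" + base64.b64encode(raw).decode("ascii")

print(asset_digest(b"body { color: #333; }"))
```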

If the writer really does believe everything should be encrypted (this
necessarily includes metadata), then I assume he would advocate that Mozilla
deprecate support for non-Tor connections? :)

~~~
joepie91_
Author here.

> I do believe this. When I visit a static web page over SSL people sniffing
> my connection are almost always going to have a good idea of which domain
> I'm looking at (either by IP, or the DNS requests I just fired off, etc.),
> so the attackers know what content I'm seeing.

The domain is not the same as the content. Far from it. Aside from that, a
site being 'static' is not a meaningful data point here.

> SSL is really not cheap CPU wise; it can limit the usefulness of a small,
> personal, cheap VPS for hosting even moderately popular content.

Being somebody who runs an HTTPS-only PDF hosting site off such a cheap VPS, I
disagree. It does not make a meaningful impact on resource usage.

I'm quite sure you could easily run a static site HTTPS-only from even a
LowEndSpirit VPS - these are €3 per _year_.

> If the writer really does believe everything should be encrypted (this
> necessarily includes metadata), then I assume he would advocate that Mozilla
> deprecate support for non-Tor connections? :)

No, I do not. Tor is yet another dependency that needs to be available, and
there are significant usability (and privacy!) concerns with routing all
traffic over Tor.

"Everything should be encrypted" does not come without cost. There are real-
world problems that need solving before this can be made a reality. Just
disabling non-TLS connections does not cut it.

~~~
jkire
> The domain is not the same as the content. Far from it. Aside from that, a
> site being 'static' is not a meaningful data point here.

For a lot of web pages this is true, but what about sites like restaurant
webpages? They have very few actual pages on their site, and they don't change
depending on who is making the request. If I request the site over HTTPS,
anybody that is sniffing my traffic will know that I'm visiting that domain
and thus can request the content themselves. The encryption here is not
providing anything.

If you are sending/receiving personalized data, e.g. logins, search requests,
etc., then you should be encrypting the requests. But you would still be
encrypting huge amounts of data for no reason: the CSS and static images used
on the site are not being transmitted with, or based on, any personalized
data, so any sniffer will already know that these resources would have been
downloaded by you anyway (and I don't care if an attacker sees the CSS of a
page I'm looking at anyway).

Requesting these static resources over plain HTTP, but authenticated via
hashes or signatures, doesn't provide an attacker with any information they
wouldn't already have.

> No, I do not. Tor is yet another dependency that needs to be available, and
> there are significant usability (and privacy!) concerns with routing all
> traffic over Tor.

And yet Tor is the only way to actually protect against sniffers tracking
which domains you visit. If we really care about encrypting all the content -
including non-personalized, static content - then I don't see why we wouldn't
also care about protecting the domains we visit.

> "Everything should be encrypted" does not come without cost. There are
> real-world problems that need solving before this can be made a reality.
> Just disabling non-TLS connections does not cut it.

I totally agree with this. However I think I come to a different conclusion.

TLS is usable precisely because of its insecurity: we have to trust so many
CAs. The key problem with any encryption architecture is key distribution,
and more often than not, the more secure the key distribution, the less
usable the end product. It's (relatively) easy to get a TLS certificate for
your domain that everyone else will trust, but precisely because it's so
easy, TLS is very vulnerable to malicious certificates.

For the most part TLS is fine: it offers a certain level of security, but is
still vulnerable. I'm happy to rely on TLS when browsing google search,
shopping online, etc.; however, I'm not particularly comfortable using it for
online banking, and I certainly wouldn't trust it for secure person-to-person
communication.

So instead of trying to fix TLS, I think we should have more choice for the
different levels of security I desire:

1\. No personalised data is being transmitted either way => I only need to
authenticate the remote content, not encrypt it.

2\. Personalized, but not particularly confidential, information is being
transmitted, e.g. google search, shopping, logins for sites where it's not
the end of the world if they get stolen => TLS; there's a chance stuff will
get intercepted, but it is convenient.

3\. Highly confidential information is being transmitted => some other
protocol or cert distribution mechanism, e.g. for online banking I might only
trust a certificate given to me directly by my bank IRL.

~~~
joepie91_
> For a lot of web pages this is true, but what about sites like restaurant
> webpages? They have very few actual pages on their site, and they don't
> change depending on who is making the request. If I request the site over
> HTTPS, anybody that is sniffing my traffic will know that I'm visiting that
> domain and thus can request the content themselves. The encryption here is
> not providing anything.

The value of encryption as a whole increases when _everything_ is encrypted,
because it is harder for an adversary to distinguish "important" traffic from
"unimportant traffic". It may not matter for that domain alone, but it
certainly matters in the bigger picture. It significantly increases adversary
cost.

> If you are sending/receiving personalized data, e.g. logins, search
> requests, etc., then you should be encrypting the requests. But you would
> still be encrypting huge amounts of data for no reason: the CSS and static
> images used on the site are not being transmitted with, or based on, any
> personalized data, so any sniffer will already know that these resources
> would have been downloaded by you anyway (and I don't care if an attacker
> sees the CSS of a page I'm looking at anyway). Requesting these static
> resources over plain HTTP, but authenticated via hashes or signatures,
> doesn't provide an attacker with any information they wouldn't already have.

False. Assets can leak very easily, disclosing what content you are looking
at. Just identify which assets are not loaded on _every_ page.

> And yet Tor is the only way to actually protect against sniffers tracking
> which domains you visit. If we really care about encrypting all the content
> - including non-personalized, static content - then I don't see why we
> wouldn't also care about protecting the domains we visit.

It's not. The exit node still sees your traffic - and this is also why routing
everything over Tor is a terrible idea (and incidentally, the same reason
devices like the Anonabox are fundamentally broken). If you tunnel personally
identifying traffic along with "anonymous" traffic, you're "contaminating" the
anonymous traffic with your identity.

> I totally agree with this. However I think I come to a different conclusion.
> [...]

Saying that TLS is "not entirely useless" is a very poor argument for not
working on making it better.

The "different levels" you suggest are pretty much already implemented as
such, except there is no "authenticate but don't encrypt" level, because it's
not a useful or desirable level to have.

~~~
jkire
> The value of encryption as a whole increases when everything is encrypted,
> because it is harder for an adversary to distinguish "important" traffic
> from "unimportant traffic".

Except you can classify a lot of traffic as unimportant by domain alone. If
someone is trying to steal my bank account information, encrypting all my
other traffic isn't going to help. The easiest attack against TLS is to get a
valid cert and then MitM, at which point it doesn't matter how much traffic
you're sending to that domain.

I would also be interested to know whether sending more encrypted data down a
TLS channel actually makes it harder to brute-force; considering browsers'
tendency to pool connections, there will probably be (relatively) few
distinct TLS connections.

> False. Assets can leak very easily, disclosing what content you are looking
> at. Just identify which assets are not loaded on every page.

Assets are cached by the browser, making it impossible to know whether a
newly downloaded page included those assets or not. Downloading in the clear
only those assets that appear on the root page, or on the majority of pages,
mitigates this attack.

(Also, irrespective of whether you used TLS: if you requested the HTML over
TLS and it included hashes for all its static content, you would have a much
stronger guarantee of the authenticity of assets downloaded from CDNs.
Hashing also makes caching much nicer and friendlier.)
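As a hedged sketch of what verifying such an embedded hash could look like (the helper name and the integrity-string format, borrowed from Subresource Integrity, are assumptions): the TLS-protected page carries the digest, and the asset fetched from the untrusted CDN is checked against it.

```python
import base64
import hashlib

def verify_asset(data: bytes, integrity: str) -> bool:
    """Check fetched bytes (e.g. from an untrusted CDN) against an
    integrity string like 'sha256-<base64 digest>' that was delivered
    inside a TLS-protected HTML page."""
    algo, _, expected = integrity.partition("-")
    if algo not in ("sha256", "sha384", "sha512"):
        return False
    digest = base64.b64encode(hashlib.new(algo, data).digest()).decode("ascii")
    return digest == expected

css = b"body { margin: 0 }"
tag = "sha256-" + base64.b64encode(hashlib.sha256(css).digest()).decode("ascii")
print(verify_asset(css, tag))          # matching content passes
print(verify_asset(css + b"!", tag))   # tampered content fails
```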

This feels more like an argument along the lines of "it's not really worth
having a separate method for doing authentication without encryption; it
complicates things and probably won't be usable very often", which is
perfectly valid and an argument I'm sympathetic to. It just doesn't support
the notion that all data is important.

> It's not. The exit node still sees your traffic - and this is also why
> routing everything over Tor is a terrible idea (and incidentally, the same
> reason devices like the Anonabox are fundamentally broken). If you tunnel
> personally identifying traffic along with "anonymous" traffic, you're
> "contaminating" the anonymous traffic with your identity.

But you can tunnel TLS through Tor(?) The idea behind using Tor here is to
stop attackers from being able to trivially tell which domains you're looking
at.

> Saying that TLS is "not entirely useless" is a very poor argument for not
> working on making it better.

My (probably badly worded) point is that it's not as easy as saying "let's
make it better". Making TLS more secure means coming up with a way of issuing
certificates in a more trusted fashion; I don't see how you do that without
making it harder to get a cert.

The only other tech I'm aware of that is trying to address this is
Perspectives, but even that is not perfect.

> The "different levels" you suggest are pretty much already implemented as
> such...

Browsers are getting better at supporting pinning, but I don't think any
allow you to manually add domains? It's not something that has been
advertised and encouraged. My bank certainly doesn't advertise its certs'
fingerprints in its branches.

> ...except there is no "authenticate but don't encrypt" level, because it's
> not a useful or desirable level to have.

And I disagree with you, obviously.

Unless there are actual, provable, benefits to enforcing encryption
everywhere, I don't like the idea of anyone removing the ability for me to
make the choice. If you think all data should be encrypted, by all means
encrypt all your data.

Personally, I don't care if an attacker knows what BBC articles I read. Yes, I
know all the dangers that might befall me, but I've made an informed choice
based on a risk analysis of my current situation and, well, I just really
don't care one way or the other.

~~~
joepie91_
The quotes are getting very long, so I'm just going to respond directly to
points without quoting here.

You seem to be oversimplifying the notion of "privacy" to "stuff like bank
details". That is incorrect. _Any_ kind of browsing data that a user does not
want exposed to third parties falls under this banner. For some that's just
their bank details, for others that's every single site they visit.

The point is that you can't decide for other people what is "private" to them.
Therefore, the only acceptable solution is to make privacy opt-out - and that
is done by encrypting everything by default.

Asset caching depends heavily on the site, and on whether different pages use
unique assets. A cache is not a security feature, was not designed as such,
and should not be treated as such.

Yes, you can tunnel TLS over Tor. It doesn't afford you any additional
confidentiality. Your domains are still being leaked, just in a different
place - and routing everything over Tor _still_ exposes you to the same
traffic correlation issues, just now it's domains that are being correlated
rather than all request/response data.

Making TLS more secure entails removing the requirement of 'trust' as much as
possible. That does not necessarily translate to it being harder to obtain
certificates. A good example of this is hidden services - it's trivial to
obtain an .onion identifier, yet since it's self-authenticating, it does not
require trusting a third party.
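To make that self-authentication concrete, here is a sketch of how a .onion address is derived under the current v3 scheme (which postdates this thread; the derivation follows Tor's rendezvous spec, and the all-zero key below is a placeholder, not a real service). The address is computed purely from the service's own public key, so knowing the address is enough to authenticate the service - no CA involved.

```python
import base64
import hashlib

def onion_v3_address(pubkey: bytes) -> str:
    """Derive a v3 .onion address from a 32-byte ed25519 public key.

    The address encodes the key itself plus a checksum and version byte,
    so the identifier is self-authenticating by construction.
    """
    if len(pubkey) != 32:
        raise ValueError("expected a 32-byte ed25519 public key")
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return base64.b32encode(pubkey + checksum + version).decode("ascii").lower() + ".onion"

print(onion_v3_address(bytes(32)))  # placeholder key for illustration
```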

The real problem with your argument shows itself in your very last paragraph -
"Personally, I don't care [...]". You are extending your own personal point of
view to _everybody else on the internet_, and it doesn't work that way.
Others _will_ have different privacy requirements, and those should be
accommodated.

Just because you don't care, that doesn't mean you get to decide that nobody
else cares either.

EDIT: Also, just to emphasize this: I am _not_ arguing that encryption should
be _forced_. I'm arguing that it should be _default_. That is something very
different.

------
protomyth
So, under this scheme, what is the best practice for dealing with all the web
servers in devices such as printers, routers, copiers, embedded systems, etc.?
Quite a lot of these have no provision for https and, in quite a few cases,
lack the CPU power to do it.

~~~
finnn
Most printers/routers/copiers offer HTTPS, from what I've seen. Sometimes it's
on by default, usually not. They just use self-generated, self-signed
certificates. There are problems with that, but CPU power isn't one of them.

~~~
greglindahl
My experience with large numbers of smart powerstrips is that they support ssh
and https, but not reliably. Their telnet and http are reliable. I don't
know why this is the case, but there you have it.

~~~
nine_k
Smart _powerstrips_ are still a minority of connected devices.

Printers, routers, etc — anything that can afford a $5 ARM or MIPS core — have
more than enough power to allow TLS access.

Getting a certificate for each of them to provide a Web interface is another
story.

In a corporate environment, the IT department will probably install its own
certificates, automatically trusted by corporate browsers. Home-oriented
devices will probably use massively-copied certificates instead of unique
ones. That's not as secure as a per-device unique certificate, but definitely
more secure than no encryption at all.

~~~
finnn
But it mandates that you click through an SSL warning, which no user should
ever have to do unless they are actually testing SSL-related stuff. Otherwise,
it's just teaching everyone bad practices.

~~~
nine_k
If self-signed certs are accepted silently and shown as "not secure", the way
plain HTTP is accepted and shown (per
[https://news.ycombinator.com/item?id=9472037](https://news.ycombinator.com/item?id=9472037)
proposal), the user won't need to click through anything.

Self-signed HTTPS is in no way less secure than unencrypted HTTP.

------
diafygi
There's one solution that the author didn't cover: start treating self-signed
certs as unencrypted, then deprecate http support over a multi-year phase-
out. That way, website owners who want to keep their status quo can just add
a self-signed cert and their users will be none the wiser.

For https there are two major objectives. 1) Prevent MITM attacks. 2) Prevent
snooping from passive monitoring. Self-signed certs can prevent #2, which the
IETF has adopted as a Best Current Practice
([https://tools.ietf.org/html/rfc7258](https://tools.ietf.org/html/rfc7258)).
I'm much more in favor of at least achieving one of the two objectives of
https, rather than refusing to do anything until we are able to achieve both.

Here's a proposed way of phasing this plan in over time:

1\. Mid-2015: Start treating self-signed certificates as unencrypted
connections (i.e. stop showing a warning; the UI would just show the globe
icon, not the lock icon). This would allow website owners to choose to block
passive surveillance at no cost to them and without any problems for their
users.

2\. Late-2015: Switch the globe icon for http sites to a gray unlocked lock.
Self-signed certs would still get the globe icon. This would incentivize
website owners to at least start blocking passive surveillance if they want
to keep the same user experience as before. Also, this new icon wouldn't be
loud or intrusive to the user.

3\. Late-2016: Change the unlocked icon for http sites to a yellow icon.
Hopefully, by the end of 2016, Let's Encrypt will have taken off, with
frameworks like WordPress including tutorials on how to use it. This
increased uptake of free authenticated https, plus the ability to still use
self-signed certs for unauthenticated https (remember, this still blocks
passive adversaries), would give website owners enough alternative options to
start switching to https. The yellow icon would push most over the edge.

4\. Late-2017: Switch the unlocked icon for http to red. After a year of
yellow, most websites should already have switched to https (authenticated or
self-signed), so now it's time to drive the nail in the coffin and kill http
on any production site with a red icon.

5\. Late-2018: Show a warning for http sites. This experience would be
similar to the self-signed cert experience now, where users have to manually
choose to continue. Developers would still be able to choose to load their
dev sites, but no production website owner in their right mind would choose
to use http only.

Here's two relevant Bugzilla bugs:

Self-signed certificates are treated as errors:
[https://bugzilla.mozilla.org/show_bug.cgi?id=431386](https://bugzilla.mozilla.org/show_bug.cgi?id=431386)

Switch generic icon to negative feedback for non-https sites:
[https://bugzilla.mozilla.org/show_bug.cgi?id=1041087](https://bugzilla.mozilla.org/show_bug.cgi?id=1041087)

~~~
paganel
> 5\. Late-2018: Show a warning for http sites.

Honest question: why would I show a warning for an http-only site which only
displays photos from my latest vacation? (Assuming said photos are hosted on
my blog.) Or random thoughts of mine about tramway-spotting? (Again, assuming
said ramblings happen on my blog.)

IMHO this kills a large portion of the open-web.

~~~
pornel
* Your website can be manipulated to contain anything: it could be a script attacking another website (Baidu vs GitHub), it could be a sneaky redirect to a phishing page (tab-nabbing), or just a ton of nasty tracking ads injected by an ISP.

* There are still sites that should use HTTPS, but don't. Browsers can't know whether these are just holiday photos, or photos that are privacy sensitive or can even cause a visit from the secret police in some countries. It's better to err on the side of security.

------
ozten
Progress is never easy.

Firefox will be a forcing function for fixing the most critical issues raised
in the blog post.

------
cleverjake
The author appears not to be aware of the upcoming free SSL certificate
service from Mozilla, the EFF and others -
[https://letsencrypt.org/](https://letsencrypt.org/)

~~~
joepie91_
Did you actually read the article? I've dedicated an entire section of the
article to Let's Encrypt.

