
Deprecating Non-Secure HTTP - talideon
https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/
======
nosefrog
I should be happy about this -- who _wouldn't_ want the entire web to be
encrypted -- but SSL is so broken for normal people. SSL is expensive
(wildcard certificates run $70 a year and up), confusing (how does one pick
between the 200 different companies selling certificates?), and incredibly
difficult to set up (what order should I cat the certificate pieces in
again?).

If SSL doesn't change, this move will cut the little folks out of the
internet. What are Mozilla's values?

~~~
bigdubs
Why this project: [https://letsencrypt.org/](https://letsencrypt.org/) is so
important.

From the site: Let’s Encrypt is a new Certificate Authority: It’s free,
automated, and open. Arriving Mid-2015

~~~
nothrabannosir
That's just _one_ project, and it doesn't even exist yet.

The web is moving faster every day, apparently. I sure do hope that project
will be all it's cracked up to be.

For example, I need IP-only certs for a new project I'm working on (waiting
for DNS to propagate to all clients is too unreliable and slow). If
letsencrypt doesn't do that... well then I'd have to hope real hard for a
competent CA out there who has an automated process available that allows IP-
only certs. And whatever their price, if companies start following Mozilla's
lead too soon, I'll have to pay up.

The wording in the article is perhaps not so damning yet, but it's still
making me uneasy that they put out this press release while there are
currently ZERO viable solutions for this.

~~~
thwarted
_For example, I need IP-only certs for a new project I'm working on (waiting
for DNS to propagate to all clients is too unreliable and slow)._

This doesn't make any sense. You're not waiting for DNS to propagate to
clients; if anything you're waiting for recursive DNS servers at shitty ISPs
to time out their caches when they are configured to not honor the RR's TTL
sent by the authoritative server in a misguided attempt to make the internet
"faster".

But this is completely avoidable without having to use IPs or certificates
with CN/SAN that are IPs: get a wildcard cert and rotate the subdomain name.
It's a new hostname, so it busts intermediate DNS caches by being new queries;
since it's a new query, there's no "propagation to clients" to wait for when
you change IPs, all queries for the new name hit authoritative servers.
Additionally, it looks infinitely more legit than a website that is accessible
only via IP address. And doubly additionally, if you're going through so many
IPs, presumably you'll be rotating some out and those may be assigned to other
people who can then get their own cert for that IP and impersonate you.
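
The idea in a minimal Python sketch, assuming you control a wildcard DNS
record (*.example.com) and a matching wildcard cert; the zone name is
illustrative:

    import secrets
    import socket

    def fresh_hostname(zone="example.com"):
        # A label no resolver has seen before: no intermediate cache can
        # answer for it, so the query must reach the authoritative server.
        return "%s.%s" % (secrets.token_hex(4), zone)

    host = fresh_hostname()
    print(host, "->", socket.gethostbyname(host))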

~~~
nailer
How often are DNS caches configured to ignore TTLs? That sounds awful.

I assumed the grandparent simply didn't understand the need to lower his TTLs.

~~~
aroch
Google DNS seems to ignore TTL, but it does the opposite of what the GP is
saying; gDNS drops its cache before the TTL has expired.

~~~
thwarted
Unless you have a reference for Google saying they purposely do this, it is
more likely that the cache is dropping LRU entries as it fills up. Also, I
doubt there are only a trivial number of actual resolvers behind the Google
public DNS endpoints, so you may be seeing the result of multiple individual
servers without a shared cache initially populating their caches.

Some quick tests with dig seem to indicate that, at least for the region I'm
in, my queries to Google's public DNS are being rotated between 4 or 5
servers, as evidenced by the TTLs being returned.
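
A rough Python equivalent of those dig tests, assuming the third-party
dnspython package; several distinct, interleaved TTL countdowns would suggest
several independent caches behind the same anycast address:

    import dns.resolver  # pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8"]

    for _ in range(10):
        # Watch how the TTL counts down (or jumps) across repeated queries.
        print(resolver.resolve("example.com", "A").rrset.ttl)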

------
UnoriginalGuy
I agree with trying to phase out HTTP, but I think their method is
"annoying." What do features have to do with HTTP vs. HTTPS? It just seems
like an arbitrary punishment.

Wouldn't it just be significantly easier to simply change the URL bar styling
to make clear that HTTP is "insecure"? Like a red broken padlock on every
HTTP page?

That has the following advantages:

\- HTTP remains fully working for internal/development/localhost/appliance
usage (no broken features).

\- Users are reminded that HTTP is not secure.

\- Webmasters are "embarrassed" into upgrading to HTTPS.

\- Fully backwards compatible.

Seems like a perfect solution where everyone wins.

~~~
joshmn
I can imagine a lot of personal sites will suffer from this. Most are
sitting on something like Eleven2 or Dreamhost, which require a dedicated IP
for an SSL certificate; the user then has to buy the certificate and figure
it out for himself (it's not trivial for the average "webmaster"), or buy it
from their host at a healthy markup.

Yes, the hosts could wildcard. Yes, there are other solutions out there. But
for the average Joe who is blogging about his vacations and family? They're
going to be completely lost.

Why don't shared hosts just wildcard? Shared certificate? Well, let's think
about it... Charging ~$5/month/dedicated IP is a nice upsell, and getting $70
for an installed SSL cert that costs them $10 from their SSL cert reseller,
that takes them 2 minutes to configure... That's a nice slice of pie. I'd take
that bet any day.

~~~
opejn
I think you're overstating how bad things are. Dreamhost, for example, no
longer requires a dedicated IP for SSL, though they do still recommend it for
e-commerce. They are charging $15/year for a CA-signed certificate. Granted,
that's for a single-site cert and they don't support wildcards under this
scenario, but the vacation blogger isn't likely to need that anyway.

------
kazinator
This is stupid. There are all kinds of use cases where you don't care who
knows what you're looking at, or whether it is authentic.

Say I navigate to some restaurant's web page using HTTP. Even if I used HTTPS,
someone spying on my traffic would know what I'm reading, if the IP address is
a dedicated server for that web site only. Whether I use HTTP or HTTPS, they
could infer that I'm interested in visiting the restaurant.

Secondly, I'm only interested in the opening hours. That is not classified
information.

I suppose that a MITM attack could be perpetrated whereby the attackers
rewrite the opening hours. I end up going to the place while it is in fact
closed (and the area happens to be deserted), making me an easy target for the
attackers to rob me.

Okay, okay, please deprecate HTTP; what was I _thinking_!

And that restaurant better get a properly signed certificate; no "self signed"
junk! Moreover, I'm not going to accept it over the air the first time I
visit, no siree. DNS could be redirecting me to a fake page which also has a
signed certificate. I'm going to physically go to the restaurant one time
first, and obtain their certificate from them in person, on a flash drive, then
install it in my devices. Then I'm going to pretend I was never there and
don't know their opening hours, and obtain that info again using a nearly
perfectly secured connection!

~~~
peteretep

        > Even if I used HTTPS, someone spying on my traffic would 
        > know what I'm reading, if the IP address is a dedicated
        > server for that web site only
    

How would they know the IP is a dedicated server for that website only, rather
than simply a default?

~~~
icebraining
Since the SSL negotiation happens before the HTTP request, either there's only
one certificate for that IP or you need to use SNI, which reveals the domain
you're requesting.

You could have multiple domains in the certificate to avoid identification,
but that has its own problems.
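
A small standard-library Python sketch of where SNI sits; the hostname is
illustrative:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        # server_hostname is the SNI value: it goes out in the cleartext
        # ClientHello, even though everything after the handshake is
        # encrypted.
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])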

------
cdnsteve
I have to say, I actually disagree with this move. While I think the
intentions sound noble, and I'm all for a more secure web, I also believe that
a web browser has no business dictating that the entire web should be forced
into HTTPS.

I don't see any benefit in this type of blanket, all or nothing, type of
approach. In fact, I see it doing more damage than good. Encrypting blogs,
news websites, etc still makes no sense to me. I'm actually disappointed in
Mozilla for looking at doing this. As a developer I respect many of their
products and see them as champions of the web in a lot of ways.

HTTPS does not:

\- protect a user from malware on their own system with keylogging taking
place

\- increase security in outdated and insecure websites (eg: old known
exploitable code)

\- prevent any browser drive-by downloaders or exploits

\- increase the security of the web server itself (the web stack that's
serving requests) - yeah, that's you using a private VPS without doing kernel
updates.

These are likely the major factors in why people have security issues. What
is forcing HTTPS on the entire web actually doing? Who is it benefiting? The
government can still snoop your data in flight. If someone is connected to a
fake wifi endpoint, there are on-the-fly SSL interception tools out there...

Do we still need TLS for actual secure transactions that deal with personal
data? Yes, of course. That's what it is intended for.

Do we need TLS to read the latest TMZ post about Miley Cyrus? You decide...
(oh and it's http if you were wondering)

~~~
teraflop
HTTPS provides authentication, not just confidentiality.

When you visit "blogs, news websites, etc" do you think there's no value in
being able to know for sure that the content is exactly what the owner of the
site intended? Even though ISPs have proven themselves willing to intercept
and modify that content in transit?

[http://arstechnica.com/tech-policy/2013/04/07/how-a-
banner-a...](http://arstechnica.com/tech-policy/2013/04/07/how-a-banner-ad-
for-hs-ok/)

[http://arstechnica.com/tech-policy/2014/09/08/why-
comcasts-j...](http://arstechnica.com/tech-policy/2014/09/08/why-comcasts-
javascript-ad-injections-threaten-security-net-neutrality/)

~~~
dukky
But HTTPS doesn't 'let you know for sure that the content is exactly what
the owner of the site intended', as it doesn't protect you from XSS.

~~~
abraham
Then you fix the XSS vulnerabilities and implement CSP. Shitty security
practices are not an excuse for more shitty security practices.
[https://developer.mozilla.org/en-
US/docs/Web/Security/CSP](https://developer.mozilla.org/en-
US/docs/Web/Security/CSP)
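
For illustration, a minimal sketch of emitting a restrictive CSP header with
Python's standard library; the policy and port are illustrative:

    import http.server

    class CSPHandler(http.server.SimpleHTTPRequestHandler):
        def end_headers(self):
            # Refuse inline scripts and third-party sources, blunting many
            # XSS payloads even if an injection bug slips through.
            self.send_header("Content-Security-Policy", "default-src 'self'")
            super().end_headers()

    http.server.HTTPServer(("localhost", 8000), CSPHandler).serve_forever()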

------
byuu
Meanwhile, OpenBSD 5.7 came out today, with the following security fixes in
LibreSSL (arguably the most secure SSL library so far):

"Multiple CVEs fixed including CVE-2014-3506, CVE-2014-3507, CVE-2014-3508,
CVE-2014-3509, CVE-2014-3510, CVE-2014-3511, CVE-2014-3570, CVE-2014-3572,
CVE-2014-8275, CVE-2015-0205 and CVE-2015-0206."

So if I were running a TLS-enabled site using LibreSSL from OpenBSD 5.6, I'd
have been exposed to potentially 11+ CVEs. A little sooner with OpenSSL, and I
would have been exposed to Heartbleed. And who knows how many CVEs will arise
before 5.8 is released?

Why is it so impossible to write a secure TLS library? Why should I put my
entire server at risk to appease the attempts of Mozilla and Google to prop up
the CA business? Sorry, but I'll stick to parsing lines of text.

Let 'em remove HTTP completely. Hopefully after they break 90% of the web,
we'll get some real user revolt, and some real competitors in the web browser
space might emerge. Maybe from some people who actually listen to what their
users are asking for.

I guess now we know what that "signed extensions only" change was for: what do
you think they're going to do when someone submits a "Restore HTTP
Functionality" add-on in the future?

~~~
Noctem
Well, frequently the vulnerability of those CVEs is breaking or downgrading
the crypto... or in other words: if exploited, the connection could become as
insecure as HTTP.

So your argument is that since locks can occasionally be picked, doors
shouldn't have locks? What exactly is the massive burden with HTTPS? The
computational cost is tiny and will continue to become tinier, there are free
cert providers like StartSSL and more coming soon, and the implementation is
simple enough that anyone managing a server should be able to handle it
easily.

The number of websites where I wouldn't prefer encryption and identity
authentication is around zero, and the number of websites where I'm okay with
someone injecting arbitrary JavaScript is exactly zero. The time people spend
making flawed "if you have nothing to hide, you have nothing to fear" or
"crypto libraries/CAs are bad, scary, and hard to use" arguments would be much
better spent actually trying to improve those circumstances for the inevitable
and necessary shift to HTTPS everywhere.

~~~
byuu
> So your argument is that since locks can occasionally be picked, doors
> shouldn't have locks?

A faulty lock on my house doesn't turn into Heartbleed.

The thing is, _I_ don't need a lock on my server that serves up static, legal
content. You might think it's a problem, that the NSA is going to spy on you,
or China is going to inject attacks into your requests to my server, but
that's your problem.

I'm not going to run a massively buggy TLS library with an API guide that
would take a whole team of engineers _weeks_ to decipher, just because you're
intensely paranoid about accessing game-related data over HTTP.

Seriously, look at the GnuTLS documentation sometime. It's psychotic. As is
MatrixSSL, PolarSSL, OpenSSL, and NSS. The closest to sanity I've ever seen
was libtls, which is only on OpenBSD, still has lots of CVEs popping up, and
can't do non-blocking mode.

> What exactly is the massive burden with HTTPS?

1\. write your own HTTPS server. I'll wait a few months, or

2\. find a library that's easy to use and won't expose my server to
Heartbleed-like attacks, and

3\. pay me $70/yr for the wildcard cert I would need.

I'll cover the extra CPU costs, since you say they're so small. (even though
when people say "small", they're counting overhead as a percentage against a
site running a bloated beast like Wordpress in PHP + MySQL.)

> there are free cert providers like StartSSL and more coming soon

That don't provide wildcard certs (and I have a wildcard CNAME entry that I
make use of).

> The number of websites where I wouldn't prefer encryption and identity
> authentication is around zero

And you're free to not visit my site, just like I wouldn't ever patronize a
webstore that wasn't HTTPS. That's how markets are supposed to work. I don't
see why your browser has to make the decision for the both of us.

> and the number of websites where I'm okay with someone injecting arbitrary
> JavaScript is exactly zero

Honestly ... I would be okay with blocking Javascript over HTTP. But I think
that's more because I just hate Javascript :P

> would be much better spent actually trying to improve those circumstances

You seriously want me to write a TLS library?

My dream goal would actually be to have it built into the sockets layer. If
it could be enabled as easily as a setsockopt(SO_TLS_CERTIFICATE,
(void*)certificatedata, ...); and OS updates could fix the security issues,
I'd be a lot more inclined to get on board with the programming side.
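
No such socket option exists today, but as a sketch of how close current
stacks already get, Python's standard ssl module wraps a plain listening
socket in a few lines (the cert path and port are illustrative):

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.pem")  # cert + key in one file

    with socket.create_server(("0.0.0.0", 8443)) as srv:
        with ctx.wrap_socket(srv, server_side=True) as tls_srv:
            conn, addr = tls_srv.accept()  # TLS handshake happens on accept

And OS/library updates to the ssl module are what fix the security, which is
roughly the property being asked for here.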

I don't have a solution to the wildcard cert issue. I can't very well start
up my own CA to give them out for free. I guess it would at least be nice to
see if
they ever tone down self-signed certs from "WORSE THAN HITLER" to "at least
equal to HTTP" in terms of warning messages. People keep talking about it, but
it's been what? Over a decade now? I'll believe it when I see it.

------
KeytarHero
Can someone explain why HTTPS is necessary for a webpage where I don't log in
or submit any information?

For example, take the xkcd homepage. Not only do I not log into it, there's
nowhere I _could_ log in. The only input is a search box (which seems to be
disabled at the moment anyway). Is it really a security risk if my
communication with xkcd's servers is unencrypted? (Yes, xkcd has a store and a
forum, and I understand why you'd need HTTPS on those subdomains - but I don't
see why the main domain needs it.)

I agree with the parts of their plan to disable browser features that could be
a security risk to non-HTTPS pages - that makes total sense. But it seems
absurd to prevent static pages from using future CSS layout features just
because they're not using HTTPS.

~~~
rogerbinns
Intermediaries can (and already do) silently cause the content to be tracked,
altered or otherwise modified against both your and the site owner's
interests.

How would you feel if they inserted javascript to mine bitcoins?

~~~
peteretep

        > How would you feel if they inserted javascript to mine bitcoins?
    

I couldn't care less. JavaScript to DDoS GitHub, on the other hand...

------
skybrian
If things like "python -m SimpleHTTPServer" don't work, then developers will
switch browsers. I don't think anyone is seriously considering what it will
take to migrate the long tail of development tools that use HTTP on localhost.
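
If local HTTPS were ever required, the stdlib one-liner does have a rough
equivalent; this sketch assumes a pre-generated self-signed cert plus key in
"localhost.pem":

    import http.server
    import ssl

    httpd = http.server.HTTPServer(
        ("localhost", 4443), http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("localhost.pem")
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()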

~~~
vortico
And what about testing small applications on remote servers like "dev.my-
personal-site.com"? I don't want to pay $15 for an SSL certificate and 15
minutes of my time just so I can get my dumb lunch break tetris HTML app
running on the machine I SSH into from my tablet.

~~~
derefr
I am long past confused and heading toward awed, at this point, that it's not
a common-sense practice for every web developer to generate a personal self-
signed root-CA cert, and install it on all of their machines. It's as basic as
having an SSH or PGP key.

Setting up a new box? Put your CA-cert in its trust roots. Then use your CA to
generate a server cert for it; plop that in /etc/nginx and wherever else. Now
it's secure!
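
A condensed sketch of that workflow, assuming the third-party Python
"cryptography" package; names and lifetimes are illustrative, and a real
server cert would also carry a subjectAltName extension:

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def name(cn):
        return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

    def cert(subject, issuer, pubkey, signing_key, days, is_ca):
        return (x509.CertificateBuilder()
                .subject_name(subject).issuer_name(issuer).public_key(pubkey)
                .serial_number(x509.random_serial_number())
                .not_valid_before(datetime.utcnow())
                .not_valid_after(datetime.utcnow() + timedelta(days=days))
                .add_extension(
                    x509.BasicConstraints(ca=is_ca, path_length=None),
                    critical=True)
                .sign(signing_key, hashes.SHA256()))

    # One long-lived personal root CA, self-signed once...
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_cert = cert(name("My Dev CA"), name("My Dev CA"),
                   ca_key.public_key(), ca_key, 3650, True)

    # ...then a server cert per box, signed by that CA, no warnings needed.
    srv_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    srv_cert = cert(name("dev.example.test"), ca_cert.subject,
                    srv_key.public_key(), ca_key, 365, False)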

This is exactly the original use-case for X.509 certificate authorities:
pairing devices on a _private network_ without having to give each of them a
set of their peers' keys in advance. You have a private network that you
run services on? You're a CA.

And really, in the dev-environment case, you actually want client-auth, too,
because then you get "clients who don't have a CA-issued client cert can't
connect" for free.

In proper X.509, the server auths the client just like the client auths the
server—it's really more of an equal-peers "we're both trusted by the CA—the
network owner—so we should both trust each-other" kind of thing. The public
Internet centralized X.509 model—where the client has a huge list of CAs that
the user doesn't even know the contents of, and the server doesn't check
_anything_ —is a very strange and non-idiomatic implementation of the premise.

~~~
recursive
> It's as basic as having an SSH or PGP key.

And you're surprised that not every developer has done this? A minority of the
developers I've ever worked with have ever done any of these things.

~~~
derefr
I'm really talking about the kind of developers that hang out here—people who
regularly set up their own staging environments, use those "tunnel into my dev
box" services, etc. Most of us here certainly know SSH, and probably have used
GnuPG at least once. But it's still relatively unlikely, statistically, that
you or I have ever touched the openssl(1) command.

~~~
hackmiester
Well, when we are talking about a change in Firefox, we are talking about
every developer, not just the ones who hang out on HN.

------
diafygi
Here are two relevant Bugzilla bugs:

Self-signed certificates are treated as errors:
[https://bugzilla.mozilla.org/show_bug.cgi?id=431386](https://bugzilla.mozilla.org/show_bug.cgi?id=431386)

Switch generic icon to negative feedback for non-https sites:
[https://bugzilla.mozilla.org/show_bug.cgi?id=1041087](https://bugzilla.mozilla.org/show_bug.cgi?id=1041087)

Here's a proposed way of phasing this plan in over time:

1\. Mid-2015: Start treating self-signed certificates as unencrypted
connections (i.e. stop showing a warning, but the UI would just show the globe
icon, not the lock icon). This would allow website owners to choose to block
passive surveillance without causing any cost to them or any problems for
their users.

2\. Late-2015: Switch the globe icon for http sites to a gray unlocked lock.
The self-signed certs would still get the globe icon. This would incentivize
website owners to at least start blocking passive surveillance if they want
to keep the same user experience as before. Also, this new icon wouldn't be
loud or intrusive to the user.

3\. Late-2016: Change the unlocked icon for http sites to a yellow icon.
Hopefully, by the end of 2016, Let's Encrypt will have taken off, and a lot
of frameworks like WordPress will include tutorials on how to use it. This
increased uptake of free authenticated https, plus the ability to still use
self-signed certs for unauthenticated https (remember, this still blocks
passive adversaries), would give website owners enough alternative options to
start switching to https. The yellow icon would push most over the edge.

4\. Late-2017: Switch the unlocked icon for http to red. After a year of
yellow, most websites should already have switched to https (authenticated or
self-signed), so now it's time to drive the nail in the coffin and kill http
on any production site with a red icon.

5\. Late-2018: Show a warning for http sites. This experience would be
similar to the self-signed cert experience now, where users have to manually
choose to continue. Developers building websites would still be able to
choose to continue to load their dev sites, but no production website owner
in their right mind would choose to use http only.

~~~
zmmmmm
Why the hate for self-signed certificates?

I would personally rather see those promoted and methods developed to securely
bootstrap them than make us all reliant on centralised CA infrastructure. The
centralised CAs are all at the mercy of their governments and hence, in my
opinion, ought to be considered almost as insecure as self-signed certs.

EDIT: I think I misunderstood your comment - reading again it sounds like you
are also in favour of self-signed (hopefully so).

~~~
talideon
Until support for DANE and DNSSEC becomes widespread, self-signed certs
can't really be trusted by third parties, unless it's a site for personal
use.

(BTW, if you're not using a conventional CA, you'd be best off being your
own CA, and signing your certs with a CA certificate you've generated rather
than simply self-signing the cert. It's a little more trouble in the short
term,
but it means that each time you subsequently need to generate a new cert, you
don't need to put up with warnings everywhere because it'll be validated by
your own CA cert. The downside of this is having to install the CA cert
everywhere. That's what I do for my private stuff. There are tonnes of
tutorials online on how to do it.)

------
jsn
I envy you, citizens of the free world :) You (mostly) can use HTTPS, avoid
government surveillance, and use new shiny Mozilla features (for whatever they
are going to be).

It's not the same in e.g. Russia (and I'm sure it's not just Russia). In
Russia, the Web is now officially being censored by the state. They have a
national register of prohibited resources -- basically, a huge list of URLs.
Every ISP must block all access to those URLs, or else.

So if a page (perhaps, a comment page?) on your site enters the register, and
it is served over unencrypted HTTP, ISPs can use DPI to block the access to
just that specific page -- which sucks, but at least your site is still
accessible. If, however, you use HTTPS -- then ISPs have no other choice but
to block all traffic to your site entirely. Given that choice, many webmasters
(myself included) will have to choose plain HTTP.

~~~
peteretep

        > then ISPs have no other choice but to block all traffic
        > to your site entirely. Given that choice, many
        > webmasters (myself included) will have to choose plain
        > HTTP
    

At some point, blocking CDNs at IP level becomes too much of an economic
burden on a country to be feasible. We've seen an unwillingness by the Chinese
to block access to GitHub; presumably this means Fastly (their CDN provider)
is safe for a while.

~~~
jsn
Did you know that Russians had github blocked for several days? Anyway, you're
talking about counter-censorship warfare. Yes, some of those measures will be
somewhat effective sometimes, but the costs (not necessarily even monetary)
are actually quite substantial, and it's definitely not for everyone.

------
jamiesonbecker
This potentially removes the relative anonymity that the entire non-commercial
web offers (and in fact was largely built on, post-DARPA). Free DV
certificates _may_ help minimize that negative effect, but this entire scheme
_still_ further increases reliance on a badly broken CA system.

This seems like a somewhat rushed idea with good intentions but without
sufficient community discussion. Rather than put all our eggs in one basket
with LetsEncrypt et al., which are noble efforts to fix a broken system, are
there things we can do right now in terms of favoring self-_authentication_
of self-signed certs? This whole thing feels a bit like a witch hunt to
punish non-HTTPS sites.

~~~
teraflop
> are there things we can do right now in terms of favoring self-
> authentication of self-signed certs?

That's a good question, but I've yet to see any justification for thinking the
answer is "yes".

If an attacker controls your network connection and/or DNS, what possible
information could you obtain to prove the authenticity of a website, without
reference to an external source of authority?

~~~
jamiesonbecker
Agreed. That's why it was a question. :) I'm trying to get people to start
thinking in that direction, rather than in a central source of authority
(which also means DNSSEC or DNS TXT's are out)

------
chimeracoder
I'm very glad to see this. It's embarrassing to think that, just a few years
ago, many major websites used HTTP for all but their login pages, and it took
Firesheep to get them into gear.

> For the first of these steps, the community will need to agree on a date,
> and a definition for what features are considered “new”. For example, one
> definition of “new” could be “features that cannot be polyfilled”.

I hope that includes WebRTC, since WebRTC can be used to figure out your local
IP address, which (when combined with your public IP address) is essentially a
unique identifier[0]. WebRTC is a technology that enables some great things
(like Firefox Hello!), but it's a MASSIVE privacy hole[1], and one that I
can't imagine justifying for non-secure endpoints.

_EDIT_: Added link to proof-of-concept attack

[0] [https://www.browserleaks.com/webrtc](https://www.browserleaks.com/webrtc)

[1] [https://github.com/diafygi/webrtc-
ips/blob/master/index.html](https://github.com/diafygi/webrtc-
ips/blob/master/index.html)

~~~
tracker1
It's worth bearing in mind that in the beginning, https was a significant CPU
overhead... Since 2004 or so, much less of one. And since around 2010 CPU is
rarely the bottleneck for web applications.

I do find it interesting that anyone starting a significant effort after
2010 would bother with a partially-https site, with back-and-forth jumps for
login. It seems to me like it's actually _more_ work than just having it all
https and flat.

~~~
manigandham
Amazon does this today, browsing is http.

------
code_reuse
I view this as an attempt by various power brokers to subvert the power of
the World Wide Web by attacking its decentralized nature. In the beginning
(like
now) it'll be relatively simple for everyone to get their hands on the SSL
cert they need, but the risk is that in the future, after support for HTTP has
been reduced it could become more difficult to acquire the certificates
required to deliver the user experience that you wish to deliver (not just in
terms of price, but in terms of censorship).

In addition to making the web more centralized, forcing everyone into HTTPS
actually makes it much easier to effect broad scale traffic analysis. On top
of that, many info-sec experts suspect that the actual cipher in play here
may eventually be proven to have significant weaknesses at some future date.
AND HTTPS is more expensive to support in terms of bandwidth, CPU, and
increased latency. It could result in more coal being burned each year to
push all of those extra bytes around.

~~~
walterbell
In such a scenario, wouldn't an alternative/forked browser emerge with support
for an HTTP/anonymous web?

There is also censorship risk in named-data and content-centric networking,
which offer multicast and caching benefits, but rely on uniquely identified
content.

~~~
code_reuse
Certainly there will always be alternative browsers, but since they would be
used by a small minority, the censors would effectively have the ability to
determine which publishers are "cleared" to reach the broadest demographics.
That alone would be enough if your censorship goal was to be able to sway
public sentiment.

------
abhinai
Hopefully they will also introduce a standard and free way to get SSL
certificates. I do not like the idea of having to buy new certificates every
year (and all the hassle that comes with installing the certificates) just to
maintain a very basic website.

~~~
pckspcks
Nope. The goal is to make running a server only available to corporate
entities. It reduces competition from folks like yourself.

~~~
natrius
If the cost of an SSL certificate is a barrier for you to compete, you should
probably do something else.

~~~
im3w1l
It just takes a tiny cost to turn bright 10-year-olds away from
experimenting.

~~~
adventured
Cloudflare's free plan has SSL now, which a 10-year-old could utilize. While that
opens up a potential MITM attack, I don't believe it's worse than having no
SSL at all (others argue it is, on the premise that it creates a false sense
of security).

~~~
Spivak
Amateur web development shouldn't depend on having an account with a 3rd party
service which can arbitrarily decide whether or not to sign your cert.

~~~
MichaelGG
Well you still depend on 3rd parties to register a domain. And one to provide
a connection, if not a server.

~~~
dijit
/etc/hosts

When I was experimenting with computers, I had a WAMP executable on my LAN.

The fewer parties involved, the better.

~~~
Dylan16807
Localhost will not be restricted. If you're making people edit their hosts
file you can make them bypass any security warnings.

------
rossng
This is a pretty bold move, but I like the intent behind it. Hopefully Mozilla
can pull it off without causing any problems for normal users.

Presumably it will all be synced with their plans to launch a free CA[1] in
the near future.

[1] [https://letsencrypt.org/](https://letsencrypt.org/)

~~~
NeutronBoy
One thing I've never been able to figure out from Let's Encrypt's website -
will you be able to get a certificate, _without_ hosting your own instance? Or
will it be limited to servers you can actually install their program on? Also,
I assume they'll get the root CA included by all major vendors/browsers?

~~~
schoen
You don't have to run the Let's Encrypt client, but you do have to be able to
do things to prove that you control the domain. Currently the Let's Encrypt
client assumes that it's being run on the same machine on which domain control
will be proved (though not necessarily the same machine where the cert will
eventually be deployed). Someone could write another client application which
gives instructions to complete the challenges manually, which is a feature
that's occasionally requested.

The CA will be cross-signed by IdenTrust, which is accepted by mainstream
browsers, so those browsers will also accept the certs we issue.

~~~
vtlynch
So you don't have to run the Let's Encrypt client? I was under the impression
you did because without running the client there was no way to communicate
with the Let's Encrypt service?

Or do you mean, you can have multiple servers and only need to run the client
on one of them?

~~~
schoen
I mean that you need to run some client software, but it doesn't have to be
the client software we're writing. Other people can implement ACME, for
example in hosting provider infrastructure or in server software or other
configurations. There could be an ACME client where the verification steps
indicated to the user are performed manually rather than automatically. You
could probably even speak ACME with curl, although quickly generating valid
JSON objects that contain valid cryptographic signatures might be a bit
challenging. :-)

It's also right that you can have multiple servers and only run the client on
one of them, if you're willing to copy key material from one server to
another.

~~~
vtlynch
Ah gotcha! Thanks for clarifying.

------
peeters
My question whenever this comes up is how will the web respond to the millions
of caching devices out there that will now provide no bandwidth savings?

ISPs and companies all over the world cache static HTTP content (i.e. HTTP
resources with proper caching headers). Doesn't endpoint-to-endpoint
encryption basically kill that?

What I'd love is to have HTTPS for encrypted traffic, and signed HTTP for
traffic that doesn't need encryption. So you would use the certificate to
authenticate the payload, but a cache would still be able to deliver the
content (because a replay would be valid).
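
A toy Python illustration of that idea (not any real protocol): the origin
signs the response body once, caches replay the bytes verbatim, and clients
verify against the site's public key:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    body = b"<html>static, cacheable page</html>"

    # The origin signs the payload; any cache can serve body + signature.
    signature = key.sign(body, padding.PKCS1v15(), hashes.SHA256())

    # Clients check it; verify() raises InvalidSignature if the payload was
    # altered in transit. A replayed-but-unmodified response still verifies.
    key.public_key().verify(signature, body, padding.PKCS1v15(),
                            hashes.SHA256())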

------
nakovet
From the comments section:

> Not all data needs to be secure. Not all websites need to be secure.
> Requiring HTTPS means additional compute and additional servers securing
> something may not need to be secured and provides no benefit – only cost.
> Free and open information should be (optionally) free of encryption as well.

Indeed, there is still a portion of the internet that could benefit from
being SSL-free.

------
mymymythe
I have a small blog on a home server. Basic HTML and static content and I
don't care who views it. I can't get a static IP address.

Some things about this decision don't seem thought out:

\- Who regulates the companies selling certificates? ($5 for a cert seems
shady.) Are cert companies fronts for other entities?

\- Does this really prevent malware?

\- Will self-signed certificates get a bit more respect?

\- How does this stop Lenovo from adding preinstalled malware that
circumvents security certificates?

------
blackhaz
This is so wrong. Partially because HTTP is used as a vehicle to deliver
applications. This blurring of responsibilities results in the messy state the
Web is going to be in. I see parallels with systemd and Linux here: poor
design decisions, the chase to accommodate an ever-widening audience of
Internet users, one-button devices. Just recently I saw a post from a guy
somewhere on a dial-up link in Nepal saying that it is impossible to write
e-mails anymore. You have to write them in Notepad and then copy-paste them
into your web-based e-mail client, otherwise the "client" is too slow. And
no, adding
fancy animations to my GUI is not progress.

------
thyrsus

        "It should be noted that this plan still allows for usage of the “http” URI scheme in legacy content."
    

This is an important qualifier to the headline, and means that Firefox will
remain workable with things that won't implement https for a decade.

~~~
talideon
Hence 'deprecating': they're not getting rid of HTTP, it's just that HTTPS
will be preferred, with HTTP sticking around for legacy purposes.

------
wowaname
For Tor and I2P hidden services, HTTPS is redundant so I don't really see the
point in punishing people for things like this. Loopback sites are an obvious
exception to the "HTTPS is better" rule as well.

~~~
MattSteelblade
I'm pretty sure you want to avoid HTTP websites while on Tor. HTTPS encrypts
your connection, while Tor anonymizes it.

~~~
SSLy
This makes sense for clearnet stuff. Hidden services are encrypted before
transit.

------
jjarmoc
Please, can we stop calling it SSL?

SSL means something very specific; something that people should no longer be
deploying. The article notably uses the term 'Non-secure HTTP' which at this
point in time means HTTPS leveraging TLS (probably at least 1.2) but leaves
some room for future interpretation as newer versions or entirely different
standards arise.

No one is advocating for 'SSL' here, and continuing to use the term 'SSL' or
'SSL/TLS' when we really mean 'TLS' further confuses the situation.

~~~
sp332
It's not that specific. You could even negotiate a downgrade with a TLS server
to use SSL. The first 3 versions of the protocol were named SSL and the later
ones were named TLS but they're not really different.

~~~
jjarmoc
The differences are significant when it comes to the security of the
underlying protocol, and the downgrade is why it's important you refuse to
support SSL entirely. SSL of any version (v2 or v3; the v1 you refer to was
never publicly in use) comes with security problems that are resolved in TLS.

I won't bore you with the details; they're well explained at
[http://disablessl3.com/](http://disablessl3.com/) among other places. All
major browsers have ended support for SSL, and more secure alternatives have
been available for years.

It's not a high risk; attacks require scenarios that may not be common, but it
remains true that there's no reason to deploy SSL today.
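
Refusing the old protocols is typically a one-line configuration; in
Python's ssl module, for example:

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # SSLv2/v3 and TLS 1.0/1.1 handshakes are refused outright.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2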

~~~
sp332
TLS 1.0 was also vulnerable to BEAST. I'm assuming that pointing to TLS 1.0 as
the "minimum" is temporary. Over time, we will decide that the cutoff should
be TLS 1.1 and we'll deprecate TLS 1.0. At that point, everything you're
saying about SSL will be true of TLS 1.0. It's really just a difference in
version number.

~~~
jjarmoc
Yes, it likely will. That's probably why the article mentions a deprecation of
"Non-Secure HTTP" rather than prescribing a specific TLS version. It's the
sort of language that will stand the test of time as newer protocols become
deprecated. The comments here, however, largely encourage "SSL" which is poor
advice.

BEAST can be mitigated through ciphersuite selections and other measures. This
makes it somewhat different than POODLE which is a protocol design flaw for
which no reliable mitigation exists.

Suggesting folks not deploy SSLv3 is hardly a controversial statement. It's
not just a difference in version number, it's a difference in protocol
specification and name. When we say 'Use SSL' a well intentioned reader may
follow that guidance and implement SSLv3, or worse disable support for TLS.
Words mean things.

------
multinglets
Oh cool, so with increasingly stringent SSL requirements, we're basically
entirely phasing out the ability to run a website without a certificate
authority's involvement.

So instead of all this bullshit from Chrome, Firefox, et al., can I please
just send some huge check to GoDaddy or Verisign or whomever and continue to
use the internet as an open platform and not some managed service where we try
to hold everyone's hand because we've conditioned them to spew their personal
information all over the web all day?

------
zaroth
Mentioned this last time, but since I didn't see it elsewhere in the thread,
will mention it again... what about LAN resources served over HTTP like NAS,
Printer, AP, etc.? These devices don't have DNS; forget about SSL.

Is the entire local subnet going to be a secure origin like localhost? Because
that sounds problematic... What I want is a way to single-click pin a self-
signed certificate to "turn it green".

~~~
nitrogen
_These devices don 't have DNS_

They should, via mDNS AKA Zeroconf AKA Bonjour AKA Avahi. Often,
_printer-name.local_ port 80 or port 631 will lead to the printer's status
page.
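
A quick way to check from Python, assuming the OS resolver handles mDNS
.local names (as most desktops do via Bonjour/Avahi); the printer name is
illustrative:

    import socket

    # Each entry is (family, type, proto, canonname, sockaddr).
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "printer-name.local", 631):
        print(family, sockaddr)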

~~~
zaroth
So vendors are supposed to pre-install a certificate based on that? What
happens when you rename it? What happens if you have two of the same AP in the
house?

------
vonklaus
At first, I was apprehensive. As a newer web developer I have never had a
secure site. I have a small portfolio and a few tiny side projects I work on,
nothing with >10 users. I will have to learn more, do more, and pay more to
support HTTPS.

When I look at it through a different lens, I believe the internet should be
as private as possible. Encryption is a solution. I think we should all make
a push to make things more secure. Hopefully, we can destroy the cottage
industry around SSL certs and it will be bundled in as an expected value-add
with either a hosting or DNS purchase. I think that $1 a month is enough rent
for a cert; I saw an SSL cert offered for $600, which is quite problematic if
it represents the threshold someone would have to cross to get one.

Hopefully, mozilla will work to sort out the CA problem, which is the real
thing holding back HTTPS adoption.

------
MBlume
Question: if I'm prototyping a webapp on my machine -- one that will
ultimately run behind apache or nginx or an amazon load balancer or something
-- can I still prototype it in my browser with new features enabled without
getting a valid https setup running on my localhost?

~~~
JoshTriplett
Yes; localhost specifically counts as secure.

~~~
tracker1
Hopefully they put in a whitelist option (similar to IE's security zones) so
you can whitelist your development domains, in the case of hostnames, or when
you hit a local VM.

I agree with another comment that a red broken lock for HTTP connections would
be a better approach.

~~~
teraflop
Chrome is proposing the same thing as Firefox (deprecating advanced features
over HTTP) and additionally wants to visually mark HTTP connections as
insecure: [https://www.chromium.org/Home/chromium-security/marking-
http...](https://www.chromium.org/Home/chromium-security/marking-http-as-non-
secure)

------
general_failure
For a start, I would like to see http content and self-signed https content
being marked the same way. The fact that self-signed https gets a shocking
warning right in your face while http is just let through makes me a very sad
camper.

------
sergiotapia
Why do we have to pay for an SSL certificate? Shouldn't it be free?

~~~
angersock
You would think so.

The problem is that browsers have gone and made self-signed certs suspect, and
yet not created, for example, a well-established foundation for signing such
certs.

~~~
icebraining
It's exactly what they're doing with Let's Encrypt.

------
cJ0th
I can see how https is technically better than http. But wouldn't a https-only
web put too much trust in companies who create certificates? I can't think of
a concrete danger but it sounds dangerous that the degree of security depends
on monetary interests.

------
typish
Without a solution to everyone needing to pay for a certificate and identify
themselves this seems a bit premature. Maybe browsers will relax the "This is
an evil self signed certificate on the site" warning when they do it.

------
nfoz
Why isn't encryption in the network stack, at a lower level than HTTPS?

~~~
lclarkmichalek
It is: HTTPS is HTTP over TLS ("Transport Layer Security"). However, there
are various features, like pinning and HSTS, that need to be controlled by
the application layer, which is why we talk more about HTTPS than TLS.
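
HSTS, for instance, is just a response header the application emits, and
browsers only honor it over HTTPS; a minimal Python sketch with an
illustrative max-age:

    import http.server

    class HSTSHandler(http.server.SimpleHTTPRequestHandler):
        def end_headers(self):
            # Tell the browser to insist on HTTPS for this host for a year.
            self.send_header("Strict-Transport-Security",
                             "max-age=31536000; includeSubDomains")
            super().end_headers()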

------
pluma
In the worst case, Mozilla (i.e. Firefox and Firefox OS) will make itself
irrelevant because it breaks the Internet for its users.

In the best case, this will make website owners value HTTPS as a marketing
decision (rather than a boring non-mandatory privacy decision, because let's
be honest, what business really cares about its users' privacy as much as it
cares about marketing goals?). Much like how Apple helped make Flash
irrelevant (at the cost of impacting their users' experience, just like
Mozilla is doing now).

------
XorNot
If you really want to tackle SSL, make it less stupid. Self-signed
certificates? I want these pinned and treated as secure. I want a
notification
if they change around the time they expire and a really big warning if they
don't.

If we must have central trust sources, then have central hash servers so when
I visit a new self-signer I can externally verify the hash.

~~~
icebraining
Say the owner of a website with a self-signed cert fears it might have been
compromised, and decides to create a new cert. How is the user supposed to
distinguish that from a MITM?

~~~
XorNot
That's what the central hash servers are for. Am I being MITM'd? Well,
ignoring a global adversary, the problem is usually local. But CAs don't
solve the global problem either.

~~~
icebraining
Wouldn't you have to tell the central servers about each and every site you
visited?

------
perlgeek
Maybe a good first step would be to make self-signed SSL certificates appear
less scary than unencrypted HTTP in Firefox.

While SSL with self-signed certs doesn't make MITM attacks much harder, it
does prevent passive eavesdropping. Yet the Firefox UI seems to imply the
contrary, by making it harder to use https sites with self-signed certs than
unencrypted sites.

------
istvan__
This kind of implies that HTTPS is secure. :) I don't think there is
anything wrong with using HTTP internally in a datacenter for data that is
not sensitive (like monitoring, statistics, etc.). I guess you can still
access these in legacy mode. I think the title should be that HTTP is getting
phased out for public internet use, or something.

~~~
zobzu
Actually, it's not really "safe behind a firewall".

What if I'm inside the network and tell your monitoring that everything's OK
while I break stuff?

~~~
istvan__
and how is HTTPS protecting us against that? if you took over that IP address
you can initiate a valid HTTPS session using the compromised server's identity
and communicate with the monitoring service happily reporting fake data over
HTTPS. I don't see your point. The question here is btw. is it worth X amount
of dollar to protect this service with a secure channel? Sometimes the answer
is yes, sometimes it is no.

~~~
zobzu
The internal network is always made of servers and clients, just like the
external one. If you compromise one server, you have access to the data
transiting there, but not the rest.

If you compromise one client, you have access only to the data that client
sends.

In particular, very few internal networks enforce L2 security (i.e., it's
possible to sniff all data on the same VLAN as you).

------
Sami_Lehtinen
The whole point of certs is to verify site ownership. If getting certs is
too easy, then they're worthless, as they already are. Email verification of
domain ownership isn't good verification at all. These certs, even if
trusted, are no different from self-signed certs, IMHO.

------
etiam
So, to what extent does this address the concerns of Tim Berners-Lee,
expressed in "TLS Everywhere, Not Https: URIs"
[https://news.ycombinator.com/item?id=9424327](https://news.ycombinator.com/item?id=9424327)
?

------
mymymythe
I understand that http will still be supported but downgraded by both
Mozilla's browser and Google search. How about distinguishing websites that
are only static content from websites that have some forms or dynamic
content?
------
tux
There was an interesting article recently on HN about this:

"Please consider the impacts of banning HTTP"

[https://news.ycombinator.com/item?id=9406876](https://news.ycombinator.com/item?id=9406876)

~~~
pluma
That was the opposite though. There's an obvious difference between making a
website HTTPS-only and making a browser HTTPS-only (or blacklisting features
for non-HTTPS websites).

------
J_Darnley
Great! Now I just need to set up SSL on my Raspberry Pi to access the
several web interfaces I have running there. Oh wait, I'm no longer running
Firefox, so I won't have to worry about this immediately.

~~~
icebraining
If you're running Chrome instead, you still do; they have similar plans.

------
e12e
So SSL on localhost? That seems a bit over the top. Can we then assume that
all browsers will include a trusted CA/cert for localhost? That doesn't work
with, e.g., SSH tunnels. Or will we need "developer" browsers and "app"
browsers to work with localhost? Either for test/dev or for deploying "apps"
with nodejs etc?

I'm not sure if considering localhost to be secure/"encrypted" (access to all
features) would be a good or bad idea...

------
timwaagh
this is very bad news indeed for hobbyists

------
citrin_ru
HTTPS everywhere is a huge waste of energy. Let's Encrypt ==> let's
transform energy into heat.

------
ereckers
Great! Make it simple to implement for all and make it affordable.

------
vtlynch
If anyone needs help or has questions about SSL, please ask! I work at a
company that provides SSL and would also be happy to give out discounts to
anyone here.

------
profinger
How does this affect personal websites?

------
lorddoig
I think there are some issues here.

Browser vendors have indirectly created the money sucking machine that is the
certification industry by requiring potential root CAs to have been audited to
a very thorough standard (e.g. WebTrust).[0] Most of these audits implicitly
require dedicated premises, extreme physical security measures, dedicated
hardware, multiple dedicated uplinks, 24x7 personnel, and more. Even browsers
that don't use their own cert store prop up this system by using the OS store
which does require said audits. (And if anyone doubts how instrumental the
browsers are to the continuance of this system, imagine how relatively niche
the X509 industry would become if they moved to using something else.) As
anyone who has tried to grok the documents at [0] will attest, it's a damn
scary thing. Honestly you may as well try to start a bank. Or a country.

This level of difficulty creates a monopoly (or oligopoly, to be more
precise). Few people have the will/finance to do it, so few do, and those who
do get to take the piss with pricing. As I previously wrote[1], this means
_FOUR_ companies control the CAs that issue 91% of _ALL_ the internet's TLS
certificates.

LetsEncrypt seems like a good thing, and it might be, but it also might not
be. It is, underneath all the PR, pretty much just another root CA who holds
itself to the same auditing standards. It is no-doubt a very expensive
undertaking and as such we may reasonably assume that there will be few, if
any, additional zero-cost, fully-supported CAs in the future: and herein lies
one problem. Unless you have specific requirements that LetsEncrypt just
doesn't support, you have no reason not to use them. So a future CA landscape
might be _ONE_ company controlling 99% of the internet's secrets. Oh dear.

What's more, we should not underestimate the importance of cheap shared
hosting. The internet is a medium for information and nothing more, and
_everybody_ has something that they might wish to broadcast. Currently,
deprecating vanilla HTTP is akin to deprecating the ideas of millions of non-
experts who rely on shared hosting to participate. We're telling them to join
us in the land of VPSs and terminal emulators/Plesk (shudder), or to use one
of the many PaaS services we've created over their own homemade solution. This
is fundamentally anti-technology, which is supposed to harness innovation and
make lives easier. This point is especially pertinent when you consider that
the vast majority of these sites probably don't need encryption at all, so
it's not even like you can mitigate the pain with direct benefits - because
there are none.

Finally, TLS is a pain in the arse to administer. Really - it's not fun. I'm
no stranger to it, and even I get a bit of a sinking feeling when it has to be
done. To this day I'm bound to using Chrome, because no matter what I do I
cannot get Firefox to _parse_ (never mind accept) my NAS's self-signed cert.
Requiring TLS across the board is tantamount to requiring many millions of
hours of pain across the world.

To hold up some moral torch that does _not_ have universal applicability and
actively makes life difficult, and then declare it as canonical truth that all
must adhere to is arrogance of the highest order. A great deal of chat in the
tech community is dedicated to lambasting short-sighted and ill-conceived laws
(think surveillance, copyright, patents, etc.) and yet here we are, _making
them_. We have to do better.

    
    
        [0]: http://www.webtrust.org/homepage-documents/item27839.aspx
        [1]: http://lorddoig.svbtle.com/heartbleed-should-bleed-x509-to-death

------
IgorPartola
As someone who participated in the referenced discussion and on HN, I have to
say I am very happy with this outcome. Seems like reason has won. Now, if
Google follows Mozilla's example, we might actually be able to pull this off.

