
Still Why No HTTPS? - andimm
https://www.troyhunt.com/still-why-no-https/
======
gmiller123456
1. The requirement to involve a 3rd party certificate authority is a needless
power grab. Giving in ends the hope that it will ever get changed.

2. There is currently only one free cert provider; if there are ever issues
with it, your users will see a scary error message which will make them think
there are security issues with your website.

3. Downloading and running code from a 4th or 5th party and giving it access
to your config files is not "more secure".

4. The culture of fear around HTTPS, which dictates that only the "most
secure" or "newest" protocols and cipher suites are to be used. This prevents
older clients from working, where HTTP works just fine.

5. HTTPS is needlessly complex, making it hard to implement. There have been
several security vulnerabilities introduced simply by its use.

6. If you can't comply with the OpenSSL license, implementing it yourself is
a hopeless endeavour.

SSL was developed by corporations, for corporations. If you want some security
feature to be applicable to the wider Internet, it needs to be community
driven and community focused. Logging in to my server over SSH has far more
security implications than accessing the website running on it over HTTPS.
Yet, somehow, we managed to get SSH out there and accepted by the community
without the need for Certificate Authorities.

~~~
xvector
> The requirement to involve a 3rd party certificate authority is a needless
> power grab. Giving in ends the hope that it will ever get changed.

Genuinely curious - what alternatives do you have in mind? Are there any WoT
models that interest you more?

> There is currently only one free cert provider, if there are ever issues
> with it, your users will see a scary error message

Isn't this the point?

> Downloading and running code from a 4th, or 5th party and giving it access
> to your config files is not "more secure".

Could you elaborate? Have you written your whole stack from scratch? You are
running millions of lines of code that you will never read but have been
implemented by other parties.

> HTTPS is needlessly complex making it hard to implement.

Isn't this done with robust battle-tested libraries and built-in support in
modern languages?

---

Mainly I'm just wondering why you're letting perfect be the enemy of good.
There's always room for improvement in everything, but I don't think user
privacy is a reasonable sacrifice to make.

> Giving in ends the hope that it will ever get changed.

Abstaining from HTTPS won't be seen by anyone as a protest, but as
incompetence, whether you find that justifiable or not.

~~~
jart
DNSSEC is superior to both PKI and WOT. It's basically free. It makes chains
of accountability transparent (hint: it's the dots in the URL). It provides
the benefits of hierarchical trust except with democratic control, and is
operated on film in public ceremonies.

We don't have a robust understanding of who exactly operates PKI, but we do
know that it's de facto governed by a company on Charleston Road, since CAs
only have their root keys listed in things like web browsers at their
pleasure. We also know that Charleston Road rewards CAs for their loyalty by
red-zoning and down-ranking the folks who don't buy their products. Products
which should ideally be deprecated, since SSL with PKI is much less secure.

Can anyone guess who's stymied progress in Internet security, by knuckle-
dragging on DNSSEC interoperation? It reminds me of the days of Microsoft
refusing to implement W3C standards. Shame on you, folks who work on
Charleston Road and don't speak up. You can dominate the Internet all you
like, but at least let it be free at its foundation.

~~~
tptacek
Who's knuckle-dragging on DNSSEC interop? The entire Internet community. It's
been almost 25 years, and 3 major revisions of the protocol, and still it has
almost no adoption --- virtually none of the most commonly queried zones are
signed. Why is that? Because DNSSEC is awful.
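
(The adoption claim is easy to check for any given zone; a quick sketch with
dig, with example domains that reflect the status quo at the time of writing:)

    # a signed zone returns RRSIG records alongside the answer;
    # an unsigned zone returns none
    dig +dnssec cloudflare.com A    # signed: RRSIG records present
    dig +dnssec google.com A        # unsigned: no RRSIG in the answer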

Obviously, you can't replace "SSL with PKI" (you mean TLS, and/or the WebPKI)
with DNSSEC, because DNSSEC doesn't encrypt anything. Whether or not you enact
the ritual of adding signature records to your DNS zone, you will still need
the TLS protocol to actually do anything securely, and the TLS protocol will
still not need the DNS in order to authenticate connections.

Instead, what DNSSEC (DANE, really) hopes to do is replace LetsEncrypt, which
is not "basically" but instead "actually" free, with CAs run by TLD owners.
Who owns the most important TLDs on the Internet? The Five Eyes governments
and China. Good plan!

~~~
jart
What we mean by DNS security is that when you visit your bank's website, you
know it's actually your bank. We're less concerned about concealing DNS
queries from routers and more concerned about preventing them from forging
responses. Eavesdropping won't empty your bank account. Spoofing can, and
encryption doesn't matter if the remote endpoint isn't authentic.

Right now you need to ping Google's servers each time you visit a website to
ask if it's safe. We love Google but they're a private company that can do
anything they want. If you feel comfortable with them being the source of
truth for names on the Internet, then the problem is solved.

Most of us would prefer it be controlled by ICANN, which is a non-profit not
controlled by any one government, and which lets anyone from around the world
who cares enough show up and take part in Internet governance. Controlling
names was the purpose they were founded to serve. I say let them.

~~~
tptacek
DNSSEC doesn't protect your bank account. Your bank uses TLS to establish
connections with you, and TLS is authenticated, and does not rely on the DNS
when establishing connections.

DNSSEC is in fact controlled by world governments, who have de facto authority
over the most important TLDs. When a CA misbehaves, Google and Mozilla can
revoke them, as they've done with some of the largest and most popular CAs.
You can't revoke .COM or .IO.

------
dm33tri
Why do browsers punish non-verified certs much harder than no-cert?

If I want to quickly host my page and use encryption, then I have to go
through all that hassle to make it work. Perhaps allow use of self-signed
certificates on the same level as HTTP instead of blocking my website.

~~~
cesarb
Since there's no way to distinguish a non-verified (self-signed or not)
certificate from an attack, browsers have to treat them identically to an
attack (otherwise an attacker would simply pretend to be a non-verified
certificate, to get the more lenient treatment).

On the other hand, a no-cert (unencrypted) connection _can_ be distinguished
from an attack on an encrypted connection: the browser knows _a priori_
(through the protocol in the URL) that the connection _is_ supposed to be
unencrypted.
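
(A quick way to see this asymmetry in practice, using curl as a stand-in for
the browser and the badssl.com test site:)

    curl https://self-signed.badssl.com/     # fails: unverifiable cert, treated like an attack
    curl -k https://self-signed.badssl.com/  # works only because -k explicitly opts out of verification
    curl http://neverssl.com/                # plain http: no verification was ever promised, so no error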

~~~
lucideer
I think the point here is that there's also no way to distinguish a http
request from an attack.

It's fair enough to argue that a self-signed cert could be an attack, but
so could any http request.

> _a no-cert (unencrypted) connection can be distinguished from an attack on
> an encrypted connection: the browser knows a priori (through the protocol in
> the URL) that the connection is supposed to be unencrypted._

I don't understand how that allows one to distinguish it from an attack.
Knowing that a connection is supposed to be unencrypted is just equivalent to
knowing that a connection could be under attack.

~~~
knome
Rightly punishing the connection for having the trappings of security when it
actually lacks them doesn't mean we need to punish openly insecure traffic.
End users have been told time and again that http is insecure, and so it's
fine to leave it. End users should also be able to trust that https means
secure, without having to distinguish between "secure" and "secure unless I'm
being mitm'd", and needing to understand what any of that means.

~~~
mrob
Most end users have no idea what HTTPS is. They've just been (incorrectly)
taught that the padlock means it's secure. Disable the padlock for self-signed
HTTPS, and disable the CA-signed HTTPS-only features, and it becomes strictly
better than HTTP.

~~~
namibj
Especially because, with perfect forward secrecy, there is no way to MITM
only those connections that end up serving a self-signed certificate: the
connection first negotiates an ephemeral key with which everything, including
the certificate, will be encrypted.

This means that with eSNI and at least one CA-signed cert on the IP, any
attacker runs the risk of having to spoof the CA-signed certificate.

~~~
zozbot234
A sophisticated attacker might know that you were going to connect to a self-
signed site, though. Interestingly, private DNS (DoH, etc.) might help
further shroud this fact from the attacker.

All in all, I'd say that the browser should still throw up a full-page warning
because of the implications of TOFU, but it can be one where the "continue to
site" option is clearly shown even to a naïve user, and not hidden behind a
spoiler.

~~~
namibj
Then maybe fall back to DANE, and thus restrict this to zones signed with
more than 1024-bit RSA?

------
founderling
Because there is only one free certificate provider (Let's Encrypt) and it
does not allow wildcard certificates via server authentication.

Having the DNS credentials lying around on the server is not a good idea. So
creating wildcard certs via letsencrypt is a huge pain in the ass.

If a webmaster has control over somedomain.com I think that is enough to
assume he has control over *.somedomain.com. So I think letsencrypt should
allow wildcards to the owner of somedomain.com without dabbling with the DNS.

The way things are now, I don't use ssl for my smaller projects at
smallproject123.mydomain.com because I don't want the hassle of yet another
cronjob and I sometimes don't want the subdomain to go into a public registry
(where all certificates go these days).
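
(For reference, the flow being complained about; a sketch using certbot's
manual mode, with mydomain.com as the placeholder:)

    # wildcard certs are only issued via the DNS-01 challenge
    certbot certonly --manual --preferred-challenges dns \
        -d 'mydomain.com' -d '*.mydomain.com'
    # certbot then asks you to publish a TXT record at
    # _acme-challenge.mydomain.com, which is exactly the
    # "dabbling with the DNS" objected to above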

~~~
sturgill
AWS certificates are free. Cloudflare will also put SSL in front of your
origin for free.

So if you’re using AWS you get it for free. Or you can slap CloudFront or
Cloudflare in front of your origin.

I think the barrier is low enough that I SSL all the things (including my
small side projects).

~~~
hitpointdrew
> AWS certificates are free.

"Free", but you can only use them on AWS stuff. AWS makes it nice and easy
(and does a bunch behind the scenes for you). Part of that behind-the-scenes
is that they have control of the private key on their side. You want to use
the AWS generated cert locally, or on another provider, too bad.

~~~
sturgill
You’re right, but it’s pretty simple to slap CloudFront (or Cloudflare) ahead
of those origins if you need to in a pinch. I don’t work for Amazon (and have
no dog in the fight) but I am a fan of AWS. And if you’re ever using AWS for
anything, there’s no reason to _not_ use their free certs.

Someone else mentioned Azure having a similar offering (I’ve never played with
Azure so I can’t speak to it). And if 2/3 of the providers offer it, I’d
imagine GCP will at some point as well.

I love how easy it’s becoming to launch SSL. LetsEncrypt did a lot to make it
mainstream. I’ve never used LE but I am grateful for their impact on our
industry.

------
Thorrez
The article says googletagmanager.com has HSTS preloading. But it doesn't.

This is easily testable. I view the website in both Chrome and Firefox, and
it's http, not https.

Sure googletagmanager.com is in the preload list, but it doesn't have "mode":
"force-https". It just has certificate pinning, not HSTS.

------
BrandoElFollito
Because HTTPS is not as easy as HTTP.

Sure, there is Let's Encrypt, and if you are facing the Internet you are
probably good to go.

If you are on an internal network, then good luck. You need to build a PKI,
and then put the right certificate into your devices so that it is trusted.

If it were simpler, Apache would sing out its "It works!" over HTTPS and not
HTTP.

~~~
zurn
Let's Encrypt works on internal networks too.

Fun fact: TLS doesn't require certificates, and some browsers even used to
support HTTPS in these TLS modes many moons ago. See eg
[https://security.stackexchange.com/questions/23024/can-
diffi...](https://security.stackexchange.com/questions/23024/can-diffie-
hellman-anonymous-be-used-as-a-cipher-for-ssl-for-one-way-certifiate#23025)
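
(Those modes still exist in the TLS registry; a way to list them locally,
assuming an OpenSSL build that hasn't stripped them out:)

    # anonymous key-exchange suites: no certificate involved at all
    openssl ciphers 'aNULL' | tr ':' '\n'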

~~~
BrandoElFollito
Ah? That's good to know!

How do you set this up on a domain which is not connected to the Internet?
How is the check done?

~~~
benoliver999
It's not easy but iirc you can do it with a DNS-01 challenge, if your internal
domain name is valid (doesn't have to resolve to anything though).

~~~
BrandoElFollito
The problem is that I also have domains which are completely internal, not
known/resolvable outside.

~~~
tialaramex
This is probably a bad idea and I'd recommend migrating off such names as a
background task.

Realistically you can't entirely deconflict these names. So you always have a
risk of shadowing names from the public Internet.

The public CAs spent years in denial over this (yes, they used to sell
publicly trusted certs for "private" names; this is now prohibited). Create
internal.example.com and things get easier. To the extent security by
obscurity is worth trying, it's just as available this way (split-horizon DNS
etcetera).

~~~
lixtra
> Realistically you can't entirely deconflict these names. So you always have
> a risk of shadowing names from the public Internet.

It's totally safe and legitimate for ycombinator to use secret.ycombinator.com
on their intranet without telling anything about it to the outside internet.

~~~
tialaramex
Those are names you own, and a CA will happily issue you certs for those names
(but Let's Encrypt won't without a DNS record saying the name at least
exists).

The grandparent was, as I understand it, talking about names they don't own,
for which you've no assurance somebody else won't own them (on the public
Internet) tomorrow. This used to be very common, decades ago Microsoft even
advised corporations to do it for their AD, but it's a bad idea.

------
namibj
There is one "good" reason against https: handshakes take enormous amounts of
CPU, relatively speaking. It's quite easy to DoS a server by skipping the
expensive part on your end. You can load a core with 10~30 Mbit @ 2k rps if
you're not even optimized.

Whereas the same server could tank 40k rps of plain HTTP requests.
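
(Ballpark figures are easy to reproduce; a sketch with OpenSSL's built-in
benchmark, numbers will vary wildly by CPU:)

    # the server side of a classic handshake is dominated by one private-key
    # op, so "sign" ops/sec roughly caps full handshakes/sec per core
    openssl speed rsa2048
    openssl speed ecdsap256   # ECDSA keys make this considerably cheaper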

~~~
Sukera
Do you have a source on that? Quite a few people seem to disagree:
[https://istlsfastyet.com/](https://istlsfastyet.com/)

~~~
kevingadd
In my testing of high-throughput scenarios like copies over
ssh/rsync/https/smb (I tried them all), in every case encryption was a big
hit to throughput. Hardware assistance (built into the CPU) helped a lot, but
it was still a massive boost to shut off encryption - saving literal minutes
on every bulk transfer, multiple transfers per day.

For the average case it probably doesn't matter, and you can optimize it, but
I think it is totally understandable that the average novice could end up with
bad https performance if only because the defaults are bad or they made a
mistake. If hardware assist for the handshake and/or transfer crypto is shut
off (or unavailable, on lower-spec CPUs) your perf is going to tank real hard.

I ended up using ssh configured to use the weakest (fastest) crypto possible,
because disabling crypto entirely was no longer an option. I controlled the
entire network end to end so no real risk there - but obviously a dangerous
tool to provide for insecure links.

Also worth keeping in mind that there are production scenarios now where
people are pushing 1gb+ of data to all their servers on every deploy - iirc
among others when Facebook compiles their entire site the executable is
something like a gigabyte that needs to be pushed to thousands of frontends.
If you're doing that over encrypted ssh you're wasting cycles which means
wasting power and you're wasting that power on thousands of machines at once.
Same would apply if the nodes pull the new executable down over HTTPS.

~~~
acdha
How long ago was this — and how fast was your network? On hardware less than a
decade old you shouldn’t be seeing that unless you’re talking about 10+Gb
networking.

~~~
kevingadd
A year ago in my development VMs, it was the difference between like 40MB/s
throughput and 200+

------
necovek
My biggest gripe with the current de facto recommended approach (even mandated
in HSTS) is that you need to redirect to https from untrusted http.

So you are being forced to either not serve http at all, or to condition
users to trust a MITM-able redirect. How many people will notice a typoed
redirect to an https page with a good certificate?
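
(Concretely, the hop that users are being conditioned to trust;
www.somebank.example is a placeholder:)

    curl -sI http://www.somebank.example/ | grep -i '^location'
    # anyone on the path can rewrite this Location header to point at an
    # attacker-controlled https:// site with a perfectly valid certificate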

The solution is simple: browsers should default to https, and fall back to
http if unavailable. Sure, some sites have broken https endpoints, but
browsers have enforced crazier shit recently.

~~~
kuschku
That's what HSTS is for - you set an HSTS policy, and the browser will
remember this site for a certain time you can set (usually 1-2 years).

And going further, you can enable HSTS preloading, meaning the next release
of browsers is going to hardcode your website as always and only ever being
used with HTTPS.

See for example my domain
[https://hstspreload.org/?domain=kuschku.de](https://hstspreload.org/?domain=kuschku.de),
which is currently in the preload lists of all major browsers including
Chrome, Firefox, Edge and even Internet Explorer.

I also deploy the same for mail submission with forced STS, and several other
protocols.
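
(For the curious, all of this rides on one response header; checking mine
would look like this, with the max-age value just an example:)

    curl -sI https://kuschku.de/ | grep -i strict-transport-security
    # e.g.: Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
    # the "preload" token signals consent for the hardcoded browser lists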

~~~
necovek
Right, so HSTS will protect a visitor who has visited your web site at most
max-age ago using that particular browser and device.

Or, as I stated, for preload, you have to either not have HTTP at all, or have
a redirect to HTTPS: it should be clear from my above post why I think a
redirect is a bad idea. I also dislike turning off HTTP for those that don't
have any other option.

To me it seems that browsers just switching to https-by-default and http-as-
fallback is a much simpler, better, backwards-compatible change that should
just work. What am I missing and why do you feel HSTS is a good idea compared
to that?

~~~
kuschku
Because some websites serve something different on 443 and 80, and you won’t
get the right result by visiting 443.

The preload list allows you to specifically say that for your own website
clients should always use HTTPS, which is a good solution, as it means no one
is ever going to visit kuschku.de on port 80, except for curl and dev tools,
for which the redirect is useful.

~~~
necovek
I disagree with the claim that it's better for a web site to implement HSTS
than to fix whatever they are serving on 443.

But to each their own.

~~~
kuschku
It’s possible for me, today, to implement HSTS, and have my site served
securely everywhere, today.

Browsers can’t set 443 as default, because _other_ websites are broken,
_other_ websites I can’t fix and the browsers can’t fix either.

~~~
necovek
We have differing views of "everywhere, today": you acknowledged yourself
there are cases where it won't happen; we just differ on how important we
think those are. That's ok, I appreciate your point and thanks for spending
the time to explain.

As for what browsers can or cannot do: they also "can't" introduce
DNS-over-HTTPS, stricter cookie policies that break a bunch of web sites,
reduced effectiveness of ad-blockers, or dropping Flash... Sure, defaulting
to https is too high a bar (not expressing an opinion on any of those; eg.
good riddance to Flash :), but browsers can and have done things that are
just as disruptive, forcing web site creators to adapt their web sites.

------
wojciechpolak
"gnu.org" is on the list marked as a Chinese website...

~~~
k33l0r
There are some other confusing ones as well.

nature.com is marked as Chinese, as are nginx.org and ntp.org.

example.com is Indian in the list as is the now defunct dmoz.org.

I don't understand the methodology behind the country assignments at all…

~~~
signed0
Weirdly nature.com seems to actually redirect to https, as does zara.com,
lenovo.com, genuis.com, and senate.gov. Is this list stale, or did no one
spot-check this?

~~~
squiggleblaz
Yes, senate.gov in particular:

    % curl -I senate.gov
    HTTP/1.1 301 Moved Permanently
    Server: AkamaiGHost
    Content-Length: 0
    Location: http://www.senate.gov/
    Date: Tue, 17 Dec 2019 10:37:04 GMT
    Connection: keep-alive

    % curl -I www.senate.gov
    HTTP/1.1 301 Moved Permanently
    Server: Apache
    Location: https://www.senate.gov/
    Content-Length: 231
    Content-Type: text/html; charset=iso-8859-1
    Date: Tue, 17 Dec 2019 10:37:08 GMT
    Connection: keep-alive

It seems to meet the requirement for exclusion from the list. Data updated 16
Dec 2019, so I don't think it's stale.

I've also checked from an Australian and a European connection, so I don't
think it's a regional thing. The odd one out, genuis.com, doesn't work for me
at all; the other sites redirect and set a cookie.

~~~
michaelt
If you're trying to get senate.gov onto the HSTS preload list, you have to
redirect [http://senate.gov](http://senate.gov) to
[https://senate.gov](https://senate.gov) before
[https://www.senate.gov](https://www.senate.gov)

Maybe their tester applies the same criteria - although to me that feels a bit
unfair...

------
strenholme
One annoyance with this system, from the linked webpage:

>an expectation that a site responds to an HTTP request over the insecure
scheme with either a 301 or 302

Doing things this way is the final nail in the coffin for Internet Explorer 6,
since IE6 does not use any version of SSL which is considered secure here in
2019. And, yes, I have seen people in the real world still using ancient
Internet Explorer 6 as recently as 2015, and Windows XP as recently as 2017.

Which is why I instead do the http → https redirection with Javascript: I make
sure the client isn’t using an ancient version of Internet Explorer, then use
Javascript to move them to the https version of my website. This way, anyone
using a modern secure browser gets redirected to the https site, while people
using ancient IE can still use my site over http.

(No, I do not make any real attempt to have my HTML or CSS be compatible with
IE6, except with [https://samiam.org/resume/](https://samiam.org/resume/) and
I am glad the nonsense about “pixel perfect” and Flash websites is a thing of
the past with mobile everywhere)

~~~
namibj
Be aware that blocking scripts from insecure connections is something you'd
usually want to do...

~~~
strenholme
“usually” being the operative word. I’m not quite ready to throw IE6 (Internet
Explorer 6) and all http-only browsers completely under a bus yet.

------
altmind
The preload list is an absolute kludge that does not and will never scale,
creates a huge number of problems, and works only for specific browsers.

The task is not as simple as using DNS to store strict-https flags (as DNS
can be manipulated by an intermediary), but hardcoding the lists in the
browsers and keeping them in Chrome's code is definitely not a solution.

~~~
kuschku
The goal is to slowly move higher levels into that list.

e.g. in the past it was just domains and subdomains.

Today there are already some TLDs on the list themselves.

------
cassianoleal
I mostly have port 80 egress traffic blocked in Little Snitch. The web is
painful to use like that, but it gives you an idea of the sorry state of
websites.

A lot of websites just don't serve over HTTPS, or serve it with certificates
whose CN or SAN don't match the host.

Many that do support https have links that downgrade you back to http on the
same domain.
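
(For anyone wanting to check the CN/SAN mismatch themselves; a sketch,
assuming a reasonably recent OpenSSL:)

    # show which names a server's certificate actually covers
    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -text | grep -E -A1 'Subject:|Subject Alternative Name'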

~~~
dijit
How do you use public Wi-Fi with captive portals?

~~~
ninkendo
Allowing [http://captive.apple.com](http://captive.apple.com) should make
macOS’s captive portal auth window work.

~~~
Zarel
If you block port 80, you'll never get to the part where you do URL filtering
in the first place.

(And also the redirection thing.)

~~~
ninkendo
I mean whitelist port 80 for captive.apple.com. Sorry if that wasn't clear.

macOS has a background daemon which automatically hits captive.apple.com on
connection to a WiFi network, to detect if it's behind a captive portal (and
opens up a browser window to let you complete the flow, if it gets a 302). So
that much should work even if you block egress port 80 but whitelist
captive.apple.com.

...that is, assuming the portal to which you get redirected would be served
over https, but I guess that isn't a given either.

------
vivekd
One thing that surprised me was how hard it was to set up HTTPS and
HTTP-to-HTTPS redirects for websites on AWS and Google Cloud. I needed to set
up a load balancer to do HTTPS.

The redirects are also hard: I have a static site using Google storage, and I
have to create a server instance and redirect from there because it's not
possible to do an automatic redirect. I don't know why the big cloud hosting
providers aren't cooperating to make full HTTPS implementation easier.

------
peterwwillis
Recently an OpenShift cluster I admin went down because of long-lived certs
not being rotated in time. There are many clients, servers, nodes, services,
and configs involved, so rotating is non-trivial, so of course it's automated,
and of course because it's not tested regularly, the automation just doesn't
work after a while. Using the automation only seems to make things worse, and
getting everything working again ends up taking days.

PKI is technically the best practice for these systems, but it's also the most
fragile and complicated. At a certain point, if the security model is so
complex that it becomes hard to reason about, it's arguable that it's no
longer a secure model, to say nothing of operational reliability.

I also have a whole rant about how some business models and government
regulations _literally require inspecting TLS certs of critical transport
streams_ , and how the protocols are designed only to prevent this, and all
the many problems this presents as a result, but I don't think most people
care about those concerns.

Oh, and gentle reminder that there are _still_ 100% effective attacks that
allow automated generation of valid certs for domains you don't control. It
doesn't happen frequently (that we know of) but it has happened multiple times
in the past decade, so just having a secure connection to a website doesn't
mean it's actually secure.

------
cm2187
Is it still the case that when you think you're connecting over https to a
website, only the segment to Cloudflare is encrypted, while the segment from
Cloudflare to the web server might not be?

~~~
mtberatwork
Yes, that's SSL termination. Generally this happens at the CDN, load
balancer, or proxy (e.g. nginx used as a cache) layer, and is pretty common
since the fleet of servers handling the request after it's routed are in a
private network. With CF, the request from CF to the origin goes over a
public network, and whether that hand-off is encrypted depends on how the
user has configured their CF setup. If they are doing SSL termination in CF,
then it won't be encrypted from CF to the origin server.

------
edf13
The biggest problem with forcing everything to HTTPS is the false sense of
security & trust that this gives to non-techie users.

Security of the data transfer layer does not mean you can or should trust the
website you are visiting.

Just because a website has a padlock does not mean it is trustworthy and you
can hand over your CC details.

[https://www.amazon.somethiing.other.co/greatDiscount](https://www.amazon.somethiing.other.co/greatDiscount)
may look great to some!

~~~
simias
If we migrate to HTTPS everywhere we can get rid of HTTP for general use and
switch to a different UI, where HTTPS websites don't have any special icon but
HTTP ones get a warning icon.

It's already effectively how password form submissions work in many browsers.

~~~
greggman2
You can't have HTTPS everywhere until we can get HTTPS for IoT devices. My
router doesn't serve its configuration screen via HTTPS. How could it? I have
to connect to it to configure it before it's on the internet.

Same with my IoT cameras and all the various local apps I run that can start a
web server. Heck, my iPhone has tons of apps that start webservers for
uploading data since iPhone's file sync sucks so bad.

We need a solution to HTTPS for devices inside home networks.

~~~
simias
I agree that having an elegant and secure solution to enable HTTPS on non-
internet-facing equipment would be nice. I work mainly on embedded devices and
all my admin interfaces are over HTTP because there's simply no way to ship a
certificate that would work anywhere. It would be nice if you could easily
deploy self-signed certificates that would only work for local addresses and
only for specific devices, although of course doing that securely and with
good UI would be tricky.

In the meantime having big warnings when connecting to these ad-hoc web
interfaces makes sense I think, since they can effectively easily be spoofed
and MitM'd (LANs are not always secure in the first place so it makes sense to
warn the user not to reuse a sensitive password for instance). It's annoying
for us embedded devs but I think it's for the greater good.

------
bo1024
Maybe I’m wrong, but I feel SSL has the downside of relying on more
centralization. If a visitor to my totally-static webpage wants to bypass
that layer and request the http version directly, I’m going to let them.
(Obviously I'm not excited about the idea of being mitm’d, but it's not a
security risk, so I leave that tradeoff up to the visitor.)

~~~
pornel
[https://doesmysiteneedhttps.com/](https://doesmysiteneedhttps.com/)

MITM can do _anything_ to your site, so your totally-static site may not be
static any more at the victim's end. It may be a site collecting private
details, attacking the browser, or using the victim to attack other sites.

Your static HTTP site is a network vulnerability and a blank slate for the
attacker.

~~~
anony121212
So then disable javascript for http sites

~~~
Sohcahtoa82
That won't do anything. If someone can Man-in-the-Middle you, then they can
easily forge a 302 redirection to a malicious web page that could be HTTPS.

------
davidmurdoch
One _potentially_ good reason to not force SSL:
[https://meyerweb.com/eric/thoughts/2018/08/07/securing-
sites...](https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites-made-
them-less-accessible/)

TL;DR: Secure websites can make the web less accessible for those who rely on
metered satellite internet (and I'm sure plenty of other cases).

~~~
satanspastaroll
Trading security for convenience is rarely a good idea. The rest of the world
should not conform to the failures of certain areas to provide internet.

~~~
davidmurdoch
I see your point. But we trade security for convenience 24/7/365. We could
all have bulletproof glass in our homes, personal security cameras everywhere,
backup generators, panic rooms, etc, but we don't, because it's not convenient
(and I know the expense is primarily what makes it not convenient, but I think
it's still a valid argument).

Providing access to Wikipedia over http to people in third world countries may
be worth the risk of someone MITMing the site with propaganda.

The suggestion is only to give some users the option.

~~~
pixl97
MITM with propaganda is the least of the worries. Full-on exploit code is.

The fact is, as an ecosystem develops, complexity increases. Lifeforms in
that ecosystem have to spend more time and effort protecting themselves from
outside attacks as time progresses.

------
onion-soup
Because it's always pain in the ass to set it up and then renew?

~~~
watermelon0
How exactly is it a pain in the ass?

- If you are hosting a simple static page or blog, your hosting provider
probably has a Let's Encrypt plugin.

- If you have your own VPS, Caddy has you covered with file serving, fastcgi
support for PHP, and proxying to (g)unicorn/nodejs/Go/.NET, and has HTTPS
enabled by default.

- If you have a more advanced setup (e.g. containers), traefik supports HTTPS
with just a few lines of configuration.

- If you are big enough to afford cloud, it takes a few lines of Terraform
code to provision a certificate for load balancers (speaking for AWS, and
assuming others have similar solutions).

For other cases (e.g. lots of traffic with a custom haproxy/nginx/etc.
setup), you are probably smart enough to find out how to enable Let's Encrypt
support.

~~~
at_a_remove
1) Not everything is running bare Apache. In fact, some services might have
some rather strange web-driven GUI (or, more interestingly, curses-like) that
requires you to carefully load a certificate, a CSR, and so forth in a
somewhat arcane manner. Some pretty niche serving exists out there and I have
had to deal with a bunch of them, to the point where I had to write extensive
documentation on keeping the certificates up to date on each separate weird
service. Many of these services have a "no user-serviceable parts inside,
your warranty will be voided ..." clause in the service contract which deters
spelunking.

2) Some services require wildcards, like proxies.

3) Some organizations have, due to someone far away making strange decisions,
policies about certificate authorities, and people to audit for compliance.
Therefore, a cert costs money and, for a site which is purely informational,
that's a hard sell.

4) Because we're not running on a hosting provider, a VPS, containers, or
cloud.

5) Because not everyone wants to deal with some combination of the above every
three months due to Let's Encrypt's expiration policy.

~~~
necovek
I generally run apache/nginx in front of most things for SSL termination —
this allows you to simplify SSL setup significantly.

------
romwell
I consider myself young, but I've been around long enough not to rely on One
True Service Provider for anything.

And "Let's Encrypt" is not an answer to "HTTPS is not free". It's not. We are
all going to see our projects outlive Let's Encrypt (or its free tier).

In the end, nothing is secure. A dedicated attacker _will_ find a way, given
enough resources. Any security measure is just a deterrent.

My deterrent is that it's not worth MITM'ing my personal website with, like,
10 monthly visitors. (The reader might gasp that I lock my bicycle with a
chain that can be snapped in a second, and that a strong enough human can
probably bash my home door in).

Anyway. It's almost 2020, and if you are still advocating on moving the
entirety of the Web to reliance on Big Centrally Good Guys, I really don't
know what else to say to you.

------
fiatjaf
Because it's hard and a pain.

Sure, depending on your setup it's easy, but for a lot of setups it isn't.
Instead of trying to say HTTPS is easy and shaming everybody who isn't doing
it, more effort should be diverted into creating an actual fully encrypted
network that doesn't need CAs.

~~~
fiatjaf
What actually happens when you try to force HTTPS over the internet: you
centralize it, you make it harder for the small player, hobbyist, personal
homepage guy, and make it easier for the big corporation.

------
LinuxBender
It isn't just web sites. Many software repos still use http or native rsync.
Some would argue that you validate the packages with GPG, but you would be
amazed if you saw how many people install the GPG public key from the same
mirror they download software from.

~~~
surge
Gradle, granted they're fixing it.

[https://blog.gradle.org/decommissioning-
http](https://blog.gradle.org/decommissioning-http)

------
z3t4
Had to access an EOL device and couldn't browse the web because of all the
expired certificates...

------
veb
I don't get it. With Lets Encrypt, it's like one or two lines to get
everything set up.

I'm guessing people aren't as lucky as I am to be running on newer machines
and such.

I mean it even edits your nginx files to redirect http to https if you agree.
It's not hard.
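
(The flow being described, for reference; assuming the standard certbot nginx
plugin:)

    sudo certbot --nginx -d example.com -d www.example.com
    # obtains the cert, rewrites the matching nginx server block, and
    # (if you say yes) adds the http -> https redirect
    certbot renew --dry-run   # sanity-check that auto-renewal will keep working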

~~~
WilliamEdward
I'm fine with people who think it's too hard...

What I cannot stand is people who can do it, but refuse to out of laziness.
Or because they want their content to be insecure on purpose.

This applies mostly to big orgs, so indie devs can have some leeway if it's
too hard to implement.

~~~
grecy
> _i cannot stand is people who can do it, but refuse to out of laziness_

(Raises guilty hand)

I run a couple of sites on my hosted server that are still http. They both sit
behind a varnish setup and to be honest I just have not found the time to get
it done. Usually when I mess with my configurations I lose a week to
troubleshooting stupid stuff and I just can't bring myself to do it.

~~~
WilliamEdward
Hey, at least you aren't running a Fortune 500 with millions of users (you
aren't, right?)

~~~
grecy
haha, right. They're really just hobby projects, almost entirely read-only.

------
printercenter
To provide a real-time solution to for printer hitches, get in touch with the
experts of printer service. All technicians are well-trained and have years of
skills to resolve the glitches. [https://printerhelpcenter.com/replace-
brother-drum-error-mes...](https://printerhelpcenter.com/replace-brother-drum-
error-message/) [https://printerhelpcenter.com/how-to-fix-canon-printer-
error...](https://printerhelpcenter.com/how-to-fix-canon-printer-error-b200/)

------
bullen
Downvote time: Why HTTPS?

I made my own security:
[http://talk.binarytask.com](http://talk.binarytask.com)

~~~
DuskStar
Do you have a description of how you made your own security and what it
provides?

~~~
bullen
Last time I described it here on HN there was confusion.

It's just a "single serving server salt" (try saying that fast 3 times) sent
to the "client for secret hashing" and then "sent back to the server again",
so it's insecure on registration (just like all security facing a MITM
without a common pre-shared secret), but after that it's pretty rock solid,
even quantum safe. It requires two request/responses per auth, though.
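
(If I'm reading the scheme right, a minimal shell sketch of one auth round;
the variable names are mine, and $SECRET is whatever was shared at
registration:)

    # server: generate a single-serving salt and send it to the client
    salt=$(openssl rand -hex 16)
    # client: hash the pre-shared secret with the salt, send back the digest
    response=$(printf '%s%s' "$salt" "$SECRET" | sha256sum | awk '{print $1}')
    # server: recompute the same digest and compare; the secret itself never
    # crosses the wire after the initial (unprotected) registration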

This tech is nothing new and has been used by many big actors since forever.
It's simpler that public/private encryption because it only requires hashing
math to work.

It should be my choice to use whatever encryption I want, without having
Google scare away my customers with "Not Secure".

~~~
necovek
But "common pre-shared secret" (well, public key allowing verification that
there is a trusted secret being used) is at the root of https security today
(a preset list of root certificates distributed with OSes/browsers).

If someone presents as your web site to a first time visitor (or a previous
visitor but on a new device), there is no way for them to really trust your
web site. Basically, it's the equivalent of you using self-signed certs, and
likely even worse because there are more attack vectors even outside the
initial connection.

~~~
bullen
Can you explain how root certificates make anything secure? Why can't you
just hack the root cert store on the local computer, f.ex.?

There must be a million attack vectors to that system too, with a lot of
attackers working on them since the payout is good when everyone uses the same
system?

Even if it makes sense, since all governmental offices and some corporations
have their own, doesn't that make you skeptical of that kind of centralized
security?

I'd rather take my risks with something I can understand, modify and improve;
than using what everyone else uses.

And again: It should be MY choice! Not Google's; now I have to compile my own
browser, which takes like 24 hours on a modern home PC!!!

~~~
necovek
You are conflating multiple things.

If you are on an untrusted device (eg. someone _else_ could have hacked the
root cert store in the OS/browser), all bets are off: they could have also
just hacked the browser to drop any and all warnings and to always display a
green padlock icon.

If you are talking about someone else hacking your machine, well, then it's
pretty much the same: they can get most stuff by adding keyloggers, screen
recorders and just scraping your disk for useful data.

If you are on a trusted device, you can "hack" the root cert store all you
want to add root certificates you trust. As long as _you_ trust them, no trust
has been lost.

Root certificates are not really "centralized": they are issued by different
CAs, and different browsers trust different root CAs too, and it was even more
prominent in the past where you had some certs "work" in only some browsers.
Still, there are multiple recognised attack vectors there as well (each
individual CA, their certificate issuing servers which have access to the root
or intermediate signing cert, browsers and OSes and their trusted-CA
components...), and the big difference is that the attack vectors are known
and heavily monitored.

PGP/GPG keyrings were basically the same approach without the root
certificates, and the (in)famous signing parties did not bring a trust level
that is ultimately needed on the internet today. I would love to see a
development in that direction (one could say it was an early consensus-
building approach on who to trust), but we are not there yet.

It certainly is your choice to how you want to protect yourself and your web
site visitors, and it's your web site visitors' choice whether they want to
trust you with their data (for instance, I personally would recommend you to
set up a self-signed cert and add that root cert to your keyring for services
that you plan to only access yourself through untrusted networks).

Except that most people won't understand where the risks are in either
approach, and that's half the battle.

------
mohas
Many of us have to host our websites on shared hosts that do not support
HTTPS for free; HTTPS costs money in the third world.

------
SlowRobotAhead
Am I missing something?

Lots of US sites on their NO HTTPS list come up in Safari as HTTPS.
Rutgers.edu for example.

------
dijit
I have a reason not to use https.

I host a single site on a host (so no login, subject name, or path
information to leak), which only contains details on how to connect to my irc
server at the same address.

If the message is altered then the most pain anyone will have is connecting
somewhere else for the first time. (They won’t be automatically logging in if
they’re using this page).

Why does everything need to be TLS? It feels like a cargo cult. A requirement:
“because!”

In other scenarios it’s worth modelling threats and I agree that it’s good to
err on the side of caution but aside from the modification of my connection
information there’s no good tangible reason to incur an overhead in
administration.

Although it should be noted; part of the reason that web server even exists is
to do letsencrypt for a globally geobalanced irc network.

~~~
throw0101a
> _Why does everything need to be TLS? It feels like a cargo cult. A
> requirement: “because!”_

Traditionally, people have only encrypted things that are deemed sensitive
(logins, money, health). However, when the majority of traffic is non-
encrypted, actually ciphered data is very noticeable to anyone monitoring the
network, and it screams " _look at me! I am important!_ ".

However, when >90% of the traffic on the Internet is encrypted, then there is
no 'extra' information to be gained from that fact. It further forces any
surveillance program to expend extra resources either trying to decrypt
everything, or choosing to focus only on those people it actually deems
important, instead of wholesale surveillance of the entire population.

Further, encrypting content prevents it from being modified, reducing the
potential for your traffic to be leveraged against others:

> _The Great Cannon of China is an Internet attack tool that is used to launch
> distributed denial-of-service attacks on websites by performing a man-in-
> the-middle attack on large amounts of web traffic and injecting code which
> causes the end-user's web browsers to flood traffic to targeted
> websites.[1]_

* [https://en.wikipedia.org/wiki/Great_Cannon](https://en.wikipedia.org/wiki/Great_Cannon)

* [https://citizenlab.ca/2015/04/chinas-great-cannon/](https://citizenlab.ca/2015/04/chinas-great-cannon/)

~~~
dijit
"herd immunity" is a good argument; but herd immunity exists for outliers. The
people who for some reason cannot get a vaccine, yet they are not exposed to
the hypothetical disease because everyone they are surrounded by is immune.

That's kinda my argument, not that https is bad. I agree with widespread
adoption and taking it as a default even for a static page.

But in my environment I have many dozens of nodes, and idk where letsencrypt
is going to come in because of geobalanced DNS. I also serve many domains
with this project, so I don't have the nice DNS-01 ACME verification
features, because not all DNS providers have an API.

So I have a web server on each node, which reverse proxies .well-known/ to
some central server that runs certbot. Then I distribute those certs outwards
to those nodes.

It goes against certain sysadmin principles about transportation of private
key materials, but it's what works.
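
(Roughly, the moving parts as described; host names are placeholders:)

    # each edge node reverse-proxies /.well-known/acme-challenge/ to the
    # central certbot host; after renewal there, push the certs back out:
    for node in edge1.example.net edge2.example.net; do
        rsync -a /etc/letsencrypt/live/ "$node":/etc/ssl/letsencrypt/
    done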

But given that architecture, which caters for a latency-sensitive product,
letsencrypt is a serious overhead. To the point where I'm considering going
back to 2-year paid certs.

~~~
throw0101a
> _I also serve many domains with this project so I don't have the nice
> DNS-01 ACME verification features because not all DNS providers have an
> API._

Are you aware of using ACME DNS aliasing with CNAMEs?

* [https://www.eff.org/deeplinks/2018/02/technical-deep-dive-se...](https://www.eff.org/deeplinks/2018/02/technical-deep-dive-securing-automation-acme-dns-challenge-validation)

* [https://github.com/Neilpang/acme.sh/wiki/DNS-alias-mode](https://github.com/Neilpang/acme.sh/wiki/DNS-alias-mode)

* [https://dan.langille.org/2019/02/01/acme-domain-alias-mode/](https://dan.langille.org/2019/02/01/acme-domain-alias-mode/)

My work's DNS provider does not have a handy API, so if we want a cert for
the internal-only foo.example.com, we point _acme-challenge.foo to
_acme-challenge.foo.dnstest.example.com. And the NS server for
dnstest.example.com lives in our DMZ and is only there to answer ACME queries
from Let's Encrypt. We set up some scripting to allow updates to the NS
server via nsupdate.
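
(A sketch of the two halves, with placeholder names and key file; the CNAME
is created once, and the TXT record is what gets updated per issuance:)

    # one-time alias into the delegated zone:
    #   _acme-challenge.foo.example.com. CNAME _acme-challenge.foo.dnstest.example.com.
    # per issuance, the ACME client publishes the validation token:
    nsupdate -k /etc/acme/tsig.key <<'EOF'
    update delete _acme-challenge.foo.dnstest.example.com. TXT
    update add _acme-challenge.foo.dnstest.example.com. 60 TXT "TOKEN_FROM_ACME_CLIENT"
    send
    EOF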

And there are ACME clients written specifically around the idea of having the
client run on a different system than the web server:

* [https://github.com/srvrco/getssl](https://github.com/srvrco/getssl)

------
andrewfromx
I still use [https://neverssl.com](https://neverssl.com) daily. I hope it
never goes away.

~~~
nhumrich
Intentional that your link is https?

~~~
andrewfromx
Ha, no, I think HN did that automatically? I wrote http without the s.

------
megous
Service workers are a poor replacement for the shared HTTP cache, since the
cache will not be shared among users.

~~~
snek
I apologize for not having a source, but browsers are actually looking into
disabling shared caches between sites because of side-channel attacks.

------
anony121212
Ability to write my own HTTP server from scratch. Content is for downloading
and offline use.

------
forgottenpass
As long as the world's greatest surveillance system continues to be given
deliberate access to the plaintext, I will continue not caring about HTTPS
for websites that don't have users logging into an account or submitting
forms.

------
jzl
I clicked through to the list of sites. Embarrassing to see that mit.edu is
not https by default! The same institution invented Kerberos. Come on MIT, fix
this please.

------
greggman2
What is it about that site's low contrast? My eyes can barely focus.

------
netsectoday
Without HSTS preload anyone on your local network can arp/dns spoof your
traffic, MITM you, and automatically inject malicious javascript
(cryptominers, credential-stealers, etc.), access all of the page content, and
manipulate the page or response.

If you are connecting to a "Free Public WiFi" and the malicious actor is the
one broadcasting the access-point; it's even easier to MITM you.

Without Cert & Key Pinning, your employee laptop can be MITM'd by corporate
to eavesdrop on all of your HTTPS traffic. The browser will show that the
connection is secure, but it isn't. When you pin the cert and key - even on a
compromised corporate computer - the insecure-site warning will show and
you'll be alerted to the fuckery.

> Doing things this way is the final nail in the coffin for Internet Explorer
> 6

- Fucking great! Nothing else to say here.

> handshakes take enormous amounts of CPU

- This is vastly overstated (enormous?). Also, this is called a tradeoff.
Security isn't free in time, money, or performance.

> Preloads list is an absolute kludge that does not and will never scale...
> and works only for specific browser

- The preload list, right now, is 10.6 MB and contains 90,862 entries. This
seems to function and scale just fine. Seeding your browser with known values
is really the best way to do this until 99.X% of web traffic is served over
HTTPS... Also, Chrome, Firefox, Safari, IE/Edge, and Opera make up 98% of all
browser traffic today, and they have all supported this standard for years.

> The biggest problem with forcing everything HTTPS is a false sense of
> security.

- Defense in depth. Layering security controls is the only way to go. Also,
it takes some crazy mental gymnastics to hold the position "wearing a
seatbelt gives a false sense of security because you can still crash".

> Because it's hard and a pain.

- That pain is offset onto the attackers trying to compromise your site. If
you don't feel the pain, they don't either.

> Secure websites can make the web less accessible for those who rely on
> metered satellite internet... TLS 1.3 with 1-RTT should improve this
> situation.

- Even if your entire business depended upon delivering data to metered
satellite internet users, the risk of not encrypting your traffic outweighs
the cost. WARNING: DON'T IMPLEMENT 0-RTT OR 1-RTT WITHOUT UNDERSTANDING YOUR
APPLICATION-SPECIFIC REQUIREMENTS. You can really fuck this up by not
properly managing tokens between your webserver and application layer. Not
recommended.

> I don't get it. With Lets Encrypt, it's like one or two lines to get
> everything set up.

- True, but it gets confusing really fast if you don't 100% match the
certbot use-case.

> HTTPS is not an obligation.

- For 99% of people running businesses, it is.

> Recently an OpenShift cluster I admin went down because of long-lived certs
> not being rotated in time.

- If you have had certbot running for a long time, I would suggest you check
your server logs TODAY and make sure your cron job is still working
correctly. Recently there was a change in certbot's ACME version requirement,
and your reissue might be failing. Seriously, take a quick look right now.

> Because frankly, I neither trust letsencrypt nor the certificate authority
> system in general... but won't help against industrial (e)spionage

- Places tinfoil hat on... you're not wrong.

------
faissaloo
Because even Certbot is a massive pain to set up if you're not using a very
generic setup.

------
dvfjsdhgfv
HTTPS is not an obligation. Most people believe it's a must these days, but
it's not. There is a nice rebuttal of Troy's arguments on N-gate (via webcache
as direct links from HN end up in an endless pseudo-captcha):

[http://webcache.googleusercontent.com/search?q=cache:t_oVSNu...](http://webcache.googleusercontent.com/search?q=cache:t_oVSNuTvIgJ:n-gate.com/software/+&cd=1&hl=en&ct=clnk&gl=pl)

------
WilliamEdward
Some websites adamantly insist they do not need HTTPS because they are purely
static.

[https://www.troyhunt.com/heres-why-your-static-website-
needs...](https://www.troyhunt.com/heres-why-your-static-website-needs-https/)

To my surprise, the same website has an article on why this is faulty
reasoning.

~~~
iudqnolq
Our asshat twin n-gate has something to say about this

> Horseshit. Users must keep themselves safe. Software can't ever do that for
> you. Users are on their own to ensure they use a quality web client, on a
> computer they're reasonably sure is well-maintained, over an internet
> connection that is not run by people who hate them. None of the packets I
> send out are unsafe, so my site does not need HTTPS.

> None of those things are my problem. If people don't want to see my site
> with random trash inserted into it, they can choose not to access it through
> broken and/or compromised networks. If other website operators are concerned
> about this sort of thing, they are free to use HTTPS, but I have no reason
> to do so. Encryption should be available to anyone who wants to serve
> encrypted content, but I have no interest in using it for my website. It's a
> shame that people are using web browsers (note: not my website, but
> BROWSERS) as attack vectors. The legions of browser programmers employed by
> Mozilla, Google, Apple, and Microsoft should do something about that. It's
> not my flaw to fix, because it's a problem with the clients. My site does
> not need HTTPS.

> Earlier you recommended letsencrypt, and now suddenly you want me to pick a
> competent certificate authority? The only reason they didn't leak my info
> already is because my site does not need HTTPS.

> Obviously my site does not display ads; as has [been pointed
> out]([https://news.ycombinator.com/item?id=14666391](https://news.ycombinator.com/item?id=14666391)),
> it does not even appear to be monetized. This is because I have a real job
> and the entire web ad industry can fuck itself off a cliff. So, while mixed-
> content warnings are pretty obnoxious, my site does not need HTTPS.

[http://n-gate.com/software/2017/](http://n-gate.com/software/2017/)

~~~
pcmonk
Can't read the article because the captcha won't load, but this reply doesn't
make any sense. What can the browsers do without the cooperation of the
server? You don't really need encryption to deal with that specific problem,
but you do need signatures, which means you need a certificate anyway. It's
quite a strange attitude toward the problem.

~~~
majewsky
The captcha is because you come with a Referer of news.ycombinator.com. Try
opening the website in Private Mode.

------
ktpsns
Because frankly, I neither trust letsencrypt nor the certificate authority
system in general. This might prevent eavesdropping in your coffee shop wifi,
but won't help against industrial espionage powered by three-letter agencies
who probably control some of these authorities.

~~~
peterwwillis
This feels a bit like saying, I'm not going to use a traditional wood beam and
shingled roof for my house, because it won't help against a meteor.

