This is really misleading and potentially dangerous. Details:
* Even on static sites, HTTPS prevents a MITM from telling what pages you're reading, or introducing falsehoods into the content. It's not "overuse" to use HTTPS there.
* I don't know how the author's server is set up, but a 1-second increase in page load time isn't consistent with load times I've ever measured anywhere else. In fact, there are a lot of HTTPS pages whose total load time is less than what's claimed as the difference here.
* If you care about sustainable technology, you should be demanding that Apple provide updates for hardware longer, or open-source everything so that the community can. Whether or not the sites you want to visit use HTTPS, using the Internet from a system years behind on security patches is a really bad idea, and not something you should optimize for.
* The specific suggestion of disabling the redirect is exactly equivalent to a MITM attacker running sslstrip, so you're doing a big piece of the bad guy's job for him.
> Even on static sites, HTTPS prevents a MITM from telling what pages you're reading, or introducing falsehoods into the content. It's not "overuse" to use HTTPS there.
Also scripts. Even if the site itself is completely uninteresting to an attacker, he can still inject scripts which attack third-party sites or exploit browser vulnerabilities to gain access to the machine. (Though, to be fair, the same is possible with ads.)
> If you care about sustainable technology, you should be demanding that Apple provide updates for hardware longer, or open-source everything so that the community can.
I can't really agree with that point. How much longer is "longer" supposed to be? 6 years? 10 years? 50 years? To a software dev, supporting any piece of technology for 50 years sounds completely outrageous, yet that was the generally accepted lifespan of many consumer devices not long ago.
IMO, hardware/software that requires continuous updates is not sustainable, period. At some point, you'll always be in a position where the vendor is unwilling to keep up work for a product. Maybe you'll have some volunteers to fill the gap, but that's nothing that you can take for granted. At that point, if the device becomes e-waste, it's not sustainable.
Only for as long as nobody else is capable of doing so. Don't forget the part where I said "or open-source everything so that the community can". If they did that from day 1, I wouldn't care if they never provided any official updates.
The load time is contingent on ping: with TLS over TCP you need three full round trips (TCP handshake plus TLS handshake) before you have a pipe that can carry HTTP, which is three times as many as with TCP alone. In the author's scenario there's a 500ms delay, so naively that's a ping of 250ms, which is really high. With HTTP/1.1, or asset loads from other origins, additional connections may be opened, and each of those incurs the extra overhead as well; the author's blog is served over HTTP/2, though, which wouldn't incur this cost for same-origin asset requests. For me, the best case for the author's blog (just the HTML via curl) is 250ms plaintext and 600ms TLS. What's more interesting is that plaintext is a lot more consistent for me, because I have pretty bad packet loss and the HTTP request fits into a single TCP segment.
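If you want to reproduce that comparison yourself, here's a rough Python sketch of the same measurement (example.org is just a stand-in for whichever host you're testing, and a single sample is noisy, so take several):

    import ssl
    import time
    import http.client

    HOST = "example.org"  # stand-in for the site you're measuring

    def timed_fetch(conn):
        # Time one full request: connection setup (TCP, plus TLS if any) and the GET itself.
        start = time.perf_counter()
        conn.request("GET", "/")
        conn.getresponse().read()
        conn.close()
        return time.perf_counter() - start

    plain = timed_fetch(http.client.HTTPConnection(HOST, 80))
    tls = timed_fetch(http.client.HTTPSConnection(HOST, 443, context=ssl.create_default_context()))
    print(f"plaintext: {plain * 1000:.0f} ms, TLS: {tls * 1000:.0f} ms, delta: {(tls - plain) * 1000:.0f} ms")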
Your browser can redirect to HTTPS, if you use HSTS preloading.
Also, at this point I think we have sufficiently pervasive availability of HTTPS that it'd be reasonable for browsers to expand typed domains (if the user types `example.org`) to https:// rather than http://, unless they're an IP address or localhost. Roughly the heuristic I have in mind is sketched below.
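Something like this, in Python terms (just an illustration of the rule, not how any browser actually implements it):

    import ipaddress

    def default_url(typed_host: str) -> str:
        # Keep plain http:// for localhost and literal IP addresses...
        if typed_host == "localhost":
            return "http://" + typed_host
        try:
            ipaddress.ip_address(typed_host)
            return "http://" + typed_host
        except ValueError:
            # ...and default everything else to https://.
            return "https://" + typed_host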
FWIW, it doesn't require (or at least check) that you redirect for your whole site, just / on the apex domain. I've got a larger post elsewhere in the thread with detailed recommendations for how to use HSTS for modern browsers and let old browsers do http.
Which is one of many good reasons why you should have that redirect enabled. Or disable HTTP entirely: the HSTS preload requirements only demand the redirect "if you are listening on port 80".
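For reference, the policy header itself is served over HTTPS and, for preloading, looks roughly like this (hstspreload.org additionally wants the port-80 redirect mentioned above):

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload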
> * Even on static sites, HTTPS prevents a MITM from telling what pages you're reading, or introducing falsehoods into the content. It's not "overuse" to use HTTPS there.
I hear this argument again and again. You need to understand it's completely useless for websites where you are a passive consumer (i.e. you don't enter any information, just read articles and so on). And, frankly speaking, MITM attacks on passive websites are much rarer than errors related to outdated (and sometimes misconfigured) certificates. Enabling HTTPS redirect means the fallback is no longer working, and if the browser doesn't let you in, you basically lose access to information even though it is still right there, just inaccessible.[0]
[0] Yes, website owners should configure their certificates properly, and the process is now very easy - you can add certbot to your crontab and forget about it - but it turns out that in real life it doesn't work like this.
It is not "completely useless" or a "theoretical attack vector". This literally happens in the wild. There were reports of Comcast and ISPs in India MITM'ing traffic and injecting ads into their customers' TCP streams a couple of years ago.
> Enabling HTTPS redirect means the fallback is no longer working, and if the browser doesn't let you in, you basically lose access to information even though it is still right there, just inaccessible.
This is the equivalent of saying "since phantom issues with preflight checks on airplanes can sometimes cascade into delays across an entire airport, we should get rid of preflight checks on commercial airliners".
> It is not "completely useless" or a "theoretical attack vector". This literally happens in the wild. There were reports of Comcast and ISPs in India MITM'ing traffic and injecting ads into their customers' TCP streams a couple of years ago.
In this case people can freely use the HTTPS version. What we argue is: stop doing automatic redirects that effectively render the whole content inaccessible in case of certificate issues - and this is a massive problem, affecting many more websites than the traffic hijacking by some ISPs a few years ago. And you can be sure ISPs will be doing less of it as most sites are on HTTPS now. What we are asking is not to kill plain old HTTP for those who want and need to use it.
> You need to understand it's completely useless for websites where you are a passive consumer (i.e. you don't enter any information, just read articles and so on)
It's not. ISPs have injected ads. It also gives ISPs slightly less data to track, since they only get the SNI header (domain name), not which articles you visited.
You could argue that ISPs shouldn't do that, and I'd agree, but unfortunately many people don't have a choice of ISP and some ISPs are shitty.
Yes, this is something I hear from my colleagues in the USA (I haven't seen it in Europe yet; it's unthinkable for me). Note that in this case you can perfectly well use HTTPS. What we are talking about is the situation where the website owner redirects HTTP to HTTPS (and in that case the ISP sees at least the initial article address anyway).
Not a compelling argument in the least. MITM still means ISPs can track and inject ads, regardless of how static the content is on your website. And the performance argument is completely invalid because it's only measuring total bytes, and doesn't consider HTTP/2 multiplexing.
> It should be mentioned that my personal website is also plain text based and tiny. Imagine the impact of a blog article or research paper with significantly more content.
The larger the website, the greater the improvement from using HTTP/2 (which in practice requires TLS). And it doesn't take much data at all to offset the TLS handshake overhead.
This post is justifying bad behavior. The only sites that can afford to remain http are neverssl.com and local development.
Even for local development, I tend to put the HTTP server behind nginx to add HTTPS, because many web APIs, like service workers and the clipboard API, are only available in a secure (HTTPS) context.
Unfortunately mobile apps complain a ton about 'insecure' HTTPS and make it a real chore to set up local certs, so it's usually easier to just run over HTTP locally when doing mobile dev.
One thing that these types of articles always miss is that HTTPS isn't just there to protect user data being submitted to your website. It's also there to make sure that the data you're looking at wasn't modified by a third party without your consent.
The fact that someone technical has missed this is a really good argument for redirecting users from HTTP to HTTPS.
I think there are few excuses left for not having HTTPS on your website. Having an HTTP-only variant of your website should likewise not be a thing. A redirect is better than a not-found error. IMHO those are the two valid responses for a plain-HTTP request to a public website (go away, or go here instead).
As for "updated browsers", anything that doesn't support HTTPS (or redirects) in 2022 is not fit for use on the modern internet. Most of the web would in any case be unusable with such a browser already. And essentially everything that shipped in the last 20 years or so would be able to deal with this (with the exception of handling newer TLS versions, perhaps). You'd be well advised not to use a browser that hasn't been updated for that long.
If somehow you are using such a browser (why?!), you might want to fix that ASAP. Meanwhile, I'll blindly assume the intersection of those users and this audience (Hacker News) is extremely small to non-existent. If that intersection exists at all, it's probably for some esoteric reason that has nothing to do with an inability to fix the actual problem (like fixing your browser setup) and is by choice rather than by circumstance. Either way, it's your problem to solve and not something for website maintainers looking to do the right thing to waste energy on.
That is a pretty weak argument for why we should sacrifice privacy. We should continue moving in the opposite direction. Ideally, within 5 years, browsers should send up warning flags whenever the user hits a website that doesn't use HTTPS.
They had to go back 20 years to find a browser that does not support TLS. Software that old is going to have other fundamental problems accessing the modern web. The amount of work necessary to make every website actually work on a browser that old is far, far greater than just supporting HTTP.
So the argument is that HTTPS has too much overhead and excludes those who can't use updated browsers? Unless I missed it, the article doesn't discuss MITM, non-repudiation, or censorship/privacy concerns.
If you want to update/browse with Windows 2000, then you should set up a local TLS terminating proxy for it, rather than asking the rest of the world to decrease their security for you.
It gives a few examples of use cases where protection against these is not necessary. This is one of my pet peeves too and I agree with the author. Though if you want Google to list and rank you, you have to have it. (This might have changed. Google doesn't do anything for long.)
As a side effect of the push for https, local web log analysis is basically worthless nowadays. This might be why google pushed it so heavily.
There are a lot of ISPs, and even consumer wifi gateways, that mess with unencrypted traffic. HTTPS is the only way to ensure that the data served is what the user received.
This. It is so mind-blowing that in 2022 people, presumably knowledgeable engineers, don't care about their privacy and security, and willingly suggest letting ISPs and whatnot inject ads and sometimes even miners [0].
Using HTTPS by default has obvious security benefits, but it's not clear we need a redirect to implement it. We could have leveraged the Alt-Svc header instead, which is meant specifically for this. That would allow modern browsers to use HTTPS without preventing older browsers that want to use HTTP from doing so (at their own risk).
You can already implement this using the Upgrade-Insecure-Requests header[1]. Servers just need to check for that header and only send an HTTPS redirect if it's included. All major browsers already send that header except Internet Explorer so the security impact of such a check would be pretty minimal.
The only argument I can see against it is that it's insecure-by-default, but you could theoretically fix that by allowing clients to send `Upgrade-Insecure-Requests: 0` to explicitly indicate that they don't want to be redirected.
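A minimal sketch of that server-side check, using Python's standard library (port 8080, the 301 status, and the placeholder body are my choices for illustration; a real site would do this in the web server config):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Browsers that prefer HTTPS advertise it; redirect only those.
            if self.headers.get("Upgrade-Insecure-Requests") == "1":
                self.send_response(301)
                self.send_header("Location", "https://" + self.headers["Host"] + self.path)
                self.end_headers()
            else:
                # Legacy clients that never send the header keep getting plain HTTP.
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"plain HTTP content\n")

    HTTPServer(("", 8080), Handler).serve_forever()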
> But what about sites like https://doesmysiteneedhttps.com? While this website makes a few valid points, it still relies heavily on “fear tactics” that honestly don’t apply for the vast majority of users. It’s overkill.
Sorry, but not good enough.
1. "a few valid points": you avoid making your visitors liable in oppressive environments (employers, regimes), you avoid very real content injection (commercial or malicious), and you give the visitor a way to know that content wasn't tampered. That's a few valid points. (The rest are counter-arguments.)
2. "fear tactics": not true. Protecting the integrity of your visitors and your content is nurture, not fear.
3. "don’t apply for the vast majority of users": by making HTTPS standard at practically no cost, you make it work for those for whom it matters. Just because I feel safe on Hacker News doesn't mean that any visitor who goes here will be treated fairly by reading my message.
Pervasive traffic monitoring is an attack, and it is an attack being carried out at massive scale. I believe HTTPS is the most meaningful and widely available method out there to stop this attack.
If your website does not support HTTPS, you are an accomplice in pervasive monitoring.
I pretty much wholeheartedly disagree with this. HTTPS provides confidentiality, but it also provides integrity, which is arguably much more important for many cases. If you’re browsing around on an older (perhaps unpatched) machine, a plaintext HTTP website can easily have malware embedded in it by another person on the wifi network (with ARP spoofing or whatever). It can also have incorrect links (see sslstrip) that impersonate other websites. It used to be that a major UK bank didn’t use HTTPS for their homepage, only for their online banking application. You would probably think this is fine, but it’s trivially easy to replace the link to said online banking application with one that MITMs you.
Requiring HTTPS also provides another benefit, which is that it stops downgrade attacks. If your site is available in plaintext, I can just block access to the secure version in order to do my nefarious business. The internet is a nasty place these days.
Plain HTTP is synonymous with trust. In an ideal world where no one would snoop or mangle responses, it should be enough.
In our still-imperfect world with potential MITM attacks, sometimes that trust is not warranted. However, one still should be able to choose to trust own connection. I don’t want this choice to be made for me with no way to appeal.
I may use a self-provisioned VPN, but actually this doesn’t matter. The point is that the user has no choice, regardless of circumstances.
As an example (although it’s whataboutism), the option of using insecure SMS 2FA is routinely offered in much more sensitive contexts compared to the option of using plain HTTP to read static text content—yet the former is considered acceptable but the latter isn’t.
Modern browsers with a preload list built after your site is added will always use https. Modern browsers without the preload will load the first page they hit as http, but pick up the preload from the favicon, and future page loads will be https. Resources from the first load will likely be http, depending on favicon loading timing. There's no additional MITM risk for modern browsers, because a MITM could strip your redirect when an http load is attempted just as easily as they could mess with your HTML.
Older browsers can still go to the http version, although inbound links are likely to be https, because people like to cut and paste from the URL bar, and most people are going to have a modern browser with your site preloaded. Users of older browsers would need to edit the URL in the URL bar, a skill they'd likely rapidly develop.
IMO it's better to close port 80 entirely because many clients will end up sending unencrypted headers and rely on the redirect, not realizing they've exposed themselves.
===>
GET /your_api HTTP/1.1
Host: passwordleaker.com
Authorization: Bearer 12312312321
<===
HTTP/1.0 302 Found
Location: https://passwordleaker.com/your_api
Yes, because you configured it to go to http, not https. Usually if you have credentials to send you also have a domain configured for those credentials, and then you'd configure the correct address, right?
That's my whole point, though. If you build an API and it's then up to customers to integrate with you, you can help them avoid one of many pitfalls by not allowing them to do a dumb thing like sending private headers over an unencrypted channel.
> because you configured it to go to http, not https. Usually if you have credentials to send you also have a domain configured for those credentials, and then you'd configure the correct address, right?
The "you"s in your sentence are actually two different actors: the one doing the configuring is often a not-technically-excellent client, while the "you" who has the domain is the API creator, who can also help the client out by closing port 80 or refusing non-TLS traffic on it.
> By using HTTPS my website increases it’s overhead by almost 100%. It should be mentioned that my personal website is also plain text based and tiny. Imagine the impact of a blog article or research paper with significantly more content.
An extra kilobyte. Wow. Such data transfer. What a dealbreaker.
No HTTPS means worse performance, because HTTP/2 requires it, and HTTP/1.x doesn't have connection multiplexing. When I moved my blog to HTTP/2, I noticed that it loaded faster, even when on the same LAN as the server.
> Helpful Tips:
> If users are nervous of links set in standard http:// format, they can add s themselves or better yet, use a browser extension like HTTPS Everywhere (highly recommend)
> No HTTPS means worse performance, because HTTP/2 requires it
HTTP/2 actually does not require it; the spec allows it over unencrypted connections. All browsers have decided to only enable it over TLS connections, though.
Meh. The reason I like HTTPS is that it makes it difficult for unscrupulous ISPs to insert ads and code into my pages. And yeah, bad guys inserting nefarious cookies as well.
It's great that the poster hasn't had to deal with such problems, and they're certainly not universal. But they're common enough to just do HTTPS and not worry about it.
Also... HTTPS is adding half a second to load times? Pretty sure this is what QUIC and HTTP/3 were designed for. Or move your content to Akamai or CloudFront or Cloudflare or whatever CDN you're comfortable with.