Why Your Static Website Needs HTTPS (2018) (troyhunt.com)
153 points by codesections on Jan 25, 2020 | 137 comments



The recommendation of Cloudflare here seems poor. Using CF to make an HTTP only site support HTTPS will only prevent MITM between CF and the end user. MITM between my server and CF is not improved as it's still HTTP. Yes, you can add a self signed cert and tell CF not to check the cert validity, but that doesn't prevent MITM.

Worse, Cloudflare can inject JavaScript into your site. The default settings will show Captchas to users if CF thinks they are not trustworthy. So you end up with MITM anyway if you aren't careful. For a static site, does a captcha really make sense? Cloudflare makes the internet worse with insane defaults like this.

https://community.cloudflare.com/t/getting-cloudflare-captch...

https://www.techrez.com/remove-cloudflare-challange-page/


Then again, the defaults of the internet let anyone remove you from it with a $5 booter and your data is in cleartext and your MITM is every ISP + any hop in between instead of just your reverse proxy.

Takes defaults far more insane than Cloudflare to do worse than the internet status quo.


CF is undoubtedly good for DDoS protection, but that doesn't negate the fact that it does other things poorly.

FWIW I've found more websites that prompt for Cloudflare captcha than I've seen websites offline due to DDoS. I've seen lots of websites offline because they get too popular though. Many websites that I've known were currently under DDoS attack stayed online while I used them (like GitHub using Akamai).

At the risk of Troy writing a blog post proving me wrong... does the average static website need DDoS protection? I'd guess they don't.


Can I pay 5 dollars to have the attackers attack my monero powered site? All of those resources generating coins should be worth more than 5 dollars.


Sorry, what is a “$5 booter”?


Automated service for issuing DDoS attacks.


Cloudflare itself has a feature to issue certs to protect CF<->your server.


Indeed it does, and I’ve been using this in production for many months.


Hmm, I haven't seen this - do you know if it's available on the free tier?


It's their "Full Strict" HTTPS setup [1]. It's included on the free tier, I've been using it over a year and counting.

[1] https://support.cloudflare.com/hc/en-us/articles/200170416-E...


Ah, right, I knew about this, but you still need to set up an SSL cert yourself; I thought you'd meant there was some kind of turn-key solution for E2E I wasn't aware of.


How would you propose a 3rd party intermediary provide secure end to end encryption without you creating a client side cert?

Takes 30 seconds with letsencrypt to create your own TLS cert.
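For a typical nginx box it's basically a one-liner (assuming certbot and its nginx plugin are already installed; details vary by distro):

  sudo certbot --nginx -d example.com -d www.example.com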

If you are suggesting CF do it server side then it's nothing more than snakeoil.


> Takes 30 seconds with letsencrypt to create your own TLS cert

You know very well that's not true, and that kind of exaggeration does Let's Encrypt no favours. I'm sure it doesn't take long when it just works, but when I've used the acme client before on systems with Apache and nginx, it was a complete PITA to get working. I haven't had to use it for a while though, so newer versions of the acme client might well be much better.

> If you are suggesting CF do it server side then it's nothing more than snakeoil

No, I didn't mean that.

What I meant was something simpler than Let's Encrypt, where you didn't need to expose an HTTP endpoint on your server for proof of domain ownership, since Cloudflare already know you control particular domain names and no further validation is needed.

Perhaps they could provide a one-time use GUID, which you'd pass to a simple client on your server, which could then send a CSR containing that GUID to a Cloudflare endpoint, which would in turn sign your CSR.
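Roughly like this; the signing endpoint is entirely hypothetical, but the CSR side is just stock openssl:

  # key + CSR carrying the hypothetical one-time GUID (here stuffed into OU)
  openssl req -new -newkey rsa:2048 -nodes \
    -keyout origin.key -out origin.csr \
    -subj "/CN=example.com/OU=9f3b2c1e-0000-hypothetical-guid"

  # hypothetical Cloudflare signing endpoint that returns the signed cert
  curl -X POST https://api.cloudflare.example/v1/sign-csr \
    --data-binary @origin.csr -o origin.pem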


It's called the Origin CA and yes it's available on the free tier.

https://support.cloudflare.com/hc/en-us/articles/11500047950...
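Once you've generated and downloaded the origin cert pair from the dashboard, it installs like any other cert; e.g. in nginx (paths are just examples):

  server {
    listen 443 ssl;
    server_name example.com;
    # Cloudflare Origin CA cert + key downloaded from the dashboard
    ssl_certificate     /etc/ssl/cloudflare-origin.pem;
    ssl_certificate_key /etc/ssl/cloudflare-origin.key;
  }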


My primary concern with Cloudflare proxied sites is that I have no way to assess the technical competency of the sites they proxy. I can check most HTTPS sites using an online tool such as SSL Labs or Mozilla's Observatory. For example I discovered a local mobile operator who had not bothered to patch their primary web server against Heartbleed an entire year after the exploit was discovered, which was a real shocker.

Cloudflare are technically competent which is great, but their clients are impossible to assess. I see a lot of formerly insecure local web servers switching over to Cloudflare (and HTTPS), and I know it's the same morons operating the web server. For me the safe default assumption must be that the site behind them is run by people who are not technically competent. I suppose Cloudflare could set a server header indicating the connection between them and the proxied site is HTTPS?


I put in a support request back in March 2017 asking them to add a header indicating which type of HTTPS setting was used (flexible/full strict/full weak/etc) and this was their response:

Hello Ryan,

This is something we are definitely considering. I will pass your feedback on to our team. Of course, we need to carefully consider the security implications for the millions of sites using Cloudflare before making this change, as it may have unforeseen consequences. Let me know if there's anything else that I can help with at all!

Best Regards,


Easier for you to assess the technical competency, however nice for you = larger attack surface for the site. Why would anyone in their right mind expose unnecessary info for hackers (whether whitehat or blackhat) to assess whether they’re an easy target? This is like asking people to turn on nginx server_tokens. Also, unnecessary headers -> more bytes transferred -> more bandwidth cost, especially for very short responses (but probably doesn’t matter for Cloudflare.)


Agreed. Cloudflare makes it impossible for people using TOR, proxies and any browser except the most popular ones. I like Troy, but this advice really rubs me the wrong way.


> Cloudflare makes it impossible for people using TOR, proxies and any browser except the most popular ones.

Does it? I know the defaults can be overly sensitive, but I wouldn't call it impossible.

I have my site behind Cloudflare, using its protections:

> ALPN, server accepted to use h2

> Server certificate:

> subject: C=US; ST=CA; L=San Francisco; O=Cloudflare, Inc.; CN=sni.cloudflaressl.com

> ...

> GET / HTTP/2

> Host: sixteenmm.org

But you won't find any issues streaming the videos on it under Tor. I, and several others, regularly do.


That's a feature, though. 99.9% of the time, someone who ticks those boxes is abusing your service, not some rogue journalist doing research under an oppressive regime like we like to believe (lol). Blame the bad actors, not the people choosing Cloudflare to get some relief from holavpn/IoT botnets, which are dirt cheap and getting cheaper every day.


I think you'll find a number of people who have seen issues (myself included). Things like adding Cloudflare on top of an API, breaking clients when CF decides their IP needs verification. Besides, other than DDoS, how do you abuse a static site?


> I think you'll find a number of people who have seen issues (myself included).

Yes, that's the difference between 99.9% and 100%. Cloudflare cited traffic percentages which match what most experienced site operators have seen, with a much higher percentage of malicious activity using Tor than most other networks and no easy way to have per-user reputation (that was the impetus for developing the “Privacy Pass” feature).

Here's what they said at the time, which also has some answers for your question about non-DoS problems:

> On the other hand, anonymity is also something that provides value to online attackers. Based on data across the CloudFlare network, 94% of requests that we see across the Tor network are per se malicious. That doesn’t mean they are visiting controversial content, but instead that they are automated requests designed to harm our customers. A large percentage of the comment spam, vulnerability scanning, ad click fraud, content scraping, and login scanning comes via the Tor network. To give you some sense, based on data from Project Honey Pot, 18% of global email spam, or approximately 6.5 trillion unwanted messages per year, begin with an automated bot harvesting email addresses via the Tor network.

https://blog.cloudflare.com/the-trouble-with-tor/


What does a Tor user abusing a static site look like?


I was disappointed the article is so thin on real substance. It could have listed out the reasons to always use HTTPS. Easily done:

1. Privacy matters. A medical website, or indeed Wikipedia, should prevent a snooping ISP from finding out you have been reading about an embarrassing condition. This is similar to the way librarians are extremely protective of their loan records [0]. Netflix use HTTPS for their streams, for the same reason (it does nothing to aid their DRM, it's purely about privacy) [1].

2. HTTPS prevents ads/trackers/malware being injected into the page by unscrupulous ISPs (this really has happened [2])

3. Modern browsers will (rightly) warn users not to trust the site. This makes the site look bad.

4. Some fancy browser features are disabled if you use unencrypted HTTP. Likely irrelevant for a static site though.

5. Let's turn the tables and ask why you wouldn't use HTTPS for a public-facing web server. There are just 3 reasons:

* Reduced admin overhead not having to bother with certs

* It enables caching web proxies, which is only relevant if you're running a serious distribution platform like Steam, or a Linux package-management repo [3]

* Better support for very old devices, such as old smartphones in the developing world

[0] https://www.theguardian.com/us-news/2016/jan/13/us-library-r....

[1] https://arstechnica.com/information-technology/2015/04/it-wa....

[2] https://doesmysiteneedhttps.com/

[3] https://whydoesaptnotusehttps.com/

(Taken from an old comment of mine at https://news.ycombinator.com/item?id=21912817 )

Edit: Added the third reason not to use HTTPS


ad. 5 - There are more. For example:

* You don't like the Let’s Encrypt Subscriber Agreement, for example the part about indemnification and attorneys' fees.

* Your domain name is on Let's Encrypt blacklist (https://community.letsencrypt.org/t/name-is-blacklisted-on-r..., https://community.letsencrypt.org/t/domain-blacklist/106374), though they now seem to un-blacklist most cases on email request.

I think the problem is that there are few alternatives to LE.


I'm not sold on either of these objections. Let's Encrypt is not the only CA. If you don't like the free CA, pay to use another one.

If I see unencrypted HTTP on a website, I immediately think less of that website. It's a Good Thing that the web has got to this point. Quibbles about Let's Encrypt's terms don't strike me as convincing.

> I think the problem is that there are few alternatives to LE.

But there are. Again, there's a whole marketplace of CAs to choose from.

If anything, there are too many CAs trusted by today's browsers.


> * It enables caching web proxies, which is only relevant

It is also relevant if you're in an African village, or any other place with bad/slow internet, where children in school can't visit typical websites because each of their devices has to make its own separate encrypted connection.




There is an extension to TLS 1.3 that supports encrypted SNI, and it has been deployed on some popular services since 2018.

Of course, your DNS is likely still unencrypted. But maybe one day we'll get there.


The exchange between Troy Hunt and Jacob Baytelman is a little aggravating for me--they appear to be talking past each other.

Jacob challenges him to "hack [his] static blog". I don't know what 'hacking a website' means to him, but to you and me it probably means compromising the web server, which is not directly related to HTTPS (although I can think of a lot of ways that the use of HTTP could lead to a web server being compromised).

Troy responds by taking him up on this challenge, accuses Jacob of thinking that his site is immune from transport layer risks, and then performs a man in the middle attack on himself using Jacob's site (when in reality literally any HTTP site could have been used).

It's like these two are having completely separate conversations.


If you read the Twitter exchange leading up to the 'challenge accepted' tweet linked in the article (https://twitter.com/troyhunt/status/1014736960542289922), it's clear that the context of 'hack' is specifically about man-in-the-middle injections. That is, hacking his static blog as it is served up to users.

And arguably, the fact that nearly any HTTP site could have been used for the demo is partly the point.


It’s clearly not clear to Jacob, since he eventually says “I am afraid you will only demo MITM attack on the traffic.”


Certbot and LetsEncrypt make this a trivial process these days. Takes 15 minutes to set up and is free. Why not use it?


> Certbot and LetsEncrypt make this a trivial process these days.

Not on all shared hosting plans. And migrating from a shared hosting plan can be quite a lot of work, depending on the website.

Also: Getting a LetsEncrypt wildcard cert for Apache on a CentOS 8 VPS is non trivial (for individuals not already familiar with docker) [1]

[1] https://certbot.eff.org/lets-encrypt/centosrhel8-apache


> Not on all shared hosting plans

Use the nginx/apache plugin, or the webroot option. Yes, it won't do *, but on shared hosting, that might even be better.
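For the webroot option, that's something like this (the docroot path is just an example):

  certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com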


If you have shared hosting, your install step is at the hosting outfit's discretion. If they intentionally don't support Let's Encrypt to drive sales to a CA, then they have no reason to let you do it yourself. This was more common when Let's Encrypt was new, but it still happens now.


I was redeploying an IIS-hosted domain with Let's Encrypt. Trivial is not a word I would use to describe it. In addition, the tooling wants too much auth on my server and DNS hosting accounts. It is better and cheaper than 10 years ago, but far from simple/trivial.


Part of the rationale of ACME (the protocol which makes Let's Encrypt work, and which is standardised in RFC 8555) is that vendors should bake the support into their TLS-capable products like IIS.

It seems crazy to me that Microsoft shops are annoyed that the Windows tooling for Let's Encrypt isn't great yet didn't direct that straight at their vendor. What's the point of having that relationship if they don't do what you need? Are you paying them because you hate money?

Microsoft definitely could, if the feedback from customers was there, have shipped an ACME client for IIS in newer IIS releases/ updates. But the feedback from customers is seemingly "More beatings please, and have you thought of increasing prices?"


I understand what you're saying, but I don't think I can just make this work out of the box with a Tomcat install? Don't I still need another daemon/batch process? My issues are much less about IIS cert updates and center around how I need to prove site/domain ownership, especially for a wildcard. To me the documentation looks the same (Windows or Linux): a periodic update to my DNS TXT record to show I control this/these sites. Which means my server has to have write access to my DNS?


If you want a wildcard, then yes, Let's Encrypt only offers that via DNS proof of control. You can do two tricks here which let you greatly reduce the scope of the power granted to the machine getting the certificate, and then avoid that machine having to be the machine which wants the certificate.

1. Use a CNAME to make a different DNS hierarchy own the ACME proof of control. A CNAME DNS record can be permanently added to your real site, telling Let's Encrypt that it should ask DNS for a different name instead, and only that name needs to be writeable by the periodic renewal process. You can use this to pass DNS challenges for a domain where actual DNS changes take six weeks and a dozen people's signatures, because you only need one change once, not once per renewal. You should find documentation explaining this, or you can ask Let's Encrypt's community site for help if you explain your specific situation. (A sketch of both tricks follows point 2.)

2. CSR re-use. Certificate Signing Requests don't have timestamps inside them. By default Let's Encrypt's popular Certbot client mints brand new key pairs for every renewal and so it needs fresh CSRs, but you needn't do that. Mint keys once when a server is created, produce a CSR for the certificate you'll want, and then re-use that CSR on a machine which just does the renewal periodically. The actual servers can fetch their renewed certificate from that machine or wherever; the certificate is public so that doesn't matter, and since the keys haven't changed they just need the renewed certificate for the same keys.
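A sketch of both tricks (all hostnames and paths hypothetical):

  ; 1. one-time CNAME delegating ACME challenges to a zone the renewal box can write
  _acme-challenge.example.com.  IN  CNAME  _acme-challenge.acme-helper.example.net.

  # 2. mint the key pair and CSR once...
  openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.key -out example.csr -subj "/CN=example.com"

  # ...then renew from the same CSR each time; Certbot supports this directly
  certbot certonly --csr example.csr --cert-path example.crt --chain-path chain.pem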


Thanks for the reply. I had a strong feeling the tool was causing an issue and not handling case 2 properly (or with enough configuration options).


Or Cloudflare. Just a couple of origin hits and your site is served via CDN for free, fast and secure.


Definitely not. I set up my own personal page on a VPS precisely to avoid LinkedIn, Google Scholar, ResearchGate, or any other third-party service being in control of my public face. Using another service would defeat that purpose.


You are probably using a VPS running on someone else's server, reached through many networks. (So do I.) Any of those can MITM your traffic. Maybe you're also using apt/yum to keep the VPS up to date. Is adding Letsencrypt to the list such a burden? Or maybe you're only against Cloudflare. I go with Letsencrypt.


Is github on your list?


How do you secure the link between Cloudflare and your HTTP only site?


This is very hosting-service-specific: some hosts will expose your server over a connection encrypted with a generated wildcard cert (e.g. Heroku and their appname.herokuapp.com).

CF also allows for self-signed certs: https://blog.cloudflare.com/origin-server-connection-securit... - which are (to me) more complicated than standard certs.

The real game changer in all of this is LetsEncrypt, which has become the de facto option for services with huge numbers of custom domains (Shopify, Hubspot, Wordpress, etc.).


It's complicated, but basically... You don't. It's much better to just set up Let's Encrypt. Takes <15 minutes.


This is bad advice. See this comment: https://news.ycombinator.com/item?id=22146854


The first time I saw a mobile/prepaid ISP inject their notices into my own personal website, I realized I needed to get off my lazy ass and set up LetsEncrypt.


CITM: detected Corporations In The Middle (CITM) attack. Requests blocked: 15%. cdn.example.com, dnjs.cloudyfaire.com, troymcclure.disqus.com, fonts.noodleapis.com, fonts.noodlestatic.com, platform.example.com, noodletube.com, example.com

HTTPS is easy: point all your DNS to cloudyfaire, click Purchase (and by clicking Purchase agree to all the terms, but don't actually read any of them), hand over root access to a program with the word "bot" in it, and allow it to update itself automatically (what could possibly go wrong). Everything HTTPS, all the time. https://en.wikipedia.org/wiki/DigiNotar

Call me skeptical, or see the many (https://slate.com/technology/2020/01/what-to-know-about-the-... https://www.zdnet.com/article/kazakhstan-government-is-now-i... https://en.wikipedia.org/wiki/DNS_over_HTTPS#Criticism) reasons Why My Static Website No Longer Exists.


Devil's advocate: HTTPS centralizes the web around big players. The CA trust model gives a privileged few the right to say what websites are "secure", even in cases where no user input goes down the wire. "Not Secure" in the top left brands and shames amateurs. Come on, just make a Medium page! You should be posting this on a FAANG property! Let's Encrypt is great, but don't forget that it could disappear overnight--after every browser started de facto blocking non-HTTPS traffic.


The Web PKI is in theory subject to oversight by all the big trust stores, which is roughly the set of operating system vendors, except with Mozilla standing in for the free Unixes (and, less importantly, Oracle, because Java is as usual special).

But in practice over many years of involvement my observation would be that the only thing which matters is Mozilla. At the others such oversight is opaque and whilst opacity might mask a hive of frantic activity it's also what doing nothing would look like from outside such a corporation. But at Mozilla it's done in public, where you can participate if this idea of a "privileged few" bothers you. And so again, I can't prove Microsoft doesn't have loads of smart people dedicated to this problem, but I can tell you that their actions in the period I've been paying attention look exactly how I'd predict from two inputs: Government scale purchases of Microsoft's product platform (driving trust inclusion in Windows for a variety of governments I wouldn't trust to tell me if it's raining let alone in the CA role) and copy-pasting Mozilla decisions.


How does Let's Encrypt fit into this? Are they part of the 'privileged few?' Can we trust part of The Few more than other parts?


They have even more power than the paid ones, because most of the sites they secure wouldn't/couldn't get a paid cert if Let's Encrypt disappeared or refused to serve them. At least the customers of paid ones could just move their billing to another service.


I would be more comfortable if there were several ACME-based free certificate providers.

However, I'm far more concerned about Chrome, and the attitude of so many in the biz who think Firefox should be mothballed.


> HTTPS Is Easy

It is not easy at all. Getting a certificate and putting it into the conf is. Maintaining that certificate, applying the ever-growing number of "security" headers, dealing with broken stapling: that is anything but easy.
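To illustrate: the easy part versus the pile that accumulates afterwards, in nginx terms (a sketch, not a recommended config):

  # the easy part
  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  # the pile
  ssl_stapling on;
  ssl_stapling_verify on;
  add_header Strict-Transport-Security "max-age=31536000" always;
  add_header Content-Security-Policy "default-src 'self'" always;
  add_header X-Content-Type-Options "nosniff" always;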


Strange. Once per week I receive a mail from a cronjob telling me my certificate(s) are renewed. So for me the original statement holds.
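The whole setup is one crontab line, something like:

  # weekly attempt; certbot only actually renews certs close to expiry
  0 3 * * 1  certbot renew --quiet --deploy-hook "systemctl reload nginx"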


After a one-time initial setup of my static blog on S3, served via CloudFront, with certs managed by AWS Certificate Manager, I've not had to touch it again. Not bad for a total monthly cost of $1.03 USD (plus annual domain renewal of ~$13).


Back in early 2009, I was launching a file storage web service similar to Dropbox (without the client, but with an API with OAuth 1 support) using AWS EC2 and S3. I planned to use HTTPS, but it was expensive for me (as a college dropout), and the website is still online without it. I abandoned the project afterward. Recently, I started to migrate it from AWS to Google Cloud Platform, and one of the goals was to add HTTPS. However, I haven't had much time to finish the migration, and it's still not served over HTTPS (even though it has all the other sorts of protection that were the norm back then). I wonder how many other "legacy websites" have a similar issue (which I don't find justifiable for anything in production).


Have you tried Let's Encrypt à la certbot? It wasn't a painless process when I did it, but I do have auto-renewing "green lock" SSL certs for free.


Yes, I have.

Let's Encrypt's ACME challenge would solve this going forward. However, I'm using a really old machine + OS, and the nginx version I'm running is old enough not to be compatible with it. So, instead of having a hard time updating the server, I decided to migrate the site to a newer machine (especially since some AWS technical limitations don't let me migrate to a new instance type).

~10 years ago, I slacked on getting an HTTPS certificate, as it would have cost a lot of money.

~Today, I want to use the ACME challenge with Let's Encrypt (no need for OV on a portfolio website), but I never find the time to finish the migration (which should take another 4 to 16 hours).


There are ACME clients written in Python 2 or even pure shell -- they should work even on very old systems (though not older than ~2010; you need an openssl with TLS 1.2 support).

I personally prefer them even on current systems, as I don't really like the "automagic" nature of certbot.


I got tired of manually restarting this or that every two months or so due to failed letsencrypt updates that left my sites inaccessible. Eventually I replaced the web server with a new one running more current versions. Hopefully it'll keep things together for a few years…


I’ve been running it for 3-4 years. After a long initial setup, I’ve only had to touch it once: when some authentication protocol was deprecated. A small price to pay, all things considered.


“Let’s encrypt” has pretty much eliminated this problem.


GCP load balancers do have Let's Encrypt support. They will issue certificates automatically if you want.

https://cloud.google.com/load-balancing/docs/ssl-certificate...


Yeah, I already tried this too, although not on my own old project but for a SaaS company I was working for (one that would generate a Kubernetes cluster for you automatically, using either the DNS or HTTP challenge).


This article is two years old. I think it's been well established that sites need HTTPS, if for no other reason than that browsers and search engines punish you in a variety of ways for not having it. Certificates are free with Let's Encrypt, so there is no excuse not to anymore.

In the case of Cloudflare (or any CDN), best practice is to reject requests not coming from the CDN. Cloudflare doesn't support AWS S3-compatible storage directly (it won't make signed requests), but you can write an IAM policy that only responds to certain IPs.


Here are those IP addresses. Just know that the list can change over time, so you'll want to update your IAM policy.

https://www.cloudflare.com/ips/
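In bucket-policy terms it's roughly this (only two of the ranges shown; pull the full, current list from the link above):

  {
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "AllowCloudflareOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": ["173.245.48.0/20", "103.21.244.0/22"]}
      }
    }]
  }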


If you're using S3, just use Cloudfront and their auto-issuing free HTTPS certs.


Note: 2018.

Troy talks about a tipping point, which was Jan 2017.


HTTPS PR is not the internet's defense against hackers. It's FAANG's defense against Comcast and AT&T.


I found that one ISP used to inject ads into HTTP pages, and users had no idea where these pop-up ads came from. The HTTP protocol needs to die.


> In one of many robust internet debates (as is prone to happen on Twitter)

Maybe I just don't get Twitter. Every time I look at a thread it starts with some coherent conversation, but then devolves into a bunch of tangents that don't coherently follow each other.

HN and similar seem much better suited.


Makes a lot of sense when you acknowledge that we like the dopamine release of arguing. Twitter condenses an argument into just the parts you'd write and read anyway. But you'll find long-form versions of the same exact thing on message boards and Reddit.


I’m stuck in a related situation: I own a website with heavy traffic that contains inline iframes to some HTTP pages (about 30% of pages) hosted by third parties. I can’t turn HTTPS on for my website, otherwise these iframes would be blocked by the browser. And since I don’t offer HTTPS, I can’t offer features such as login/sign-up etc. Any ideas?


Proxy the iframes through your own server.
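In nginx that's a location block along these lines (origin name hypothetical):

  # serve the third party's HTTP content from under your own HTTPS origin
  location /embeds/ {
    proxy_pass http://legacy-video-host.example/;
    proxy_set_header Host legacy-video-host.example;
  }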


Wouldn't that be very bandwidth intensive? They're video streams served over HTTP.


Oh, so then I'm pretty sure they use HTTP to reduce CPU load.


Open a popup/new tab off your own site, with no encryption, and show the iframe there. But then of course it's not inline with the rest of your site's content.


Yeah, I’m considering that... a popup to HTTP. Then again, the most popular content is being served over HTTP. Bummer!


I can’t see any good solution, unless those third parties start using HTTPS.


I’m considering showing the iframes in an HTTP popup... that way the main site can be served via HTTPS and the non-HTTPS content would simply pop up.


I'm in the same situation as you. Still looking for solutions.


HTTPS is bad:

- It wastes resources.

- It adds complexity.

- You can solve everything HTTPS solves over HTTP!

- It encourages passive destructive behavior.

- Troy Hunt probably has money coming in from certificates somehow.

HTTP/2 and HTTP/3 are also bad.

WebSockets are bad.

As a side note:

Vulkan is bad.

HDMI is bad.

Wake up, people. Time to get off that over-engineering couch and downvote the guy telling the truth again!


So... you are both correct and not, which is probably why you're getting downvoted.

Is much of the current tech overengineered? Yes, it definitely is.

HTTPS on its own - having TLS and a certificate - is not. It never was. But with stapling, CORS, X-XSS, and the rest, it becomes a beast. Those are becoming requirements, and they are making things _very_ complicated.

I hear you, and I tend to agree on many things. Serial ports were gloriously simple, whereas USB3 is a nightmare on every level. VGA was beautiful in its simplicity; HDMI 1.4 with Ethernet included is certainly too much.

mta-sts is probably the worst idea that could have been added to email, by relying on HTTPS requests.

My personal take on this: use all of them responsibly. Use HTTPS, but don't make it exclusive; let the user make the choice: keep HTTP as well. Leave CORS, STS, Referrer Policy, etc. out.


You are the first person in the world who has a dialogue with me about this. Thank you!

Here are a bunch of solutions I use instead of complexity:

I hash with a one-time server-generated salt for login:

http://talk.binarytask.com

I use Comet-Stream for real-time communication over HTTP:

http://fuse.rupy.se

And my latest finding is DPI for video:

http://talk.binarytask.com/task?id=2433316338364993026



Yes, add HTTPS, but keep the option for mere HTTP, because backwards compatibility is good.

Many parts of the world can't deal with TLS 1.3 + HTTP/2-only websites.


It's not “many parts” but a fraction of people with older browsers (mostly IE, Android). The global figures shown on Can I Use are pretty close to what I see on an international web property as well — ~80%:

https://caniuse.com/#feat=tls1-3

That may or may not be a fraction of traffic you care about depending on your visitor profiles and security posture but it's not really accurate to say “many parts of the world”.


Which parts?



There are almost no arguments given, but all the other ones are nicely rebutted on n-gate:

http://webcache.googleusercontent.com/search?q=cache:hV6m26a...


Without HTTPS, Russians could MITM my knitting blog any minute now!

Indefinitely babysitting letsencrypt is a small price to pay to keep those grannies safe!


A cronjob does the job just fine - far less babysitting than keeping your OS up to date with security patches (OK, that's another cronjob, if you're happy with auto reboots on occasion).


Everything in awful awful IT requires babysitting--even cron jobs.


If you’re running your own server you need to be babysitting it; the extra babysitting for your HTTPS cert is negligible.


Let me guess... IT guy?


The problem with these calls for HTTPS is that those making them believe HTTP and HTTPS are mutually exclusive. They completely turn off human-navigable webservers and leave only the machine-navigable ones online. It makes the web accessible only to computer software written in the last 5 years.

There are plenty of websites, I'd say most, which do not need HTTPS. And my static website does not need HTTPS. It's nice, sure, but it's a personal website and there's no money or personal information involved. Leaving an HTTP version going alongside the HTTPS and Tor hidden service is just fine.

The greater evil is having people run third party code by default on every website from every random domain that's called. Now that's insecure. It's like opening every email attachment you get. Every single "danger" of HTTP he lists is actually a danger of running third party code blindly and automatically.


Do you want ISPs to inject random crap at the top of your website? Because that's how you get ISPs to inject random crap at the top of your website.

I remember getting a Vodafone SIM card (I think it was in Belgium or the Netherlands) and seeing their banner displayed on MY WEBSITE! It wasn't in a language I know, and it might have just been a bandwidth warning or something to indicate I needed to reload, but it was still on my website, injected right in there.

HTTPS is needed for static sites, if you want to ensure your readers see the exact same site that you made.


A few years back I used to not redirect http:// to https:// on my personal site. Then one day at an airport or something I somehow visited the http:// variant. Bam, a pop-up ad in my face.

I set up the redirect and HSTS ASAP.
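For anyone wanting the same, it's a few lines of nginx (or the equivalent elsewhere):

  # redirect all plain-HTTP traffic
  server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
  }

  # and inside the TLS server block:
  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;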


>Do you want ISPs to inject random crap at the top of your website?

If your ISP does this, or if you are on a sketchy network somewhere, then maybe you should not use it at all. Get a new ISP, or use a VPN if you are that worried. If the webmaster is not sharing sensitive information on his casually maintained static website, then that is good enough reason not to use HTTPS. I know it sounds... uncaring.

It is true that ordinary people, who don't understand the risks, could get MITM'd and never suspect a thing. For some reason I still don't care enough to put HTTPS on my shitty old Flash game website. I just can't be bothered. I think that is a good enough reason. Blame should go on the ISPs who are MITM'ing their customers.


Sure, but when the biggest (or only) ISPs in many countries are doing it, and you can prevent it by taking out 15 minutes to set up Let's Encrypt, that's on you.


Nope. They can get a VPN, or fight against their big ISP or government to stop such dubious practices. Your point of view seems to come from a standpoint of infantilization of these users.


You expect that from the vast majority of the population which is not tech-savvy?


I expect the vast majority to complain if their internet provider is putting ads into websites. The bigger the company, the bigger the group.


In fairness, it's the ISP that you pay that is doing this. Asking every website to change to prevent it, instead of talking to your ISP and changing providers if necessary, doesn't sound like the right people taking responsibility.

Don't trust random SIM cards you buy off the street either.


It is interesting that the main argument against staying http is one of social responsibility. Your static site isn't any more susceptible to attacks with http only, but your users are. A bunch of MITM techniques are thwarted by only visiting https sites. Is that your problem as a webmaster? Since LE, I have taken up the position that it is easy, and I prefer https sites as a user, so I really don't have a good reason not to enable https.

Also, an attack on my end users is an attack on my site. Anytime someone wants to see my content and gets something else, that is bad. If I can significantly raise the difficulty of doing that, why wouldn't I?


Exactly. As a user I've installed the browser add-on HTTPS Everywhere and have the "Encrypt All Sites Eligible" option enabled to block all unencrypted pages. On the odd occasion that I encounter a page that isn't available over HTTPS, I'm prompted to choose whether I want to make an exception and allow the site to load for that session. I rarely do, though.


Did you not read my comment? I love HTTPS. I make sure all my websites have it available. But I also make sure there's an HTTP site too.


To what end though? Why do you even need to support the insecure protocol? I'm not aware of any widely used browsers that can't do HTTPS. If that's what you're worried about, then you should HSTS preload all of your domains, so that browsers that do support HTTPS will only ever get the HTTPS version, and aren't susceptible to an SSLstrip attack.


It's not an insecure protocol. What is insecure, in every single example I've seen in this thread and in the article, is the bad defaults of browsers executing javascript automatically. Without that terrible design choice, prioritized because of commerce and the desire to change the web of documents into a surveillance operating system, HTTP would be, and is, just fine.

Anyway, to directly answer your question: there are browsers that can't do all of HTTPS, because of false "security" enhancements being pushed for sites that don't need them, like restricting the set of TLS versions that are accepted. ref: https://scotthelme.co.uk/legacy-tls-is-on-the-way-out/


> It's not an insecure protocol.

It absolutely is. In what sense is HTTP anything but an insecure protocol?

HTTP does not prevent man-in-the-middle attacks or content-injection. It does not ensure you are connecting to the domain you think you're connecting to. It does not prevent snooping on transmitted data. If it did, there would have been no reason to invent HTTPS.

> Without that terrible design choice, prioritized because of commerce and the desire to change the web of documents into a surveillance operating system, HTTP would be, and is, just fine

Absolutely not. You do not get privacy without HTTPS. You do not block MITM without HTTPS.

It's obvious that HTTPS should be used for online banking and for software updates, but HTTPS should also be used for ordinary websites, to protect your privacy and to prevent content-tampering (by an unscrupulous ISP, or when using insecure Wi-Fi).

People sometimes give Wikipedia as an example of something that doesn't need HTTPS, but these people clearly haven't spent much time thinking about it. A snooping ISP should not be able to tell whether a customer has been looking up an embarrassing medical condition.

I'm reminded of a lengthy HackerNews discussion on this same topic, a month ago [0].

The only compelling arguments against HTTPS are that old smartphones used in developing countries don't support it, and that it prevents HTTP caches like Squid. Browser defaults regarding JavaScript, certainly have nothing to do with it.

[0] https://news.ycombinator.com/item?id=21912817


>Absolutely not. You do not get privacy without HTTPS.

My sites do because I put all of them up as tor hidden services too.

>Browser defaults regarding JavaScript, certainly have nothing to do with it.

They do. Because everything 'insecure' you just described comes from users running code that might be injected. There's no danger from some entity tricking some person into viewing a simple html page.


> everything 'insecure' you just described comes from users running code that might be injected.

No, I gave 3 different examples where JavaScript is irrelevant but HTTPS is still important.

* Online banking (HTTPS prevents snooping)

* Software updates (HTTPS ensures you get untouched data)

* Browsing a Wikipedia page about a medical condition (HTTPS prevents snooping)

> There's no danger from some entity tricking some person into viewing a simple html page.

That's not true. Not all browser security flaws involve JavaScript.

Browser flaws aside, it's still important to prevent an attacker from modifying the page to perform a phishing attack (tricking a non-technical person into visiting faceb00k.com, and then capturing their password). Less seriously, HTTPS blocks injection of spam into your page by an ISP.

HTTPS is also important to prevent profiling by unscrupulous ISPs.


You do realize your ISP knows you visited a domain like wikipedia.org. The only thing private is the page content, which can be gotten by visiting your request.


That isn't news to me, and it does not undermine my point. Again: A snooping ISP should not be able to tell whether a customer has been looking up an embarrassing medical condition.

Someone going on Wikipedia tells you relatively little. Knowing which specific pages they've been reading, tells you a great deal more.

HTTPS goes a long way to preventing a snooping ISP from telling which page you visited. A truly committed ISP might still be able to infer it from the traffic patterns, but they'll have a much harder time than with plaintext HTTP.


With a very large property like Wikipedia it's probably unavoidable that it'll be possible to determine that you contacted Wikimedia, even just from IP addresses. If that's too much you'll need TOR.

But far from "the only thing" being page content, almost everything is "kept private" with HTTPS, the request itself including any body provided, and the response to that request.

So while "visiting your request" might well get them their own copy of the content of a particular encyclopedia page you looked at, they're stuck with not knowing what that request was.

And eSNI plus DPRIVE is the final dash to a finish where the ISP doesn't even know which Wikimedia host you visited, assuming they all share the same IP ranges. Italian Wikipedia? Simple English? Wiktionary? Wikivoyage? That's suddenly an ocean of possibilities.


> It's not an insecure protocol. What is insecure, in every single example I've seen in this thread and in the article, is the bad defaults of browsers executing javascript automatically. Without that terrible design choice, prioritized because of commerce and the desire to change the web of documents into a surveillance operating system, HTTP would be, and is, just fine.

OK, so what you're saying here is that HTTP is insecure as long as the browser distributors continue to do something that (you say) is insecure.

Well, I've got news for you. The browser distributors are going to continue to do this.

Also, do you really think it's within reason to expect users to examine all the Javascript that is loaded on a page looking for malicious code before clicking some sort of button to run it?


I mean, it absolutely is an insecure protocol. It has no security whatsoever built into it, and anyone can inspect or modify the contents of the connection with impunity. The only way it's not "insecure" is if that word has no meaning whatsoever.

Other insecure protocols besides HTTP would include Telnet, basic DNS, port 110 POP3, FTP, basic IRC, etc. None of this is controversial or really even arguable.


Unless you think that someone changing the site to be a picture of a gaping hole is a problem.


It is an insecure protocol.

For example I was reading Hacker News over HTTP and there is this guy named superkuh saying "Hitler was right".

See, with no execution of code one can completely change a message with no authentication.


For the few use cases where the dangers of HTTP can be reasonably consented to by the user, it would make sense to serve the content on a specific subdomain such as:

http://insecure.example.com

alongside:

https://www.example.com


Offering HTTP allows MITM attackers to strip HTTPS from visitors who want it. HSTS can help, but the vector still exists.

Optional security is not just an upgrade; it opens up a downgrade path from more secure to less.


You can't securely disable HTTP on a website, even if you are not offering it, because a MITM attacker can still fake it by proxying HTTP to your HTTPS site. So removing HTTP is pointless for security and only hurts legitimate uses. I guess this is also one of the reasons why "HSTS preload" exists.
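For reference, getting onto the preload list requires serving (over HTTPS) a header along the lines of:

  Strict-Transport-Security: max-age=63072000; includeSubDomains; preload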

Also encryption is neither security nor privacy.


That's a good point and a dirty trick, but like you said, it's why we have HSTS and preload lists. I only serve HTTPS (as best I can) because I've never had a case where something truly justified the possibility my system would betray my user. I'm sure I could contrive one, and probably there's someone somewhere who'd agree, but I would rather treat that case existing as a bug to be fixed and not a use case to support. Otherwise you get stuff like the other recent thread [0] with people proudly serving unauthenticated binaries with HTTP for no defensible reason.

Someone in a cousin comment made another, maybe better point: URLs get linked and crawled and cached and having them HTTP just normalizes something that was fine in 1995 but isn't fine in 2020.

It's always possible for someone to get proxied like you said, but it's still safer overall if ever seeing "http://" raises eyebrows. There's another front page thread [1] right now about the normalization of deviance.

[0] https://news.ycombinator.com/item?id=22136710

[1] https://news.ycombinator.com/item?id=22144330


Pray tell, how did you come to the conclusion that encryption is not security or privacy, and are you an Australian politician?


There is no need for trolling. He's talking about encryption but calls it security; that's something I have a problem with.


I meant the second paragraph about security in general, not TLS [0] encryption, but I can see how that's not clear. HTTPS improves security in part through encryption.

[0] What do you think of that S?


FWIW, one issue with having the plain HTTP site available is that browsers (without the HTTPS Everywhere extension installed) will default to loading the unencrypted site when someone types your domain into their browser. Google also seems to give out plain HTTP links to your site when it turns up in results.


That makes sense. I wonder if any web servers have smarter decision making for that. Like give a hard 301 to modern browsers, but let older or nonstandard clients get a standard http response.


There are so few browsers that don't support HTTPS that it's not worth worrying about it.

Besides, if the security negotiation is to be done in plaintext, then it's trivial for an attacker to MITM a connection, replace the User-Agent headers, and then trick a server into thinking it should serve content insecurely. This is a huge gaping attack vector. It's better to just always serve securely.


Your static website may be nice and friendly, but if you serve it with HTTP, I could ask for your website and get something else.

Maybe something less friendly.

https://news.ycombinator.com/item?id=22146260


And as long as the user isn't executing all code the web sends them willy-nilly, it's fine. My sites never require users to have JavaScript on.


Using s_client makes what’s behind TLS human accessible, and for those of us that grew up in the age of telnet I get that desire. But in this age, we cannot be leaving unprotected web sites around, even static ones. It’s a matter of responsibility. You as an operator don’t know where network attackers will find a way.



