
I wondered why in the heck a static web site running in a resource-limited environment would even attempt to run https. Based on the HN hug of death, it appears the reverse proxy adds HTTPS/TLS/whatever support, and it wasn't able to cope.

I wish that the demand for security theater in web browsers wasn't so high that they effectively prohibit plain old text transport.



I would blame ISPs and public WiFi providers before blaming browsers for trying to prevent ISPs and/or anyone else from carrying out MitM/malware/injection attacks.


> I wondered why in the heck a static web site running in a resource-limited environment would even attempt to run https.

Err, I don't think it is. This is in the response headers:

    Server: nginx/1.18.0 (Ubuntu)
Getting nginx (let alone Ubuntu) running on an ESP32 would be a seriously impressive achievement.

The web site also says:

    Once it is running, you can access your website by entering your ESP32's IP address in a web browser. For example, if your ESP32's IP address is 192.168.1.100, input http://192.168.1.100 in your browser.
The site works for me now, and I suspect it could support a minimal HTTP (not HTTPS) server if it ran natively on the ESP32, but then we also have this:

    You need to download micropython.py file and place it in the root directory of esp32.
So it's written in Python and runs on a chip with the compute power of a potato. It would be interesting to compare it to the same thing done in C or Rust.
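
For reference, a minimal static file server in MicroPython doesn't need much. A rough sketch of the shape of it (not the project's actual code; Wi-Fi setup is assumed to have happened already, and index.html is assumed to sit in the board's root filesystem):

    # Minimal static HTTP server sketch for MicroPython on an ESP32.
    # Assumes Wi-Fi is already configured and index.html is in the
    # board's root filesystem; this is not the project's actual code.
    import socket

    def serve(port=80):
        addr = socket.getaddrinfo('0.0.0.0', port)[0][-1]
        s = socket.socket()
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(addr)
        s.listen(1)
        while True:
            conn, _ = s.accept()
            conn.recv(1024)  # read and discard the request
            with open('index.html', 'rb') as f:
                body = f.read()
            conn.send(b'HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n')
            conn.send(body)
            conn.close()

    serve()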


The potato chip is more powerful than an Amstrad PC1512.

https://en.wikipedia.org/wiki/PC1512

While any serious game development was done in Assembly, there were plenty of languages like Python to choose from.

So instead of looking down on the project for having chosen Python, as someone who used to play Defender of the Crown at the high school computing club, I marvel at what an ESP32 is capable of.


Yes, you got it right: it's due to that odd message web browsers show for non-HTTPS websites. HTTPS/TLS has been added using nginx on a different server, also to help handle some of the load. The website is still under a lot of traffic; however, nginx is doing its part well.


So here is nginx running on a real computer and a homebrew webserver running on a potato, and your theory is nginx is the limiting factor. And it’s all the fault of encryption and the browser cartel.


In what way is requiring encryption security theatre?


Not everything needs to be encrypted. If I'm serving static webpages the only thing I might want to log is which IPs visited at what time of day.


> Not everything needs to be encrypted. If I'm serving static webpages the only thing I might want to log is which IPs visited at what time of day.

As a frequent user of public WiFi (mostly at coffee shops, airports, etc.), I prefer that every page is encrypted so that nobody can MITM me/tamper with what I see in my browser, even on plain text pages.


If you are frequently using networks you suspect to be hostile, wouldn't you L2VPN your traffic back to a trusted exit point regardless? HTTP/HTTPS is likely only part of the information your computer is providing to said hostile networks. Worrying about the encryption of plain text pages seems like worrying about a stain on the couch whilst the house is on fire.


I think there are two discussions here:

* Is using HTTPS enough on an insecure network? Should one also be using a VPN?

* Would end-users see a benefit from HTTPS on simple/plaintext sites?

> HTTP/HTTPS is likely only part of the information your computer is providing to said hostile networks.

What other non-encrypted information might a normal person's computer be communicating?

I understand that VPNs do improve privacy. Privacy is moderately important to me, but I don't think it's important enough for me to use a VPN.

There are also occasional vulnerabilities in TLS/SSL/HTTPS but... what can I really do about that? Even a VPN might establish its session with those technologies.

> wouldn't you L2VPN your traffic back to a trusted exit point regardless?

It's reasonable to expect someone technical like myself to do this, and maybe I am really just playing loose with my security. But, nobody outside of the tech community is even thinking about this. 99% of people are happy using whatever WiFi is free and aren't going to question its security.

So, using HTTPS for "simple" sites is still beneficial since you will be making your content more secure for less technical users who might be on insecure networks.


Where does logging enter into it? To my understanding, serving traffic over HTTPS doesn't require you to do any additional logging (or any logging at all).

The point about static webpages would be a potentially good one in a world where ISPs and other Internet middlemen are honest and transparent actors, but this has so far proven not to be the case. I think it's in everyone's interest for your static content to reach my browser without ads or tracking mechanisms being injected by a third party.


What example would you have of an ISP or third party injecting an ad or tracker into the HTTP response? I've certainly seen the DNS query hijacking, and while HTTPS will encrypt the transmission, at the ISP level they already have your DNS query and src/dst IP address. Even with HTTPS, based on session data it wouldn't be difficult to label Netflix/YouTube traffic patterns.

Do you also have any reference to what exactly the collected data is useful for? I could see an ISP selling traffic data for a ZIP code or area, but they would already have that based on your billing address.


At the registrar level: the .tk registrar was (in)famous for injecting both ads and random JS into websites that were hosted on domains registered against it.

At the ISP level: I had a Spanish ISP attempt SSL stripping on me a few weeks ago.

> Do you also have any reference to what exactly the collected data is useful for? I could see an ISP selling traffic data for a zip or area but they would already have that based on your billing address.

The goal is always more (and more precise) data points. Being able to run JS on the same origin as the request is more valuable than just the rough GeoIP data the ISP already has.


> At the registrar level: the .tk registrar was (in)famous for injecting both ads and random JS into websites that were hosted on domains registered against it.

I did a Google search for domain hijacking, ad injection, and JavaScript, and while it does look like .tk domains had/have this issue, it doesn't necessarily point to the registrar. After all, they are offering free domain registration, which is going to get abused. It's also not surprising that their own website doesn't use HTTPS; then again, their mission statement isn't about security on the Internet.

> The goal is always more (and more precise) data points. Being able to run JS on the same origin as the request is more valuable than just the rough GeoIP data the ISP already has.

But isn't this what Google, Bing, Amazon, Alibaba, etc. already do when they fingerprint your device? They can't use just an IP address due to NAT, so they collect characteristics unique to your specific device. My question was more: if advertisers can already get down to the device level when you visit their site, what is the ISP's motivation if their data won't be as unique or specific? Or maybe a better question is: what organizations would be buying the "less" specific data that an ISP could get from your session data?


As in, if someone hacks in, there wouldn't be much of anything to grab. For any sort of commercial service I would use a regular computer or S3.


Unless I'm misunderstanding, I think that's kind of orthogonal to the question of encrypted transit. Plenty of services that expose only HTTPS don't encrypt at rest (and vice versa).


HTTPS in this case prevents your users from being hacked; it does nothing to prevent your server side from being hacked either way.


I agree that not everything needs to be encrypted, but unfortunately a lot of people who browse the web are concerned when the browser complains that something is not secure.

From the browser maker's side, how does a browser know whether something should or should not be secured? They have clearly taken a more aggressive approach to informing users about what is going on within the underlying protocol. While I do agree that not everything needs to be encrypted, I also agree that the user should know what is or is not happening under the hood.


I guess the warning should be “not encrypted”. Being encrypted doesn’t mean being secure (as there is a way to overcome the encryption).


Explain how this encryption can be “overcome”..?


Isn't HTTPS needed to stop injection by ISPs?


Customer migration and fiscal loss are what should happen.


But it won’t, so… https everywhere.


This is like saying police aren't needed because people should just move away from dangerous neighbourhoods.


If everything that didn't _need_ encryption weren't encrypted, then the use of encryption could be suspicious in itself. For my part, I'm glad that we've moved to a mostly working system of encrypting most things.


How can you reliably know that you're not being MITM without HSTS?


How can you know you're not with HSTS? The whole centralized security system is suspicious in terms of failure points.


The Web PKI is hierarchical, but it isn't particularly centralized (other than Let's Encrypt increasingly eating everyone else's lunch, which is probably a good thing).

But in terms of actual failure points: if you're initiating a connection over HTTPS, then the only way an attacker can MITM you is by convincing a CA to incorrectly issue them a certificate for that domain. That's why Chrome and Safari monitor certificate transparency logs, and why website operators should also generally monitor the logs (to look for evidence of misissuance on their own domains).
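
To make the "monitor the logs" part concrete, here is a rough sketch of the kind of thing a site operator can run against a CT search service such as crt.sh; the query parameters and JSON field names are assumptions about crt.sh's current interface rather than a documented API:

    # Rough CT-monitoring sketch: list certificates logged for a domain
    # via crt.sh and flag issuers you don't expect. The query parameters
    # and JSON field names are assumptions, not a stable API.
    import requests

    EXPECTED_ISSUERS = ("Let's Encrypt",)  # issuers you actually use

    def check_ct(domain):
        resp = requests.get("https://crt.sh/",
                            params={"q": domain, "output": "json"},
                            timeout=30)
        resp.raise_for_status()
        for entry in resp.json():
            issuer = entry.get("issuer_name", "")
            if not any(name in issuer for name in EXPECTED_ISSUERS):
                print("unexpected issuer:", issuer, entry.get("not_before"))

    check_ct("example.com")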


Not my problem if I am just serving a static page.

For a commercial service or if I was handling people's credentials I'd use something more robust.


It is your problem if you're interested in making sure people are actually getting your static page.


I'm sure there is some nuance to what someone's static site is serving, but someone's blog doesn't need to be HTTPS. If they are offering downloads, they can provide checksums, or you can verify their data through other sources or by contacting them out-of-band.

Anything that needs some form of validation from any site should be verifiable in multiple ways. Just because they have HTTPS doesn't mean the provided information or data is automatically correct.
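
For the downloads case, the verification step just amounts to comparing a digest published out-of-band against one computed locally; a minimal sketch (the file name and expected digest below are placeholders):

    # Minimal checksum verification sketch: compare a locally computed
    # SHA-256 digest against one published through a second channel.
    # The file name and expected digest are placeholders.
    import hashlib

    def sha256_of(path, chunk_size=65536):
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                h.update(chunk)
        return h.hexdigest()

    expected = '0123abcd...'  # digest obtained out-of-band
    if sha256_of('download.tar.gz') != expected:
        raise SystemExit('checksum mismatch: do not trust this file')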


The entire website can be rewritten by a MITM without https. Checksum or no checksum is not helpful.


The attack you are suggesting is not commensurate with the types of blogs and information that _need_ HTTPS.

If you are operating at a level where your personal blog can have all possible transit paths compromised by a third party, such that they are hosting some or all of the resources you provide for download, modifying them, and producing new checksums, then you have bigger problems than a blog that doesn't have HTTPS. You would also at that point consider using someone else's platform that will absorb, or actively be motivated to thwart, these exact scenarios. Not to say that always works out[1].

Additionally, your concern about checksums being compromised can easily be addressed by hosting packages on GitHub, GitLab, Bitbucket, Pastebin, or a Google Groups mailing list, none of which require your blog to have HTTPS. You don't have to manage getting your own certificate, pay for yearly renewals, or set up any 90-day Let's Encrypt auto-renewal bot.

Great grandma's cookbook recipes on a blog don't need HTTPS.

[1] https://www.zdnet.com/article/krebs-on-security-booted-off-a...


There are multiple TLS stacks for microcontrollers that are a decade old and run on devices much less performant than the ESP32.
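
MicroPython itself ships a TLS binding, so in principle the ESP32 could even terminate TLS on-device. A hedged sketch, assuming an mbedTLS-based MicroPython build whose wrap_socket accepts DER-encoded key/cert bytes (keyword arguments differ between ports and versions):

    # Hedged sketch of terminating TLS directly on an ESP32 under
    # MicroPython. Assumes an mbedTLS-based build whose wrap_socket
    # takes DER-encoded key/cert bytes; APIs vary across ports/versions.
    import socket
    import ssl

    with open('key.der', 'rb') as f:
        key = f.read()
    with open('cert.der', 'rb') as f:
        cert = f.read()

    s = socket.socket()
    s.bind(socket.getaddrinfo('0.0.0.0', 443)[0][-1])
    s.listen(1)
    conn, _ = s.accept()
    tls = ssl.wrap_socket(conn, server_side=True, key=key, cert=cert)
    tls.write(b'HTTP/1.0 200 OK\r\n\r\nhello over TLS\r\n')
    tls.close()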


Maybe it's more than security theater. With mandatory TLS, i.e., encryption plus third party-mediated authentication, the ability to publish a website comes under the control of third parties, so-called "Certificate Authorities" (CAs).

The source of this third party "authority" is unclear. If a CA uses DNS for verification, then the "authority" is ultimately ICANN. And we know that ICANN's "authority" is completely faked up. It has no legal basis. These pseudo regulatory bodies have no democratic process to ensure they represent www users paying for internet subscriptions. As it happens, these organisations generally answer only to so-called "tech" companies.

Effectively, CAs and browser vendors end up gatekeeping who can have a website on the public internet and who cannot. Not to mention who can be an "authority" on what is a trustworthy website and what is not (CA/Browser Forum).

The hoops that the browser vendors make a www user jump through in order to trust a website without the assistance of a third party are substantial and unreasonable. It seems that no www user can be expected to make their own decisions about what they trust and what they don't. The decision is pre-made, certificates are pre-installed, autonomy is pre-sacrificed, delegated to so-called "tech" companies.

Meanwhile these so-called "tech" companies, who are also the browser vendors, are commercial entities engaged in data collection for online advertising purposes. For more informed users, these actors are perhaps the greatest eavesdropping threat that they face. The largest and most influential of them has been sued for wiretapping www users on multiple occasions.

There are conflict of interest issues all over the place.

tl;dr Even if the contents of the transmission are not sensitive and perfectly suited to plain text, the system put in place by so-called "tech" companies, to benefit themselves at the expense of every www user's privacy, ensures that TLS must be used as a means of identifying what is an "acceptable" website and what is not. Absence of a certificate from a self-appointed, third party certificate authority means "not acceptable". Presence of certificates from first party authorities, i.e., ordinary www users, means "not acceptable".


Let's Encrypt is doing god's[1] work to work around the CA scam. And they've made it extremely easy to use. Literally just run one command and you have SSL on your website. It may take a few more commands if you're not using one of the more standard HTTP servers.

You have to have a domain to use it, obviously. Luckily there are others doing god's work, like DuckDNS, to work around the domain scam too.

https://letsencrypt.org/ https://www.duckdns.org/

[1] Obviously not referring to any theistic entity here, but more to something like the spirit of FSF or whathaveyou.



absolutely nothing prevents you from:

- adding another root CA, or

- bypassing HTTPS warnings, or

- taking 10 minutes to set up LetsEncrypt

and for obvious reasons, neither of the first two should be easy


Because sniffing? And tampering?



