I see a lot of hypocritical bullshit on Hacker News - a lot of it from websites which frustratingly lack HTTPS support - but this takes the cake.
This has got to be some of the most hypocritical bullshit I've ever read.
This all-text website, wherein the author espouses the benefits of "the open web" and rails against the corporate overlords who would destroy the web... this all-text website won't even load unless you enable third-party JavaScript access from FOUR external domains[1]. A fifth domain loads a font from - you guessed it - Google Fonts.
This has got to be satire.
[1](https://i.imgur.com/rKj4Xkv.png)
P.S. Nobody's mentioned encryption as a means against surveillance. You might be fine browsing websites openly over HTTP, but the poor soul in China might just appreciate transport-level encryption and encrypted DNS when they choose to read whatever innocuous pro-democracy blog their country has deemed unsuitable.
You're not wrong about the javascript, but that doesn't make you right about the https.
This enforcement is ridiculous. I've wondered if I had a chance at a lawsuit against goggle (and their lapdogs at the Mozilla foundation) for calling some of my domains "illegitimate" because I use self-signed certs.
Just because I didn't pay twate, or get my cert from someone else who paid twate, doesn't mean my site is "illegitimate". Luckily for me, I'm not trying to launch the next twitverse. I use those domains for my own work; they're not out there to be popular.
The article never delves into the world of small network-attached devices, some of which are in use long, long after their manufacturers have been purchased and shut down by the same asshole trying to make you throw them out because their encryption algorithms are too old, or, Oh No!!!, they use HTTP for their web interface.
In the end, HTTPS-only is a goggle power play propped up by fanboi-ism...
Alas, in $CURRENT_YEAR I do think that for non-technical people HTTPS cannot guarantee freedom from eavesdropping by governments. In fact, I personally believe that in many countries this is already happening in one way or another.
I don’t think people remember the days when ISPs/free wifi hotspots would inject their own ads into the content of pages served over http, or replace urls on YouTube content to show lower resolution copies of videos.
This German webhost does not include SSL in its bare-bones package without a surcharge. Thus my Kita (daycare) had no SSL certificate, and thus I could detect my ISP or some hacked router injecting ads (in Berlin) when I used that site.
There are plenty of local HTTP sites due to Germans' (reasonable) distrust of US companies and a propensity to roll their own.
> This is not an argument in favour of a search engine/browser penalizing HTTP servers.
Why isn't it? By navigating to a domain (or searching for it), the user is arguably making an intent statement ("I want content on $domain"). Ensuring that the user doesn't get spam, malware, etc. instead seems well within the scope of the browser.
It may be in the browser's interest when the majority of their userbase is non-tech. I'd rather my mom use such a browser than one which does not enforce it.
"Yeah, whatever, feel free to die fighting against it"
I think advocating against something, generating a discussion, moving the Overton window is a valid, almost necessary thing. (I admit sometimes they threaten the things I like (or me), but most of the time, it's OK.)
Perhaps "agreeing" was a bad choice of word, instead discussing, debating, or just letting me know your opinion about the matter.
I love the creativity and historical value of the open web, but I don’t get this anti-HTTPS stance. These days, it is easy for site administrators to upgrade to HTTPS without modifying their content.
Get a free certificate from Let’s Encrypt, set up redirects, and then add the Strict-Transport-Security header to further secure the site and reduce network traffic from the redirects.
Your only new external dependency there is Let’s Encrypt, which is run by a non-profit. And if you’re ever unhappy with them, you can always switch to another provider and most people won’t even notice.
> Get a free certificate from Let’s Encrypt, set up redirects, and then add the Strict-Transport-Security header to further secure the site and reduce network traffic from the redirects.
If you want to be inclusive of old clients that may not be able to do modern TLS, you can serve the same content on HTTP and HTTPS, but reference your favicon over HTTPS and serve an STS header with it. Clients that understand STS and can fetch the HTTPS favicon will internally redirect to HTTPS; clients that don't, or that can't reach your HTTPS endpoint for whatever reason, can continue over HTTP.
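For the curious, here is roughly what that looks like as a stdlib-only Python sketch (the ports, cert paths, and the choice to attach HSTS to every HTTPS response rather than just the favicon are all placeholders; a real deployment would use a proper web server):

```python
# Serve the same content on both ports, but only attach Strict-Transport-Security
# on the HTTPS side, so legacy clients that never complete a TLS handshake
# never see the policy and keep using plain HTTP.
import http.server
import ssl
import threading

class Handler(http.server.SimpleHTTPRequestHandler):
    secure = False  # overridden for the HTTPS server below

    def end_headers(self):
        if self.secure:
            # Modern clients cache this and internally upgrade future requests.
            self.send_header("Strict-Transport-Security", "max-age=31536000")
        super().end_headers()

class SecureHandler(Handler):
    secure = True

def serve_http():
    # Placeholder port; a real site would use 80.
    http.server.ThreadingHTTPServer(("", 8080), Handler).serve_forever()

def serve_https():
    # Placeholder port and cert paths; a real site would use 443.
    httpd = http.server.ThreadingHTTPServer(("", 8443), SecureHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("fullchain.pem", "privkey.pem")
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()

if __name__ == "__main__":
    threading.Thread(target=serve_http, daemon=True).start()
    serve_https()
```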
I'm sympathetic & agree far more than I disagree. But DNS doesn't fill me with joy. It's quite centralized, quite a huge organizational vulnerability.
If we had some alternate addressing schemes in the browser that could do trust, I'd be much happier. Like, could the Dat protocol be a secure origin? Or, if the goal really is just to secure users, maybe we need to let opportunistic encryption be something users can opt into as secure (even though it can be MITM'ed at the start).
Let's Encrypt has changed the game. It's great that https has so very very suddenly gone from frustrating & business class only to something even the casuals can easily do. But still, I'd love some less centralized systems for trust to be available, some visible known alternative paths demonstrating that there are diverse options at these lower transport/security layers of the network stack.
Let's Encrypt doesn't work with cPanel shared hosting. A lot of cPanel providers have free certs thanks to AutoSSL but many do not (such as Namecheap). A lot of the obscure web runs on shared hosting.
That's like saying "Linux doesn't support GUI programs." and when someone says yes it does, saying "No it does not, when the admin sets it up without X or Wayland installed."
The Let's Encrypt extension is not installed by default and needs to be explicitly installed, and I've never seen a cPanel provider that has it.
You can't use ACME to get a cert without said extension.
Perhaps it would be more accurate to say that, in practice, Let's Encrypt is generally not usable with cPanel providers.
In the context of this conversation, people were talking about how Let's Encrypt certs can easily be used to enable HTTPS, and my point is that that's rarely possible with cPanel hosts.
Will that “non-profit” be ok with my site raving about Russia’s involvement in Ukraine? What about a fan website for the Iranian Ayatollah? Or a website describing in detail Taiwan’s vulnerable targets just as China is about to attack them?
Chances are that the answer to all those questions is “no, the Western-run Let’s Encrypt will not be ok with all that” and if it were for the push to come to the shove (or whatever the expression is) they’ll be more than glad to invalidate that certificate (or worse, to compromise it without letting you know).
Granted, the same thing will most probably happen with one’s DNS provider and with his/her ISP, but why have another dependency if one can avoid it?
Let’s Encrypt was created in part to take humans out of the loop where possible. LE does not review your site and that is made obvious by the speed of getting a certificate and the automated renewals, etc. They aren’t aware of you unless there’s a technical error or you’ve made headline news.
That said, I agree with you that there is some level of risk here, even if it is small. That’s why your ability to switch is so important. Even if they became aware of you and they decided to revoke your certificate, you would just switch providers. Because of that, they have no meaningful power over you. If it ever becomes difficult to switch, then things are getting bad.
> or worse, to compromise it without letting you know
What does that mean? The certificate either passes validation or it doesn’t. They aren’t able to change the content on your site or anything like that.
> What does that mean? The certificate either passes validation or it doesn’t. They aren’t able to change the content on your site or anything like that.
They could say that your certificate passes validation while, in fact, said security has already been tampered with on your side, giving your website's visitors a false sense of security.
> They could say that your certificate passes validation while, in fact, said security has already been tampered with on your side, giving your website's visitors a false sense of security.
This isn't how the Web PKI works. In order to tamper with your site's traffic, Let's Encrypt (or another CA) would need to issue another certificate for your site with a key that they (rather than you) control. This would be detected via CT[1], which your browser (unless it's Firefox) is already using.
And note: by design, any CA in the trusted set can already do this, regardless of whether you use them or not. The things that are stopping them are that it's (1) not in their interest to do so, (2) it's detectable due to CT, and (3) would result in their root being hell-banned by the browsers.
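To make that concrete, here is a rough sketch of the kind of monitoring CT enables, using the public crt.sh search frontend (the endpoint and JSON field names are an assumption of this sketch and may change; a real deployment would use a proper CT monitor):

```python
# Ask a public CT search frontend which certificates have been logged for a
# domain, so an issuance you didn't request stands out.
import json
import urllib.parse
import urllib.request

def logged_certs(domain: str):
    url = "https://crt.sh/?q=" + urllib.parse.quote(domain) + "&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for entry in logged_certs("example.com"):  # placeholder domain
        # Flag anything unexpected: an unfamiliar issuer or hostname.
        print(entry["not_before"], entry["issuer_name"], entry["name_value"])
```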
I was thinking of actions taken by state actors in the context of the current geopolitical climate. As such, (1) can be easily brushed off if a national security letter from a three-letter agency is in the mailbox; (2) is debatable, but looking at that "transparency list" I mostly see Western companies, companies which I suppose are also subject to that kind of correspondence; as for (3), afaik two of the biggest browsers now on the market are controlled by companies actively working with the US defence apparatus, including Google [1].
Again: this has nothing to do with LE; any CA can issue any certificate at any time by design, and it's the responsibility of the CT scheme to detect misissuances and malevolent behavior.
A NSL sent to the operators of a CT log cannot stop the log operators from logging inauthentic certificates: each CT log is a timely and append-only signed ledger of all certificates issued, meaning that any deviation between logs would also be detected and treated as a sign of mis-issuance or compromise by the larger Web PKI. What's more, certificates need to be logged as a matter of validity: an order compelling log operators to refuse a certificate would effectively ensure that the certificate never becomes valid. That's what makes CT nice: anybody who wants to create a malicious certificate needs to do so in a way that's globally detectable.
It can be fun to play mind games about shadowy agencies, but that's not really how these things work: if you're of sufficient interest to a country's intelligence service, then they're going to spearphish you, exploit your phone, steal your TLS session keys, your cookies, or do any number of other much less visible things to get the access they want. And, if they're competent, they will get it[1].
IIRC they either outright refused to provide a certificate to KiwiFarms, or have close enough ties to ISPs who cut them off that applying seemed like a non-starter.
I would go ahead and hazard that “major DV CA refuses to validate controversial site” would be major news.
There’s no evidence that LE has ever done this, much less ever considered it, much less even has the basic technical ability to manually intervene in the DV process.
Hardcoding a check for whether the domain is a particular fixed string is trivial. Any competent technical organisation can do that in less than a day. Yes it wouldn't be principled or elegant, but it would work.
Currently LE looks like neutral infrastructure that would never do such a thing. But so did CloudFlare, right up until the point where they suddenly weren't.
I don’t think it makes sense to compare a for-profit content delivery product with a non-profit CA. They fill different purposes, and exist under different regulatory and incentive models. They don’t do remotely the same things, and don’t even have the same basic “profile” (one being an active network participant, and the other being a static participant in the PKI).
Again: this entire thread exists because you (baselessly and incorrectly) speculated that LE has censored people. We’ve now moved on to baselessly speculating about what LE could do, which is approximately as useful. Which is to say: not.
I made it clear from the start that a) this was a partial recollection and b) it was entirely possible that it hadn't got to the point of physically attempting to issue a certificate. It was definitely a topic that was discussed on KF, and a part that I do remember clearly is that LE's terms permit them to choose to refuse service to entities based on the content those entities are serving. And it's very obvious that it would be technically possible for them to do so, contrary to what you ("baselessly and incorrectly", if we're going there) suggested in your last reply.
If LE wanted to make a clear commitment to content-neutral operation it could do so. It has taken a different tack in its ToS, even if it hasn't been pushed to the point of applying that in practice yet.
CloudFlare never looked like neutral infra, since they control the traffic, so they need to (from a legal standpoint at least) block certain things. There have always been things Cloudflare would not host.
CAs do not control the traffic; they just say whether a key is associated with a domain.
One point the author neglects is that merely browsing an informational site is sensitive depending on social and political context. Examples:
* Woman views information on abortions or reproductive health care in a jurisdiction that persecutes over that
* Queer person views information pertaining to LGBT services/issues in a jurisdiction or LAN that would punish them
* Person views information on protests
* Person reads the news about a crime[1]
TLS does not solve this problem entirely (SNI/DNS leaks, traffic shape analysis) but it helps.
I would not expect search engines/browsers to try to determine if a site publishes 'controversial' content especially since that depends on who you ask.
As you mentioned, TLS doesn’t solve this problem entirely since the ISP can still see what domains you visit.
For those use cases, people should really be using Tor instead. In that case, if the website is public, HTTPS doesn’t really serve a purpose since the exit node isn’t going to get any private information.
Perhaps HTTPS does help if the sensitive information is hidden in a wide-ranging domain (e.g. Reddit, Twitter) and because support for it is so widespread. In those cases, HTTPS is likely already supported.
However, if the politically sensitive but public information is on a small domain, such as a personal blog or informational topical site, HTTPS serves almost no purpose there since it’s obvious what your activities are from the domain. In that case, one should definitely rely on Tor.
HTTPS may not be proprietary, but it comes with many of the same drawbacks: central control, lack of transparency, more difficult to debug, and less accessible in many circumstances.
You rely on one of several central authorities to grant a user access to your website (or be faced with a scary message about your site being insecure).
Just because you can choose from one of several central authorities doesn't change the fact that you are reliant on them to authorize the user's ability to establish a network connection with you.
Since we're already complaining about "New thing Bad, Old thing Better," why oh why does this blog require third-party JS from three separate domains (s3.amazonaws.com, fargo.io, and scripting.com) just to render a page of static text with minimal styling? The only user-visible thing I can see that uses JS is the section folding thing, which can be done with no JS using the <details> tag on modern-ish browsers, or a tiny chunk of inline JS if you want to support older browsers.
The point they seem to omit is that TLS roughly doubles the cost to serve static content, assuming no hardware offloads are used. Roughly 40% of our CPU time on the Netflix CDN (on nodes w/o TLS offload NICs) is spent doing TLS crypto.
I'm not saying I'm against TLS, far from it. But it does carry costs, and I'm surprised somebody so passionately against TLS would omit this argument.
Probably because for static text sites the compute cost of TLS is negligible (and we're largely talking about either small personal blogs or the long tail of historic content, so traffic is already assumed to be minimal). In your case, TLS is rather unfortunate since you can just include a crypto hash (or merkle tree or whatever) of the payload in your HTTPS control connection, then just load your video payloads over HTTP and check for a mismatch client-side. Basically, Google is saying they don't trust "you" to get it right.
The bigger problem with TLS is that it adds so much brittleness and complexity (do you know how many times I've broken ACME/LE python dependencies by accident?) and increases the time-to-first-byte for so many sites, especially if you're not getting your hands dirty with nginx and OpenSSL internals.
(Even if you are tuning all the right knobs, one thing we didn't realize was that at some point using an EV SSL certificate (before Chrome removed the UI benefits of doing so) began causing a massive increase in TTFB that there doesn't (didn't?) seem to be any way to avoid, so we ended up moving to regular TLS/SSL certs w/ Let's Encrypt not for the free cost of entry but for the reduced initial connection time. I can't remember the exact details now as it's been some years, but I think with EV SSL there's a secondary cert or revocation lookup to a remote url even if you have correctly configured ocsp stapling.)
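For what it's worth, the "hash over HTTPS, bulk bytes over HTTP" idea from the first paragraph is only a few lines on the client side; a minimal sketch (the URL and digest are placeholders, and a real player would verify per segment):

```python
# The expected SHA-256 is obtained over an authenticated channel (here just a
# hard-coded placeholder), the bulk payload travels over plain HTTP, and the
# client rejects anything that doesn't match.
import hashlib
import urllib.request

def fetch_verified(url: str, expected_sha256_hex: str) -> bytes:
    with urllib.request.urlopen(url) as resp:   # plain-HTTP bulk transfer
        payload = resp.read()
    digest = hashlib.sha256(payload).hexdigest()
    if digest != expected_sha256_hex:
        raise ValueError(f"hash mismatch: got {digest}")
    return payload

# Usage (placeholder URL and digest):
# data = fetch_verified("http://cdn.example.com/segment-001.bin", "ab12...")
```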
But would that be relevant? In other words, are CDN nodes CPU-limited? Or, does the bulk encryption absorb otherwise unused CPU time? I would have imagined that most Netflix traffic is to clients that Netflix itself controls, like Roku apps or other first-party apps and not web browsers, so Netflix could choose against encryption for most streams.
Yes, without TLS offload, we're CPU limited. For example, on our 400GbE (4x100GbE) servers, we are CPU (really memory bandwidth) limited at ~240Gb/s without NIC TLS offload. Using Mellanox CX6-DX NICs to move most of crypto onto the NIC increases the effective limit to ~375Gb/s.
We encrypt everything we can with TLS to protect the privacy of our members.
Thanks. I've read all the Netflix posts about the difficulty of moving 400gbps through a NUMA system. I imagined that if it was load-store limited then load-encrypt-store could potentially be "free".
The happy path is storage ===>> RAM ==>> NIC, without CPU access. So there is basically a single memory write and a single memory read per byte. That's the non-TLS (and NIC TLS offload) path.
Since TLS is per-connection and the page-cache is per-file, any software crypto needs to encrypt from a common source in the page cache to a per-client buffer (eg, it cannot happen in-place). So this introduces an extra memory read from the page cache and write to the per-client socket buffer. We use the non-temporal version of ISA-L, so as to write full cache lines into the socket buffer, and not pay for an extra memory read as part of a read-modify-write update of a partial cacheline. So software crypto basically doubles the memory bandwidth requirements over no TLS (or NIC TLS offload).
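Back-of-the-envelope, with made-up round numbers rather than Netflix's actual figures, that doubling looks like this:

```python
# Illustrative arithmetic only: without TLS (or with NIC offload) each served
# byte costs roughly one memory read plus one memory write; software crypto
# adds an extra read from the page cache and a write into the per-connection
# buffer, doubling memory traffic.
served_gbps = 240                      # hypothetical serving rate
payload_gbs = served_gbps / 8          # GB/s of payload

no_tls_mem = payload_gbs * 2           # 1 read + 1 write per byte
sw_crypto_mem = payload_gbs * 4        # extra read + write for encryption

print(f"~{no_tls_mem:.0f} GB/s of memory traffic without software crypto")
print(f"~{sw_crypto_mem:.0f} GB/s with software crypto (about 2x)")
```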
So you're proposing that Netflix have two completely separate pipelines, one for browser traffic and the other for first-party traffic? With separate CDNs, separate delivery mechanisms, separate client-side algorithms, etc?
While I do think this is an important conversation to have, and part of an even more important conversation about how much control Google/Chrome has over the web, I grudgingly disagree. I think signed data is table stakes for any communication over the internet, and encryption in most cases.
Let's Encrypt/ACME make encryption accessible enough in theory. That said, we're not there yet. Domain names still need to be easier to buy and use, and more software should use ACME by default (like Caddy does).
I had a different (and imho, more pressing) argument for rejecting the "all-or-nothing" approach Google has taken towards HTTPS, namely that it actually punishes basically all non-enterprise intranet/lan devices using TLS/SSL because it makes them harder to access than if they only used HTTP instead [0].
The question that Google seemed to ignore (and still does) is "how do you expect a newly unboxed network device to ship with a non-self-signed SSL certificate?" The workaround some manufacturers have come up with is more repulsive than the original (There was one Chinese manufacturer that tried to get a signed certificate for a domain, then mapped that domain to 192.168.0.1 and shipped the private ssl cert w/ every router iirc. TP-Link has their own weird workaround where they give you a domain to log in to the router instead of the IP, and their router MITMs the connection and redirects to HTTP.)
Google forcing everyone to HTTPS is one thing; Google forcing everyone to HTTPS and universally treating self-signed HTTPS connections as scams is another.
The specific thing that Chinese manufacturer was doing wrong was shipping everybody the same private key, which means Mallory gets given Alice's key.
This trick but for individual devices is fine, even recommended. But of course if the keys are each different that costs more than shipping devices which are identical...
I've implemented https for IoT devices in a similar way that plex did it. Basically every device got a cert for *.$MAC_ADDRESS.myiot.com and then the DNS for myiot.com would essentially bounce back 192_168_10_10.$MAC_ADDRESS.myiot.com to an A record for 192.168.10.10
You needed to know the IP for the device still (in our case we still had a central service keeping track of it), but the principle works. For cheap IoT I guess the cost of the certs can be too large though, we couldn't use letsencrypt due to limits on the number of certs per domain.
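A sketch of the name-to-address bounce described above (myiot.com and the label layout are the placeholder names from this thread, not a real service):

```python
# The public nameserver for the IoT domain simply reflects the IP encoded in
# the first label back as an A record, so each device can present a valid cert
# for <ip-with-underscores>.<device-id>.myiot.com while living on RFC 1918 space.
import ipaddress

def answer_for(qname: str, zone: str = "myiot.com"):
    qname = qname.rstrip(".")
    if not qname.endswith("." + zone):
        return None
    labels = qname[: -len(zone) - 1].split(".")
    # Expect "<a>_<b>_<c>_<d>.<device-id>"
    if len(labels) != 2:
        return None
    try:
        ip = ipaddress.IPv4Address(labels[0].replace("_", "."))
    except ipaddress.AddressValueError:
        return None
    return str(ip)

assert answer_for("192_168_10_10.aabbccddeeff.myiot.com") == "192.168.10.10"
```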
In theory it would be neat to have a DHCP option for an ACME server, which would allow automatically provisioning (short-lived) certs to LAN hosts. Basically all the building blocks are there.
It isn't HTTPS that's the issue so much as centralisation under CAs. Self signed certificates should not throw up a warning on sites that don't deal with sensitive data.
> Self signed certificates should not throw up a warning on sites that don't deal with sensitive data.
I would have gone with a slightly different suggestion: Self signed certificates should not throw up any warnings that wouldn't appear for the same site visited over insecure HTTP, since the latter is strictly more dangerous.
> Self signed certificates should not throw up any warnings that wouldn't appear for the same site visited over insecure HTTP, since the latter is strictly more dangerous.
For an unknown server, they're strictly equivalent: trusting a self-signed certificate is the canonical "private chat with the devil" example. You could argue that it prevents some other unknown adversary from reading your traffic on the line, but that's a weird threat model (why would an adversary even bother, when you'll accept any certificate they give you?).
More fundamentally: a policy among browsers to trust self-signed certs if the only other option is HTTP is way easier for an adversary to MiTM: instead of attacking TLS itself, they can redirect your entire connection to an inauthentic server with a self-signed certificate. Not throwing up any warnings for that case would mean that browsers would automatically trust the inauthentic server.
> a policy among browsers to trust self-signed certs if the only other option is HTTP
I'm not arguing to remove warnings when going to a site with a self-signed cert. I'm arguing to add warnings to insecure HTTP until it's at least as scary as going to a site with a self-signed cert is today.
DANE briefly offered an alternative, but then it turned out that very few DNS hosts are competent enough to enable DNSSEC, so it died an early death.
Self signed certificates are completely pointless, and they should never be trusted without manual confirmation. You have no way to verify who's generating these self signed certificates; you may as well use HTTP if you're going to accept self signed stuff.
Self-signed certificates aren't quite as bad as insecure HTTP, since you at least have the ability to manually verify them out-of-band, and even if you don't do that, you're still at least safe from passive taps.
HTTPS is not only about "asking user for data"
It is also:
- preventing third party from spying on you
- preventing third party from altering the content presented to the user
Yes, yes, 20-year-old websites do not have HTTPS.
Is running 20-year-old software on the internet a good idea? Of course not.
What is the point of this post?
I read the "why Google really wants you to use HTTPS" part and failed to find any sense in it.
The privacy protection is a benefit, but it is balanced against accessibility, which is often more important, such as when trying to access important information on an older device.
Usually supporting HTTPS is combined with requiring relatively recent versions of TLS and cipher suites, for security reasons. I've been in a position where I needed to use a computer from ~2007 or so briefly and found that I couldn't access a huge fraction of websites, despite the fact that the browser "supported" HTTPS.
Firefox 27 runs on systems as old as Windows XP SP2, and it supports TLS 1.2 and the TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 and TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 cipher suites, which are still considered up-to-date and don't have any known vulnerabilities. If a computer from 2007 couldn't reach modern TLS sites, then it was just behind on updates that were available to it.
Right - I did end up doing basically this. In this case, though, it was a massive pain to figure out what version of a web browser (a) supported enough cipher suites, and (b) would run on the system in question. And then I had to figure out a way to get the browser on the system to begin with; all the download sites I tried required cipher suites that were too recent!
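A small probe along the lines of that troubleshooting, reporting which TLS versions a server will negotiate (assumes Python 3.7+; note that a modern local OpenSSL may itself refuse to offer the oldest versions, so "rejected" can come from either side):

```python
# Try a single TLS protocol version per connection and report whether the
# handshake succeeds. The host is a placeholder.
import socket
import ssl

def probe(host: str, port: int = 443):
    for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                    ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
        ctx = ssl.create_default_context()
        ctx.minimum_version = version
        ctx.maximum_version = version
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    print(f"{version.name}: accepted ({tls.cipher()[0]})")
        except (ssl.SSLError, OSError) as exc:
            print(f"{version.name}: rejected ({exc.__class__.__name__})")

if __name__ == "__main__":
    probe("example.com")
```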
If you have Internet access, NTP trivially gives you this.
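"Trivially" in practice: a bare-bones SNTP query needs only the standard library (pool.ntp.org is a placeholder; a real system should run a proper NTP daemon, but this shows how little is needed to get a usable clock for certificate validation):

```python
# Minimal SNTP client: send a 48-byte request and read the transmit timestamp
# from the reply.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "pool.ntp.org") -> float:
    packet = b"\x1b" + 47 * b"\0"          # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    transmit = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds
    return transmit - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    print("offset vs local clock:", sntp_time() - time.time(), "seconds")
```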
> A recent device would be a requirement for access (not everyone can afford a new one).
That's literally the point I addressed in the comment you replied to. Computers that are over 20 years old are still capable of connecting to websites using modern TLS.
> Site admin keeping up with certificate registration would be a requirement.
No, ever since ACME came out, certificate renewal can trivially be fully automated, with zero admin work required when one is about to expire.
> Approval from the centralized certificate authority would be a requirement.
Which is trivially granted as long as you actually own the domain you're trying to get the certificate for.
> Server's self domain name matching accessed domain name would be a requirement.
You can get multiple certificates and have the server use SNI to send every client the right one, or get one certificate with a bunch of domain names.
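A hedged sketch of that SNI dispatch using the standard library (the cert paths and hostnames are placeholders; in practice your web server does this for you):

```python
# Select a certificate per requested hostname via ssl.SSLContext.sni_callback
# (Python 3.7+).
import ssl

# One context per site, each loaded with that site's own cert + key.
contexts = {}
for name in ("example.org", "example.net"):          # placeholder hostnames
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(f"{name}.fullchain.pem", f"{name}.key.pem")
    contexts[name] = ctx

def pick_cert(ssl_socket, server_name, default_context):
    # Called during the handshake with the SNI hostname the client asked for.
    if server_name in contexts:
        ssl_socket.context = contexts[server_name]
    # Returning None lets the handshake continue with whichever context is set.
    return None

default = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default.load_cert_chain("default.fullchain.pem", "default.key.pem")
default.sni_callback = pick_cert
# default.wrap_socket(...) on an accepted connection now serves the
# certificate matching the requested hostname.
```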
> These are all real scenarios where real people can be denied access to information that is crucial to them, up to the point of survival.
This is like saying that it's dangerous to go outside because there have been real cases of people being killed by meteorites.
> If you have Internet access, NTP trivially gives you this.
This is not always true, and the user is not always at liberty to change the clock settings on their device.
> That's literally the point I addressed in the comment you replied to. Computers that are over 20 years old are still capable of connecting to websites using modern TLS.
This is just not true. I do a lot of testing, and there are many devices as young as 5 years that cannot access some websites due to TLS incompatibilities. I have a very nice device that's 10 years old which I use daily that experiences this on a regular basis.
> No, ever since ACME came out, certificate renewal can trivially be fully automated, with zero admin work required when one is about to expire.
Automation breaks, and certificates expire. I encounter websites with broken certificates almost daily.
> Which is trivially granted as long as you actually own the domain you're trying to get the certificate for.
It is not trivial at all. It requires a lot of administrative work, and many people around the world do not have access to this process at all.
> This is like saying that it's dangerous to go outside because there have been real cases of people being killed by meteorites.
No, it is like saying there are people out there who want to access information on the device they have access to, and we should enable them to access that information as much as possible.
> there are many devices as young as 5 years that cannot access some websites due to TLS incompatibilities. I have a very nice device that's 10 years old which I use daily that experiences this on a regular basis.
Can you name the specific models?
> Automation breaks, and certificates expire. I encounter websites with broken certificates almost daily.
How many times have you seen that where the expired certificate came from Let's Encrypt? I'm guessing never, and that when you've seen it, it's always been from legacy ones without any automation in use.
> It is not trivial at all. It requires a lot of administrative work, and many people around the world do not have access to this process at all.
It only takes a few minutes to set up. Who has the required resources to own a domain but not to use Let's Encrypt?
> Lots of networking equipment works over HTTP only, still.
MikroTik has supported HTTPS for a very, very long time, though it comes disabled by default, and even if it weren't it wouldn't work out of the box, because it requires the user to create/import a cert to use with it.
At least the cert manager is quite intricate and they have a video (assuming the device has a valid WAN IP) on how to set it up with letsencrypt.
I understand your skepticism, but I'm coming from years of actual testing on many real-world devices that people use everyday and have no agency in upgrading for various reasons.
As I mentioned in other comments, there are many possible areas where HTTPS can present a barrier to a user's access to your resource. Some of these I listed in another comment, which I copy below:
---
Only a few of the barriers presented by HTTPS:
Clock sync would be a requirement for access.
A recent device would be a requirement for access (not everyone can afford a new one).
Site admin keeping up with certificate registration would be a requirement.
Approval from the centralized certificate authority would be a requirement.
Server's self domain name matching accessed domain name would be a requirement.
These are all real scenarios where real people can be denied access to information that is crucial to them, up to the point of survival.
Just a few of the reasons why all my websites still allow HTTP.
Did you read the part about how HTTPS doesn't protect us from the browser vendor, i.e. Google?
They tell us to worry about man-in-the-middle attacks that might modify content, but fail to mention that they can do it in the browser, even if you use a "secure" protocol. They are the one entity you must trust above all. No way around it.
Nonsense, all sites need HTTPS. Any HTTP connection can have its contents rewritten by a party in the middle, so if you are using HTTP and not HTTPS then you are 100% OK with the recipient getting altered or incorrect data. If that's the case, just delete your content and serve a 404 instead, since you've already stated you're OK with this.
Why do you think "handling user data" is the only reason to employ encryption? What if the website is serving up static content that seems completely fine to you or I but the user is in a legal situation where they would be in trouble for reading it? What if the user is somebody the local police are trying to frame, so the request was normal, the response was normal, but the response was replaced with something illegal so the police can arrest for possession? Why should every router on the path be able to see what I'm reading, what I'm reading web pages about medical conditions? Why do you insist that everyone should collectively make technical decisions that serve to enable this kind of thing?
This isn't hypothetical. A country once used their position of owning the local telecom to inject javascript into web pages that would DDoS another website they didn't like. See https://en.wikipedia.org/wiki/Great_Cannon .
The web moved from HTTP-over-TCP to HTTP-over-TLS because it fixes real problems. One of the common mistakes people make when thinking about security is trying to imagine the attacks themselves and then only defending against those. Your imagination is not the limit, you're up against the collective creativity of everyone else. Encrypt your stuff and remove a whole category of possible problems.
Do LAN sites need HTTPS? I.e. wifi router, printer, fax control panel?
I am pro-HTTP for these use cases for as long as browsers have more serious warnings against self signed certs, old SSL/TLS versions and weak algo choices than the warnings for HTTP.
Hardware deserves to be supported as long as it physically works rather than as long as its embedded TLS stays supported
> Do LAN sites need HTTPS? I.e. wifi router, printer, fax control panel?
Of course. All traffic needs to be encrypted. Why must these not be encrypted?
> I am pro-HTTP for these use cases for as long as browsers have more serious warnings against self signed certs, old SSL/TLS versions and weak algo choices than the warnings for HTTP.
IMO, HTTP-over-TCP support should have been removed from all browsers years ago, solving your self-cert vs HTTP problem.
Good solutions for the LAN use case have been rather weak so far. The two categories of solutions are a) a CA for your LAN. People are OK using this for things like dev servers on localhost, but what you want is to have router that issues DHCP leases also issue IP certs for local devices. b) TLS-TOFU for self-certs. We can even do this over the public web too, but we can also treat self-signed certs on LAN vs. public web differently and report when the identity behind the IP has changed. (Generally a bad idea over the public web because we want to allow key rotation, but that might be fixable by adding multiple keys or some hack like publishing the next key into .well-known/next-key or something -- I'm riffing here.)
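Riffing a bit further, client-side TLS-TOFU is not much code; a minimal sketch that pins the certificate fingerprint on first contact and complains if it later changes (the pin-store path and host are placeholders, and key rotation, as noted above, is unsolved here):

```python
# Trust-on-first-use: record the server certificate's SHA-256 fingerprint on
# first contact, compare on later contacts.
import hashlib
import json
import pathlib
import socket
import ssl

PIN_FILE = pathlib.Path("pins.json")  # placeholder pin store

def fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # TOFU: we pin instead of PKI-verify
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check_tofu(host: str) -> str:
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    fp = fingerprint(host)
    if host not in pins:                 # first use: trust and remember
        pins[host] = fp
        PIN_FILE.write_text(json.dumps(pins, indent=2))
        return "pinned on first use"
    return "match" if pins[host] == fp else "IDENTITY CHANGED - investigate"

if __name__ == "__main__":
    print(check_tofu("192.168.1.1"))     # e.g. a router's self-signed cert
```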
> Hardware deserves to be supported as long as it physically works rather than as long as its embedded TLS stays supported
Why? Or put differently, if the manufacturer has stopped providing firmware updates, they've unsupported it no matter what the rest of the world has done. Why should the rest of the world compromise security by design in order to only partially not-break a hardware device that the manufacturer doesn't support? In this one exact scenario? Maybe in the future we can provide a small ethernet-to-ethernet (or wifi-to-wifi) device that wraps a single old TLS device and provides a modern interface to it, if support is that important.
The argument I have in favour of not requiring encryption is best summarised by what you have outlined: it would require a massive industry shift, towards no obvious solution, with no backwards compatibility, that doesn't actually solve anything.
Until I hear a convincing story of how I will do the usual lan tasks of
- connect to my router to fiddle with settings
- see the management interface on my printer
- join my parents' network and do things for them without having to explicitly trust a CA
- And most importantly, see consumers who don't understand any of those things be able to do these things all out of the box
I can't see how it is a viable expectation
I totally love encryption. It's great. But seriously: what domain will I visit to fix my PPPoE settings? Who's going to control that domain, and who's going to renew the certs for it? Because if the answer is that we will expect consumers and SMEs to trust a certificate authority created by a factory with its own crappy security practices, I'm not sure how that's an improvement.
Otherwise we are breaking things to "fix" something that doesn't "fix" anything. If someone is MITMing my LAN, it doesn't matter whether my router uses TLS or not; it's compromised.
It's not just for sites which handle user data. MITM attacks are real. Non-tech people don't know what VPN, HTTP or HTTPS are.
Here's an example: a person goes to, say, Reddit, and clicks on an HTTP link about a new recipe. It's possible for a link to a malicious site to be injected into the page. These attacks were pretty common in developing countries.
No, what you stated was "take money out of politics", which is going to be far more difficult than the technical issues we're presented with.
Why do I say that? Because any law to ban ISPs from doing this will be met with millions in political investment by said ISPs to prevent it. Since the average person doesn't care about this issue, it's a losing proposition.
Could you elaborate? I thought upgrading to HTTPS wouldn't involve redoing any sites, no changes to the HTML/CSS/JS/server side code, except for the TLS configuration and maybe replacing links from "http:" to "https:" but that's optional. I'd like to hear how it requires redoing a website?
Certificate transparency, and the fact that Chrome only trusts leaf certificates that are in multiple log servers, stop governments or rogue CAs from doing an SSL MITM (or at least from doing so quietly).
Well, I would love to trust that, but if you go to the main police station in Stockholm, Sweden and use their WiFi, they proudly show you how they can read and modify your HTTPS traffic, without your device so much as flinching, using their root cert.
Upvoted simply because this claim is so astonishing. If you can back this up I implore you to provide as much technical information and proof as possible, because the PKI managers for the major browser vendors would certainly revoke certificates for this.
This person has claimed that video games like starcraft won't be playable in the future due to power constraints and that 10 gigabit networking is impossible due to using too much power (no explanation for why it already exists). They have also claimed that 2012 atom CPUs are the pinnacle of CPU design so I wouldn't invest much into thinking you will get a real explanation.
I already told you: to have 24+ hours of power backup for something that can saturate 10Gb/s, you need more than ~8x 15kg lead-acid batteries = not practical.
A 25W 14nm Atom CPU is the best middle ground between an 80W Xeon and a 15W Jetson Nano. It is what you need to load-balance a saturated 1Gb/s link = the final decentralized node.
StarCraft might be playable, StarCraft 2 however will be a privilege.
Nothing you said here makes any sense at all. People are out there using 10 gigabit networking. There are benchmarks that show how slow the Atom CPUs are. There is no power shortage. Why do you believe these ideas?
To be clear here, you are predicting the future and in that future there is for some reason not enough power to play a game from 12 years ago, even though it can be played on a 65W AMD chip. You realize that refrigerators, space heaters and air conditioners take about 10x-30x the amount of power you are saying won't be available in the future right?
The burden of proof is on the people who make the (ridiculous) claims.
At that price you are paying almost $1 per day to run your 65W computer.
That's $30 per month just to keep the computer gaming during daytime.
There is no ceiling on how high prices or inflation can go.
Other gear is more important than entertainment; people will not consume games when they are hungry or cold (though computers help with that somewhat).
I'm going to go out on a limb here and say that nuclear is heating the planet more than CO2: the efficiency of nuclear is 35%, so for every kWh of electricity you heat the planet with 3 kWh.
That's 3 kWh that the planet has to radiate into space that did not get added at any time before.
We are slowly boiling the planet, like frogs in a pot.
Eventually you wont have the energy, money or time to play anything.
> That's $30 per month just to keep the computer gaming during daytime.
No it is not. 65W is the max amount of power. 10 hours a day every day at max is still a little over $4 per month. TVs, refrigerators, electric heating and air conditioning are all 20x the impact of an average computer.
> I'm going to go out on a limb here and say that nuclear is heating the planet more than CO2
You should not go out on a limb for anything. You should read and stop guessing.
Where are you getting your ideas? Source something, link anything that supports what you are saying.
No, there are other free CAs too, and ACME clients can let you enable automatic fallback, so you literally don't have to do anything to fix it: https://github.com/caddyserver/caddy/pull/3862
It's not hard to convert to HTTPS, and it doesn't cost a lot.
I have to disagree. Certificates are still a pain in the butt.
The reason is that the only free and "easy" solution is Let's Encrypt. And they make it a pain in the butt.
You can get a certificate for domain.com by putting a file into domain.com/.well-known/ - so far so good.
But you cannot get a wildcard certificate for domain.com this way.
For no reason as far as I know. Who owns domain.com but not bla.domain.com?
And I don't want all my subdomains listed in a public directory. So only a wildcard cert makes sense for me.
Which means fiddling with DNS entries every 3 months. Or setting up some complex, brittle scheme to automatically update those every 3 months. Which then depends on the API of some name server provider. Which will work differently for every provider.
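For reference, the DNS-01 record you have to fiddle with boils down to one TXT value per renewal, computed as in RFC 8555 (the token and account-key thumbprint are placeholders your ACME client gives you; pushing the record to your DNS provider's API is the part that differs everywhere and is left out here):

```python
# Compute the value of the _acme-challenge TXT record for a DNS-01 challenge:
# base64url(SHA-256(token "." account-key-thumbprint)), unpadded.
import base64
import hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    key_authorization = f"{token}.{account_key_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The record to publish (placeholder values):
#   _acme-challenge.example.com.  300  IN  TXT  "<dns01_txt_value(...)>"
```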
There's general consensus in the Web PKI world that wildcards in certificates were a mistake. RFC 6125 has an entire section dedicated to security considerations[1].
> For no reason as far as I know. Who owns domain.com but not bla.domain.com?
Many, many services[2]. DNS is fundamentally hierarchical, but not in a way that implies authority or control: the owners/operators of `com.` should not be able to obtain a wildcard certificate for `*.com.` and thereby read the traffic of `google.com`, `facebook.com`, etc.
While I don’t necessarily agree with all the points in this article (it takes only a little more effort to make a site https rather than http), I do think comparing the complete deprecation of http to a wide-scale book burning is on the nose.
As the author said, many old http sites that have been on the web for more than a decade are rarely maintained, if at all. Removing access to these sites in totality would definitely take a large percentage of them offline for good. This should be avoided.
I've enabled HTTPS-only mode in my browsers to prevent data leaks. You get a big, scary warning if you open an HTTP website, but you can still continue.
I'm pretty sure this is the direction websites are going. HTTP isn't going away, but users will start getting more explicit warnings in the coming years.
If I went to the public library and the fire extinguishers didn't work, the fire marshal would put a lock on the door. If the shelves fell over and smashed the patrons, they'd be sued out of existence. Just because something is 'public' and/or a 'library' doesn't mean there isn't a necessity of upkeep to ensure it's not a danger to others.
> I think of the internet more like a public highway and not a library.
I think I asked a reasonable question.
I would argue that locking the doors on a library for non-working fire extinguishers is an overreaction and fairly authoritarian. If there was a sign on the entrance that told library-goers that there are no currently working fire extinguishers and to use their own judgment on whether or not to enter the building, what’s the harm in that? The vast majority would walk out unscathed with a book in their hand.
The internet is not a passive medium. The information highway is actually a decent description: much like on the road, if people don't maintain their cars they become a danger, and the internet is the same.
SPF was a bandaid on a flesh wound. SPF+DKIM+reverse PTR(+DMARC unless you like discovering your email hasn't been arriving for months) are almost good enough (except that DKIM doesn't sign the things you expect it to sign).
Google and Microsoft still killed email with their unreliable spam filters, but SPF alone was never enough.
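If you want to see what a domain actually publishes for those mechanisms, a quick sketch (assumes dnspython >= 2.0 and a placeholder domain; DKIM is omitted because it needs the per-sender selector):

```python
# SPF and DMARC policies live in DNS TXT records; fetch and filter them.
import dns.resolver  # pip install dnspython

def txt_records(name: str):
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(r.strings).decode("utf-8", "replace") for r in answers]

domain = "example.com"  # placeholder domain
print("SPF:  ", [t for t in txt_records(domain) if t.startswith("v=spf1")])
print("DMARC:", [t for t in txt_records(f"_dmarc.{domain}") if t.startswith("v=DMARC1")])
```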
Given that you talk about 'even' implementing SPF, and now about IP blacklisting, I get the impression you haven't actually worked with (or kept up with) email over the last 10 years.
None of the major email service providers (or 'behemoths', as you call them) use IP blacklisting anymore. They abandoned that idea years ago. IP addresses are ephemeral, especially when IPv6 is used, so IP blacklisting simply does not work anymore. If we were to use IP-based blacklisting we'd have blacklisted the entire IPv4 address space by now. Yes, like any other SMTP service they will block IP addresses temporarily if you misbehave, but never permanently.
All major spam filters are content based, and reputation is always on the domain level, not IP level.
If you properly configure your email infrastructure, and you don't spam, then deliverability really shouldn't be a problem.
Heh, so many anti-HTTPS people here who seem clueless that the internet is not a safe place. There is always someone out there looking to exploit you on the net.
It’s a lot of emotional arguments without any technical substance. I agree that for personal static information HTTP is good enough. Another good use case is the usage of HTTP for distribution of update packages for distros like Debian, since the OS can validate those packages using signatures. Other than that, HTTPS does make the web more secure and I don’t see a reason to rally against its widespread adoption.
All other arguments you're presenting about HTTPS being easy (it is, have you tried Caddy) are moot. It's the sites that were made before Google took over control that aren't maintained that are at issue. And the idea that a for-profit company that no one should trust is saying they're the only ones you have to trust.
And the fact that many of us adopted the web because it was a platform that no company controlled. If it had been presented as Google's platform I would have run the other way and would have advised you to do the same. But now I'm invested. My freedom as a developer depends on the integrity of the web. And a web controlled by Google isn't the web.
Google has a nasty habit of taking control of open protocols and then trashing them.
Roll up your sleeves, make some quiet time and actually READ THE DOCUMENT.
Works on a few machines / connections I tried. Perhaps you should switch to a better browser or change your ISP if they don't let you visit websites that are on the web.