> I doubt the web will ever be fully encrypted. There will always be tons of people who don't care enough.
I don't think the Web will become fully encrypted by virtue of everyone caring enough to move to HTTPS.
The knockout punch for unencrypted sites will come when browsers make the decision to not load plain HTTP sites by default in order to protect their users. At that point, for the vast majority of people, the Web will be 100% HTTPS because they'll never see a site that isn't HTTPS.
We just have to get the encryption percentage high enough to allow browsers to finish the job.
What about the other part, the vast quantities of information that will never be encrypted because the sites remain but the people have moved on or are gone? Why does that deserve a knockout punch?
Not trying to defend the idea, but the domain will expire at whatever point the registrant stops funding renewal anyway. Is that potentially arbitrary sunset date "better" by virtue of being more distant? If the authors want the sites to keep existing but prefer to abandon stewardship now, they should transfer ownership now, to someone who cares enough to encrypt as needed.
Some of these sites may be on shared hosting where the provider doesn't offer SSL for custom domains, or offers it for a fee the website author[s] can't justify. So it may not even be that they don't care enough, but that they don't want to shell out personal money for, say, a static hobbyist website.
Well, there are always services like CloudFlare that offer free certificates, which even allows pages hosted on GitHub Pages to be served over HTTPS.
They don't deserve a knockout punch any more than FTP or Gopher deserved it. But unfortunately, the commercialisation of the web, the merging of static pages into dynamic apps (feature creep), one gargantuan application (the browser) to interface with all of it, and one gargantuan entity (Google) to discover it all mean that the minnows are going to have to go with the overall flow.
I understood it to mean that FTP has mostly been replaced by HTTP, through no fault of FTP itself, but rather because that's what serves the interests of the people who implement things. That is, FTP still works, and works as well as ever, but its usage is dropping.
The browser providers have clearly made that decision for the vast majority of users: default safety trumps information access.
We've yet to see if users will either (A) follow the on-screen, flashing red prompts, or (B) be trained to click through the warning screens due to their sheer prevalence.
Even in case B, there's a non-negligible probability that browser vendors respond by disallowing click-through. Just another step towards (re-)making the web into a nice, safe walled garden.
Those sites aren't really under any threat from an interstitial like you currently get on an expired certificate (which I assume is what the parent means by "refuses to load by default"). It would be a knockout punch for any site trying to build an audience, but it doesn't actually block the site. I don't think anybody is suggesting that insecure archived sites be deleted from the internet.
Oh, I can't wait for that day. Then I can sell an Internet-enabled appliance with a certificate that lasts just as long as the warranty! Recurring revenue, here I come!
What prevents you from already selling an appliance that stops working, certificate or not, after the warranty expires? Sounds more like a lawsuit than recurring revenue to me.
> What prevents you from already selling an appliance that stops working, certificate or not, after the warranty expires?
That sounds malicious -- whereas having it no longer work after the certificate expires sounds like security, and everyone likes security. With enough PR, you can string together something that sounds like a legitimate concern, like "with privacy being a major concern, encryption has progressed leaps and bounds past what we had two years ago -- so even if we were to renew the certificates, we simply do not think our customers' data would be safe when handled by these old devices."
The most chilling part is the casual way that data is discussed, because the consumer of the future will be okay with their refrigerator gathering data to drive advertising profit.
It seems that if something stops working for a "legitimate" technical reason (say, your camera stops working because it has no server access, even though you only ever used it on your LAN anyway), the customer can't win the lawsuit.
> The knockout punch for unencrypted sites will come when browsers make the decision to not load plain HTTP sites by default in order to protect their users.
That would be a terrible idea: it would break every "http:" link in existence, requiring people to edit billions of documents. And if "modern" browsers started pretending that "http:" meant "https:", it would break every other browser and lots of bots.
Sadly, because of SNI, browsers send the domain name in cleartext, so I don't feel there is much to gain in going all-encrypted. HTTPS/HTTP2 is broken by default in this regard, and there is no way to fix it other than using a VPN. At least with DNS you can do something about it yourself.
It looks like TLS 1.3 still puts server_name into ClientHello and that isn't encrypted.
I can't see how you could encrypt SNI without pushing TLS out to at least two round trips, which would suck for the Web although it might be tolerable in other applications.
One option that does occur to me would be to shove in, say, a 32-bit SHAKE(FQDN) as the identifier instead of the FQDN. This way we don't show eavesdroppers the actual FQDN we wanted, although they can try to guess and discover whether their guess was right at their leisure. So it doesn't prevent a sophisticated attacker from verifying that we connected to vpn.example.com rather than www.example.com on the same IP address + port, but it does make it essentially impossible for them to find out, by inspecting the traffic, that our server has a site named xy4298-beejee-hopskotch-914z.example.com.
The server knows all the hosts it serves, and can work out 32-bit SHAKE(name) for each of them and pick the right one, or reject the connection, without revealing anything extra. The birthday paradox means this is likely to produce collisions above a few tens of thousands of vhosts per server, but that's already deep bulk-hosting territory, where you don't care too much about security anyway. Ordinary people aren't running much more than hundreds of distinct sites per IP address, so collisions would be extremely rare for them.
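The scheme above is purely hypothetical (it's not part of any TLS spec), but it can be sketched in a few lines, assuming SHAKE128 truncated to 32 bits as the digest:

```python
import hashlib

# Hypothetical hashed-SNI sketch: the client sends a 32-bit SHAKE128 digest
# of the FQDN instead of the FQDN itself; the server compares it against the
# digests of the vhosts it actually serves.

def sni_digest(fqdn):
    """32-bit SHAKE128 of the (case-normalised) FQDN."""
    return hashlib.shake_128(fqdn.lower().encode("ascii")).digest(4)

def resolve_vhost(digest, vhosts):
    """Server side: return the matching vhost, or None to reject.

    An eavesdropper who sees only `digest` learns nothing about
    names they haven't already guessed."""
    for host in vhosts:
        if sni_digest(host) == digest:
            return host
    return None

vhosts = ["www.example.com", "vpn.example.com",
          "xy4298-beejee-hopskotch-914z.example.com"]
wire = sni_digest("xy4298-beejee-hopskotch-914z.example.com")
# The server finds its vhost; a sniffer sees only 4 opaque bytes.
print(resolve_vhost(wire, vhosts))
```

With a 32-bit digest the birthday bound kicks in around sqrt(2^32) = 65,536 names, which is where the "few tens of thousands of vhosts" figure above comes from.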
Being able to passively sniff all traffic and know where my users are going is a big deal. Building an IP -> DNS mapping is a lot of work AFAIK; sure, it can be done, but executing a get_hosts_by_ip() on every HTTPS packet is hard. In contrast, getting every hostname out of SNI takes almost no effort.
I do agree that there are more things to consider about privacy, but the call for all websites to be encrypted is naive. If that is what you want, it would be enough to extend chunked HTTP with an adler32 at the end of each response/chunk, and that would be a lot easier to work with than TLS.
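As a sketch of that idea (the framing is an assumption; no such HTTP extension exists), appending an adler32 to each chunk with Python's zlib would look like:

```python
import zlib

def frame_chunk(data):
    """Append a 4-byte big-endian adler32 checksum to the chunk body."""
    return data + zlib.adler32(data).to_bytes(4, "big")

def verify_chunk(framed):
    """Strip the trailing checksum and re-verify it; raise on mismatch."""
    data, checksum = framed[:-4], int.from_bytes(framed[-4:], "big")
    if zlib.adler32(data) != checksum:
        raise ValueError("chunk failed adler32 check")
    return data

assert verify_chunk(frame_chunk(b"<html>hello</html>")) == b"<html>hello</html>"
```

Note that this only detects accidental corruption: without a shared key, an on-path attacker can rewrite the chunk and recompute the checksum, which is part of why TLS uses keyed MACs/AEAD rather than a bare checksum.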