Chrome 68 will mark all HTTP sites as “not secure” (googleblog.com)
806 points by el_duderino 6 months ago | 812 comments



The people who are pushing back against HTTPS really bug me to be honest. They say silly things like “I don’t care if people see most of my web traffic like when I’m browsing memes.”

That presumes that the ONLY goal of HTTPS is to hide the information transferred. However, you have to recognize that you run JITed code from these sites. And we have active examples of third parties (ISPs, WiFi providers) injecting code into your web traffic. When browsing the web over HTTP you are downloading remote code, JITing it, and running it on every site you visit, unless you are 100% noscript with no HTTP exceptions. You have no way of knowing where that code actually came from.

Now consider that things like Meltdown and Spectre have JavaScript PoCs. How is this controversial?


My primary concern is local servers, which are of course irrelevant if you are a centralised service provider such as Google.

To provide some context, I'm currently working on a web application where the server is intended to be running inside a home network (where the server requires zero configuration by the user). As of now, some of the JS APIs I'm using are only available if the site is running in a secure context, so the server has to serve the application using HTTPS, otherwise some functionality won't be available. However, it is impossible to obtain a valid TLS certificate for this local connection -- I don't even know the hostname of my server, and IP based certificates aren't a thing. So basically, to get a "green lock symbol" in the browser, the server would have to generate a random CA and get the user to install it, which comes with its own severe security risks and is not an option.

So my current plan is to have a dual-stack HTTP/HTTPS server, which on first startup generates a random, self-issued certificate. When the server is first accessed using HTTP, the client automatically tries to obtain some resources via HTTPS. If this succeeds, the user is redirected to the HTTPS variant. If it fails due to a certificate error, the user is presented with a friendly screen telling her that upon clicking "next" an ugly error message will appear, and that this is totally fine. Oh, and here's how to permanently store an exception in your browser.

Still, the app will forever be marked as insecure. Although it isn't. It is trivial for the user to verify that the connection is secure by comparing the certificate fingerprint with that displayed by the server program she just started.
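For what it's worth, the fingerprint being compared is just a hash of the certificate's DER bytes. A small sketch (in Python; the function name is mine) of how the server program could print it in the same colon-separated form browsers show in their certificate viewer:

```python
import hashlib

def fingerprint_sha256(der_bytes: bytes) -> str:
    """Format a certificate's SHA-256 fingerprint the way browser
    certificate viewers display it: colon-separated uppercase hex."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# The server prints fingerprint_sha256(cert_der) on startup; the user
# compares it against what the browser shows for the connection.
```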

This sucks. It just seems that Google and co don't care about people running their own decentralised infrastructure; and marking your own local servers as "insecure" definitely does not help.


Yes, this reminds me of the Mozilla IoT gateway from yesterday, which seemed to drag exactly that chain of requirements behind it. Something like:

- We'd like to make an IoT gateway that you can use from a browser.

- To get access to necessary APIs, we have to provide it via HTTPS.

- To get HTTPS, we need a certificate. Because no one is going to pay for one, we'll use Let's Encrypt.

- To get a Let's Encrypt cert, we need a verifiable hostname on the public internet. Ok, let's offer subdomains on mozilla-iot.com.

- To verify that hostname, Let's Encrypt needs to talk to the gateway. Ok, let's provide a tunnel to the gateway.

- Now the gateway is exposed to the internet and could be hacked. So we need to continuously update it to close vulnerabilities.

So in the end all your IoT devices are reachable from the internet. But hey, you can use Firefox to turn your lights on!


The real solution to this would be something like TLS-SRP, where you can authenticate both sides of a TLS session with a zero-knowledge password proof (devices could ship with a piece of paper containing the generated password: no need for central servers, remote connections to the mothership, gateways, certificates, exposing anything to the internet, or even any internet connectivity at all).

In simple terms, it allows you to set up a TLS session by both sides proving they know the secret password without either side exposing it, so a MITM cannot capture it. It is an alternative to the cert model that works very well for local network devices.
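For the curious, the core SRP-6a math is small enough to sketch. This is a toy demonstration of the key agreement only: a toy modulus rather than the RFC 5054 groups real TLS-SRP uses, and none of the wire protocol. It just shows how both sides land on the same session secret without the password ever being transmitted:

```python
import hashlib
import secrets

# Toy SRP-6a parameters -- illustrative only, NOT for real use.
N = 2**521 - 1  # a Mersenne prime as toy modulus
g = 5           # toy generator

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def ib(n: int) -> bytes:
    return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

k = H(ib(N), ib(g))

# Registration: the device could ship with the salt and password on paper.
password = b"printed-on-the-box"
salt = secrets.token_bytes(16)
x = H(salt, password)
v = pow(g, x, N)  # verifier stored server-side; the password itself is not

# Login: client holds the password, server holds only (salt, v).
a = secrets.randbelow(N); A = pow(g, a, N)                 # client ephemeral
b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N   # server ephemeral
u = H(ib(A), ib(B))

S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)
assert S_client == S_server  # shared secret, password never sent
```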

But of course Chrome and Firefox both have zero interest in supporting this use case, despite TLS-SRP kicking around for ages now; they'd much rather have you connect to your own devices via a mediated cloud gateway server, for your own safety of course.


Pre-shared keys are only good for bootstrapping a stronger trust relationship. You could use TLS-SRP to exchange identities and then mutually authenticate each other for the general case. X.509 is not the problem. Centrally managed trust hierarchies are.


I’d rather see an ssh-style ask once and then trust forever after that.
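That's roughly the ssh known_hosts model; a minimal sketch of what such a trust-on-first-use store could look like (class and method names are mine):

```python
# Trust-on-first-use: remember the host's key fingerprint on first contact,
# and treat any later mismatch as a possible MITM instead of accepting it.
class TofuStore:
    def __init__(self):
        self._known = {}  # host -> remembered certificate fingerprint

    def check(self, host: str, fingerprint: str) -> str:
        seen = self._known.get(host)
        if seen is None:
            self._known[host] = fingerprint  # first use: trust and remember
            return "trusted-first-use"
        if seen == fingerprint:
            return "trusted"
        raise ValueError(f"fingerprint for {host} changed -- possible MITM")
```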


Isn't this just key pinning? I'd like to see a P2P system. That way governments can't force CAs to allow MITM.


That's what you use the pre-shared secret to establish: longer lived identities.


I didn't see that post, but if you control the DNS for a domain you can use the DNS challenge with Let's Encrypt.


Most people don't control a domain.


Mozilla does though. So if their plan is to "offer subdomains on mozilla-iot.com" then they just need to set it up so that their infrastructure will fulfill the DNS challenge for the user's device when it requests a new cert.
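Concretely, fulfilling the DNS-01 challenge just means publishing one TXT record. Per RFC 8555, the value at _acme-challenge.&lt;domain&gt; is the unpadded base64url-encoded SHA-256 of the key authorization; a sketch (the function name is mine, and the token/thumbprint come from the ACME server and account key respectively):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """TXT record value for an ACME DNS-01 challenge (RFC 8555, sec. 8.4):
    base64url(SHA-256(token "." account-key-thumbprint)), without padding."""
    key_authorization = f"{token}.{account_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_authorization).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Mozilla's infrastructure would publish this value at
# _acme-challenge.<user-device>.mozilla-iot.com on the device's behalf.
```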


Which itself raises several questions:

1. Should most people have access to a domain they control?

2. Should there be a standard for nonroutable / nonpublic domains? (.lan and .localdomain are two of which I'm generally aware)

3. How should browsers deal with hosts and/or domains which are not and can not be on the public Internet?

4. How the hell did we get into this mess?

I'm kicking around the notion of relaxing various aspects of the problem domain and seeing where we end up. It's mind-numbingly bad, though.


Also: if we don't use the domain registration system to proxy for individual sites and presence on the Internet, then what alternatives might be substituted?

Note that domains don't solve the problem of a third-party controlling your point-of-access or presence, they only move it elsewhere.


Maybe this is a problem.

I'd say that in the world we live in, getting a domain is as useful as getting a passport. It costs a bit of money to get and renew and is a bit of a hassle, but it opens a lot of doors.


Then you usually don't need a certificate from a CA. Just self-sign.


This seems like it's going in circles, but if you self-sign, modern browsers display a big scary warning. If you're making a device or software meant to live behind a firewall and be accessed from a browser, users will either have to install your CA in their browsers or deal with the big scary warning. Both of which are bad.


If you don't have a domain, chances are that you don't provide service to normal users. Or to users at all besides yourself.


> chances are that you don't provide service to normal users. Or to users at all besides yourself.

I think that's the point. Creating friction that scares normie users when people are using web UIs on local networks puts non-cloud based products at a disadvantage against the centralized giants.


Though there are a couple of technical options to make this work, the thing I realised many years ago is that if you want to use internet technology you need to be connected to the internet. Anything else is just an endless headache.

If you are sure that you don't want your IoT devices to be reachable from the internet, don't use the internet protocol to talk to them, don't use DNS, web browsers, etc.

Just create a new local protocol (or switch back to IPX)


It really feels like you're throwing in the towel. The internet is designed specifically to allow federation between forests of hosts. That's what routers do. Browser vendors need to get their heads out of the fucking cloud and stop coercing people into relying solely on global TLS. I think TLS is wildly misunderstood. The whole global certificate thing only works for global communication. Global hosts should be your first point of contact, and from there you should be able to leverage global TLS to bootstrap more intimate trust relationships... to make informed decisions about the trust relationships you want to establish. Browsers need to facilitate this, not do everything possible to prevent you from owning your trust. If your browser is only able to communicate globally, that's not the internet's fault. It's not TLS's fault (TLS is designed to allow clients to choose who they trust). That's a failure of your browser to support your diverse use case.


If you like tilting at windmills then go ahead. Since forever people have written software that assumes a global internet.

However today the situation is much worse. The global internet is a very hostile environment. Everybody who writes software that has to work on the internet has to assume the worst.

If there are modes in a piece of software that assume the network can be trusted, then you can be sure that some attacker will try to activate that mode.

You cannot assume that the users of a piece of software have any idea how the internet works. If they can solve an immediate problem by disabling security, they will.

So, over time just about any piece of software that wants to support secure use on the internet will come with built-in trust anchors that are very hard to change.

In browsers this trend is quite visible. But the same trend, though less visible is going on in DNS.

E-mail is a bit of a mess. But the basic features are there. Just not a lot of adoption.

To sum it up, users don't want to know about internet security. They want their devices to work securely at a random open wifi network. Devices have a hard time figuring out if they are in a trusted environment. So the best way forward is to assume all environments are untrusted and require encryption everywhere.


My issue was with you suggesting people abandon security if they don't want a public internet connection.

I think the global TLS system works great when you're in a global context. But now the internet has proliferated and it's infecting people's homes. It turns out people _don't want_ to be in that hostile global context when they're at home. They want a context that is private. They want to isolate their home from the firehose of global bullshit out there.

I think we've passed an inflection point and we're seeing a revival of attention to home networking. Sure, for the last 15 years all I connected to my wifi was my laptop, and that's pretty useless without the global internet. But now people have IoT things. Door locks, ovens, toilet paper, you name it. Why the hell would anyone want those things phoning home to a remote server all the time? Why do they even need to be publicly routable? Maybe they do need limited connectivity, in which case strong, interactive controls over what goes in and out are necessary. More than ever, the _integrity_ of your home is also the integrity of your network. Your network is an appliance now. It is the lifeblood of your home.

People deserve to be able to administrate their personal enclaves as they see fit and do it securely. The best way to prevent someone from remotely unlocking my doors is to eliminate the remote path to the door lock. Did you forget that we've come rather far in securing layer 2? People _can_ and _should_ trust their home network. Not recklessly, but private networks are not a fairy tale.

Finally, don't patronize users. The users of today are tomorrow's grandmas. It's not unreasonable to expect that we can slowly adapt user expectations and understanding of the security of their software. I'm not saying you don't have a point, but I think it's a disservice to users to treat them all like idiots.


> They want a context that is private. They want to isolate their home from the firehose of global bullshit out there.

With all due respect, over the past fifteen years of trying to secure corporate networks, we've learned the hard way that this simply isn't a realistic goal. Having a hard outer shell and a squishy center just leaves the entire network vulnerable, because it's simply not possible to isolate yourself from the badness of the outside world. As soon as you let anyone in who's ever spoken to the hostile outside world, they bring the hostile outside world in with them.

We can't keep pretending that we can keep home networks isolated from the rest of the world when many of the computers in those networks move freely between the global Internet and the private home intranet. All connected devices are part of the global Internet whether or not we want them to be, and whether or not their connections are persistent.


I think we're arguing similar things. I'm not saying we only need a hard outer shell. I'm arguing that we _need_ defense in depth. And that depth should include all scopes of the internet protocol, not just global. You _should_ be able to run TLS locally, so that if some global thing does get in, it now has to penetrate another security/application layer too. I should be able to tell my browser to trust these certificates for global IPs and these other ones for my home site. Why not run IPsec in your home while you're at it too, so it's even harder for a remote thing to pivot.


Fair enough, thanks for the clarification!


People can administrate their personal enclaves as they see fit. There are a number of operating systems that you can rebuild from source. You can just create your own root CA and add it to your browsers. You can run your own root DNS zone.

Most people have no clue about computer security and don't want to get one. Tomorrow's grandmas will know as little about how their computers are attacked as today's grandmas.

And they don't want to know. They want technology that is safe to use.

We know from experience that a network where devices trust each other is a disaster waiting to happen. So let's kill that model. All devices have to survive in the open internet. Because if they don't, somebody will figure a way to attack them.


I think we're talking past each other at this point. I want security both globally and locally. I want to be able to tell my browser who it should trust and what level of paranoia it should have in each context. If I go through the work of recompiling a bunch of software with custom trust anchors, I don't want it all to be for naught because in the last mile my browser says "I'm lost, I only understand global TLS".

I LOVE the idea of a NAT-less global internet. It's why I love IPv6. At that level anything that wants to participate should be secure, and "go Chrome" for leading the charge. I even think firewalls for global IPv6 are stupid. If you're global you're global; no amount of silly packet filtering rules is gonna change that.

But that's not the end-all. I don't want my entire house to stop working because I got a new ISP and I have a few days of downtime. Or, more likely, because my ISP went down because they oversell bandwidth and haven't updated their routing hardware in 15 years. Maybe I don't want my file server with all my family photos globally addressable. Maybe I don't want my kids on the global internet at certain times. The point is I know what's best for my house. I'm sick of the old IPv4 mindset where the only reasonable model is centralized global trust (see, we agree that's been the status quo).


> Global hosts should be your first point of contact and from there you should be able to leverage global TLS to bootstrap more intimate trust relationships... to make informed decisions about the trust relationships you want to establish.

Explain more how this would work.


Part of setting up an account with a web service or IoT device provider or whatever should be acquiring their certificate authority. However, instead of having a single-bucket, OS-level root trust store, browsers empower users to whitelist the sites they trust that authority to, well, have authority over. You trust Google's CA for google.com. Or maybe you don't.

At the application level, trust should be managed by the application provider and the user. Their certificate authority can issue whatever certificates it wants for whatever kind of network topology or application use cases or whatever else they need to support. As a user you're either still using a browser, or you've at this point switched to their native app, or you've got their JS helpers loaded, or whatever. Their application logic can manage all the certificate crap so that users are minimally encumbered by it. If you're talking to local network devices, their application logic would issue certificates for whatever scope of IPv6 addresses you're using. Maybe your fancy device is running DNS, their thing issues certs for your scope's site, etc.

You can even do mutual TLS now, because their authority can issue each install of their app its own certificate. Browsers should support client certs too. Navigating to foo.com using a scoped IPv6 address? You're prompted to select the identity you wish to use. Your browser remembers your choice for that scope. The CA is only valid for your blessed names, scope aware.
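A scope-aware trust store along these lines could be sketched as below. Everything here is hypothetical (no browser exposes such a policy today); it only illustrates the "CA valid only for your blessed names and scopes" idea:

```python
import ipaddress

# Hypothetical scope-aware trust policy: each accepted CA is valid only for
# an explicit set of domain suffixes and address ranges, rather than every
# installed CA being trusted for everything. All names here are invented.
class ScopedTrustStore:
    def __init__(self):
        self._policy = {}  # ca_id -> (domain suffixes, address networks)

    def allow(self, ca_id, domains, networks):
        nets = [ipaddress.ip_network(n) for n in networks]
        self._policy[ca_id] = (domains, nets)

    def is_trusted(self, ca_id, hostname, addr) -> bool:
        if ca_id not in self._policy:
            return False
        domains, networks = self._policy[ca_id]
        name_ok = any(hostname == d or hostname.endswith("." + d)
                      for d in domains)
        ip = ipaddress.ip_address(addr)
        addr_ok = any(ip in net for net in networks)
        return name_ok and addr_ok
```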


> Their certificate authority can issue whatever certificates it wants for whatever kind of network topology or application use cases or whatever else they need to support.

But that means that other customers can get a trusted certificate for the exact same ip address, right?


> If you are sure that you don't want your IoT devices to be reachable from the internet, don't use the internet protocol to talk to them, don't use DNS, web browsers, etc.

What utter nonsense.

TCP/IP works fine on a local network

DNS works fine on a local network

HTTP(S) works fine on a local network

Web Browsers work fine on a local network

After all, the internet is nothing more than a collection of connected networks.

If I have no need for anything within my local network (or subset of that network) to be routable from the internet then I put a firewall in the way, it doesn't mean I don't use the same tools and protocols.


Actually,

TCP/IP on a local network will make mobile devices think they are on a captive portal.

DNSSEC will prevent you from serving local answers for signed zones. The DNS root zone is signed. So the IETF is currently having a hard time figuring out which zones need to stay unsigned to allow these kinds of local answers.

HTTP is not secure, so that is going the way of telnet.

HTTPS is what we are discussing here. Without a valid cert, don't expect browsers to support you much longer.

Yes, you can create your own little internet island. But don't complain if any software fails to work in that environment.


> TCP/IP on a local network will make mobile devices think they are on a captive portal.

No, that's the phone making an HTTP call to a server on the internet to try to work out if it's behind a captive portal; all TCP/IP still functions perfectly fine within an internal network with no WAN access.

> DNSSEC will prevent you from serving local answers for signed zones. The DNS root zone is signed. So the IETF at the moment has a hard time figuring out which zones need to stay not signed to allow these kinds of local answers.

The devices are on a local network, they're not making requests to things outside the network. Even if you do need outside access for resolution you can still happily use a DNS forwarder with your local DNS served up locally with any outside DNS (including DNSSEC) queries being forwarded where they need.

> HTTP is not secure, so that is going the way of telnet.

That's neither here nor there, it still works fine in the context of a local network.

> HTTPS is what we are discussing here. Without a valid cert, don't expect browsers to support you much longer.

A cert is as valid as my cert store thinks it is, again, still works on an internal network.

> Yes, you can create your own little internet island. But don't complain if any software fails to work in that environment.

By "own internet island" you mean a local network? Sure it's more advanced than your nan's local network but it's still a local network in which the IP part of TCP/IP still works absolutely fine. Maybe you come from the land where your mongo instance needs a public IP, or that your lightbulbs can be used to pivot onto your home network. I don't. This is basic network design. Maybe the shitty software/hardware you're running shouldn't assume it will always be connected to the internet directly.


Sure, as a power user, it can all be done. You can also manually trust a self-signed cert. I wouldn't recommend it for the masses though.


I was rebutting the blatantly wrong "don't expect IP to work". Yes it's not your mum's network, but then I never said it was.


But that's the whole point.

The goal is that you can build a device that your grandma can put at home, which never connects to a remote server, which she can connect to from her browser, which just works, never shows an annoying HTTPS warning, never requires enabling custom CAs, and provides all functionality of the browser, without being marked as "not secure".

That is the goal: All the functionality of e.g. a Nest device, without ever sending a single packet outside of your LAN.

(Disclaimer: For my own IoT projects, of course I use a special domain with DNS delegation and Let's Encrypt certificates, and HSTS preloaded)


> never connects to a remote server, which she can connect to from her browser, which just works, never shows an annoying HTTPS warning, never requires enabling custom CAs, and provides all functionality of the browser, without being marked as "not secure".

Right, do you have a proposal for how to accomplish this? If you don't want to require an internet connection, I think trusting a self-signed cert is the best way to go, otherwise owning a domain name + letsencrypt is good if you're ok connecting to the internet.


I don't think you'll be able to explain to your grandma how to trust a self-signed certificate.

The alternative would be HTTP, but that's "not secure" anymore.

There are situations where trust on first use or out-of-band verification are the only option. This is one of them.


What I think could really do good is some browser-supported protocol specifically made for identifying devices on a LAN.

E.g., imagine some UPnP-style broadcast where a device announces a public key, human-readable name and some type/capability information. Browsers listen for broadcasts and show users a notification once they discover an unseen device. Once confirmed, they can identify the device by its public key and also use it to establish encrypted connections.

You could also define out-of-band methods to share the key, e.g. via an on-device wifi hotspot, NFC, a USB plug or a QR code printed on the device.
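As a sketch of what such an announcement might carry (the field names and JSON format here are invented, not any existing UPnP/mDNS schema):

```python
import hashlib
import json

# Hypothetical discovery announcement: the device broadcasts a human-readable
# name, its capabilities, and a fingerprint of its public key. The browser
# shows `name` to the user and, once confirmed, pins `key_fingerprint` for
# all later connections to this device.
def make_announcement(name, capabilities, pubkey_der: bytes) -> bytes:
    return json.dumps({
        "name": name,
        "capabilities": capabilities,
        "key_fingerprint": hashlib.sha256(pubkey_der).hexdigest(),
    }).encode("utf-8")

def parse_announcement(payload: bytes) -> dict:
    return json.loads(payload.decode("utf-8"))
```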

Mozilla's Web-of-Things spec seems to show at least a willingness to work on this. Though right now they seem to re-solve exactly all the already solved problems and tackle none of the unsolved.

...or you could simply treat HTTP in local networks as a secure origin and be done with it.


I agree, though instead of "public key" I would just say "self-signed cert".

> ...or you could simply treat HTTP in local networks as a secure origin and be done with it.

I think this is a lot harder than it sounds. What's a "local network"? Is a coffee shop or public wifi a local network? How does the browser know? How do you know a rogue IoT device isn't doing IP spoofing?
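To illustrate the problem: the best a browser could do by inspection is a membership test against the private address ranges, which says nothing about whether the network is actually trusted. A sketch in Python:

```python
import ipaddress

# A naive "is this local?" check: RFC 1918 / RFC 4193 / link-local
# membership. The catch: a coffee-shop wifi hands out private addresses
# too, so this tells you nothing about whether the network is trustworthy.
def looks_local(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return ip.is_private or ip.is_link_local

assert looks_local("192.168.1.20")       # home router... or the coffee shop
assert looks_local("fd00::1")            # IPv6 unique local address
assert not looks_local("93.184.216.34")  # public address
```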


Right, yeah, browsers are not guaranteeing any security on the local network.

If self-signed is not an option, then I think an internet connection and domain name are required. (Which seems to me is totally fine. That's the whole point of IoT)


I totally agree. Either have it securely connected to the internet, or not at all. The fact that something is on your LAN is just an implementation detail and shouldn’t be relied upon for security.


Is the fact that your physical door locks and keys are only unique to your local region merely an implementation detail? No, it's not. It's conceptually the same: keys and locks are not globally secure. They're only locally secure.

Also IPv6: everything gets a globally routable address. Great, so why do we need anything else? Well, it turns out that in order to support all the different modes of network operation and all the different topologies and use cases, the Internet Protocol needs to support non-global scopes too. Arguing that you can't have e.g. link-local security is absurd and really quite green, from a networking professional's perspective.

Oh, also: IPsec is part of IPv6, not just an afterthought like it was for IPv4. This makes it even more likely we'll see trusted network scopes sooner rather than never.


> It's conceptually the same, but keys and locks are not globally secure. They're only locally secure

Criminals in Hong Kong can’t teleport to my door. Local security for locks is fine because the threat is always local.


> Criminals in Hong Kong can’t teleport to my door.

They might be able to hijack a different IoT device on the network. The more IoT devices, the greater the attack surface.

What if you have guests over and they happen to have a bad virus on their computer or their phone?


That is basically my point. People need ways to create local enclaves so it's impossible for packets to ever make their way into your zone. And once you do that, local security is perfectly reasonable and desirable.


HTTPS only protects data in transit from the server to your device. A local network works just fine for providing the same protection. It's not like an internet attacker can spoof a 192.168.x.x address on my LAN, or sniff the traffic to or from my server.


Possibly a rogue IoT device could spoof an ip address. But that's probably not going to happen to someone who knows what they're doing. Browsers don't know if the network is trusted and can't assume that LAN IPs are safe.

It seems to me the best options are either trusting a self-signed cert (on every computer that needs access) or pay for a domain name and use letsencrypt to get a cert. I do think it's unfortunate that you now need to pay money just to do things like this on your own network, but I don't see a better way. It's either paying for a domain name or needing to explicitly trust the device.


I don't think there's a way to do what you want in a secure manner.

I think fundamentally your issue here is with secure contexts, not with the site labeling. In the end, you can have a site like you describe, but you have to avoid using APIs that require secure contexts.

Any sort of avoidance of this, as by the method you describe ("please ignore the ugly warning you are about to see") is a mistake, because you're helping to train the users to ignore these messages.

> Still, the app will forever be marked as insecure. Although it isn't. It is trivial for the user to verify that the connection is secure by comparing the certificate fingerprint with that displayed by the server program she just started.

Is it, though? Assuming your server hasn't been compromised (nobody is monitoring it to make sure!), and assuming that the self-signed cert cannot be easily exfiltrated, and assuming that they don't do the same thing the next time they get an ugly warning from chase-bank.ru because they're sure that it's spurious -- then maybe?


> In the end, you can have a site like you describe, but you have to avoid using APIs that require secure contexts.

While that might be an option now, it's not going to be viable. Browser vendors have agreed to make all new JS APIs -- mostly independent of their security implications -- available to secure contexts only [1]. Even now, you cannot use the Crypto API -- which is entirely implementable in plain JS, albeit slower and with higher energy consumption -- without secure contexts. Or you cannot raise the IndexedDB storage limit for your application above a certain threshold without a secure context (which is exactly my problem, I want users to be able to temporarily store a few hundred MB on their mobile device).

> Any sort of avoidance of this, as by the method you describe ("please ignore the ugly warning you are about to see") is a mistake, because you're helping to train the users to ignore these messages.

I completely agree. I understand the implications of me doing this and I honestly don't want to.

I guess what I'm complaining about mostly boils down to a UX issue. It would be near-trivial for browser vendors to add the following to the error page if they detect a self-signed cert on a local connection: "This site seems to be served from the local network. If you are trying to access your own network infrastructure, please make sure the following fingerprint matches the one displayed by the application you are trying to access <fingerprint>, <autogenerated fingerprint art>. If you are unsure, please click 'Cancel'.".

That would solve the problem, no landing page from my application required.

> Is it, though?

You're right, there are a lot of assumptions I'm making here. However, I see no reason why my local HTTPS site should be displayed as less secure (red cross, red text in the URL bar) than a local HTTP site.

[1] https://blog.mozilla.org/security/2018/01/15/secure-contexts...


> I see no reason why my local HTTPS site should be displayed as less secure (red cross, red text in the URL bar) than a local HTTP site.

Exactly, that's why browsers are trying to move in the direction of making HTTP sites appear as less secure.

> I guess what I'm complaining about mostly boils down to a UX issue. It would be near-trivial for browser vendors to add the following to the error page if they detect a self-signed cert on a local connection: "This site seems to be served from the local network. If you are trying to access your own network infrastructure, please make sure the following fingerprint matches the one displayed by the application you are trying to access <fingerprint>, <autogenerated fingerprint art>. If you are unsure, please click 'Cancel'.".

I totally agree it's a UX issue, but I don't see a good solution. Unfortunately your browser doesn't know if you're on "your own network infrastructure" and not at a coffee shop. It needs to be on the safe side and assume it's not a trusted network.

Maybe the browser should require you to type in the fingerprint of the key.


In IPv6 it does know. This is because IPv6 addresses have scoped prefixes. So much of this discussion is hinged on old IPv4 assumptions.
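To be fair, the address scope is inspectable; a sketch using Python's ipaddress module of the distinction the parent is pointing at (whether a given scope implies *trust* is exactly what's disputed above):

```python
import ipaddress

# IPv6 addresses carry their scope in the prefix, so software can at least
# tell link-local and unique-local traffic apart from global traffic by
# inspecting the address alone.
def ipv6_scope(addr: str) -> str:
    ip = ipaddress.IPv6Address(addr)
    if ip.is_link_local:   # fe80::/10 -- never leaves the local link
        return "link-local"
    if ip.is_private:      # fc00::/7 ULA -- not routed on the open internet
        return "unique-local"
    if ip.is_global:
        return "global"
    return "other"
```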


So how does your browser know that a certain range of ip addresses should be trusted?


What about trusting ip from private address ranges?


Wow -- you're right, the overreach in "secure contexts" is astonishing. It looks like [1] is the main thread discussing this policy. The notion of "internal/isolated network services" is mentioned in one comment but never addressed, other than pointing out the difficulty of getting a cert for such a service. I think it's probably worth jumping into that discussion before the policy is set in stone.

[1] https://github.com/w3ctag/design-principles/pull/75


Thinking on this further, the storage one makes sense for secure contexts, because allowing that for insecure contexts would mean that by spoofing the DNS of a site, I could steal its information. I'm not certain how secure the data that you store would be, but if it were, say, camera footage or something, it would be possible for it to be extracted from the user's phone by a malicious website out of your control.

I don't know if this problem is solvable -- at least a self-signed certificate would mean that the certificate would have to be exfiltrated in order to do this, assuming the browsers key the indexedDB to the certificate fingerprint, which would indicate a degree of compromise that most likely means that the data could be stolen directly from the device.

If you don't mind being more specific, what kind of data would you be storing on the phone? Is it just for caching purposes? It seems like it might be better for the data to be fetched from the device on demand rather than stored on the phone, even if this causes a performance hit, to avoid the possibility of leaking the data to untrusted parties.


> If you don't mind being more specific, what kind of data would you be storing on the phone?

Sure, my particular use-case is a media management/playback application (think web-based audio player; like Plex, Ampache, Spotify) that is intended to be used with a large personal library of audio files (in the order of a hundred thousand files, about 1 TB of data). The application has an offline-mode (via ServiceWorkers, everything being REST, content addressed and infinitely cacheable), where the connected client can request to "pin" a certain playlist/filter. Upon request, all media files with that filter will be transcoded/downloaded onto the client and are available even when having no connection. So it's not really any data that needs to be kept secure (still, all media blobs are encrypted anyhow to allow public caching -- for internet hosted instances -- without having to fear IP issues).
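The content-addressing part is the simple bit: each immutable blob is named by the hash of its bytes, so any cache can hold it forever without revalidation. Roughly (the URL scheme here is mine, for illustration only):

```python
import hashlib

def content_address(blob: bytes) -> str:
    """Name an immutable resource by the hash of its bytes.

    Identical content always gets an identical URL, so responses can be
    marked infinitely cacheable and served from any cache, public or not.
    """
    return "/blob/sha256/" + hashlib.sha256(blob).hexdigest()
```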


Let's maybe hope that they'll make an exception for the RFC1918/4193 ranges. Of course, the other side of the coin is that even a "private" network could be anything from your private home to your workplace intranet to an airport wi-fi hotspot, and can't be assumed to be safe from snooping/injection.
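Mechanically, such an exception would be a trivial address-range check, which is part of the worry: the check says nothing about who else is on the network. A sketch (the function name is mine):

```python
import ipaddress

# RFC1918 (IPv4) and RFC4193 ULA (IPv6) prefixes.
PRIVATE_V4 = [ipaddress.ip_network(n)
              for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
ULA_V6 = ipaddress.ip_network("fc00::/7")

def in_private_range(addr: str) -> bool:
    """True if addr falls in RFC1918 or ULA space -- and nothing more."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        return any(ip in net for net in PRIVATE_V4)
    return ip in ULA_V6
```

Your home NAS and an airport hotspot's captive portal both pass this check, which is exactly why the range alone implies nothing about trust.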

As for your particular hassle, it makes sense to me for a browser to mark sites that mix http/https as insecure from the point of view that once the data is on the plain http page you can no longer be sure that it won't be handed off over an unencrypted connection some place else by some rogue javascript.

Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks. Say, a method for routers to discover certificates announced by devices on the network to list them in its management interface where you can enable or disable them.


> Let's maybe hope that they'll make an exception for the RFC1918/4193 ranges. Of course, the other side of the coin is that even a "private" network could be anything from your private home to your workplace intranet to an airport wi-fi hotspot, and can't be assumed to be safe from snooping/injection.

That would be idiotic not just because untrusted parties can use those addresses, but more importantly because those are more or less terrible hacks that should be avoided completely if possible. You rather should have globally unique addresses on your internal network if you can, which would just break this.

> Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks.

That's also not sensible. The whole idea of linking stuff to specific networks is bad. There is no reason why access to a device on your home network should in any way be linked to your client device being connected to that same network. It's the internet, not "the home network and the cloud".

What is needed is a way to establish a trust relationship between two devices that you have control over. Where those devices happen to be connected to the internet should be absolutely irrelevant. There might be an argument that supporting a simplified peering procedure on a local network would be a good idea--but the point is that once the trust relationship is established, you should be able to move your client device to a different network on the other side of the planet and still be able to talk to your device on your home network.
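Concretely, the kind of pairing I mean is just trust-on-first-use pinning of key material. A sketch (the class and its storage are made up for illustration):

```python
import hashlib

class PinStore:
    """Trust-on-first-use: pair with a device once, then verify its
    certificate from any network, anywhere on the planet."""

    def __init__(self) -> None:
        self._pins = {}  # device id -> pinned cert fingerprint

    def pair(self, device_id: str, cert_der: bytes) -> None:
        # Done once, e.g. while physically next to the device.
        self._pins[device_id] = hashlib.sha256(cert_der).hexdigest()

    def verify(self, device_id: str, cert_der: bytes) -> bool:
        # Only the key material counts; the client's current network
        # location is irrelevant.
        return self._pins.get(device_id) == hashlib.sha256(cert_der).hexdigest()
```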


> but more importantly because those are more or less terrible hacks that should be avoided completely if possible

Seems quite unavoidable with IPv4, or what is the hack you're referring to specifically?

> That's also not sensible. The whole idea of linking stuff to specific networks is bad. There is no reason why access to a device on your home network should in any way be linked to your client device being connected to that same network.

What is the reason against? It's not like the idea of a privately managed network in which you trust all peers is novel or rare. Most people have a network in their home that they manage for themselves or for their family. It's the perfect scope for IoT devices.

> It's the internet, not "the home network and the cloud".

That's an interesting notion, but unfortunately does not reflect reality of use or the design of existing internet protocols, or even the very core concept of the internet: interconnected networks. I have a private network at home. My means to connect devices on this network to the internet is via a gateway which is assigned a single globally unique internet address by my service provider, and a locally unique address on the private network.

If at some point every device has its own global address and is accessible globally, it will be more accurate to assume that something is insecure if it communicates in the plain, but we're not there yet. What the browser is doing now is pretty much assuming an arbitrary level of "better safe than sorry".

> What is needed is a way to establish a trust relationship between two devices that you have control over.

Say, by sharing keys over a network under your control, certified and authorized by a device you trust for pretty much everything else on that network?

> Where those devices happen to be connected to the internet should be absolutely irrelevant.

Agreed, but the current conundrum is that they need to be connected to the internet if you want to use a central certificate authority.


> Seems quite unavoidable with IPv4, or what is the hack you're referring to specifically?

Well, yes, for most people it unfortunately is. But imagine you are one of the lucky ones who do have global IPv4 addresses everywhere. And now someone sells you a product that tells you "sorry, nice IPv4 network that you have there, but you have to install NAT and an RFC1918 network to use this IPv4 product". Not very sensible, is it? Same applies for IPv6 and ULA, obviously.

> What is the reason against?

What would be a reason for limiting the usefulness of your devices?

> It's not like the idea of a privately managed network in which you trust all peers is novel or rare.

Which is fine, but not a sensible assumption to make in an IP product. If you want to use it in a privately managed, trusted network, of course you should be able to, but the idea that an IP device should just refuse to work over IP if your IP happens to extend beyond your LAN is idiotic. That should be a matter of the network's policy, not of the device's hard-coded policy.

> That's an interesting notion, but unfortunately does not reflect reality of use or the design of existing internet protocols, or even the very core concept of the internet: interconnected networks.

Erm ... that's completely backwards? It unfortunately does not reflect the current use of IPv4 in particular due to NAT everywhere, but that certainly was not part of "the design of existing internet protocols", that was a hack due to lack of addresses.

What was before the internet were separate local (and sometimes not so local) networks: You had all kinds of link-layer protocols, and then various higher-level protocols, usually specific to a given link layer. The whole point of the internet was to add a common abstraction to all of those link-layer protocols, precisely to eliminate any distinction between local or remote, ethernet or token ring, modem or ISDN, GSM or CDMA, an addressing layer that erased the distinction: If you had an IP address and the thing you wanted to communicate with had an IP address, you could, even if you were on token ring, your WAN link was ISDN, the backbone was ATM, the peer's WAN link was a dial-in modem and their LAN was ethernet. The point of IP is that you don't have to care, any IP address is as good as any other.

> I have a private network at home. My means to connect devices on this network to the internet is via a gateway which is assigned a single globally unique internet address by my service provider, and a locally unique address on the private network.

Well, yes, unfortunately, that is the case nowadays. That is not how IP was meant to be used, and it's causing massive problems. If it weren't for lack of addresses, your home network should have a globally unique /24 or something (and it did, back in the day).

> If at some point every device has its own global address and is accessible globally, it will be more accurate to assume that something is insecure if it communicates in the plain, but we're not there yet. What the browser is doing now is pretty much assuming an arbitrary level of "better safe than sorry".

Not sure I am getting your point!?

> Say, by sharing keys over a network under your control, certified and authorized by a device you trust for pretty much everything else on that network?

Well, arguably you totally should not trust your router, they tend to be crap security-wise.

But in any case, my point was that at most that should be a pairing mechanism. So, once the trust relationship is established, there should be no need to stay on the local network for further secure communication.

> Agreed, but the current conundrum is that they need to be connected to the internet if you want to use a central certificate authority.

Well, yes?! But the solution is not to hard-code policies that prevent full use of IP.


> Well, yes, for most people it unfortunately is. But imagine you are one of the lucky ones who do have global IPv4 addresses everywhere. And now someone sells you a product that tells you "sorry, nice IPv4 network that you have there, but you have to install NAT and an RFC1918 network to use this IPv4 product". Not very sensible, is it? Same applies for IPv6 and ULA, obviously.

Are you arguing from the assumption that my suggestions and any other form of establishing trust are mutually exclusive? If you're that lucky guy with a global address for your lightbulb, by all means use what's at your disposal to establish a trusted encrypted link between the device and the user in a convenient way. Not sure how that would prevent the vast majority using these on private networks with a different method of authentication and different criteria for trust.

> Erm ... that's completely backwards? It unfortunately does not reflect the current use of IPv4 in particular due to NAT everywhere, but that certainly was not part of "the design of existing internet protocols", that was a hack due to lack of addresses.

So, given the limited address range, it was clearly not designed for every person in the world to have an address, not to mention every appliance in your kitchen. The internet has grown rather organically and has adopted a broader use case. The infrastructure, protocols and best practices used on the internet now reflect this unanticipated use case.

> Well, yes, unfortunately, that is the case nowadays. That is not how IP was meant to be used, and it's causing massive problems. If it weren't for lack of addresses, your home network should have a globally unique /24 or something (and it did, back in the day).

How it was meant to be used is an artefact that stopped mattering some time in the 80s.

> Not sure I am getting your point!?

The point is that flagging plain http websites as "unsafe" makes a lot of assumptions about my network. They're not necessarily unsafe. In one case, it's on my apartment-wide LAN. In another case, it's connected by ethernet directly to the client. Neither of these are particularly exotic topologies.

> But in any case, my point was that at most that should be a pairing mechanism. So, once the trust relationship is established, there should be no need to stay on the local network for further secure communication.

Why not both?

> Well, yes?! But the solution is not to hard-code policies that prevent full use of IP.

Agreed? I'm not sure where you got the idea that I think that any of these things should prevent the full use of IP. Certainly not from anything I've said.


> Are you arguing from the assumption that my suggestions and any other form of establishing trust are mutually exclusive? If you're that lucky guy with a global address for your lightbulb, by all means use what's at your disposal to establish a trusted encrypted link between the device and the user in a convenient way.

Well, if that were a standardized way to establish trust, that necessarily would lead to vendors adopting it at the cost of supporting other kinds of setups?

Also, it is very problematic to overload not globally routable addresses (also often misleadingly called "private addresses") with security semantics. While many home setups do have a sort-of security boundary around RFC1918 subnets, there is absolutely no guarantee that that is the case. So not only would such a mechanism break "sane" (i.e., NAT-free) setups, it also would make otherwise perfectly fine and useful setups risky. Have a VPN link to another company that also uses RFC1918 space or ULA, and suddenly your IoT stuff starts trusting that other company. Or even just if you happen to have departments that aren't supposed to trust each other, and that happen to have a common ULA prefix, and now some devices simply assume trust where none is implied/make it impossible to use otherwise perfectly fine setups because of unjustified trust assumptions. Or simply a guest on your network. Or ... whatever else that can share non-globally routed address space with you without any trust implied.

> So, given the limited address range, it was clearly not designed for every person in the world to have an address, not to mention every appliance in your kitchen. The internet has grown rather organically and has adopted a broader use case.

Well, yes, but that wasn't because it was intended to be used with NAT, or anything else that was not a globally routable address for every device, but because it wasn't expected to gain that many users.

> The infrastructure, protocols and best practices used on the internet now reflect this unanticipated use case.

Which is an argument for what exactly? Especially with regards to IPv6 and ULA?

> How it was meant to be used is an artefact that stopped mattering some time in the 80s.

Why would that have stopped then? Again, in particular with regards to IPv6, which does not have the address scarcity that might have justified use of NAT and non-globally routed address space as a temporary workaround?

> The point is that flagging plain http websites as "unsafe" makes a lot of assumptions about my network.

... just as not doing so does? If anything, it would be arbitrary to just exempt certain prefixes from security policies when there is no normative basis for such an exemption. I happen to have only globally routable IPv6 addresses on my LAN, but my LAN is indeed trusted, both wired and WiFi, using distinct /64s. But I also have a guest WiFi that is in the same IPv6 /48, which is not trusted. And I have a VPN link to a customer of mine that uses RFC1918 address space, which is absolutely not trusted.

So, yes, it is "better safe than not safe". But it's exactly the opposite of arbitrary, in that it does not make any assumptions about your network, it provides security no matter what the details of your network, and using the exact same policy for everything. And it's hardly "better safe than sorry", given that this is all a result of being very sorry about all the crap that resulted from lack of security so far.

> They're not necessarily unsafe. In one case, it's on my apartment-wide LAN. In another case, it's connected by ethernet directly to the client. Neither of these are particularly exotic topologies.

Yeah, and how is your browser supposed to know that?

> Why not both?

What both?!

> Agreed? I'm not sure where you got the idea that I think that any of these things should prevent the full use of IP. Certainly not from anything I've said.

The question is not whether it should, but whether it would. Suppose browsers were to implement a policy of "RFC1918 and ULA are considered safe unencrypted and unauthenticated". What would vendors of devices do? I guess we can agree that they would use that policy for config access, as it simplifies the design of their devices, right? Now, that would cover 99%+ of their current user base. Which probably means they won't bother providing an alternative mechanism. Which means (a) you can't use their devices in other setups and (b) their users are locked into such setups, which makes it impossible for, say, router vendors, to build more useful networking products that use the full potential of IP.


> Well, if that were a standardized way to establish trust, that necessarily would lead to vendors adopting it at the cost of supporting other kinds of setups?

So on one hand you believe that supporting one standard will necessarily come at the cost of supporting another (I don't), and you agree that what I suggested might cover 99%+ of the current user base, yet you favor a solution that depends on global addresses, something which definitely doesn't come close to 99% of potential users?

> Also, it is very problematic to overload not globally routable addresses (also often misleadingly called "private addresses") with security semantics.

Well, if you want to be really anal about it you could call them "addresses which fall into one of the address ranges allocated for private use", but you're just splitting hairs.

> While many home setups do have a sort-of security boundary around RFC1918 subnets, there is absolutely no guarantee that that is the case.

There is no guarantee, but that's different from saying that it's inherently unsafe.

> Which is an argument for what exactly? Especially with regards to IPv6 and ULA?

It's a reflection on how the internet is built. It doesn't matter that in the ideal network, everything might have an address, when pretty much every device is behind some kind of NAT. IPv6? Come back when it's widely adopted.

> Why would that have stopped then? Again, in particular with regards to IPv6, which does not have the address scarcity that might have justified use of NAT and non-globally routed address space as a temporary workaround?

It stopped mattering because of address exhaustion and slow adoption of IPv6. Now, NAT is an integral part of the internet. I'm not trying to justify it or state this as a matter of preference—I'd definitely prefer having a ton of IPv6 addresses over the single IPv4 address I actually have—I'm just laying things out as they are, and how they are for the vast majority of consumers.

> ... just as not doing so does?

No. Flagging a website as "safe" when it can not be established that it is safe is at least as wrong as flagging it as "unsafe" when it can not be established as being unsafe. What I'm suggesting, not flagging it in any particular way at all, would be taking a neutral stance. IMO, the practice of calling HTTPS sites "secure" is itself potentially misleading to consumers. It is only secure in a very specific sense, likely not in the broader sense a layman would consider.

> Yeah, and how is your browser supposed to know that?

The question I stop at is "why is my browser supposed to know that?"

> What both?!

Both a way of verifying and distributing certificates network-wide in a LAN and for those certificates to be usable globally.

> The question is not whether it should, but whether it would. Suppose browsers were to implement a policy of "RFC1918 and ULA are considered safe unencrypted and unauthenticated". What would vendors of devices do?

The premise of my suggestion is that the browsers won't back down from indiscriminately marking plain HTTP sites as insecure, hence "Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks."—so suppose they would support such a method. Would that be better or worse than current practice?

> Which probably means they won't bother providing an alternative mechanism. Which means (a) you can't use their devices in other setups and (b) their users are locked into such setups, which makes it impossible for, say, router vendors, to build more useful networking products that use the full potential of IP.

That's a load of conjecture. I'm not sure how to respond except with a bunch of other conjecture, so I'll refrain.


> So on one hand you believe that supporting one standard will necessarily come at the cost of supporting another (I don't),

So, you think vendors who have covered 99%+ of their userbase with a solution will generally also implement an alternative that is way more complicated for the remaining 1%?

> and you agree that what I suggested might cover 99%+ of the current user base, yet you favor a solution that depends on global addresses, something which definitely doesn't come close to 99% of potential users?

No, I favor a solution that does not depend on the global (non-)routability of an address, i.e., a solution that works for 100% of users.

> Well, if you want to be really anal about it you could call them "addresses which fall into one of the address ranges allocated for private use", but you're just splitting hairs.

But that is still equally misleading. There is nothing "private" about those addresses, and in particular nothing "more private" than globally routable addresses. Anyone can use those addresses, all the RFC essentially says is that you won't collide with addresses allocated by RIRs, but they might collide with other administrative domains that choose to use the same prefix. That doesn't mean that you cannot use them on a WAN, or between companies, or really anywhere where you can agree with all participating networks on the allocations. All it means is you have to expect collisions if you connect previously separate administrative domains, and that you cannot expect your ISP to announce them for you on the public internet, that's it.

Also, just as you can use non-globally routable addresses between networks, you can use globally routable addresses for private networks, and you should if you can (which in practice means when you build an IPv6 network): Even if you build a network that is not intended to be connected to the internet at all, if you do have a globally routable IPv6 prefix allocated for your organization, you should number that network with addresses from that prefix.

> There is no guarantee, but that's different from saying that it's inherently unsafe.

No, it's actually not. "unsafe" does not mean "you will hurt yourself", it means "it has not been established that you won't hurt yourself".

> It's a reflection on how the internet is built. It doesn't matter that in the ideal network, everything might have an address, when pretty much every device is behind some kind of NAT. IPv6? Come back when it's widely adopted.

So, for the question of how to achieve (as close as possible to) an ideal network, it doesn't matter what the ideal network would look like?! Or do you think we should just wait until device vendors have screwed up IPv6 before we try to enforce some sensible policy?

> It stopped mattering because of address exhaustion and slow adoption of IPv6. Now, NAT is an integral part of the internet. I'm not trying to justify it or state this as a matter of preference—I'd definitely prefer having a ton of IPv6 addresses over the single IPv4 address I actually have—I'm just laying things out as they are, and how they are for the vast majority of consumers.

So ... because no one uses IPv6, you suggested to use ULA as an indicator for security?! I am not sure I follow ...

> No. Flagging a website as "safe" when it can not be established that it is safe is at least as wrong as flagging it as "unsafe" when it can not be established as being unsafe.

No, you don't establish "unsafety", that is the default assumption. The only way to establish that something is unsafe is to show after the fact that someone got hurt, which is just completely useless as a security mechanism.

> What I'm suggesting, not flagging it in any particular way at all, would be taking a neutral stance.

Wouldn't a neutral stance be to instead display a security status of "security unknown" (which is obviously equivalent to insecure)? "Not flagging it in any particular way" simply means that the user makes an assumption one way or another, not that the user thinks "it is unknown whether this is secure".

> IMO, the practice of calling HTTPS sites "secure" is itself potentially misleading to consumers. It is only secure in a very specific sense, likely not in the broader sense a layman would consider.

Well, yeah, but that is not really relevant to the question about warning about an insecure situation. Just because there are insecure situations that you cannot warn about, does not mean that therefore warning about other insecure situations isn't useful. Really, it makes much more sense to warn about insecure situations (which means, situations not known to be secure against certain types of attacks deemed relevant in the respective context) than to display anything that says "this is secure", as security is always relative to specific attacks, not a global property.

> The question I stop at is "why is my browser supposed to know that?"

Because your browser should help you protect your personal data that you process using your browser? I mean, I don't think it should know that, it should just enforce the same encryption requirements everywhere, but you seem to disagree with that because there are networks where your personal data is secure without encryption--in which case, your browser would either have to give up the goal of protecting your personal data, or it would have to know about which parts of your network are secure without encryption.

> Both a way of verifying and distributing certificates network-wide in a LAN and for those certificates to be usable globally.

Well, that would be a pairing mechanism then?! (Which still should not overload global routability with security semantics.)

> The premise of my suggestion is that the browsers won't back down from indiscriminately marking plain HTTP sites as insecure, hence "Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks."—so suppose they would support such a method. Would that be better or worse than current practice?

It depends on the mechanism? Yes, centralized certificate management for your own devices would be useful, but it should not in any way overload the routability of addresses. If you want to use the LAN as a semi-trusted key exchange mechanism, that probably should happen at the ethernet layer. Or maybe with a one-hop TTL on the IP layer. You have to detect whether you are on the same LAN, not whether you are using an RFC1918/ULA prefix, because not all LANs use RFC1918/ULA, and not all RFC1918/ULA prefixes are limited to a LAN, let alone a trusted LAN.
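The one-hop-TTL idea is easy to sketch: set the TTL (hop limit) to 1 on the probe socket, so the first router drops the packet instead of forwarding it. UDP and IPv4 here are just one way to do it:

```python
import socket

def link_local_probe_socket() -> socket.socket:
    """UDP socket whose packets cannot cross a router: TTL of 1.

    Anything that answers such a probe is, by construction, on the same
    link -- regardless of which address range the LAN happens to use.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 1)
    return s
```

(For IPv6 the equivalent would be the hop-limit socket option, or simply using link-local `fe80::` addresses.)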

> That's a load of conjecture. I'm not sure how to respond except with a bunch of other conjecture, so I'll refrain.

It is mostly an observation of what always happens in such situations.


> So, you think vendors who have covered 99%+ of their userbase with a solution will generally also implement an alternative that is way more complicated for the remaining 1%?

Not necessarily, but the designer of a standard could take both use cases into account.

> No, I favor a solution that does not depend on the global (non-)routability of an address, i.e., a solution that works for 100% of users.

What, more precisely? Of course taking into consideration that 100% of users might not even have an internet connection. One thing I'd like is for devices to have their key signature printed on a sticker. Then I can verify the signature, log in and generate a new key and password, generate a certificate that I can install myself or sign up with a service like Let's Encrypt.

> But that is still equally misleading. There is nothing "private" about those addresses, and in particular nothing "more private" than globally routable addresses.

No, it's not misleading to say that they are allocated for private use. Your ISP drops connections to these addresses because they respect RFC1918 and don't route to the private address ranges. Even if they didn't, these address ranges are still allocated for private use, and your ISP is Wrong. They're only routable in the sense that IP would technically allow it, but the internet is not simply IP but a collection of standards and best practices.

And sure, "private" has a very broad meaning. A browser could very well flag a certificate that was distributed from a private address as such and let the user decide whether they trust that source.

> Also, just as you can use non-globally routable addresses between networks, you can use globally routable addresses for private networks, and you should if you can (which in practice means when you build an IPv6 network): Even if you build a network that is not intended to be connected to the internet at all, if you do have a globally routable IPv6 prefix allocated for your organization, you should number that network with addresses from that prefix.

Sure. But again, "in practice means when you build an IPv6 network", i.e. not a typical consumer, for how long more? In an enterprise there are already many different ways to solve the problem of authentication, certificate signing and encryption. Consider that the average internet user doesn't even have a registered domain name or a static IP allocation.

> No, it's actually not. "unsafe" does not mean "you will hurt yourself", it means "it has not been established that you won't hurt yourself".

So everything on the web should be flagged by the browser as unsafe? I don't know how the browser can ever safely establish that I won't hurt myself. "Unsafe" and "safe" are two sides of a subjective, blurry line, at best a reasonable assumption and at worst an arbitrary handwave.

IMO, the browser is taking what should be exact descriptions of the nature of the connection and watering them down to vague, misleadingly simplified concepts. The browser could tell me that my connection to a site is encrypted, that it is encrypted with an uncertified key, or that it's not encrypted, and when you click them they could show a help text describing what that means exactly, the possible consequences of using the service and details on the key and certificate if applicable. When I click the "Secure" badge in Chrome, I don't even get to see which CA signed it, or a public key.

"Secure" and "Insecure" mean just that, rather impossible things for a browser to verify, and something that a user unfamiliar with the underlying technology may interpret as an authoritative rating of the provider of the service as a whole, when in reality there are many more aspects to take into account in deciding whether a site is secure or insecure.

> So, for the question of how to achieve (as close as possible to) an ideal network, it doesn't matter what the ideal network would look like?!

Well, it involves IPv6, we can start there. We're talking about a new security policy that a major browser seems to want to implement shortly, definitely much more shortly than full IPv6 rollout.

> Or do you think we should just wait until device vendors have screwed up IPv6 before we try to enforce some sensible policy?

This is a very loaded question, given that we still disagree on whether a solution that works well both for globally routable and NATed devices is possible.

> So ... because noone uses IPv6, you suggested to use ULA as an indicator for security?! I am not sure I follow ...

I never said that no one uses IPv6, so I agree that you don't follow.

> No, you don't establish "unsafety", that is the default assumption. The only way to establish that something is unsafe is to show after the fact that someone got hurt, which is just completely useless as a security mechanism.

Let's say that I see your ladder. It's broken, so I tell you that it's unsafe. Unreasonable assumption? You take it down and bring another ladder. I don't see it, but I tell you it's unsafe. You see it and can clearly say that it isn't. Is it unsafe? Is it reasonable for me to tell you that it is unsafe? "Unsafety" isn't the default assumption that a browser makes (and with regards to plain HTTP in particular still isn't in the version of Chrome I'm using).

> Wouldn't a neutral stance be to instead display a security status of "security unknown" (which is obviously equivalent to insecure)? "Not flagging it in any particular way" simply means that the user makes an assumption one way or another, not that the user thinks "it is unknown whether this is secure".

Maybe that's actually the better option. But no, "security unknown" in that sense is not equivalent to insecure. At one extreme, I could create a network with an Ethernet cable between two off-grid devices that I control in a Faraday cage. At the other extreme, someone could be tapping a cable far away from my computer and figuring out what connections I make regardless of encrypted data. Somewhere in between, closer to the likely scenario, the party I establish a secure connection to could be sharing our communication with other parties.

> Because your browser should help you protect your personal data that you process using your browser?

We've already established that the browser can't know it, so is the browser a fundamentally flawed concept?

> Well, that would be a pairing mechanism then?! (Which still should not overload global routability with security semantics.)

Yes? The only part you seem to disagree with is the possibility of having a router in a local network (and yes, local networks exist) facilitate and streamline the exchange.

> If you want to use the LAN as a semi-trusted key exchange mechanism, that probably should happen at the ethernet layer.

So again, a perfect application for a router? The router is in a perfect position to verify that I am on its network.

> It is mostly an observation of what always happens in such situations.

Yes, as evident from the absolute lack of overlapping authentication and encryption standards...


[part 2]

> On the other end of the extreme, someone could be tapping a cable far away from my computer and figure out what connections I make regardless of encrypted data. Somewhere in between the two extremes, close to the likely, the party that I establish a secure connection to could be sharing our communication with other parties.

Which is in no way in conflict with saying "this is insecure". That is in conflict with saying "this is secure", because that implies "... against this specific set of threats", which is not understood by the average user. So, yes, I agree, browsers should generally avoid telling users that something "is secure", but it is perfectly fine to say "this is insecure".

> We've already established that the browser can't know it, so is the browser a fundamentally flawed concept?

No, it's just a subjective entity as all entities in the world are, and so it has to determine risks based on incomplete information, as all entities in the world have to. Also, it's not strictly true that it cannot know that, but it cannot know that without you telling it. It might well be possible to have an option where you could tell your browser "this set of addresses is safe to talk to unencrypted and unauthenticated".

> Yes? The only part you seem to disagree with is the possibility of having a router in a local network (and yes, local networks exist) facilitate and streamline the exchange.

No, I disagree primarily with overloading the semantics of "private addresses", and with mechanisms that only allow communication in a local network. "private addresses" is neither reliably indicative of nor a required property of "within the same local network".

But also, a mechanism that does not depend on being on the same local network for pairing would be preferable.

> So again, a perfect application for a router? The router is in a perfect position to verify that I am on its network.

OK ... how?

> Yes, as evident from the absolute lack of overlapping authentication and encryption standards...

... implemented in the same product, where one of them would always have been enough to meet the requirements of 99% of potential users, and the others would have taken considerably more effort to implement?


[part 1]

> Not necessarily, but the designer of a standard could take both use cases into account.

Which doesn't help if it's a separate mechanism. If 1% of the work gets you to the goal in 99% of the cases, that's what vendors will do. Whether that fulfills the requirements of some standard or not does not matter.

> What, more precisely?

I am not making any suggestions as to the solution.

> Of course taking into consideration that 100% of users might not even have an internet connection.

So, if the device is one that does not inherently need global internet connectivity to be useful, then, yeah, things should work without global internet connectivity.

> One thing I'd like is for devices to have their key signature printed on a sticker. Then I can verify the signature, log in, generate a new key and password, and generate a certificate that I can install myself or sign up with a service like Let's Encrypt.

Well, a fixed key is a problem, but other than that, yeah, an out-of-band path for key exchange sounds good.

> No, it's not misleading to say that they are allocated for private use. Your ISP drops connections to these addresses because they respect RFC1918 and don't route to the private address ranges. Even if they didn't, these address ranges are still allocated for private use, and your ISP is Wrong. They're only routable in the sense that IP would technically allow it, but the internet is not simply IP but a collection of standards and best practices.

OK, let's get this straight: What does "private" mean? It's a word with a whole lot of only partially overlapping definitions. For the purposes of this discussion, it is important to distinguish the aspect of "independent from official entities" from the aspect of "not revealed to the public", i.e. "providing privacy". RFC1918 addresses are only private in the former sense: You can allocate and use them without coordinating with RIRs or your ISP. However, they have absolutely nothing to do with the latter sense of providing privacy. That is why it is misleading to call them "private addresses": People understand that to mean that they are defined to provide some sort of secrecy or privacy or protection from the public or something along those lines, which they don't. It's not wrong, because there is a different meaning of "private" that fits exactly what RFC1918 are defined to be used for, but it is misleading because it leads people to assume that it encompasses more than that.

Also, whether ISPs do it or not doesn't really matter. What matters is that RFC1918 address space is in fact routed between networks that are not intended to trust each other. And that is perfectly within the uses intended in RFC1918. The RFC isn't concerned with home networks, really, but with "enterprises", and it defines "an enterprise" to be the scope of an RFC1918 allocation. Nowhere does it say that that implies any sort of trust or security relationship between machines within such an allocation. And also, in practice, it is common to link RFC1918 networks of different "enterprises" together, as a sort-of "meta-enterprise", where a trust relationship is even less likely.

The only thing that is "private" about RFC1918 addresses is that you can allocate them without coordination with IANA/RIRs/ISPs, and that you cannot expect an ISP to route them for you on the global internet. There is no privacy specified in the RFC.
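To underline how mechanical the allocation-sense of "private" is: checking membership in the RFC1918 ranges is a one-liner, and says nothing whatsoever about trust or privacy. A sketch using Python's standard library (the function name is my own):

```python
import ipaddress

# The three RFC1918 blocks: "private" only in the allocation sense.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls inside one of the RFC1918 ranges."""
    ip = ipaddress.ip_address(addr)
    return ip.version == 4 and any(ip in net for net in RFC1918)

print(is_rfc1918("192.168.1.1"))  # True
print(is_rfc1918("8.8.8.8"))      # False
```

A positive result only tells you how the address was allocated, not who can reach it or whether anything on the path is trustworthy.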

> And sure, "private" has a very broad meaning. A browser could very well flag a certificate that was distributed from a private address as such and let the user decide whether they trust that source.

How does it matter for this whether the address is "private" (i.e., allocated without coordination with IANA/RIRs/ISPs)?

> Sure. But again, "in practice means when you build an IPv6 network", i.e. not a typical consumer, for how long more?

Erm, most IPv6 use is by consumers, with ~20% adoption based on Google users?! Not sure whether that's quite "typical" yet, but certainly not unusual either. Most user devices support IPv6, and increasingly, ISPs are rolling out IPv6 to their customers with new subscriptions, which tend to come with new routers, which means that at that point their network is using IPv6 for all services that support it.

> Well, it involves IPv6, we can start there. We're talking about a new security policy that a major browser seems to want to implement shortly, definitely much more shortly than full IPv6 rollout.

Yes, and that is the only way to do it. If you wait until after full IPv6 rollout, you will have to work around assumptions that device vendors by then will have made based on the browser's behaviour, which means it only gets harder to implement. If you want to have any hope of success, you have to act now, when your actions can shape what device vendors will do.

> This is a very loaded question, given that we still disagree on whether a solution that works well both for globally routable and NATed devices is possible.

You have so far failed to even show a solution that works better for NATed devices than non-NATed ones.

> I never said that no one uses IPv6, so I agree that you don't follow.

Replace "noone" with "essentially noone" if you want to get my point.

> Let's say that I see your ladder. It's broken, so I tell you that it's unsafe. Unreasonable assumption? You take it down and bring another ladder. I don't see it, but I tell you it's unsafe. You see it and can clearly say that it isn't. Is it unsafe? Is it reasonable for me to tell you that it is unsafe?

Unsafety is not a(n objective) property of the ladder, it's a (subjective) state of your knowledge. The ladder will only either fail or not (that is an objective fact about the ladder). Even a ladder with partially broken steps might still hold up, and a ladder that is all new and shiny can still have some manufacturing defect that causes it to fail on first use. The former is good to use, the latter is not. But that is a useless concept if your goal is to minimize harm because you only know that after the fact.

So, what we use instead is a concept of "unsafety". Statements about unsafety are an expression of our knowledge about something. So, the ladder with broken steps is considered unsafe, because based on what we generally know about the statistical properties of ladders with broken steps, they are known to have an increased failure rate. But then, you might apply load tests to that ladder and establish that it does carry the loads required reliably if you avoid the obviously broken steps, in which case it can be considered not unsafe. Mind you, nothing has changed about the ladder, only our knowledge about it has changed.

Similarly for the new and shiny ladder, those are generally considered not unsafe because of what we statistically know about new and shiny ladders, and maybe about how ladders are tested after manufacturing. But then, you could test that as well, and maybe find that it breaks apart under light load, at which point you would change to considering it unsafe. Again, nothing has changed about the ladder, it's all about the knowledge you have about it. And the tests I suggested are not the end of that process of discovering the unsafety of a thing. You might still do other tests yet and come to yet another conclusion (like, I dunno, the testing conditions were unnecessarily harsh, and under more realistic usage conditions the opposite conclusion is appropriate).

Now, not knowing anything about the ladder is just another state of knowledge. And if your goal is to minimize harm, then the default is not to assume safety. Again, that is in no way a statement about the ladder. That does not mean the ladder won't hold up. That only means that the ladder is not known (to you!) to hold up. It is always and exclusively a statement about your knowledge about the ladder.

This is not about answering the question "will the ladder fail?", this is about answering the question "is it known to the best of our understanding that the ladder will not fail under some generally expected load conditions?". If the answer to that is "no", then that is reason to be cautious, and that is why the browser warns you/is going to warn you.

You can of course argue that your goal is not to minimize harm, in which case the default assumption does not apply ... but then the whole discussion is pointless, as you are then essentially just saying "if you don't care about minimizing harm, there is no problem with trusting unencrypted connections (of some sort or another)". True, but not my goal, and obviously also not the goal of those people implementing the change.

> Maybe that's actually the better option. But no, "security unknown" in that sense is not equivalent to insecure. As an extreme, I could create a network with an Ethernet cable between two off-grid devices that I control in a faraday cage.

Yes, you could. But the browser doesn't know that. Therefore, its subjective determination is "this is not known to me to protect your private data", and that is what it is telling you. If you know better, that's fine, but the browser doesn't, so it warns you. If you don't know better, you better should listen to what your browser is telling you if your goal is to minimize harm. If you do know better, why do you care that your browser warns you based on its incomplete knowledge about the world?


I guess you missed the whole IPv6 scoped addressing architecture memo.


Like what specifically?


A) Scopes aren't a hack they're part of the protocol.

B) Scopes are exactly: "the global internet and the home".

Considering those things, why should it be absurd if I want to secure my home scope at the application layer too? IPv6 is literally designed to allow for this. Browsers are the ones being stubborn.

If IPv6 is just a terrible collection of hacks then we need a new version, and fast, before everyone gets stuck on v6 for the next 50 years....


What scopes specifically would you suggest are "the global internet and the home" in the sense that is applicable here?


In IPv6 one interface has many addresses. Each interface can have:

1. global addresses,

2. rotating temporary global addresses,

3. unique local addresses (ULAs, one for each site), and

4. a link-local address (required).

The first 3 now technically reside in the global scope. ULAs used to be called site-local and had their own scope, but they were restructured to basically be fancy UUIDs and their scope abolished and merged with the global scope. Link-local is still its own scope.

Although both are globally scoped, there's a difference between a global IPv6 address and a ULA. A global address is globally routable, and prefixes are organized regionally and delegated to allow hierarchical routing, while ULAs have arbitrary prefixes (not suitable for global routing) and are not supposed to be forwarded to interfaces outside their subnet.

So to answer your question, for local communications in your home that you didn't want leaving your network, you would use ULAs. You could use link-local addresses if your home was all on the same l2 link, but the generally preferred solution is to use ULAs so you don't leak protocol details upwards and so you can leverage l3 tunnels.

Local DNS is allowed to respond with ULAs, just not servers participating in the global authoritative DNS. If you want DNS on your home site you simply run a local DNS server that resolves your local names and is configured to forward unknown names to the global DNS.

IPv6 kills NAT, so "scoped" addresses step in to fill the void and are overall a much better solution.
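For illustration, the classification described above can be sketched with Python's stdlib `ipaddress` module (the function name and labels are mine); note that the scope by itself implies nothing about security:

```python
import ipaddress

def ipv6_scope(addr: str) -> str:
    """Rough scope classification: link-local (fe80::/10, its own scope),
    ULA (fc00::/7, RFC 4193, technically global scope), or global."""
    ip = ipaddress.IPv6Address(addr)
    if ip.is_link_local:
        return "link-local"
    if ip in ipaddress.ip_network("fc00::/7"):
        return "unique-local"
    return "global"

print(ipv6_scope("fe80::1"))            # link-local
print(ipv6_scope("fd12:3456:789a::1"))  # unique-local
print(ipv6_scope("2001:db8::1"))        # global (documentation prefix)
```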


> 2. rotating temporary global addresses,

Well, not disagreeing that you can have those, but why are you mentioning that specifically for "global addresses", but not for ULA?

> while ULAs have arbitrary prefixes (not suitable for global routing)

How are ULA prefixes different from PI allocations as far as suitability for global routing is concerned?

> and are not supposed to be forwarded to interfaces outside their subnet.

Would you mind pointing to the normative source for this forwarding policy?

> Local DNS is allowed to respond with ULAs, just not servers participating in the global authoritative DNS.

Would you mind pointing to the normative source for this?

> IPv6 kills NAT, so "scoped" addresses step in to fill the void and are overall a much better solution.

But you said that ULA are global scope (which they are, I agree), so what does this have to do with scoped addresses?

Also, which void exactly does killing NAT leave that needs filling (and that then supposedly is filled by ULA)?


It's all in the RFCs. Start here: https://tools.ietf.org/html/rfc4007


ULA isn't even mentioned in that RFC? (Presumably because ULA was standardized half a year later?)


Yes ULA is a different RFC.


OK, so? Where exactly do you think are those things specified? Or do you expect me to re-read all IPv6 RFCs only to then repeat those questions because I still don't know where they are specified?


It would require you to run outside services to support it, but you could most certainly rig something together that lets each "installation" claim randomsubdomain.domainyoucontrol.com, phone home with the local network IP of the "installation", phone home the Lets Encrypt DNS-01 details, and then get a valid certificate for a domain that points to the local instance.
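To make the "Lets Encrypt DNS-01 details" concrete: per RFC 8555 §8.4, the value of the `_acme-challenge` TXT record is derived from the challenge token and the ACME account key's JWK thumbprint. A minimal sketch (assuming the installation phones home the token and the outside service holds the account thumbprint):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """TXT record value for _acme-challenge.<name>:
    base64url(SHA-256(token "." thumbprint)), unpadded (RFC 8555 §8.4)."""
    key_authorization = f"{token}.{account_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_authorization).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

The outside service would publish the returned value in a TXT record at `_acme-challenge.randomsubdomain.domainyoucontrol.com`, after which the certificate can be issued for a name whose A record points at the local instance.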


Interestingly, the German lawyers' association built software that does nearly this, with one small design decision that made it totally flawed:

To communicate from an HTTPS-secured web mailer to a local IP and port where a smart-card reader driver would respond, they registered bealocalhost.de, which resolves to 127.0.0.1. To prevent mixed-content errors, they then shipped the private key for the HTTPS certificate for bealocalhost.de compiled into the card reader driver.

Obviously, once someone noticed, the cert got instantly revoked...


That is like, way way WAY less secure than just using an unencrypted connection, as now my requests are being leaked over DNS to some third party who has the ability to trivially hijack my connections to the Internet at large.


Who has? They still need a valid cert for randomsubdomain.domainyoucontrol.com, how will they trivially get that?


The owner of domainyoucontrol can simply rebind randomsubdomain and generate a new cert.

There is no way to build a device with HTTPS that allows the user to distrust the maker of the device.

Edit: before I get responses "you can never distrust the maker" — on HTTP I can audit the device, install it, and keep it in the local network forever, trusting it for decades. On HTTPS it needs to be online every 3 months, and the owner could very well intercept it at that time.


Ah, the maker, fair enough. Still, there is, although not for free: the device can let you configure a subdomain of your own domain, and then use LE (with the DNS challenge) to get a cert. That still requires trusting your DNS provider, of course.


It requires an internet connection every 3 months, it requires trusting the DNS provider, and it requires having a domain.

It'd be much better if DANE were already supported, as the DHCP server could send you to a DNS resolver returning key info for local IPs, which in turn could use the corresponding certs.


How would that protocol work? I'm not seeing how the DNS resolver would securely get the key info from the device. What if an attacker cloned the device's MAC and said to the resolver "hi, my new key is X"?


You're in a LAN. You cannot trust any device there anyway, and stuff like DHCP is unencrypted, and broadcast anyway.

Trying to add security in such an environment is... useless.


Why? The whole point of authentication and encryption is to allow secure communications over insecure networks. So what if DHCP is unencrypted? If you have E2E auth/encryption, all a malicious DHCP server can do is prevent your communications, it can't spy or MITM them.


If you have E2E auth/encryption, all a malicious DHCP server can do is, indirectly, identify every site you connect to, and potentially block you from accessing some sites. Or all. It can intercept all unencrypted stuff. This includes NTP, which in turn means your computer's clock will be set wrong, which in turn means it can give you long-distrusted certificates whose private keys have leaked.

TL;DR: If you can MitM a system at the root, you can already break basically anything relevant.


Yes, they can block me, but that's just annoying, not insecure. Intercepting unencrypted stuff is the reason for this whole thread. And if they change my computer's date, they might get one malicious cert working - assuming they can change it enough, since NTP has mechanisms for avoiding large leaps - but they'll break every single HTTPS connection besides that one, which I'm sure the user would notice almost immediately.

Meanwhile, routers are generally known to be insecure, and who knows how many viruses my guests bring onto my Wifi network when they ask to connect.


It could be configurable/disableable.


If you control the local DNS server, you can install a certificate for localserver.example.com, then have the server return a local IP for localserver.example.com


Register a domain called X.com

Set up a DNS server that, for a name like 10-0-0-1.$rand.X.com, returns 10.0.0.1

Generate DNS challenges for your domain and issue Let's Encrypt certificates for said domain.

Voilà: private, local, publicly trusted, SSL-encrypted.

Have http://10.0.0.1/ redirect to https://10-0-0-1.$rand.x.com/

The biggest issue with this is getting Let's Encrypt to issue you enough certificates.

PS: this idea is somewhat borrowed from Plex.
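A rough sketch of the decoding and redirect side of this scheme (hypothetical helper names; public services like sslip.io and nip.io implement the same hostname-to-IP trick):

```python
import ipaddress
import secrets

def ip_from_label(host: str) -> str:
    """Recover '10.0.0.1' from '10-0-0-1.abc123.x.com'.
    Raises ValueError if the leftmost label isn't a dashed IPv4 address."""
    candidate = host.split(".", 1)[0].replace("-", ".")
    ipaddress.IPv4Address(candidate)  # validate; raises on garbage
    return candidate

def https_redirect_target(ip: str, base_domain: str = "x.com") -> str:
    """Build the HTTPS URL that http://<ip>/ should redirect to."""
    rand = secrets.token_hex(4)  # the $rand part
    return f"https://{ip.replace('.', '-')}.{rand}.{base_domain}/"

print(ip_from_label("10-0-0-1.abc123.x.com"))  # 10.0.0.1
```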


It’s not impossible to obtain an SSL certificate for a local connection. You can add an entry for a fictional domain that maps to localhost in your hosts file, then self-sign a certificate for it and install it.


but for users of NAS-type devices, we've gone from an easy HTTP web page to change the config to needing to install a custom certificate and change the local hosts file...


No need to change the local hosts file, just get a self-signed cert for the IP address.

But, yes, there needs to be an easier way to say "yes, please trust this certificate" (like how ssh prompts you the first time).
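What an ssh-style "trust on first use" prompt could look like under the hood, as a hedged sketch (Python stdlib, hypothetical helper names): pin the certificate's fingerprint the first time, and only raise an alarm if it later changes:

```python
import hashlib
import json
import os

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def check_tofu(store_path: str, host: str, cert_der: bytes) -> str:
    """ssh-style trust-on-first-use: remember the cert seen on first
    contact, and flag any later change. Returns 'new', 'ok' or 'CHANGED'."""
    store = {}
    if os.path.exists(store_path):
        with open(store_path) as f:
            store = json.load(f)
    fp = fingerprint(cert_der)
    if host not in store:
        store[host] = fp          # first contact: prompt the user, then pin
        with open(store_path, "w") as f:
            json.dump(store, f)
        return "new"
    return "ok" if store[host] == fp else "CHANGED"
```

This is essentially ssh's known_hosts model applied to self-signed certs; the hard part is the UI for the first prompt, not the pinning itself.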


I sort of have the reverse problem. I would like to use a websocket to connect to an insecure host on the local network from a secure context. I realise that this is incredibly niche and would probably need independent confirmation through the browser to prevent abuse. But it's needed to connect to a local weechat instance from Glowing Bear, which is essentially a web UI for WeeChat, an IRC client: https://github.com/glowing-bear/glowing-bear. Right now we have an https and a non-https-version of the website, which is arguably even worse.


If you have a web-service counterpart you should consider looking into WebRTC... the SDP exchange can happen through your site and then the peers will connect directly


That's a valid point, and something like this might be a solution (i.e. serving the client JS from a public site with valid TLS certificate and connecting to the local network from there) -- however, I don't want my application to be dependent on an internet connection. This is something you should be able to run in the literal lonely cabin in the woods.


You could use service workers to aggressively cache the client for offline use. And maybe have an insecure version served from the application as a fallback.


I think manually trusting a self-signed certificate is the simplest way to go in that case.


Your problem is isolated to local development servers, which can easily be exempted from the non-HTTPS warning. The potential privacy/security gains totally outweigh the inconvenience of seeing "Not Secure" in the URL bar of your browser on an app you are developing.

This isn't as big of a problem as you'd like to believe. IMHO.


Sorry for not being clear enough in my initial post. I'm not talking about a development server. I'm talking about the end product being a server. For example in the form of a user-friendly executable that is natively running on your Windows/Linux/macOS Desktop, a Mobile device, or (alternatively) a single-board computer such as a Raspberry Pi. The end-users using this are not developers. They are just normal users running a simple piece of software that provides them with a web-frontend for a service in their home-network. No internet connection required whatsoever.


So, how do I build an IoT device that never sends a single packet outside of my LAN, which my grandma could have set up and run without ever seeing a security warning, and which does not show "Not Secure"?

In general, how can I get all the functionality e.g. a Nest device may provide, while staying purely within of my LAN?

(Disclaimer: For my own IoT projects, of course I use a special domain with DNS delegation and Let's Encrypt certificates, and HSTS preloaded)


Certificates in which the subject is one or more IP addresses rather than DNS names _are_ a thing, but not that many get issued by public CAs and almost certainly your laundry list of objections about how you don't want to require any setup or Internet connection will ensure you wouldn't be able to qualify.


Why not get a wildcard certificate for your domain and then put the local services on internal-only subdomains?


Can't wait to walk Grandma through this on the phone when she buys a Firefox-enabled lightbulb.


The only way I see to do it the "right way" for the masses is to have the lightbulb phone-home to the manufacturer. It's a little silly, but an IoT lightbulb is silly in the first place.


Ah, the fridge-as-a-service model. Instead of buying something that just works for decades, you now get to pay service fees to keep some remote server online. Oh wait, the company just went under and nobody can host trusted replacement servers; guess I need a new fridge.


Right, any software that's hooked up to the internet needs to be updated over time to stay secure, therefore needing to pay for a service or having it go insecure in 2 years.


Grandma should not be installing her own network; no normal person should.

Do you also expect a regular end-user to install their own electrical outlets?


Accessing a network-connected TV or other home gadget is also "using a local server", do you really suggest people should not do this without a networking professional? That's just not going to happen.


Of course that's going to happen. It's just too dangerous to let someone without the right qualifications do it. I expect it to become a legal requirement to have a licensed technician install networks just like it is with electricity or gas.

The internet is just too important to leave to amateurs, look at how much damage badly configured home networks and computers are causing already. This stuff needs to be secured properly.

You also didn't need a driver's license when cars first became available, but now that there are shitloads of cars, we have to make sure drivers are capable before letting them drive. Same goes for network-connected equipment.


Also, being part of a botnet should directly impact your internet bill. I don't really see another option. It's a bit silly that nobody knows when their devices are saturating their bandwidth 24/7 because they are compromised.

That way people are then motivated to hire a professional. Also, people making devices will be motivated to not use a default "admin" password because customers will start saying "uh, this smart toaster cost me hundreds of dollars when I plugged it in."


Most home broadband users don't have domains and would be unlikely to want to pay for one, just to remote login to their router.


Let's Encrypt supports wildcard certificates (since a couple of days ago), you could automate it.


Automate what exactly? The point is most home users access their router at 192.168.0.1; they don't have a domain name, nor are they likely to want to buy one. This is no longer a niche thing that a few people want to do; it's hundreds of millions (if not a billion) of households needing to do this to access their own router so it doesn't say "not secure".


Every internet-connected device should have a FQDN, that's just common sense.


Should? Yes. I can get on board with this.

A ridiculous proposition given the current state of technology? Also yes.

If it happens, implemented in turnkey devices (such as SOHO routers) in a way that enhances vendor control rather than empowering home users with the option to use their own domain names and certs? Likely, that's the trend these days and doesn't show signs of receding.

The last line is kinda a tangent, but if that came to be the case, I would no longer be on board with the idea every device should have a FQDN.


Source? Their blog post had mentioned 27th Feb as the date for wildcard support in production.


You are deluding your users if you convey the idea that home networks are separated from the internet, or that traffic on a home network is safe and doesn't need TLS. Can't you just put up a domain and give your users subdomains on it?


Already thought about this. But a) the application does not require any internet connection, and b) while it is possible to just get a global domain name and redirect it locally to the local server, this would require my server to hijack all DNS requests in the local network, which I don't want to do. And I don't want users having to set up DNS redirects themselves.

Edit: And don't get me wrong, I'm totally for TLS on the local network; but there should be an easy way for users to permanently mark self-signed certificates from a local address as secure.


There would be no hijacking involved, just give each of your users a normal unique subdomain that you serve from the DNS. As in, user-1.yourapp.net, user-2.yourapp.net etc.

If someone wants to run it in a network with no access to the DNS, you can just tell them to put that in their hosts file (or whatever local DNS setup they are using).


> there should be an easy way for users to permanently mark self-signed certificates from a local address as secure.

I'm not sure I agree that that's where the UX needs improvement. Microsoft already has a pretty "easy" solution for pushing locally trusted CA certs, but only in an "enterprise" environment.

Most/all Linux distros will allow pushing to /etc/ssl/certs/ca-certificates (via eg /usr/local/share/ca-certificates and update-ca-certificates).

But that doesn't help as long as browsers work hard to be "special", and manage their own trust.

Being able to mark some nets as trusted/local might help - both with ::1 and with VPNs.


Do you have the IP of your server on your local network? Is it fixed (always the same)?

If so, I would try something like this:

- get a domain if you don’t have one

- configure a subdomain (like myhomeserver.mydomain.tld) to that IP on the DNS (A record)

- get a Let’s encrypt certificate using DNS validation

- install the certificate on the local server

I haven't tested this with certificates, but we used to do this back in the day to avoid configuring local DNS on some small company networks.
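Sketching the steps above with certbot (the domain and address are placeholders, and I haven't verified this exact setup end-to-end either):

```shell
# Public DNS A record, pointing at the *private* LAN address:
#   myhomeserver.mydomain.tld.   A   192.168.1.50

# dns-01 challenge: certbot prints a TXT record to publish under
# _acme-challenge.myhomeserver.mydomain.tld, then issues the cert
sudo certbot certonly --manual --preferred-challenges dns \
     -d myhomeserver.mydomain.tld
```

The validation only talks to your public DNS, so the server itself never needs to be reachable from the internet. Renewal every 90 days does require publishing a fresh TXT record each time unless you automate it with one of certbot's DNS plugins.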


So, are you suggesting buying domains even for isolated networks? And then, for this to work, you need to connect to the internet just for DNS lookup?


In order to be assured of something's identity, it needs an actual identity to be assured of. For things on the network, this will usually be a DNS name, so we should give them a DNS name.

You don't need to buy "domains", but certainly for a commercial project that makes loads of things which need names it would make sense to own a sub-domain to put all the names in.

You also don't need to "connect to the internet just for DNS lookup" unless you really want to. The point of using DNS names isn't that you can look them up in DNS; it's that they form a unique hierarchy with a central authority.

There _are_ alternatives to DNS names but none of them have a trustworthy and working PKI today so you can't use them to secure anything you build. Maybe building a trustworthy PKI is hard?

If you insist upon using Let's Encrypt (which is run as a charity and so charges $0 for certificates), perhaps because it's a hobby project, then yes, somebody would need to control DNS records in order to periodically prove control over each name and get issued a certificate, because that's how ACME (the protocol Let's Encrypt uses) decides whether to issue.

Many other public CAs are for-profit companies, and several already have _active_ commercial deals in which they issue certificates for devices in bulk to the name owner. If you're EXA Metal Poles Europe and you're making 50 000 devices named in the range pole0000.foo.example.com through poleFFFF.foo.example.com, they are quite happy to issue you, the legitimate owner of example.com, 50 000 certificates for those devices in exchange for money.


At some level, a certificate is an identity. If I've trusted a cert, then I know that anyone using it has the private key, no matter their IP or DNS name. Being able to do that for a local device would be very nice: I could connect to whatever IP it DHCP-ed to and be sure I was talking to the right thing.


That deserves a hard stare. If I trust the certificate on my chat server to be _the certificate for my chat server_ that doesn't suddenly make it OK to present that certificate if you're claiming to be my bank, or my operating system vendor, or Hacker News.

The local device should have a _name_ and then we can issue it a certificate for that _name_ and know we're really talking to the same thing as last time. DHCP and other address allocation protocols don't (needn't) change the name.


I think we should just take the same approach that I2P and Tor take for naming, and base the domain name on the public key. Local devices automatically get a domain like gmaf2cgbn3q2be3vaaytrev3qyxcksemkdxtzefq5bl3542uyf3q.local, which they can advertise via mDNS, and the browser accepts a self-signed certificate for a public key which matches this fingerprint just as if it were "domain validated".

The domain stays the same so long as the key does, so the user can bookmark the page for their own device and be sure that when they navigate back to that domain they're getting the same device and not some imposter—effectively making this equivalent to "trust on first use".

Edit: Changed "on the certificate" to "on the public key". It might be necessary to regenerate the certificate periodically, e.g. to update the expiration date, and that shouldn't affect the domain name.
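A minimal sketch of that scheme in Python (the exact construction here is my assumption; Tor v3 onion addresses, for comparison, derive the name from an ed25519 key plus a checksum and version byte rather than a bare hash):

```python
import base64
import hashlib

def key_domain(public_key_der: bytes) -> str:
    """Derive a stable mDNS-style name from a public key fingerprint."""
    digest = hashlib.sha256(public_key_der).digest()   # 32-byte fingerprint
    # Base32, lowercase, padding stripped: valid DNS label characters only
    label = base64.b32encode(digest).decode().rstrip("=").lower()
    return label + ".local"

# The name depends only on the key, so re-issuing the certificate
# (e.g. to bump the expiry date) never changes the domain.
print(key_domain(b"dummy DER-encoded public key"))
```

A base32-encoded SHA-256 digest comes out to 52 characters, which matches the length of the example name in the comment above.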


No, you can use a local DNS server or put the DNS entries in your host file. No need to use the internet for DNS lookup.


Certainly there could be an exception for localhost right?


Yes, there already is an exception for localhost. HTTP on localhost is considered a "secure context".

Edit: To be more precise, and to quote [1]

"Secure Contexts says UAs MAY treat localhost as a secure context only if they can guarantee it will only ever resolve to a loopback address (and are in any case not required to). https://w3c.github.io/webappsec-secure-contexts/#localhost"

[1] https://github.com/w3ctag/design-principles/pull/75#issuecom...


I'm curious about this myself. Would there be an attack vector?

It seems to me secure localhost would only be needed for developers, and developers should be able to allow a self-signed cert.


You also could stop misusing the browser as an application frontend, and write a proper frontend with a cross-platform toolkit, and distribute that.

I don't understand why developers so often choose the browser as a frontend. Are there better rationales besides having at least some frontend for tyrant-controlled devices like those running iOS, and just using the skills one already has?

For the first, just tell the people to get proper devices.

Because of the second, I see schooling efforts for JavaScript by the tech giants so negatively. It leads to masses of people using JavaScript where it shouldn't be.


Perhaps you don’t remember back to the days before the browser was used for application front ends. The problem was no one wrote the front end on some “nice cross platform toolkit”. Instead everything was some crappy windows only app and Linux and Mac users were left out in the cold. Give me the browser any day.


If there was another way to ship an application that can be accessed in one click, in less than a second, with shareable urls, I'd be interested.

Other nice things to see: multiple independent open source implementations of the application platform; a stable and battle tested sandbox, such that users can run code from hundreds of different vendors every day without much worry about being pwned.

The web is old and hoary, but to me there isn't any comparison. For most apps I build, the second place choice isn't even close.


> just tell the people to get proper devices

That seems like a great way to have no users.

An incredibly obvious reason would be that it is the largest application delivery platform with the highest level of user familiarity and comfort.

If you compare two services where one of them offers you a direct login to the app and the other offers you a 200MB download, most people will choose to log in to a website. It's a better user experience. Especially for things that will see infrequent use.


If you _only_ care about the number of users, then I see a point. However, at least for non-commercial programs, why care at all about the number of users?

The scenario you are drawing is not a proper comparison. There is no reason why a native toolkit couldn't support rendering the program before it's fully loaded, so I see no reason a native program would need more data transfer upfront than a JavaScript one. I do agree that being prompted to download the program, then install it and find a way to run it, can be cumbersome, but the solution to that is not to do it this way. Why not integrate with the native way to obtain applications and make it transparent and convenient for the user?

After all, I think there might be fundamentally different goals when developing a software, and that explains the difference. If one has accepted advertisement-based financing of projects, then them and I would probably disagree in many ways. I think users devices must only and exclusively work for the user.


>You also could stop misusing the browser as an application frontend, and write a proper frontend with a cross-platform toolkit, and distribute that.

This is pure insanity. There are tons of applications built on the web stack now that are supposed to run on local networks. There has never been a requirement that "Network == Internet".

For example, enterprise software. Dynamics CRM? Dynamics NAV? Dynamics... anything? Sage CRM? Everything runs on the browser now.

Why would anyone pass up a gigantic, proven, powerful software stack that represents >90% of all applications in the world?

The error is in the stack, not that people would want to use the easiest, most powerful tool for the job.

We might as well be advocating to go back to FoxPro.

>For the first, just tell the people to get proper devices.

It must be super nice to live in a world where you can dictate what devices the clients use, as opposed to a huge investment of pre-installed devices, or, devices users bring from home.

Whatever job you've got, I want it. Because it's completely alien to my career experience.


How is this controversial?

I think it can be summed up in one old but very relevant-in-our-times quote: "Those who give up freedom for security deserve neither."

At first, the idea that something is being done "for your safety and security" sounds good, but like all utopian goals, it has deeper connotations that are truly dystopian.

As mentioned in another of the comments here, this is yet another instance of companies using the "more secure" argument to gain control over the masses and ostracise anything they don't like. They're harnessing fear and exploiting it to their advantage.

To give an analogy in the real world, we don't lock ourselves in bulletproof cages and expend great efforts hiding from others (for the most part), and I'm sure if your car's GPS indicated locations with high crime rates as "not safe" and prevented you from going there, there would be much outrage. We shouldn't let companies (and governments, though there we try very hard and unfortunately don't always succeed) dictate every detail of how we should live our lives offline, and the same should apply online.

There's a very long tail of sites, many sadly disappearing from Google[1], of old yet extremely useful information, which are probably going to stay HTTP for the foreseeable future. I made a comment about this in a previous "HTTP/S debate" article:

https://news.ycombinator.com/item?id=14751540

Fortunately the only good thing that may come of this is that people will just start completely ignoring "not secure".

And I have JS off by default and whitelisted on a very small number of sites, if you were wondering...

[1] https://news.ycombinator.com/item?id=16153840


You have javascript disabled. I do not. I happen to use plenty of sites that require javascript, with HN being one of them. Most of the users out there do not have javascript disabled either. How do you account for their security?

> and I'm sure if your car's GPS indicated locations with high crime rates as "not safe" and prevented you from going there, there would be much outrage.

This is just a warning though, no one is preventing you from "going there", it's merely a warning that it might not be safe. Your analogy is more similar to "there's a slow down on that road, let me navigate you somewhere else, but feel free to go there if you want". You have exactly the same amount of freedom.

How exactly is using HTTPS "gain control over the masses"? Google does not control the HTTPS infrastructure.

It's not like those sites are gone either. There's always archive.org.


The irony of (mis)quoting "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety" in the same breath as saying "And I have JS off by default" is definitely worth pointing out here


How is preventing sites to run arbitrary code on your machine giving up essential liberty?


Indeed. I, the user, get to choose whether to allow JS or not.

Now consider that things like Meltdown and Spectre have JavaScript PoCs

If this was instead 'Chrome 68 will mark all JS-using sites as "not secure"', I wouldn't want that warning either, but then I'd be in agreement with the majority...


Right back at you. How is buying an iPhone over an Android giving up essential liberty? I always have the right to sell it, go to the store, and buy a device that lets me do whatever I want.


I am free to run it and get its benefits and its downsides; limiting me for the sake of security isn't okay.


Except browsers today are continuously developed, often with larger strategies in mind. Google has stated very openly that their ideal state is a world where there is no insecure HTTP at all and they intend to move everything as close as possible to that state. To reach this, the browser behavior is slowly tightened over a long number of releases.

So a better analogy would be a carmaker announcing that they don't want people to drive to unsafe areas. They'll monitor people's driving behavior via telemetry and OTA-update the car's software accordingly to nudge people's behavior closer to that goal.

So the warning would probably be the first step. Driving to unsafe areas will still be technically possible but it will become progressively more cumbersome until you just give up.


I think that imposing decisions on others for their own good is a dangerous path to take. You stop considering them as adults capable of making rational decisions and start treating them as children that need to be shown the light, by employing authority and force if necessary. The enlightened technological elite thinks that HTTPS is better, so they have decided to impose that decision on everyone else.

It should always be a choice. If something is insecure, show a warning, in big red bold letters if you have to, but allow the freedom of choice to the end-user. But here, what do we have ? Many newer Javascript, HTML or HTTP features are restricted to HTTPS. Even things like Brotli stream compression which bear no security implications. This is done only to coerce people to use HTTPS.


> I happen to use plenty of sites that require javascript, with HN being one of them.

HN works just fine without Javascript. The only thing that doesn't work is collapsing comment threads.


> HN works just fine without Javascript.

Confirmed by me[0] using HN through Links2, screenshot[1] attached ;-)

> The only thing that doesn't work is collapsing comment threads.

Voting on comments also doesn't work without JavaScript on HN, but at the same time voting on threads does work without JS.

[0] https://news.ycombinator.com/item?id=16191843

[1] http://www.hnng.moe/f/ZZJ


Hacker News's search feature requires javascript.


And I consider that one of the greatest things to happen to HN.


It's a bit nice, but not nice enough to convince me to enable Javascript.


The GPS could also be just a warning. Same effect.

Also, we know how these things go: first a warning, then a ban, then extinction from history.


Let's not start the whole slippery slope logic, please.


Didn't that happen with whatever technology Google deemed unfit for the batteries of their phones, like Flash for example?


Flash was a proprietary non-mobile-friendly technology from the start until its demise at the hands of its creators. No heavy-handed dictatorship, just bad products dying, and products that unfortunately depended on them died along with them.

http://blogs.adobe.com/flashplayer/2012/06/flash-player-and-...


I think your last statement highlights the real issue here. Everyone is afraid of malicious javascript. I don't know why they're conflating that with http injection.

The most straightforward thing to do would be to disable javascript on non https sites by default or warn if a nonhttps site has javascript. Most of the old sites we want to keep around don't have javascript in them (or much javascript in them).

Ideally people should only be enabling javascript on sites they trust (and are running https for "real" "trust") but having a trusted whitelist for enabling javascript brings back your big brother arguments.


It's not just javascript though; we've seen ISPs (or other malicious actors on the network) inject ads, or even place the entire HTML content of the site into an iframe.


Bingo. This is why it matters to Google. Ad injection could impact their bottom line.


This is a malware injection problem, and it should be possible for Google to create signatures of the JavaScript from different websites and have Chrome verify them and block mismatches.

Penalizing all http users is heavy handed and google should not go down that path.


Nobody is giving up freedom for security; you can still visit HTTP sites if you want to, its just going to be treated as unsafe by default.

As another commenter noted, Comcast has been observed injecting content into its customers' traffic; would you rather have that over HTTP, or run HTTPS?


When a highway is plagued by bandits, the govt must go after the bandits and put them away, not close the highway or turn into a cramped tunnel made of armoured steel with only one narrow lane.


So which central govt should we trust to "put the bandits away" on the internet?


I'm saying that instead of doing away with plain-text protocols, maybe national govts can each regulate their ISPs to stop injecting content into the bits they transmit? That would go a long way towards increase of privacy. As for bad state (sponsored) actors, I doubt https can really stop them doing what they want, especially not the big ones.


> maybe national govts can each regulate their ISPs to stop injecting content into the bits they transmit?

Good one. Next you'll be telling me that those very same companies don't spend billions on getting the laws written the way they want.

Sure, you could try the top-down approach where everyone has the best intentions, but that's not going to happen; in the meantime I'm installing bulletproof windows in my car.


That's a very narrow view of the options available for law enforcement. Sometimes "going after all the bad guys" is a terrible strategy.


Some Javascript, HTML or HTTP features are available on HTTPS only. Even things like Brotli stream compression which have no security implications. So some liberty is definitely taken from users who wish to use insecure connections for whatever reason.


> I think it can be summed up in one old but very relevant-in-our-times quote: "Those who give up freedom for security deserve neither."

Now if you could explain to me how using secure connections and showing a correct warning for insecure connections is restricting your freedom that'd be interesting.


My ISP offers security certificates for $145 more than I currently pay per year. The result is that I'll have to pay up, or face a drop off in traffic for my sites from people who will be too scared to view them because of this new browser-based warning... My sites are all public information, no secure data is displayed on them, and there are no user accounts beyond those of my team's editing accounts. It would be sort of overkill to require HTTPS on them.

Cloud hosting is much more expensive than my current hosting plan. It also seems highly convenient for ISPs that HTTP will be phased out, because either way ISPs make a lot more money out of the web site business under the newly required standard.

This is the future we knew was coming, where it becomes so expensive for individuals to do the same as companies do. It's how Radio, TV, and many other things were taken away over the years, it simply became too much of a legal hurdle and way too expensive to run until large companies became the only channel owners.

It's just history repeating itself, but now to shut out individual web site and application makers who don't have resources to compete with big business. :\


My hosting service makes no money from me installing Let's Encrypt on my sites. Is that available on your hosting provider?


The quote is:

“Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety”

And it was about taxes.


> Now consider that things like Meltdown and Spectre have JavaScript PoCs. How is this controversial?

Please explain to me how Javascript delivered from a malicious ad delivered over HTTPS is somehow safer. Most malicious code is delivered with the help of the website.

This post was written with no javascript enabled.


I would have nothing against it, if browsers just disabled Javascript on plain-HTTP web sites. I do it anyway by default, enabling it on domains where it serves some useful purpose. Treating non-HTTP content as untrusted makes absolutely sense (though I don't think that HTTPS really makes a https://random.l33t.site/ a trusted source of anything).

The old simple plain-HTTP plain-HTML web is still useful and practical for showing text (don't care much even about CSS - HTML 3.2 is perfectly suitable for showing readable information). It seems to become a victim of collateral damage in the pursuit of "better web", which is sad.


My only concern is that the end user should always be able to control which Certificate Authorities his/her browser accepts, and that anyone can set up their own CA. It seems like both of these conditions are vital for the future of a decentralized web.


People who cheer for tons of stupid fucking HTTP warnings all over the place really bug me to be honest. Anyone who uses the web needs to be taught the difference between a secure and insecure connection and told where to look for signs (this assumes we burn stupid shit browsers that look to mangle URL bar by removing data from it).

The grandma will not make better decision if you put in another dumb warning like this one https://i.imgur.com/rxmyWtF.png to waste more and more of my time when I enter my password on the same website over and over. I KNOW IT'S INSECURE, JUST STFU already.

I'm at the point where I will look to build my browsers from source after removing that shit (along with removing inability to add certificate exceptions for certain situations). Good thing they're open source.


I don't get how so many people miss the point here, but alas.

It's actually not so complicated: obviously, there's nothing actionable for the user to do here. The message is for the user, but only indirectly: it's there to push developers into better practices. Your boss may not care about encryption or privacy, but will definitely care about hundreds of phone calls asking why users are warned that the form is insecure when they try to log in.

With plenty of obscure pages accepting everything from passwords to credit card numbers on plain HTTP pages, this is important. There may not be someone browsing the page knowledgeable enough to catch this, but if any end user can know what they're doing is wrong, then it's much more likely it will be discovered.

While mostly not actionable directly for users, one thing a user can, and probably should, do is close the page, if they can.


It's actually not so complicated: obviously, there's nothing actionable for the user to do here. The message is for the user, but only indirectly: it's there to push developers into better practices

So it's the online equivalent of plastering Warning: Contains chemicals known to the state of California to cause cancer all over everything from gas pumps to gerbils.

Yeah. Real useful.


If 99.9% of companies could remove those carcinogenic chemicals without reducing the value of the products, I would support that warning too.


Even if cancer rates were more or less completely uncorrelated to the presence and/or usage of those products? Congratulations, you've raised the social noise floor for no good reason.


Except it works. If it makes you mad, imagine how many other people it makes angry. And aside from recompiling your browser, all you can do is fix the problem and make the browsers happy.


It makes me want to start serving my sites over Gopher exclusively.


Except it works.

No, it doesn't, unless your goal is to condition users to ignore yet another warning.

There are downsides to doing that, believe it or not.


Frankly, if they could just drop plain HTTP support altogether, I'd prefer that. This is more of a friendly compromise.


This is more of a friendly compromise.

"Nice open Internet you've got there, no gatekeepers. Be a shame if something were to happen to it."


This is the issue with all systems that use human-friendly naming: someone has the authority to do the name resolution. This battle was lost when DNS came along.

Not everybody wants to use public keys to refer to everything - even if it is via QR codes.


There were never no gate keepers.


> this assumes we burn stupid shit browsers that look to mangle URL bar by removing data from it

That's just about every browser now. I think that full URLs are only available in Firefox, and only if you set `browser.urlbar.trimURLs` to `false` in about:config. Hiding URL information is a very bad trend.


As demonstrated now, the indicators show the http part of the URL in a way that relates some of its subtext much better to average users. Now, if it were to stay that way, I would be fine with it. But if past performance is any indicator here, this will be just the first step on that silly warnings spiral. If browser vendors aren't careful, they'll flood users with so many warnings that users ignore them entirely.


>If browser vendors aren't careful

But they are being careful. This rollout has been taking place slowly over the last two years, and continues to do so.

Changes are incremental to give developers a chance to adapt, and to prevent users from getting used to warnings.

HTTP/2 requires https. New APIs have started requiring https. And only forms with passwords or CC inputs will trigger security warnings. These are incremental steps.

I don't know why people think browser vendors haven't considered this stuff before.

Eventually vendors will start deprecating existing APIs to continue the migration towards requiring https. That will put more heat on developers, but only after they've ignored warnings for literal years.

But even the change announced today is only a UI update to the omnibar. It's hardly earth shattering.


Browser vendors are clearly not careful enough. Requiring SSL brings with it a kind of baggage that is ill-suited for tons of use cases. Do not simply assume that all developers ignored these signs out of laziness. They might instead be unable to comply in a useful fashion.

HTTP worked unchanged for close to three decades. People started to rely on that. Now they are trying to apply crowbars to force that out. This is myopic.


You can show full urls in Safari if you want.


Safari is the worst by default. I don't have any Apple devices, but when I look at screenshots of Safari's URLs, they are not full URLs. Also, it looks like you can't show the full URLs on iPads or iPhones. (You can on Firefox for Android.)

Not a full URL: example.com

Not the full URL: http://example.com (the page will load after completing the URL with a trailing slash)

This is the full URL for the homepage of example.com: http://example.com/


also Opera.


The fact that people want to force HTTPS really bugs me. HTTPS centralizes the publication rights to browser vendors and cert vendors. Why should you require permission before publishing content on the web? IMHO, They should rename the S in HTTPS to $.

I believe that Google's intentions here are to block other pass-through internet entities from collecting advertising data. Obviously, Google would never encrypt user data so that they couldn't mine it. Personally, I am more worried about Google mining our data than some rinky dink ISP.

Also, could you link to the PoCs? I have not seen a reliable PoC, but maybe I haven't looked that hard. The ones I've seen only work on specific CPUs, and only if certain preconditions are met. But anyway, that is a separate discussion.


> IMHO, They should rename the S in HTTPS to $.

This is an absurd thing to say when Let's Encrypt exists.


Why is it absurd? So I either have to pay a CA vendor, or I have to ask Let's Encrypt for permission to renew my cert every 90 days.

In any case, why should I be forced to support a model where I have to beg for permission to host anything? I believe in stating my opinions and letting others make up their own minds. The opposition believes in forcing everyone to adopt their position by fiat. Why not put the information out there, and let people switch to HTTPS if they feel the benefits make sense to them?


> Why not put the information out there

And that's why chrome will mark your site as insecure.

> why should I be forced to [..] beg for permission [..]

That seems to be the best thing humans came up with so far, if you want nice stuff that requires coordination. You also have to "beg" for a connection & ip address. You also have to "beg" for a DNS name. Please propose something better (that doesn't come with its own significant drawbacks).


>And that's why chrome will mark your site as insecure.

That is a false statement. Using 'what could happen' logic, Windows should label Chrome/Firefox/IE as insecure/malware since someone could theoretically do bad things with those tools.

>Please propose something better (that doesn't come with its own significant drawbacks).

Why don't you start first? Since this HTTPS so-called 'solution' has its own significant drawbacks.


You aren't forced to do anything. If you choose to use HTTP, you can, and no one will stop you. The Chrome omnibox will accurately report to users that your site is insecure, if you do, but that's just making them aware of a basic fact.


>The Chrome omnibox will accurately report to users that your site is insecure

What evidence does Chrome have that my site is insecure or poses a threat to users?

'What could happen' logic would mean disabling all browsers since you could get pwned by using any of them. That is not a rational way to approach anything.


> What evidence does Chrome have that my site is insecure

The fact that it is using HTTP, and therefore can be trivially MITMed by anyone controlling any point traversed between you and the client. Communication between the browser and your server is insecure.

Whether that is an important fact to users is a decision users will need to make, but it is a fact.

> 'What could happen' logic would mean disabling all browsers since you could get pwned by using any of them.

Clearly, to the extent that is accurate, that's not the logic at issue since nothing is being disabled here. So, please, stop with the irrelevant strawmen.


>The fact that it is using HTTP, and therefore can be trivially MITMed by anyone controlling any point traversed between you and the client.

Can you please link to any evidence showing the millions of HTTP sites that were MITMed? I mean, after all, it's so trivial, as you claim. OTOH, why would anyone care to do that when they've found it much easier to trivially inject scripts and other potentially harmful stuff via compromised ads, third-party hosted JS scripts, compromised CDNs, etc.? The current proposal fails to address any of those real, actual, tangible 'bad' things that are actually occurring with alarming frequency.

>Communication between the browser and your server is insecure.

That applies to every single piece of data transferred that is not under the control of the domain being visited.

>Clearly, to the extent that is accurate, that's not the logic at issue since nothing is being disabled here. So, please, stop with the irrelevant strawmen.

Simply asserting it doesn't make it so. I reject your interpretation. The most dominant browser vendor showing scary yellow triangles with exclamation marks instead of showing your webpage is exactly like disabling it.


> The most dominant browser vendor showing scrary yellow triangles with exclamation marks, instead of showing your webpage is exactly like disabling it.

No, it's not, and we know it's not, because the much more forceful click-through warnings they used for HTTPS certificate errors (introduced because scary red icons in the address bar failed) still had a high enough click-through rate when the "proceed" link wasn't hidden behind an "advanced" button that they invented the multistep, hidden process they use now for certificate errors.

And, anyway, the UI they've shown is simple light grey "Not Secure" text, not a "scary yellow triangle". It's nothing like blocking, and it's not a failed blocking attempt: from their experience with certificate errors, they know how much it takes to really stop casual web users from proceeding in the face of a security warning.


> Can you please link to any evidence showing the millions of HTTP sites that were MITMed?

Pretty much every single one, when accessed from a hotel or public wi-fi network that injects ads into pages?


It's absurd because you're describing it as some sort of money shakedown when 1) Google isn't profiting off of this and 2) you can get certificates for free.


The complaint is not about the cost of the certificate, it's about handing control of availability to someone else.

What you are describing is how things are working for now.

First, there's no guarantee Let's Encrypt will be here tomorrow. You're pinning the future of the Internet onto the shoulders of maybe a dozen people who run it.

Second, CAs are - by definition - centralized. There's no guarantee that, in principle, you'll always be able to get a certificate from them.

The Internet was designed to be distributed and hard to break. HTTPS-only Internet is going backwards on this.

I'm all for security and privacy, but any solution that requires a centralized third-party for the whole system to function is, in my opinion, broken by design.

There's that, and there's making the system needlessly complicated for many tasks. There are very, very many websites and users (I'd say - the absolute majority) who are OK with the Dangers Of HTTP (that is, some intermediate party being aware that you read a certain webpage). Worrying about that in the age where every step is tracked, every face detected, and every page is full of trackers seems, to me, facetious.


If this is just a warning, that's one thing. If it becomes a solid block, it means more hassle for everyone running an intranet, and a huge liability for millions of long-lived devices with embedded web servers for browser-based configuration.


Actually, I want a way to turn off the warning on my intranet because of all the printers, sensors, and control units on it that are not going to get upgraded. I don't think I can find an HVAC unit or sensors with HTTPS support, and I doubt anyone will authorize a replacement costing tens of thousands of dollars. I do not want a call every time people have to deal with these devices and get a scary warning. Telling them to ignore it is not going to instill the right attitude when they go on the internet.


An inappropriate warning is still counterproductive, I agree, but it's in a different league to a solid block.

Not so long ago, I was dealing with the browsers vs. Java applets war, which had a similar effect on many devices as they became progressively harder and ultimately impossible to use. The attitude demonstrated by many defending those browser changes -- as if completely and permanently breaking useful, long-developed, working software on a private network was somehow a good thing -- was somewhere between smug, dismissive and arrogant, and actually quite offensive in some cases.


Developers often wonder why people use iSeries (AS/400) machines. Well, the software is long lived and keeps working. The whole IoT is not going to go forward when the software infrastructure is sand as opposed to stone. I get the feeling if this stuff keeps up, it would be wise for some infrastructure company (e.g. Honeywell) to sponsor an OS specifically for infrastructure with some version of a remote desktop to talk to it. Obviously, web browsers are the wrong way to control long lived devices.


The whole IoT is not going to go forward when the software infrastructure is sand as opposed to stone.

I agree with you about longevity, though I think IoT is mostly a solution in search of a problem anyway, even if the hype train is clearly going to run for a long time yet. Connectivity is obviously useful for some devices, but most of the products I have ever seen in the IoT space weren't IoT to benefit the user, they were IoT to benefit the developer, and they often made the user's experience worse. The other day I visited a friend, and he couldn't change the settings for the lights in his home because his Internet connection was playing up!

Obviously, web browsers are the wrong way to control long lived devices.

There, I will respectfully disagree. Web browsers are, or at least were, a useful way to access devices over well-defined and standardised protocols that were supported on many client devices. They allow UIs much more user-friendly than a command line for non-technical users, and they don't depend on native applications for each client or customised control protocols for each product.

I've built many browser-based UIs for clients over the years with good results, and the only thing that has ever seriously broken them was when browsers started dropping support for useful, long-established functionality -- hence my concern in the current case.


There, I will respectfully disagree. Web browsers are, or at least were, a useful way to access devices over well-defined and standardised protocols that were supported on many client devices.

I think it's the "were" part that's getting me. We had a standardized protocol, HTTP, which is now being phased out, and a whole lot of embedded-systems developers jumped on it as a control mechanism for their devices. Now we have this crowd of other developers who focus on a different market segment and don't really give a damn about the problems that causes for others. "Move fast and break things" is fine for startups, but not for anything our profession builds into physical things. I'm just a little sick of a disposable culture. I agree HTTPS is a great thing and needed, but putting scary warnings on HTTP that cannot be mitigated is a pain in the butt, because at some point they will remove HTTP entirely.

At this point, I honestly wish the embedded device programmers would jump off the web train and move on to something else.


At this point, I honestly wish the embedded device programmers would jump off the web train and move on to something else.

OK, but what? That's the real question here, surely.

There are very few developments in the history of computing that have been as widely useful and long-lasting as the fundamental web technologies. The fact that certain browser developers are now trying to embrace, extend and extinguish those technologies for their own obvious purposes and without regard to collateral damage just means we need to push back hard against those browser developers. Google is already more powerful than is safe for our industry, and certainly we must not let it become the de facto owner of anything essential.


There are very few developments in the history of computing that have been as widely useful and long-lasting as the fundamental web technologies.

I'm not so sure. All of the web technologies have changed over time. I'm pretty sure only the simplest web pages from the 90's are still functional. There are still computers sold today that can run IBM/360 programs unchanged.

The fact that certain browser developers are now trying to embrace, extend and extinguish those technologies for their own obvious purposes and without regard to collateral damage just means we need to push back hard against those browser developers.

Well, I still think the idea of using a document format as an application format was totally foolish. Frankly, something like a descendant of QML or even Sun NeWS would have been more appropriate. Heck, even a networked p-machine with a UI would have been better. A frozen subset of HTML might work (we could even use WML and WMLScript, since nobody uses those anymore). I don't think the push back is going to happen. Instead we end up with a development environment harder than Visual Basic or NeXTSTEP, with less functionality than either.

Google is already more powerful than is safe for our industry, and certainly we must not let it become the de facto owner of anything essential.

No disagreement there. They own the web and can declare your site removed from the internet as far as 90% of web users are concerned.


One might understand an extra-hostile stance against Java applets, having witnessed tortured attempts to keep a vendor's applet running when that applet required a very, very specific (and even then very obsolete) version of Java 1.3 incompatible with some other awful software's requirements. Breaking other, innocent software would have seemed like a pretty fantastic deal, in order to reduce future applet pain.


That was my life a few years ago when Java applets switched from being the best thing since sliced bread to being anathema. Or take ActiveX for another example. Trying to do business with consumer-grade browsers is just one pain point after another.


If that's really an issue, surely there's the option of putting up a reverse HTTPS proxy? Definitely less than tens of thousands of dollars.
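For what it's worth, a minimal sketch of that idea as an nginx config (the server name, certificate paths, and the device's LAN address are all made up, and the clients still need to trust the certificate you serve):

```
server {
    listen 443 ssl;
    server_name hvac.intranet.example;            # hypothetical internal name

    ssl_certificate     /etc/ssl/intranet.crt;    # cert your clients trust
    ssl_certificate_key /etc/ssl/intranet.key;

    location / {
        # Forward decrypted traffic to the legacy HTTP-only device
        proxy_pass http://192.168.1.50:80;
    }
}
```

The legacy device never changes; only the proxy terminates TLS.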


It’s really an issue, and why am I setting up another server app to fix something that should be a switch for the administrator in software that worked before?


Chrome 64 is already a hassle for intranets. It adds "Not Secure" to the address bar whenever the user enters text on an HTTP site.


>Now consider that things like Meltdown and Spectre have JavaScript PoCs. How is this controversial?

Seems totally irrelevant, since any "legit" site with a $10 certificate will still be able to inject malicious code, either by its operators putting it there directly, or by it being hacked -- whether there's a man in the middle or not. And with something like Spectre out there, HTTPS won't do anything.


There is a quite big difference between the site injecting it or having it injected via ISP.


Yes. That you can trust ISPs (few, large companies) more than random websites.


How can you trust someone with a known track-record of violations more than a small website admin?


It's easier to trust someone with a "known track-record of violations" than millions of random websites.


I browse a popular gaming forum that has not implemented HTTPS. Chatting with the admin, the reason is simple: Ads.

We can definitely blame the ad networks. Some have switched, but many still don't work over HTTPS, and websites relying on ad revenue must either stay on HTTP or make less money with HTTPS-friendly ad networks.


Those shady ad networks are going to get blocked by Chrome anyway, even without the HTTPS warning. Good riddance.


This hasn’t been true for at least a year. There is zero difference in ad revenue for HTTP vs HTTPS now. Join any major online community for publishers and ask anyone if you don’t believe me.


Genuinely curious what kind of ads network/system you use that is not fully HTTPS compliant by now?


> However, you have to recognize the fact that you run JITed code from these sites.

I guess browsers could then make an effort to help sites use scripts only when absolutely necessary and give users easy to use tools to disable scripting. But, oh wait, they do the exact opposite.


Or the users could switch to browsers that disable Javascript, third-party cookies by default instead of the ones controlled by advertising and DRM monopolies. But, oh wait, they do the exact opposite.


You don't need HTTPS to verify integrity. HTTPS actually adds attack vectors and complexity, and removes useful functionality like caching proxies. And like another commenter mentioned, most malware is delivered from an authentic site anyway.

HTTPS evangelists are basically playing a game of political ideology shrouded as concern for safety. I think they care more about their own privacy than they do the functionality, security, and maintenance required for HTTPS sites.

It's also not coincidental that Google has a vested interest in keeping all traffic surrounding their services hidden or obfuscated: traffic content and metadata is money. Google is basically eating the lunch money of ISPs' passive ad revenue. (This is also part of why they want to serve DNS over HTTPS)


How else would you practically verify integrity for web browsing?

What's wrong about caring about your privacy?!

Why the hell do ISPs deserve ad revenue? I don't like Google either, but ISPs that want to tamper with connections to inject their ads can fuck off and die in a fire. That is more unethical than anything Google has ever done.


> How else would you practically verify integrity for web browsing?

Download a signature once, verify any file before rendering it. You could even control this behavior using an HTTP header if you wanted granular control. It would be a trivial extension.

Today, nobody verifies that content was created by the author. That content can be subverted on the web server, and this is how malware is distributed today. Verifying content with a signature would actually be more secure than just TLS.
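As a toy sketch of the verification half in Python, here a pinned SHA-256 digest stands in for a real public-key signature (the scheme is hypothetical; a production design would look more like Subresource Integrity, where one signing key or manifest can cover many resources):

```python
import hashlib
import hmac

def verify_body(body: bytes, pinned_sha256_hex: str) -> bool:
    """Check a fetched resource against a digest obtained out-of-band.

    A real scheme would verify a public-key signature instead of a
    bare hash, so a single key could vouch for many resources.
    """
    digest = hashlib.sha256(body).hexdigest()
    # Constant-time comparison, mostly out of habit for anything security-ish.
    return hmac.compare_digest(digest, pinned_sha256_hex)

page = b"<html><body>hello</body></html>"
pin = hashlib.sha256(page).hexdigest()  # imagine this was fetched once, up front

print(verify_body(page, pin))                                 # True
print(verify_body(page + b"<script>evil()</script>", pin))    # False
```

The hard problems this glosses over are exactly the ones TLS's PKI tries to solve: how the client gets the pin/key in the first place, and how it is rotated.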

> What's wrong about caring about your privacy?!

Ignoring all the other concerns for the sake of it, is what's wrong.

> ISPs that want to tamper with connections to inject their ads can fuck off and die in a fire. That is more unethical than anything Google has ever done.

Google reads your e-mails and search history and tracks where you go on the internet, and sells the information to advertisers, who then display the ads no matter where you are or what you're looking at - including over HTTPS pages.


That functionality doesn't exist today, so it would be a new protocol. One that would be almost as complicated as HTTPS, would be starting from zero as opposed to the 50% usage of HTTPS on the internet, would require new code to be written which hasn't been thoroughly tested for security issues and would provide inferior assurance to HTTPS. All for the sake of being "simpler."

With regards to signing content, this already exists in the form of code signing. Given the amount of software that isn't signed I doubt it would be practical for anything else e.g. blog posts.


The simpler solution is to not run that code.

The imaginary user cited does not need that code in order to "browse memes".

The code is there for advertising, e.g., to attract advertisers as customers by gathering data about users.

Hence the push to HTTPS is for companies that aim to generate revenue from selling access to or information about users to advertisers.

I have no problem with HTTPS on the public web, to the extent that it is the concept of encrypted html pages, and perhaps these are authenticated pages (PGP-signed was an early suggestion).

Encrypt a page of information (file), sign it with your private key, and then send it over the wire. The wire (network) does not necessarily need to be secure.

However I do have a problem with SSL/TLS.

I would like to leave open the option to not use it in favor of alternative encryption schemes that may exist now or in the future. It seems one allegedly "user-focused" company wants to remove this option. Comply with their choice or be penalized.

The issue I have with TLS is only to the extent TLS is the idea of setting up a "secure channel" to some "authenticated" endpoint (cf. page), with this authentication process firmly under the control of commercial third parties, using an overly complex protocol suite and partial implementation that is continually evolving (moving target) while people scramble to try to fix every flaw that arises out of this complexity.

To the extent it is not what I describe, I have no issue. (That is, I'm pro-TLS.)

We have one company aiming to replace HTTP with their own HTTP/2 protocol, which to no surprise has features that benefit web app developers and the advertisers they seek to attract far more than they benefit users.

Could we design a scheme to encrypt users web usage that would not benefit advertisers? I think yes. But this is not what is being developed. Encryption today is closely coupled with the "ad-supported web". If we are not careful, this sort of policy pushing by Google could cripple the non-ad-supported web that existed before the company was incorporated.

Encrypted "channels" are not the only way to protect information transferred via the web. TLS is not the only game in town.


How would wifi portals work with only https?


They don't! Isn't that amazing?

The only use I know for captive portals is EULAs, and I'm not sure those ever had legal weight (though obviously IANAL).

But honestly, they were starting to be outdated (technologically) even before this. Since a lot of popular sites use HTTPS, I usually have to try to think of a non-HTTPS site before I can get through. They're just a nuisance at this point.

This is not to defend Google's actions as "altruistic" in any way. But sometimes Google's interests and the public's do align.


> I usually have to try and think of a non-HTTP site before I can get through.

http://neverssl.com/ is built for that problem; it's what I use


I just use http://example.com for those


>The only use I know for captive portals is EULAs

What? You use them to log in with some criteria more than just a username/password. For example CableWifi hotspots that let you log in with TWC, Optimum, or others.


Wifi has supported that kind of login for ages (15 years?) at the protocol level, without fucking with the traffic in a MITM fashion. OS support was there already for XP and iOS 2.0.

https://en.m.wikipedia.org/wiki/IEEE_802.1X


Except there are no provisions for UI indications whatsoever: not even which login you should be using, what you are logging into, or why.


Every hotel room in which I've stayed has a placard on the table with the SSID, user ID and password for the portal login.

It would be identical with 802.1X. The only difference in the UI flow is that the authentication prompt is generated by the OS, not on some HTTP page that I only remember about when I wonder why my VPN isn't coming-up.


So why don't captive portals at e.g. cafés use 802.1X? Are they just stupid and don't know it exists, or is it really user unfriendly in that setting?


> log in with some criteria more than just a username/password

The problem there is that captive portals don't add any extra link-layer security. The network is open, so literally anyone can sniff packets.

It's uncommon, but a network using WPA2-Enterprise and user/pass uses different keys for each person (not sure if per device or per user), so you don't have to trust everyone in the room.


Most portals I use intercept your request to an HTTP site and redirect you to their login form, which is served over HTTPS.


Yes, but after authentication, all traffic can be sniffed - including unencrypted connections.


How is this different from the case without a captive portal, again?


Using WPA-Enterprise, each connection is encrypted separately, eliminating that hole.


Now you don’t have to trust the other customers, only the bar you’re at, their ISP and a million other parties between you and the site you’re visiting.


That's a reasonable point, but I'm speaking from the perspective of the bar owner - I feel I have a duty to provide better security even if the patrons have no reason to trust me.


Like a bar is going to run account administration.. at most they’re going to set a proper password with WPA2-PSK which provides protection against outsiders. But it can’t provide protection against an active attacker that has the password.


You could have a wifi access product that used a voucher system. The code could be on the bar receipt.
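Generating those vouchers is the easy part; a sketch in Python (the code format and expiry policy are invented for illustration):

```python
import secrets
import string
from datetime import datetime, timedelta, timezone

# Unambiguous alphabet: skip 0/O and 1/I so receipts are easy to type.
ALPHABET = "".join(c for c in string.ascii_uppercase + string.digits
                   if c not in "01OI")

def make_voucher(valid_hours: int = 4, length: int = 8):
    """Return a (code, expiry) pair for a time-limited guest login."""
    code = "".join(secrets.choice(ALPHABET) for _ in range(length))
    expires = datetime.now(timezone.utc) + timedelta(hours=valid_hours)
    return code, expires

code, expires = make_voucher()
print(code, expires)  # e.g. "7XKQ2MHZ" plus a timestamp four hours out
```

The access point (or RADIUS backend) would then check the code against its store and drop it once expired.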


They're probably also not putting up a captive portal, so what's your point?


Using WPA-Enterprise, as I understand it, requires devices to be preconfigured to authenticate with the radius server, which makes it a non-starter for the kinds of networks that use a captive portal.


No, there's no preconfiguration needed, it's just a username/password account. You choose the network, then the OS asks you for your user/pass, then you're connected.

It's the router that connects to the RADIUS server, not the device directly. And some routers have one embedded, so you don't even need to configure that, it "just works".


Wouldn't it be nice if there was an encryption mode for Wifi that ensures integrity without requiring authentication? At CCC events, the workaround is to have a WPA2-Enterprise network that accepts every username/password combination, but that's going to be hard to explain to non-technical users.

I think WPA3 is going to support this use case.
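If memory serves, the FreeRADIUS side of that accept-everything trick can be as small as a default entry in the `users` file (sketch only; a real setup also needs an EAP method such as EAP-TTLS/PAP configured, and syntax may vary by version):

```
# FreeRADIUS "users" file: match any username, skip credential checks.
DEFAULT  Auth-Type := Accept
```

The point being that the per-client link encryption of WPA2-Enterprise works even when the credentials themselves carry no meaning.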


Companies can easily adapt by changing your username to the form of an email (e.g. "jsmith@optimum.net"). Many of those already offer email addresses anyway.


How do you communicate "log in with an email address" to a user if you can't display a web page?


No, I mean the only username given by Optimum et all would be the email address.


They are in fact legally required in many jurisdictions.

How do you implement them properly when the user has to agree to certain terms to connect?


What if this isn’t possible? The world implodes? Or suddenly the lawyers discover it isn’t that ‘required’ anyway?


Then your country will look like Germany did until a few years ago, with no WiFi hotspots anywhere, not even in Starbucks, and people being careful not to open their WiFi to guests at a party even.

That's what has happened before, and that's what's just going to happen again. In Germany, the owner of a WiFi hotspot was liable for anything users on the WiFi did, unless the owner could prove that every user had signed an agreement that they would not do illegal stuff.


>they were starting to be outdated (technologically) even before this

So what is the proper technology today for limited guest Wi-Fi access?


If it's unauthenticated, the portal doesn't give you any advantage beyond showing a message.

If it's authenticated, you should be using WPA-Enterprise anyway, which supports different logins, and actually isolates the traffic between clients, whereas those "portals" don't, allowing any other user to sniff your traffic.


It's surely authenticated, but I don't want guests to use WPA-Enterprise. I just need a way to communicate temporary passwords to them.


Right, and WPA-Enterprise (which even many cheap home routers support nowadays) is that way - you can just generate temporary logins.


WPA-Enterprise is a way to authenticate users, not a way to distribute logins/passwords. Am I missing something about it?


I guess I misunderstood you before. You're using the portal to provide passwords to the users? How does that work?


Portal shows a message like "here's your temporary login/password for 4 hours". Then a user enters them to the login form. When using WPA-Enterprise, a user can also set up WPA on their device with these credentials.


If the portal provides you with the credentials and also asks for them, what's the point of that portal in the first place?


You can use them across multiple sessions and devices.


I use cbc.ca. It's short, memorable and to date, hasn't implemented http->https redirect.


When macOS connects to a Wi-Fi network it makes an HTTP connection to captive.apple.com, and if it doesn't get the expected page in response it pops up a little browser window. I much prefer this solution to hijacking page loads in my browser (which tends to have lots of annoying side effects).
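The probe logic amounts to "fetch a fixed plain-HTTP URL and see whether the answer is what the vendor expects"; a rough sketch of that decision in Python (the probe URL and expected body are vendor implementation details, so treat the specifics here as illustrative):

```python
def looks_captive(status_code: int, body: bytes) -> bool:
    """Heuristic behind OS-level probes: the OS fetches a known HTTP URL
    whose genuine response is fixed (Apple's probe page contains
    "Success"). Any redirect or altered body suggests a captive portal
    intercepted the request."""
    return not (status_code == 200 and b"Success" in body)

# A portal typically answers the probe with a 302 to its login page:
print(looks_captive(302, b""))                      # True  -> show portal UI
print(looks_captive(200, b"<HTML>Success</HTML>"))  # False -> open internet
```

This keeps the interception confined to one throwaway request instead of hijacking whatever page the user happened to load.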


There are proposed standards to serve a URL for a captive portal via DHCP.

But hopefully, many captive portals will just die off.


Captive portals have always been a hack. I'm sure there'd be some interim solution, even if it's just hardcoded captive.apple.com as an exception.


If you have to use an internet provider that does not provide reasonably direct access to the internet, you should tunnel your traffic through the service that does (e.g. a VPN).

The idea that all internet sites have to compensate for the low quality of the last mile of some users simply does not make sense. If a site accepts sensitive input from users, then sure, it needs an authenticated and encrypted connection; but if it serves static content, it may hold the internet infrastructure and the receivers responsible for correct delivery.


I would have trouble following security advice from a company that was serving mining ads directly from its websites last week.

The main argument in favor of HTTP would be that it makes browser fingerprinting harder, from a privacy standpoint. HTTPS by itself lets the server identify its clients individually without cookies (e.g. via TLS session resumption).


As for reasons: there are tons of folks with personal websites on providers that only offer expensive yearly certificate options. Most CDN providers don't support Let's Encrypt. Most shared hosts don't. The Cloud Sites platform for multi-site shared hosting (formerly Rackspace, now at Liquid Web) that I use for a dozen of my friends' sites, which I host for free, doesn't. So, essentially, this means that personal hosted websites either stay 'not secure', get ditched, or increase in price quite a lot. Yes, it's just 5 to 10 bucks a month to spin up another instance on DigitalOcean, but it's yet another 'server' to manage when I'd rather decrease the number than increase it.


Maybe just disable js on port 80, let 443 be for web apps and 80 be for web pages.


What a great idea this is!


My only issue is HTTP/2 client and servers only implementing TLS. This makes it unnecessarily hard to reverse-proxy HTTP/2 on a loopback connection where it's reasonable to assume being safe from MITM.


The existence of vulnerabilities doesn’t negate the bullshit security theatre of certificate authorities.

Banning http is a great example of tossing the baby with the bathwater.

Https is better, but there are still valuable use cases for unencrypted web traffic.

I am sorry this bugs you, but please note that your straw man is not the argument I'd make. Sometimes you need a low-hassle web server. Renewing Let's Encrypt certs is not low hassle.


> The existence of vulnerabilities doesn’t negate the bullshit security theatre of certificate authorities.

I agree that the current CA system has flaws, but there are efforts such as Certificate Transparency[1] and DANE[2] attempting to improve or bypass the CA system. That said, just having encryption defeats passive eavesdropping, and even the current CA system of authentication raises the bar for active eavesdropping.

> Banning http is a great example of tossing the baby with the bathwater.

I'm not sure what you mean by that.

> Https is better, but there are still valuable use cases for unencrypted web traffic.

Like what?

[1] https://en.wikipedia.org/wiki/Certificate_Transparency

[2] https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...


>> Https is better, but there are still valuable use cases for unencrypted web traffic.

> Like what?

Things like this: https://utcc.utoronto.ca/~cks/space/blog/tech/BIOSViaHTTPSen...


Let's take it as given that HTTP is the right answer in that situation; I'm not saying it is or isn't, but let's assume it is. We're talking about HTTP(S) in web browsers and client-side applications; no one is actually talking about _banning_ HTTP, e.g. blocking port 80 or filtering via deep packet inspection. If you have a use case where after careful thought you decide that HTTP is better than HTTPS, then fine. But HTTPS should be the default.


The functional result of the Chrome browser's UI warnings will be almost indistinguishable from banning HTTP.


For web sites and web browsers, sure. But if you want to use HTTP for your BIOS updates (dozzie's example), then go ahead; I don't think anyone is seriously proposing to stop you. Just please make sure that it's done securely. Anyway, TLS is not enough for secure updates.


To be clear, the ONLY reason this is happening is to make sure that any ad served by Google is not tampered with. This is protection for their money-making machine. Plain and simple.


Of course it is not the ONLY reason.


There are some drive-by benefits but I would imagine money/ad delivery is the ultimate purpose.


There is a non-zero impact of HTTPS on the size of a webpage. This can be bad for users on bad connections.


The attack surface of browsers is relatively small if you keep them up to date. Browsers were early in shipping Meltdown and Spectre mitigations. Running JavaScript is pretty harmless.

On the other hand, HTTPS makes web performance horrible.
