
I wish there was a solution for those of us who develop web interfaces for embedded products designed to live on LAN, often without any internet access and no well defined domain name.

I'm all for HTTPS everywhere but right now for my products it's either: https with self-signed certificate, which basically makes any modern browser tell its user that they're in a very imminent danger of violent death should they decide to proceed, or just go with good old HTTP but then you hit all sorts of limitations, and obviously zero security.

I wish there was a way to opt into a "TOFU" https mode for these use cases (which is how browsers dealt with invalid HTTPS certificates for a long time), although I realize that doing that without compromising the security of the internet at large might be tricky.

If somebody has a solution, I'm all ears. As far as I can tell the "solution" employed by many IoT vendors is simply to mandate an internet connection and have you access your own LAN devices through the cloud, which is obviously a privacy nightmare. But it's HTTPS so your browser will display a very reassuring padlock telling you all is fine!




It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.

What happens with offline LAN? And the ideal IoT devices that we would all want to have? (I mean those we dream about in all IoT HN posts, where the rants typically are that no internet connection should be needed for most of these kinds of devices)

What about offline tutorials? I'd like to provide a ZIP with a plain HTML tutorial that shows how WebRTC works, but users cannot serve it up on their LAN and access it from their laptops or phones, because WebRTC requires HTTPS. What's even worse, a self-signed cert doesn't work either: iOS Safari does NOT work with self-signed certs at all!

It's maddening, obstacles everywhere once you don't walk the path "they" (industry leaders, focused on mainstream online web services) want you to follow.

EDIT: There are lots [0] of features that require a secure context, i.e. a web page served over HTTPS. So the defaults to HTTP are not a silver bullet, and the security exceptions for localhost are not really that useful either, being limited to the same host.

[0]: https://developer.mozilla.org/en-US/docs/Web/Security/Secure...


> without recourse

Doesn't the post say they'll fall back to http if the https attempt fails?

> For sites that don’t yet support HTTPS, Chrome will fall back to HTTP when the HTTPS attempt fails.

The only change here seems to be that, from the user's perspective, initial connections to http-only sites will be a bit slower (vs. the opposite which used to be true: initial connections to https-only sites were slower).


I'm talking about the general state of HTTPS implementation. If you develop an offline device which offers a web UI, and it happens to use any feature that is deemed to require a Secure Context, you're out of luck.

WebRTC is such a feature, but there are lots more, and they can change from one version of the browser to the next one.

The players who are pushing so hard to shove HTTPS down our throats are simply closing their eyes and ignoring the use cases that are not interesting to them. The mandatory renewal timing is a good example: it used to be more than 1 year, now it is 90 days (and some would like to reduce it to mere weeks!) Absolutely great for the grand scheme of things and global security of the Internet, but dismaying for lots of other use cases.


I don't actually see the problem. If you're on a local network, there's no practical way to deal with certificates, so use http. Chrome will fall back. Problem solved.

If http support ever gets truly removed, I will be very upset. But that hasn't happened, so what is there to complain about?


HTTP is effectively considered legacy by the big web actors these days. More and more APIs are HTTPS-only (often for good reasons) and the "insecure" warnings you get from using HTTP become more intrusive every year.

The trajectory is pretty clear, the long term plan is to phase out HTTP completely. And I'm not against it, but I need a solution for LAN devices, and it doesn't exist at the moment because the big web actors do everything in the cloud these days and they don't care about this use case.


I AM against it, because it puts more centralized censorship power in the hands of the certificate authority.

Also, it completely cuts out "legacy" devices, basically anything more than 5 years old.

The Web is once again splitting into AOLized mainstream and "indie underground" that you have to make an effort to access.


Who is "the certificate authority" you're referring to here?


The authority who grants you your SSL certificate. There is more than one out there, sure, but you can't do it without them. And ultimately, they all answer to the same authority above them: the browser maker who populates the root trust store.

So, to summarize: one more way for the browser maker to control what the user can and cannot access without jumping through hoops.


The OP means that in using https (and being forced to use https) you are also being forced into paying a 'third party' an annual fee just to get a valid certificate.

That 'third party' is one of the recognized 'certificate authorities'.

But the OP's point is that by going https, you don't have a choice: you have to pay the certificate tax.


Right, and Let's Encrypt doesn't solve the problem, it just kicks the can to DNS, which is globally unique and costs money. Communicating between your computer and any device that you supposedly own without the slow, unnecessary, and increasingly intrusive permission of some cloud IoT stack will become more and more difficult.


This is not true; you can set your host to trust a self-signed certificate without much difficulty. Check out this tool for example: https://github.com/FiloSottile/mkcert (prev discussion at https://news.ycombinator.com/item?id=17748208)


I would like to trust a given root cert for only a specific domain (and subdomains).

I.e. *.int.mycorp.com, but not www.mybank.com

Browsers don't let me do that, it's all or nothing. X509 name constraints aren't great either and don't give me, the browser operator, the power.


Self signing doesn’t let the world access my website without some scary warning.


That’s irrelevant to this discussion about hosting sites on a LAN with no internet access.

If you need https on the public internet you need a trusted cert.


Don't think personal LAN, think e.g. industrial automation: Many sensible companies want modern sensor systems that provide REST APIs and so on, but don't want those to access the internet. The hosts in this case often are appliance-like devices from third parties.


But that’s my point, and many others’. Sure, we can self sign, but it’s useless for the WWW. You’re forced to pay up to one of the few certificate providers. Thankfully, Let’s Encrypt has made it free and easier, but it’s not a no-brainer.


How long do you think it would take someone who has never been to HN?

I don't think they would even know the option exists.


Letsencrypt provides a really good service.

I can recommend the docker image made by linuxserver in particular [0]. Makes HTTPS a (tax-free) breeze.

[0] https://docs.linuxserver.io/general/swag


That's OK then, if that's what we all have to do to run any devices inside our LAN/home network.

Want a NAS box for sharing family files/photos or some other IoT device at home? Just set yourself up some other device to run the docker image, get yourself a certificate from LetsEncrypt and then... install it on the NAS box? How does that happen?


Perfect time to radicalize the underground (say, by beginning to experiment with Gemini or other protocols); the mainstream, as usual, only knows how to follow.


Gemini requires TLS 1.2 or higher.


But it doesn't rely on CAs. It relies on TOFU.


I prefer HTTP :)


Let's Encrypt exists; your argument is moot.


Can you use it on a microcontroller in a home network?


The problem is that there is no way to deal with certs on a local network, but the OP would like to be able to use https anyway; http might be considered too insecure for their use case.


What I do is buy localme.xyz and get a wildcard cert via DNS validation. This way you get SSL for offline devices. But you need to renew the cert periodically.


I wish there was a way to automate wildcard certs. At the moment I'm building a python script that logs in to my domain registrar's panel and updates DNS records.


Let's Encrypt supports wildcard certificates: https://community.letsencrypt.org/t/acme-v2-and-wildcard-cer...


If your domain provider's API sucks, or doesn't exist, or requires generating a password/key with more permissions than you're willing to give a script, look at acme-dns [1] and delegated DNS challenges:

https://github.com/joohoi/acme-dns
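
For a rough idea of what the delegated flow looks like, here's a sketch of a certbot --manual-auth-hook in Python that pushes the DNS-01 token to an acme-dns instance. The credentials and subdomain are placeholders you'd get back from acme-dns's one-time /register call:

    #!/usr/bin/env python3
    # Hedged sketch of a certbot --manual-auth-hook for acme-dns.
    # Prereq: a CNAME from _acme-challenge.yourdomain.com to <uuid>.auth.acme-dns.io
    import os
    import requests

    ACME_DNS_UPDATE = "https://auth.acme-dns.io/update"  # or your self-hosted instance
    HEADERS = {
        "X-Api-User": "uuid-from-register",  # placeholder credentials
        "X-Api-Key": "key-from-register",
    }

    # certbot exposes the DNS-01 validation token in this environment variable
    token = os.environ["CERTBOT_VALIDATION"]

    resp = requests.post(
        ACME_DNS_UPDATE,
        headers=HEADERS,
        json={"subdomain": "uuid-from-register", "txt": token},
    )
    resp.raise_for_status()

The registrar's API (or lack thereof) never enters the picture; the only thing it ever sees is that one static CNAME record.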


Make your own CA, install on each computer, install certificates, voila.


Telling your clients to install your certificate in their computer/browser store is not very practical. And they will need to do that regularly.


It shouldn’t be practical, that’s by design. Imagine if every captive portal had you install their root certificate to access the WiFi, with just the click of a button.


Not regularly; my root cert is valid for 10 years.


Repeat every 3 months or whenever the root certs expire.


3 months? I must have updated FireFox / Discord / VS Code /etc. about a hundred times in the last 3 months. Plenty of chances for them to add renewed SSL whatevers inside one of the updates.


> 3 months? I must have updated FireFox / Discord / VS Code /etc.

I think this state of affairs is nuts. With the exception of Firefox, because web browsers have an inordinate number of security issues to contend with.


And other programs don't?


An instant messaging client shouldn’t be executing arbitrary remote code, no.


It's not really possible to prevent that. E.g. a well-crafted image can easily trigger an RCE on some older versions of Android: https://nakedsecurity.sophos.com/2019/02/08/android-vulnerab...

Issues like this exist at all layers of the stack, so anything touching the internet needs regular security patches.


I agree completely. But, I also think that in most cases, if a simplistic piece of software like an IM app needs a security patch every three months, regularly, it's a sign the attack surface is too large.


Why would the certs you create for this purpose be made to expire?


It needs to expire within 397 days, because otherwise the cert will not be valid, even if the CA is marked as trusted. https://www.zdnet.com/article/google-wants-to-reduce-lifespa...

edit: a word


The article you linked to is kind of confused and I'm not sure I blame them. This stuff is really complex!

According to the proposal[0], leaf certificates are prohibited from being signed with a validity window of more than 397 days by a CA/B[1]-compliant certificate authority. This is very VERY different from the cert not being valid. It means that a CA could absolutely make you a certificate that violates these rules. If a CA signed a certificate with a longer window, they would risk having their root CA removed from the CA/B trust store, which would make their root certificate pretty much worthless.

To validate this, you can look at the CA certificates that Google has[2] that are set to expire in 2036 (scroll down to "Download CA certificates" and expand the "Root CAs" section) several of which have been issued since that CA/B governance change.

As of right now, as far as I know, Chrome will continue to trust certificates that are signed with a larger window. I've not heard anything about browsers enforcing validity windows or anything like that, but would be delighted to find out the ways that I'm wrong if you can point me to a link.

Further, your home made root certificate will almost certainly not be accepted by CA/B into their trust store (and it sounds like you wouldn't want that) which means you're not bound by their governance. Feel free to issue yourself a certificate that lasts 1000 years and certifies that you're made out of marshmallows or whatever you want. As long as you install the public part of the CA into your devices it'll work great and your phone/laptop/whatever will be 100% sure you're made out of puffed sugar.

I guess I have to disclose that I'm an xoogler who worked on certificate issuance infrastructure and that this is my opinion, that my opinons are bad and I should feel bad :zoidberg:.

[0] https://github.com/cabforum/servercert/pull/138/commits/2b06... [1] https://en.wikipedia.org/wiki/CA/Browser_Forum [2] https://pki.goog/repository/


HTTP does not solve the problem if you still want your traffic encrypted in transit.


Yeah, I think we need a browser that isn't developed by companies with vested interests in having all your traffic go to them...


Then again I think Google would do just fine even if Firefox was the only browser.


Self-signed certificates seem reasonable in this context - unless I’m missing something.


They might to you, but the browser doesn't agree. It will scream with all its force to all your users that accessing that product is a really, really dangerous idea.


You can set up the users' machines so that they trust your certificate.


I have tried to do just that but ran into all kinds of difficulties:

1. Overhead: I have 5 devices that I own, 3 of my wife's, and a smart TV. Setting all this up takes a lot of time, even if it worked fine.

2. What about visitors to my home that I want to give access? They need the cert as well, together with lengthy instructions on how to install it.

3. How do I even install certs on an iPhone?

4. Firefox uses its own certificate database -- one for each profile. So I'll have to install certs in Firefox AND on the host system (for e.g. Chrome to find them).

5. All these steps need to be repeated every year (90 days?!) depending on the cert expiration period.

Eventually I just gave up on this. It's not practical. There needs to be a better solution.


I bought a domain for use on my local network. I use letsencrypt for free certs, and I have multiple subdomains hosted under it. It works very well and wasn't that hard to set up. It's actually better organized and easier to use than my old system, since I had to take the extra step up front to organize it under a domain.


I am as upset as you about this, and cancelled IoT-related projects because of it.

But, re #3 (How do I even install certs on an iPhone?):

AFAIK (though I've never done it) you use a configuration profile

https://developer.apple.com/documentation/devicemanagement/c...

https://support.apple.com/guide/deployment-reference-ios/cer...


https://news.ycombinator.com/item?id=17748208

I found the steps described in https://github.com/FiloSottile/mkcert reasonable to follow. It describes an iOS workflow too.


$15 a year for a domain, throw traffic through local [split?] DNS and traefik with the Let's Encrypt DNS challenge and call it a day? I have over 25 internal domains & services with 25 certs auto-renewing and no one can tell - it just works and is easier than self-signing certs and loading them into whatever rando service or device you're trying to secure.


Split DNS is dying - DoH is seeing to that. Sure, canary domains exist, but they won't forever.


I'm not sure I follow - what does split DNS have to do with DoH? I don't want my internal DNS addresses public (there's no need, plus security), and for some addresses I have different IPs internal vs external.


Browsers send DNS queries to an external provider like Google, rather than the network-provided server which has the internal addresses (and which may override external addresses for various reasons).


Split-DNS will never die; it would kill far too many internal corp environments where there are no public DNS entries.


I have been looking at using MDM for my iPhone so I can install trusted root certs and require it to be on vpn when not using a specific list of WiFi networks.

It is a hassle and as far as I could see it requires resetting the device and I’m not sure I can restore my backup over it and retain both.


If you keep your CA secure, no reason that you can't set the expiration of the root cert to something like 10 years.


Will browsers accept that?


Browsers accept a root CA with a long lifetime. Certs signed by CAs installed by the user or admin also allow long lifetimes (and probably still will for a while).


Maybe it is a dangerous idea. You could be snooping on them for all they know. A little truth never hurts.


There is an important difference between (A) trusting "just this one" cert for a specific reason, and (B) installing a root cert that is able to impersonate any server.

It ought to be a practical to do (A) without doing (B), but due to a variety of deep human and technical problems, it isn't.


> initial connections to http-only sites will be a bit slower (vs. the opposite which used to be true: initial connections to https-only sites were slower).

More than a bit. For HTTPS-only sites, the site could serve a stub HTTP endpoint on port 80 that redirects to HTTPS. The redirect causes maybe some milliseconds to a few seconds (worst case) of latency.

HTTP-only sites, on the other hand, can't do an HTTPS stub as easily (the primary reason you'd want HTTP-only is probably that you don't want to or can't deal with the certificate management - and if you can set up a redirect from HTTPS to HTTP, you might as well go full HTTPS)

So the only option for HTTP-only is to not open port 443 at all - meaning Chrome has to wait out the 30 seconds (!) network timeout until it can try the fallback. So pure-HTTP sites would become a lot more unpleasant to use.

(A site might be able to cut this short by actively refusing the TCP handshake and sending an RST packet - or by opening and immediately closing the connection. I don't know how Chrome would react to this. In any case, that's likely not what HTTP-only sites in the wild are doing, so they'll need to update their software - at which point, they might just as well spend the effort to switch to HTTPS)
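
To illustrate that last idea, here's a minimal sketch of a listener that resets every connection on port 443 immediately. Setting SO_LINGER to zero makes close() send an RST instead of a normal FIN; binding the low port needs root, and whether Chrome falls back instantly on RST is exactly the open question above:

    import socket
    import struct

    # Accept connections on 443 and reset them immediately, so HTTPS-probing
    # clients can (hopefully) fail over to HTTP without a long timeout.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 443))
    srv.listen(16)

    while True:
        conn, _ = srv.accept()
        # l_onoff=1, l_linger=0: close() now sends RST rather than FIN
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
        conn.close()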


You can simply send an ICMP reject message, which should direct your browser to immediately try any fallbacks or other hosts.

Timeouts occur when you incorrectly configure your firewall to drop packets instead of rejecting them.


True, that would work as well. Though I believe it's recommended as a best practice to simply drop packets so as not to help port scanners.


I don't worry about port scanners. If your infrastructure becomes less secure because of a port scanner, it's not very well secured. Sending a REJECT is cheap, no DDoS opportunity and helps browsers and other apps fail over fast.

The recommended practice I've heard everywhere is that an ICMP REJECT provides no opportunities for an attacker they wouldn't have with 10 minutes extra time (modern port scanners can be set aggressive enough that REJECT or DROP doesn't matter if they have a known open port they can obtain an ACCEPT timeline from).


Yeah, that absolutely makes sense - as I said, a site could also avoid the timeout by sending a TCP RST packet. My point was more that I believe not many sites are doing any of this as in the past, "best-practice" firewall configuration was to be as silent as possible.


Being as silent as possible has only two effects: an attacker takes 2 seconds longer and you significantly degrade the network experience of all your users. It hasn't been best practice in a while (outside the greybeard firewall admin circles), i.e., almost a decade at this point. Anyone who hasn't figured that out should really consider not doing network admin if their knowledge is a decade out of date.


Hmm, OK. Might be that my knowledge was not really up-to-date there. I'd really appreciate it if network protocols designed for common benefit (such as ICMP) were not discouraged due to security, for a change. In this case, sorry for spreading FUD, that wasn't my intention.


> It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.

I constantly have issues with this address bar hiding the scheme and even the www.

One issue is when I quickly want to select some parameters or delete parts of the URL in order to go "up" one level.

What drives me absolutely insane is their inconsistent autocomplete functionality. Sometimes I end up googling "examp" instead of navigating to www.example.com, a page I had already visited and which therefore showed up as the autocompleted domain in the address bar. Sometimes pressing enter autofills the address, sometimes it googles the halfway-typed domain. If I tab the halfway-typed domain, it always completes it and an enter will navigate to it.

There is some strange difference between entering a domain and performing a search; something feels off. I can't exactly tell what it is, but sometimes I end up with submissions which I did not intend.

Also, something is off with pressing the down key to select the topmost entry - the one which gets selected with tab just gets skipped when I use the down key.

Another thing: if I want to query "Raspberry Pi disable wifi" or something else which begins with "Raspberry Pi", I get suggested the URL raspberry.org, a domain which I have often visited, and am forced to type through the entire word Raspberry before the URL functionality aborts and switches over to googling mode. Maybe there is a keyboard shortcut or something which would help me out, but it simply isn't intuitive.


It's a silly hack, but if you install Google's "Suspicious Site Reporter" extension for Chrome, then the Chrome address bar retains the full URL all the time.


You can also right click the address bar > "Always show full URLs".


OMG, I can't thank you enough!

(Who reads those menu entries anyway, at least fully?)


Or you could right click on the address bar and select "Always show full URLs"


In Chrome you can right click the address bar and select "Always show full URLs". I prefer how Firefox highlights the most important part and still shows the full URL.


70% of scenarios? If I were to guess, it would be 70% of your time that you're happy with https, and 30% that you're not. But it's 99.9% or higher in reality.

Also, did you know 80% of facts are made up? XD


That number was totally made up, of course :) that's why I quoted it... didn't really want to be too pedantic and explicitly say it, but maybe I should have.


But the number matters to your point. Focusing on 70% of users to the detriment of 30% seems a lot less defensible than focusing on 99.9% of users to the detriment of 0.1%.

The claim you made was “chrome is hurting a large minority of users and they should make a browser in a more fair way” but this changes when you change your made up numbers to something more like “chrome is hurting a tiny minority of users with weird use-cases like me and I don’t like it”

FWIW, it feels like the problem is more that your use-cases don’t fit into web PKI, and I agree with you. But I don’t think harming the security of web browsing for the vast majority is the solution to those problems.


> Focusing on 70% of users to the detriment of 30% seems a lot less defensible than focusing on 99.9% of users to the detriment of 0.1%.

Ah, but now you’re measuring a different thing than the GP, users vs scenarios!

I’d hazard a guess that at least 30% of users need to log into a router at some point or another. I hope it’s more than 30%, because everyone else is likely paying ridiculous prices for a crappy router from their ISP.


I would be substantially surprised if 10% of people had logged into a router EVER and even those folks spend 99.999% of their time on actual websites.


The change doesn’t affect routers (which I think most people don’t know how to log into these days) as there is no https default for URIs outside of the scope of normal PKI (ip addresses and single-level names). Are we disagreeing about the actual change or an imagined future one?


I wonder if a scheme could be invented where your router could be responsible for issuing certs to local devices. Forgetting about the impossibilities of industry adoption, would such a scheme be possible?

E.g. your router/DHCP controller/AD box gives an IoT device a DHCP ip and maybe a DNS address, and additionally it will provision a cert+key to that device by some standard protocol (keeping this secure might be impossible?). Router has an internal CA cert+key to do this.

Your PC then (handwavy) "knows" to retrieve the CA cert of your router by some standard protocol (dhcp extension?), and "knows" to trust it for devices on the router's subnet.

Is a scheme like this possible?


One problem is that it's easy to inject malicious DHCP on to any network you have access to, and you can then route all traffic to yourself (by telling clients that you are the gateway.) This kind of attack is partially mitigated because of TLS - redirecting all traffic to yourself isn't particularly useful if it's all encrypted. But if you could issue a cert along with the DHCP it'd be game over for everyone on the network.


If you're the network operator you can MITM network traffic, of course. If you aren't, how are you running a DHCP server when the switch will block the packets?


Switches don't typically block DHCP packets. You can literally just spin up your own DHCP server and plug it in to a switch port - if your fake server responds to a DHCP request faster than the legit DHCP server the client will get your lease instead of the right one. It's this way by design - it's not at all uncommon for the DHCP server to not run on the router itself, but on some other device elsewhere in the network, or even outside the layer 2 network using a DHCP Relay.


Depends on who configures them, but normally you’d have dhcp snooping on your switch



> IP addresses, single label domains, and reserved hostnames such as test/ or localhost/ will continue defaulting to HTTP.

According to the post this shouldn't be an issue.


See my edit. There's lots of stuff browsers forbid you from doing if HTTPS is not in use, so the kind of defaults you quote are not really that useful. For example, the browser won't let you capture webcam video (MediaDevices.getUserMedia()) from a page which is not https or localhost (so, good for a computer where you are running some software, not so good for an embedded device you want to install in your home and access from within your LAN with e.g. your phone, for whatever reasons)


Is it the case that self-signed certs don't work in iOS at all? I'm looking around, and I appear to see tutorials for how to properly configure one in iOS.

https://medium.com/collaborne-engineering/self-signed-certif...


I'm talking about user access. At least other browsers still allow it (but that's also prone to change at the whims of the developers), but in Safari for iOS the page will fail silently and won't load, with absolutely no feedback as to why.

Having to install custom-made Root CAs into all and every client device doesn't sound to me like an ideal solution...


Unfortunately, since one is breaking the SSL trust model, that's probably the right solution. Not unlike having to explicitly enable "Developer mode" before a whole host of security-breaking options are available.

Actually, that's one solution Apple could consider: if a user has enabled Developer Mode on a given iOS device, allow the trust model to be broken with an "Are you sure you know what you're doing?" button instead of a silent failure.


Scammers: "You have to enable developer mode to see our new bank website because it's in development"


At that point, isn’t it easier to send the user to chase.com.scammer.com?

The goal isn’t to make a 100% foolproof system (because you can’t), and needing to flip a switch called “developer mode”, which preferably also displays a warning message, should make it clear something is wrong.

...I think this whole discussion is kind of missing the point though. Developers are not the only people who need to log in to routers.


Yeah, I don't know what OP is talking about, I'm using one on my iPhone right now. Enterprises deploy them all the time.

It is true that in recent versions of iOS (in the past five years or so), you have to install the certificate in Safari, then go to Settings->General->About, scroll all the way down, and manually trust the certificate (to ensure you really know what you're doing by enabling it). And iOS doesn't make this known anywhere outside of that special menu three levels deep, I suppose so as not to confuse people who had an attacker install a cert on their phone somehow.


If you are talking about installing the Root CA in the iPhone, yeah. That's how I do it in my development devices.

But for a user, iOS Safari (not Safari for MacOS) doesn't show any certificate warning that the user can accept, like other browsers do. In fact, it just fails absolutely silently. You'd have to connect it to a Mac and open up the developer tools in the desktop's Safari to see the errors that are being printed on the JS console.

Otherwise, you'd just be left wondering why it just doesn't work like all the other browsers.


Unfortunately, user behavior testing shows those certificate warnings are a threat vector. There's a reason the browsers have been moving towards the exits on trusting the user to understand the security model enough to override the trust breakage.

Chrome pops a warning, but (with a few exceptions) doesn't let you just navigate through it (there's a secret key sequence you can type to override it, but it's both purposefully undocumented and periodically rotated to make it something that you can only know if you have the chops to read the source code or consult the relevant developers' forums).


I'm using a self-signed cert on iOS/macOS and it works just fine with Safari. Safari is messed up in other ways with TLS, though. Like it reuses HTTP/2 connections when making requests for a different Host running on the same IP address as the host it connected to previously, which completely breaks client certificate selection. And unless you recompile nginx with custom patches, it also doesn't work with nginx, because the SNI and the actual Host header differ, which nginx doesn't like by default.


70% of web servers don't have access to the Internet?

Do you have a source on that? I would guess more like 99.99%.


>It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.

It's even less funny when you realize that the class of devices that gets effectively crippled includes the vast majority of all IIoT devices. Including ones that run manufacturing and power generation. Yeah, your precious personal website is now secure from MITM attacks. The factory that made your car, however, uses critical infrastructure controlled by web interfaces with no encryption at all. Congrats.


How do the hosts on your local LAN find each other?

If via Multicast DNS, what stops you from publishing TLSA records as well?


> It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.

70% of the scenarios impact 99.99% of users as well as the project's intended scenario.

> being limited to the same host.

Yes, that's the point, of course.


> "I'm all for HTTPS everywhere but right now for my products it's either: https with self-signed certificate, which basically makes any modern browser tell its user that they're in a very imminent danger of violent death should they decide to proceed, or just go with good old HTTP but then you hit all sorts of limitations, and obviously zero security."

How about what Plex did for its self-hosted media servers?

"First they solved the problem of servers not having a domain name or a stable IP (they are mostly reached via bare dynamic IPs or even local IPs) by setting up a dynamic DNS space under plex.direct"

"Then they partnered with Digicert to issue a wildcard certificate for *.HASH.plex.direct to each user, where HASH is - I guess - a hash of the user or server name/id."

"This way when a server first starts it asks for its wildcard certificate to be issued (which happened almost instantly for me) and then the client, instead of connecting to http://1.2.3.4:32400, connects to https://1-2-3-4.625d406a00ac415b978ddb368c0d1289.plex.direct... which resolves to the same IP, but with a domain name that matches the certificate that the server (and only that server, because of the hash) holds."

https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...
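
The name mapping itself is trivial - something like this (the hash is copied from the example above; in reality it's a per-server value):

    # Plex-style mapping: LAN IP -> per-server certifiable hostname
    ip = "1.2.3.4"
    server_hash = "625d406a00ac415b978ddb368c0d1289"  # per-server hash, placeholder
    hostname = f"{ip.replace('.', '-')}.{server_hash}.plex.direct"
    # -> 1-2-3-4.625d406a00ac415b978ddb368c0d1289.plex.direct
    # The plex.direct DNS zone resolves this back to 1.2.3.4, and the server's
    # wildcard cert for *.<hash>.plex.direct matches the name.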


That's very cool, but it still requires a working and reliable internet connection.

Also these ultra-long URLs are very clumsy and can't really be used directly, so you need some sort of cloud frontend where the device phones home in order to announce its LAN IP; the user then goes through there in order to connect to their LAN devices.

For something like Plex it makes a lot of sense since you're probably going to have an internet connection when you use it anyway, but for my devices it's a deal breaker.

And at any rate, that's a whole lot of infrastructure just to be able to expose a simple web interface.


Yes, why don't we come up with an extremely elaborate scheme to issue more or less faux certificates to these devices that still breaks in practice because it looks like DNS rebinding and requires an internet connection for the device and some VC funded service on the other end in perpetuity for correct operation?

Ultimately, the question is why the fuck my browser needs TurkTrust, Saudi CA or others to authenticate the device when I could do that right fucking now by turning it over and reading a label. No 3rd parties required.


But this requires the manufacturer of the IoT device to provide a central service like Plex does.

Maybe they can't afford it, or the device is expected to run for a very long time, like even after the manufacturer goes out of business. Or as some other commenters have said, it's a personal / hobby project and the "manufacturer" doesn't have the means nor the will to maintain some outside management server.


How does the client learn the full domain name (the one with the hash)?


Presumably the hash is deterministic and uses data that the client had access to when making the old style of request


The article says

> IP addresses, single label domains, and reserved hostnames such as test/ or localhost/ will continue defaulting to HTTP.

I don't think this affects you. If you are accessing a device on your LAN, you either use its IP address, or if you use DNS, you must be using your own DNS resolver, in which case you can just use a single-label domain name such as http://media/ (and you can omit the "http://" in that case). This is also the case for many enterprise networks. Tip: you need to enter a trailing slash to tell Chrome it's a single-label domain, otherwise Chrome will think you want to search.

I see nothing to worry about.


You could be using mDNS (bonjour), in which case local addresses look like 'whatever.local'. This is actually pretty common?

Honestly if they just made it so self-signed worked for .local that would probably help a lot.


mDNS uses link-local multicast so it does not work if your local network is more than one (l2) network segment (e.g. separate segment for wired and wireless).


Here's an approach I've used before successfully. It's not perfect but it's better than nothing.

1. Create your own root Certificate Authority.

2. Create a script using your favorite language and libraries that will create a new certificate for each device, something along the lines of "myiotdevice-AABBCCDD.local". The AABBCCDD needs to be some sort of serial number that's assigned during manufacturing and won't be repeated between devices.

3. Add to your product support for ZeroConf/mDNS/DNS Service Discovery and advertise an https web server at myiotdevice-AABBCCDD.local.

4. Provide instructions to your users on how to download and install the certificate for your root CA (this only needs to be done once).

5. Print the name "myiotdevice-AABBCCDD.local" on the device and instruct users to type that in to a browser's address bar.

I'm doing this from memory so I may have missed an intricacy here or there (like DNS SD is a weird story on Windows 10) but this approach should basically work well enough.

EDIT - good commentary in replies about the dangers of the CA being compromised. Also, good mention of X.509 Name Constraints and how they can be used to mitigate that danger somewhat. More info here: https://systemoverlord.com/2020/06/14/private-ca-with-x-509-...
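
For anyone who wants to try it, here's a rough sketch of steps 1 and 2 using Python's 'cryptography' package (all names and lifetimes are illustrative, not a hardened recipe):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def name(cn):
        return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

    now = datetime.datetime.now(datetime.timezone.utc)

    # Step 1: a self-signed root CA (private, so CA/B lifetime rules don't bind it)
    ca_key = ec.generate_private_key(ec.SECP256R1())
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(name("MyIoT Root CA")).issuer_name(name("MyIoT Root CA"))
        .public_key(ca_key.public_key()).serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        .sign(ca_key, hashes.SHA256())
    )

    # Step 2: a per-device certificate with the mDNS name in the SAN
    dev = "myiotdevice-AABBCCDD.local"
    dev_key = ec.generate_private_key(ec.SECP256R1())
    dev_cert = (
        x509.CertificateBuilder()
        .subject_name(name(dev)).issuer_name(ca_cert.subject)
        .public_key(dev_key.public_key()).serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=397))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(dev)]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )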


> 1. Create your own root Certificate Authority.

2. Ensure that the security around your new root CA is watertight, so that if your environment ever gets compromised, someone can't generate a new *.google.com or *.yourbank.com certificate signed by your CA and then MITM your connection.


3. use cross signing with name constraints to not have this problem

https://tools.ietf.org/html/rfc5280#section-4.2.1.10


4. Find out that name constraints are either not supported or ignored by basically all major libraries.


Issuing a CA cert with name constraints is good, but the end user should be able to recognize whether the certificate is constrained to their domains or not.


The end user should be able to choose the domains the root is valid for - regardless of x509 name constraints.


So then I have to install a root CA for every random IoT product I buy? Which also entails handing them the keys to my machine, since being a root CA means any certificate they generate will be trusted.


That should work, but trusting a root cert from a third party makes me a bit wary depending on how it is done.

If the certificate is scoped to only that domain or to only domains used by that user then I suppose it's OK but there is currently no way to enforce this, that I am aware of, without the user understanding and inspecting the certificate.

Thinking out loud here: It would be neat if browsers supported some form of addresses which are public key hashes like is done in many distributed systems. Maybe, out of caution, it would only be supported on local networks. For ease of use this address could be discovered via QR code or a simpler local dns name.


I think that is the solution - supporting scopes for certificates - but I am afraid big companies won't be keen on donating resources to implement it.


Any device manufacturer that does this should not be allowed to touch anything related to security. It's one thing to have a little article about why they get the big scary security warning and how to add their device's cert as a one-off exception but "let this random IoT manufacturer vouch for any website" is nuts.


I was looking into a similar approach, but wasn't sure how to fix it for mobile devices and things like Chromecast. Any ideas?


I think the solution is TLS-PSK[0]. But browsers don't support pre-shared key mode. If they did, each IoT device (or consumer router, NAS, etc) could ship with a unique key which your browser could prompt for on first use. These could be even be managed by password managers, so you'd get the trust on all your devices.

Why isn't this a thing?

[0] https://en.wikipedia.org/wiki/TLS-PSK
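
For reference, the plumbing already exists below the browser: Python 3.13's ssl module grew PSK support, so a client looks roughly like this (identity, key, and address are made up; note there's nothing comparable exposed in any browser, which is the point):

    import socket
    import ssl

    # Hedged sketch of a TLS-PSK client (requires Python 3.13+)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False                     # no certificates in PSK mode
    ctx.verify_mode = ssl.CERT_NONE
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2   # use the TLS 1.2 PSK cipher suites
    ctx.set_ciphers("PSK")
    # The key would be the per-device secret printed on the label / in the manual
    psk = bytes.fromhex("00112233445566778899aabbccddeeff")
    ctx.set_psk_client_callback(lambda hint: ("device-1234", psk))

    with socket.create_connection(("192.168.1.50", 8443)) as sock:
        with ctx.wrap_socket(sock) as tls:
            tls.sendall(b"GET / HTTP/1.0\r\n\r\n")
            print(tls.recv(4096))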


In the grand scheme of things, IoT devices that don't phone home to a central server just aren't a big enough scenario to drive browser features.


The very partial solution that I've been experimenting with and trying to refine is to "abuse" DNS records and Certbot's DNS challenges so that I can have a bunch of public subdomains that point at intranet sites.[0] There's no rule that says you can't point a DNS record at a local IP.

This really isn't a full solution though because there are instances where you don't want a public DNS record at all. It's also not particularly plug and play, at least right now. And it requires you to have a static domain name and an Internet connection.

It's a step in the right direction, but not perfect, and not usable in every situation. Setting up a custom certificate server and configuring every device is a no-go for me, maybe that works for highly managed networks. I don't trust myself to do it, and even if I did, it's too time consuming and annoying to set up for new devices. At least with my current setup I don't need to transfer any information (other than the DNS lookup) out of my network, and anyone and any device on my network will immediately be able to connect to whatever service I have, and I don't need to worry about accidentally compromising every website I visit.

I'm not sure what the better solution would be. I'd also be interested in hearing ideas about this, it's a problem I'm running into as I try to figure out how to get HTTPS encryption on my local projects. And I do want HTTPS on those projects. I don't want my local network to be running everything unencrypted just because it's behind a LAN. But it's very tricky to try and set something up that's both robust and dependable, and that is fast enough to be usable within 1-2 minutes of me starting a new project that I'm just hacking on.

It's also something that we're thinking about at work. We'd love to manage HTTPS for software installations on our client networks, but we don't want to force them to reveal too much info about their networks on the public Internet, and we don't want to deal with trying to integrate with whatever weird, half-broken certificate servers they have running.

[0]: https://danshumway.com/blog/encrypting-internal-networks/


Your solution comes to closest to mine, so I'll comment under here:

- Register the domain names you'll use in your LAN, point them to a public VPS and generate a wildcard TLS certificate.

- Copy the certificates to the servers in your LAN. This is the annoying part, as it needs to be done after every renewal (90 days with LE).

- Have Pi-hole in your LAN. Every serious IT professional considers it necessary for security reasons anyway. It allows you to set local DNS records - configure your domain names to point to the LAN servers.

This way, you get valid certificates on private servers without exposing DNS records to the public, and without having to configure each client individually. Other people here still using self-signed certificates are crazy...


It seems like we should have something like known_hosts for ssh, yeah. As long as it’s trusting one domain at a time (not a root CA), would it really be /that/ bad?

This and browser vendors being overbearing about extensions (I know they’re powerful) gets me down.


> As long as it’s trusting one domain at a time (not a root CA), would it really be /that/ bad?

I think the unhappy reality is that yes, it would. In security terms it's a good thing the major browsers are extremely hostile to invalid HTTPS certs, and do not give the user an easy way to proceed (as somehnguy mentioned).

If you give the average user a simple Click to proceed, they will use it unthinkingly. Then, an invalid cert would no longer be a website-breaking catastrophe (which it absolutely should be), instead it would just be a strange pop-up that the user quickly forgets about, and the door is opened to bypassing the whole system of certificate-based security.

If you have the technical know-how, you already have the ability to customise the set of trusted certificates on your machine (with the possible exception of 'locked-down' mobile systems like iOS). The rest is a matter of UI.

> This and browser vendors being overbearing about extensions (I know they’re powerful) gets me down.

Similar situation. Unless carefully policed, browser extension stores can be used as attack vectors, and whether it's fair or not, the browser gets a bad reputation.


Known_hosts works by having a list of trusted public keys. The TLS equivalent is adding the endpoint's TLS cert to your local trust store.


That's not equivalent and easily dangerous. Random server's TLS cert could be a wildcard or even a CA, which you should not add to your trust store with a single click.

The certificate needs to be either restricted to specific domains (preferable) or validated to make sure there aren't any suspicious attributes (seems easy to get wrong or reject many certificates).


Isn't this the default behavior of most browsers? Access an https service with an untrusted tls certificate, the browser throws a warning and offers a way to permanently trust the certificate.


I am actually glad that they don't permanently trust it, like I believe Safari does. Accepting invalid certificates in Safari always freaks me out.


Neither Chrome nor Edge offer a simple way to permanently trust the cert. I’m sure there is a way to do it but they don’t make it obvious. It’s maddening as someone who develops and distributes local network apps with https.


Sounds like you're using Windows, and I believe both ultimately outsource that to the OS, so you'd need to look wherever Windows manages certificate trust to find and remove any old trusted certs. HTH.


I'm on chromium right now and it does remember my decision to trust self signed cert once I click proceed to unsafe domain the first time. It's not ideal, and it shows an alert to inform the user of the fake certificate, but it works and unlocks the https only APIs on a local offline environment. It's permanent until you switch user profile or click on the alert on the left of the address bar and reenable the warnings for that domain.


If I click accept/trust, what does the browser actually do?

Surely, it won't start trusting the certificate as it is (a self-signed wildcard or CA cert would get blanket MITM capability).


You used to be able to add your own certificates to a device's certificate store. Nonadjustable certificate stores complement planned obsolescence, and help split the market into consumer and enterprise devices, that latter of which you can charge a premium for.


Firefox has its own store, and you can add certs: https://support.mozilla.org/en-US/kb/setting-certificate-aut...

Chrome uses the OS store (for now: https://www.chromium.org/Home/chromium-security/root-ca-poli...) which you can also add certs to.


I can't do that with my Chromecast, though, which is my point. There are devices that depend on HTTPS to function, but are designed such that the user who owns the device cannot add their own certificates to their trust store.


A consequence of running software you don’t control. They all laughed at Stallman


Yeah, there really needs to be a "secure, but not trusted" mode. My suggestion would be add a "trusted-only" TXT DNS entry that a browser could check when presented with an untrusted connection.

HTTP: gray broken padlock
HTTPS+Cert: green padlock
HTTPS+no cert: gray padlock
HTTPS+no cert+trusted-only: red broken padlock

Any complaints? No? Ok, let's make it a standard! Oh wait... we're not in control of the standard, it's the world's largest cloud hosting, service and product providers. Intranets and locally-accessible embedded devices are a threat to their business model. Fuck.....


There's no such thing as "secure, but not trusted". The security depends on the trust. That isn't just how TLS works; it's how all secure key exchanges work.


That's exactly how mail works between servers though. Granted, it's about semantics, but virtually all mail servers accept TLS connections without the need to check cert validity of their respective counterparts.


Yes: encryption does not work in multi-hop SMTP email. Email is not a secure messaging system, and it is difficult to even build a novel secure messaging system on top of it.

Client-server TLS has a goal of actually thwarting adversaries. SMTP encryption is mostly about raising the costs of adversaries (I think there's plusses and minuses with this strategy; to some threshold, increasing costs for the US IC is actually helping them, organizationally, because the IC's real primary goal is budget-seeking).


That is very obviously not the case. A self-signed certificate protects data from being intercepted and read just as well as a signed one. The only thing "valid" certs protect against that self-signed certs don't is impersonation. In the case of an intranet or local embedded device, if someone can MITM your connection, you're already screwed - and either way, you are just as screwed getting MITMd with an unchecked cert as you are with no encryption. The difference is that without any encryption, an attacker doesn't even need to MITM you - they can just sit quietly and snoop on your packets.


One of the basic jobs of a secure transport is to prevent MITM attacks. MITM attacks are the modal attack on TLS. It's not 1995 anymore. Nobody's using solsniff.c.


s/secure/encrypted/ ?


There’s little point in encrypting if you’re not authenticating


Which is why ssh is meaningless compared with telnet?


Obviously, SSH authenticates. But on that first connection, when it prints the "you've never seen this key before" warning? You get that you're not getting any real security on that connection, right?


On the face of it, it sounds simple enough: special treatment when the IP address is an IETF-designated private IPv4 address (e.g. 192.168.x.y).

Is there some reason this wouldn't work, that I haven't thought of?


I considered that, but I think at the moment there's no concept of IP address for web certificates, it's all based on domain names as far as I know.

It doesn't mean it's not doable of course, but I could understand if it makes people uneasy, since it means that the same domain and the same certificate would behave differently depending on what it resolves to.

It may be an interesting solution to consider though. That would definitely make my life easier.


> there's no concept of IP address for web certificates, it's all based on domain names as far as I know

Regardless of whether you can or can't issue a certificate with a CN of an IP address, the browser doesn't receive the certificate in isolation, it receives it from an IP address, and can handle certificate validation differently depending on what it's connected to.

This may be a terrible idea for reasons I haven't considered (it probably is), but I can't think of any off head myself right now.

EDIT: this is probably terrible because someone can just stick a MITM proxy on your lan, and poison your DNS to resolve google.com to a RFC1918 address and boom.


You absolutely can get a certificate for an IP address. Clients should verify them based on the common name, and a subject alternative name has various field types including IP address.

A quick Google search shows various certificate authorities who will issue certificates for IP addresses.



Whitelisting specific addresses makes me uncomfortable.

But I can't actually articulate why it would be bad in this instance, so maybe it's fine...


Well, RFC1918 addresses just specify which ranges should not be advertised into the default-free zone (aka the internet routing table). That says nothing about whether a network is a LAN or not.

One could totally build a network with globally routed addresses, and not announce those addresses to the rest of the world.


> "One could totally build a network with globally routed addresses, and not announce those addresses to the rest of the world."

Could, and people do; I've worked on networks where the original people must have misunderstood networking and did the equivalent of using 10.0.0.0/8, 11.0.0.0/8, 12.0.0.0/8 for internal networks, including public /8s they didn't own, so they lost access to one or two chunks of the internet - and it never seemed to cause all that many problems for them working this way (so no motivation to do a big involved risky rework-everything project). We added new private network subnets for new build things, but never swapped everything over. It'll phase out eventually I guess.


There are companies who’ve been using public IPs on their intranet for decades, however.


On a related note, pinning the public keys of TLS certificates in browsers used to be a thing (HPKP) and it did mitigate certain classes of attacks with caveats (i.e, let's hijack a domain using an "incompetent" domain registrar and MITM clients that previously visited this site before, happens more than you think[1][2]).

Given how it was configured using HTTP headers and with the average site that has buggy webapps and such that could be used for header "injection" independent of the webserver it was unfortunately considered a theoretical persistent DoS vector, and thus removed from browsers.

I'm not convinced other solutions (CAA, CT) are adequate replacements because at best, they are reactive (versus preventative) solutions, and CAA assumes all CAs are properly checking DNS records at the time of issuance and that those DNS queries are not being intercepted, which is a big assumption in my book.

[1]: https://www.fox-it.com/en/news/blog/fox-it-hit-by-cyber-atta...

[2]: https://krebsonsecurity.com/2020/03/phish-of-godaddy-employe... (okay, was just a deface, but still accomplished with a hijacked registrar account)


Browsers would need to be modified, but I wonder if TOR's method of URLs with the public key encoded could be repurposed for devices on a LAN?

https://hashhash.lan/

(Causes browser to broadcast a message on LAN and wait for a response. Keys are exchanged and if valid, completes connection.)

Devices would need to be programmed with a private key at the factory and print out a QR with the encoded hash and stick it somewhere discreet.

What's that? The factory is producing devices all with the same private key to save money? This is why we can't have nice things.


> https with self-signed certificate, which basically makes any modern browser tell its user that they're in a very imminent danger of violent death should they decide to proceed, or just go with good old HTTP but then you hit all sorts of limitations, and obviously zero security.

If you go the self-signed route, you'll encounter devices that simply won't work with them, especially IoT devices. If you go the HTTP route, you'll still encounter devices that simply won't work with that. For example, you can't cast an HTTP resource from Chrome or Chromium to a Chromecast, even if it exists on your LAN.

As long as there are devices that you can't insert your own certificates into their certificate stores, this will be an issue.


>I wish there was a solution for those of us who develop web interfaces for embedded products designed to live on LAN, often without any internet access and no well defined domain name.

Don't use the browser?

I understand the temptation to use the browser, but this is the price you pay for using someone else's platform: They're free to close whatever door they want.

HTML renderers are a dime a dozen. Electron is a thing. Serve the HTML using _another_ renderer.

JavaFX isn't bad either but I'm not gonna sell anyone on that front because I don't even do it, but the option is there.

Please stop using my browser in obscure and painful ways just so you can (understandably) avoid having to write a native UI.


I would much rather have a web UI, please. A router, webcam, network storage, firewall, etc. that responds on ports 80 or 443 and serves up a usable web interface is a dream: all you need is a username and password and you can guess the rest.

The distant past where you needed a desktop program - one that needed a login to the manufacturer's website and a support contract to download, was never native, always needed Java and had to be 3 versions out of date to work, then needed fiddling with Java's execrable and innumerable "security" prompts, then uncommon ports to be opened; and the older the device was, the more likely it was to not work with UAC, to need Admin rights, and to depend on old versions of libraries, until you ended up with one carefully curated, fossilized-in-amber management VM for that specific device - that time was much much much worse.

A 3D printer where you need CAD software to make much use of it? Fine, have a desktop program. But for a thing which only needs an IP address for management (and maybe to talk to cloud services), browser-based web management absolutely any day, please.


Telling users that, in order to connect their new router to the internet they must first download your native app from the internet... sounds like a non-ideal design?


To be fair, my current ISP (xFinity) and my previous ISP (Google Fi) both technically support some kind of web interface, but both push hard to get you to download their app and use that instead.

I had no problem (in either case) just downloading the app and using it to set up my networks. Most people know how app stores work now.


I agree. Am I right that this would be solved by preferring https for all ip addresses except 10.0.0.0/8, 192.168.0.0/24 and 172.16.0.0/16?


> all ip addresses except 10.0.0.0/8, 192.168.0.0/24 and 172.16.0.0/16

I assume you're trying to exclude RFC1918 addressing. As such:

> 192.168.0.0/24

This should actually be: 192.168.0.0/16

> 172.16.0.0/16

This should actually be: 172.16.0.0/12

(10.0.0.0/8 was correct)
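For anyone implementing such a rule, Python's standard library makes the RFC1918 membership check easy (the explicit list is spelled out here because ip_address.is_private also covers other reserved blocks):

    import ipaddress

    RFC1918 = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_rfc1918(addr: str) -> bool:
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in RFC1918)

    print(is_rfc1918("192.168.5.1"))  # True
    print(is_rfc1918("172.32.0.1"))   # False: 172.16.0.0/12 ends at 172.31.255.255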


IMHO, what's needed is a way for appliances to obtain a real, permanent global domain name (and therefore certificates, email, etc.).

There's a bunch of mostly-obsolete rules that assume domains are held by organizations with full-time staffs (like whois contact records, per-domain ICANN fees, UDRP, etc.). A TLD like ".thing" that allows non-humans to claim permanent global names on a first-come-first-served basis would let autonomous hardware devices integrate with the existing infrastructure without hacks and exceptions. Maybe a ccTLD could be convinced to do this?


> I wish there was a solution for those of us who develop web interfaces for embedded products designed to live on LAN, often without any internet access and no well defined domain name.

I don't know how it could work if you're truly disconnected from the Internet, want to connect arbitrary machines with no setup (i.e., without installing local CA certs), and don't want any sort of prompt on first use (the TLS-PSK someone else mentioned). We might just be stuck with http in that case. Chrome isn't turning off http support, just changing the default behavior to try https first.

What I can imagine though is home LAN appliances being able to get certificates automatically when you have an Internet connection, have a domain name (they're pretty cheap), and set up your router for it. The router could present a (hypothetical) DHCP option saying "get certificates from me" (maybe via the standard ACME interface) and use the DNS-01 challenge with the upstream ACME server (letsencrypt) behind the scenes on each request.

This is certainly more complicated than just doing the DHCP request for a hostname and being done, and it makes your appliance hostnames public, but you wouldn't have to make appliances accept traffic from the Internet, much less have all your traffic proxied through some cloud service. And I can imagine it being a standard router feature some day with a wizard that walks you through the setup.
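For illustration, the TXT value the router would have to get published for a DNS-01 challenge is just a hash of the ACME key authorization. This sketch follows RFC 8555, though the function and argument names are mine:

    import base64
    import hashlib

    def dns01_txt_value(token: str, account_thumbprint: str) -> str:
        # RFC 8555: the TXT record holds base64url(SHA-256(keyAuthorization)),
        # where keyAuthorization = token + "." + JWK thumbprint of the account key.
        key_authorization = f"{token}.{account_thumbprint}"
        digest = hashlib.sha256(key_authorization.encode()).digest()
        return base64.urlsafe_b64encode(digest).decode().rstrip("=")

    # The router publishes this at _acme-challenge.<appliance>.<your-domain>
    # via the upstream DNS, waits for propagation, then asks the CA to validate.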


Probably (and I have found myself in the same situation, btw) the best solution would be a fork of one of the browsers that would, for example, only browse to addresses in the user's hosts file. That way, it could still piggyback on all of the web-interface machinery of a modern browser, but things like the "omg, that's a self-signed certificate, you will die now" warnings could be safely removed, since the browser would only be able/willing to go to addresses in the user's hosts file. Just a thought.


> If somebody has a solution, I'm all ears.

We've known the solution forever: public-key cryptography [using asymmetric keys to share a symmetric key]. Every single server admin in the entire world already depends on it to secure access to their servers. You might also know it as "ssh keys".

You connect once, it says "this is the first time you've visited this site. please confirm these numbers are legit???", you compare some numbers, then you save those numbers, and if they ever change, the browser screams. There are ways to make it more user-friendly, such as QR codes or serial numbers.
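A minimal sketch of that ssh-style flow, pinning the SHA-256 fingerprint of the server certificate on first contact (the pin file and function names are illustrative):

    import hashlib
    import json
    import socket
    import ssl
    from pathlib import Path

    PIN_FILE = Path("known_devices.json")  # our ad-hoc known_hosts equivalent

    def cert_fingerprint(host: str, port: int = 443) -> str:
        # Disable CA verification: trust comes from the pin, not a CA.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check_tofu(host: str) -> bool:
        pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
        fp = cert_fingerprint(host)
        if host not in pins:
            print(f"First visit to {host}; pinning {fp[:16]}... confirm it's legit!")
            pins[host] = fp
            PIN_FILE.write_text(json.dumps(pins))
            return True
        return pins[host] == fp  # False means the key changed: scream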

You could do this a million different ways to differentiate it from the rest of the internet. They could require non-DNS-compliant names so the services could never route to the internet. They could dedicate a TLD like ".lan" or ".local" to it. They could add a new protocol prefix, like "local://" (but that doesn't jibe with their vision of completely eliminating the address bar). You could just create a new PKI cert attribute that specifies this is a local-only cert and to use public keys, and the browser could enforce that the IP address be RFC1918-only (but this is a terrible idea, as a hacker could just proxy requests from your router to bankofamerica.com or something).

Good luck getting any browser vendor to accept it if it doesn't personally benefit them. You could try bribery.


This is a huge pain point for me as a user, too. I interact with a lot of managed network devices that have web UIs as their primary configuration method. All existing options for this stink. At least Firefox still lets me click through the security warnings - in Chrome you have to know the 'thisisunsafe' incantation, and who knows when that will just go away. There has to be a better way.


Seems like a browser could treat the scenario where a user types an IP address differently from a normal domain-name resolution (e.g. even just changing the messaging to be less scary).

If you're actually using domain names on your LAN maybe you just have to bite the bullet and sign certificates too. You don't need internet access to have a properly signed certificate.


What about every major network that is not a very small business or home network?

Split DNS is used all across the globe to build internal networks.


Could you use a domain that's internal-only and get a wildcard certificate for it?


There is a (defunct?) W3C working group about this, see https://www.w3.org/community/httpslocal/ and https://github.com/httpslocal


A hypothetical solution would be to stick with the DNS SD / avahi concept and use domains ending in .local.

That would have the benefit that you could create a CA for that domain which cross-signs your certificates, avoiding the snakeoil workflow; it would have to stick to local IPv4 ranges and the fe80:: IPv6 prefix.

I've been digging through the DNS-SD specifications lately (and how airplay, airprint, airscan and others work)... and I'm mindblown by how simple all things IoT could be if everything supported the DNS service discovery RFC.

In a parallel world nobody has IP problems, and nobody has problems connecting to their printers.
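For a taste of how simple discovery gets, here's a sketch using the third-party python-zeroconf package (pip install zeroconf) to browse for the standard _http._tcp service type; class and variable names are mine:

    from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

    class WebUIListener(ServiceListener):
        def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
            info = zc.get_service_info(type_, name)
            if info:
                # parsed_addresses() yields the advertised IPs as strings
                print(f"Found {name} at {info.parsed_addresses()}:{info.port}")

        def remove_service(self, zc, type_, name): pass
        def update_service(self, zc, type_, name): pass

    zc = Zeroconf()
    browser = ServiceBrowser(zc, "_http._tcp.local.", WebUIListener())
    input("Browsing for web UIs on the LAN; press Enter to stop...\n")
    zc.close()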

The way it's currently going though, I don't see any legacy firmware working in the near future due to CORS. I mean, most of the admin interfaces will break, probably, because of how they use forms to submit configs and settings.


> I've been digging through the DNS-SD specifications lately (and how airplay, airprint, airscan and others work)... and I'm mindblown by how simple all things IoT could be if everything supported the DNS service discovery RFC.

It's amazing when you see it all, isn't it? IMO the slow part is communicating to product owners and IPv4-locked technical architects how these things are actually supposed to work.


I wish we could get an industry standard making .local official; then we could get browser-level support for ".local is allowed to be self-signed". Of course, a crapload of software still recommends against .local. And consumer routers and the like don't create a nice .local domain for all your crap.

Nevermind.


I wonder if you'd be able to use a .local name and somehow get a proper signed certificate for it, which you embed into your local device.

Alternatively (and this is pretty hacky, but should work), you get a full domain unique to your device, e.g. mydevice-admin.com, and you require users to run a small installation script on their local devices that'll map that hostname to the local IP of the device. If not set up, it'd show a webpage with instructions on how to run said installer. Then you embed the certificate on your devices.
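A sketch of what such an installer might do; the hostname is the hypothetical one from above, and it needs admin rights to write the hosts file:

    import platform
    from pathlib import Path

    # Hosts file location differs per OS.
    HOSTS = Path(r"C:\Windows\System32\drivers\etc\hosts") \
        if platform.system() == "Windows" else Path("/etc/hosts")

    def map_device(hostname: str, lan_ip: str) -> None:
        entry = f"{lan_ip} {hostname}\n"
        if entry not in HOSTS.read_text():
            with HOSTS.open("a") as f:
                f.write(entry)

    map_device("mydevice-admin.com", "192.168.1.50")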

There are some obvious security risks to this (if someone could extract the certificate and MITM the website and then trick people into running a malicious installer…), but at least your default experience would have the desired green check-marks.

(These are all probably shitty ideas.)


Let's Encrypt with *.lan.mydomain.com via DNS validation, installed all over where needed, and annoying to update every 90 days because it's in weird/internal/non-standard places :)


I develop broadcast TV equipment which is often rented all over the place for short amounts of time, often without any direct internet access, etc...

I simply cannot make any assumptions about the network these devices will run on, and can certainly not rely on any sort of DNS validation. Virtually 100% of the time the devices are addressed directly by IPv4. I really can't think of a solution for this situation.

For network you control your solution makes a lot of sense though.



That's quite a cool hack, but as I mentioned elsewhere in this discussion, it's still a lot of infrastructure to let a user connect to a device on their own LAN, and it still presupposes that the user will have a robust internet connection when they need to use the device.

That makes a lot of sense for Plex, but it's really not applicable for my equipment.


Interesting. I’m currently working on an “IoT” device and this seems like it could theoretically work. One concern I have is that there’s an initial step where the device creates an access point so that you can enter wifi credentials that it will use to connect to your home network. In this case, the device connecting to the local server will not have internet access, and would not be able to resolve the plex.direct domain. Maybe I can rely on the browser dns cache, but that seems pretty sketchy...


Perhaps you could distribute an Electron style client that has a self-signed certificate pre-configured and ask your clients to interface with the equipment via that?

If you were using Electron you wouldn't have to worry about browser support either as you'd just have to target Chrome/Blink.

Just brainstorming ideas here, someone will probably shoot me down.


DNS validation can entirely be done by a server on the internet, which does all the stuff necessary to get the certificate, and then gives the certificate to your end user device.

All the end user device needs is a connection to the internet once per 90 days. The vast majority of networks have sufficient network connectivity for this.
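Concretely, the device-side half can be as dumb as a periodic download; everything below (endpoint, auth scheme, paths) is invented for illustration:

    import urllib.request
    from pathlib import Path

    def fetch_cert(device_id: str, token: str) -> None:
        # Pull the freshly renewed cert bundle from the vendor's backend,
        # which did the ACME DNS-01 dance on the device's behalf.
        req = urllib.request.Request(
            f"https://certs.example-vendor.com/v1/{device_id}/bundle.pem",
            headers={"Authorization": f"Bearer {token}"},
        )
        pem = urllib.request.urlopen(req).read()
        Path("/etc/device/tls/bundle.pem").write_bytes(pem)
        # Then reload the embedded web server so it picks up the new cert.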


I think the more frustrating point is that you need all this internet infrastructure (with running costs) at all, even if your device has nothing to do with the internet.

If the vendor of your dumb device goes out of business, you can just keep using the device until it breaks down.

If the vendor of a "smart" device - or now anything with a modern web interface - goes out of business, the device will have a broken UX at best, and at worst turns into a brick 90 days later.

In the end, browsers are now a platform and you have to register and pay a subscription fee to make use of them.


> All the end user device needs is a connection to the internet once per 90 days. The vast majority of networks have sufficient network connectivity for this.

I'm no expert on long-tail use cases, but I'd imagine that most networks either have internet connectivity or they don't. I can't think of many situations where you'd only have internet once every 90 days.

Of course one could argue that 90 days is long enough that an employee could go around with a memory stick and manually copy the certificate to every device, which is theoretically possible but sounds like a ridiculous thing to do just to keep a web interface workable. (And even then, you'd somehow need a unique certificate for every device.)


> I can't think of many situations where you'd only have internet once every 90 days.

Having worked in broadcast news, I can think of hundreds.

News doesn't happen in the newsroom. It happens in the field. And very often in places without internet access. Sometimes for weeks or months at a time. (Think siege at Waco, plane crashes, hurricanes, etc.)


I work in broadcast news, specifically in connectivity in the field. I can't think of any time we'd be without some form of Internet for more than a couple of days (depending on whether you count China as Internet).

There's not much point in doing broadcast news if you can't file, and you can only file if you have an IP connection. (We do have some non-IP satellite, but without IP you wouldn't be able to do much in the way of production: no production system, no email, no phone.)

Covering natural disasters is why we have BGANs and generators and MREs and water purification kits. Internet access is as essential as any other high-risk safety equipment, and there's no point deploying if you can't file back.


If it's broadcast news, then there's some sort of uplink/downlink involved?

Is there no way to have a sideband of that used to broadcast/multicast an X.509 cert over a 90-day period?


I worked in embedded radios for the broadcast industry and SCADA applications for a spell and dealt with the same problems that the GP is describing. Many of these systems are composed of networks built on top of radio modems. It is very common that these devices can't reach the Internet...


They don't have to reach the internet, they need to reach your infrastructure. Your infrastructure needs to reach the internet.


I've literally never seen anybody use a domain name to address my devices, only a simple IPv4. That already makes it a nonstarter, but let's entertain the idea. Maybe I can convince my clients to change the way they work, they generally love that.

Just going through the trouble of having the customer mess with their OS's DNS resolver to connect to the device is ludicrous. Can you even do it on Windows without having to deal with the hosts file manually?

Then they need to remember to update it when the IP inevitably changes because they've moved onto a new project with a different address plan, and they'll invariably forget to change it or forget how to do it or do it wrong and then call me to help them.

And on top of all that, no I can't really expect my devices to have internet access, not even once every 60 days. It's not uncommon for broadcast operations to use costly and critical satellite links for internet access, and they're not going to let random devices use that without a good reason. More generally, even if there's an internet connection available they're likely not to configure the gateway correctly, then complain when the night of the big event their browser refuses to connect to my equipment, saying that they're about to be h4x0r3d.

My use case is certainly a niche, but I think it's probably a significant niche, given that everything and anything has a web interface these days. Cameras have web interfaces for configuration, midrange smart switches and routers have web interfaces, I've even seen power plugs with web interfaces...


There are a couple of possibilities here, given that we're talking costly and critical satellite links.

1. You set up a private PKI using a private CA. AWS will sell you one for $400/month. Install the public root in the trust stores of the clients and then issue 5 year private certs for either IP addresses, or for .local and use DNS-SD.

2. You create a private domain at private.example.com. You get letsencrypt to issue a wildcard for .private.example.com. Then you set up private DNS for that zone. Server cert needs to be updated every 90 days. That involves downloading a single PEM file from your internal network.


I don't think Let's Encrypt is going to be the right fit for your use case, then. I think your best bet is to work with another CA to get your company an intermediate cert that you can use to issue longer-lived certs that include IP addresses in the SAN.

Then it's just a matter of the devices connecting to the internet at least once a year and doing a very simple "Hey, I'm $device and using $address. Issue me a cert plz."


> work with another CA to get your company an intermediate cert

I'm no expert, but wouldn't that make GP's company effectively a delegate CA? This seems like it would need a very close relationship with the original CA - and all just for a simple web interface.

> include ip addresses in the SAN.

Not sure if this may be different with intermediate certs, but you won't find any public CA that will add private IP addresses as a SAN - as this would undermine the whole security model. If any CA did this, Chrome would likely ban them quickly.

I'm sceptical a CA would let you do that with an intermediate cert if there is any danger the leaf certs get into the wrong hands (e.g. because the devices are sold, and someone reverse-engineers one and manages to talk to the back-end service).


Let's Encrypt does not like that use case, btw, because you would need to validate the cert, store it somewhere, and then download it to the specified box, since the service can only be inside an intranet.


What does "does not like" mean in the context of let's encrypt?


A suggestion would be to treat destinations that do not need to traverse a gateway (ie local network) differently. Browsers would need to implement this. Further, they could present an initial dialog to 'trust' the local network. Obviously we don't want to do this in public Wi-Fi networks, but the OS already has a concept of 'private' vs 'public' networks, and the browser can easily know if the destination needs to be routed or it's local.
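A sketch of that routing test, using the third-party psutil package to enumerate local interface prefixes (whether this is a robust enough signal is exactly what browsers would have to decide):

    import ipaddress
    import psutil

    def is_on_link(dest: str) -> bool:
        # True if the destination falls inside any directly attached
        # prefix, i.e. no gateway traversal is needed to reach it.
        ip = ipaddress.ip_address(dest)
        for addrs in psutil.net_if_addrs().values():
            for a in addrs:
                try:
                    net = ipaddress.ip_network(
                        f"{a.address}/{a.netmask}", strict=False)
                except (TypeError, ValueError):
                    continue  # skip link-layer entries and scoped IPv6
                if ip in net:
                    return True
        return False

    print(is_on_link("192.168.1.20"))  # True on a typical home LAN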


You can get embedded devices with TLS running in 200K of RAM. Cypress has WiFi MCUs based on Cortex M4 running their WICED stack, which is a customized FreeRTOS, LwIP, and MbedTLS. I can't wholeheartedly recommend it as a platform because their modifications to reduce memory consumption break the MbedTLS API in subtle ways that make porting code more difficult than it needs to be, but it is possible to get secure networking on something less capable than an RPi.


It's funny how pushing stringent privacy and security defaults in one domain degrades the privacy and security experience in another. My jaded takeaway from the last 5 or so years is that the internet companies (understandably) don't care about non-internet experiences. I empathize with how annoying that reality is, because internet technology certainly works locally if you configure everything correctly... just not by default.

The actual problem though is fascinating, and there is a ton to unpack. For one, the internet was always supposed to be zero-trust. Security-by-NAT was an accident, not a feature. And IPv6 enshrines this reality (and it breaks my heart when I see tech companies trying to make IPv6 work like IPv4 in the home). The privacy problem is not really an issue if everything is appropriately firewalled and communicating securely. And as you mention, half the IoT things out there only use a central server to get around what is ultimately a problem NAT introduced: devices don't have public IPs and aren't first-class internet citizens.

Here's how it's supposed to work:

Your ISP delegates you an IPv6 prefix. As the gateway to your home site, your router advertises the public prefix, as well as a ULA prefix, to your home devices. Devices construct both permanent and temporary IP addresses from the public and unique-local prefixes (at this point a device that needs public internet access has 4 IP addresses). That's just IPv6 so far, but the point is that devices have multiple addresses: ones for public communication and ones for local communication.
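As an aside, generating the ULA prefix itself is trivial; here's a sketch of RFC 4193's scheme (the RFC derives the 40-bit Global ID from time and MAC via SHA-1, but plain randomness is close enough for illustration):

    import ipaddress
    import secrets

    def make_ula_prefix() -> ipaddress.IPv6Network:
        # fd00::/8 (locally assigned ULA) plus a 40-bit random Global ID,
        # yielding a /48 for the whole home site.
        global_id = secrets.randbits(40)
        prefix_int = (0xFD << 120) | (global_id << 80)
        return ipaddress.IPv6Network((prefix_int, 48))

    print(make_ula_prefix())  # e.g. fd3c:1a2b:9f00::/48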

Once you have the setup above you can do this:

Your ISP provides you a domain (optionally you purchase your own vanity domain, of course) and their nameservers delegate to your gateway device (or possibly some other one, but conveniently your gateway) as the nameserver for your home site. After a device comes online, it dynamically registers its desired hostname with the gateway, which will now respond to DNS queries with its address. The gateway should also serve PTR and SRV records for your site, unicast DNS-SD style. Your nameserver can serve the public record if the request comes from a public IP and the ULA record if the request comes from that prefix. All public traffic stays public and all local traffic stays local.

Since your devices are now publicly routable, they get certs using ACME. If you want to run local ACME on your ULA network, go for it, but you'll keep running into the original problem: browsers aren't configured to use your local CA by default, and getting users to bootstrap that is essentially impossible. In that vein, I do wish there was a way to start a browser window for "local" browsing where it only trusts one CA (your local one) and thus isn't mixing public and local security-domain concerns.

If devices don't want to communicate on the public internet because that's a privacy or security concern, then they simply don't provision themselves a public-prefixed address or add a public DNS entry, etc.

In short, NAT killed the internet and we're still recovering from it.


Picking specifically on the claim that IPv6 doesn't degrade user privacy.

In an IPv4 + NAT overload residential network, google can see 10 different accounts logging in from a single IP address.

In an IPv6 + privacy-extension-addressing residential network, google can see the IPv6 address used for each login, and can notice that five of those 10 accounts come from the same IPv6 address while the other five come from five distinct addresses.

That's more than it was able to glean before.

IPv6 was designed at a time when NAT was a hack to extend address space, before we had pervasive surveillance on the internet. IPv6 privacy extension addressing is a hack to try to address the fact that we now have pervasive surveillance on the internet.

The privacy properties of IPv4 NAT overloading came about by chance rather than by design, but tragically, the IPv6 privacy extensions end up worse by design than NAT was by chance.


You're not wrong... but honestly I don't understand what people expect when _browsing the internet_. If "all my household devices come from the same IP" is a privacy requirement for you, then you're always going to need NAT. And the advantage of your scenario disappears if any of those devices actually communicate with google's servers, because then you can profile the requests, look at user agents, probably get different tokens, etc. Your nit seems so marginal to me that I don't understand how championing this type of privacy paranoia benefits internet technology. If you're worried about google seeing your traffic, the answer is simple: don't send it to them.


What if someone bought `httpslan.net`, issued a wildcard certificate for `*.httpslan.net` and published the private key?

Then anyone could just add `192.168.0.13 my.httpslan.net` to `/etc/hosts` and use the published private key to verify connections?

It wouldn't be secure, as anyone could decrypt the connection, but for LAN use that's not relevant, and it would make the browser shut up.


Yeah, I came here to say just that. It's really annoying when Firefox is stuck on https for some reason, maybe history? So I have to test whether one of my LAN services works with curl.

I think it has to do with history, so I have to clear all history for that site, then start using it with http, and it should work fine. This is only Firefox.


Strict-Transport-Security perhaps? My preferred method for testing is a clean profile (firefox -no-remote -P) or just ctrl-shift-p to open private browsing, which does not read SiteSecurityServiceState.txt.


An arguably "not finished" solution for me has been to ramp up adoption of IPFS and use Brave as a default until other browsers support it. It is speeding up my transition to full-fledged adoption.


Just host a certificate + private key at a well known location and have all the devices download / update it every month. /s (I'd bet someone has done this).


How do you trust that the devices are allowed to pull down the private key?


This was a joke. It would make things look secure, but it would be extremely stupid to actually do it.


An ability for a browser to talk https over a Unix socket, or a Windows pipe, would help a lot.

Each can be secured by regular OS means, so a rogue impostor process won't be an issue.


> I wish there was a solution for those of us who develop web interfaces for embedded products designed to live on LAN

There almost is! Instead of self-signed certificates, use a certificate authority, and install that on the LAN's machines. https://github.com/devilbox/cert-gen

You can use macOS Server or Active Directory to push out the Certificate as trusted.

It's not perfect, but it's close enough for a LAN.
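If you'd rather script it than use the linked repo, a minimal root-CA sketch with the third-party cryptography package (pip install cryptography) looks like this; names and lifetime are arbitrary:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "My LAN CA")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)  # self-signed root
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow()
                         + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        .sign(key, hashes.SHA256())
    )
    # This PEM is what you'd push to clients via MDM / Active Directory.
    open("lan-ca.pem", "wb").write(
        cert.public_bytes(serialization.Encoding.PEM))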


Now http will tell them that they are treading on thin ice and face imminent death :) That's what Firefox does.


Perhaps something like Bluetooth pairing would work, where you enter a code into your browser to authenticate?


Would something like an Electron app allow you to embed an alternative root/public certificate?


Could always ship your own browser.



