Hacker News

It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.

What happens with offline LAN? And the ideal IoT devices that we would all want to have? (I mean those we dream about in all IoT HN posts, where the rants typically are that no internet connection should be needed for most of these kinds of devices)

What about offline tutorials? I'd like to provide a ZIP with a plain HTML tutorial that shows how WebRTC works, but users cannot serve it on their LAN and access it from their laptops or phones, because WebRTC requires HTTPS. What's even worse, a self-signed cert doesn't work either. iOS Safari does NOT work with self-signed certs at all!

It's maddening, obstacles everywhere once you don't walk the path "they" (industry leaders, focused on mainstream online web services) want you to follow.

EDIT: There are some (lots [0]) of features that require a secure context, i.e. a web page served through HTTPS. So defaulting to HTTP is not a silver bullet, and the security exception for localhost is not all that useful either, since it is limited to the same host.

[0]: https://developer.mozilla.org/en-US/docs/Web/Security/Secure...




> without recourse

Doesn't the post say they'll fall back to http if the https attempt fails?

> For sites that don’t yet support HTTPS, Chrome will fall back to HTTP when the HTTPS attempt fails.

The only change here seems like it's that, from the user's perspective, initial connections to http-only sites will be a bit slower (vs. the opposite which used to be true: initial connections to https-only sites were slower).


I'm talking about the general state of HTTPS implementation. If you develop an offline device which offers a web UI, and it happens to use any feature that is deemed to require a Secure Context, you're out of luck.

WebRTC is such a feature, but there are lots more, and they can change from one version of the browser to the next one.

The players who are pushing so hard to shove HTTPS down our throats are simply closing their eyes and ignoring the use cases that are not interesting to them. The mandatory renewal timing is a good example: it used to be more than 1 year, now it is 90 days (and some would like to reduce it to mere weeks!) Absolutely great for the grand scheme of things and global security of the Internet, but dismaying for lots of other use cases.


I don't actually see the problem. If you're on a local network, there's no practical way to deal with certificates, so use http. Chrome will fall back. Problem solved.

If http support ever gets truly removed, I will be very upset. But that hasn't happened, so what is there to complain about?


HTTP is effectively considered legacy by the big web actors these days. More and more APIs are HTTPS-only (often for good reasons) and the "insecure" warnings you get from using HTTP become more intrusive every year.

The trajectory is pretty clear, the long term plan is to phase out HTTP completely. And I'm not against it, but I need a solution for LAN devices, and it doesn't exist at the moment because the big web actors do everything in the cloud these days and they don't care about this use case.


I AM against it, because it puts more centralized censorship power in the hands of the certificate authority.

Also, it completely cuts out "legacy" devices, basically anything more than 5 years old.

The Web is once again splitting into AOLized mainstream and "indie underground" that you have to make an effort to access.


Who is "the certificate authority" you're referring to here?


The authority who grants you your SSL certificate. There is more than one out there, sure, but you can't do it without them. And ultimately, they all answer to the same authority above them: the browser maker who populates the root trust store.

So, to summarize: one more way for the browser maker to control what the user can and cannot access without jumping through hoops.


The OP means that in using HTTPS (and being forced to use HTTPS) you are also being forced into paying a 'third party' an annual fee just to get a valid certificate.

That 'third party' is one of the recognized 'certificate authorities'.

But the OP's point is that by going HTTPS, you don't have a choice: you have to pay the certificate tax.


Right, and Let's Encrypt doesn't solve the problem, it just kicks the can to DNS, which is globally unique and costs money. Communicating between your computer and any device that you supposedly own without the slow, unnecessary, and increasingly intrusive permission of some cloud IoT stack will become more and more difficult.


This is not true, you can set your host to trust a self signed certificate without much difficulty. Check out this tool for example https://github.com/FiloSottile/mkcert (prev discussion at https://news.ycombinator.com/item?id=17748208)
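For what it's worth, the mkcert workflow is short. A sketch (the hostname and IP are placeholders for whatever names your LAN device answers to):

```shell
# One-time: create a local CA and add it to the system/browser trust stores
mkcert -install

# Issue a certificate covering the names/addresses the device is reached by
mkcert myiot.local 192.168.1.50 localhost
```

The generated key and cert pair can then be dropped into whatever web server the device runs.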


I would like to trust a given root cert for only a specific domain (and subdomain).

I.e *.int.mycorp.com, but not www.mybank.com

Browsers don’t let me do that; it’s either all or nothing. X509 name constraints aren’t great either and don’t give me, the browser operator, the power.


Self signing doesn’t let the world access my website without some scary warning.


That’s irrelevant to this discussion about hosting sites on a LAN with no internet access.

If you need https on the public internet you need a trusted cert.


Don't think personal LAN, think e.g. industrial automation: Many sensible companies want modern sensor systems that provide REST APIs and so on, but don't want those to access the internet. The hosts in this case often are appliance-like devices from third parties.


But that’s my point, and many others’. Sure, we can self sign, but it’s useless for the WWW. You’re forced to pay up to one of the few certificate providers. Thankfully, Let’s Encrypt has made it free and easier, but it’s not a no-brainer.


How long do you think it would take someone who has never been to HN?

I don't think they would even know the option exists.


Letsencrypt provide a really good service.

I can recommend the docker image made by linuxserver in particular [0]. Makes HTTPS a (tax-free) breeze.

[0] https://docs.linuxserver.io/general/swag


That's OK then, if that's what we all have to do to run any devices inside our LAN/home network.

Want a NAS box for sharing family files/photos or some other IoT device at home? Just set yourself up some other device to run the docker image, get yourself a certificate from LetsEncrypt and then... install it on the NAS box? How does that happen?


Perfect time to radicalize the underground (say, by beginning to experiment with Gemini or other protocols); the mainstream, as usual, only knows how to follow.


Gemini requires TLS 1.2 or higher.


But it doesn't rely on CAs. It relies on TOFU.


I prefer HTTP :)


Let’s encrypt exists, your argument is moot.


Can you use it on a microcontroller in a home network?


The problem is that there is no way to deal with certs on a local network, but the OP would like to be able to use HTTPS anyway; HTTP might be considered too insecure for their use case.


What I do is buy localme.xyz and get a wildcard cert via DNS validation. This way you get SSL for offline devices. But you need to update the cert periodically.
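Assuming certbot (other ACME clients work the same way), a wildcard issuance via the DNS-01 challenge looks roughly like this; the domain is the commenter's example and the manual flow is shown, though a registrar API plugin (e.g. certbot-dns-cloudflare) can automate the TXT record:

```shell
# Wildcard certs require the DNS-01 challenge; certbot prints a TXT record
# value to publish under _acme-challenge.localme.xyz before continuing
certbot certonly --manual --preferred-challenges dns \
  -d localme.xyz -d '*.localme.xyz'
```

The resulting cert can be copied to LAN devices that never touch the internet themselves, since validation happens purely over DNS.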


I wish there was a way to automate wildcard certs; at the moment I'm building a Python script that logs in to my domain registrar's panel and updates DNS records.


let's encrypt supports wildcard certificates: https://community.letsencrypt.org/t/acme-v2-and-wildcard-cer...


If your domain provider's API sucks, or doesn't exist, or requires generating a password/key with more permissions than you're willing to give a script, look at acme-dns [1] and delegated DNS challenges:

https://github.com/joohoi/acme-dns
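The delegation trick, sketched (the zone name is a placeholder; the public auth.acme-dns.io instance is used in the acme-dns README, but you can self-host one):

```shell
# 1. Register against an acme-dns instance; the JSON response contains
#    credentials and a unique challenge subdomain
curl -s -X POST https://auth.acme-dns.io/register

# 2. Delegate the challenge once, with a single CNAME in your real zone:
#    _acme-challenge.home.example.com. CNAME <uuid>.auth.acme-dns.io.

# 3. Point your ACME client's DNS hook at the acme-dns API; renewals then
#    only need rights to update that one TXT record, not your whole zone.
```

This keeps your registrar credentials out of the renewal script entirely.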


Make your own CA, install on each computer, install certificates, voila.
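A minimal sketch of that with openssl (all names are placeholders; a 10-year root signing a 397-day host cert, with the subjectAltName modern browsers insist on):

```shell
# 1. Create the CA key and a long-lived (10-year) self-signed root
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=My Home CA"

# 2. Create a key and CSR for the LAN host
openssl req -newkey rsa:2048 -nodes -keyout nas.key -out nas.csr \
  -subj "/CN=nas.home.lan"

# 3. Sign it with the CA, adding the subjectAltName extension
printf "subjectAltName=DNS:nas.home.lan\n" > san.ext
openssl x509 -req -in nas.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 397 -out nas.crt -extfile san.ext

# 4. Check the chain; then install ca.crt in each client's trust store
openssl verify -CAfile ca.crt nas.crt
```

The "install on each computer" step is the painful part, as the rest of the thread discusses.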


Telling your clients to install your certificate in their computer/browser store is not very practical. And they will need to do that regularly.


It shouldn’t be practical, that’s by design. Imagine if every captive portal had you install their root certificate to access the WiFi, with just the click of a button.


Not regularly, my root is 10 years long.


Repeat every 3 months or whenever the root certs expire.


3 months? I must have updated Firefox / Discord / VS Code / etc. about a hundred times in the last 3 months. Plenty for them to add renewed SSL whatevers inside one of the updates.


> 3 months? I must have updated Firefox / Discord / VS Code / etc.

I think this state of affairs is nuts. With the exception of Firefox, because web browsers have an inordinate number of security issues to contend with.


And other programs don't?


An instant messaging client shouldn’t be executing arbitrary remote code, no.


It's not really possible to prevent that. E.g. a well crafted image can easily trigger an RCE on some older versions of Android: https://nakedsecurity.sophos.com/2019/02/08/android-vulnerab...

Issues like this exist at all layers of the stack, so anything touching the internet needs regular security patches.


I agree completely. But, I also think that in most cases, if a simplistic piece of software like an IM app needs a security patch every three months, regularly, it's a sign the attack surface is too large.


Why would the certs you create for this purpose be made to expire?


It needs to expire before 397 days, because otherwise the CA will not be valid, even if it is marked as trusted. https://www.zdnet.com/article/google-wants-to-reduce-lifespa...

edit: a word


The article you linked to is kind of confused and I'm not sure I blame them. This stuff is really complex!

According to the proposal[0], leaf certificates are prohibited from being signed with a validity window of more than 397 days by a CA/B[1] compliant Certificate authority. This is very VERY different from the cert not being valid. It means that a CA could absolutely make you a certificate that violated these rules. If a CA signed a certificate with a longer window, they would risk having their root CA removed from the CA/B trust store which would make their root certificate pretty much worthless.

To validate this, you can look at the CA certificates that Google has[2] that are set to expire in 2036 (scroll down to "Download CA certificates" and expand the "Root CAs" section) several of which have been issued since that CA/B governance change.

As of right now, as far as I know, Chrome will continue to trust certificates that are signed with a larger window. I've not heard anything about browsers enforcing validity windows or anything like that, but would be delighted to find out the ways that I'm wrong if you can point me to a link.

Further, your home made root certificate will almost certainly not be accepted by CA/B into their trust store (and it sounds like you wouldn't want that) which means you're not bound by their governance. Feel free to issue yourself a certificate that lasts 1000 years and certifies that you're made out of marshmallows or whatever you want. As long as you install the public part of the CA into your devices it'll work great and your phone/laptop/whatever will be 100% sure you're made out of puffed sugar.

I guess I have to disclose that I'm an xoogler who worked on certificate issuance infrastructure and that this is my opinion, that my opinions are bad and I should feel bad :zoidberg:.

[0] https://github.com/cabforum/servercert/pull/138/commits/2b06... [1] https://en.wikipedia.org/wiki/CA/Browser_Forum [2] https://pki.goog/repository/


HTTP does not solve the problem if you still want your traffic encrypted in transit.


Yeah, I think we need a browser that isn't developed by companies with vested interests in having all your traffic go to them...


Then again I think Google would do just fine even if Firefox was the only browser.


Self-signed certificates seem reasonable in this context - unless I’m missing something.


They might to you, but the browser doesn't agree. It will scream with all its force to all your users that accessing that product is a really, really dangerous idea.


You can set up the users' machines so that they trust your certificate.


I have tried to do just that but ran into all kinds of difficulties:

1. Overhead: I have 5 devices of my own, 3 of my wife's, and a smart TV. Setting all this up takes a lot of time, even when it works fine.

2. What about visitors to my home, that I want to give access? They need the cert as well together with lengthy instructions on how to install it.

3. How do I even install certs on an iPhone?

4. Firefox uses its own certificate database -- one per profile. So I'll have to install certs in each Firefox profile AND in the system store (for e.g. Chrome to find them).

5. All these steps need to be repeated every year (90 days?!) depending on the cert expiration period.

Eventually I just gave up on this. It's not practical. There needs to be a better solution.


I bought a domain for use on my local network. I use letsencrypt for free certs, and I have multiple subdomains hosted under it. It works very well and wasn’t that hard to set up. It’s actually better organized and easier to use than my old system, since I had to take the extra step up front to organize it under a domain.


I am as upset as you about this and cancelled IoT-related projects because of it.

But, regarding #3 ("How do I even install certs on an iPhone?"):

AFAIK (though I've never done it) you use a configuration profile

https://developer.apple.com/documentation/devicemanagement/c...

https://support.apple.com/guide/deployment-reference-ios/cer...


https://news.ycombinator.com/item?id=17748208

I found the steps described in https://github.com/FiloSottile/mkcert reasonable to follow. It describes an iOS workflow too.


$15 a year for a domain, throw traffic through local [split?] DNS and Traefik with the Let's Encrypt DNS challenge, and call it a day? I have over 25 internal domains & services with 25 certs auto-renewing and no one can tell - it just works, and is easier than self-signing certs and loading them into whatever rando service or device you're trying to secure.


Split DNS is dying - DoH is sorting that. Sure, canary domains exist, but they won't forever.


I'm not sure I follow - what does split DNS have to do with DoH? I don't want my internal DNS addresses public (there is no need, plus security), and for some addresses I have different IPs internally vs externally.


Browsers send queries to an external provider like Google, rather than the network-provided server, which has the internal addresses (and which may override external addresses for various reasons).


Split DNS will never die; it would kill far too many internal corp environments where there are no public DNS entries.


I have been looking at using MDM for my iPhone so I can install trusted root certs and require it to be on vpn when not using a specific list of WiFi networks.

It is a hassle and as far as I could see it requires resetting the device and I’m not sure I can restore my backup over it and retain both.


If you keep your CA secure, no reason that you can't set the expiration of the root cert to something like 10 years.


Will browsers accept that?


Browsers accept a root CA with long lifetime. Certs signed by CAs installed by the user or admin also allow long lifetimes (probably will still for a while).


Maybe it is a dangerous idea. You could be snooping on them for all they know. A little truth never hurts.


There is an important difference between (A) trusting "just this one" cert for a specific reason, and (B) installing a root cert that is able to impersonate any server.

It ought to be a practical to do (A) without doing (B), but due to a variety of deep human and technical problems, it isn't.


> initial connections to http-only sites will be a bit slower (vs. the opposite which used to be true: initial connections to https-only sites were slower).

More than a bit. For HTTPS-only sites, the site could serve a stub HTTP endpoint on port 80 that redirects to HTTPS. The redirect causes maybe some milliseconds to a few seconds (worst case) of latency.

HTTP-only on the other hand can't do an HTTPS stub as easily (as the primary reason you'd want HTTP-only is probably that you don't want/can't deal with the certificate management - if you can set up a redirect from HTTPS to HTTP, you might as well go full HTTPS).

So the only option for HTTP-only is to not open port 443 at all - meaning Chrome has to wait out the 30 seconds (!) network timeout until it can try the fallback. So pure-HTTP sites would become a lot more unpleasant to use.

(A site might be able to cut this short by actively refusing the TCP handshake and sending an RST packet - or by opening and immediately closing the connection. I don't know how Chrome would react to this. In any case, that's likely not what HTTP-only sites in the wild are doing, so they'll need to update their software - at which point, they might just as well spend the effort to switch to HTTPS)


You can simply send an ICMP reject message, which should direct the browser to immediately try any fallbacks or other hosts.

Timeouts occur when you incorrectly configure your firewall to drop packets instead of rejecting them.
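As a sketch, with iptables (assuming the box serves plain HTTP only and you want connections to port 443 to fail fast rather than time out):

```shell
# Reject instead of dropping: clients get an immediate RST / port-unreachable
# and can fall back right away, instead of waiting out the connect timeout
iptables -A INPUT -p tcp --dport 443 -j REJECT --reject-with tcp-reset
iptables -A INPUT -p udp --dport 443 -j REJECT --reject-with icmp-port-unreachable
```

The default REJECT target already sends icmp-port-unreachable; tcp-reset is the friendlier choice for TCP clients.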


True, that would work as well. Though I believe it's recommended as a best practice to simply drop packets so as not to help port scanners.


I don't worry about port scanners. If your infrastructure becomes less secure because of a port scanner, it's not very well secured. Sending a REJECT is cheap, no DDoS opportunity and helps browsers and other apps fail over fast.

The recommended practice I've heard everywhere is that an ICMP REJECT provides no opportunities for an attacker they wouldn't have with 10 minutes extra time (modern port scanners can be set aggressive enough that REJECT or DROP doesn't matter if they have a known open port they can obtain an ACCEPT timeline from).


Yeah, that absolutely makes sense - as I said, a site could also avoid the timeout by sending a TCP RST packet. My point was more that I believe not many sites are doing any of this as in the past, "best-practice" firewall configuration was to be as silent as possible.


Being as silent as possible has only two effects: an attacker takes 2 seconds longer, and you significantly degrade the network experience of all your users. It hasn't been best practice in a while (outside the greybeard firewall admin circles), i.e., almost a decade at this point. Anyone who hasn't figured that out should really consider not doing network admin if their knowledge is a decade out of date.


Hmm, OK. Might be that my knowledge was not really up-to-date there. I'd really appreciate it if network protocols designed for common benefit (such as ICMP) were not discouraged due to security, for a change. In this case, sorry for spreading FUD, that wasn't my intention.


> It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.

I constantly have issues with the address bar hiding the scheme and even the www.

One issue is when I quickly want to select some parameters or delete parts of the url in order to "up" one level.

What drives me absolutely insane is their inconsistent autocomplete functionality. Sometimes I end up googling "examp" instead of navigating to www.example.com, which was a page I had already visited and which therefore showed up as the autocompleted domain inside the address bar. Sometimes pressing enter autofills the address, sometimes it googles the halfway-typed domain. If I tab the halfway-typed domain, it always completes it and an enter will navigate to it.

There is some strange difference in entering a domain and performing a search, something feels off. I can't exactly tell what it is, but sometimes I end up with submissions which I did not intend.

Also, something is off with pressing the down key to select the topmost entry (the one which gets selected with tab): it just gets skipped when I use the down key.

Another thing is if I want to query "Raspberry Pi disable wifi" or something which begins with "Raspberry Pi", that I then get suggested the URL raspberry.org, a domain which I have often visited, and am forced to type through the entire word Raspberry in order for the URL-functionality to get aborted and have it switch over to googling mode. Maybe there is a keyboard shortcut or something which would help me out, but it simply isn't intuitive.


It's a silly hack, but if you install Google's "Suspicious Site Reporter" extension for Chrome, then the Chrome address bar retains the full URL all the time.


You can also right click the address bar > "Always show full URLs".


OMG, I can't thank you enough!

(Who reads those menu entries anyway, at least fully?)


Or you could right click on the address bar and select "Always show full URLs"


In Chrome you can right click the address bar and select "Always show full URLs". I prefer how Firefox highlights the most important part and still shows the full URL.


70% of scenarios? If I were to guess, it would be 70% of the time that you're happy with HTTPS, and 30% where you're not. But it's 99.9% or higher in reality.

Also, did you know 80% of facts are made up? XD


That number was totally made up, of course :) that's why I quoted it... didn't really want to be too pedantic and explicitly say it, but maybe I should have.


But the number matters to your point. Focusing on 70% of users to the detriment of 30% seems a lot less defensible than focusing on 99.9% of users to the detriment of 0.1%.

The claim you made was “chrome is hurting a large minority of users and they should make a browser in a more fair way” but this changes when you change your made up numbers to something more like “chrome is hurting a tiny minority of users with weird use-cases like me and I don’t like it”

FWIW, it feels like the problem is more that your use-cases don’t fit into web PKI, and I agree with you. But I don’t think harming the security of web browsing for the vast majority is the solution to those problems.


> Focusing on 70% of users to the detriment of 30% seems a lot less defensible than focusing on 99.9% of users to the detriment of 0.1%.

Ah, but now you’re measuring a different thing than the GP, users vs scenarios!

I’d hazard a guess that at least 30% of users need to log into a router at some point or another. I hope it’s more than 30%, because everyone else is likely paying ridiculous prices for a crappy router from their ISP.


I would be substantially surprised if 10% of people had logged into a router EVER and even those folks spend 99.999% of their time on actual websites.


The change doesn’t affect routers (which I think most people don’t know how to log into these days) as there is no https default for URIs outside of the scope of normal PKI (ip addresses and single-level names). Are we disagreeing about the actual change or an imagined future one?


I wonder if a scheme could be invented where your router could be responsible for issuing certs to local devices. Forgetting about the impossibilities of industry adoption, would such a scheme be possible?

E.g. your router/DHCP controller/AD box gives an IoT device a DHCP ip and maybe a DNS address, and additionally it will provision a cert+key to that device by some standard protocol (keeping this secure might be impossible?). Router has an internal CA cert+key to do this.

Your PC then (handwavy) "knows" to retrieve the CA cert of your router by some standard protocol (dhcp extension?), and "knows" to trust it for devices on the router's subnet.

Is a scheme like this possible?


One problem is that it's easy to inject malicious DHCP on to any network you have access to, and you can then route all traffic to yourself (by telling clients that you are the gateway.) This kind of attack is partially mitigated because of TLS - redirecting all traffic to yourself isn't particularly useful if it's all encrypted. But if you could issue a cert along with the DHCP it'd be game over for everyone on the network.


If you’re the network operator you can MITM network traffic, of course. If you aren’t, how are you running a DHCP server when the switch will block the packets?


Switches don't typically block DHCP packets. You can literally just spin up your own DHCP server and plug it in to a switch port - if your fake server responds to a DHCP request faster than the legit DHCP server the client will get your lease instead of the right one. It's this way by design - it's not at all uncommon for the DHCP server to not run on the router itself, but on some other device elsewhere in the network, or even outside the layer 2 network using a DHCP Relay.


Depends on who configures them, but normally you’d have DHCP snooping on your switch.



> IP addresses, single label domains, and reserved hostnames such as test/ or localhost/ will continue defaulting to HTTP.

According to the post this shouldn't be an issue.


See my edit. There is lots of stuff browsers forbid you from doing if HTTPS is not in use, so the kinds of defaults you quote are not really that useful. For example, the browser won't let you capture webcam video (MediaDevices.getUserMedia()) from a page which is not HTTPS or localhost (so, good for a computer where you are running some software; not so good for an embedded device you want to install in your home and access from within your LAN with e.g. your phone, for whatever reasons).


Is it the case that self-signed certs don't work in iOS at all? I'm looking around, and I appear to see tutorials for how to properly configure one in iOS.

https://medium.com/collaborne-engineering/self-signed-certif...


I'm talking about user access. At least other browsers still allow it (but that's also prone to change at the whims of the developers), but in Safari for iOS the page will fail silently and won't load, with absolutely no feedback as to why.

Having to install custom-made Root CAs into all and every client device doesn't sound to me like an ideal solution...


Unfortunately, since one is breaking the SSL trust model, that's probably the right solution. Not unlike having to explicitly enable "Developer mode" before a whole host of security-breaking options are available.

Actually, that's one solution Apple could consider: if a user has enabled Developer Mode on a given iOS device, allow the trust model to be broken with an "Are you sure you know what you're doing?" button instead of a silent failure.


Scammers: "You have to enable developer mode to see our new bank website because it's in development"


At that point, isn’t it easier to send the user to chase.com.scammer.com?

The goal isn’t to make a 100% foolproof system (because you can’t), and needing to flip a switch called “developer mode”, which preferably also displays a warning message, should make it clear something is wrong.

...I think this whole discussion is kind of missing the point though. Developers are not the only people who need to log in to routers.


Yeah, I don't know what OP is talking about, I'm using one on my iPhone right now. Enterprises deploy them all the time.

It is true that in recent versions of iOS (in the past five years or so), you have to install the certificate in Safari, then go to Settings->General->About, scroll all the way down, and manually trust the certificate (to ensure you really know what you're doing by enabling it). And iOS doesn't make this known anywhere outside of that special menu three levels deep, I suppose to not confuse people who had an attacker install a cert on their phone somehow.


If you are talking about installing the Root CA in the iPhone, yeah. That's how I do it in my development devices.

But for a user, iOS Safari (not Safari for MacOS) doesn't show any certificate warning that the user can accept, like other browsers. In fact, it just fails absolutely silently. You'd have to connect it to a Mac and open up the developer tools on the desktop's Safari, to see the errors that are being printed on the JS console.

Otherwise, you'd just be left wondering why it just doesn't work like all the other browsers.


Unfortunately, user behavior testing shows those certificate warnings are a threat vector. There's a reason the browsers have been moving towards the exits on trusting the user to understand the security model enough to override the trust breakage.

Chrome pops a warning, but (with a few exceptions) doesn't let you just navigate through it (there's a secret key sequence you can type to override it, but it's both purposefully undocumented and periodically rotated to make it something that you can only know if you have the chops to read the source code or consult the relevant developers' forums).


I'm using a self-signed cert on iOS/macOS and it works just fine with Safari. Safari is messed up in other ways with TLS, though. For instance, it reuses HTTP/2 connections when making requests for a different Host running on the same IP address as the host it connected to previously, which completely breaks client certificate selection. And unless you recompile nginx with custom patches, it also doesn't work with nginx, because the SNI and the actual Host header differ, which nginx doesn't like by default.


70% of web servers don't have access to the Internet?

Do you have a source on that? I would guess more like 99.99%.


>It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.

It's even less funny when you realize that the class of devices that gets effectively crippled includes the vast majority of all IIoT devices. Including ones that run manufacturing and power generation. Yeah, your precious personal website is now secure from MITM attacks. The factory that made your car, however, uses critical infrastructure controlled by web interfaces with no encryption at all. Congrats.


How do the hosts on your local LAN find each other?

If via Multicast DNS, what stops you from publishing TLSA records as well?
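For illustration, the payload of a TLSA "3 1 1" record (DANE-EE, SubjectPublicKeyInfo, SHA-256) can be computed with openssl; the hostname here is a placeholder:

```shell
# Self-signed cert for the LAN device
openssl req -x509 -newkey rsa:2048 -nodes -keyout dev.key -out dev.crt \
  -days 365 -subj "/CN=printer.local"

# Hash the DER-encoded public key (this is the TLSA 3 1 1 association data)
openssl x509 -in dev.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256

# The record you would publish (e.g. alongside the mDNS A record):
#   _443._tcp.printer.local. IN TLSA 3 1 1 <hex digest from above>
```

Whether mainstream browsers would ever consult such records is, of course, the open question.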


> It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.

70% of the scenarios impact 99.99% of users as well as the project's intended scenario.

> being limited to the same host.

Yes, that's the point, of course.



