Life Is About to Get Harder for Websites Without HTTPS (troyhunt.com)
286 points by finnn on July 12, 2017 | 351 comments



I'm most worried about the "long tail" of often very interesting, useful, and rare content (a lot of it from a time when the Internet was far less commercialised) that is unlikely to be hosted over HTTPS, whose owner may even have forgotten about it or can't be bothered to do anything about it, but which still serves a purpose for visitors. The "not secure" label will drive a lot of visitors away, and may even lead to the death of many such sites.

Imagine someone who knew enough to set up a site on his own server a long time ago and had left it alone ever since. Maybe he'd considered turning it off a few times, but just couldn't be bothered to. Now he suddenly gets contacted by a bunch of people telling him his site is "not secure". Keep in mind that he and his visitors are largely not highly knowledgeable in exactly what that means, or what to do about it. It could push him over the edge.

...and then there's things like http://www.homebrewcpu.com/ which might never have existed if HTTPS was strongly enforced all along.

I understand the security motivation, but I disagree very very strongly with these actions when it also means there's a high risk of destroying valuable and unique, maybe even irreplaceable content. In general, I think that security should not be the ultimate and only goal of society, contrary to what seems the popular notion today. It somewhat reminds me of https://en.wikipedia.org/wiki/Slum_clearance .

(I also oppose the increased centralisation of authority/control that CAs and enforced HTTPS will bring, but that's a rant for another time...)


Unfortunately, if the owner of the content is not interested in keeping the site up, the content will be lost sooner or later anyway. He probably also does not bother to install security updates, and he will most likely stop paying the bills at some point (domain name, hosting server, etc.).

Installing Let's Encrypt is not much work and he might be motivated if a lot of people ask him. If he is really not interested, it is probably best to archive the website to a real archive and hope they make sure the content remains available. Unfortunately this also means that the archive will no longer be found on Google or most other search engines. It is really a shame that there is no work on Google's side to make sure archived content can be found among other search results.
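For the common case it really is only a couple of commands - a sketch assuming a Debian-like server running Apache, with a placeholder domain (requires root and network access, so not something to run blindly):

```shell
# Install certbot and let it obtain a Let's Encrypt certificate,
# configure Apache for HTTPS, and schedule automatic renewal.
# "example.org" is a placeholder for the site's real domain.
sudo apt-get install certbot python-certbot-apache
sudo certbot --apache -d example.org
```

Renewal is the part people tend to forget; certbot sets up a cron/timer entry so certificates renew well before their 90-day expiry.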


> Installing Let's Encrypt is not much work and he might be motivated if a lot of people ask him.

Assuming he has direct access to his server, of course. If he's on a shared host, he may not have the option to use Let's Encrypt, but may be forced to buy a certificate from the hosting company.


Many hosting companies already give out free Let's Encrypt certificates with the ease of a checkbox. And those that don't will have plenty of incentive to do so, because their customers will spam their support saying that their website is not safe any more (and many will switch if the price for a certificate is unreasonable).


The vast majority of hosting companies don't offer LE.


Such at-risk sites should be archived in case they do go down. This excellent site has info/tools on doing just that:

http://archiveteam.org/index.php?title=Main_Page


I suspect people who commonly look at the older or weirder parts of the Internet will get used to the "not secure" warning. If you don't care enough to fix it, you probably don't care much about growing a mainstream audience anyway.

"Not secure" doesn't mean you have to shut it down, it just means it's not secure.

This is part of a trend of gradually increasing production values for mainstream sites; it makes the less polished stuff look dated, but in a way that's just going back to the old days when the Internet was an obscure place with a small audience.


I completely agree... this will just dilute the impact of "not secure", and after a bunch of safe interactions with "not secure" sites, it might actually make the warning meaningless to the masses.

That being said, I think site owners will still feel the sting in their gut on seeing or hearing that their site is insecure, or that browsers are scaring visitors away... so I think it will speed up adoption even further, and I do agree that HTTPS will become the default.


How can one view a site's HTTPS certificate nowadays?

The feature got removed recently from Chrome and Firefox - why?

In older Firefox and Chrome (and still in IE11) you just clicked the "secure" icon next to the address bar.

Am I really the only one who misses this feature?


In Chrome, right click -> Inspect -> Security -> View Certificate


Thx.

But why hide it so that a normal web user can't access it?

Checking which CA signed the certificate is important for trusting a site.

Before, it was one left click and it showed you the company that signed the cert (optionally you could view the cert itself).

Now it's many steps: click the hamburger button -> More tools -> Developer Tools -> Security tab (bottom of screen) -> View Certificate (shows the certificate itself).

What a trainwreck of a decision to remove this vital info. E.g. YCombinator's cert is from "Comodo"; if it were suddenly from some funky issuer instead, that would lessen the trust.

I am sure it was just a mistake to remove it, some cargo cult.
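For what it's worth, the issuer is still one command away outside the browser - a sketch using openssl (needs network access; news.ycombinator.com is just the example host from above):

```shell
# Fetch the server's certificate chain and print who issued the leaf cert
echo | openssl s_client -connect news.ycombinator.com:443 \
    -servername news.ycombinator.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```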


Must be a Chrome thing. In Firefox 54 it's three clicks: lock icon, ">", "more info". Actually the second click already gives some summary including issuer.


The browsers probably want to start relying on HTTP Public Key Pinning rather than relying on the user to verify the public key.


Interesting you link to homebrewcpu.com, since one of the links on that page directs to a blog where all the images on the homepage have been broken by Photobucket's recent policy change. I feel like in the future people will be mystified at how much of the early internet just gradually vaporized.


I feel that way today. Most of the web pages I used as a teenager are just gone now.

Old myspace pages? Geocities? Old BBS boards?


People who have created and then abandoned such sites are very unlikely to self-host on their own hardware or VPS. Their web hosts will simply get certs through Let's Encrypt or other and things will continue working fine.


I hope LANs are excluded? I'm scared that I will get security warnings everywhere on my LAN.

- when I log in to my webcams it says the connection is not secure

- when I log in on my nas it says the connection is not secure

- when I log in on my router it says the connection is not secure

- when I log in on the web interface of mythtv it says the connection is not secure

- when I log in on my self hosted gitea instance it says the connection is not secure

- when I log in to my self hosted nextcloud it says the connection is not secure

- when I log in to the configuration page of my toaster it says the connection is not secure

All these things are on my LAN, and on most of them there is no way to install a TLS cert, nor would I want to do that.

Firefox already nags me that the connection is not secure when I enter a username and a password on any of those sites.


I'd hope LAN is included, because while you use some services at home, most people will primarily use LAN services in public Wifis. It would be sensible to have HTTPS used there. They'd have to come up with a new idea for login pages (currently most systems MITM HTTP requests), but otherwise it would make sense to get a warning when you access pages on your hotel/in-flight Wifi without encryption.


> most people will primarily use LAN services in public Wifis

[citation needed] - I would say most people will primarily use LAN services in private home/corporate networks. But I don't have a citation either.

It's different use cases. Maybe one needs the warning, the other one doesn't. But putting the warning on everything and overloading the user with them is not the right solution.

Using a self-signed CA is a pity, because installing it on every phone, tablet, laptop, TV, PC, ... is cumbersome in a home network, and making all hosts public and depending on an external CA for local resources just to avoid scary warnings can't be the right solution either.


> [citation needed] - I would say most people will primarily use LAN services in private home/corporate networks. But I don't have a citation either.

Sorry I don't have a citation here. But looking at my coworkers and family, none of them uses services in their home LAN. Most won't even know how to access the router.

On the other hand it's very common to use Wifi in public transport (at least here in London for the tube), airports, trains and hotels. Few people consider security when using these hotspots, making sure that SSL is enabled for all pages would be an improvement.


> On the other hand it's very common to use Wifi in public transport (at least here in London for the tube), airports, trains and hotels. Few people consider security when using these hotspots, making sure that SSL is enabled for all pages would be an improvement.

But do you use LAN services on that network? I think you use such a public hotspot to check Facebook, HN, search something on Google and the like, not to connect to some service on the hotspot's local network.


Many captive portals operate as a LAN service, where you may be expected to type a password. If the public Wifi has a roaming agreement, that password may even be for your ISP or mobile phone provider.


Yes, I agree. But I see that as an exception. The captive portal is ONE thing that needs to be secured and accessible inside the LAN of a public wifi hotspot. I see that as a special case, and public hotspot providers can work around it (using a domain for the portal that they own and have a cert for). But imho that does not mean browsers should throw security warnings at users for everything else inside their home LAN. At least until we have a solution that works for everyone. And

- depending on an external service for your internal home stuff, or

- installing a self signed CA on every device you own

is not a solution.

Until we have something we can replace the status quo with, we should not deprecate it.


> none of them uses services in their home LAN

They don't have printers? They don't connect to a NAT router before their cable modem? They never tether their computer to their phone? Their DVR doesn't have an app to control it? None of them have their receiver connected to their iTunes library?


While quite a few users will use services on their home LAN, contrary to the parent poster's statement, I believe the majority of users don't use those services via a web browser.

For instance (to draw from your examples):

1) Most people either install the printer drivers from the manufacturer's web site, or let Windows/OSX auto-discover and auto-configure the printer without a web browser.

2) Most users just use the wifi information provided by the cable/DSL provider when they installed the modem, use the WPS button, or install the app that came with their router, rather than use a web browser to configure it.

3) Most users will not tether their computer to their phone.

4) Most don't install a DVR app on their phone. If they did, their phone probably communicates with their DVR via some intermediate cloud service.

5) If their receiver auto-connects to their iTunes library, it probably won't complain about iTunes not having a public certificate either.


All these things are present in the many tech-savvy houses in my area ... and every single one of them (with the exception of the printer) is missing from less tech-oriented houses here. The printers are also usually locally connected when needed in many of my acquaintances' houses.

There's a clear divide that I see daily between households that understand their tech, and households that simply use whatever was set up for them. That divide seems to be growing wider, lately, too ...


Seems like something that should tie onto the public/private network distinction that Windows makes, though I can't remember ever seeing it in other OSes.


Can we do SSL locally though?


LANs should be included. It's just as bad as using most websites without TLS.

Maybe this will help force the companies behind IoT-like devices to finally take security a bit more seriously.


I would really love to see some kind of LAN TLS solution that doesn't rely on requiring you to have your own CA. I've thought a fair bit about the problem, but haven't come up with any solutions that I like.

Browsers, rightfully so, don't accept self-signed certificates. Active Directory and Group Policies can push out a trusted self-signed root CA certificate and generate certs that endpoints can use, but that's a pain in the ass and usually requires central IT to manage.

Please someone come up with something I haven't thought of yet that doesn't break the internet but gets useful certs onto my LAN!
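For reference, this is roughly what the "own CA" route looks like with plain openssl. The commands themselves are short; the pain is step zero, getting ca.crt trusted on every client device (the host name nas.lan is hypothetical):

```shell
# 1. Create a local root CA (this cert must be imported on every client device)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=Home LAN Root CA" -days 3650

# 2. Create a key and signing request for a LAN host
openssl req -newkey rsa:2048 -nodes -keyout nas.key -out nas.csr \
  -subj "/CN=nas.lan"

# 3. Sign the request with the local CA
openssl x509 -req -in nas.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out nas.crt -days 825

# 4. Check that the chain verifies against the local CA
openssl verify -CAfile ca.crt nas.crt
```

Modern browsers additionally require a subjectAltName extension on the host cert, which adds an -extfile step; omitted here for brevity.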


ssh is something that most users see as a secure protocol, and it uses TOFU (trust on first use). Maybe we could use the same auth mechanism for HTTP inside a LAN?
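A trust-on-first-use check is simple enough to sketch in a few lines of shell (tofu_check is a hypothetical helper, not an existing tool): pin the certificate's SHA-256 fingerprint per host the way ssh records keys in known_hosts, and complain only when it changes.

```shell
# TOFU sketch: remember a cert's fingerprint on first contact,
# and reject the host if the fingerprint later changes.
tofu_check() {                        # usage: tofu_check <host> <cert_file>
  local store="known_certs/$1" fp
  fp=$(sha256sum "$2" | cut -d' ' -f1)
  mkdir -p known_certs
  if [ ! -f "$store" ]; then
    echo "$fp" > "$store"             # first contact: trust and remember
    return 0
  fi
  [ "$(cat "$store")" = "$fp" ]       # later contacts: must match the pin
}

printf 'dummy cert A' > a.pem         # stand-ins for real PEM/DER certs
printf 'dummy cert B' > b.pem
tofu_check nas.lan a.pem && echo "first contact: trusted"
tofu_check nas.lan a.pem && echo "same cert: still trusted"
tofu_check nas.lan b.pem || echo "cert changed: rejected"
```

The weakness is the same as ssh's: the first connection is taken on faith, so an attacker already on the LAN at setup time wins.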


> LANs should be included. It's just as bad as using most websites without TLS.

Why? What's the threat model, that your employer/colleagues can see your traffic?


That any jerkoff with the right work shirt and demeanor can convince your coworkers that they need to service equipment at your demarc - and a lan turtle ends up connected to a trunk-enabled port on your switch that nobody notices for a year.

We are rapidly moving away from perimeter solutions and toward zero trust networking models. Yes, you should encrypt and authenticate inside your LAN.


Why not just give them a domain name? I gave them a subdomain of my personal domain and Caddy handles TLS for all of them automatically.


...but those services, served over HTTP, are not secured and can be spied upon by your insecure Chinese webcam or by the 3% of routers that have malware, isn't that correct? So the browser should say so; it's up to you to ignore the warning.


Well it's my lan, and I can decide what I trust and what not. For example in my home network I have different vlans: one for things i reasonably trust, and one for the chinese webcams and iot lightbulbs. If my router has malware, even if i talk to it over https I'm screwed anyway, I don't think that is something I can fix with https.


> Well it's my lan, and I can decide what I trust and what not.

The browser will still let you choose. It's just changing the default.

> For example in my home network I have different vlans: one for things i reasonably trust, and one for the chinese webcams and iot lightbulbs.

Congratulations, you are the 0.0001%.

> If my router has malware, even if i talk to it over https I'm screwed anyway, I don't think that is something I can fix with https.

Right, but if your lightbulb has malware and your router doesn't yet, then using HTTPS while talking to the router is pretty important.


> if your lightbulb has malware and your router doesn't yet, then using HTTPS while talking to the router is pretty important.

I agree. So what's the solution you propose?

- Using a self signed CA that you have to install on all your devices?

- Using a trusted by default CA, that makes your internal-only devices depend on an external service (the CA)?

Or something else?

I'm not saying that encrypting the traffic in a LAN is useless. I'm saying that https and the current CA system is not the solution.


> - Using a trusted by default CA, that makes your internal-only devices depend on an external service (the CA)?

This one. Realistically all those devices are depending on external services already.


> This one. Realistically all those devices are depending on external services already.

This is codifying (IMHO) bad practices and a really brittle architecture. Why would you want this? One of the reasons IoT security is a mess is that devices need to talk to internet endpoints instead of staying inside the NATted, firewalled LAN.


I think the NATed, firewalled LAN is an untenable concept: it blurs the line between private and public, a line that security requires we make clear and sharp. If a device is connected to a network, it's exposed enough that we should treat it as connected to a public network (which greatly simplifies our threat model); if it's not secure enough for that, it needs to be built as local-only hardware.


Way off on a tangent here, but can you point me in the direction of some documentation for VLANing specific devices: giving them access to the internet for whatever they need to do, letting trusted devices access them over the LAN, but not giving them the ability to call into the secure LAN?


Say you have a VLAN-capable switch and a spare PC with a couple NICs to use as a router. I use the Sophos XG Home Edition for example but other router setups (like the mentioned pfsense) should be similar.

Say ports 1-4 on the switch are your trusted devices, 5-8 are your untrusted, and 9-10 are the LAN side of the router. There are other ways to do it but this is probably simplest and easiest to mentally map. Set ports 1-4 and 9 as access ports for VLAN 100. Set ports 5-8 and 10 as access ports for VLAN 200. Now you have two virtual networks hooked into one physical, partitioned by the router.

You then set up eth0 (port 9) as the trusted LAN and eth1 (port 10) as the untrusted LAN in the router, and give each an IP range (separate subnets at a minimum). Now all trusted devices can talk to each other and all untrusted devices can talk to each other, but the two networks can't talk to each other yet.

You then set routes and rules (port filtering, SNAT, DNAT, etc) for port access from either network to the other to finely control what ports are available to what. You do the same thing with eth2 which would be the connection from the router to the modem for internet access.

Not a full tutorial but hopefully that points you in the right direction.
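On the router side, the inter-VLAN policy boils down to a handful of forwarding rules. A sketch with Linux iptables, where eth0 = trusted VLAN, eth1 = untrusted VLAN, eth2 = internet uplink (interface names and the subnet are hypothetical; Sophos/pfSense express the same policy in their GUIs):

```shell
# Replies to connections that were allowed in are always let back through
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# Trusted VLAN may open connections anywhere (untrusted VLAN and internet)
iptables -A FORWARD -i eth0 -j ACCEPT
# Untrusted VLAN may reach the internet...
iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
# ...but may not initiate anything toward the trusted VLAN (or anywhere else)
iptables -A FORWARD -i eth1 -j DROP
```

Rule order matters: the ESTABLISHED,RELATED rule comes first so that replies to trusted-initiated connections aren't caught by the final DROP.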


That's exactly what I did, but I can't point you to a howto or something like that. I just bought a vlan capable switch and a pfsense router and set everything up.


Some browser vendors have an agenda. But why now? Amazon.com worked fine without site-wide HTTPS from 1995 to 2006 - only the login page used HTTPS.

Now a small group is trying to break the web and enforce HTTPS, but why? Is it to de-anonymize web users?


Serious question: if I just run a simple blog with static HTML hosted with Apache, do I really need HTTPS? Will I be penalized by not having it?


3% of routers in Canada and the US have some kind of malware on them, and almost all other countries are worse off. You might not need HTTPS, but I need you to have it so I can trust downloading a PDF when I'm on your site. I need it so I don't get MITMed by some shitty black-hat tracking company. I need it so that while I travel I don't have to worry about shitty governments tracking what I'm reading or watching.

And if your answer is "sure, but there will be other websites that are HTTP" my response is "yeah, but one day soon when enough of the web is secure I'm going to disallow all HTTP connections from my browser". And eventually I'll disallow all connections that aren't on HSTS preload lists. And eventually, hopefully, I'll disallow websites that don't have HPKP with long expires.


> eventually, hopefully, I'll disallow websites that don't have HPKP with long expires

I doubt HPKP will ever see wide adoption. At least not in the form it has now. It's just too damn easy to bork the config and take your entire site offline with no way to remedy that error.


Not really. Just put a root level cert or two as your backups.


Then let me tell you it is quite easy to MITM by any company owning part of the connection.

Many moons ago I was using a proxy engine capable of unpacking/packing HTTPS requests.

It was an in-house proxy taking advantage of Apache and IIS APIs for certificate management and SSL connections.

We used it for debugging secure connections, but I can easily envision other purposes.

Many years have passed since 2000, but I am quite sure it is still quite possible to do it.


Proxies that do HTTPS inspection act as a client to the destination server, decrypt the traffic, then re-encrypt it locally using a self-signed (or otherwise non-publicly-trusted) certificate.

In short, you can intercept the traffic, but it relies on the client explicitly trusting your certificate. This is the foundation of all security on the web.


My certificate is the original certificate.

We weren't using an off-the-shelf proxy, rather some nice debugging tools with extra help from the networking layering.

Maybe TLS is more foolproof against website replication with DNS spoofing and a few other tricks, but they did work without issues on SSL 1.0 connections, using modified web servers.


You'd still not be able to MITM my traffic while I see the same fingerprint as usual. Either the traffic looks like HTTP to me or it's a different certificate.


We did it, I already explained a bit in another post.

As mentioned this was done with SSL 1.0, I don't know if a similar set of tricks could be done with TLS, as I am out of these type of applications since 2000.

Now believe whatever you want, but I am sure of what I programmed in 2000 and of the infrastructure we had, and I'll gladly explain it in job interviews where confidentiality is assured.


SSL 1.0 wasn't even supported by many of the earlier builds of Internet Explorer on Windows XP, with SSL 2 getting deprecated around the time of XP. Even SSL 3 - the last of the SSL protocols - has now largely been deprecated bar a few misconfigured servers.

Furthermore TLS 1.0 has been supported since XP but even that is now in the process of being deprecated.

So your attack, if it did depend on SSL v1.0, is so outdated that it's not even worth mentioning. And certainly not in the way that you announced "let me tell you it is quite easy to MITM by any company owning part of the connection."

(Please excuse me using "Windows XP" for approximate timeframes. I can't remember the exact year for when these protocols were phased out but I can remember which devices we had to support).

edit: It was bugging me that I didn't really know much about SSL 1.0 compared with the later protocols, so I decided to do a bit of reading. It turns out the reason I don't know much about SSL 1.0 is that it was never publicly released[1].

Which makes me even more puzzled about your anecdote, as why would you even want to run an SSL 1.0 proxy when even back in the year 2000 no devices supported it? Either way it's certainly not proof of how easily SSL can be MITMed.

[1] https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1...


What you are describing does not make sense. TLS would be pointless if this were possible.


I won't comment on how you built your proxy but HTTPS has come a long way since, eg TLS wasn't common back then and SSL has since been deprecated.

These days the only viable ways to MITM HTTPS are to attack the connection while it's still plain-text HTTP, ie before the browser redirects to HTTPS (which is where the HSTS header comes into play: it's cached by browsers, telling them to default to HTTPS and never attempt a plain-text connection), or to form your own HTTPS connection with your own CA-signed certificate for the target site - which would mean you'd either need to compromise a signing authority or have your own CA certs already installed on the victim's PC (reportedly what some bad ISPs do when they inject ads).

Edit: I believe some corporate network proxies also work on the principle of having their own CA certs on their business workstations - so maybe this was how your proxy worked as well?
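The HSTS header mentioned above is a single response header; for example in Apache config (the max-age value, one year, is just a common choice, not a requirement):

```apache
# Browsers cache this and will refuse plain-HTTP connections to the
# host (and its subdomains) until the max-age expires
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```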


Or, for SMTP and IMAP traffic, do a downgrade attack (STARTTLS downgrade). The committee that made encryption optional and downgradable in modern SMTP/IMAP over TLS should be placed against a wall. In the history of moronic decisions... All that to save a port.


I can't disclose much, but can give a very basic idea regarding SSL.

That information eventually needs to be unpacked, so you run a replica of the destination site with the same certificates, which you can easily download by pretending to be a browser, but running on your modified server, which then unwraps, repackages and forwards the request to the original server.

Since one owns the server, it is also relatively easy to disable whatever validations the SSL algorithm requires at the library level.

So I don't know TLS, and since 2000 I haven't messed with this kind of stuff, but I don't think similar approaches are impossible.


You can't download the private part of the TLS certificate by pretending to be a browser. You can't publicly download it at all. Your browser comes with a chain of public certificates from reputable CA vendors, and you verify that a site is who it claims to be by using their public certificate (which your browser downloads).

The only way they can succeed in proving who they are, is if their server has access to the corresponding private key of the certificate, which is why you can't spoof a properly secured site unless you either hack their server, crack the encryption, or install your phony certificate on the client (which is what corporations sometimes mandate). That's it.

It's been 17 years since 2000. Whatever security hole you may have used (if any) has long since been patched. The weak cyphers used in some SSL versions which you may have depended on are mostly gone (certainly from high profile targets).

If what you state were true, now in 2017, then there would have to be a huge conspiracy involving all browser vendors (including Mozilla), as well as all national governments and the EU, hiding this fact from the people. All software developers who caught on (the relevant source code is open source, after all) would have to be bribed or threatened into silence too.

You are either misinformed or consciously spreading misinformation.


> It's been 17 years since 2000. Whatever security hole you may have used (if any) has long since been patched. The weak cyphers used in some SSL versions which you may have depended on are mostly gone (certainly from high profile targets).

I very much doubt he was even doing anything that clever. Since it was a corporate network, he probably just had the corporate CA certificate already baked into the company machine images / build scripts. He might not even have been aware this was happening if the IT department was large enough that coworkers other than himself managed the desktops.

The "I can't disclose much" argument about a two-decade-old hack is effectively just saying "I can't remember the details" or "I wasn't directly involved in setting up the proxy". Either way, while much has changed in the last 17 years, installing their own CA certificates on company assets has been a fairly standard way for corporate proxies to intercept HTTPS traffic. And it's a great deal less trouble than relying on SSL vulnerabilities, since you already own and deploy the destination hardware and software anyway.


Anyone can easily find me on the net and I don't need to have fun speaking with lawyers.

Believe me or not, I don't care.


I don't really understand what lawyers have to do with the discussion, but I certainly do believe that you had an in-house proxy that did MITM HTTPS traffic. Lots of businesses do. The problem is you said "let me tell you it is quite easy to MITM by any company owning part of the connection", using that as your example. While your anecdote may be true, the statement it is trying to support simply isn't, and neither was your description of how SSL works (eg the proxy server being able to disable the client library's SSL checks).

So it's not that I don't believe your anecdote - I'm sure that did happen in some form or other - but the nature of the exploit either isn't how you described or is so old and long since patched that it hasn't been exploitable in more than a decade thus isn't relevant to the statement you were trying to support.


You're loosely describing the second attack I mentioned, but overlooking the SSL handshake. You cannot perform your attack without the server's private key, which, as the name suggests, is stored privately on the remote TLS endpoint. The private key is what proves to the client that it has received the website's certificates without being MITMed. Since you don't have the website's private key, you cannot use its public key in the way you described.

The workaround is to create your own SSL certificate for that website (this gives you a public certificate and a private key). That new certificate then also needs to be CA-signed, otherwise the client (whose own SSL validations you cannot remotely disable, as you implied) will give the user a warning about an untrusted certificate (the message varies from client to client). So you either need access to the targeted client to install your own CA certs, or access to a compromised signing authority.
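This is easy to demonstrate with openssl: mint a self-signed cert for the target name and ask for it to be verified against the trust store - which is essentially what a browser does - and it fails (news.ycombinator.com here is just a stand-in for any impersonated host):

```shell
# A MITM proxy can mint a cert for any name it likes...
openssl req -x509 -newkey rsa:2048 -nodes -keyout mitm.key -out mitm.crt \
  -subj "/CN=news.ycombinator.com" -days 1

# ...but without a trusted CA's signature, chain verification rejects it
openssl verify mitm.crt 2>&1 | grep -i "self.signed" \
  && echo "client would warn: untrusted certificate"
```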


They are not. Either I get the content as the keyed server sent it, or I do not. MITM of an HTTPS source under the same domain is not possible (the caveat is manually accepting a new certificate in my local browser).


People who can MITM you when you use HTTPS: the company that owns the certificate, the software on your local system, people capable of subverting one or the other (i.e. state-level attackers), and any of the 150-odd CAs, if one is willing to burn its entire business to do so (certificate transparency).

People who can MITM you when you use HTTP: any entity on e.g. http://www.bgplookingglass.com/list-of-autonomous-system-num...


> I need it so while I travel I don't have to worry about shitty governments tracking what I'm reading or watching.

For this reason I used a VPN while travelling overseas. Most places have wifi of varying quality. To my dismay/horror, I found that Hilton charges EUR 15/day for the privilege of using a VPN.


Does that mean Hilton makes EUR 15/day off users who don't secure their traffic?


Does Hilton actually block VPNs on their standard internet, or is "VPN access" just code for "we'll give you a public IP so that IPsec traffic won't get swallowed"?


The question was about static HTML pages, so the government can track what you are reading or watching.


3%?! Do you have a link where I can read more? That's horrifying!


You won't have confidence that your visitors see the content you serve as you intend because it can be modified in transit. Your visitors will be broadcasting at least the exact address of the pages they read on your blog, and are vulnerable to anything that gets injected into your traffic in transit (e.g. malicious script).

Every website should use HTTPS. It's the right thing to do. It's not hard to do these days.


> Every website should use HTTPS. It's the right thing to do. It's not hard to do these days.

Unless you use simple tools like GitHub Pages. I have a bunch of tiny JS-heavy static projects on Pages that I can't easily add HTTPS to.


GitHub Pages has HTTPS, see: https://github.com/blog/2186-https-for-github-pages

There are legacy setups like S3 with custom domains, and probably GitHub Pages with a custom domain. But in both cases there is no reason not to be using CloudFront or Cloudflare.


I am hopeful that GitHub/Tumblr and other static hosting services will add Let's Encrypt setup once browsers start showing these sites as "not secure". There will be a dip in user engagement otherwise, as the main article points out.


I'm using gh pages for my site, on a custom domain. I really don't want to use cloudflare or similar services for personal reasons. Afaict there's not really any other option though :/ I'd love it if GitHub were able to provide me with a LetsEncrypt cert. I've written to them about this before, but they said there was nothing planned yet.


FWIW, I use Gitlab Pages for the same, and they allow you to provide your own cert (be it LetsEncrypt or otherwise). This has become a good bit easier to administer since LE added support for DNS verification, but you're right, it would be cool to see these providers adding built-in automated LE certs.


Thanks for the feedback, there is actually an issue regarding this topic: https://gitlab.com/gitlab-org/gitlab-ce/issues/28996. Feel free to contribute to it!


I've been doing the same. Unfortunately, it seems that if I want to update my cert I have to remove the domain and then add it back in.

Are you aware of a workaround? Otherwise, I agree with the ease now that LE supports DNS verification. I just wish we could edit existing domains in Gitlab.


It's a good point, I just opened an issue about that, feel free to add more context on it: https://gitlab.com/gitlab-org/gitlab-ce/issues/35022


That's pretty cool and interesting, thanks for the tip! :)


Tumblr with a custom domain doesn't allow Cloudflare. No way to enable HTTPS.

I understand that's not technically "your problem", but since you mentioned it, just thought I'd let you know :)

It's unfortunate, because Tumblr really is a great blog hosting service, if you don't use the social aspects of it.


You could plausibly use cloudfront, I don't know what tumblr's policies are.

But these aren't technical limitations! I don't know why Tumblr restricts you from using Cloudflare; presumably because they want to control your content.

My two cents, don't use a free hosting service. S3+cloudfront is dirt cheap and will do the trick.


That is only a facade of security, since you have to assume trustworthy behavior on the part of Cloudfront and Cloudflare.

I'd rather deal with the very occasional “why not secure?” e-mail rather than pretend my site is actually secure. (I only serve static pages, not forms.)


Yes and no: if the GitHub page is served from S3 or another resource inside the AWS network, then CloudFront would be safe.

And regardless of that it offers privacy for your users.

Beyond privacy, it also vastly improves security, since users are quite likely to have malware and ads injected by an infected wifi router.

It also increases complexity of an attacker that has control of the local machine, since they'll now have to install a custom root certificate.

Nothing is perfectly safe, all we can do is raise the bar.


That doesn't really change the parent comment. Just because it's harder sometimes doesn't mean it shouldn't be done.

A current project of mine has some usability issues because it accepts external API requests but I won't allow non-HTTPS. It would give me great pleasure to accept all possible API requests, but it's just not right to expose users to that risk.


I'm probably misreading you somehow -- if your site is hosted by Github isn't it already HTTPS?


It is, unless you want to use a custom domain with it. In the latter case your only choice to retain some semblance of HTTPS is to put it behind a CDN, but that still breaks end-to-end HTTPS. The problem really has to be solved at GitHub's end, by allowing users to upload their own certs.


You can set your own cert on the CDN though, and have the CDN request origin objects over https from the github domain. Since the CDN is the endpoint for your domain, you'll still be presenting a custom domain to your users.


Which CDNs support that kind of domain translation?


Wait, how is serving your content through Github's servers using your own certificate any more secure than letting GitHub or CloudFlare provide the cert?


It's arguably more secure than letting Cloudflare do it (adding a new party doesn't make the chain more secure), but it doesn't have to be more secure than using GitHub's cert. It's just that you can't use GitHub's cert with your own custom domain.


Route your page through Cloudflare (or presumably any other CDN) and you can get HTTPS set up on your Github Pages hosted static site quite trivially.


For $5/month you could throw up what is likely to be an adequate nginx proxy on Lightsail or similar and serve Let's Encrypt certs for all of your projects/domains.

It's not really that hard. Set up auto-renew, make sure the box is up to date and properly firewalled. You won't really have to think much about it.



Exactly. I remember viewing my website over a mobile connection in the UK and getting an ISP banner at the top of the page! Injected right into my HTML!


My understanding is that you will already be ranked lower on search engines.

My main concern with this happening is that browsers are going to get a reputation for being 'alarmist', so when something really goes wrong, they won't be able to communicate it effectively.


Answer: https://doesmysiteneedhttps.com

Yes, search engines will penalize you. ISPs will intercept traffic and change it. Your site will load slower. The problems with plain HTTP go on and on.


It doesn't look like it right now:

> all websites with form fields served over HTTP will show a "Not secure" warning to the user

As long as you don't have a form field, you should be fine.


I believe the Chrome team has (or at least had) long term plans to mark all HTTP sites as actively dangerous, form fields or not. So, you'll be fine for now, but there will come a point when you will need to implement HTTPS.


HTTPS is a pain in the neck and _currently_ I hate it from the bottom of my heart.

TL;DR: if you have a commercial service or device running in a local network, forget HTTPS and service workers; use HTTP and HTML5 appcache.

-- RANT starts here --

It would be lovely if every website and webapp used HTTPS. But for a significant number of them it's just not f..... possible without driving users completely insane.

If the HTTPS server doesn't (and never will) have a public domain, forget about encryption and security, forget about using service workers. The following examples can't, for the love of god, ever provide HTTPS without completely f..cking up the user experience due to self-signed certificate warnings:

1) internal corporation services, websites and webapps.

2) services that run in a local private network like on a Raspberry Pi.

3) webapps which are served via a public HTTPS website, but need to talk via CORS to local unsecured services, like a Philips hue bridge, or any other IoT device which is in the local network but only provides HTTP. These will treat users to a shiny mixed-content warning.

.... JUST use self-signed certificates, they said.

NO.

For normal users the UX of self-signed certificates is just non existent, it's a complete mess! It will scare the sh't out of users and will almost always look like your service is plain malware.

To users, it actually looks more secure to serve a good ol' HTTP site with no encryption at all.


> 1) internal corporation services, websites and webapps

If not for hostname validation, how would you even /know/ that you're talking to the "internal corporation service" rather than someones MITM proxy? And how would you feel if people on the same LAN could see and modify all your interactions with those services?

> 2) services that run in a local private network like on a Raspberry Pi

Depending on the type of service you may or may not want TLS. If you visit the service by IP address and specific port anyway, you can easily add an exception for your internal IP. Non-tech-savvy people will never use it like this, though.

> 3) webapps which are served via public HTTPS website, but need to talk via CORS to local unsecured services

I cannot think of any reason to /not/ want to use HTTPS on this. It's horrible how things like the Philips Hue bridge work and rely on insecure HTTP to control your home lighting.

Don't blame browsers for warning people for their insecure systems and appliances. Instead blame their creators or manufacturers as they're the ones who can fix this situation.

edit: formatting


> It's horrible how things like the Philips Hue bridge work and rely on insecure HTTP to control your home lighting.

The Philips hue bridge REST API is accessible in the local network like http://192.168.1.123/api/ .... which is great since apps/wepapps can talk to the bridge without a cloud or philips server inbetween.

And this is the very problem: it's not possible for Philips to add HTTPS support to the hue bridge without some sort of cloud roundtrip to a Philips server, while keeping the very cool feature of talking to the bridge only within the local network.

Because how could that be deployed without self-signed certificates and the usual browser exceptions and warnings?


If it is an HTTP API, you don't really need a public certificate. You can have a long-term self-signed certificate on the device and check that the thumbprint hasn't changed every time you connect from your client. These big warning windows are for connecting to it from a browser, not a REST client.
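A minimal sketch of that thumbprint check in Python (the function names are mine, not from any real API): pin the device certificate's SHA-256 fingerprint once, then refuse to talk if it ever changes.

```python
import hashlib
import hmac
import ssl


def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()


def pin_matches(der_cert: bytes, stored_pin: str) -> bool:
    """True only if the presented certificate matches the pinned fingerprint.

    compare_digest avoids leaking information through comparison timing."""
    return hmac.compare_digest(fingerprint(der_cert), stored_pin)


def fetch_device_cert(host: str, port: int = 443) -> bytes:
    """Grab the device's certificate without validating its chain
    (this is the trust-on-first-use step; pin its fingerprint afterwards)."""
    pem = ssl.get_server_certificate((host, port))
    return ssl.PEM_cert_to_DER_cert(pem)
```

A REST client doing this gets the same guarantee the browser's "remember this exception" button gives, without the scary interstitial.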


>These big warning windows are for connecting to it from a browser, not a REST client.

Which in my case is a webapp running in a browser :)


So any random webpage can talk to your Hue bridge?

I'm surprised they even allow such cross-domain requests, but this anyway doesn't seem safe.


Cross-domain is needed, otherwise an app/webapp couldn't talk to the bridge since the bridge only serves the REST API.

However, in order to send control commands and query light states the app/webapp needs to authenticate and create an account, which is only possible for a few seconds after pressing a physical bridge button.


Why not just proxy the REST requests through your own backend HTTPS server-app ?


We have the same issue with Glowing Bear (https://github.com/glowing-bear/glowing-bear). It's a web frontend for an IRC client (WeeChat) that connects directly to WeeChat via WebSockets. Sort of like self-hosted irccloud without a cloud. We really want everyone to use encrypted connections [0] and push people onto the TLS version of Glowing Bear. But some people host their WeeChat on their local network, and you can't (realistically) get a certificate for a local IP. So for those people we need to open an unencrypted websocket (ws://1.2.3.4), which isn't possible from an https site. Ideally we'd like to disallow unencrypted connections to non-local destinations but that's practically impossible to determine in JS. It's a super annoying problem.

Disallowing unsafe websockets from secure origins is one of those policies that is a really good idea 99.5% of the time but for those last 0.5% of use cases, it's a major pain in the bum.

[0] WeeChat has a /exec command to execute arbitrary commands, and the client has access to that --- not great when you transmit your password in plain text.


> it's not possible for Philips to add HTTPS support to the hue bridge without some sort of cloud roundtrip to a Philips server

I don't see why not. The alternative is freedom. Philips doesn't have to lock their devices. That's a choice they made, sadly the choice that most companies make.

> Because how could that be deployed without self-signed certificates and the usual browser exceptions and warnings?

The fact that your browser warns you about insecure communication happening from that web page is a good thing. Even if you deliberately choose to accept that and believe that there's no other way for this particular service/device.

The simple fact that you accept some insecure traffic, doesn't make it secure.


> > it's not possible for Philips to add HTTPS support to the hue bridge

> I don't see why not.

That's not a constructive argument. I don't see how they could make it work?

Even if they somehow solve the problem of giving these devices domain names, and even if they generate a separate private key for each unit, the key and cert are going to be embedded in the firmware, and a sufficiently sophisticated attacker will just extract them and become able to impersonate some Philips device.

How is the user of another device going to tell whether he is connecting to his own device, or to a malicious neighbor who impersonates it: establishing Philips-signed HTTPS with the victim, opening another connection to the victim's device, and MITMing the victim?

You would have to make all users install a trusted certificate authority tied to their individual device. Which is a UX disaster in current browsers and also a security disaster, because if this becomes a norm, sooner or later somebody will sell you a toy device bundled with a CA crafted to give him the ability to impersonate any website. And you'll trust this CA because you want to play with the toy.

This maybe could be made to work with some improvements in browser UI. Make it easier to add new roots of trust. Make it easier to learn and/or limit what websites these certs will be authorized to authenticate. But nothing like that exists now.

> The fact that your browser warns you about insecure communication happening from that web page, that's a good thing. [...] The simple fact that you accept some insecure traffic, doesn't make it secure.

True. As somebody pointed out elsewhere in this thread, this warning will become another EU cookie banner nothingburger.


> > > it's not possible for Philips to add HTTPS support to the hue bridge

> > I don't see why not.

> That's not a constructive argument. I don't see how they could make it work?

Missing from your quote: The alternative is freedom. Philips doesn't have to lock their devices.

If Philips (and other companies, obviously this doesn't relate to just Philips) would provide a community access to their devices and software rather than locking them out, I believe that this problem would not exist.

The original issue is that having a public website (used over TLS) that interacts with local network devices without TLS shows warnings about insecure communication. Again, the warning is shown because it /is/ insecure. There are plenty of alternatives for securely interacting with an IoT device. Plain HTTP from a public website is just not one of them. For example, look at how Apple's Homekit has implemented that. Homekit is not usable from a public web page in a web browser. That's a good thing. (aside: I'm not a big fan of Homekit but their security is not bad)

So if vendors are annoyed with browser warnings, it's because /they/ are doing the wrong thing, not the browsers.

> sufficiently sophisticated attacker will just extract them and become able to impersonate some Philips device

Just like on any website. Just because something isn't 100% unbreakable, doesn't mean it's a bad idea (you do lock your doors, don't you?)


> Missing from your quote: The alternative is freedom. Philips doesn't have to lock their devices.

> If Philips (and other companies, obviously this doesn't relate to just Philips) would provide a community access to their devices and software rather than locking them out, I believe that this problem would not exist.

The problem is technical and won't be fixed by just opening the software.

> There are plenty alternatives of securely interacting with an IOT device.

Please name just one which works for webapps, besides HTTPS.

> So if vendors are annoyed with browser warnings, it's because /they/ are doing the wrong thing, not the browsers.

Homekit is nice but not available to webapps, apps of course can take advantage of several security mechanisms.

My whole rant is about browsers and HTTPS in non public networks.

For webapps which want to talk to IoT devices there is only HTTPS, and there is _no_ sane way to provide robust, local(!) access to a LAN device via HTTPS.

Here are some requirements: (actually real, I'm working on an IoT'ish product)

* webapp must be served via HTTPS, either from the IoT device or vendor site

* it just works: if the webapp is served from the IoT device, the user shall not be required to install a certificate or set an exception (because it then looks like scary-as-hell malware)

* the webapp must work offline (service worker or appcache) without an internet connection

* webapp must be able to talk directly to the device, no cloud or vendor server in between

* the IoT device which provides a secured REST API might be in a LAN which is NOT connected to the internet -- so the '<random-stuff-id>.vendor.com DNS resolves to device IP with a Let's Encrypt CA' approach won't work here (otherwise a nice hack)

To my knowledge it's technically not possible to build such a HTTPS secured webapp in a local network today without breaking the mentioned requirements.


I think this is part of the larger problem of the relentless "cloud first" movement that the whole industry seems to have adopted. I feel like there isn't any new software, device or standard in development by now that doesn't demand constant internet access and a dedicated background service. Even basic things that should have no business relying on internet access get swallowed by that. (Browsers, operating systems, cars...)

The economic incentives that push everyone in that direction are obvious but I think in the end that will lead to more harm than good.

That being said, I can understand that making IoT devices directly accessible from web page JS could cause some security headaches:

As an example, apparently a lot of recent exploits were caused by programs opening a loopback-only REST service for IPC. Those services weren't secured because, hey, if someone can talk to loopback, the system is compromised anyway. The developers didn't realize that any webpage open in a browser can do that via script (respecting CORS) and so even loopback services should be considered exposed to the internet.

I can imagine that an IoT device offering a browser-accessible REST interface might cause similar non-obvious attack vectors. So at the least, it would have to implement some kind of user management and authentication - which might be challenging for small devices.
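The fix for that class of loopback bug is for the local service to authenticate every request rather than trusting loopback. A sketch of the check (the header names and token scheme are illustrative, not any particular product's API):

```python
def loopback_request_allowed(headers: dict, expected_token: str) -> bool:
    """Decide whether a request to a loopback-only service is trustworthy.

    Two observations about browsers make this work:
    - cross-origin fetches from a web page always carry an Origin header;
    - a page cannot attach an arbitrary custom header (like a shared
      secret) unless the server explicitly opts in via a CORS preflight.
    So: reject anything with an Origin, and require the shared secret
    that only locally installed software knows."""
    if "Origin" in headers:  # request was made by a web page
        return False
    return headers.get("X-Auth-Token") == expected_token
```

With a check like this, a malicious page open in the browser can still reach the loopback port, but its requests are rejected, while the vendor's own local client (which knows the token and sends no Origin) keeps working.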

I think what we really need is some kind of dedicated standard for browsers talking to things on the LAN. Such a standard could then handle discovery, certificate management and authentication/permissions in one go - and would enable browsers to present a good UI for those steps.

However, right now everyone seems too busy developing intricate Rube Goldberg machines[1] to care, and the agenda of the browser vendors seems to go in the opposite direction - so I don't have high hopes.

I think practically, the most feasible step right now is to forego browsers and build an app instead. Then you have to deal with the headaches of app development but at least you get a nice user experience without any backend services...

[1] https://twitter.com/isotopp/status/877444175708475393


> The problem is technical and won't be fixed by just opening the software.

I strongly disagree. The main reason is that if Philips opened their firmware to the public, it would have had different protocols by now than just HTTP with a poor man's JSON API.

> Homekit is nice but not available to webapps, apps of course can take advantage of several security mechanisms.

That's why basically my point is to /not/ use a web app to control local insecure IOT devices.

> Please name just one which works for webapps, besides HTTPS.

Use locally resolvable DNS names and wildcard certificates signed by commonly trusted (public) CAs. It's been done before (Plex does something like this IIRC).

* update: I just noticed another comment [1] that mentions Plex with a link to some technical details [2].

[1] https://news.ycombinator.com/item?id=14751768

[2] https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...


> it would have had different protocols by now than just HTTP with a poor mans JSON API.

Sure, but now you can't control the gadget from the browser and the vendor needs to write an application or something for whatever shitty OS you want to use.

> Use locally resolvable DNS names and wildcard certificates signed by commonly trusted (public) CAs. It's been done before (Plex does something like this IIRC).

Not that simple. Public CAs will likely only give you certs for domains you own (like plex.direct) and your users generally don't have nameservers authoritative for such domains on their LANs (maybe you could pull it off if you are a router vendor, but not with IoT light bulbs) so they have to query your public nameserver and the system fails without Internet connection.

And there is no easy solution: if your light bulb could register an xxx.philips.com domain via UPnP on your router or via SMB on your Windows box, it would be very much unclear what exactly should prevent it from registering philips.com as well.


> Just like on any website. Just because something isn't 100% unbreakable, doesn't mean it's a bad idea (you do lock your doors, don't you?)

Don't you think it's a completely different thing to extract keys from a remote server (try https://news.ycombinator.com/ for example) and a physical gadget you own?

Doubly so if the gadget is open source, as you apparently prefer.


Not really; for a hacker both are remote servers, aren't they? I agree that in practice many security updates are not provided for IoT devices (another reason for FOSS), so it might get easier and at the same time less relevant to extract the keys.

That a gadget is open source doesn't mean the private keys are. Most internet servers are running open source software (BSD, Linux).

In my opinion the manufacturer should /not/ have your gadget's private key. But that's not really related to this problem.


I was talking about a different scenario: I buy the same kind of light bulb you own, extract its private key and use it to either:

1. impersonate your light bulb, because they both have the same key

2. impersonate my light bulb, because you and your browser can't tell the difference

To prevent such attacks, each device needs its own certificate and key and then furthermore you need one of the following:

1. each certificate is signed by a unique CA which you add to your browser's list of trusted CAs so that it doesn't trust other devices' certs because they are signed by different CAs

2. each device has a globally unique domain and you type this domain into the browser

3. maybe some other equally cumbersome solution


> 1. impersonate your light bulb, because they both have the same key

Definitely don't give both the same key.

> 2. impersonate my light bulb, because you and your browser can't tell the difference

Who cares if you can impersonate your own lightbulb?

> 1. each certificate is signed by a unique CA

This doesn't change either scenario. If they shared a key then custom CAs don't stop impersonation. If each device has its own key and CA then they still can't impersonate your device, and they still can impersonate their device.

> 2. each device has a globally unique domain and you type this domain into the browser

Typing in "jzhf.hue.com" sounds easier than figuring out what IP has been assigned to the device.


> Who cares if you can impersonate your own lightbulb?

For MITM - you think you are connecting to your device, actually it's my proxy (DNS spoof, ARP spoof, TCP hijack, ...), you still get the green bar in your browser saying "über secure Philips lightbulb", you just don't know it's mine because the domain matches and it's signed by the same CA (assuming neither of these protections is in place).

> If each device has its own key and CA then they still can't impersonate your device, and they still can impersonate their device.

Without manual installation of my CA your browser won't accept the certificate ripped from my device.

You said in another post that providing the correct address is better than a per-device CA. No doubt it's more convenient in a commercial product, assuming you can solve the DNS problem somehow (which doesn't seem possible without a working Internet connection or editing the hosts file). From a pure security standpoint though, I feel like a per-device CA has the added advantage of resistance to typosquatting. But it's getting academic now; it's hard to squat if it takes buying a physical device with the right ID.


  I don't see how they could make it work?
Plex achieves this with a very convoluted setup [1] - they set up a DNS server so that 1-2-3-4.625d406a00ac415b978ddb368c0d1289.plex.direct returns IP address 1.2.3.4, then they issue a single user a wildcard certificate for *.625d406a00ac415b978ddb368c0d1289.plex.direct

Of course, you have to get a special deal from a CA at who-knows-what-cost - likely meaning open source projects need not apply. And you get a dependency on cloud infrastructure, if they stop issuing certs you end up in a bad place. And you get a giant, ugly URL. And you have to make a DNS lookup so traffic leaves your network anyway.

It's an ugly solution with a lot of downsides - but I doubt the CA/Browser Forum plans to give people much choice in the matter, so it's their way or the highway :-|

[1] https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...
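The DNS trick can be sketched as a pure function (the user ID below is a stand-in for the per-user hash Plex assigns; this is the shape of the scheme as described in [1], not their actual implementation):

```python
def plex_style_hostname(ip: str, user_id: str,
                        base_domain: str = "plex.direct") -> str:
    """Encode a LAN IP into a publicly resolvable hostname, Plex-style:
    1.2.3.4 -> 1-2-3-4.<user_id>.plex.direct

    The vendor's DNS server decodes the first label and answers with
    1.2.3.4, and the per-user wildcard cert *.<user_id>.plex.direct
    makes browser TLS validation pass for that name."""
    return f"{ip.replace('.', '-')}.{user_id}.{base_domain}"
```

So the browser sees a valid, CA-signed hostname while the TCP connection itself stays on the LAN; the only traffic that leaves the network is the DNS lookup.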


I don't see why you couldn't do that with Let's Encrypt, especially since they just announced they'll start giving out free wildcard certs.


Wildcard certs are not a solution to this problem. Sharing a single cert with all customers isn't what the solution does; every customer gets their own cert.

Second, Let's Encrypt has low limits of 20 certs per week. So imagine VLC added a Plex-like streaming feature; they'd need far, far more than 20 certs a week given how large their user base is.


> Wildcard certs are not a solution to this problem. Sharing a single cert with all customers isn't what the solution does; every customer gets their own cert.

That's not what I mean. I mean the same solution as described by michaelt above, that is, provide a different wildcard cert per user.

> Second, Let's Encrypt has low limits of 20 certs per week. So imagine VLC added a Plex-like streaming feature; they'd need far, far more than 20 certs a week given how large their user base is.

Remember that the limit is only on the number of new users; Let's Encrypt has a renewal exemption that lets you renew your certs even after hitting the 20/week limit. So while it might still not be enough for VLC, I don't think it's a problem for most projects. Plus you can always use more than one domain.


> I don't think it's a problem for most projects

Pretty much any open source project that needed certs similar to Plex's would pass this limit the moment it was mentioned on HN. Why should an open source project have to register hundreds of domains just to handle this case? Someone else gave a long list of the number of devices and services running in his house that need certs like Plex's. Effectively every router, NAS, IP camera, and other networked device that exposes a web interface, and therefore every open source project that builds those: OpenWRT for example, FreeNAS, ZoneMinder, etc...


BTW, who really is Let's Encrypt, why should I trust them, why should I trust they won't disappear once plain HTTP is no longer supported by cargo-cult-security-conscious browsers?

It seems to me like providing certificates isn't exactly free, in itself.


Say they disappear, so what? You're left in the exact same situation as before they've appeared, except with some money saved in the meantime.


You must have missed

> once plain HTTP is no longer supported by cargo-cult-security-conscious browsers

There already are people talking about such a possibility and some even appear to believe it would be a good idea.

Of course what happens then is that without Let's Encrypt you are stuck paying other CAs to have anything published on the Web at all.

<tinfoil hat on>LE is a conspiracy of CAs to phase out unencrypted HTTP and ensure them infinite money stream.

<tinfoil hat off>Even if it isn't, LE will disappear five months after their mission is done because what the heck, why bother.

I just wonder if there is any reason to believe that users of LE are any smarter than kids accepting free candy from pedos? Maybe there are reasons but I just haven't heard them yet.


Ah, I think I'm missing an assumption you're making: that LE is indispensable (or almost) for browsers to deprecate HTTP.

Personally, I think the deprecation (as in, the warning bells and reduced priority, not full blocking) was going to happen anyway, and LE was mostly inconsequential, even if it makes the transition easier.

As for LE being a CA conspiracy, I don't think that makes much sense considering their funders (e.g. Mozilla, Google) and those funders' relationships with existing CAs (see WoSign, Symantec). But anything's possible.


This is better than HTTP because complexity breeds security, right?


HTTPS, in and of itself, is extremely complex. So I might advise against that argument.

And the Plex system sounds quite awkward, but not particularly complex.


> That's not a constructive argument. I don't see how they could make it work?

Give each one a subdomain that resolves to its local IP, and give it a valid certificate for that subdomain.

> extract them and become able to impersonate some Philips device.

Or the attacker could just have a real, non-impersonated Philips device. If the user deliberately points their browser at the wrong device's site, nothing can save them. This is a very different problem from securing access to the correct site.

> You would have to make all users install a trusted certificate authority tied to their individual device.

That's not true, and I don't even understand what benefit that would have.

If you have a way to deliver a CA, instead you should deliver the correct address of the device. This makes 'MitM' impossible without any downsides.


I would suggest that browsers should support some kind of TOFU for self-signed certificates used by non-publicly accessible web servers.

What if they'd just ask the user to accept and install a certificate when connecting to a local server for the first time?
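Under the hood, such a TOFU scheme is just a pinned-fingerprint store (a sketch of the idea, not an existing browser API): trust the certificate the first time, and only raise an alarm if it later changes.

```python
def tofu_check(store: dict, host: str, cert_fingerprint: str) -> str:
    """Trust-on-first-use for self-signed certs.

    Returns 'trusted-first-use' on first contact (and pins the
    fingerprint), 'ok' when the pinned fingerprint matches, and
    'MISMATCH' when the certificate has changed (possible MITM)."""
    pinned = store.get(host)
    if pinned is None:
        store[host] = cert_fingerprint  # first contact: pin it silently
        return "trusted-first-use"
    return "ok" if pinned == cert_fingerprint else "MISMATCH"
```

This is the same model SSH has used for decades with `known_hosts`: the first connection is a leap of faith, but every later one is protected.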


>I don't see why not.

Well then please explain how this is possible?


>If not for hostname validation, how would you even /know/ that you're talking to the "internal corporation service" rather than someones MITM proxy?

If you control physical hardware, and you control all the users on it (as a corporate network), then you can know that nothing is amiss.

>And how would you feel if people on the same LAN could see and modify all your interactions with those services?

The fact that they physically can doesn't mean they will.

For 2 & 3:

HTTPS has two modes: self-signed, and certified. Certified requires that you have a public-facing domain name, such as "news.ycombinator.com". Devices on private networks can't have public domain names. Consumer devices on public networks could have domain names, but this would be very difficult to configure. Without a domain name, HTTPS must be done as self-signed.

With self-signed, when you first interact with the server, it could be anyone. Self-signed HTTPS only gives you the guarantee that any further interactions besides this first one are with the same server as the first one. It should be clear that you can still be MITMed under this mode, so long as the attacker can intercept the first message you send after a reboot. If you're scared of network ninjas sneaking into your house in the middle of the night and intercepting your packets, self-signed HTTPS is no better than HTTP.


>With self-signed, when you first interact with the server, it could be anyone. Self-signed HTTPS only gives you the guarantee that any further interactions besides this first one are with the same server as the first one.

If I create a CA and install that CA's public key in my browser, then use that CA to sign the cert for a device on my network, why exactly will it "be anyone"?

Unrelated, but this push for HTTPS for everything isn't without downsides. Many apps gather extensive data when running on my devices and they communicate that data back to some central location, sometimes under the guise of functionality, sometimes straight up nefariously, but always with a side effect of giving that central entity a complete record of what I'm doing and often also exfiltrates my data (contacts, etc.)

Honestly, I've been tempted to set up a transparent TLS terminating proxy at my home to give myself some possibility of seeing wtf is coming and going from my network.


That's not self signed, that's just your own personal CA chain. Self signed is the device making its own cert.


> If not for hostname validation, how would you even /know/ that you're talking to the "internal corporation service" rather than someones MITM proxy?

Because a crap ton of Linux software comes with its own set of bundled root CAs instead of using the system defaults. Welcome to the configuration nightmare that is setting up Anaconda, npm, AWS CLI, Python (Requests library), Git, etc. for working with something like Zscaler.


Zscaler performs TLS interception in order to analyse the traffic and “protect the users”. [https://support.zscaler.com/hc/en-us/articles/205059995-How-...]

The issue is that Zscaler may have flaws, and that even if the validation is performed flawlessly then the introduced risk is not zero…

Usually one would have to trust the root CAs, but with TLS interception we have to trust the trust of the MiTM software in the root CAs. This increases the attack surface instead of decreasing it.

For a security appliance it’s a pretty bad job; sure, there may be reasons why you want to look into traffic, but then the aim is to control the communication. And control doesn’t come for free.


> 1) internal corporation services, websites and webapps.

For this use case companies usually provide an internal CA, which signs their certificates and is trusted by all company machines. We have various customers which do this and it works just fine.


Large companies do this.

Small companies/small groups of developers have no idea how to implement and manage this, but think that it should be easy.

I've recently been approached by a group of developers to enable SSL on their internal sites. When I mentioned that this would take some time, the response was "why can't you just use LetsEncrypt?"

I replied that LE only works on external facing sites, not internal sites. The next response was "fine, why don't we make it all external facing?"

I'm still trying to explain that their CI server (Jenkins, with its history of remotely exploitable vulnerabilities), and their internal OAuth2 server should not be public facing.


Google is moving away from network-centric security and VPNs. See https://cloud.google.com/beyondcorp/ . The threat model is a bit different but you could also follow their approach and put an auth proxy in front of Jenkins and deploy it on the public Internet.

But yeah, don't expose Jenkins to the Internet directly. Last month I saw a Jenkins instance that was mining bitcoins. The worm had used one of Java's serialisation vulnerabilities to get in the box and install the miner.


No vpn means any vulnerability can be attacked over the internet.


Not at all, it means the proxy can be attacked over the Internet. Just like the VPN can be attacked over the Internet. Once you're past that it's the same story.


At minimum that means your SSH service is also vulnerable, no?


> I replied that LE only works on external facing sites, not internal sites.

LE supports DNS validation; as far as I'm aware it now works great for internal sites.
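For example, certbot's manual DNS-01 mode never requires the box itself to be reachable from the internet; you just need to be able to create a TXT record in public DNS (domain name below is illustrative):

```shell
# certbot prints a TXT record to create at
# _acme-challenge.internal.example.com, then validates via public DNS.
certbot certonly --manual --preferred-challenges dns \
  -d internal.example.com
```

DNS plugins (Route 53, Cloudflare, etc.) can automate the record creation so renewal is hands-off.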


Specifically... LetsEncrypt, and most other CAs no longer issue certs for domains that are not legal ccTLDs or gTLDs.

Not so many years ago, Microsoft recommended that organisations used [companyname].local as their internal DNS zone[1], as .local will never be an external zone, so there would be no conflict. Then along came cloud integration and increased need for edge services, and .local no longer worked well as a solution. Servers needed certs with both the local domain and a new external domain in their certs, which became a security nightmare. Then (about a year ago) CAs stopped issuing certs for domains that weren't sub-domains of proper TLDs, which all but killed the concept of these internal non-legal domains.

So, unless you are prepared to roll your own CA, AND instruct your internal (non MS-domain members) users how to manually install an untrusted cert, signing internal sites that do not have a legal domain name, is a complete non-starter.

---

[1] Now of course they recommend a sub-domain of your public domain name (site1.company.com), or a reserved public domain name that you don't use externally (site1-company.com). Which is all well and good, but what about the 100s of legacy kit you've got on the old name... ~sigh~


LE works with internal websites. Just use DNS validation and register any public domain (costs almost nothing).


It is pretty easy to manage your own CA, make a Debian VM, install something like XCA and it is literally click a few buttons to generate and issue certificates and set up certificate authority root certificates.


FreeIPA sets up a CA by default and AD can do it as well (not sure if it's done by default).


Namecheap will sell you certificates for your internal domains, $9/year.


And why would I trust a company I work at to be able to sign certificates for every single website on the internet? Especially if I need to install that root certificate on a personal device?


Because you already trust chinese, russian and a bunch of unethical for profit companies to do that?


As a matter of fact I don't. I'm keeping track of my root certs.


If you need such a device to do your job, maybe ask them to provide one so you can keep work off your personal device anyway? I disconnected my phone and so forth from work email and other services some time ago, and I'm not going back!


Why would you need to install it in a personal device? Just add an exception. It's still better than plain HTTP since you can check the fingerprint against your work PC, which already validated the cert.


Add an exception for every single internal site? You know how annoying that gets?


This fails utterly when you can't control your clients. My student society for example ran into this problem. Students bring their own laptops and installing our root certificate on all of them is infeasible (if they even would allow us to do so). As a consequence, we need to expose critical internal services on the public internet, some of which contain private user data.


If you let any student that brings their own laptop connect to it, then it's already pretty darn public.

And you don't actually have to expose it to the internet to get a certificate, you only have to give it a public name.


Additionally, if you let anyone bring their own device in a diverse semi-public environment like a school, you owe it to the students and faculty alike to provide them with some protection against creative types placing fake wifi access points in busy places, trying to play man-in-the-middle for any credentials and other stuff sent to your local services. HTTPS does that.

Using a proper FQDN for each service only makes everything easier to maintain.


You don’t need to expose them. You need to use public DNS records, but there is no reason those records have to point to public IPs.

e.g. my company uses *.int.cuvva.co which all point to IPs in the 10.0.0.0/8 block, but we still have HTTPS certificates for all of those.


> As a consequence, we need to expose critical internal services on the public internet, some of which contain private user data.

No, you just need to have a public DNS entry, no need for that service to be reachable from the internet.

foo.example.com can resolve to your private RFC1918 address, when you send the CSR to a CA, they'll verify your ownership of example.com.
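As a sketch, the public zone for example.com might contain (values are placeholders; the TXT value comes from the ACME client):

```
; Public DNS, private address: resolvable by anyone, reachable by no one outside.
foo.example.com.                  300  IN  A    10.20.30.40

; Temporary record created during a DNS-01 (Let's Encrypt) validation.
_acme-challenge.foo.example.com.  300  IN  TXT  "..."
```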


A public domain name costs the price of a coffee (and less than a raspberry pi) and you can get a certificate for free with Let's Encrypt. There is really no reason to resort to a private CA unless you want to MITM your client's connection.

You don't need to expose your server to the public internet to use let's encrypt. I use DNS authorization and it works perfectly.


Even if you could I would highly recommend against doing that, given that this would grant you access to every https connection that isn't hpkp secured.

I actually have all webservices in my home network secured by https; all you need to do is get a cheap VPS, install nginx and tinc, and then proxy /.well-known/acme-challenge/ to your internal servers. Either set up domain or IP hijacking so the public IP is routed inside your LAN. Done.
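A minimal sketch of the nginx piece on the VPS might look like this (the hostname and the tinc-side IP are made up):

```nginx
server {
    listen 80;
    server_name nas.example.com;

    # Forward only the ACME challenge over the tinc tunnel to the LAN host;
    # everything else on this vhost can be dropped or redirected.
    location /.well-known/acme-challenge/ {
        proxy_pass http://10.0.0.2;
    }
}
```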

If I can do this for me and my cat in my spare time, you can do this for your university.


> My student society for example

> need to expose critical internal services on the public internet, some of which contain private user data.

The heck? Are they aware of this? Might you get sued for this?


If you can’t control your clients, maybe use a captive portal style landing page with a link to install the local certificate, or something along those lines. It’s also useful to have a wireless network (SSID/VLAN) for BYOD that just has internet access and as such doesn’t need the cert, and one that has access to internal services that does.


I even do this on my own private network. Installing root certificates is not hard.


Western Digital solves #2 and #3 for the MyCloud EX4 by somehow issuing real browser-trusted certs to each device for the domain device<mac_address>.wd2go.com using their intermediate CA "Western Digital Technologies Certification Authority" (https://www.censys.io/certificates/eb94f8e2c8d0c8338bb8ba40e...), which is in turn issued by COMODO. Now, not everyone has an intermediate CA locked to their own domain, but maybe that's the issue? X.509 has the ability to restrict CAs to particular domains (e.g. see the "path constraint" on WD's CA in the info link above), so if it was easy to be issued a CA cert for your own domain, couldn't that be a potential solution to this problem?


1) For internal company webapps, just install the company root cert and create properly signed certs under the corporate internal CA. Installing the certificate across the network is easily automatable on Windows, OSX and Linux. The only issue is Firefox, as it uses its own trust store. Any senior admin who can't figure it out with the resources available (plenty of information available online) should be replaced; it is not that hard.

2) With regards to the raspberry pi, anyone who can write code can learn to also create their own CA. The only difference is there's probably no automation for adding it to the trust store, but it is only a 2-3 click install in most cases.


You are assuming that you control all client machines. Unfortunately it is not always possible and far from the admin technical decision. The admin usually can't fire the upper management.


It's possible to purchase certs signed by pre-trusted CAs extremely cheaply ($9/year/name) that can then be used on internal services. This is not a difficult problem to solve.


You can't buy certs for non.public.domain.local. So you must control the CA list on all client machines and use a self signed cert. The assumption that there is a solution to the problem does not take into consideration that sometimes these changes are not possible.

If I were to choose, everyone would be using public domains with DNS zone views for public/private environments, but Microsoft's DNS service doesn't even support it.


Only if you also control DNS for those internal machines...


Yes.

Also, why do I get a certificate warning that looks the same for an IP (https://192.168.1.1) for which you cannot buy certificates? What about 10.x.x.x or even 127.0.0.1? As far as I know you can also no longer purchase a certificate for a public IP.

Just watch how consumer router manufacturers are going to work around this by either re-educating their users to ignore the red warnings, or only selling cloud-managed and locked devices, which sucks for everyone.


Assign an FQDN. I log into my router with a domain name; most routers actually have a domain name for them.

Configure the LAN setting of RT-AC68U. Device Name: ghandi


If you're in a corporate intranet environment, you ought to have an intranet CA, as well as the means to distribute the CA's certificates securely to all deployed machines within the intranet.


I dislike intranet CAs because they allow your company to intercept and play MITM with every other website you visit (except for certificate-pinned websites)...

I'd prefer that Chrome write "insecure" if there's a non-public CA in your chain.


Not trusting intranet CAs is irrational.

1) Every employee of every company needs to have some level of trust in their company. They trust their company to make payroll, and they trust their company at a reasonably high level to follow local laws and regulations, including reporting threats and violations against their physical safety. That doesn't mean that employees should trust their employers with their deepest darkest secrets and life savings, or that there aren't different types of trust, just that trust is a spectrum, and arguing that you should fully trust every one of the shadowy public CAs pre-installed in your OS and browser, that you know absolutely nothing about and have not personally vetted nor have personal relationships with, but not the intranet CA your employer operates, is rather clearly an irrational assertion.

2) If you decide not to trust your employer's CA, and your employer has provided you with a machine to access intranet sites, then you clearly cannot trust accessing Internet sites for personal reasons on your employer-provided device, not because the CA cannot be trusted but because it's irrational to distrust the CA but also trust the employer-provided device, which may have a keylogger and other tracking software installed.

3) If you decide not to trust your employer's CA and your employer operates a BYOD environment, then you are free to bring a separate device for work purposes, on which you trust your employer's CA but refrain from accessing personal accounts, instead only accessing personal accounts on devices which your employer doesn't know about.


If it's a company issued device, then play by their rules.

If it's BYOD, then create a new user account / profile for work. We're not running DOS anymore.


Minor nitpick regarding the "except for certificate-pinned websites" part:

HPKP does not validate pins if they resolve to a user-installed trust anchor like an intranet CA. The RFC [1] leaves behavior undefined (see Section 2.4), and I'm not aware of any popular implementation that would honor the pin in case of a user-installed certificate.

This can be incredibly frustrating if you're trying to protect against MITM attacks; but at the same time, I can follow the browser developers' line of thought that goes "if we were to enforce it, users would just jump ship to the next available browser".

[1] https://tools.ietf.org/html/rfc7469


At least on firefox, this can be changed through the preference security.cert_pinning.enforcement_level (see https://wiki.mozilla.org/SecurityEngineering/Public_Key_Pinn...).


It depends on what they're used for but I agree that many companies seem to spy on their employees this way.

Commonly I only allow internal CAs for specific internal websites and not allow them to MITM just any website. On occasion this meant not being able to use the company's wifi and deal with 4g instead.


That's okay unless you allow users to bring their own devices; then you have users which are taught to bypass certificate errors on a daily basis.


Sorry but I have to call BS on that. I always bring my own device and never have bypassed certificate errors. A company will either have certificates for their own apps (commonly running on specific subdomains of their own domain) or have their own internal CA that you can trust on your own device.

Certificate errors on "internal" web apps are just as bad as on the rest of the internet.


If you trust a company's internal CA then aren't you trusting them to issue certificates for every website and not just their own? Isn't that dangerous?


Yes absolutely.

If I have to I trust the company's CA in a special browser profile that I only use for working with their internal tools.

For just a few tools it's often simpler to just trust those specific certificates, though


All browsers can tell you what certificate signed the one in use. Unfortunately a recent Chrome UI change made this a pain to get to in Chrome; in the other browsers it's just clicking on the lock in the address bar. It soon becomes obvious if the company is MITMing all SSL connections.


Because a normal person will "just click the lock in the address bar" on every single HTTPS website he visits to make sure his company isn't MITMing him, right?


> If the HTTPS server doesn't (and never will) have a public domain

Then give it a public domain, keep it on a private network, and use a real certificate?


You can get a certificate for a local IP address. If you own foo.com, you can get a cert for local.foo.com from letsencrypt that points to 192.168.1.5. Obviously you can't use HTTP verification, but you can use DNS verification, point _acme-challenge.local.foo.com to a public server and run certsling or another acme client on that server.


Would just creating (public) DNS records for those private addresses be a solution for 1 and 2?


> It looks much more secure to serve a good'ol HTTP site with no encryption at all.

Not for long it won't: "Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS."

Taken from Google security blog[1] back in Sept 2016: https://security.googleblog.com/2016/09/moving-towards-more-...


>1) internal corporation services, websites and webapps.

If this is an internal corporation service, why don't you bake your own self-signed root certificate into every computer in your network? Then you can generate as many certificates from your own root as you like, and they'll all magically be valid on corporate computers.
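On Debian-family machines, for instance, distributing that root cert is two commands (the filename is hypothetical; Windows shops would push it via Group Policy instead):

```shell
sudo cp corp-root.crt /usr/local/share/ca-certificates/corp-root.crt
sudo update-ca-certificates   # rebuilds the system trust store
```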


> 1) internal corporation services, websites and webapps.

That one's straightforward. Set up a corporate CA and use it for your internal certificates. Or, operate your corporate services on your real domain name, and use real publicly-trusted certificates - whichever is easier.


> Set up a corporate CA and use it for your internal certificates.

Good luck with Docker containers running any Unix software that bundles the "default" root CAs along with it.


Clients need the CA cert in their trust store, not servers. Client get it by the act of enrolling into AD or FreeIPA domain.

On the docker side (or rather on the reverse proxy that provides access to them) you are solving different problem and it does not matter whether the key/cert is provided by your internal CA or third-party one.


Just build your own docker container based on the one you want. E.g.:

  FROM kanboard/kanboard:stable
  ADD ownca.crt /usr/local/share/ca-certificates/ownca.crt
  RUN /usr/sbin/update-ca-certificates


The problem is you can't do this for every Docker image you have, particularly for a large organization. It defeats the whole point of having base images if you need to "include" Dockerfiles. If Docker had a way to build from multiple base images, that might fix the issue, but I believe they removed that bug/feature a while back.


You can solve this with DNS if you'd want...

Have every local machine get something that is internet-routable to a public machine that allows you to get a certificate, then on your corporate DNS, just serve the local machine it's supposed to go to and you can use that cert.

You could use Let's Encrypt for this and have free certs.


no, limit 5 certs per domain


You can get a SAN certificate where it has 100+ domains in 1 cert.


They'll also be providing wildcard certs soon.



Self sign from an internal CA and add that to client keychains

(but yeah your points are valid, which is deeply unfortunate)


> 1) internal corporation services, websites and webapps.

When firefox started throwing warnings on my intranet sites that are accessed via OpenVPN i did this:

1) Move all internal sites to valid FQDNs.

2) Push DNS settings over OpenVPN so that clients resolve the names with the internal DNS service and don't leak names. Names within the VPN resolve to internal IP addresses.

3) Set up a catch-all website, point the wildcard *.company.name domain to it, and make Let's Encrypt certs for the internal domains.

4) Copy the valid certs to the intranet webserver.

Done. Everything working ok.
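Step 2 is a couple of lines in the OpenVPN server config (the resolver address and domain below are examples):

```
# server.conf: hand clients the internal resolver and search domain,
# so intranet FQDNs resolve to internal IPs and don't leak out.
push "dhcp-option DNS 10.8.0.1"
push "dhcp-option DOMAIN company.name"
```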


Just buy a .com domain for your internal service and serve DNS over VPN


This is complete nonsense.


let's encrypt is free, you have no excuse.


FYI, I recently watched a potential portfolio company fail diligence for having "insecure network practices". The diligence item? Is the lockbox green.

Do I agree with this signal? Not entirely. But the company was forced into restart and into new management. Guess what colour their lockboxes were?

The default has shifted; if your best retort is "it's hard," I'm not sure what to tell you. Yet government, maybe?


Off-topic: If only IPv6 adaptation would have as much momentum as HTTPS.


I suspect the big reason it hasn't happened yet is it would require ISPs to replace tens of thousands of dollars of hardware, and it would increase support requests in the short term ("site XYZ is broken but it's fixed when I turn off IPv6").


That, and—unlike TLS—there isn't any class of business-destroying vulnerability whose easy solution is IPv6.


Is the hardware you're talking about the network equipment controlled by the ISP, or the routers and modems in customers' homes? I'd be surprised if the former hadn't been IPv6-ready for many years now, but I can imagine many customers are still using ancient hardware left over from when they first signed up for service.


If you have cable, a number of providers have been making customers upgrade to the newest modem. A single old modem that doesn't support docsis 3 will slow down everyone in your neighborhood.


Really? Interesting... can you elaborate/provide some reading?

I'm a software engineer with a smidge of basic networking experience so not completely clueless, but definitely inexperienced with DOCSIS and this sort of residential networking stuff.


ISPs love IPv6 because they lack IPv4 addresses.

Here in Germany, you no longer get IPv4 addresses from Unitymedia.


How do people get to IPv4 sites? Do they set up a transparent tunnel of some kind?


Yes, they use something called DS-Lite where IPv4 gets CGNATed (many customers share one IPv4).


For real. I've seen it more these days. A friend in a po-dunk town in Northern California I visited had IPv6 from Comcast. I was kinda shocked since my fiber Gigabit ISP in Seattle didn't have IPv6 rolled out to residential customers yet.

It seems more important than ever to roll out IPv6, since, at some point, IPv4 is going to become incredibly scarce. Imagine a permanent/reserved IPv4 address on DigitalOcean/AWS/Vultr going from a few dollars a month to $70/month or $100/month. Forget network neutrality, regular people won't even be able to host their own content in a way everyone else can reach.


It's already a bit of a pain, as at a dollar or a few dollars a month it's approaching or over the cost of some servers I want to run.

But I expect things will continue slowly being replaced; as more switch over, the stress on IPv4 lowers.


There's a real and immediate value for users if HTTPS is used. I don't see the immediate value for end users when IPv6 is used. Worst case, the lack of NATs makes tracking easier.

IPv6 is certainly necessary but nothing users have to worry about.


HTTPS has Google's bully pulpit behind it, and even though HTTPS has its issues, they pale into insignificance compared to IPv6's "problematic" design choices.


I wish people would stop equating "secure" with "HTTPS".


That's not what's done. "No HTTPS" is equated with "non secure".


He also claims that Qantas "secured their site" by adding HTTPS to their login page, and that serving sites over HTTPS is "secure by default"


This is true. I can intercept your username/password during login by being in close proximity to you if you are logging into their website over plaintext HTTP. Not possible if it is protected by TLS (HTTPS).


Well that's what the S in HTTPS stands for. I am pretty sure that anybody who knows the difference between HTTP and HTTPS also knows that "security" is not binary.


Since this guy is equating "secure" with "HTTPS" in some of his statements, would you say that he does not understand that security is not binary?


I hope they did some user testing to see how people actually behave in the presence of such warnings but in my experience it does nothing. Worse, it's in an environment that is already rife with little messages in corners trying to get your attention (ads) so users may be more "blind" when browsing than usual.

The success of "Let's Encrypt" suggests that a key part of the problem wasn't a lack of user complaints about security. Rather, it was a lack of a sane model (both technically and economically) for setting up and maintaining certificates. In the end, people maintaining sites already had 100 other things to worry about and weren't going to get around to HTTPS with anything less.


With Cloudflare's easy-to-use free SSL first and Let's Encrypt later, I think there are no more excuses for not being secure.


Cloudflare's free SSL is not secure.


Well it kind of is if you set up a self-signed SSL cert on your server and use Full SSL (not strict).


You can setup a Let's Encrypt certificate on your server and use Full SSL (strict). It will also make switching away from Cloudflare in the future easier.


That's right. Cloudflare doesn't try to lock people in with artificial constraints. Use Let's Encrypt for your origin. Very soon we hope to support Let's Encrypt completely for our main certs (once they have wildcard support).


is not? or was not? honest question. care to elaborate for a noob?


CloudFlare's "Flexible SSL" (https://www.cloudflare.com/ssl/) offers encryption/authentication from CloudFlare's server to the client, but none from the origin server to CloudFlare's. Which means that is a vector by which the content could be sniffed or modified in transit.

It's a "better than nothing" option, as there are a slightly higher number of actively exploited attack vectors that apply to the client to CDN connection than the CDN to origin server, such as "free" wifi that injects ads, malicious ISP DNS, and the like. But it's not actually secure, as the origin server to CDN connection could be tampered with, and just because there are fewer active attacks that would be likely to affect that connection right now, doesn't mean that someone won't come along later and hijack such a connection.

CloudFlare offers other TLS options that do include encryption and authentication between the origin server and CDN, but they do require that you set up a certificate on your server, so if all you're trying to do is enable TLS (and don't care about the CDN), just installing a cert on the origin server and using TLS is probably a simpler option that using CloudFlare.


We encourage users to use Strict mode which requests and validates a certificate from the origin.

It's great that shared web hosting providers and others are starting to make it easy to acquire and install a certificate, but that hasn't always been the case.

EDIT: We also provide an API that will provision a free certificate for your origin: https://blog.cloudflare.com/cloudflare-ca-encryption-origin/. The certificate is optimized for communication with our edge (essentially just as small a chain as possible, as we don't need the intermediate to walk to the root). Either that or use certbot from EFF/Let's Encrypt.


great noob elaboration. much appreciated. :)


> is not? or was not?

Troy Hunt (OP) discusses this at length in this post => CloudFlare, SSL and unhealthy security absolutism https://www.troyhunt.com/cloudflare-ssl-and-unhealthy-securi...


Let's Encrypt's short expiration times make them too cumbersome to use.


The point is to automate. If you're manually renewing them every three months, then you're very much doing it wrong and it should be cumbersome.
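In practice the automation is a single cron entry or systemd timer; `certbot renew` is a no-op unless a cert is within ~30 days of expiry. A sketch (the reload hook depends on your web server):

```
# crontab: attempt renewal twice a day; only renews certs near expiry
17 3,15 * * *  certbot renew --quiet --deploy-hook "systemctl reload nginx"
```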


But most people don't want to, or don't know how to, automate. HTTPS is supposed to be used on every site, but not every site is set up by a developer. That's a basic flaw that will make it problematic to have these kinds of HTTPS-only policies.


Having done both, it's far easier to set up one of Let's Encrypts automatic tools than it is to install a certificate manually.

End users won't be configuring their own servers anyway. At best they'll get a cPanel (which using AutoSSL, can then support Let's Encrypt).


If one can't handle setting up Let's Encrypt, they should host their content on a platform where others take care of things like HTTPS for them.


I'm running Varnish in front of wordpress and mediawiki.... sure I can make it all work with HTTPS, but it's going to be a PITA.


Not really. You put nginx in front of Varnish and terminate your TLS there. It's not that much more work. Hint: make sure you have an X-Forwarded-For entry in your nginx config. Your root location would look something like:

    location / {
        proxy_pass http://localhost:8082;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

Assuming your Varnish setup was serving on 8082.


Extra cost of a static IP hosting (needed for domain registration) is a big factor, doubles the costs of my small website hosting.


https://en.wikipedia.org/wiki/Server_Name_Indication

Any host that still requires a dedicated IP for https is woefully out of date.


> Any host that still requires a dedicated IP for https is woefully out of date.

May be true in a perfect world but this is my web host (bluehost):

>> Note: Since SSL Certificates are Domain/IP specific, you must first Purchase a Dedicated IP before purchasing or having an SSL Certificate installed on your account. They will NOT work with a shared IP address.

I'm pretty sure a lot of other hosting services still go that route.


Bluehost Brasil supports SNI - http://cp.br.bluehost.com/kb/answer/2643

Bluehost US doesn't seem to, as you indicate - https://my.bluehost.com/cgi/help/473

I'm not sure whether they just don't support SNI yet, or whether they are artificially enforcing this restriction so that they don't have to deal with IE6 not working.


At this point SNI is 10+ years old, hard to have much sympathy for a company that has sat on things that long.


It's sadly not just the servers but also the clients... still you're right about that being a really long tail at this point.

Edit:

A quick search seems to indicate that the only typical consumer-facing system that doesn't support SNI is IE on Windows XP. It's pretty safe to have a catch-all bucket that informs such users to use a modern browser that includes security patches, or to upgrade to a different OS.


Anything that can't do SNI isn't going to support TLS 1.1+ and won't be able to access a huge percentage of the web anyways.

Eventually old clients have to be let go of, maintaining compatibility for them degrades everyone else's security.



Domain registration? That certainly doesn't require a static IP... nor does any other part of deploying TLS.

