Certificates for localhost (letsencrypt.org)



I’ve always liked the concept of a localhost web app talking back to a localhost web server. It seems like a great way to get the cross-platform ease of developing the UI without having to do everything in the browser, so you can optimize the heavy lifting and not end up with an Electron app pulling 8 GB of RAM and 100% of 16 cores.

But I could never quite shake the nagging feeling that the localhost server couldn’t adequately be secured against outside network requests being routed to it or, as TFA mentions, inside network requests being routed away from it to an outsider!

This article helped enumerate some of the difficulties of securing such a service. Things like a memory-safe parser, checking origins, etc.

I wonder if there’s a definitive guide someone has put together, or even better a sample Go (or similar) localhost server, that demonstrates the dozen-odd layers of checks, protections, and magical incantations necessary to make such a server “secure”, in the sense that a localhost UI can make requests to it and receive sensitive data while staying safe from external attackers trying to spoof the same requests?


This is how the Dell System Detect utility worked. Back in the day I found out that it only checked whether the referring domain ended with dell.com, so 'notdell.com' passed its validation[1].

Instant, easy, unstoppable root RCE on a lot of Dell machines from any website (it was a GET request as well, if I remember correctly). No built-in auto-update, no system tray icon, no idea if it's running. Good times.

I found something similar with HP as well[2]. Since then I'm OK with this functionality not being used (and abused) too much. There is too much scope for things to go wrong, and badly so.

1. https://tomforb.es/dell-system-detect-rce-vulnerability

2. https://tomforb.es/hp-support-solutions-framework-security-i...


> I’ve always liked the concept of a localhost’d web app talking back to a localhost web server.

We're doing exactly this, in prime time, with Relica: https://relicabackup.com (sorry, not much on the landing page yet, but we have already emailed out some info about the UI [1]). That technique lets us distribute backup software that works the same on macOS, Linux, BSD, and Windows, right away; screenshot: [2]. And it's very lightweight.

An added benefit of this approach: we were able to take its REST API and, after writing a small custom Go package, we instantly gained an elegant CLI [3] so it has a headless mode too! With all the functionality of the GUI [4].

I haven't seen much consumer software that does this, and I'm not sure why, so I feel like we're taking a bit of a risk, but I think (hope) it will pay off; the benefits have already started becoming clear and they're definitely appealing.

[1]: https://mailchi.mp/2b5e7f57e400/a-brief-introduction-to-reli...

[2]: https://twitter.com/relicabackup/status/1005105584260067329

[3]: https://twitter.com/relicabackup/status/1006204516344086528

[4]: https://twitter.com/relicabackup/status/1006206423821254656


Thank you, yes, I think architecturally there are great advantages to splitting up an app like a client/server even when designed primarily to be accessed over localhost.

Obviously the “server” API is extremely sensitive and I think you have to assume it is effectively exposed to the outside world, even with a 127.0.0.1 binding and a firewall.

I guess if you make localhost users literally log in and establish a session, then you could consider yourself safe. But it’s a weird experience logging into a local application. So whatever you do to authenticate the request as local, it has to be unspoofable.

I just don’t think I trust the HTTP headers enough!


Yeah, we don't trust just anything that comes in on that socket. For example, we implement standard CSRF mitigations like checking Origin/Referer headers. We also don't use DNS at all in the local frontend/backend interactions and require the Origin to be exactly "127.0.0.1" (or the IPv6 equivalent) which is what we bind to.

(Edit: I just looked it up again and I'm 99% sure that web pages can't override the Origin header, especially when making cross-origin requests.)
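
For illustration, a minimal sketch of that kind of check as Go middleware (the port is an assumption, not Relica's actual code; note that the Origin header carries scheme and port, and browsers omit it on some same-origin requests, which a real implementation would have to decide how to treat):

    package main

    import "net/http"

    // requireLoopbackOrigin rejects any request whose Origin header is not
    // one of the loopback origins we actually serve (scheme and port included).
    func requireLoopbackOrigin(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            switch r.Header.Get("Origin") {
            case "http://127.0.0.1:8000", "http://[::1]:8000": // assumed port
                next.ServeHTTP(w, r)
            default:
                http.Error(w, "forbidden origin", http.StatusForbidden)
            }
        })
    }

    func main() {
        api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("sensitive data"))
        })
        http.ListenAndServe("127.0.0.1:8000", requireLoopbackOrigin(api))
    }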


It seems like there could be unexpected interactions with other services running on 127.0.0.1, which don't even have to be HTTP to cause problems. E.g., what if there's a service that echoes the request back in the response when it receives one it doesn't understand? A remote web page could probably use that to bounce its request to your service while appearing to come from 127.0.0.1.


> I just looked it up again and I'm 99% sure that web pages can't override the Origin header, especially when making cross-origin requests

What about other, potentially unprivileged software running on your machine that can?


If you've got rogue software running on your machine, all bets are off.


Not necessarily. It could be running in a sandbox, or as an unprivileged user, and accessing your app's API over localhost would allow for privilege escalation.


I see your point. But in my book, that's still rogue software. And that's a terrible sandbox. :)


I see your point, but you're saying:

> Yeah, we don't trust just anything that comes in on that socket

Well, you are: you're trusting that it came from a browser, so that the Origin header is correct, right?


A login is not enough because of cross-site request forgery. A site on the internet can include CSS, scripts, or images from your service and generate GET and POST requests to it with the user's credentials.


Interesting! How do you go about binding an unused port on client machines?


Use port :0 to let the OS choose one.

But for now we just hard-code a port. I personally prefer this since it's easy to use and convenient to remember. But if a lot of client machines have conflicts, I guess we'll change that...


Why not just change that? You're already on the same host; you can write the port that the server bound to at startup to a file and just read that from the client.

I've implemented something similar and went through the same "I'll wait until someone complains" stage. The complaints will happen, so just prevent them right now :)
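
For reference, a minimal sketch of the port-0 approach in Go ("port.txt" is a placeholder for wherever the local client expects to find the port):

    package main

    import (
        "fmt"
        "net"
        "net/http"
        "os"
    )

    func main() {
        // Bind to port 0 so the OS picks any free loopback port.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        port := ln.Addr().(*net.TCPAddr).Port

        // Record the chosen port where the local client can read it.
        if err := os.WriteFile("port.txt", []byte(fmt.Sprint(port)), 0600); err != nil {
            panic(err)
        }

        http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from the local backend")
        }))
    }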


So how do you compare to Arq Backup or Borg Backup, other than your choice of using a web browser tab for the UI?


Great question. More details are coming out soon on our mailing list and Twitter, and there are so many factors to consider, but in short, Relica:

- is designed for consumers who are not technically skilled

- works on Linux and BSD (I don't think Arq does)

- offers redundant cloud storage with up to 5 providers, and requires only a single upload of the data as compared to uploading it 5 times

- allows you to back up to local disk, friends' computers (or your own) running Relica (with authorization, of course), or the cloud, which is totally managed by us so non-technical users can use it

Relica also does client-side encryption and deduplication, of course. Backups can also be restored directly with restic (open source), without requiring a reliance on Relica.

Basically, Relica is a good balance of "user-friendly" combined with features for power users.


>offers redundant cloud storage with up to 5 providers, and requires only a single upload of the data as compared to uploading it 5 times

How does that work?


We replicate your upload after it leaves your computer (at the packet level - we can't decrypt your data, we don't even have the key for it). There were quite a few technical hurdles we had to overcome to make this work, but I gotta admit, it's really cool to see it in action. :)


I recognise that username! Hello.

This does raise the question of how fast the upload will be - how's your network?

Switching from Dropbox to OneDrive doubled my upload speed simply because there's better peering from my ISP to Akamai vs whatever Dropbox was using at the time.


Hi there! Our upload infrastructure is designed to scale to anywhere in the world where we decide to put up relays. That makes the speed variable depending on where you are and where the relay is. We are still testing on our staging infrastructure and haven't deployed to our production networks yet, so it's hard to say right now what our speeds will be. But I'd love to know more about what your speeds are like now and what you expect with your backup service. Could you tweet at either me (@mholt6), @relicabackup, or email support-at-relicabackup.com and I'll get back to you on that?


Edit: I'm happy to be wrong. It seems the cross-domain checks don't allow this. I misunderstood parts of the article/thread.

I want to say a few things regarding the "insecurity" of localhost. I find it completely insane, and a huge oversight, that downloaded JS can access your PRIVATE localhost services. It should be able to access external public services (subject to cross-origin and similar policies), but the browser should never allow access to your localhost services by default. They are assumed to be private services, and everyone uses them as such, taking advantage of the (lack of) security implications.

If you download and install a program, you are open to the same problems. But there is a huge difference between installing something and clicking a random link on the internet. Local services should be treated the same as files on your computer: block everything unless explicitly allowed by the user.


git-annex, which uses a web server on localhost for the Assistant UI, uses this process:

- generate a random token

- start the server configured to only accept requests that contain that token

- generate an HTML file that redirects to a local address passing that token, with read permissions only for the current user

- run the browser with that file as argument

In theory, this should prevent external requests, as well as requests from other users on the same machine.
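
A rough sketch of that flow in Go (the file name, redirect page, and xdg-open launcher are placeholders; git-annex's actual implementation differs):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "net"
        "net/http"
        "os"
        "os/exec"
    )

    func main() {
        // 1. Generate a random token.
        buf := make([]byte, 32)
        if _, err := rand.Read(buf); err != nil {
            panic(err)
        }
        token := hex.EncodeToString(buf)

        // 2. Serve on loopback, accepting only requests that carry the token.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        go http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.URL.Query().Get("token") != token {
                http.Error(w, "bad token", http.StatusForbidden)
                return
            }
            fmt.Fprintln(w, "authenticated UI goes here")
        }))

        // 3. Write an HTML redirect to the tokenized URL, readable only by us.
        url := fmt.Sprintf("http://%s/?token=%s", ln.Addr(), token)
        page := fmt.Sprintf(`<meta http-equiv="refresh" content="0;url=%s">`, url)
        if err := os.WriteFile("launch.html", []byte(page), 0600); err != nil {
            panic(err)
        }

        // 4. Point the browser at that file.
        exec.Command("xdg-open", "launch.html").Run()
        select {} // keep serving
    }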


HTTP over a Unix domain socket (owned by the user) would simplify all this.

Sadly, browsers don't support this... :(


It would be neat if you could listen on a Unix domain socket and have a simple way to specify that as an endpoint for an HTTP URI.

i.e.,

http://[uds/path/to/uds.socket]

I reuse IPv6's current syntax here just for the example. A lot of bikeshedding would be needed. It would also be ugly to expose those kinds of internals to an end user, so you'd need some additional technology on top of this to look pretty.


There is an unofficial unix: URI scheme. Browsers don't support it but system clients sometimes do, for example nginx when configuring upstream groups as a reverse proxy[1].

[1] http://nginx.org/en/docs/http/ngx_http_upstream_module.html


But that doesn't specify the protocol then, does it?

I.e., http://unix:/path would be very much not backwards compatible.


A '+' is a valid character in a URI scheme.

So http+unix:///path would make more sense.


I've been building an animation tool that provides a UI over http (https://github.com/logicalshift/flowbetween) so it has a lot of these problems.

For running as a local app, what I really want is to be able to run the server as a UNIX-domain socket. (Well, I can do that fairly easily but what I really want is for browsers to be able to connect to one of these).

For a single-user app, the main issue is that it could be running on a multi-user system, so there's the possibility of contention for ports and so on, as well as the need to verify that the right person is connecting. While it's possible to just bind the server to the loopback address, anybody on the same system can access it there, so that's not necessarily secure enough. For localhost verification, accessing/setting some information from a file URL might work.

With the loopback address, encryption doesn't seem to matter too much: anybody capable of intercepting traffic between a piece of software and the browser will also be in a position to just directly read what you're typing. Possibly by looking over your shoulder.

However, one of the reasons I want an HTTP UI is to make it possible to use something like an iPad as an input device, and there are definite issues when the service is something that's randomly stood up and torn down, usually running on a local network rather than the internet. In particular, TLS really expects a centralized service, so it seems anything other than a self-signed certificate isn't going to work, and that comes with a bunch of scary messages for the user.

The other issues of authentication all seem to be much the same as for any other web app, though it seems to me that it's possible to streamline it a bit as it'll be quite common for a user already authenticated on one device only to need to prove that they're the same user on another.


I usually write a special middleware in golang for this. It sits in front of the actual router and http handlers and also buffers the response so it can modify it if necessary (this is unrelated to localhost security). Off the top of my head the checks are basically:

1. Who is the remote IP?

2. What do the Origin and Referer headers say?

3. What does the Host header say? (I always enforce a whitelist for localhost applications.)

4. Analyze the other headers: anything out of the ordinary?

5. Enforce a strict CORS policy on ALL requests, no exceptions.

6. Minimize contact points if external APIs are called (separate routers, preferably on a separate port).

7. Enforce a strict CSP policy (self only, no unsafe eval or anything) on ALL requests.

8. If outgoing requests are required, write a portal that they must go through (i.e. a common HTTP client instance) that enforces where the requests are allowed to go.
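
For illustration, a skeleton of a few of those checks in Go (the names, port, and whitelist are assumptions, not the poster's actual middleware; the Origin/Referer and CORS checks would slot in the same way):

    package main

    import (
        "net"
        "net/http"
    )

    func localOnly(next http.Handler) http.Handler {
        allowedHosts := map[string]bool{
            "127.0.0.1:8000": true, // assumed port
            "localhost:8000": true,
        }
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Check 1: the remote IP must be a loopback address.
            host, _, err := net.SplitHostPort(r.RemoteAddr)
            if err != nil || !net.ParseIP(host).IsLoopback() {
                http.Error(w, "forbidden", http.StatusForbidden)
                return
            }
            // Check 3: the Host header must be on the whitelist.
            if !allowedHosts[r.Host] {
                http.Error(w, "bad host", http.StatusForbidden)
                return
            }
            // Check 7: a strict CSP on every response.
            w.Header().Set("Content-Security-Policy", "default-src 'self'")
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })
        http.ListenAndServe("127.0.0.1:8000", localOnly(mux))
    }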


> But I could never quite satisfy the nagging feeling that the localhost server could adequately be secured against outside network requests being routed to it, or as TFA mentions, inside network requests being routed away from it to an outsider!

Wouldn't that be solved by binding your listener to 127.0.0.1 (as opposed to 0.0.0.0 or your actual IP)? A request sent to 127.0.0.1 shouldn't touch the network at all.


But can’t any JavaScript in the browser call back to 127.0.0.1 (within the bounds of the cross-origin policy)?

So you have to check Origin/Referer at least I think.

Can you be sure Origin/Referer isn’t spoofed?

Then there’s DNS rebinding which can get around the SOP.

What about maybe strange ways that malformed requests could end up being reflected back to 127.0.0.1 through rebinding?

So it just seems to me that even binding to 127.0.0.1 you have to be ready for just about anything to come in on the socket.


> Can you be sure Origin/Referer isn’t spoofed?

As far as I know, browsers forbid modifying these headers, (especially?) when making cross-origin requests.


The short answer is: it depends on your firewall rules.

The long answer is: if the sender configured their routing to point localhost at your host, your service would still accept the connection and route traffic back to the foreign address. But this type of attack can easily be firewalled against.

There is also the potential problem of reverse proxies. However, that requires local machine access anyway.


If the browser and server are on the same machine, you remove a whole host of the barriers to identification. You could use any sort of local knowledge, like system files or your NIC, as identification. NB: I haven't thought this through, but I'm sure there's something to it :)


Or a system call to identify the process or user on the remote end of a connection; e.g., http://illumos.org/man/3C/getpeerucred
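
For illustration, the rough Linux analogue for a Unix domain socket is the SO_PEERCRED socket option; a sketch in Go using golang.org/x/sys/unix (the socket path is a placeholder):

    package main

    import (
        "fmt"
        "net"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Placeholder socket path; SO_PEERCRED is Linux-specific.
        ln, err := net.Listen("unix", "/tmp/app.sock")
        if err != nil {
            panic(err)
        }
        conn, err := ln.Accept()
        if err != nil {
            panic(err)
        }
        raw, err := conn.(*net.UnixConn).SyscallConn()
        if err != nil {
            panic(err)
        }
        var cred *unix.Ucred
        raw.Control(func(fd uintptr) {
            cred, err = unix.GetsockoptUcred(int(fd), unix.SOL_SOCKET, unix.SO_PEERCRED)
        })
        if err != nil {
            panic(err)
        }
        // The kernel reports exactly which process and user are on the other end.
        fmt.Printf("peer pid=%d uid=%d gid=%d\n", cred.Pid, cred.Uid, cred.Gid)
    }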


Yup that’s the type of thing we do with Lantern. In particular we:

1) Bind to a random port

2) Require a securely random base path for all requests. Anything without that path is rejected.

The backend just opens the browser at the random port and base path.
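
A compact sketch of that scheme in Go (illustrative only, not Lantern's actual code):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        // A securely random base path that other pages and processes can't guess.
        buf := make([]byte, 16)
        if _, err := rand.Read(buf); err != nil {
            panic(err)
        }
        base := "/" + hex.EncodeToString(buf)

        mux := http.NewServeMux()
        mux.HandleFunc(base+"/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "ok")
        })
        // Requests without the base path fall through to the mux's 404.

        ln, err := net.Listen("tcp", "127.0.0.1:0") // random port
        if err != nil {
            panic(err)
        }
        fmt.Printf("open http://%s%s/\n", ln.Addr(), base)
        http.Serve(ln, mux)
    }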


I've done exactly that: https://github.com/fairlayer/fair

A single auth_code shared between the web session and the local app is all you need.


When I was working on e-detailing apps for pharmaceutical sales reps in 2007-2010, we did this. Originally the UI was based on Flash and intended for Windows Tablet PC usage. I think the server security was pretty rudimentary.


This is how, for example, Beats Updater and 1Password browser extensions work on macOS, as I understand it.


The two could use mutual TLS with baked-in certs.


The Plex approach to this kind of problem is pretty interesting: https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...

Unfortunately I haven't seen it being done elsewhere. It'd be nice if LetsEncrypt or similar could provide this for more generic everyday use.


As discussed in crbug.com/378566, Chrome currently allows connecting to unsafe WebSockets on localhost. So just use a WebSocket to communicate from your HTTPS hosted page to your local server.

And yes, you definitely should whitelist access based on the origin header.
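
For illustration, a sketch of whitelisting the origin at the WebSocket upgrade, using the gorilla/websocket package (the allowed origin and port are assumptions):

    package main

    import (
        "net/http"

        "github.com/gorilla/websocket"
    )

    var upgrader = websocket.Upgrader{
        // Refuse the upgrade unless the page initiating it came from our site.
        CheckOrigin: func(r *http.Request) bool {
            return r.Header.Get("Origin") == "https://app.example.com"
        },
    }

    func wsHandler(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            return // Upgrade has already written the error response
        }
        defer conn.Close()
        conn.WriteMessage(websocket.TextMessage, []byte("hello from localhost"))
    }

    func main() {
        http.HandleFunc("/ws", wsHandler)
        http.ListenAndServe("127.0.0.1:8000", nil)
    }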


There's also "CORS and RFC1918"[1], which IMO would be a great way to stop apps from unintentionally exposing themselves to the open web.

[1]: https://wicg.github.io/cors-rfc1918/#headers


I tried this, and Chrome complains about mixed content and forces you to allow the behaviour and then reload the page before you can get it working. If that's acceptable, then sure, but for actual use it's not really any good.


Did you connect to ws://localhost:<port>/?

As mentioned in the bug above, this approach is currently recommended by Chrome and in use by a number of large applications/sites.

It's completely transparent and is being used in production today.


I'm desperate for this. The hardware product I work on can be controlled by a mobile app. It was a very deliberate decision not to make the hardware product and the mobile app both talk to a remote server acting as a proxy between the two. But that leaves me using plain HTTP between them.


We ended up using the WebRTC data channel to set up communication between our hardware product and mobile app.

There’s still a necessity for a signalling server in between to set up the communication, and a STUN/TURN server to proxy where it’s not possible to set up direct channels. But the whole thing is super low latency and uses DTLS, so it’s a nice alternative for many use cases!


> We ended up using the WebRTC data channel ...

That is an interesting approach. Unfortunately we also have to support browsers, so it doesn't look like we could use it. (You have to type in http://something to get the page that does the WebRTC, as far as I can tell.)


WebRTC can be used in a server fashion. We open-sourced a library that does just that: https://blog.rainway.io/real-time-communication-for-everyone...


Unfortunately not applicable. The problem is a user with a regular desktop browser open who has the IP address of the device (on the same subnet). They have to type something into the browser address bar.

As far as I can tell, there is no way they could type something like webrtc://1.2.3.4; they'd have to type something beginning with http or https. It can't be the latter because of signing issues, so we are still stuck with http.


More than happy to offer my thoughts if you want to reach out: andrew@rainway.io


Couldn't you use it to coordinate NAT punchthrough?


The mobile app (or web browser) is on the same network (subnet) as the hardware device. There is no need for NAT traversal or anything else; Zeroconf is used for discovery.

There is little threat on most home networks, but I still prefer using SSL. If we only had to support the mobile app, we could implement a solution (e.g., SSH-style trust-on-first-use with self-signed certificates). However, we also have to support regular browsers, which constrains us to standard protocols and security.


I'm working on a product that has a similar case. We luckily have the benefit of the hardware having a display and input, so clients need to pair themselves explicitly (i.e., you need to physically press an authorize button on the hardware) before they can have full access. We figure that if someone/something has access to the hardware, you're already boned.


We also have a display and also do pairing. You have to press a physical button on the device, which results in a 4-digit code on the display that has to be entered in the browser or app (somewhat analogous to Bluetooth pairing). At the end of the day this results in a cookie being set. But the traffic is still over HTTP.


We built a PKI on top of Lets Encrypt to deliver certificates to all of our users for Rainway: https://rainway.io/technology/

Similar to Plex, users run our software on their PC and then our web based client can connect to it from elsewhere.

It’s mainly used as a fallback if WebRTC fails (https://blog.rainway.io/real-time-communication-for-everyone...). If there is interest, happy to do a blog post.



Isn't this exactly what the blog post is saying not to do?

> By introducing a domain name instead of an IP address, you make it possible for an attacker to Man in the Middle (MitM) the DNS lookup and inject a response that points to a different IP address. The attacker can then pretend to be the local app and send fake responses back to the web app, which may compromise your account on the web app side, depending on how it is designed.


Only if you ship the private key with your app. Otherwise the MITM will fail certificate validation.


How would you run a local HTTPS server without the private key?


They get one private key per user, and send it to the client's device.


The private key is generated on the client side, and a certificate is issued for it. Plex has an intermediate which they control to issue these. It would not pass normal validation processes.


Which "compromises" the key, according to current Certificate Authority policies. Once again the problem boils down to CAs being the sole "anchors of trust" in the current certificate system.


Then they could have their server tunnel ACME challenges to the device, so the private key never leaves the device but a certificate can still be issued for it.


Oh, indeed.


Couldn't the web app also verify that the localhost.example.com domain resolves to 127.0.0.1 before attempting a connection?


What prevents you from using letsencrypt? They issue wildcard certificates now. Is there tight limit on number of subdomains?


What I don't understand is this bit:

> Fortunately, modern browsers consider “http://127.0.0.1:8000/" to be a “potentially trustworthy” URL . [...] WebSockets don’t get this treatment for either name.

It's good that they at least added an exception for (verifiable) localhost access, but then why is the exception only given for HTTP? There seems to be a deliberate restriction excluding WebSockets.

I find this kind of strange and frustrating, as the WebSocket wire protocol actually contains more protections against accessing vulnerable services than HTTP does. Without this exception, there is no way at all to connect to a local service via WebSockets.

I've found this ticket for Chrome[1], where apparently the rationale is that they want to move people to their own IPC messaging mechanism. However, this is Chrome-only and requires you to register a Chrome extension.

So if I get this right, this still leaves no standard way to have asynchronous communication with a local process.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=418482


Chrome permits access to ws://127.0.0.1 from HTTPS apps. I believe it's also making its way into Firefox, which currently applies it to http only. Other browsers (IE/Edge/Safari) don't even follow this exception for http yet.


Interesting. My impression from the above bug was that they explicitly decided against keeping it open. If they reversed their position, that would be good news.

Could you give a link where this is discussed? I see crbug.com/378566 mentioned in another comment, but there doesn't seem to be a decision in there, apart from general acknowledgement that the use-case exists.


While a huge pain in the ass, it could be possible for your local app server to create a [bridged] virtual host network that routes locally for your server. It's utterly ridiculous to resort to this method, though. Does anyone else have an idea for a workaround?


That wouldn't solve the problem though. The browser would still block your bridged connection if it isn't https or wss.

At which point you again need a domain for your virtual IP, a certificate for your domain, and a private key for the certificate on the device, which will probably cause your certificate to be reported as compromised and revoked.


If you're using ASP.NET Core, this is built into the most recent 2.1 release - local dev cert as well as HTTPS redirection middleware and HSTS in development.

https://blogs.msdn.microsoft.com/webdev/2018/02/27/asp-net-c... https://docs.microsoft.com/en-us/aspnet/core/security/enforc...

As the letsencrypt article points out, you want to start building and testing with HTTPS as early as possible, so this is all wired up as part of creating a new project with ASP.NET Core 2.1.

[disclaimer: on .NET team, Nazgûl]


Came here to write this; very impressed! Saved me so much time. It might help others to point out that you need the ASP.NET Core 2.1.300 SDK; 2.1.200 is not sufficient, and it took me a while to realise this. Perhaps I'm veering off topic, but why no TypeScript in the 2.1 react/redux templates?


I was hoping for a good answer, rather than "this is hard and it will just get worse." We have an old app that my team is modernizing with this exact situation. It uses WebSockets now, but that's a historical thing, and all the web apps were non-secure, so it worked okay. Now everyone wants SSL turned on, and this puts the WebSocket method in jeopardy. Somebody before me decided we should switch all the inter-app communication to an external XMPP server. Ugh.


Same here. We use a subdomain with HTTPS that points to 127.0.0.1, and a WebSocket server on our clients' machines. We install the private key on each machine for this to work... yeah.

It's always a nightmare, when a browser-upgrade comes along and changes something.

Currently the following connections work:

"wss://localhost.example.com:4321/somepath", // Mac: Chrome, Safari, FF

"wss://localhost:4321/somepath", // Win: Chrome, FF

"wss://127.0.0.1:4320/somepath" // Win: EDGE, IE11, IE10

Some other combinations are possible, that's just what I know we use atm.

The worst thing is: I didn't find a way to catch connection errors to these URLs, so I have to use timeouts and try all of them... (or decide by User-Agent which URLs to try).

I wish there would be some kind of standard that solves this problem.


Theoretically you should be able to use ws://127.0.0.1 from an https context, but I'm not aware whether it's already well supported in browsers.


Well, this is the reason why everything round-trips to a cloud service now.


What's wrong with wss://?


As a practical matter, you cannot implement a secure WebSocket on localhost. Essentially all of the methods for communicating between a web app and a native app are some kind of ugly hack, and none are supported well enough that you can feel confident they'll stick around for any length of time in all the major browsers.


Relevant discussion regarding Activision/Blizzard's use of the domain "localbattle.net" (pointing to 127.0.0.1) for localhost communication with the agent:

Reddit: https://www.reddit.com/r/heroesofthestorm/comments/7lb8vq/he...

Blizzard response: https://us.battle.net/forums/en/bnet/topic/20760626838


It reminds me of the time when I accidentally went to a domain that had its A record set to 127.0.0.1. I was extremely confused about why a random domain had a copy of my project.


Everyone here, and TFA, talks about 127.0.0.1 (a.k.a. localhost), but the entire 127.0.0.0/8 subnet is routed to the local machine. Does anyone here know how browsers treat the other addresses?

Do they compare to 127.0.0.1? To 127.0.0.0/8? Do they consult the routing table?


I suspect that the default reading of this text is "add your localhost.crt to the system's locally trusted roots" (or worse, "import the root certificate [into your locally trusted roots]"). Although there are ways to install localhost.crt in your list of locally trusted roots so that it's only visible to your application, I'm concerned about the naive reading of the post.

I would be very concerned to find that a random application like Spotify, is installing root certificates on my machine, as that would allow them to MITM any connection that doesn't have some kind of key pinning.

Although it's not exactly the same case (the driver's shenanigans had no legitimate use, while Spotify does have one), an audio driver caught installing a CA root certificate was reported as a CVE vulnerability: https://www.kb.cert.org/vuls/id/446847


Blizzard application installs root certificate: https://news.ycombinator.com/item?id=15982161

I can say that I wasn't even aware of it; it installs absolutely silently. So if you care about this, check your certificate store from time to time, because some random application may install one at any time!


It's not a root certificate in the CA sense. It can't sign new certificates and exists only so the browser can communicate with the HTTPS server running on the same computer.

The key isn't even distributed anywhere. It's generated locally and then marked trusted so the browsers don't show warnings.

Checking the list of certificates from time to time is a good idea (although I wonder if anyone really does that very often without some automated help), but in this case Blizzard was doing something that was not only fine (using a self-signed cert to secure local traffic) but the absolutely correct way to do it, as additionally explained by Let's Encrypt themselves.

edit: I guess the one caveat is: if you can find the certificate's private key, you can serve your own server at that domain and launch an attack using a trusted certificate, without needing admin permission to add your own malicious certificate.


I agree, but for me their reasoning was weak, and I would opt out of this feature because it's not even relevant for me. I don't like it when someone messes with my trusted certificate store. And they installed that certificate absolutely silently; I learned about it from Hacker News.


That was with a CA certificate, though. AFAIK, as long as the certificate marked as trusted does not have the CA flag set, and isn’t a wildcard cert for “*” or something silly like that, it can only be used to impersonate the specific domain(s) named in the certificate. Thus, even if the certificate is in the system global trust cache, it wouldn’t allow anyone to “MITM any connection that doesn't have some kind of key pinning”, only connections to that site. However, I can’t rule out that trusting a certificate for “localhost” for which an attacker knows the private key could still cause some sort of security vulnerability, though I’m not sure what exactly it would be.


It's an absolute joke how many hoops one needs to jump through to do this very basic thing. Hell, this command line is longer than the code required to start a web server in some programming languages.
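
For reference, the invocation under discussion (from the article) looks roughly like this; the exact extension config is reassembled from the article, so treat it as approximate:

    openssl req -x509 -out localhost.crt -keyout localhost.key \
      -newkey rsa:2048 -nodes -sha256 \
      -subj '/CN=localhost' -extensions EXT -config <( \
       printf "[dn]\nCN=localhost\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:localhost\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth")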


To be fair, the complicated openssl invocation is largely because OpenSSL is crap. Let's break it down:

"openssl req -x509"

This is a dodge because we don't actually want to write a CSR and then sign the cert, we're going to skip all that so we're using a sub-feature of a different sub-feature. Fine.

"-out localhost.crt -keyout localhost.key"

There surely must be nicer ways to set this, but it's not so objectionable...

"-newkey rsa:2048"

This is arguably boiler-plate, the configuration file can set this default, but there's a good chance "your" config file was pasted in by an OS vendor ten years ago and says e.g. 1024-bit RSA, or worse.

"-nodes -sha256"

Now we're getting into the nonsense. We don't want DES encryption. Nobody wants DES encryption, and if we did want DES encryption we could specify the passphrase for it, which we didn't, so this needn't be here. SHA-256 has been the reasonable baseline choice here for years and so likewise we shouldn't need to specify.

"-subj '/CN=localhost'"

This is pointlessly arcane and shouldn't be needed, but it's only partly OpenSSL's fault. This abuse of the X.509 Common Name was obsolete on the Internet last century, and it's annoying that people were still coming to terms with that in the last few years, so certs which lack a Common Name often don't interoperate; thus it's easier to put it in anyway.

"-extensions ...."

This part, which involves a multi-line sub-shell and other shenanigans, is completely out of hand. This is the shortest, least crazy way to ask for the certificate practically everybody running OpenSSL actually wants, and yet instead of being the default or offered with some easy to understand command line parameter, even in brand new versions of OpenSSL it's done only by this arcane hack.

All Web PKI certificates this century are supposed to use SANs (Subject Alternative Names), so you'd expect an openssl feature named say, "-san" which adds one such name, and OK, this being OpenSSL maybe you'd need to manually write "DNS:localhost" rather than it being smart enough to figure it out if you write "localhost". That'd be stupid, but par for OpenSSL. But no, you have to manually specify how SAN extensions even work in the X.509 certificate, as if it has never seen one before, even though every valid cert for the Web PKI has one. It's like if Firefox made you type not just the HTTP port default of 80 into URLs, but actually made you specify that you want to use TCP/IP in case maybe you're using Novell IPX or something.


FYI, OpenSSL has a new flag that makes the "-extensions" part somewhat simpler, but most people won't have it in their copy yet: https://github.com/openssl/openssl/pull/4986


Interesting. Is it normal that there are no tests for features like that, or does the testing happen somewhere else?


In fairness, shorter than `python3 -m http.server` is an awfully low bar.


I'm having a hard time understanding the use case here. Using a domain name for loopback IP and generating a cert will work fine for internal use. They're saying it's a security hole because you may need to distribute that private key to users. What exactly is that scenario? Shipping an app with a built-in web server? Not sure I've ever seen that done. And could you not solve it with certificate pinning?


> Shipping an app with a built-in web server? Not sure I've ever seen that done.

We did it. We developed an enterprise app as a web app, but some of our customers insisted on having it run locally because of security concerns. So we just packaged up the server with a thin client and voila! What might have been a months-long re-development took a couple of hours.


We have a need for this at scrimba.com, where we are now developing a way for people to record coding casts with access to local files and your local terminal. We want to let people record from any directory using a simple CLI (you only need to install an npm package). This starts a websocket server locally and then opens your browser at scrimba.com, which connects to said socket. This works in Chrome, since we're allowed to connect to ws://127.0.0.1, but other browsers need another solution.

MITM is not a problem, since the only communication with the socket is to read files etc. from your local machine; a malicious third party has nothing to gain and no real way to fool you.


> What exactly is that scenario? Shipping an app with a built-in web server?

I've had this need a couple of times. My scenario: a web-based application that needs to use some USB gadget (an NFC reader, for example).

You cannot access the device directly because there's no (standard) Browser <> NFC Reader API implemented in browsers. You need native code to access the device. Yet your entire application is web-based, and it would be fine if only you could interface with that pesky gadget.

In the past, people used NPAPI plugins for that, but those days are gone (browser plugins are much more sandboxed now, so they just don't have those capabilities anymore).

Solution? You ship a small companion native app. That native app is supposed to interface your web application with the USB device. How do you achieve that? Simple: the native app exposes an HTTP/WebSocket-based API at "localhost:someport" on one side, and talks to the device using native drivers on the other.

If you don't use https in your web-app, that is it. Your web-app can now communicate with your native app (by making requests to "localhost:someport"), and through your native app communicate with the device. Problem solved... right?

Wrong, because your web application should be using https. For security reasons, the browser will _not_ allow connections from an https-page to a non-https (or secured web sockets) server. Thus, your native app's exposed API must use https/wss too. And here is where the article's ordeal begins.

> And could you not solve it with certificate pinning?

Pinning a certificate is not secure because using the same certificate in all installations is not secure: notice that your native app must have the certificate and the corresponding private key (because it must serve requests under that "localhost:port"). At this point, any of your users could just grab that cert/key pair (from their local installation) and use it to man-in-the-middle your other clients (the cert is valid for them too!).

The problem is exactly the same if you obtain a publicly-recognized certificate and distribute it with your native app.

The secure solution is, as the article says, to generate a certificate _specific to this user_ during the installation of your native app, and to add that certificate to the user's certificate store. This way the certificate is only valid for that user. Your native app can use this user-specific certificate to serve https/wss requests, and the user's browser will connect to it without warnings because the certificate is trusted. However, if user A extracts the certificate and key from her installation and tries to use them to man-in-the-middle another user B, it won't work, because the certificate B trusts is his own installation-time-generated one, not A's certificate, which is different.
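
To make that concrete, a sketch of the installation-time generation step in Go (adding the cert to the user's trust store is platform-specific and not shown; error handling and serial numbers are simplified):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Fresh key pair for THIS installation only.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1), // use a random serial in real code
            Subject:      pkix.Name{CommonName: "127.0.0.1"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("::1")},
        }
        // Self-signed: the template acts as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }

        certOut, _ := os.Create("localhost.crt")
        pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        certOut.Close()

        keyDER, _ := x509.MarshalECPrivateKey(key)
        keyOut, _ := os.OpenFile("localhost.key", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
        pem.Encode(keyOut, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
        keyOut.Close()
    }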


>Wrong, because your web application should be using https. For security reasons, the browser will _not_ allow connections from an https-page to a non-https (or secured web sockets) server. Thus, your native app's exposed API must use https/wss too. And here is where the article's ordeal begins.

The article also says that the website could make requests to http://127.0.0.1:portnumber/ and browsers will allow that even from HTTPS websites.


It works for HTTP requests, but not for WebSockets (the article mentions that too). If you want decent bidirectional communication (no polling, i.e., the kind you'd want for reads from an NFC reader), that's not a good option.

Also, I'm willing to bet browsers will block that at some point too...


But not websockets though.


I've been working on only this type of app for the last 3 or 4 years. The customers are medical organizations.


HTTP/2 requires https in most (all?) browsers these days. If you’re measuring load times locally this can be a bit of a problem.


Major browsers require TLS for HTTP/2. HTTP/2 requires no such thing, and the standard addresses how HTTP/2 should work on a unencrypted connection. (The connection starts as HTTP/1.1, and is upgraded to HTTP/2 with the Upgrade header.)


Good timing. I was working on a short tutorial on how to set up HTTPS for localhost, including how to get a green lock even in Chrome 58+: https://gist.github.com/cecilemuller/9492b848eb8fe46d462abeb...


Easiest way is to get a certificate for a subdomain of a domain you own, e.g. dev.example.com, and then point dev.example.com to 127.0.0.1 in your hosts file.


From the article:

> "You might be tempted to work around these limitations by setting up a domain name in the global DNS that happens to resolve to 127.0.0.1 (for instance, localhost.example.com), getting a certificate for that domain name, shipping that certificate and corresponding private key with your native app, and telling your web app to communicate with https://localhost.example.com:8000/ instead of http://127.0.0.1:8000/. Don’t do this. It will put your users at risk, and your certificate may get revoked."

EDIT: oops, it's not exactly what you were talking about, since you suggested only pointing it to 127.0.0.1 in your own /etc/hosts file, so I don't think the article answers your idea directly


This sounds like a bad idea. You don't want private keys to a production subdomain being handed around teams. For instance, let's say you have dev.mybank.com. Somebody could trivially poison a DNS cache for a local system to redirect to their server, have a valid SSL key on the company domain, and implement a very real-looking phishing website for the company.

Another problem - controlling a subdomain could be used to steal login cookies from the main website. This is why Github moved Github Pages to a separate domain: https://blog.github.com/2013-04-09-yummy-cookies-across-doma...


A domain you own is not the same as a production domain.

Our corp has corptech.com and a few similar ones for this purpose. A generic .com costs next to nothing, so there's no point in running anything non-production on your primary domain.


Or use ngrok.io. :)


Ngrok will route all your traffic over the internet.


They discuss this in the article with some good criticisms of it.

You could maybe try something with a fully different origin, like mysite-dev.com...


Yup, works fine for a small trusted team or when you wear all the hats; it's also useful for troubleshooting occasionally.


No, never do this. I will find your keys, and I will have your certificate revoked.

*Also misread; I was considering the DNS case. Still, don't do this for the hosts file example either.


from TFA:

>It’s possible to set up your own domain name that happens to resolve to 127.0.0.1, and get a certificate for it using the DNS challenge. However, this is generally a bad idea and there are better options.


I read that as being about the actual public DNS, not your own local hosts file?


A malicious network could forge DNS responses and direct the user to a fake server. HTTPS should protect the user, because the fake server can't own a proper certificate for that domain. But if the private key is leaked, this attack could work.


There are no DNS responses if the domain is in the hosts file.


I've been using Terraform (https://www.terraform.io/docs/providers/tls/index.html) lately for local CAs, like in my homelab. It's nice when you want to keep everything in configuration management. Example of a self-signed CA: https://gitlab.com/failmap/ca/blob/master/ca.tf


There's a government CA in Kazakhstan issuing certificates for people and for some government websites. They provide software for people so that their website can talk to USB tokens. The website connects to that software via secure WebSockets at 127.0.0.1, and they bundle the private key for 127.0.0.1, issued by that CA, inside the application. Is that bad? I guess there's no point in reporting it to them, because they are both the CA and the developers. It's not a browser CA; it's some kind of "private" CA (users must import its certificate as a trusted root to work with their website and software).


I worked for a place that did something similar: they ran a server on their local machines listening at https://localhost.company.com:someport (resolving to 127.0.0.1) so their JavaScript frontend hosted at example.com could communicate with the local machine. It was set up so the server would only respond to requests originating from company.com. They distributed the private key for the certificate for localhost.company.com, which was trusted by all browsers.

What kind of risk is there in having the private key to localhost.company.com?


Well, CAs forbid that kind of usage, so if they found out, they'd revoke the certificate; that would be the major concern for me.

Other than that, the obvious attack is to extract the private key from your application, launch a fake server, and forge DNS responses for some poor guy (for example, if he's using some untrusted WiFi), so his requests get redirected to the fake server instead of the localhost application.


I visited recently and was wondering what that was.

https://imgur.com/a/F2iAMm7

Who uses these tokens, and what for?


Ordinary people; for example, I own one. It's a USB crypto device which stores a private key and certificate. It handles all cryptographic operations internally, so the private key can't be extracted (at least not trivially). Actually, most people use simple files, but that's significantly less secure, because a file can easily be stolen.

As for certificates, they are required for some internet services. There's a portal, http://egov.kz/cms/en, which provides almost all government services for citizens, and to use it you must own a certificate (you sign your request with a digital signature, and by law it's treated as if you signed it by hand).


Excellent, thank you!


Browsers are solving this by making localhost trusted over http (so webcam, notifications, and other privileged features work), but here's a more specific guide to getting localhost HTTPS working on macOS, using Keychain and a single command to export the created cert into PEM for your local webserver:

https://certsimple.com/blog/localhost-ssl-fix


> Traffic sent to 127.0.0.1 is guaranteed not to leave your machine. ...

Isn't that a widely held, but incorrect, assumption?

E.g., people with reasonable knowledge of IPv4 on *nix can still route 127.0.0.1 traffic out through an external interface?

From memory, people used to do that when attempting to bypass various firewall/filter rules on other hosts for a locally attached network.

Maybe things have tightened up/changed in the last few years?


No, 127.0.0.1 should never appear on any network, and no network device should ever route it.

The earliest documentation I was able to find is in RFC 1122 [1] from 1989, but according to RFC 6890 [2], the principle dates back to 1981.

[1] https://tools.ietf.org/html/rfc1122#section-3.2.1.3

[2] https://tools.ietf.org/html/rfc6890 (table 4)


Ahhh, yeah. But that's how most OSes set things up by default, in order to meet the required specs (bugs and implementation hiccups aside).

Once the OS is up and running, manipulation of the routing tables at least _used_ to make this possible on Linux and Solaris. Not sure about FreeBSD; that's just memory fuzziness on my part. :)


> Traffic sent to 127.0.0.1 is guaranteed not to leave your machine

This is definitely false, without even touching routing tables. Any unprivileged user can start an SSH tunnel listening on any localhost port above 1024, sending traffic out to wherever.


The implicit threat model here is "no one outside your machine can do something to you to make 127.0.0.1 traffic route elsewhere." It's true that software running on your machine can make copies of things and send them elsewhere, but that's not the point of the sentence you quoted.


"Sent to" is not the same as "sent from".


Yeah, but that would rather involve being on the other side of the airtight hatchway (having root).


Good point. :)


At that point... you break it, you buy it.


I posted an RFC about a potential service/solution for this 3 days ago on the LE community boards:

https://community.letsencrypt.org/t/rfc-a-way-to-use-valid-h...

The idea is to basically offer a free subdomain service (ssl.fun) in conjunction with solving the DNS-01 challenge.

This would automate the existing practice of using, e.g., a public local.domain.com A record pointing to 127.0.0.1.

The difference between this approach and previous attempts is that the private key would not be compromised, as the client generates it directly.

Happy to receive feedback on this idea :-) As this would quickly run into LE's 20-certs-per-domain limit, it needs some blessing before I can offer this service publicly.


The openssl command in the article is great, but I might add "-days 3650" or something, so I don't need to regenerate the certificate every month.

Also, I wouldn't use the host name "localhost", but something like "mydomain.test", and update /etc/hosts accordingly.


I've found many of these and reported them to the vendors. Sometimes they are happy (swag!!!), sometimes they are not. This is a great article that explains the issue and the workarounds.

Your friend is: strings blah | grep "PRIVATE KEY". Run it over your fav bins today!


Note that you can also report it to the issuing CA if the vendor doesn't take action, and the CA will revoke the certificate.


If you're desperate, just serve a local SDK from http://localhost/sdk.html which talks to your local app via a postMessage proxy. You may even open this proxy with window.open from under an https:// app.


Folks may also be interested to read this useful summary of issues around locally installed roots: https://github.com/njh/dymo-root-ca-security-risk/blob/maste...


I run my local Apache server with a wildcard certificate for *.captnemo.in. Works perfectly. I forward `$port.in.captnemo.in` -> localhost:$port via Apache for some common ports and can access all my local servers easily over HTTPS.

I can expose a specific service by changing the DNS to my local WLAN IP address (I have a script for this that updates the DNS entry in Cloudflare).


https://github.com/ctcherry/tlself

I created this proxy to help with this; it dynamically creates certs from a self-signed, locally trusted CA. It only targets OS X for now. It's not perfect, but it has been working great for me.


We're using LetsEncrypt with local domains.

We have a domain for internal use only, where we can modify TXT records. Through this, and a little help from acme.sh and dnsmasq, every workstation can have unlimited valid certificates for local projects.


Are you using a fake domain or a real one? If fake, I'd be interested in how that works.


Real domain, just no A record.

For our projects, we create domains like {project}.{workstation}.company.net


Any chance there's a write-up or docs on this somewhere?


Is it time to start using a better acronym than either XHR or AJAX? Is there a modern, accurate term for an HTTP request made by a browser that is not a request for a page reload?


I believe the term you're looking for is a fetch (from the fetch API)? Though I don't know if that is also commonly used to refer to a standard page reload.


> This subtly changes how browsers handle cookie storage.

Good article, although it would have been nice to have an explanation of this final point.


I wrote about how you can do this with a multi-container Docker setup: https://meagher.co/blog/2018/05/21/certificates-for-localhos...


The command shown doesn't work on macOS. All I get is a usage text for openssl(1).



