Chrome team planning to block all access from browser to localhost wss (code.google.com)
74 points by jorangreef on March 16, 2015 | hide | past | favorite | 83 comments

I filed this bug (along with http://crbug.com/458362), and there's a bit of context that may be missed. First, this is to protect weak devices and services on private networks or localhost, that really can't be safely exposed to attacks pivoting from the web. It will block only sub-resource loads and navigation from the web to private networks. Navigating directly (via bookmarks or omnibox) or in the other direction (from private to web) is still fine. Essentially, this is applying similar rules to what's already in place for file URLs.

We also appreciate the concerns for legacy apps, local servers, etc. That's why we plan on supporting blocking UI overrides, opt-in by the page on localhost/private (maybe via CORS headers), and permanent exceptions via the command-line and enterprise policy. So, there might need to be some small tweaks, but all the use cases I've seen described in this thread should still work in a portable manner after these changes are made.

Also, to my knowledge no one is actively working on these changes yet. Hence why the details in the bugs are still a bit nebulous. Although that is likely to change pretty soon.

> That's why we plan on supporting blocking UI overrides

By that do you mean, an in-browser prompt to allow the connection?

Pretty much. It would probably be an interstitial on navigation (similar to how certificate errors are handled) and a page action for subresources (similar to the shield page action for mixed content).

It's great to block access to http routers on localhost, which I think was the original point of the bug.

But it's essential that access to secure websockets on localhost not be blocked in the process (which appears to be the case as it stands).

That's the reason for posting here in the first place.

As I already stated, we plan on having an explicit opt-in for these cases. However, what you're describing is an implicit opt-in (external DNS pointing to localhost, etc.), which can be trivially abused by third-parties in exactly the kind of attacks we're trying to prevent.

WSS already has explicit opt-in in the Origin header.

That's not the way the origin header works. The origin header in websockets is provided from the client to the server, and the server must take overt measures to protect itself or scope origins that are allowed. Given that we're worried about unsafe servers, the origin header won't protect against the attacks we're seeing. What we need is a mechanism for the server to authorize the client explicitly, so that we can disallow the resource implicitly when authorization is not present.
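A minimal sketch of what server-side Origin checking looks like (the function and names here are illustrative, not any real server's code) may clarify why it's an overt, server-side measure:

```python
# The WebSocket Origin header only protects a server if the server
# actively checks it against an allowlist during the handshake.

def is_origin_allowed(origin, allowed_origins):
    """Return True only if the handshake's Origin header is on the allowlist."""
    return origin in allowed_origins

# A careful server scopes the origins it trusts:
allowed = {"https://www.dropbox.com"}
assert is_origin_allowed("https://www.dropbox.com", allowed)
assert not is_origin_allowed("https://evil.example", allowed)

# A weak or permissive server that never performs a check like this
# accepts every handshake -- exactly the class of server the proposed
# Chrome change is trying to protect.
```

The point in the comment above is that the browser cannot assume this check exists; only the server can opt in to enforcing it.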

Have you seen any attacks on WSS servers on localhost with both explicit opt-in (via the Origin header) and implicit opt-in (via a public DNS record and a TLS certificate for that public DNS record) and additional mutual authentication (as in the case of Dropbox)? I doubt there are even a handful of people who have set up something like that (I used to think that WSS on localhost was not even possible because of the self-signed certificates not validating).

And that is a specific scenario for which you can easily create a permanent exception, without requiring the user to start Chrome with a command-line flag.

You can block WS on localhost by all means.

But blocking WSS on localhost after someone has gone through all the trouble of setting up a public DNS record, and CA-signed SSL certificate which validates against that FQDN (this is not just about public DNS localhost record services) makes no sense to me, and seems to be outside the intent of the original bug ticket.

I'm sorry, but I don't understand the argument you're making here. I've already explained that we plan to provide a portable mechanism for private and localhost opt in. It's just not going to be the mechanism you propose, since that mechanism doesn't address the vulnerabilities that we're concerned about.

To reiterate the issues with your proposal: Public DNS pointing to private/localhost is in fact a component of many of the attacks we're concerned about. HTTPS on localhost doesn't do anything because the transport is already secure (which should be simplified in a future change in Chrome's content handling). And given that the major concern is weak/permissive servers, we can't rely on the WS(S) server implicitly protecting itself by enforcing the origin header.

If there's no standard currently in place for this maybe Google's time would be better spent drafting one and going through the RFC process. A single app fix doesn't seem to do a lot of good since the solution for dealing with it is proprietary and not likely to be picked up by other browsers.

Uhm? Unsafe servers? How so? To me it looks like the ticket is about unsafe pages trying to mess with innocent local services, like a router's HTTP control panel. The WSS Origin header is exactly what's needed here.

With the server already running and meant to be connected to from a browser (as the combination of WSS and a proper localhost cert proves), listening and explicitly allowing the client to connect, what threat is supposed to be there?

They're not blocking all access, just sub-resources from Internet pages (you can still develop/use local servers). And they're adding a command line switch and enterprise policy to allow localhost.

Also, this is the preferred way to communicate with localhost:

https://developer.chrome.com/extensions/messaging
https://developer.chrome.com/extensions/nativeMessaging

Sub-resources include websocket connections. The links you point to are not standards and will not work cross-browser. You will also need to package your app using Chrome's format, AFAIK. Basically, it's not really an option at this point in time.

I agree with you and someone has mentioned this on the issue, as well as a proposed exception to this policy: https://code.google.com/p/chromium/issues/detail?id=378566#c...

Yep, this is a good thing. Apart from dev work, I like the pattern of local Web UIs for installed apps, I wish they were more commonplace - without risking the user's security.

Yes, but it should not be blocking access to WSS with DNS resolving to localhost and a valid SSL certificate, which is what they are planning to do, breaking web apps such as Dropbox which rely on this.

How does this work? Do I have an SSL certificate on my computer for www.dropboxlocalhost.com? Where does the SSL termination happen?

In the Dropbox app. Feel free to extract the certificate and MITM connections to your local machine.

Then I'm not entirely sure I understand the point of having an SSL certificate in place to begin with?

It's there to enable a connection from an https web app. Otherwise it would be blocked as mixed content (e.g. if you just used ws). Because everyone has access to the localhost key in the Dropbox app, Dropbox uses additional mutual authentication after establishing the wss (this is actually also because other websites may try to access the local daemon). The wss is only there to indicate to the browser that Dropbox really wants to establish a connection between their web app and the local daemon. It's a lot of hoops to jump through to make it work, but it's the best way of doing it.
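Since the localhost TLS key ships inside the app and is extractable by anyone, the app-level mutual authentication described above has to carry the real security. A hedged sketch of what such a post-handshake challenge-response might look like (this is my own illustration using a shared secret; Dropbox's actual protocol is not public in this thread):

```python
# Illustrative post-handshake mutual authentication over an
# already-established wss connection. The shared secret would be
# provisioned out of band (e.g. when the user links the daemon to
# their account) -- that provisioning step is assumed here.
import hmac
import hashlib
import secrets

def make_challenge():
    # Fresh random nonce so responses can't be replayed.
    return secrets.token_bytes(32)

def respond(challenge, shared_secret):
    # Prove knowledge of the secret without ever sending it.
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(challenge, response, shared_secret):
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = secrets.token_bytes(32)       # provisioned out of band
challenge = make_challenge()           # daemon -> web app
response = respond(challenge, secret)  # web app -> daemon
assert verify(challenge, response, secret)
# A party without the secret (e.g. a random website) fails:
assert not verify(challenge, response, secrets.token_bytes(32))
```

Run in both directions, this gives the mutual authentication the comment describes: each end rejects peers that can't answer its challenge.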

We'll be changing the mixed-content detection soon to treat localhost as a secure transport (because it is from the network perspective), which will address the mixed content issue.

> It's there to enable a connection from an https web app

Oh! Thanks. Forgot about that.

You cannot talk to non SSL connections from an SSL page.

Can't using CORS Allow-Origin headers solve this issue, if there's a domain for localhost and a valid certificate for it?

Not likely, as I think it will be blocked in the same way that mixed content is blocked (whether CORS is allowed or not). I think the plan is just to cut off access to localhost entirely whether intentioned or not, because WSS already has an Origin header which functions similarly to CORS when the client is a browser.

What I meant was that I think allowing CORS or a similar mechanism is probably enough, and breaking apps like Dropbox is not a great idea, even if there is a non-standard mechanism to override it.

Yes I agree, and WSS already has that mechanism in place. A similar point was also raised in the issues thread but has not been responded to.

As I explained elsewhere, the origin header in WS(S) allows a server to overtly protect itself from a client. Unlike CORS, it doesn't allow the browser to make security decisions regarding requests, and so would not address any of the attacks we're trying to prevent.

Potentially, yes, and it's one of the ways we're looking at addressing per-page opt-in.

Sorry, but I didn't find "sub-resources" in my Webster's. Could we please have a definition or a link to a definition?

In general in documentation for computing, or other technical documentation, or any serious writing on any subject, we should avoid undefined terminology.

Sorry to be critical, but computer security is a serious subject; since all it takes is one little gap to have a terrible computer virus infection, we need to be quite clear right down to the level of each little issue, quite clear and explicit.

So far this year, I've spent over half my time fighting viruses. Bummer.

A subresource is basically additional content that a page includes via a URL (e.g. script, iframe, images). The problem underlying this is that any page on the web can currently probe and then attack devices and servers on your internal network or local host. So, the proposal is to block that by preventing remote websites from navigating to or including sub-resources from anything on your internal network or local host (unless some explicit opt-in or exception is configured).

We'd be blocking sub-resources and navigation from web to private. Of course, direct navigation via a bookmark or the omnibox would still work. And navigation or resource inclusion from private to the web would still work.
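The direction-sensitive rule described in these comments can be sketched as a small classifier (function names are mine, not Chrome's, and the real implementation would also consider hostnames that resolve to private addresses):

```python
# Minimal sketch of the proposed blocking rule: a subresource load or
# navigation is blocked only when a public (web) page targets a
# private-network or loopback address. Private -> web stays allowed.
import ipaddress

def should_block(initiator_is_public, target_ip):
    ip = ipaddress.ip_address(target_ip)
    target_is_private = ip.is_private or ip.is_loopback
    return initiator_is_public and target_is_private

assert should_block(True, "127.0.0.1")        # web page -> localhost: blocked
assert should_block(True, "192.168.1.1")      # web page -> home router: blocked
assert not should_block(True, "93.184.216.34")   # web -> web: allowed
assert not should_block(False, "93.184.216.34")  # private page -> web: allowed
```

Direct navigation (bookmark, omnibox) has no web-page initiator, so it falls outside this check entirely, matching the comment above.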

As I explained in another comment, we're also working on exception mechanisms (including per-page opt-in, possibly via CORS) to cover the valid use cases.

Some of us still use tiddlywiki. Make sure that still works.

This is a good thing. I recently found an exploit in a widely-distributed bit of software that listened on localhost but had minimal protection against malicious inputs. Anyone who could trigger your browser to make a crafted request to localhost (which is very easy) could make this bit of software download and execute any remote executable.

Yes it is, but if you read closer, you will see that they are also planning to block access to secure websocket servers with public DNS resolving to localhost and valid SSL certificates set up precisely for the purpose of being accessed by the public web. This is completely unnecessary.

But then malware can exploit every user of Dropbox, for example; it doesn't even need its own certificate. And even users who don't have Dropbox are vulnerable through Dropbox's DNS entry.

There are already plenty of public DNS records which resolve to 127.0.0.1. They're offered as services. For example, 37Signals have a domain resolving to localhost.

Malware on a user's computer already has access to everything the user has.

Malware on a website would be blocked by the mutual authentication and origin checking that Dropbox do after establishing the WSS connection.

Malware in the form of a local daemon on a public computer where someone logs into Dropbox will not be able to authenticate with the web app.

I absolutely agree that an exception for legitimate use cases is necessary. (And that a proprietary Chrome API, which in addition is only available to approved extensions, is not an option.) However, I don't see what security benefits TLS or DNS records would bring in this case and why they should be requirements for the exception.

If your goal is to protect local daemons from malicious web sites, then I don't see why the usual CORS restrictions aren't enough. Maybe they aren't (You could tighten them a bit, e.g. not allowing "Allow: *" responses, treating all requests as CORS non-simple or even only allowing WebSocket connections if you must), but forcing TLS or a DNS entry doesn't seem to increase the security: If I were an attacker, there is nothing that keeps me from trying wss://localhost or wss://www.dropboxlocalhost.com in addition to plain localhost. So what security benefit would that bring?

If you want to protect a legitimate web page from malicious daemons, those measures won't help you either: I can simply download your legitimate daemon, extract the TLS private key and generate a valid certificate for www.dropboxlocalhost.com myself - or I just install a custom root CA certificate and use my own private key. But I don't think defending against the latter scenario is very useful anyway: Either the "malicious" daemon has been purposefully installed by the user, in which case blocking it would be against user interests, or it is part of a malware. In that case, the malware will have dozens of other ways to compromise your web page and you will generally have bigger things to worry about.

So, why can't we simply use CORS to protect connections to localhost like we do for everything else?

The reason that web apps such as Dropbox currently go to all the effort of a public DNS record (www.dropboxlocalhost.com) resolving to 127.0.0.1, plus an SSL certificate for that domain, plus mutual authentication after the websocket is established, is that it's currently the only pragmatic way to set up a non-"mixed content" connection to a localhost websocket server. It doesn't increase security, as you point out. It's just the only way to do it.

Any other connection would be blocked by browsers (e.g. a connection from https web app to a non-tls ws localhost server). The bug ticket referred to would seek to block this last remaining way of connecting, forcing localhost daemons to connect to the web app via server proxy.

Okay, that makes sense. But then the real problem is that solving what should be a trivial technical problem today requires an extremely wasteful workaround. If I understood jschuh correctly in [1] and [2], then they plan to make this workaround unnecessary by treating connections to localhost as secure even when over plain http/ws.

As I understood it, connecting to a daemon, both over ws and wss, will still be possible, as long as the (to be defined) opt-in headers are sent by the daemon. In fact, the associated change to mixed content would even allow using plain ws://localhost on secure sites, which, as you said, isn't possible today.

Apparently vulnerable services exist and this is a security hole that needs to be closed. So legitimate users of local connections need to update their implementations anyway. If that's the case, I'd prefer a solution which just requires me to add a new response header and not need me to manage a new DNS entry and TLS cert for no real benefit.

I agree though that this change has possible sweeping implications and should be done as part of the usual HTML5 process, not unilaterally by Chrome. Also, the opt-in protocol must be specified before this change comes into effect.

[1] https://news.ycombinator.com/item?id=9211034 [2] https://news.ycombinator.com/item?id=9210985

This includes blocking access by web apps such as Dropbox to their local daemon, which currently rely on a public DNS record resolving to localhost and a secure localhost websocket server (using a certificate for the public DNS record's FQDN) to make the connection (with additional mutual authentication after the websocket handshake).

After this change, there will be no way for a web app to communicate directly with a locally installed daemon.

I mentioned this above, but we plan to let the localhost/private page support an explicit opt-in, maybe via CORS headers. Those details may not be captured in the bugs yet because currently no one is actively working on this.

Websockets already have the Origin header as mentioned in the bug by Tyoshino: https://code.google.com/p/chromium/issues/detail?id=378566#c...


From my other comments, there is the additional implicit opt-in shown by WSS itself on localhost requiring (1) a public DNS record and (2) a valid SSL certificate for that FQDN.

I doubt that there are WSS servers on localhost out there, set up with a public DNS record and a validating SSL certificate, which are not also expecting public traffic.

As I explained in my other comment, the origin header in websockets doesn't help in addressing the attacks we see, because it is a server-side measure.

As a normal user, i.e. someone who doesn't really keep up to speed with whatever is the latest news in cross-platform-XRGS-injection-attack, is this good or bad?

On the one hand, blocking access to plain http servers on localhost is a good idea as it prevents people from trying to probe router servers etc. which don't expect to be accessed from the web.

But on the other hand, blocking secure websocket servers on localhost using a public DNS record and valid SSL certificate is going too far.

Secure websocket servers on localhost are often implemented precisely for the purpose of being accessed by the public web, and are secured for that purpose. It's very rare to find WSS servers on localhost using a public DNS record resolving to 127.0.0.1 and a valid SSL certificate (most people seem to think a WSS server on localhost is not possible).

So this is bad for users. For example, it will break Dropbox's feature which allows you to open files from within the web app in a native application such as Excel (if you have the file synced on your computer).

If Chrome continues with this, then apps such as Dropbox will have to implement these kinds of features using server proxies. These will be slower and less secure, since the proxy needs to be able to guarantee that a TCP connection from a web browser and a TCP connection from a local daemon come from the same machine (which is not easy to do when NAT is involved).

> For example, it will break Dropbox's feature which allows you to open files from within the web app in a native application such as Excel (if you have the file synced on your computer).

Couldn't that just do that with a custom URL scheme like dropbox:// that's handled by the local app? Or is there more to the feature?

The issue with custom URL schemes is that they're a cure worse than the disease here. The "internal webserver for local apps" was supposed to be a workaround for this very problem.

Yes, the custom protocol scheme is also only really useful for doing something "once". That is, you can only issue one custom protocol request or so each time you load a web page, the browser blocks the rest to prevent spamming the user.

I'm sorry, I know this is an earlier comment but I wanted to correct this framing of the threat and proposed mitigations. My response to you here explains why these measures don't solve the attacks that we're trying to address: https://news.ycombinator.com/item?id=9211034

To follow up with an example: Easel, the hipster computer-aided manufacturing software, is hosted on the web, and it assumes that your CNC router is on the local network to receive its instructions in an HTTP POST request.

> After this change, there will be no way for a web app to communicate directly with a locally installed daemon.

Well, you could just not use Chrome.

That's the same argument as "You could just not use IE4," and the same situation: "let's just break the web, we know best what's best for you." Ironic that it's Chrome this time, the browser that started out as the champion of NOT breaking the web.

Yeah, that's my point. We're beyond the place in time where a single browser gets to dictate behavior like this.

I'm noticing a large uptick in "good engineering but terrible design and implementation choices in the name of security", and I wonder what's driving that.

> Ironic that it's Chrome this time, the browser that started out as the champion of NOT breaking the web.

I don't think that was ever the message. Chrome has always been "the fastest browser" rather than anything else, and it has led to (much-needed, IMO) substantial performance improvements from the other camps.

I think you're retroactively spinning it, to fit what Chrome has become today. Let me just find the comics v1 rolled out with.

Ahem: "There are a lot of limitations to the kinds of applications that you can build today with web browsers. And the subset of things you can do is different for each browser. If one browser has a cool feature, that doesn't help -- it has to exist across all browsers in order for developers to use it." (Emphasis original)

But yeah, Google says they're not evil, so that makes anything they do non-evil...riiiiight. I have a feeling we're not in Kansas anymore.

I'm not trying to be retroactive - I genuinely don't remember any comics - just the videos with the lasers and whatnot to show how fast it was.

Well, it's still out there. The main selling points were:

- it's safe

- it's fast

- it's stable

- doesn't break the web

All of that in contrast with IE of that time, which failed all four of the above. Currently it's trying to stay somewhat safe, and ditched the other three in the process.


Yes, but if you're developing a cross-browser web app you can't expect your users to just not use Chrome.

If Google keeps breaking web apps, users will move from Chrome anyway.

This argument has been shown to be wrong on many, many occasions.

Many users do not have a choice which browser they use, do not know they even have a choice or do not know how to exercise that choice i.e. download and install a competing browser. Finally you're then left with the massive problem of sites only working on certain browsers.

The situation where the user doesn't know about and/or is incapable of installing another browser is pretty common with default browsers like IE and Safari, but not nearly as common with Chrome.

Google have been bundling Chrome with widely-used software like Adobe Flash for years now; there are probably many people who've installed it and set it as their default browser without intending to.

Might. It only took ... what - five years? Ten years? for people to start ditching IE after it turned Evil (IE5+). The years between "Browser X is the best thing since sliced bread" and "Browser X Considered Harmful" became known as the Browser Dark Ages. Not looking forward to a repetition thereof.

From 0 to 1 is a lot different than from 1 to 2, or from 1 to n.

Wow how things have changed

I can see to not allow connections to be made to any host in the local network. That's fine. But blocking access to the loopback interface is IMHO going a bit too far as this was until now a very portable and secure way for websites to talk to locally installed helper applications.

Dropbox does this, Github for Mac does this (and incidentally, my own home-grown solution for reading barcode scanners does this too).

I really don't want to have to end up in a world where we have to write browser-specific solutions (the method recommended in the issue is chrome-specific) for these kind of things.

Unfortunately, 'localhost' is not multiuser safe. So a better (cross browser!) alternative would be OK.

Multiuser on localhost can be safe like any other website; you just need authentication.

Not just to localhost, but to most of the IANA reserved space.


How the hell do you access your router's admin page then?

Seems like it only limits OTHER sites/pages from being able to access localhost or 192.168.x.x. I think you'll still be able to access those pages directly yourself.

thanks, misunderstood

By going there directly. It is only about sub-resources in pages (so if you go to badsite.com, it can't trigger a request to your router, which, given the security track record of your average home router, is a good thing).

That will still be possible. The issue is about blocking access to secure websocket on localhost from public web pages. Routers don't generally use websocket, and if they do, probably not secure websocket.

FWIW, the title is beyond sensationalist, and outright wrong. The title on the link is "Block sub-resource loads from the web to private networks and localhost".

You want _your_ browser to connect to _your_ local resources?

BITCH PLEASE! What's next? Maybe you want non-gimped SD card access for your Android apps? HAHA.

We have ze Cloud specifically so you are forced to use us as the middle man, and we can see all of your data. How else are we going to make any money off of you?

PS: Oh, and it's about security in journa^^ of your data!

The only people this would inhibit do not know what a daemon is in the first place. I have no interest in da clowd.

Does this mean I cannot develop on localtest.me (which resolves to 127.0.0.1)?

From the description, it only means that external pages won't be able to access resources on localhost, but you could still develop on localhost as usual.

Well we can still enable it if we need it

"In conjunction with the change, we will need a command line switch and enterprise policy to revert to the older behavior for testing and legacy applications."

If you rely on connecting your web app with a locally installed daemon (properly secured using wss and additional mutual authentication) then that's not really an option either, as you can't expect your users to know how to use command line switches. Cutting off access to wss is a big deal.

Command line switches are a bitch on OS X.

Is an about:config switch not being considered? If so I'd imagine the thinking might be "too easy to social engineer round this".
