We also appreciate the concerns about legacy apps, local servers, etc. That's why we plan to support blocking UI overrides, opt-in by the page on localhost/private origins (maybe via CORS headers), and permanent exceptions via the command line and enterprise policy. So there might need to be some small tweaks, but all the use cases I've seen described in this thread should still work in a portable manner after these changes are made.
Also, to my knowledge no one is actively working on these changes yet, which is why the details in the bugs are still a bit nebulous. That is likely to change pretty soon, though.
By that do you mean, an in-browser prompt to allow the connection?
But it's essential that access to secure WebSockets on localhost not be blocked in the process (which appears to be the case as it stands).
That's the reason for posting here in the first place.
And that is a specific scenario for which you can easily create a permanent exception, without requiring the user to start Chrome with a command-line flag.
You can block WS on localhost by all means.
But blocking WSS on localhost, after someone has gone through all the trouble of setting up a public DNS record and a CA-signed SSL certificate that validates against that FQDN (this is not just about public-DNS localhost-record services), makes no sense to me and seems to be outside the intent of the original bug ticket.
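For illustration, the setup described above can be sanity-checked programmatically. A minimal sketch (Python; the helper name and any FQDN are hypothetical) that verifies a public DNS name actually resolves back to the loopback interface, which is the precondition for this WSS-on-localhost pattern:

```python
import ipaddress
import socket


def resolves_to_loopback(hostname: str) -> bool:
    """Return True if every address the hostname resolves to is a
    loopback address (e.g. 127.0.0.1 or ::1).

    This mirrors the setup described above: a public DNS record
    (say, a hypothetical wss.example.com) pointing at 127.0.0.1,
    paired with a CA-signed certificate for that FQDN.
    """
    infos = socket.getaddrinfo(hostname, None)
    return all(
        ipaddress.ip_address(info[4][0]).is_loopback
        for info in infos
    )
```

On a typical machine `resolves_to_loopback("localhost")` is true; running the same check against a public record like the hypothetical wss.example.com would confirm it points at 127.0.0.1 before a browser or test script attempts the WSS handshake.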
To reiterate the issues with your proposal: Public DNS pointing to private/localhost is in fact a component of many of the attacks we're concerned about. HTTPS on localhost doesn't do anything because the transport is already secure (which should be simplified in a future change in Chrome's content handling). And given that the major concern is weak/permissive servers, we can't rely on the WS(S) server implicitly protecting itself by enforcing the origin header.
The server is already running, is meant to be connected to from a browser (as the combination of WSS and a proper localhost certificate proves), is listening, and explicitly allows the client to connect. What threat is supposed to remain?
Also, this is the preferred way to communicate with localhost:
Oh! Thanks. Forgot about that.
In general, in documentation for computing, or other technical documentation, or any serious writing on any subject, we should avoid undefined terminology.

Sorry to be critical, but computer security is a serious subject; since all it takes is one little gap to cause a terrible computer virus infection, we need to be quite clear, right down to the level of each little issue.
So far this year, I've spent over half
my time fighting viruses. Bummer.
As I explained in another comment, we're also working on exception mechanisms (including per-page opt-in, possibly via CORS) to cover the valid use cases.
Malware on a user's computer already has access to everything the user has.
Malware on a website would be blocked by the mutual authentication and origin checking that Dropbox does after establishing the WSS connection.
Malware in the form of a local daemon on a public computer where someone logs into Dropbox will not be able to authenticate with the web app.
If your goal is to protect local daemons from malicious web sites, then I don't see why the usual CORS restrictions aren't enough. Maybe they aren't (you could tighten them a bit, e.g. not allowing "Access-Control-Allow-Origin: *" responses, treating all requests as CORS non-simple, or even only allowing WebSocket connections if you must), but forcing TLS or a DNS entry doesn't seem to increase security: if I were an attacker, there is nothing that keeps me from trying wss://localhost or wss://www.dropboxlocalhost.com in addition to plain localhost. So what security benefit would that bring?
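A tightened-CORS approach like the one suggested here can be sketched. Below is a minimal Python example of a localhost daemon that enforces an explicit Origin allow-list itself and never answers with a wildcard; the allowed origin is a hypothetical web-app origin, not anything from the thread:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical web-app origin the daemon trusts; an assumption for
# this sketch, not a real deployment value.
ALLOWED_ORIGINS = {"https://www.example.com"}


class OriginCheckingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        origin = self.headers.get("Origin", "")
        if origin not in ALLOWED_ORIGINS:
            # Reject cross-origin callers outright; never answer "*".
            self.send_response(403)
            self.end_headers()
            return
        self.send_response(200)
        # Echo back only the specific allowed origin.
        self.send_header("Access-Control-Allow-Origin", origin)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, format, *args):
        pass  # keep the example quiet


def make_server(port: int = 0) -> HTTPServer:
    """Bind the daemon to loopback only (port 0 = ephemeral port)."""
    return HTTPServer(("127.0.0.1", port), OriginCheckingHandler)
```

The point of the sketch: the enforcement lives in the daemon itself, so it does not depend on the browser's mixed-content policy, TLS, or a DNS entry.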
If you want to protect a legitimate web page from malicious daemons, those measures won't help you either: I can simply download your legitimate daemon, extract the TLS private key and generate a valid certificate for www.dropboxlocalhost.com myself - or I just install a custom root CA certificate and use my own private key.
But I don't think defending against the latter scenario is very useful anyway: either the "malicious" daemon has been purposefully installed by the user, in which case blocking it would be against the user's interests, or it is part of some malware. In that case, the malware will have dozens of other ways to compromise your web page, and you will generally have bigger things to worry about.
So, why can't we simply use CORS to protect connections to localhost like we do for everything else?
Any other connection would be blocked by browsers (e.g. a connection from an HTTPS web app to a non-TLS ws:// localhost server). The referenced bug ticket seeks to block this last remaining way of connecting, forcing localhost daemons to communicate with the web app via a server proxy.
As I understood it, connecting to a daemon over both ws and wss will still be possible, as long as the (yet to be defined) opt-in headers are sent by the daemon. In fact, the associated change to mixed-content handling would even allow using plain ws://localhost on secure sites, which, as you said, isn't possible today.
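Since the opt-in header is still to be defined, any concrete name here is a guess. As a sketch, using the placeholder name `Access-Control-Allow-Private-Network` (an assumption, not a final spelling), this is how a browser-side check might gate the connection on the daemon's response headers:

```python
# Placeholder sketch of the (not yet specified) opt-in handshake.
# "Access-Control-Allow-Private-Network" is an assumed header name.

def daemon_opts_in(response_headers: dict) -> bool:
    """Return True if the daemon's response explicitly opts in to
    being contacted from a public web page."""
    value = response_headers.get("Access-Control-Allow-Private-Network", "")
    return value.strip().lower() == "true"
```

A browser implementing the proposal would perform a preflight, run a check like this, and only then allow the ws:// or wss:// connection to localhost to proceed.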
Apparently vulnerable services exist, and this is a security hole that needs to be closed, so legitimate users of local connections need to update their implementations anyway. If that's the case, I'd prefer a solution that just requires me to add a new response header, rather than one that makes me manage a new DNS entry and TLS cert for no real benefit.
I agree, though, that this change has possible sweeping implications and should be done as part of the usual HTML5 process, not unilaterally by Chrome. Also, the opt-in protocol must be specified before this change comes into effect.
After this change, there will be no way for a web app to communicate directly with a locally installed daemon.
From my other comments: there is an additional implicit opt-in demonstrated by WSS itself on localhost, which requires (1) a public DNS record and (2) a valid SSL certificate for that FQDN.
I doubt that there are WSS servers on localhost out there set up with a public DNS record and validating SSL certificate that are not also expecting public traffic.
But, on the other hand, blocking secure WebSocket servers on 127.0.0.1 that use a public DNS record and a valid SSL certificate is going too far.
Secure WebSocket servers on localhost are often implemented precisely for the purpose of being accessed by pages on the public web, and are secured for that purpose. It's very rare to find WSS servers on localhost using a public DNS record resolving to 127.0.0.1 and a valid SSL certificate (most people seem to think a WSS server on localhost isn't even possible).
So this is bad for users. For example, it will break Dropbox's feature which allows you to open files from within the web app in a native application such as Excel (if you have the file synced on your computer).
If Chrome continues with this, then apps such as Dropbox will have to implement these kinds of features using server proxies. Those will be slower and less secure, since they will need to guarantee that a TCP connection from a web browser and a TCP connection from a local daemon come from the same machine (which is not easy to do when NAT is involved).
Couldn't they just do that with a custom URL scheme like dropbox:// that's handled by the local app? Or is there more to the feature?
Well, you could just not use Chrome.
I'm noticing a large uptick in "good engineering but terrible design and implementation choices in the name of security", and I wonder what's driving that.
I don't think that was ever the message. Chrome has always been "the fastest browser" rather than anything else, and it has led to (much-needed, IMO) substantial performance improvements from the other camps.
Ahem: "There are a lot of limitations to the kinds of applications that you can build today with web browsers. And the subset of things you can do is different for each browser. If one browser has a cool feature, that doesn't help -- it has to exist across all browsers in order for developers to use it." (Emphasis original)
But yeah, Google says they're not evil, so that makes anything they do non-evil...riiiiight. I have a feeling we're not in Kansas anymore.
- it's safe
- it's fast
- it's stable
- doesn't break the web
All of that in contrast with IE of that time, which failed all four of the above. Currently it's trying to stay somewhat safe, and ditched the other three in the process.
Many users do not have a choice of which browser they use, do not know they even have a choice, or do not know how to exercise that choice, i.e. download and install a competing browser. Finally, you're then left with the massive problem of sites only working on certain browsers.
Dropbox does this, Github for Mac does this (and incidentally, my own home-grown solution for reading barcode scanners does this too).
I really don't want to end up in a world where we have to write browser-specific solutions (the method recommended in the issue is Chrome-specific) for these kinds of things.
BITCH PLEASE! What's next? Maybe you want non-gimped SD card access for your Android apps? HAHA.
We have ze Cloud specifically so you are forced to use us as the middleman, and we can see all of your data. How else are we going to make any money off of you?
PS: Oh, and it's about security in journa^^ of your data!
"In conjunction with the change, we will need a command line switch and enterprise policy to revert to the older behavior for testing and legacy applications."
Is an about:config switch not being considered? If so, I'd imagine the thinking might be that it's "too easy to socially engineer around".