
This might sound unpopular, but I've always thought these things are a bit of shenanigans, honestly. It sounds like it increases security a lot, but it seems to just be a little extra paper on top. It's good for resource control (stopping Chrome from eating all your RAM) or for installing a browser quickly in a container/docker/jail/whatever, but security-wise I don't think it's the right solution.

The thing is, a chroot jail doesn't really protect my browser in the way I want it to (speaking personally, I guess). It's not the right level of granularity.

If an exploit compromises my browser, it essentially already has vast access to my personal information, simply due to the nature of a browser being stateful and trusted by the user. Getting root or whatever beyond that is nice, I guess, but browser compromise alone is game over for most people -- and that's true for the vast majority of computer users. I don't clear my browser history literally every day to 'wipe the slate clean'; I like having search history and trained autocomplete, and it's kind of weird to expect people to suddenly stop. Jailing seems like a lateral move, in a direction that only really satisfies some very technically aware users. Even then, I'd say this approach is still fundamentally dangerous for competent users -- an unanticipated flaw in the application, or a simple mistake of your own, could easily expose you.

A more instructive example is an email client. If I use Thunderbird and put it in a chroot jail, sync part of my mail spool, and then someone sends me an HTML email that exploits a Gecko rendering flaw and owns me 'in the jail' -- then they now have access to my spool! And my credentials. They can just use that access from inside the container to do a password reset, for example, and I'm screwed. And depending on how the application handles incoming data, exploits can fire with no user interaction at all -- Stagefright, for example, was auto-triggered just by receiving an SMS. It's a very dangerous game to play at that point, with applications trying to be so in-our-face today (still waiting for a browser exploit that can be triggered even out-of-focus, through desktop notifications...)

The attack surface for a browser, and for most stateful, trusted apps, basically starts and ends right there. For all intents and purposes, an individual's browser or email client is about as valuable to them as any company's SQL database is to the company. Think: if you put a PostgreSQL instance inside a jail, and someone exploits your SQL database... is your data safe? Or do they just exfiltrate your DB dump and walk away? Does a company wipe its database every week to keep hackers from taking it?

Meaningful mitigation has to come, I think, the way Chrome does it: through extensive application-level sandboxing, made part of the design in a real, coherent way. That requires a lot of work. It's why Firefox has taken years to do it -- and is pissing so many people off along the way by breaking the extension model, precisely so it can meaningfully sandbox.

Aside from attack-surface concerns, though, jails and things like containers still have some real technical limitations that stand in the way of users. Even something like drag-and-drop from the desktop into a container is a bit convoluted (maybe Wayland makes this easier? I don't know how Qubes does it), and I use 1Password, so the browser's interaction with my password database means we're back at square one: browser compromise 'in the sandbox' still means you get owned in all the ways that matter.

Other meaningful mitigations exist beyond 'total redesign', but they're more technical in nature... things like more robust anti-exploit mechanisms in our toolchains and processes, for example. That's also very hard work, but I think it's a lot more fruitful than "containerize/jail it all, and hope for the best".




I have a feeling you misunderstood the parent's idea. The jail there is not to prevent someone from breaking out of the browser into the system; it's to contain simple attacks on your data, exactly because the browser is a stateful system with lots of stored secrets.

If you have a full sandbox-breakout exploit, both cases are broken. But if you have just a stupid JS issue that breaks same-origin, or causes a trivial arbitrary file read, jails will protect you from them just fine. It's pretty much there to stop a post you open from Facebook from being able to get your PayPal session cookie. Not many exploits in the wild are that advanced.
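
To make that concrete, here's a minimal sketch of that idea, assuming firejail and Firefox are available; the identity names and directory layout are made up for illustration:

    # Minimal sketch: one jailed browser instance per identity, each with
    # its own private home, so a trivial file-read exploit in the Facebook
    # instance only ever sees the Facebook instance's cookies and cache.
    import subprocess
    from pathlib import Path

    JAILS = {
        "social":  Path.home() / "jails" / "social",
        "banking": Path.home() / "jails" / "banking",
    }

    def browse(identity, url):
        home = JAILS[identity]
        home.mkdir(parents=True, exist_ok=True)
        # firejail's --private=DIR mounts DIR as the sandboxed home, so
        # instances cannot read each other's on-disk state.
        subprocess.Popen(["firejail", "--private=%s" % home,
                          "firefox", "--no-remote", url])

    browse("social", "https://facebook.com/some-post")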


Couldn't this be achieved in Chrome by creating different user profiles and switching between profiles depending on the site being visited?

I already separate my social media from my shopping and my banking using different Chrome user profiles.
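
For what it's worth, you can script this split too; a rough sketch assuming Chrome on Linux, with made-up paths (--user-data-dir is a real flag that keeps each profile's cookies, history, and passwords in a fully separate directory):

    # Each user-data directory is completely separate on-disk state:
    # cookies, history, saved passwords, extensions.
    import subprocess

    def chrome(profile_dir, url):
        subprocess.Popen(["google-chrome",
                          "--user-data-dir=%s" % profile_dir, url])

    chrome("/home/me/profiles/shopping", "https://amazon.com")
    chrome("/home/me/profiles/banking", "https://mybank.example")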


> If you have a full sandbox-breakout exploit, both cases are broken. But if you have just a stupid JS issue that breaks same-origin, or causes a trivial arbitrary file read, jails will protect you from them just fine.

If you can read an arbitrary file, what is stopping you from reading, say, the browser's password database files inside the container, or any of the potentially sensitive cached files? Those files are there -- the browser writes them, whether it is in a sandboxed directory or not.

Or do you assume the user never stores a password database in any 'sandboxed' browser instance, and copies/pastes or retypes passwords every time or something? That's basically treating every single domain and browser instance as stateless. This is what I mean -- users are never going to behave this way; only people in places like Hacker News will. They aren't going to use 14 different instances of a browser, each one perfectly isolated with no shared search, re-logging into each instance just to get consistent search results and autocomplete. It's just awful UX.

Of course, maybe you don't map those files into the container at all -- too dangerous, since if any part of the browser can just read a file, it's game over. Perhaps you could have multiple processes communicating over RPC, each in its own container, with crafted policies that would e.g. only allow the process for a certain SOP domain to request certain passwords or sensitive information from a process that manages the database. Essentially, you add policy and authorization: there is exactly one process that can read exactly one file, the database file. The process that renders and handles the logic for a particular domain never gets filesystem access to any on-disk file -- it is forbidden. It must instead ask the broker process for the sensitive information for its particular domain (see the sketch below). You could even make each tab transparently its own process, as well as enforcing process-level SOP separation...
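
Here's a toy sketch of that broker pattern -- the names and the policy are hypothetical, and a real design would pair this with OS-level sandboxing so renderers truly can't touch the filesystem:

    # Toy broker pattern: renderers hold no secrets and (in a real design)
    # no filesystem access; they must ask the broker over an RPC channel,
    # and the broker enforces an origin-based policy. A compromised
    # facebook.com renderer therefore cannot request the paypal.com secret.
    import multiprocessing as mp

    PASSWORD_DB = {        # stands in for the one file only the broker reads
        "paypal.com": "hunter2",
        "facebook.com": "correct-horse",
    }

    def broker(conn, renderer_origin):
        # Policy: a renderer may only read the secret for its own origin.
        while True:
            requested = conn.recv()
            if requested is None:
                break
            if requested == renderer_origin:
                conn.send(PASSWORD_DB.get(requested))
            else:
                conn.send(None)   # denied: cross-origin request

    def renderer(conn):
        conn.send("facebook.com")            # own origin: allowed
        print("own secret:", conn.recv())
        conn.send("paypal.com")              # another origin: denied
        print("stolen secret:", conn.recv())
        conn.send(None)

    if __name__ == "__main__":
        broker_end, renderer_end = mp.Pipe()
        procs = [mp.Process(target=broker, args=(broker_end, "facebook.com")),
                 mp.Process(target=renderer, args=(renderer_end,))]
        for p in procs: p.start()
        for p in procs: p.join()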

The thing is... that's basically exactly what Chrome does, by design. As of recently, Chrome can actually separate and sandbox processes based on SOP. But it can only do that through its design. It cannot be tacked on.

Think about it. Firefox does not have true sandboxing or process isolation. Wrapping it in a container is not sufficient, and having 40,000 separate Firefox containers -- each for a single domain, each with its own little "island" of state -- is unusable from a user POV for any average human being. It is also incredibly dangerous (oops, I accidentally opened my bank website inside my Gmail container; now they're contaminated. If my bank website serves me bad JS, it can possibly get at content related to my Gmail, if it can bypass browser policies. In Chrome's new architecture this can't happen, from what I understand, even if you don't run two separate, isolated instances of Chrome: SOP is now process-level, and truly baked into the design.)
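
As an aside, Chrome's strict per-site process mode can be enabled explicitly; --site-per-process is a real Chromium flag, and the launcher below is just an illustrative sketch:

    # Launch Chrome with strict site isolation: every site gets its own
    # renderer process, so cross-site documents never share an address space.
    import subprocess
    subprocess.Popen(["google-chrome", "--site-per-process"])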

How do you make this not garbage from a user POV? By rearchitecting Firefox around multiple processes, where each domain is properly sandboxed and needs specific access and authorization to request data from another process, and where the rendering processes are literally denied filesystem access. That requires the application taking control of the sandboxing itself, the same way Chrome does. Chrome goes to extreme lengths for this.

The only way to truly enforce these things is at the application level. Just taking Firefox, slapping it inside Docker or a jail, and doing that 40,000 times, once per domain, isn't even close to the same thing, if that's what you're suggesting.


You're right about a lot of things, but there are still missing pieces. Whatever sandboxing is used in Chrome (and you're right, Chrome is the gold standard now), a single issue can still bring it all down. There are RCEs in Chrome published almost every month. Some will be contained by the sandbox, and that's great. But I disagree with:

> It cannot be tacked on.

Security, as in prevention of the exploit itself, cannot be tacked on. But separation of data can be. And there's a whole scale of how to do it, from a separate profile, to containers and data brokers, to VMs like Qubes, to separate physical machines.

Chrome still uses a single file for the cookies of different domains (see the sketch below). And because you may have elements from different domains rendered at the same time, it needs access to that whole file. But that's exactly where profiles, or a stronger separation like containers, can enforce more separation.
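
To illustrate the single-file point, here's a small sketch, assuming the default profile location on Linux (Chrome keeps the database locked while running, and cookie values are encrypted at rest, but the host names are plain):

    # Every domain's cookies sit in one SQLite database under the profile
    # directory; an arbitrary-file read is enough to see all host names.
    import sqlite3
    from pathlib import Path

    db = Path.home() / ".config" / "google-chrome" / "Default" / "Cookies"
    con = sqlite3.connect(str(db))
    for (host,) in con.execute("SELECT DISTINCT host_key FROM cookies"):
        print(host)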

Yes, it does involve some interaction from the user, but it's not that bad, and the UI can help: "This looks like a bank website. Did you mean to open it in a Private profile?", "You're trying to access Facebook, would you like to use your Social profile instead?" Realistically, people only need 3-4 profiles (social, shopping, secure/banking, work).
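
A rough sketch of what that routing might look like; the profile names and domain lists are entirely made up:

    # Hypothetical router: classify a domain into one of a few profiles,
    # falling back to asking the user, as suggested above.
    PROFILES = {
        "Social":   {"facebook.com", "twitter.com", "reddit.com"},
        "Banking":  {"paypal.com", "mybank.example"},
        "Shopping": {"amazon.com", "ebay.com"},
    }

    def pick_profile(domain):
        for profile, domains in PROFILES.items():
            if domain in domains:
                return profile
        answer = input("Open %s in which profile? %s " % (domain, list(PROFILES)))
        return answer if answer in PROFILES else "Default"

    print(pick_profile("paypal.com"))   # -> Banking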

We've practically solved spam classification already, and that's in a hostile environment. Detecting social sites should be simple in comparison.



