
The Browser Is the Worst Sandbox Ever Designed - redman25
https://bentrask.com/?q=hash://sha256/c8909ef4e934d5954f1ef8c8
======
atemerev
> There are too many changes.

I am sorry, security guys, but unless it's military-grade software, security
is just another feature. And it is not the highest-priority one. Of course,
security would be easier if there were no new features or spec changes. And
software development in general would be much easier if there weren't any of
those pesky end users.

Which is a good thing, probably. Otherwise, you'd be mostly out of your jobs.
Security is an always-changing, chaotic battlefront.

Having said that, sandboxing is a good idea. Theoretically. But it is really
hard to implement right — attack surface on the edges of the boxes is quite
large. Remember those Java applets? They were sandboxed neck-deep, with an
excellent security model. Did it help?

~~~
3pt14159
The thing that you're missing here is that some of us do work on military-
grade software _and we still need to use a browser_. We need to trust that
going to a website won't leak information off of our HDs. I know a guy that
builds fighter aircraft displays out of a giant clean room he built in his
home. He was writing code for it on the same computer he was using for day to
day work because he didn't really know any better (more of an Electrical
Engineer than a software dev). My point is that you don't get to use the "it's
just another feature" line with some of this stuff.

For Counter Strike, sure. But for things like spreadsheets or web browsers,
hundreds of thousands or millions of people working in arms manufacture or
intelligence are going to be using your software and it needs to not leak
designs to foreign intelligence agencies or competitors.

~~~
jahnu
They need to fund development of a military grade browser then. Perhaps that
might be a better investment than the F-35 for example?

~~~
syngrog66
I bet it would burn through a billion dollars, require a team of hundreds,
take 10 years, and then be so buggy it would get cancelled.

I started off talking about the browser, not the F-35, but by the end the line
blurred. :-)

------
amluto
I mostly disagree. A sandbox like seccomp with a truly minimal set of system
calls allowed through (read, write, close, exit) is a _tiny_ attack surface
and provides functionality roughly equivalent to raw NaCl. As far as I know,
no holes have been found in seccomp configured like that. The problem isn't
the sandbox per se, it's the set of services (3D! Audio! USB!) that are
allowed through the sandbox. This proposal does nothing to help address that
problem.

~~~
lucian1900
Exactly.

Every sandboxed app ever has needed native access of some sort to do something
useful.

The best example of this difficulty is probably graphics (WebGL). To provide a
compelling user experience, the API must allow apps to upload almost arbitrary
code (shaders).

~~~
dahart
I'm genuinely curious what WebGL shaders can be used to do maliciously, and in
what ways they can be considered arbitrary code?

~~~
dietrichepp
They are arbitrary code, they just have to be written in GLSL. One of the
vectors is the fact that less effort has been put into isolating and securing
GPU memory, since previously, it was not considered sensitive. This is
compounded by the fact that WebGL shader code and API calls are often
processed by a layer with escalated privileges, written in C or C++,
completely opaque, with lots of unknown bugs.

[http://www.kitguru.net/gaming/security-software/carl/webgl-exploit-opening-browser-users-to-serious-attacks/](http://www.kitguru.net/gaming/security-software/carl/webgl-exploit-opening-browser-users-to-serious-attacks/)

[http://www.contextis.com/resources/blog/webgl-more-webgl-security-flaws/](http://www.contextis.com/resources/blog/webgl-more-webgl-security-flaws/)

[http://www.pcworld.com/article/227438/dangerous_webgl_flaw_puts_firefox_and_chrome_users_at_risk.html](http://www.pcworld.com/article/227438/dangerous_webgl_flaw_puts_firefox_and_chrome_users_at_risk.html)

~~~
greggman
Those 3 links are all about the same issue, which was closed long ago and was
FUD funded by MS.

There haven't been any real WebGL exploits that I know of. No, you can't
upload arbitrary shaders either. They're validated and then re-written by the
browsers.

~~~
amluto
And what does the validation and rewriting? Code in the sandbox? Code outside
the sandbox? Code in a different sandbox?

------
lmm
If we can get the defect rate down to zero, then the product of that and the
attack surface will still be zero.

If we can't do that, how much will sandboxing help? Sure, we can make a much
smaller surface that we expose - let's say 1% of the size of what the browser
currently exposes. Will that be good enough though? Xen's surface is about as
small as you can get for a reasonable general-purpose sandbox, and, per the
article, Xen has a lot of vulnerabilities too.

I think the only option is to push the defect rate down to zero. (This may be
impossible; if so, we're all going to die. Computing power will inevitably
advance to the point where any vulnerable system can be cracked by a lone
terrorist, and economics and our inability to coordinate ensure that power
plants, water treatment facilities, automated bioengineering facilities etc.
are going to be computerized.)

Rust is, I think, worthwhile as a step on the road towards provably correct
programs - memory management isn't everything, but it's something. Sandboxing
OTOH feels like a dead end, because it's inherently ad-hoc and unprincipled.

~~~
btrask
Thanks for this comment. It's the first really substantive response to my
core thesis.

I don't think the math is on your side. As the defect rate approaches zero,
there are diminishing returns to pushing it lower. On the other hand, the
attack surface effect becomes overwhelming. Addressing both at once will be
far more effective than concentrating on one or the other.

You might be right that in the long run, the defect rate needs to be 0.0. But
that is a long ways away. Once we've picked the low-hanging fruit (including
perhaps a provably correct sandbox), then we can start thinking about how to
prove the correctness of random applications.

~~~
lmm
> I don't think the math is on your side. As the defect rate approaches zero,
> there are diminishing returns to pushing it lower. On the other hand, the
> attack surface effect becomes overwhelming. Addressing both at once will be
> far more effective than concentrating on one or the other.

Huh? That's not how it works, is it? If we want to minimize X * Y and we
currently have X = 50 and Y = 5, it's much more efficient to focus on bringing
Y down.
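One way to make the disagreement concrete (my sketch, not from either commenter): write risk as the product R = XY and look at its sensitivity to each factor:

```latex
R = X \cdot Y,
\qquad
\Delta R \;\approx\; Y\,\Delta X \;+\; X\,\Delta Y
```

With X = 50 and Y = 5, shaving one unit off Y cuts R by about 50, while one unit off X cuts it by only 5, which favors working on Y if effort buys absolute reductions. But a proportional cut is symmetric: halving either factor halves R, so the dispute really comes down to where a unit of engineering effort buys the larger relative reduction.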

> Once we've picked the low-hanging fruit (including perhaps a provably
> correct sandbox), then we can start thinking about how to prove the
> correctness of random applications.

"low-hanging fruit" tends to mean doing unprincipled things that can't be
generalized / don't contribute anything in the long term, right? My view is
that there's limited value in lowering the rate of security flaws in ways that
aren't on the path to bringing it to zero. Getting it to zero will be
valuable; halving it isn't really (there's some economic value in reducing the
frequency of breaches, but not enough). So I don't think ad-hoc
countermeasures are worthwhile.

To the extent to which a sandbox can be written in a principled/provable way
it will be valuable. I'm not at all convinced that a general-purpose sandbox
is possible, but that's a different question. (The techniques of factoring a
program into pieces with the minimal capabilities that they need are valuable,
but I think this needs to be done far more holistically than is possible with
a sandbox; the security design needs to reflect the program design, because
whether particular operations are safe or not is a function of the
application-specific context. But this is very much speculation on my part)

~~~
btrask
> That's not how it works, is it? If we want to minimize X * Y and we
> currently have X = 50 and Y = 5, it's much more efficient to focus on
> bringing Y down.

I guess I see 0 as the asymptote. Like if you're already at 99.99%
reliability, adding another 9 is quite hard. On the other hand, if you're at
10%, then there are big gains to be had.

> "low-hanging fruit" tends to mean doing unprincipled things that can't be
> generalized / don't contribute anything in the long term, right?

I think sandboxes generalize much better than application-specific proofs.
Once you have a provably correct sandbox (which I think is possible today, if
you exclude things like 3D acceleration), you can run whatever you want in it:
old software, games, Flash, Windows 98. Application-specific proofs only work
for applications written in the approved way.

~~~
lmm
> Once you have a provably correct sandbox (which I think is possible today,
> if you exclude things like 3D acceleration), you can run whatever you want
> in it: old software, games, Flash, Windows 98. Application-specific proofs
> only work for applications written in the approved way.

What would a generic sandbox enforce? That an application never accesses the
network? That it never accesses the local filesystem? That it never
communicates with another process? Browsers need to do all those things and
more. I think you need application-specific knowledge to be able to enforce
the restrictions that matter.

~~~
btrask
Yes, that is a good question. I think OpenBSD's pledge(2) is a good model for
what a simple and useful privilege interface can enforce (although there is
room for improvement).

To some extent, this is a question of what the requirements are. If a sandbox
limits a browser to accessing certain files (a la chroot), is that secure? Or
does it need to be more fine grained? This isn't something that can be proven,
it's mostly a matter of user interface design.

I think there are good arguments for keeping security requirements relatively
simple and coarse (including ease of implementation and making sure users can
understand what guarantees are offered).

~~~
lmm
Hah, pledge is an example I was thinking of. I think it's too ad-hoc to
deliver real security; I think it's more of an exploit-mitigation technology
(comparable to something like ASLR), and as such ultimately a dead end.

~~~
btrask
I think it's worth distinguishing two problems with pledge:

1. It's likely to have bugs because it's mixed with a constantly changing
kernel and can't be proven correct

2. It isn't fine-grained enough

If pledge were completely bulletproof, but still limited in terms of what it
could restrict, would that still be worthless? Granularity is an interesting
problem but it's more subtle than typical criticisms of ad-hoc mitigations.

~~~
lmm
> If pledge were completely bulletproof, but still limited in terms of what it
> could restrict, would that still be worthless? Granularity is an interesting
> problem but it's more subtle than typical criticisms of ad-hoc mitigations.

I think the idea that you can allow a program to get itself in an attacker-
controlled state and that will be ok as long as you blacklist what the program
can do is fundamentally unworkable. So yes, I think pledge-like approaches are
always going to be worthless in the long term - if the program is under the
attacker's control, it will almost certainly have enough access to be able to
do damage (with a sufficiently skilled attacker), because every program does
something, particularly if we're talking about a large and complicated program
like a browser. (You could potentially segregate the browser into multiple
processes with distinct responsibilities, but I'm not convinced that helps,
because those processes still have to send commands to each other and so an
attacker who can subvert one can probably control the others).

If it had the level of granularity to express restrictions like "should be
able to write only files that the local user has chosen via the save dialogue"
then it might become an effective security layer, but at that point we're not
really talking about a sandbox any more.

~~~
kentonv
You are correct that pledge itself (or seccomp+namespaces on Linux) does not
form a useful sandbox. These mechanisms are useful for blocking out the world,
but not for re-establishing access to certain resources that the app needs to
do its job.

> "should be able to write only files that the local user has chosen via the
> save dialogue"

The web platform, incidentally, supports this! You can invoke a "choose file"
dialog, and then your web page gets access to the one file the user chose.

In capability-based security circles, this pattern is called a "Powerbox". The
pattern works especially well in capability systems, where it can be used to
choose more than just files.

[https://sandstorm.io/how-it-works#capabilities](https://sandstorm.io/how-it-works#capabilities)

------
ar0
In fact, sandboxing the browser in a VM (which I think the author suggests,
although he wants a more lightweight approach) with only limited file system
access is what is done by many security-conscious enterprises such as banks.
They usually embed Firefox in a Linux VM.

There is "Browser in the Box":
[http://www.sirrix.com/content/pages/BitBox_en.htm](http://www.sirrix.com/content/pages/BitBox_en.htm)

And then there was also VMWare's Secure Browser Appliance (in 2005! although I
cannot find any recent mentions of it):
[https://rcpmag.com/articles/2005/12/13/vmwares-secure-browser-appliance.aspx](https://rcpmag.com/articles/2005/12/13/vmwares-secure-browser-appliance.aspx)

Taking this to a new level by implementing a sandbox tailor-made for this
purpose might be a worthwhile approach. However, for it to be effective you
will always need to inconvenience your users: as soon as the browser running
in the sandbox has access to the full filesystem, you are back to where you
started. And if the browser does not have full access to the filesystem (but
e.g. only to a specific "Downloads" folder, as in current sandboxes), then for
uploading files you first need to copy them to the Downloads folder.

~~~
amluto
> E.g. for uploading files, you first need to copy them to the Downloads
> folder.

No, you need a "portal" or "intent" or "capability", whatever you want to call
it. The browser asks the sandbox to ask the user to select a file, and the
browser gets that file. Android has been able to do this for a while, but full
sdcard access is so easy that everyone uses it instead.
Flatpak (née xdg-app) will do this.

~~~
eridius
This capability system is exactly how OS X's built-in sandbox works. Sandboxed
apps don't have unrestricted access to the filesystem, but if they invoke the
system-provided Open dialog, and the user selects a file, the application is
granted access to that file (which it can persist, so it can continue to
access that file in the future).

------
jstewartmobile
When I read things like this, I get a little hope that with the end of Moore's
law, we will actually start improving the instruction set architecture in such
a way that we can write performant software in higher-level languages instead
of relying upon assembly++ languages like C, Go, and Rust.

I mean, with the number of VM languages out there like Java, PHP/Hack, .NET,
various LISPs, etc. you'd figure that some hardware support for
boxing/tagging/GC would be a standard feature by now, but nope. Instead, the
best approach we have for secure and performant software with x86 is 1) write
complicated system in tedious assembly++ 2) run under VM.

~~~
SixSigma
Back to Burroughs architecture (if only!)

[https://en.wikipedia.org/wiki/Burroughs_large_systems](https://en.wikipedia.org/wiki/Burroughs_large_systems)

~~~
coldtea
Because a single architecture change will solve all our issues and not cause
countless others (if only!).

~~~
pjmlp
Most likely not, but we surely wouldn't have the memory corruption ones.

------
0x0
When he asks for a sandbox to provide "a secure drawing API (including 3D,
which, yes, is hard). You need a secure file system. You need secure network,
and microphone, and webcam, and probably even USB. (It should also be possible
to block or control access to each of these.)"

... isn't that the job of a normal OS kernel? The article may call it a
"sandbox" but it sounds like normal access control in any OS, not much
sandboxing left when asking for all the apis?

~~~
phaedrus
In-browser applications and virtual machine technology are both poor work-
arounds to the same problem: that the design of modern operating systems gets
it so badly wrong. If I can run completely different operating systems on top
of another inside a VM application, why shouldn't I be able to run those other
operating systems _as_ applications? If I can run Javascript downloaded from
the internet that is JIT-compiled to native code inside a browser application,
why can't I just run native code downloaded from the internet directly (with
the same security - such as it is - as Javascript code). Why should I be able
to access one set of APIs from Javascript code, and a different set of APIs
from native C++ applications, etc.? Why should I be able to set up and migrate
a development environment in a VM and transfer it between computers but I
can't set up a separate development environment natively in the OS for just
one or a few applications _and export it and all its dependencies_ just as
easily as exporting a VM? In other words, if what I care about is about 15MB
of customized files and 100KB of OS settings customization, why should I have
to put 1.2GB of operating system files and other junk into a VM to make it
sandboxed and portable? (In this latter example I'm referring to a Linux
project called CDE (Code, Data, and Environment), which accomplishes something
like this by analyzing which files and shared libraries a running application
accesses and packaging just those.)

~~~
wvenable
This is a good comment. Operating systems were originally designed to
protect people from each other. My process can't interfere with your home
directory. My process can't mess up the entire OS (unless I'm admin). There
was a time when most programs were not actively hostile to the user running
them.

These days just about every application is user-hostile in some way. Even open
source Windows applications, depending on where you download them from, might
come with a hostile installer. Programs install background tasks. Programs
track you.

Mobile operating systems have been a step in the right direction. But a good
operating system should allow us to run whatever binaries we find anywhere on
the Internet and not be able to do anything harmful to us.

~~~
0x0
Exactly. Better to improve the OS level sandboxing rather than duct-taping it
all together with another layer of indirection via hypervisors. But it'll take
a lot of work.

------
mordae
[https://xkcd.com/1200/](https://xkcd.com/1200/) nothing else to say.

~~~
akkartik
That is indeed a very incisive critique, but what it and OP and this thread in
general suffer from is a fuzziness about threat models.

Driver protections in the OS prevent people who write popular software from
remotely taking over large numbers of computers. They're not intended to
protect against your laptop getting physically stolen.

As for the OP here, on the other hand, I'm still confused about what precise
threat model it's concerned with :)

------
mitchtbaum
If your battle with invaders happens inside your home, then you have only one
more misstep before game over.

When asked whether it would be prudent to build a defensive wall enclosing
the city, Lycurgus answered: "A city is well-fortified which has a wall of men
instead of brick."

[https://en.wikipedia.org/wiki/Laconic_phrase](https://en.wikipedia.org/wiki/Laconic_phrase)

"It is said that if you know your enemies and know yourself, you will not be
imperiled in a hundred battles; if you do not know your enemies but do know
yourself, you will win one and lose one; if you do not know your enemies nor
yourself, you will be imperiled in every single battle."

[https://en.wikiquote.org/wiki/Sun_Tzu](https://en.wikiquote.org/wiki/Sun_Tzu)

Moral:
[https://en.wikipedia.org/wiki/Code_signing](https://en.wikipedia.org/wiki/Code_signing)
+
[https://en.wikipedia.org/wiki/Web_of_trust](https://en.wikipedia.org/wiki/Web_of_trust)
+
[https://en.wikipedia.org/wiki/Quality_assurance](https://en.wikipedia.org/wiki/Quality_assurance)

------
flohofwoe
Good luck getting Microsoft, Apple, Google and all the different Linux
stakeholders into one boat on that :/ At least in the browser world there are
standardisation efforts, and even if they don't always work well, it's the
closest to "write once, run everywhere" that we ever got for a platform. And
it is the only remaining open platform where I don't need to enter a
'relationship' with a 'platform owner' to write and publish software for it
(edit: except Linux of course).

I would like an OS-level sandbox that doesn't treat its users like idiots. But
this is not what's currently happening (especially the "don't treat the user
like an idiot" part). iOS and Android have always been closed platforms, and
OSX and Windows are currently being locked down at high speed.

The browser platform might be a mess, but it's less of a mess than the sum of
all the underlying operating systems the browser is running on, and the only
open-yet-secure platform that exists.

~~~
pjmlp
I actually don't care for the browser as anything more than interactive
documents, so I am really looking forward to the Apple, Google and Microsoft
efforts in sandboxing.

Especially since they are also moving away from C.

~~~
emodendroket
Since that puts you in a small minority, I rather doubt that whatever they
come up with will be well suited to your needs.

~~~
pjmlp
So far Gatekeeper, macOS X sandboxes and UWP do suit my needs.

------
rvern
A sandbox written in Rust can only be secure if the libraries it depends on,
the compiler used to compile it, and the kernel (along with the kernel's own
compiler and libraries) were _all_ written in Rust.

Furthermore, even if applications are sandboxed, that only prevents
vulnerabilities from being exploited with other applications. A web page able
to compromise my web browser would still be able to get all my browsing
history, my usernames, my passwords… The sandbox would not help with that.

This does not necessarily make it worth it to rewrite everything in Rust, but
it is worth considering writing any new software in Rust instead of C,
especially if this new software is at a low level like compilers, the kernel,
and libraries. Other applications can be written in a higher-level language
with garbage collection and static typing instead of C.

~~~
btrask
A sandbox like NaCl doesn't depend on the kernel for security. In fact, it
shields the kernel, which is good, because common OS kernels tend to have
large attack surfaces of their own. The compiler of the sandbox needs to be
reliable (so like CompCert, until Rust matures in this regard), but the
compiler(s) of the software inside the sandbox don't matter.

You're right that stuffing _all_ of Chromium into a single sandbox would not
be very good, because pages would be able to attack the browser (history,
passwords, etc) and each other. You'd want to run each renderer in its own
sandbox (which to some extent Chrome already does).

------
sayrer
"The classic solution to every combinatorial explosion is modularity:
separation of concerns."

What is the basis for this comment? It sounds reasonable, but how would one
test it?

~~~
btrask
I don't actually have a cite for you, although it comes up in information
theory. For example, guessing a password one letter at a time is
(dramatically) easier than guessing the whole thing at once. I'd be curious to
know what the term for it is.

Edit: "factoring" is one word for it.

~~~
imh
I'd call it independence.

------
jacinabox
Some points on sandbox design worth mentioning:

* The OS should be the sandbox. It has all the features of a sandbox; they just need to be secure.

* In addition to userland processes, if we trust the compiler of a high-level language without unsafe features (like Java), we should be able to compile programs with it and load them directly into the kernel. (User policy is enforced by the compiler.) This is similar to what Singularity does, and it has a number of advantages. First, since there is no task-switch boundary, we reap a speed benefit. Second, and following from this: since there is near-zero cost to switch processes, people are encouraged to separate their applications into multiple processes, encouraging modularity.

* The OS kernel itself should be written in a high-level language.

* Finally, we need security in the compiler itself. This is achieved through the Futamura projections. That is, all that needs to be written of the compiler is an interpreter; the actual compiler is condensed into the notion of a partial evaluator, which figures out how to substitute any given program into the interpreter efficiently, hence compiling it.
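For reference, the Futamura projections alluded to above are usually written in terms of a partial evaluator mix (this is the standard formulation, not specific to this comment):

```latex
\begin{aligned}
\mathit{target}   &= \mathit{mix}(\mathit{interp},\ \mathit{source})
  && \text{(1st: specialize the interpreter to one program)} \\
\mathit{compiler} &= \mathit{mix}(\mathit{mix},\ \mathit{interp})
  && \text{(2nd: specialize \(\mathit{mix}\) to the interpreter)} \\
\mathit{cogen}    &= \mathit{mix}(\mathit{mix},\ \mathit{mix})
  && \text{(3rd: a compiler generator)}
\end{aligned}
```

So only the interpreter and the partial evaluator need to be written (and trusted) by hand.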

------
greenspot
Anyone can help?

He is writing that the current browser sandbox model is not secure, all in a
dramatic, clickbaity manner.

After many esoteric lines, then he says (maybe it's the tl;dr)...

"We need a highly secure (ideally provably secure) sandbox that doesn’t have
any features! Then, you can run an insecure browser inside, where security
doesn’t matter."

Then again some more lines of confusion.

So, what does he suggest? Putting the browser in a VM?

~~~
btrask
Put the browser in a VM/sandbox, yes.

tl;dr: Right now there is a competition between features and security, and
security is losing. By separating them, they wouldn't need to compete, and we
could have both. It isn't a good idea for a sandbox to directly handle things
like CSS transforms.

Is my writing really esoteric?

~~~
greenspot
Thanks for your reply, and apologies for the 'esoteric'. Maybe not the right
wording, but I read your post a few times and would have liked to get more
details about the idea.

Now that you've clarified it, the idea sounds OK at first glance, but how does
it work, will it work, and what would the implications be? And many more
questions. Currently I see the core idea surrounded by many vague statements.

And btw, you can do this today already: just start a VM with a browser (might
be a bit resource-heavy, and the integration into the main OS subpar). Or
Docker with a browser. Not sure though if the latter fulfills your security
requirements.

But in the end, the browser is more than an isolated piece of software in a
VM. Integration at the OS level is required; even trivial stuff like a full-
screen mode might complicate matters within a VM. Or 3D acceleration and
everything else where direct access to the API is required. And suddenly the
VM is piping everything through the main OS, because a browser just needs
access to all the OS APIs, and then you end up back at square one. So I also
find your idea a bit confusing.

------
xorcist
I think that partitioning and sandboxing software is underutilized, and that
the Chromium sandbox is well designed.

But I don't know what sandbox escapes for Chromium look like. My guess would
be that they involve things like browser plugins, or nasty stuff like WebGL.
In which case sandboxing hasn't failed; we're just not doing enough of it.

------
Sylos
I'm not sure I understand this correctly, but doesn't this forget that the
browser itself also contains sensitive information? So, for example, login
information would be inside the sandbox, and therefore you still need security
in the browser...?

~~~
y7
Ideally every website or tab would have its own sandbox.

------
panic
Nice article, though there's a slight historical inaccuracy:

 _Blink raised the implementation quality bar with its tab-per-process design
and privilege dropping._

These were Chrome features, not Blink features. Blink didn't even exist when
they were developed.

~~~
derefr
They really belong to the layer between Blink and Chrome: the Chromium Content
module, the API that projects like node-webkit and Electron consume.

------
amelius
Sounds like this should be incorporated into the OS, rather than the browser.

------
jkot
> _security risk is the product of defect rate multiplied by attack surface._

Call me crazy, but a security-sensitive application should not have direct
access to 3D acceleration, microphone, camera, USB etc.

~~~
btrask
See the later quote:

> (It should also be possible to block or control access to each of these.)

The good news is that the capabilities of the sandbox don't need to be ever-
expanding, the way browsers have been. The sandbox should support everything
the hardware can do, and then there are policy decisions about what
capabilities web pages actually get.

~~~
jkot
It should also be possible to disable JS. We all know how that worked out.

~~~
btrask
A better analogy is Android permissions. These days you can fake them and apps
mostly still work, right?

~~~
jkot
I think the security model on Android 6 is different now. You can grant
permission to use the camera for 10 minutes, etc.

------
kluck
The best things suggested:

- Strip down the features of the application to minimize attack surface. (See
bloated, badly designed web APIs...)

- Don't let sensitive code be produced by interns.

~~~
kowdermeister
And throw away years of progress in web technology. No, thanks.

What if I (and users) want those features? Ah, I remember, just install a
plugin, which is closed source thus more secure... wait a minute.

~~~
TeMPOraL
A lot of that "progress" in web technology should be rolled back ASAP and just
be remembered as a cautionary tale and/or used as a bedtime story for scaring
little children.

~~~
kowdermeister
If you remove them others will create even less secure alternatives. Remember
Flash?

~~~
kluck
Let them. Don't use it.

------
dakami
Rust isn't a sandbox. The whole point of a sandbox is that it survives
incorrect software at runtime. Rust is compile-time magic. Sure, cool,
different thing.

~~~
Sylos
He's not saying that Rust is a sandbox, but rather that Rust should be used to
write a sandbox.

------
jaxondu
I think what he wants is a browser tied to a Docker container. Does a browser
in a container make any sense technically?

------
z3t4
You can already do a lot with Linux namespaces. Every layer does help
security, but it can slow down performance.

------
spraak
Neat that this is the second article today on HN that mentions Qubes OS

------
MrPatan
Worse is better.

