
Sandboxing and workload isolation - sudhirj
https://fly.io/blog/sandboxing-and-workload-isolation/
======
hinkley
I just cannot shake the feeling that everything Containers have promised us, I
was promised back in 1991 with pre-emptive multitasking and protected memory.

I also can't help but think we're going to end up with a microkernel with some
sort of nested cgroups for user processes. Which is itself going to look a
little bit like Erlang...

~~~
rrdharan
When I was at VMware I was pretty excited about this; it made a lot of sense
(and still does):

[https://www.zdnet.com/article/bea-runs-java-on-bare-virtual-...](https://www.zdnet.com/article/bea-runs-java-on-bare-virtual-metal/)

~~~
hinkley
Azul Systems introduced a JVM that used the VT instructions directly to vastly
accelerate GC.

I think some of their custom hardware was also a 'bare metal' VM arrangement
for years, but I never actually met anyone who was a customer.

------
hashamali
Love what the folks at fly.io are doing with edge deployed containers.

Interesting comparison with Cloudflare's isolate approach which they expanded
on earlier this week: [https://blog.cloudflare.com/mitigating-spectre-and-other-sec...](https://blog.cloudflare.com/mitigating-spectre-and-other-security-threats-the-cloudflare-workers-security-model/)

Seems like there is an edge compute option for any requirement these days!

~~~
EE84M3i
What's the cutoff for something to be considered "Edge"? It looks like Fly has
only about 20 POPs.

~~~
sudhirj
The ability to have an auto-scaling, auto-distributing compute mesh, I’d say.
Once a company builds that, they can add POPs as fast as they can spend (or
raise) money.

~~~
yencabulator
By that definition, AWS and GCP are both "edges" then. Doesn't sound like a
useful definition...

------
fwsgonzo
I've worked on unikernels for several years, and I've recently had some
insight into low-overhead emulation. Depending on your use case, it's possible
that a tiny emulator has less overhead than everything else for the
construction/destruction part. So, imagine that you have a programmable web
server and each request needs to be served quickly. Then, reducing the
overhead of forking the master machine is everything. With the cost of a few
emulated instructions turned into system calls (which largely model what you
already have to check normally), you have lots of room before you hit a point
where something heavier wins, like a JIT-enabled emulator such as rv8, which
has millisecond overhead.

I am working on such a thing now and it has a 6-microsecond overhead in a
highly concurrent scenario (60k req/s). So, I think there are cases for
everything.
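For a sense of scale, the per-request construction cost being compared against
can be eyeballed with a crude spawn benchmark. This is a sketch, not his
emulator: it measures ordinary process spawn/reap as a baseline, and
`date +%s%N` assumes GNU coreutils.

```shell
#!/bin/sh
# Crude measure of per-"request" construction/destruction overhead:
# average the cost of spawning and reaping a trivial process N times.
N=500
start=$(date +%s%N)
i=0
while [ "$i" -lt "$N" ]; do
  /bin/true
  i=$((i + 1))
done
end=$(date +%s%N)
avg_us=$(( (end - start) / N / 1000 ))
echo "avg spawn+reap overhead: ${avg_us} us per request"
```

On typical Linux boxes this lands in the hundreds of microseconds, which is
the kind of number a 6-microsecond emulator fork is competing with.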

------
iancarroll
We've deployed nsjail for our file conversion pipeline (i.e. ImageMagick) and
it's been great -- very nice configuration language and strong isolation
properties, with a manageable performance hit. Definitely easy to write a
configuration that would not securely sandbox you, though, which seems like a
strong point towards Docker or other more high-level solutions.
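For context, an nsjail policy wrapping a single ImageMagick `convert`
invocation looks roughly like the following protobuf-text config. This is a
hedged sketch, not a vetted policy: the paths, limits, and mounts are
illustrative, and a real jail needs every shared library `convert` actually
loads bind-mounted in.

```
name: "imagemagick-convert"
mode: ONCE              # run one command and exit
hostname: "convert"
cwd: "/work"
time_limit: 30          # seconds
rlimit_as: 512          # address-space limit
rlimit_fsize: 64        # max output file size
clone_newnet: true      # no network inside the jail

mount {
  src: "/usr/bin/convert"
  dst: "/usr/bin/convert"
  is_bind: true
}
mount {
  src: "/lib"
  dst: "/lib"
  is_bind: true
}
mount {
  dst: "/work"
  fstype: "tmpfs"
  rw: true
}

exec_bin {
  path: "/usr/bin/convert"
}
```

The "easy to get wrong" part is visible here: forget `clone_newnet` or
bind-mount too much, and the jail still runs fine while isolating much less
than you think.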

~~~
browsergap
I might look at nsjail. I run headless Chrome inside cgroups (for CPU and
memory restrictions), as a runtime-generated no-login user (for privilege
restrictions), and with a custom group ID (for iptables bandwidth and routing
restrictions). Plus I monitor the pool of Chromes with a shell script, and if
any exceed a resource threshold I use cpulimit and then pkill to take care of
it.

I could just use Docker (and I have Docker images of this app that some people
like to run), but I think this way is more lightweight, gives me more explicit
control, and leans into OS-level security features like privileges and
cgroups, with just a couple of Unix commands.
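A minimal sketch of that setup, assuming cgroup v2, `setpriv`, and root for
the setup functions. All names, limits, and thresholds here are illustrative,
and nothing runs unless you call the functions.

```shell
#!/bin/sh
# Sketch of the isolation setup described above. Illustrative only.

setup_cgroup() {   # CPU and memory restrictions
  mkdir -p /sys/fs/cgroup/chromepool
  echo "200000 100000" > /sys/fs/cgroup/chromepool/cpu.max   # ~2 CPUs
  echo 2G > /sys/fs/cgroup/chromepool/memory.max
}

setup_user() {     # privilege + traffic restrictions
  groupadd -f chromenet
  useradd --system --shell /usr/sbin/nologin -G chromenet chromeuser
  # mark the group's traffic so iptables/tc can shape and route it
  iptables -A OUTPUT -m owner --gid-owner chromenet -j MARK --set-mark 1
}

launch_chrome() {  # drop privileges, then join the cgroup
  setpriv --reuid=chromeuser --regid=chromenet \
    chromium --headless --remote-debugging-port=9222 &
  echo "$!" > /sys/fs/cgroup/chromepool/cgroup.procs
}

cpu_pct() { ps -o %cpu= -p "$1" | cut -d. -f1; }  # integer CPU%
over_threshold() { [ "${1:-0}" -gt "$2" ]; }

watchdog() {       # throttle, then kill, any Chrome over threshold
  for pid in $(pgrep -f 'chromium.*--headless'); do
    if over_threshold "$(cpu_pct "$pid")" 90; then
      cpulimit -p "$pid" -l 50 -b   # throttle first
      sleep 5
      kill -9 "$pid"                # then pkill-style cleanup
    fi
  done
}
```

Run the watchdog from cron or a loop; the cgroup enforces hard caps while the
watchdog handles the "one runaway tab" case more gracefully.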

~~~
sitkack
I thought I was the only one running workloads in headless browsers! We should
start a club for just us NaN.

~~~
browsergap
OK, come join my club!
[https://github.com/dosyago](https://github.com/dosyago) send me an email :)

------
louwrentius
So light-weight virtual machines on a security-hardened code base.

Have we come full circle?

~~~
jrott
Probably since we're steadily reinventing all the techniques that people used
to run stuff on mainframes.

------
0xdeadb00f
> But you’re not running OpenBSD, so, moving on.

But.. I am running OpenBSD ;(

------
eyberg
There are also unikernels, several of which work under Firecracker.

"The first, and really the big problem for the whole virtualization approach,
is that you need bare metal servers to efficiently do lightweight
virtualization;" <\- this isn't really true anymore as unikernels can be
deployed to {aws, gcloud, azure} as machine images and so can be as light-
weight as your program needs.

~~~
mrkurt
We've been playing a lot with OSv. But unikernels aren't strictly sandboxing
or isolation.

If you're deploying a unikernel to EC2, you're just using their underlying
isolation. You can't safely deploy multiple unikernels on the same EC2
instance without figuring out your own nested isolation.

~~~
eyberg
Sure - the hypervisor is the isolation vehicle - the same as firecracker.

The only (?) reason I can see for wanting to deploy N unikernels to one AWS
instance is wanting full control over the orchestration of things like
multi-tenancy; however, it is very much possible, and it is possible on
GCloud as well. It should be noted that you are in the same boat as
Firecracker here: both use virtualization.

~~~
yencabulator
The usual reason to want to do N things with 1 piece of hardware is cost
savings.

------
jamestenglish
Anyone else having a hard time even reading this article because of the lack
of contrast from text to background?

~~~
markbnj
No, but I am not sure why you got downvoted for asking.

~~~
tptacek
It's against the guidelines to do so.

