
Container Linux on the Desktop [slides] - soohyung
https://docs.google.com/presentation/d/17Hml1iFqdXElxOcrh9caQSC5px5mDgaS015Vhaz42ZY
======
xte
In my own personal opinion the container idea is "meh" and today's usage trends
are AWFUL. They serve one real purpose: opening the door to proprietary software
on GNU/Linux, destroying its community model.

You can't run unsafe software safely; that's a myth. Worse is placing trust in
a specific technology and thereby ignoring both its own potential flaws and the
security implications of other software.

The future for me is Nix{,OS}/Guix{,SD} certainly not
"chroots"/"jails"/"zones"/"lpar"/*.

~~~
zerogvt
I agree but in containers' defense they weren't supposed to be about security
in the first place. In my understanding the main incentive for the huge push
behind them was immutability and them being microservices-friendly technology.

~~~
xte
With NixOS and GuixSD you have easy to deploy, easy to update immutable
systems with IaC built-in and no containers at all...

They can also be used for development: create isolated FHS environments with a
simple text file describing the environment, again without containers.

That's one of the reasons I consider them the future, you have:

- built-in infrastructure as code

- built-in immutable servers

- built-in orchestration/provisioning

All with human-readable, easy to manage, text files.
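
For the development case, a sketch of what such a text file might look like: a hypothetical `shell.nix` (the package names are only examples):

```nix
# Hypothetical shell.nix: running `nix-shell` in this directory drops you
# into an isolated environment containing exactly these tools, with no
# container involved.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.python3
    pkgs.ripgrep
  ];
}
```

Delete the file and the environment is gone; nothing was installed system-wide.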

~~~
res0nat0r
IMO Nix code is the exact opposite of easy to read and usable. I used it on my
headless box at home for a couple of years and really liked the concept, but
the syntax is just way too hard to use. I'd like to take a look at Guix, as I
think the Lisp syntax is much easier to grok.

~~~
xte
Yep, that's why I suggest trying GuixSD, but for now GuixSD is really raw and
limited compared to NixOS... And Nix itself, while not very clear in some
cases, is clear enough for most usage...

------
reacharavindh
Very interesting work.

I have seen OpenBSD's pledge and unveil system calls, which achieve this easily
for applications running on OpenBSD. Elegant in the way it's done.

On the other hand, running container images for everything, including the text
editor, seems like NIMBYism in OSes and packages. There is a reason why our
OSes evolved with package managers: sharing and reusing common libraries so
that they can be updated once, safely. Bug in libsodium? Update libsodium on
your system, and all applications that use it automatically get the new version.

With containers, you have to rely on each and every container to update
libsodium...

Secondly, it takes away the sharing, so you have several copies of libraries
for each application you use as a container. What does that do to memory usage
and disk usage?

Very interesting either way, and got me thinking about using such for specific
cases.

~~~
the8472
It's worse on several levels. The first thing containers do is take away the
tools needed (unshare, seccomp syscalls) for an application to secure itself
or its children.

The pledge/unveil model is far more elegant.

> What does it do to memory usage, and disk usage?

You'd need a deduplicating filesystem, e.g. btrfs, for the storage aspect. For
memory consumption you either have to rely on KSM or hope that they will
implement page cache sharing for btrfs. Overlayfs has it, but it's less
space-efficient.

But I agree that this shouldn't be needed because containers shouldn't each
ship with their own OS disk image. That's not orthogonal to the security
aspect.

~~~
compsciphd
You don't need a deduplicating file system or KSM; regular file systems and
regular shared read-only code pages work fine.

See the links I posted above. If every package (or a subset of packages based
on the underlying source package) in a traditional Linux distribution is
treated as an independent layer that can be composed together on demand (i.e.
what happens when you install a package, except much slower) using a
traditional union file system, you get deduplication (as every image using that
package will be sharing the same exact portion of the file system) and you get
memory sharing for free (for the exact same reason, as it's fundamentally no
different from multiple processes on a single host dynamically linking the
same binaries).

~~~
the8472
_In practice_ containers are siblings with a lot of redundant data, not a
layer cake.

------
yankcrime
The corresponding talk is on YouTube, here:
[https://www.youtube.com/watch?v=gES4-X6y278](https://www.youtube.com/watch?v=gES4-X6y278)

------
willtim
I must say I am tempted by Qubes OS as a way to get this level of isolation
between apps. However I'd likely need a new laptop, as RAM requirements are
much higher.

~~~
AllegedAlec
I've been considering QubesOS for my next laptop (won't do it on desktop yet;
still like my gaming too much). My old netbook is dying, and I'm considering
something more powerful to do coding on. However, given that the lead of Qubes
just left, I'm wondering what the future of the project is going to look like.

------
larrywright
I got a new MacBook Pro from work a couple of months ago, and decided to try a
scaled back version of this. You can’t reasonably run graphical apps in Docker
on MacOS, but most of the CLI tools that I would install via HomeBrew can be
run in Docker. I took the same approach that Jess did, and set up a shell
alias for these commands. That way I run the command the way I would normally
run it from Terminal.app, but the app isn’t installed at all on my host.

To be clear: I have no real reason to do this, other than to just see how it
works. It’s nice to know that if I got a new machine tomorrow, all of the CLI
stuff I need is defined in some Dockerfiles and bash aliases, and could be
reinstalled pretty quickly. There are other ways to do that, but it’s a fun
experiment. In practice, there’s really very little noticeable overhead to
running things this way on modern hardware, but I’m also not running things in
tight loops where that overhead would matter.

If you’re curious at all about this, Jess has a couple of GitHub repos worth
looking at:

[https://github.com/jessfraz/dockerfiles/](https://github.com/jessfraz/dockerfiles/)

[https://github.com/jessfraz/dotfiles](https://github.com/jessfraz/dotfiles)

Specifically, her aliases are set up here:
[https://github.com/jessfraz/dotfiles/blob/master/.dockerfunc](https://github.com/jessfraz/dotfiles/blob/master/.dockerfunc)
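
The pattern in that file boils down to a shell function that shadows the command name and runs a container instead. A minimal sketch (the image name and flags here are illustrative, in the style of her dockerfiles, not copied from them):

```shell
# When you type `htop`, this function runs instead of a host binary:
# it launches the containerized version, so nothing is installed locally.
htop() {
  docker run --rm -it \
    --pid host \
    jess/htop "$@"
}
```

Drop a few of these into your shell profile and the commands behave as if they were installed, while living entirely in images.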

~~~
liveoneggs
I run as much python/node/ruby/whatever inside of containers as possible but
packaging up something like vim would probably just make me angry with the
little startup delays :)

------
ashrk
I just want to be able to have multiple, suspendible desktop sessions with
different apps running and/or installed and different files available. Ideally
I should be able to kick up more than one at a time to let them exchange data.
Preferably without having to run a full VM per session. Bonus points if I can
ship them between physical machines, though I know that's a long shot.

That's more interesting to me than individually-containerized applications. I
want to have per-project and/or per-task-group desktop sessions that are right
where I left them when I spin them back up, within reason. That's the one big
"killer feature" I feel lacking in every modern desktop OS I use.

~~~
fulafel
You can stop and continue tasks easily (e.g. just SIGSTOP them), but if you
want them to stop using memory and kernel resources while suspended, then it's
equivalent to process checkpointing, which has proven a hard problem on Linux
so far.
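
The easy half of that, stopping and resuming via signals, is a one-liner; the hard part is making a stopped task release its resources and survive reboots, which is what checkpoint/restore tools like CRIU attempt. A quick sketch of the signal half:

```shell
# Suspend and resume a process with job-control signals. While stopped it
# keeps all of its memory and kernel resources; releasing those would
# require full checkpoint/restore.
sleep 300 &
pid=$!
kill -STOP "$pid"                  # ask the kernel to stop the process
sleep 1                            # give the signal time to land
state=$(ps -o stat= -p "$pid")     # "T" = stopped
kill -CONT "$pid"                  # resumes exactly where it left off
kill "$pid"                        # clean up
```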

------
DyslexicAtheist
This is a really nice idea for anyone who wants to learn containers from
scratch and get deeper into appsec.

But I can't think of a valid reason why anyone would want to use this in
practice when there is QubesOS. If the reason is to increase my base-layer of
security for OpSec, then this is a poor choice. Perhaps this is what she meant
in the slides when she says _don't try this at home_ ... maybe she explained
it better in the talk (I haven't watched it). Seriously don't do this at home
unless it's for _educational purposes_ (in that case I agree it is awesome).

------
aritmo
You can get this running with LXD (system containers). Some people have
managed to get a whole GUI running in an LXD container.

------
tony-allan
Running everything in a container is the definition of commitment! I am
looking forward to the day when this is normal.

~~~
tapoxi
I've been running Silverblue
([https://silverblue.fedoraproject.org/](https://silverblue.fedoraproject.org/))
for the past month or so and for the most part you can use it as a daily
driver. The OS itself is an immutable image created and updated with OSTree,
and you can layer RPMs on top if they don't fit a containerized workflow. For
applications, you either run Flatpaks (for desktop) or Docker/OCI images
through podman.
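
Assuming an ostree-based Fedora install, that workflow looks roughly like this (the package, app, and image names are just examples, grouped in a function for illustration):

```shell
# The three paths mentioned above: layer an RPM, install a desktop app as
# a Flatpak, and run a CLI tool as an OCI container via podman.
silverblue_setup() {
  rpm-ostree install htop                    # layered onto the immutable image
  flatpak install -y flathub org.gnome.Maps  # desktop apps come from Flatpak
  podman run --rm fedora:latest echo hi      # CLI tools run in containers
}
```

On a real Silverblue machine, `rpm-ostree install` stages the change for the next boot, so the running image stays immutable.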

~~~
watersb
I am new to Silverblue, but also using it daily for the past month.

An important aspect of it, I think, is "rootless" podman -- I can create and
manage containers without elevated privileges. Any required root escalation is
done in the containers.

Still, it is weird. GUI apps do work, but hmm.

Solaris Zones are better-integrated than this newfangled podman stuff that you
kids are using these days.

------
ivnilv
Is there an actual talk we can listen to/watch somewhere?

Thanks

~~~
Jaruzel
Yes, already linked in a comment:

[https://news.ycombinator.com/item?id=18500903](https://news.ycombinator.com/item?id=18500903)

