Show HN: Docker-based immutable workstation (github.com)
155 points by lifeisstillgood 3 months ago | 50 comments



This is pretty much exactly what my project Darch does, except that it is truly, completely native.

https://godarch.com/

Here are my recipes: https://github.com/pauldotknopf/darch-recipes

You can use Ubuntu, Arch, Debian, or Void Linux.

My entire operating system is stored on DockerHub. I run "update-machine" from my terminal, grab a snack, come back, reboot, and my machine is updated.

My entire OS has a tmpfs overlay, meaning I can wreak havoc, and a simple reboot will wipe everything clean. I use "hooks" to mount disks at certain spots (/home, /var/lib/docker, etc.) in the initramfs, before the chroot.

Also, you can try it out really quickly in a VM with a pre-built image: https://pknopf.com/post/2018-11-09-give-ubuntu-darch-a-quick...

I run the exact same image, bit for bit, natively on the 3 machines I have (laptop, home, and work). Together with Darch and my dotfiles, my environment is consistent wherever I go.


Truth be told, Linux should become the de facto desktop. I like that you specifically mention i3. I think there is a strong use case for an Arch-like Linux distro that has an open architecture for building the entire distro from source into container images automatically. It would also allow desktops to be managed from a cluster manager like K8s. At first I thought packages should be shipped as static binaries, but I quickly realized they could use dependency containers and the like to dynamically include libraries with multiple version requirements if necessary, while still allowing dynamic updates too.

Using container permissions also gives you much of the permission structure you'd be looking for in your OS, much like a mobile device. I'm honestly surprised someone hasn't put in the development effort to create a truly modern, Arch-like distro built on containers for desktop and mobile. I think Purism is working with wlroots for Wayland. I'm looking forward to trying this with SwayWM if I can find the time, money, and partners to help me with it.

Also, check out Simula for some AR/VR concepts:

https://github.com/SimulaVR/Simula


> I think there is a strong use case for an Arch-like Linux distro that has an open architecture for building the entire distro from source into container images automatically.

That's what Fedora Silverblue is all about https://silverblue.fedoraproject.org/

I'm not sure what will happen after IBM's acquisition of Red Hat, but as far as I remember the last announcement was that Silverblue will get the best bits of CoreOS (in turn acquired by Red Hat) and Atomic Workstation.


> I think there is a strong use case for an Arch-like Linux distro that has an open architecture for building the entire distro from source into container images automatically.

Take a look at NixOS. I'm not sure about containers, but it has `nixos-rebuild build-vm`, which builds a VM disk image from your current system configuration.
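For example, a small configuration.nix plus one command gets you a throwaway VM matching the config (the packages and user below are just placeholder choices):

    # /etc/nixos/configuration.nix (fragment)
    { config, pkgs, ... }:
    {
      environment.systemPackages = with pkgs; [ git neovim tmux ];
      services.openssh.enable = true;
      users.users.dev = { isNormalUser = true; extraGroups = [ "wheel" ]; };
    }

    # build and boot a VM with exactly this configuration
    nixos-rebuild build-vm
    ./result/bin/run-*-vm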


That's pretty much what Ubuntu Core and snaps aim to be: a minimal core distro with image-based containers to host applications.


>>> This is pretty much exactly what my project Darch does

Great minds think alike ;-)

Thank you, I will be checking that out with interest


Thanks for posting this, it's super cool!


I see some minor grammatical issues with the doc: "dependencies" is spelled wrong, and the final sentence of the "Docker Immutable Workstation" section seems incomplete. Also, with this line: "a Mac if we are lucky"... expect to be chirped at by non-cultists like myself :)

I do something similar but with a VM instead of a Docker container, and it works well. The one thing I like about using a VM is that it runs full Ubuntu with an init system, so it's easier to run daemons like Samba. My VM shares its home directory so I can mount it on the host and share files across the barrier (I've always found VM "shared folder" implementations flaky). Files are less of an issue with Docker, but you might want to run other daemons within the context of your dev environment. The Docker "one executable per container" idiom really falls apart here and requires you to hack around it.

To keep things fresh, I just have a couple of bash scripts that I run (one for system packages, one for dev tools, one for dotfiles) on a new Ubuntu VM every few months. I'm sure it would be trivial to automate this with Vagrant to streamline it even further, but it's been good enough for me.
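A minimal Vagrantfile along these lines would probably cover that setup (the three script names are just placeholders for the scripts mentioned above):

    # Vagrantfile (sketch)
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/bionic64"
      config.vm.provision "shell", path: "system-packages.sh"
      config.vm.provision "shell", path: "dev-tools.sh", privileged: false
      config.vm.provision "shell", path: "dotfiles.sh", privileged: false
    end

`vagrant up` then runs all three scripts against a fresh VM, and `vagrant destroy && vagrant up` gives the same "every few months" reset on demand.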


> The one thing I like about using a VM is that it runs full Ubuntu with an init system

You can get that functionality without full VM overhead by using:

- https://wiki.archlinux.org/index.php/systemd-nspawn

- https://www.freedesktop.org/software/systemd/man/systemd-nsp...
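A minimal sketch of that approach (the directory and machine name here are arbitrary):

    # bootstrap a Debian tree, then boot it with its own systemd as PID 1
    sudo debootstrap stable /var/lib/machines/devbox
    sudo systemd-nspawn -D /var/lib/machines/devbox passwd   # chroot-style shell; set a root password
    sudo systemd-nspawn -b -D /var/lib/machines/devbox       # -b boots the container's init system
    # or manage it like a lightweight machine:
    sudo machinectl start devbox
    sudo machinectl shell devbox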


Thanks.

yes, the idea of long-running daemons on the workstation seems at odds with docker - something to get my head round I guess

As for more grammar ... aspell should be the next addition to the Dockerfile


> I do something similar but with a VM instead of a Docker container, and it works well. The one thing I like about using a VM is that it runs full Ubuntu with an init system

You could get that with LXC/LXD containers [1].

[1] https://linuxcontainers.org/lxd/introduction/
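Something like this (container name and paths are just examples):

    lxc launch ubuntu:18.04 dev          # full Ubuntu userspace with systemd as init
    lxc exec dev -- bash
    # share part of the host filesystem into the container
    lxc config device add dev hosthome disk source=/home/$USER path=/mnt/host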


You can even get it with straight-up Docker, as long as both the host and container are running a new enough kernel + Docker. You can even run unprivileged containers: https://developers.redhat.com/blog/2016/09/13/running-system... - I did this with Ubuntu (Bionic and Artful, but I can't remember if it worked with older versions, although this is hardly new, per the date on the blog). Of course this assumes you are running Linux everywhere and not some proprietary, limiting OS.
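The usual incantation looks roughly like this (the image name is a placeholder for any image that actually has systemd installed - stock ubuntu images don't):

    docker run -d --name devbox \
      --tmpfs /run --tmpfs /tmp \
      -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
      --stop-signal SIGRTMIN+3 \
      my-systemd-ubuntu /sbin/init        # hypothetical image with systemd inside
    docker exec -it devbox bash
    docker exec devbox systemctl status   # systemd really is PID 1 in there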


In my experience, I was not able to run systemctl restart some.service in a docker container. This wasn't an issue in a LXC container.


This is somewhat similar to Jessie Frazelle's setup: https://blog.jessfraz.com/post/ultimate-linux-on-the-desktop...

The talk and slides at the end of that post are worth the time if this sort of thing is interesting to you.


thank you, will check it out - seems way more advanced than me :-)


I tried Fedora Silverblue [0] with the release of Fedora 29.

It uses OSTree to manage state, and only allows mutability inside of the home directory and /var.

Overall I was very impressed. Another year or two of rounding out the typical use cases and it will make a fine immutable workstation.

[0] https://silverblue.fedoraproject.org/
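The day-to-day flow is roughly this (the layered package is just an example):

    rpm-ostree status         # show the current and pending deployments
    rpm-ostree upgrade        # stage the next OS image; the running system is untouched
    rpm-ostree install vim    # layer an extra RPM on top of the base image
    systemctl reboot          # boot into the new deployment
    rpm-ostree rollback       # go back to the previous deployment if it's broken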


Also have a look at https://en.opensuse.org/Kubic:MicroOS

It uses btrfs instead of OSTree. It only allows writes inside /home, /var, /tmp, /etc, ... and everything else is updated transactionally.
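Roughly (the package name is just an example):

    transactional-update up               # run the update inside a new btrfs snapshot
    transactional-update pkg install vim  # same, but layering an extra package
    reboot                                # the new snapshot only becomes active after reboot
    transactional-update rollback         # fall back to the previous snapshot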


As a user of standard Fedora, what would this give me?

Configuration of my system is mostly easy when setting up (dnf install xyz, plus 2 config file tweaks in /etc). It's the config of everything in /home that's complex (gnome settings, my emacs, bash and git config etc.).


It's immutable, so painless upgrades, without worrying about the borked packages that systems tend to accumulate after you've been running them across major releases. It also grabs most desktop apps through Flatpak, so in theory more frequent releases of desktop software.

It's a trade-off. I like it because I want a system that always works and updates silently, and I don't make heavy customizations. It's probably not for you.


It also has support for podman, which runs using user namespaces. Still a bit rough around the edges (fuse-overlayfs is still a bit buggy and not fully POSIX-compliant, the same way the kernel's overlayfs was initially buggy and Docker had to wait until overlay2 for it to work reliably).
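A quick rootless sanity check (assuming podman is on your PATH):

    podman run --rm -it fedora:29 id     # root inside the container...
    id                                   # ...but it maps to your own unprivileged uid on the host
    podman run --rm -it fedora:29 bash   # no root daemon involved at any point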


Not actually heard of that - will take a look, thanks


Did you consider alternate approaches, like using Ansible, to give you more native system options as well, or was the goal here truly to have something 100% cross-platform on anything? A hybrid might be a good approach too. Personally I think I'd end up backsliding, because if I'm using a platform I really like then I want to be able to take advantage of some of its best native software and interface bonuses, so a purely X-based setup would still lead me to some manual setup on my own systems. Once stuff outside the automatic flow creeps in, it feels like it might build up over time, since I'm not that disciplined. Some of the playbooks I've seen could be enhanced further with this, though - neat project.


> if I'm using a platform I really like then I want to be able to take advantage of some of its best native software and interface bonuses

This is precisely why I built Darch (https://godarch.com). I wanted an immutable OS, but also to take full advantage of my hardware, completely natively. Your images show up in GRUB and you can boot right into them.


I have explored that approach, using Ansible, and I think it is promising. I tried an approach earlier with Vagrant and VirtualBox, but the non-nativeness of my environment was causing more problems than it solved.


Good work! I've written something similar - an SSH server and X desktop (served up via a VNC server) in Docker. It's a reasonable base for building this sort of thing. You can build an image on top of it and throw in whatever GUI applications you need. It's a little light on documentation, but should make up for it by being fairly barebones.

https://github.com/dpedu/docker-desktop
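For instance, a hypothetical extension (the image name and exposed ports here are assumptions - check the repo's README for the real ones):

    # Dockerfile
    FROM dpedu/docker-desktop
    RUN apt-get update && apt-get install -y firefox

    docker build -t my-desktop .
    docker run -d -p 2222:22 -p 5900:5900 my-desktop   # then connect over SSH or VNC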



I will have a sneak peek, thank you

Oooh, IceWM - a nice reminder :-)


Might be interesting to pair this with something like Fedora Silverblue (read-only /usr managed using rpm-ostree) and fedora-toolbox.

http://silverblue.fedoraproject.org https://github.com/debarshiray/fedora-toolbox/blob/master/fe...


you are the second person to recommend silverblue - never heard of it but intrigued now


I've built something vaguely similar to give me a virtual desktop that I can access from anywhere over Chrome Remote Desktop: https://github.com/kstenerud/ubuntu-dev-installer/blob/maste...

It installs mate or lubuntu desktop inside an LXC container, allows access via x2go, and pre-downloads chrome remote desktop, which can be configured in less than a minute (run chrome, log in, open remote desktop, enable connections).

It was spawned out of my virtual builders project (https://github.com/kstenerud/virtual-builders), to allow me to get my Ubuntu development environment installed, configured, and running - even from a fresh install - on any LXD-capable machine in short order.

Even if my dev box dies completely, I can be back up and running on another machine or hard drive within an hour. I can set up as many of these desktops as my machine has CPU and RAM.


I love this idea, because Docker feels so much lighter than Vagrant and doesn't require VirtualBox to be installed and updated (like Vagrant did on Windows when I used it). However, on Mac and Windows, using Docker means you're running in a VM.

I wonder if there's any plan to implement the APIs that would allow the Docker server to run in WSL, thus obviating the need for a VM? Edit: there's no evidence of any motion, but here's a UserVoice for namespace, cgroups, etc. support in WSL: https://wpdev.uservoice.com/forums/266908-command-prompt-con...

Edit2: Actually it looks like docker support is being worked on: https://github.com/Microsoft/WSL/issues/2291#issuecomment-43...


Docker for Mac and Docker for Windows do use a VM, but it’s a single VM for the entire host system, not one VM per container. So the overhead is very reasonable.


My big issue with Docker for Windows is that it prevents VMWare Workstation from running, as it uses Hyper-V. You can have one or the other, but not both.


+1 - Super annoying.

I've never seen a good explanation of why hypervisors can't co-exist on Windows. I'm sure there's a technical reason but if anyone has any articles that explain this I'm very interested!


They also can't coexist on Linux, you know... It's a hardware limitation. You have to give the hardware component responsible for 64-bit virtualization to a single hypervisor, whether that is VirtualBox, KVM, or Hyper-V.

Unless you're fine with 32-bit. In that case you can use any system you want at the same time (also on Windows).


Thank you. Nice to hear - I will keep hacking away at it


Hi.

You wrote "There is a developer who (I think) works for Docker..."

I believe that you are referring to Jessie Frazelle.


I have been playing with this and am kind of OK with it - would value any feedback about the idea / concept and pointers.

It just came about out of annoyance with yet again trying to rebuild my personal workstation to even a barebones level.

It's definitely in the "if you aren't embarrassed you launched too late" category


This is approaching the idea of Qubes OS, which is a good direction IMO and something I've been waiting to try once it stabilizes.

https://en.wikipedia.org/wiki/Qubes_OS

https://www.qubes-os.org


NixOS may also be something to look at. It's doing things a bit differently compared to Qubes, but seems to have similar kinds of goals. I've not used it before, but the idea seems neat.

https://nixos.org/


It’s worth noting that NixOS and Qubes aim to solve very different problems. NixOS (and Nix itself) tries to improve package/dependency management, allowing for things like rolling back upgrades and flexibly using multiple versions of the same package. Qubes targets sandboxing of individual services/apps, with the goal of preventing lateral movement within an endpoint between applications.

NixOS doesn't sandbox apps by default (obviously, the user could run all their apps using containers/VMs/etc., but the same is possible on other distros).


Also worth noting that Qubes uses VMs (Xen) with whole guest OSs as the isolation mechanism, whereas Silverblue uses containers (Flatpak) to isolate individual apps. Qubes is great if you're paranoid and want to keep your banking VM isolated from your web browsing VM. Flatpak and Snaps are great if you just want to grab the latest LibreOffice without pulling a ton of dependencies into your package manager. I guess there's no reason you couldn't install Silverblue as a guest OS in Qubes...
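The LibreOffice case really is just a couple of commands (assuming Flatpak is installed):

    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.libreoffice.LibreOffice
    flatpak run org.libreoffice.LibreOffice
    flatpak update    # updates all Flatpak apps independently of the base OS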


Qubes seems complicated, and my main goal is to have one script that defines my whole laptop / workstation experience - the "in one place" idea is what matters to me here.

I think Qubes or something like it will be the right way to go for safety in "the future" - but I would like a really simple way to define my qubes upfront - really, really simple.

maybe they have it - I've not looked deeply enough


Was NixOS / Nix considered? Feels like a lot of overlap in the motivations.


Nix is problematic due to its memory requirements. The Nix package manager alone requires over 2 GB just to run.


While Nix can use a lot of memory while evaluating larger systems (or searching all of nixpkgs by evaluating everything), it is usable on systems with 2GB RAM.

Nix 2.0 had a bug which caused excessive memory use, but it's been fixed in 2.1: https://github.com/NixOS/nix/commit/2825e05d21ecabc8b8524836... https://github.com/NixOS/nix/commit/48662d151bdf4a38670897be...


@lifeisstillgood - in the spirit of mutual embarrassment...

https://github.com/ramses0/docker-devenv

https://github.com/ramses0/docker-devenv/blob/master/Dockerf...

I found it useful to separate OS configuration out from installing non-OS software (third-party)... it's basically a cue/cost that makes me prefer OS-level installations over "manual" installs.

https://github.com/ramses0/dotfiles

https://github.com/ramses0/dotfiles/blob/master/bash/profile

...lots of hardcoded and ugly stuff in there, but I like the idea of it a lot.

My experience has been mixed, but good overall:

1) docker mounts the local host's .ssh info, and mounts a "~/host" dir and "~/Git" dir (promoting some of the host FS to top-level directories) ... this is primarily around me coding / working with Git, so it's the right choice for me. It's nice for the auth story to mostly follow me around properly as I move from computer to computer (primary auth is on the local host; this image expects my local auth to be properly set up). A rough sketch of the invocation follows after this list.

2) docker FS isolation is interesting... you can "apt install $foo" in different windows/instances, and system-level changes are effectively isolated and disposable. It allows near-instantaneous install of certain packages (ie: "apt install ffmpeg"), and the command will be gone after that particular session (unless I decide it's something I use often enough to be added to the docker recipe). Contrast this with the cruft on a 5-year-old home linux box which has thousands of packages installed from running random tutorials from the net.

3) startup time is quick and roughly equivalent to booting an old x86 PC. No matter how much I screw up the (inside-the-docker) operating system, it's just a quick "reboot" to fix it back to the last known-good state.

4) VNC/X11 is "meh" and more of a parlor trick. However, there are interesting use cases for making an "appliance docker image" ... Firefox works OK, and is maybe a good idea for carrying around a "paranoid browser", but it definitely feels a bit uncomfortable.

5) I know the non-OS install stuff (ie: heroku, rustc, etc) could likely be done in the initial Dockerfile step, but keeping it separate forces me to make sure the system stays in a consistent state. When something isn't pulled directly from Debian, it's likely that a more compelling story is needed to make sure the software stays installable and up to date (ie: vim plugins, etc). I guess it feels a bit like a local version of homebrew recipes?

6) As a hack, I cobbled together a "build/bake" concept which locks down a particular set of the non-OS stuff as well. When "baked", the image doesn't really auto-update (it's tough to automate version control against certain random sources/installs), but I have some scripts which try to keep the base OS up to date, encouraging you to go through the "build-bake" cycle as time goes on, keeping the box "evergreen".
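The invocation described in 1) and 2) is roughly of this shape (the image name and container paths here are simplified stand-ins, not the exact repo contents):

    # auth and working dirs follow me from the host; "devenv" is a stand-in image name
    docker run --rm -it \
      -v ~/.ssh:/home/dev/.ssh:ro \
      -v ~/Git:/home/dev/Git \
      -v ~:/home/dev/host \
      devenv bash
    # inside: "apt install ffmpeg" is near-instant and vanishes when the container exits,
    # unless it gets promoted into the Dockerfile recipe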

An example victory: on a windows box, I can get "real" vim, bash, etc., and operate on windows files from within linux (something that WSL can't officially do).

On the mac side, I can get "sort -R/--random-sort" when processing random data, as well as the ability to quickly pull down ffmpeg or imagemagick... again, in a "disposable" environment, without seriously jeopardizing my OSX install or building up cruft.


Your 2) is spot on - I think it's the major reason we build up this cruft. By making an explicit choice to add something to my Dockerfile, I have decided it is worth the two minutes of mental effort (for me, a surprisingly high bar!)

not sure I get 5/6 - will dive in later (just decided this project needs a roadmap)


5/6 is the distinction between:

apt-get update && apt-get upgrade

vs:

curl https://rust-lang.com | grep "new version" && curl "https://rust-lang.com/installer.sh" | sh

I have very high confidence that debian/apt is "immutably available and consistently managed", but many of the non-free pieces of software (corporate-source or "not yet standardized, maintained, and included in debian") can/could require special attention, especially regarding updates.

build-env.sh - the operating system, apt-only, "just a linux box", download and install the heroku tooling b/c I often need it, but it's not in the main debian package pool.

bake-env.sh - "geeze, I hate waiting for heroku, rust, and random vim plugins to download... let me freeze this moment in time" ... apt-get update still works and will keep the O.S. up to date, but heroku, rust, vim plugins, etc. won't "auto-install / auto-update" b/c they don't have properly managed versions or an install/update cycle like the rest of the o.s.
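As a purely illustrative sketch (not necessarily how the repo actually wires it up), a hypothetical BAKE build argument could freeze the non-apt extras into the image:

    # Dockerfile (illustration only; BAKE is a made-up flag)
    FROM debian:stable
    ARG BAKE=false
    RUN apt-get update && apt-get -y upgrade && apt-get install -y curl ca-certificates
    RUN if [ "$BAKE" = "true" ]; then \
          curl https://cli-assets.heroku.com/install.sh | sh && \
          curl https://sh.rustup.rs -sSf | sh -s -- -y ; \
        fi

    docker build -t devenv:base .                          # "build": stays apt-managed
    docker build -t devenv:baked --build-arg BAKE=true .    # "bake": freeze heroku/rust/etc.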


the bake-env.sh seems to just trigger a flag in the Dockerfile

So you have a set of dotfiles at your URL, and if bake-env is used, those are part of the docker build - and presumably the dotfiles do the rust installs etc., so your image is ready with the rust ecosystem installed (not a rust user, so unclear to me), and then you can happily ignore it till you need to upgrade some rust package?



