
Speaking of a multipurpose home server, how do you guys compartmentalize it so that one faulty or vulnerable app doesn't take the whole thing down?

Docker/containers used to not be hardened enough. Are they now?

Virtualization/VMs used to be the answer but it adds both performance and management overhead. Is there a good system here?

Or something else entirely? Like old-school separate users.




I use Proxmox, which is more or less a VM and workflow manager on top of KVM.

The overhead on something like an RPi would be ridiculous, but on modern x86 hardware with an IOMMU (VT-d in Intel speak, AMD-Vi for AMD), the overhead of passing through HW is, for homelab purposes, essentially zero. The hardware is a lot more expensive, but the organization and extensibility are well worth it.

I have anything that I expose directly to the internet on a separate VM from my "internal" services. If I were super paranoid, I'd expose them to separate VLANs, and then use my FW to control network traffic. The Intel 82599 can enforce different VLANs on different VFs with SR-IOV.
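
For reference, a minimal sketch of what that per-VF VLAN enforcement looks like with iproute2, assuming the 82599 PF shows up as enp1s0 (interface names, VLAN IDs and the MAC are placeholders):

    # create two VFs on the PF
    echo 2 > /sys/class/net/enp1s0/device/sriov_numvfs
    # pin VF 0 to the DMZ VLAN and VF 1 to the internal VLAN;
    # the NIC tags/strips in hardware, the guest never sees the VLAN
    ip link set enp1s0 vf 0 vlan 20
    ip link set enp1s0 vf 1 vlan 30
    # optionally lock the MAC so a compromised guest can't spoof it
    ip link set enp1s0 vf 0 mac 52:54:00:aa:bb:01 spoofchk on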

I have a VM that runs flatcar for docker for things that are too hard to set up otherwise, but I vastly prefer NixOS for most things.


> If I were super paranoid, I'd expose them to separate VLANs, and then use my FW to control network traffic

This is exactly what I did initially, but it was indeed a bit of a pain to manage. Eventually I went with something in between, by first compartmentalizing services and then putting them in separate VMs with separate VLANs:

0. Router / FW.

1. WireGuard / reverse proxy.

2. Personal, e.g. file storage, backups.

3. Hosting. My personal site is reverse proxied through Cloudflare and only their IP ranges are whitelisted (see the firewall sketch after this list).

4. Compute, i.e. stuff I want to compile / develop / run on my server. Handy if I want to run a heavy simulation overnight or need more disk space / RAM / CPU power than my M1 MacBook Air has available.

5. Services. This runs many small tools / services that don't need access to my RAID pool or anything like that. If this gets infected I wouldn't really care.

6. VPN. This VM can only access the internet through a VPN. Doesn't have anything installed ATM, but has been used in the past for urlwatch and torrenting.

7. Test. This is where I try out new software before actually installing it on the correct VM. Once I've concluded testing I roll back this VM to a clean install.

It takes a weekend to install Proxmox and set up the VMs / VLANs, but after that it's easy to use.
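
To make item 3 concrete, here's a rough nft sketch of the Cloudflare-only rule as it might look on the router/FW VM. The two ranges and the hosting VM's address are placeholders; the real ranges are published by Cloudflare and need to be kept in sync:

    nft add table inet hosting
    nft add set inet hosting cloudflare4 '{ type ipv4_addr; flags interval; }'
    nft add element inet hosting cloudflare4 '{ 173.245.48.0/20, 103.21.244.0/22 }'
    nft add chain inet hosting fwd '{ type filter hook forward priority 0; policy drop; }'
    nft add rule inet hosting fwd ct state established,related accept
    nft add rule inet hosting fwd ip daddr 10.0.3.10 tcp dport 443 ip saddr @cloudflare4 accept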


It seems to be almost impossible to find a machine that is both low-power and supports SR-IOV ARI (more than 8 VFs).

And the best reason to use SR-IOV for networking is that you completely avoid the awfulness that is the Linux bridging/firewalling stack.


Any advice on how to add a few Raspberry Pi 4Bs to a Proxmox setup? Maybe just as bare Docker nodes? I've used them for Nomad and k3s before, but Proxmox seems a bit heavy resource-wise in comparison.


Does an RPi 4B even have virtualization extensions? I'm wondering if it can even run KVM.


Another option is the free tier of ESXi. It works well, but having tested Proxmox recently, I really liked it.


Docker is the de facto standard in the community now (and, to a lesser extent, alternatives like LXC or Podman). The daemon should be run rootless if possible, or the containers rootless if not.
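
For anyone who hasn't done the rootless setup, it's only a couple of commands on most distros (assuming the rootless extras package and uidmap/subuid entries are in place; run as the unprivileged user):

    dockerd-rootless-setuptool.sh install
    systemctl --user enable --now docker
    export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
    docker info --format '{{.SecurityOptions}}'   # should list rootless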

You can still use VMs, and some use that as an additional layer of isolation because they're virtualizing anyway (the performance overhead is really negligible).

I've been self-hosting on my home server for at least 5 years now, and I think I've only seen two or three vulnerabilities across all the services I know about, none of which were ever really exploitable.


Have you tried using kubernetes to manage your containers? Wondering if the extra level of complexity is worth it for a home server.


100% not worth it. If you need multi-host for some reason (beyond “I want it” - and you don’t) then try docker swarm.

It’s your home environment. You want it to be easy. You want to use the tools you run, not maintain them. If you want to learn k8s for professional growth, learn it separately from a home server.

Your home server can be more pet than cattle.


I went with Docker Swarm on the same advice from someone else, and tbh, it's unnecessary overhead as well. And at least on RPis, it's very fragile and not as self-healing as I'd hope it to be. My stacks are well compartmentalized, but weird database locks will still happen, or the swarm will just become unreachable, and I gotta go power-cycle a node or two to get things back up again. (I mean, we're talking once every few weeks or something, but still not okay.)

I've been moving workloads to an old gaming rig running NixOS with varying levels of isolation (some containers, but really just good user/group/permissions management), and it runs super well.

Of course, you could do the same with just Docker Compose and no Swarm, and I think you'd still be better off than using Swarm.


Yea, I had a not-dissimilar experience. I didn’t have as many issues, but I pretty quickly realized a single old gaming PC was way easier than a half dozen Pis stacked up in the closet trying to coordinate. Autoscaling and balancing seem nice at work… but that complexity was rarely needed at home.

The main reason swarm is better than other options for clustering, IMO, is networking. Nodes can be set up to share published ports across all devices and map traffic back to the correct container on whatever host it’s on, so you can decouple the target IP:port from the container.
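
In swarm terms that's the ingress routing mesh: publish a port on a service and every node listens on it, forwarding to wherever the task actually runs. Rough example (service name, ports and image are made up):

    # any node's IP:8080 reaches a task, regardless of where it's scheduled
    docker service create --name web --replicas 2 \
      --publish published=8080,target=80 nginx:alpine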


And yet my iPhone is cattle. Treating any machine like a pet seems like a recipe for disaster.


My iPhone is a pet. It’s a pet with a great backup system that turns a new pet into exactly my pet. But it’s still a pet.

There’s only one and it changes manually as I need features to change. I download and install things as needed, from gui, with no version control or script to manage it. It’s a pet.


It sounds like for you: hand-operated -> pet, automated/script operated -> cattle. I think the whole point of the analogy is about if things get slaughtered can you furnish a new one without batting an eye. If yes, then cattle, not pet. So I guess the question is: if someone stole your phone right now, would you blink?


> if someone stole your phone right now, would you blink?

Yes absolutely. I can afford a new one, and I would immediately buy a new one (well I’m already waiting for the newly released one but still). I would still be quite upset and my life would be interrupted at least a little.

I took the pet/cattle analogy to be about how manual the setup is, and how replaceable it is. I think Apple has smartly blurred that line with great backup tech, but I would still consider the “lovingly” hand-customized aspect of maintaining a phone solidly a pet. Some version of my current phone has been around for ~10 years through various hardware iterations, all restarted from a backup image. I would be distraught if I had to recreate it without a backup, just finding my apps, logging in, finding wallpaper, rearranging icons, setting up shortcuts, etc. Maybe that's the ideal state for a home server - a nearly no-op backup and restart process that you still manage as you need.


Proxmox + Proxmox Backup Server + external storage (I use my NAS) means I don't really have to worry about disaster, as such, because every VM is backed up nightly. VMs and the hypervisor can all be pets and I can just restore a backup if something happens.
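
In Proxmox terms that's just a scheduled backup job pointed at the PBS datastore; a one-off equivalent from the CLI looks roughly like this (storage name and VMID are placeholders):

    # snapshot-mode backup of VM 101 to a PBS-backed storage
    vzdump 101 --storage pbs-nas --mode snapshot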


If you're doing something for a hobby, treat it like the special snowflake it is to you. If you're doing something just to get things done, treat it like the utility it is. If you're at home playing around with machines in a homelab, feel free to baby your servers.

As far as disaster is concerned, it's not that difficult to install software that really needs minimal maintenance. But it comes down to what you want out of the software and hardware that you run.


What about Terraform instead?


I have no experience with it, but generally my view is that a home server is NOT a “devops” project; it's more like an iPhone. You want backups, and you want whatever is running to restart if you lose power (whether that's a new toaster tripping a breaker or the weather killing power, it happens), but you don't need “infra as code” and all sorts of automation. Just update as you go, and move on. Docker et al. have enough tooling that you can run everything as its own container (basically a phone app) and you're done.
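
The "restarts after a power cut" part is one flag away in plain Docker (container name, image and ports here are just an example):

    # comes back automatically after a reboot unless you stopped it yourself
    docker run -d --name jellyfin --restart unless-stopped \
      -v /srv/media:/media -p 8096:8096 jellyfin/jellyfin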

If you want to try out <insert tech here> to learn something, then just learn it; don't try to force it into your normal life and let it eat at your existing stuff. Don't replace your Mac with a Chromebook just because you're learning webdev, and don't replace your home server with Terraform just because you're learning it. What if you learn it but stop needing it or never use it professionally? You'll now need to maintain that skill to maintain something at home.

If you want something more than a blank Linux box for your home server, check out HASS.io, Synology, QNAP, TrueNAS, or one of the many "hold your hand" distros/tools designed to make it less work. Even Portainer/Proxmox will give you a bit of a GUI without being too opinionated. I use a blank Linux box primarily, but only because I live with other SWEs who all want to mess with the shared server, and everyone wants their own thing and we couldn't agree on anything else. We plan to switch to TrueNAS and give everyone a VM but haven't coordinated the switch yet…


docker swarm is dead


Kubernetes is 1000% overkill for a home server, but Hashicorp Nomad is very manageable. It runs all my Docker containers at home.


Kubernetes alone recommends at least 1 GB of RAM just for itself IIRC, so that may push it out of some home servers such as RPis or smaller NUCs, depending on the actual service load.


Not with k3s.

But still, for a single machine, or fewer than three machines in a single location, I don't see the point.


K3s is half that. Still quite a lot, but not as much!


I've dabbled, but really Docker is way easier to use than k8s until you start moving into multi-server workloads.


it isn't


Proxmox running containers wherever possible - which is nearly everywhere, except when you need to run different OSes (Windows, Android, etc.). Even the router runs in a container, with all the other containers connecting to it through bridges. These bridges are assigned VLANs, which are brought out tagged on one of the Ethernet ports; that port connects to a managed switch, which takes care of untagging to specific ports and/or trunking VLANs to the different buildings on the farm.
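
For anyone replicating the bridge/VLAN part outside the Proxmox GUI, the plain iproute2 equivalent looks roughly like this (interface names and VLAN IDs are placeholders; Proxmox itself would normally do it via a VLAN-aware vmbr in /etc/network/interfaces):

    # VLAN-aware bridge with a tagged uplink to the managed switch
    ip link add name vmbr0 type bridge vlan_filtering 1
    ip link set eno1 master vmbr0
    ip link set vmbr0 up && ip link set eno1 up
    # VLANs 10 and 20 leave the uplink tagged
    bridge vlan add dev eno1 vid 10
    bridge vlan add dev eno1 vid 20
    # a container/VM port gets its VLAN untagged (pvid)
    bridge vlan add dev veth101i0 vid 10 pvid untagged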


You're looking for Sandstorm containers. They are much more hardened and purpose-built for self-hosting. To my knowledge, nobody's ever reported a container escape that affects Sandstorm.


I create a separate user for each app, and use the systemd exec configuration [1] for sandboxing [2]. Some apps only get read-only access to their own files, and no Internet access, for example (along with many other restrictions). I have some systemd drop-in units that I frequently reuse.

For standard services, I use AppArmor with the default `apparmor-profiles`, as well as fail2ban with some additional firewall rules.

[1]: https://man.archlinux.org/man/systemd.exec.5

[2]: https://wiki.archlinux.org/title/User:NetSysFire/systemd_san...
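
A trimmed-down example of such a drop-in (service name, user and paths are placeholders; all directives are from systemd.exec as in [1]):

    # /etc/systemd/system/someapp.service.d/sandbox.conf
    [Service]
    User=someapp
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    PrivateDevices=yes
    ReadOnlyPaths=/var/lib/someapp
    RestrictAddressFamilies=AF_UNIX
    IPAddressDeny=any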


I use one VM per component. The overhead is pretty minimal, and VMs I think are still more secure than containers. Maybe I am just a tech dinosaur though. I run my VMs on OpenStack for the networking flexibility, and use Ceph for block and file storage.


VMs do not actually add that much overhead (depending on the workload - GPUs are notoriously hard to share). And what most people do not realize is that something like VMware or another hypervisor is also very good at managing things like RAM across many VMs. In many cases you can overprovision (meaning you can have VMs that are technically "assigned" a total amount of RAM or CPU that is more than you even have physically) and still have great performance. The key is always to install the hypervisor on bare metal (don't run VMware on top of Windows or try to host a server where the "base" OS is macOS or something).


Containers are fine for this unless you reach the popularity where you are attracting dedicated attackers.

Use userns-remap. Run the docker daemon rootless if you want, but don't stress about it. Set up auth to the docker socket. Don't bother running the processes in the container as a non-root UID; with remap it's effort for little gain.

Now breaking containment means having a local privesc on your Linux distro or breaking the auth on the docker socket. Like, that's plenty for drive-by attackers.
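
For reference, userns-remap is a one-line daemon setting plus a restart; "default" makes dockerd create and use the dockremap user's subordinate ID ranges:

    # /etc/docker/daemon.json
    {
      "userns-remap": "default"
    }
    # after restarting the daemon, uid 0 inside a container maps to an
    # unprivileged range from /etc/subuid rather than host root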


SmartOS with zones. Mostly native, but some LX thrown in for software that's too Linux-specific.


I use Unraid, which manages storage for you and lets you run Docker containers for apps.


> Docker/containers used to not be hardened enough. Are they now?

I don’t think they ever will be. At least once a year there is a kernel bug where root in a non-root container/namespace can be elevated to root on the host


I miss sandstorm.


It's still here and we're still working on it! Its 300th release just rolled out. I'm personally working on packaging three different apps right now.


I still use Sandstorm! Some of the apps are a bit outdated but the security model means that mostly doesn’t matter.

The WordPress Sandstorm app is slow enough at rebuilding the static side of our large site that I’ve been meaning to try forking it or building my own though. But Sandstorm itself has been great.


I use docker containers with separate dedicated users with just enough permissions for their purpose. For example, my media server user can't touch anything other than the media files and isn't part of sudo.
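
Roughly what that looks like per container (user, paths and image are placeholders):

    # run the media server as its own uid/gid, library mounted read-only
    docker run -d --name mediasrv \
      --user "$(id -u media):$(id -g media)" \
      -v /srv/media:/media:ro \
      -v /home/media/config:/config \
      --cap-drop ALL \
      some/media-server:latest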


VMs and LXC and occasionally some Docker. Modern hardware is powerful enough that overhead is a non-issue for most of these apps.

Generally you're just serving a single user - you - so even potato-grade gear is fine.



