Another cargo cult post about self hosting that makes it about 1000 times more complex than it needs to be via containerization. I get that it's what they're used to because of work but this isn't work and has entirely different requirements.
There are no "production deployments" at home. You just do what you want. If it's down for a day that's fine. Putting something in a transient docker container and then bending over backwards to enable file persistence is just... wrong. And it'll lead to lots of extra work for no reason.
> Putting something in a transient docker container and then bending over backwards to enable file persistence is just... wrong.
Do you mean declaring the volumes? What's so hard about that? There's a massive benefit when everything is homogeneous. I can look at a docker-compose.yml file and instantly know where all the important data is. I can also copy volumes to a new host with a one-liner:
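Not the commenter's exact one-liner, but a tar-over-ssh pipe in this spirit does the job (the volume name `appdata` and the host `newhost` are placeholders):

```sh
# Stream a named volume's contents into the same-named volume on another host.
docker run --rm -v appdata:/from alpine tar -C /from -cf - . \
  | ssh newhost "docker run --rm -i -v appdata:/to alpine tar -C /to -xf -"
```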
For development, I self-host Portainer, Caddy, Authelia, Cloudflare Tunnel, Gitea, Drone, MinIO, Nexus, Bookstack, Nextcloud, and ArchiveBox. I barely think about any of them. I pin to a major version and schedule nightly updates using systemd. Everything is templated, so after I set up a new container it's as simple as:
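A sketch of what that per-service step could look like; the `compose@` template unit, the paths, and the service name are assumptions, not the commenter's actual setup:

```sh
# New service: copy the compose template, pin the image to a major version
# tag, and enable the (hypothetical) templated systemd unit for it.
sudo cp -r /opt/compose/_template /opt/compose/bookstack
sudoedit /opt/compose/bookstack/docker-compose.yml
sudo systemctl enable --now compose@bookstack.service

# The nightly systemd timer then only needs to run something like:
docker compose -f /opt/compose/bookstack/docker-compose.yml pull
docker compose -f /opt/compose/bookstack/docker-compose.yml up -d
```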
It's a very pragmatic solution if your needs are satisfied with vertical scaling, because everything can go into a single VM that's plugged into a homogeneous VM backup system. The only thing that's a little bit complex about mine is that I run MinIO in a separate VM that doesn't get backed up, because I use it as cache storage for APT, Docker Hub, NPM, etc. proxies in Nexus.
Docker can be frustrating while you're learning it, but everything's like that and once you get proficient with it you'll never want to go back to the old way of doing things. Docker is the packaging format that won (for servers) IMO.
I actually waited far too long to learn Docker. Turns out that in general it's been far easier to work with than the "traditional" self-hosting methods that I used to go with.
It also forces you to keep a clean system; app logs are easy to find and consume nearly any way you want; and if a container from one developer isn't fitting your needs, it's usually super easy to drop in someone else's container implementation of the same app with one config change (the repo name).
I used to operate like this as well, but I bit the bullet and moved my apps to containers when I upgraded my server hardware. I use Portainer and map my container data to volumes on the file system. It was all incredibly easy to do, and it gives me a web interface so I can manage my lab from my iPad. I rebuild my containers periodically through the GUI by clicking the rebuild button and telling it to pull latest. At this point it is so much easier than installing a ton of packages that may or may not conflict into the operating system. It's all cleanly separated into isolated packages, and management of versions and dependencies is much nicer as well.
My setup is Portainer for container management on top of docker-ce, running on Debian.
If you know how Docker works, it’s much easier than messing with all sorts of different services with different requirements. It’s just “add the thing to your docker-compose” and you don’t need to know how to set up PHP on a server or what a virtualenv is.
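For a concrete (made-up) example of how small "add the thing" usually is: one more app is a handful of YAML and no host-level PHP or virtualenv setup. The image, port, and data path below are illustrative and should be checked against the app's own docs:

```sh
mkdir -p ~/apps/freshrss && cd ~/apps/freshrss
cat > docker-compose.yml <<'EOF'
services:
  freshrss:
    image: freshrss/freshrss:latest
    ports:
      - "8081:80"                        # host port is arbitrary
    volumes:
      - ./data:/var/www/FreshRSS/data    # data dir per the image's docs
EOF
docker compose up -d
```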
I mean, yes and no, right? We've all seen "over engineered" setups that really function to give someone a safe playground to try out those containerization technologies they may want for more sensible use elsewhere. However, it's also the case that once you get into the double digits of things you're running, walling off your versions of things from each other is valuable, and prespecified configuration is nice.
Speak for yourself, some of us use our homelabs to serve ourselves as customers because we refuse to pay for nickel and dime services that exist for the sake of convenience.
I realize that homelab goes a bit beyond localhost, but we all start somewhere.
And that's fine. But it is not at all required for self-hosting things. Maybe you do that to improve your skills for making money. Maybe you do it because you just enjoy all the complexity and learning the systems. All legit, but completely unneeded obstacles for people who just want to learn how to self-host, or simply get it done.
It's like writing a page on how to make bread at home but then spending most of the words on how to build a transient extra oven outside with special parts, instead of just using your oven and talking about the actual recipe steps.
> And that's fine. But it is not at all required for self-hosting things.
It's not required, but, once you have a handle on it, you get to save a lot of time and hassle. For example, I just checked my old docs for installing and updating Nextcloud. It's 17 printed pages. My new docs are:
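Presumably something on the order of the following; this is a sketch, not the commenter's literal docs, and the path and service name are assumptions:

```sh
cd /opt/nextcloud            # wherever the compose file lives
docker compose pull
docker compose up -d
# only when a major-version bump asks for it:
docker compose exec -u www-data app php occ upgrade
```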
Personally, I run that nightly because, like someone said in another comment, "so what if your homelab is down for a day." For me, having a day of downtime wouldn't be the end of the world, so everything's homogeneous and auto-updated. I only touch it when I need to update to new major versions.
In my experience I spend less time fixing issues related to containers than I used to maintaining everything by hand.
I'll use an OS package over Docker when one exists (with systemd you can get pretty much the same level of isolation anyway), but Docker has its advantages for software that is too picky about its environment, even at home. Or whose install steps begin with "install these twenty packages, modify these 5 config files, then compile this, then copy it to these paths".
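For what it's worth, the systemd isolation mentioned above usually amounts to a small drop-in of sandboxing directives; `myapp` is a placeholder service name, and options like DynamicUser won't suit every package:

```sh
# Approximate container-style isolation for an OS-packaged service.
sudo mkdir -p /etc/systemd/system/myapp.service.d
sudo tee /etc/systemd/system/myapp.service.d/harden.conf <<'EOF'
[Service]
DynamicUser=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
StateDirectory=myapp
EOF
sudo systemctl daemon-reload && sudo systemctl restart myapp
```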
This is why I really like NixOS for home deployments. I already use it to configure my laptops and desktops. Works great for configuring a little server to run services & timers too.
Updating and deploying are simple too. I can `git push` some changes from any of my computers, then ssh (which is trivial to set up with Avahi on NixOS) into the server and `git pull && nixos-rebuild switch`.
Something nice about self-hosting on localhost with a modern Linux machine (using systemd and systemd-resolved for DNS) is that by default *.localhost DNS queries are routed to localhost. So you can set up your local apps with domain names like "photos.localhost", "code.localhost", etc. and not have to muck with hostname entries and such.
Set up Caddy as a reverse proxy and it will automatically inject a local CA cert into your browsers and auto-provision local SSL certs too, so you can access local services by name instead of by port, and everything happily works with SSL automagically.
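A minimal sketch of that combination; the hostnames and upstream ports are made up:

```sh
# With systemd-resolved, *.localhost already resolves to 127.0.0.1:
resolvectl query photos.localhost

# Caddy signs *.localhost certs with its own local CA automatically.
cat > Caddyfile <<'EOF'
photos.localhost {
    reverse_proxy localhost:2342
}
code.localhost {
    reverse_proxy localhost:3000
}
EOF
caddy run --config Caddyfile
```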
I really, really wish Windows and macOS had similar behavior and just routed all *.localhost queries to localhost.
This isn't about systemd-resolved; it's just mDNS/Bonjour.
I think .local and .localhost domains have been working on my home network for more than five years, even with systems that do not run systemd-resolved to this day.
> The hostnames "localhost" and "localhost.localdomain" as well as any hostname ending in ".localhost" or ".localhost.localdomain" are resolved to the IP addresses 127.0.0.1 and ::1.
mDNS only uses the .local suffix, and AFAIK it will never resolve anything on that domain to localhost. I'm pretty sure an mDNS service that advertises a .local domain as 127.0.0.1 would really break things, too, if it served that address up to other machines on your network.
Will the local CA certs generate security errors in modern browsers, since they're being issued for a domain (e.g. photos.localhost) that obviously can't be validated by a CA like Let's Encrypt?
Caddy uses a library from Smallstep (step certificates) that injects a self-signed CA cert into your browser and OS trust stores automatically, so your browser will actually be perfectly happy with the SSL certs. The first time you run Caddy it will prompt to elevate privileges so it can install the cert, and it should be good to go after that point.
I'm a big fan of this approach and do something similar! If it's something I only need when I'm at my computer, and where I'm fine if it falls over, then I'll host it on localhost. I then have a small Lenovo M700 Tiny which I use to run things I may want on multiple devices (using Tailscale so I can access it remotely) but where I don't care too much if it falls over. Anything that I want to be really stable I run on my Hetzner dedicated server.
It's great fun and I enjoy having control over the few things I run. It's also super convenient when it's running locally, because I don't really have to worry much about securing the service behind reverse proxies and all that. I can just hit it however I like with any ol' script.
Same here! I read the headline and thought "isn't that what Docker is used for?" As soon as I opened the article I saw the author and I had the same idea, even down to using ufw-docker. I have individual ports scripted to open with ufw-docker after docker-compose is run.
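If I remember the ufw-docker CLI right, the scripted part boils down to lines like these after the one-time `ufw-docker install`; the container name and port are examples:

```sh
docker compose up -d
sudo ufw-docker allow photoprism 2342/tcp   # open only this container's port
```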
I know docker isn't the best tool for every task, but I very much enjoy how it allows me to treat my laptop and my cloud servers as almost identical machines and compartmentalize my services.
Yep! Containers are really great for this kind of thing; it's trivial to run basically anything as long as the configuration docs are reasonable. I use a single-node k3s cluster on my Lenovo box (for learning) and just straight-up docker-compose on my localhost and dedicated server, because it's just so easy.
If you are interested in self-hosting your personal data, take a look at the awesome-selfhosted list (<https://github.com/awesome-selfhosted/awesome-selfhosted>). It contains a lot of information and tools.
Allow me to say: sorry guys, you've still lost the way.
The fact that modern apps are web apps is living proof that the old model of document-based UIs was the right way, compared to the widget/form model. However, the old classic document UIs were interactive, in the sense that the user could create and modify them from within; web apps might allow "a bit of customization", but they are still FAR from being changeable at runtime by their users.
Perhaps in another 30+ years we will finally rediscover the old model, sold as a very new high-tech thing, since nobody will remember its origin...
So far, Emacs/org-mode/EXWM/org-roam can be used (I use them) as a document-based UI for almost anything, still integrated (thanks to EXWM) with modern/classic/raw-and-archaic GUIs like Firefox or GIMP. They are limited in GUI terms, but at least they can have embedded images and live elements (org-mode headings, a search-and-narrow UI to access them, attachments, links to anything), and they can also just run live embedded code, etc.
This makes perfect sense, especially on a minimal Arch install with a tiling window manager. This way you can delegate most of the GUI stuff to the browser and have less stuff installed locally. Nextcloud is also something that can be hosted locally; it can help with calendars, image galleries, contact management, e-mail, etc.
Disagree. He doesn't explain how he backs up his stuff, but I guess it's on an external server (otherwise he will lose everything if the laptop is lost). If you are using an external server, you might as well set up everything there (ideally a Raspberry Pi or mini PC). I have a lot of devices (tablet, mobile, laptop, desktop, and home media) and being able to use all services from all devices is pretty convenient.
Using a service like rsync.net (happy customer here!) gives secure and safe backups but without the ability to run services. Which also means pretty much no matter how much I mess up, I can't expose my data to the Internet at large by mistake.
I will (and do) run Internet-facing services, but I can definitely empathise with the desire not to. And I'll only do it myself if the benefit over keeping it private outweighs the risk.
(author) I back up my stuff to three different USB hard drives in three distinct locations, plus rsync.net. All via borg. This works well, is cheap, and a lot would have to happen for me to lose all the data.
Of course this self-hosting approach does not work if you have many machines, but I have just my laptop plus my phone.
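Roughly the shape such a borg routine takes; the repository paths, the rsync.net host, and the source directories below are placeholders, not the author's actual configuration:

```sh
# One of the USB repos:
export BORG_REPO=/mnt/usb-backup/laptop-borg
borg create --stats --compression zstd ::'{hostname}-{now}' ~/Documents ~/Photos
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12

# Same idea against the offsite repo:
borg create --stats ssh://user@host.rsync.net/./laptop-borg::'{hostname}-{now}' ~/Documents ~/Photos
```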
Today, a few hours ago, I was thinking: "My Nextcloud on an SSD on a Raspberry Pi is super slow to render thumbnails for photos. I've tried once to configure which thumbnails must be pre-rendered, but I failed and... isn't there a simple web photo album that I can hook my Nextcloud photos into?"
Yesterday I was disappointed that it doesn't show ANY thumbnails for my videos taken on Android.
And this post mentions PhotoPrism [1]! Feature page looks nice - exactly what I need for my mess of unorganized photos.
Have to try it out. Hope I don't find myself disappointed.
If you don't mind closed-source software, I've found PhotoStructure to be a little nicer; it's the closest thing to Google Photos/Apple Photos that's still self-hostable: https://photostructure.com/
Over the weekend I’d been querying the local LUG for explanations of process scheduling with little success, then like magic “A journey into the Linux scheduler” drifted down my HN feed. It’s a wondrous effect.
It's not the case. Firewalling works the same in v6 as it does in v4, and most people will have a firewall on their router that prevents inbound connections from the Internet by default.
If you don't share services with others and your machine is usually on, then it's not much of an issue where you run them. But if you do share these services, it need not be all that expensive to host them; a Raspberry Pi can handle quite a lot, especially with an add-on drive, and you could back up to an online drive service.
I do this all the time. I have permanent SSH tunnels from my home pc to my VPS and a reverse proxy on my VPS, so I have super fast access to my home apps at home and tolerable access to the same apps when I am outside.
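One way to build such a persistent tunnel (the commenter doesn't say which tool they use; autossh, the VPS hostname, and the ports here are assumptions):

```sh
# Forward VPS-local port 8443 back to a service on the home machine;
# the VPS-side reverse proxy then points at 127.0.0.1:8443.
autossh -M 0 -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -R 127.0.0.1:8443:localhost:443 user@my-vps.example.com
```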
The limitation mentioned about not being able to access localhost outside of your home Wi-Fi is easily solved with dynamic DNS. I use DuckDNS to be able to access "localhost" when I'm away.
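The DuckDNS side is a single HTTP call run periodically from the home machine; the domain and token below are placeholders:

```sh
# Update the DuckDNS record with the current public IP (empty ip= means auto-detect).
curl -s "https://www.duckdns.org/update?domains=myhome&token=YOUR-TOKEN&ip="
```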