I also put everything in docker containers, and docker-compose for each thing is a must. It usually pains me to see when popular projects have no decent docker-compose files available and I have to make them.
For backups restic is a blessing. And so is Syncthing for getting things to the backup machine quickly (from mobile and non-unix machines).
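For what it's worth, the restic flow is only a couple of commands; a minimal sketch, assuming an SFTP-reachable backup box and made-up paths:

```sh
# One-time: initialise a repository on the backup machine (host and path are placeholders)
restic -r sftp:backup-box:/srv/restic-repo init

# From cron or a systemd timer: back up the data directory
restic -r sftp:backup-box:/srv/restic-repo backup /srv/appdata

# Thin out old snapshots now and then
restic -r sftp:backup-box:/srv/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune
```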
> I also put everything in docker containers, and docker-compose for each thing is a must. It usually pains me to see when popular projects have no decent docker-compose files available and I have to make them.
On the bright side, it's a one-time cost and generally pretty simple, especially if they provide a sample `docker run` command. (Although yes, it would of course be better yet if projects would publish one so that it was easier)
Portainer does make certain tasks much easier. I use Portainer to clean up any Docker volumes, containers, and images I no longer need. Doing that for a lot of Docker resources using the command line gets old really fast.
I say this as someone who uses docker and docker-compose extensively and I'm very comfortable with a CLI.
How do you deal with backing up the data/“disaster” recovery? (I put “disaster” in quotes because we’re talking about home servers, so it's not mission-critical data, but still pretty annoying to lose.)
If you have OP's level of control over the hardware (i.e. you have a dedicated server on which you can access the hypervisor), then taking incremental backups of the entire VM is the best way to ensure you can hit the big undo button if anything goes wrong.
The most important, and most neglected, part of backup & restore is the restore. If resources permit, this is where I like to use a battle-tested solution instead of rolling my own with scripts. For my self-hosted servers I use Veeam, but there are many good alternatives.
It's nice having the option to restore the entire VM to a specific state, and to also be able to mount backups and pull files if only a few need to be restored. It's also handy to be able to spin up a backup into its own virtual machine.
Potentially, depends when it happened, which point you restore to, and if the vulnerability that got them in in the first place is still present. If one of my servers got hacked I personally wouldn't risk it, and would just nuke it and rebuild. If everything's in Docker containers then the services can be torn down and spun up easily, and databases can be exported from a backup VM and imported into the new server.
Not OP, but I've been tinkering with the idea of having a Raspberry Pi with an external HDD stashed at a friend's or parents' place and doing something like an rsync over WireGuard for the super important stuff.
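If it helps, the moving part is tiny once the tunnel exists; a rough sketch, with the peer address and paths as placeholders:

```sh
# Assumes a WireGuard tunnel is already up and the Pi answers on 10.0.0.2
rsync -az --delete /srv/important/ pi@10.0.0.2:/mnt/backup/important/
```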
I'm not doing it remotely, but have an Odroid HC2 set up as a sort of single-drive personal NAS on my home network. Restic backing up to there is absolutely stellar, including being able to mount a snapshot for selective recovery if needed.
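The selective-recovery part is a single command; a sketch with a placeholder repo path:

```sh
# Browse snapshots as a FUSE filesystem and copy out individual files
restic -r sftp:hc2:/srv/restic-repo mount /mnt/restic
# Files then appear under /mnt/restic/snapshots/<timestamp>/...
```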
I think it's just the typical evaluation of someone with expertise in something: it's easy if you're a real devops kind of person, you just put together a bunch of things, write some config files and a Makefile or two, and take two or three hours to do something that the rest of us would take a week to do. In the same way, I might set up a service to scrape XML files and fill up my ElasticSearch instance, take a couple of hours to get a working service that I can keep expanding, and other people might be like: easy, is that a joke?
"Easy" is probably because it boils down to, install X, Y, Z, edit config for X, Y, Z, start daemons X, Y, Z. There's no complex math or thinking involved, just plugging stuff in.
It takes practice. I just started making smoothies in a blender, and there are a bunch of little things to know to make it a little easier on yourself. It’s not just “throw all your shit in a blender and push button”, at least not with the cheap blender I have access to.
Which is exactly my point. He calls it easy because he can do that in a couple of hours, and if problems arise later he can always fix them in a quick 10-20 minutes. It's easy.
On a side note, this is actually a problem. In my experience, if I find something easy I might not cover every edge case (I can cover those if and when they come up), but given enough potential inputs and users the edge cases always come up. And then what happens if I am unavailable at that moment? The edge cases need to be handled by people who will find the "easy" solution I built incredibly hard and daunting.
As someone running something similar, I thought it was quite easy when I first set it up: I used a friend's similar setup as a baseline to get through the configuration. It took about 1 hour to set up the basics and have the infrastructure running.
I have been running my own personal servers in a similar setup for the last 10 years.
I have turned on automatic updates, including automatic reboots, and everything runs in Docker (using docker-compose).
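In case it's useful, on Debian/Ubuntu that boils down to something like the following; the drop-in file name is my own choice, so check your distro's defaults:

```sh
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # turns on the periodic apt runs

# Let it reboot automatically when an update requires it
echo 'Unattended-Upgrade::Automatic-Reboot "true";' \
  | sudo tee /etc/apt/apt.conf.d/52local-unattended-upgrades
```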
I cannot remember a single time something bad or unexpected happened. Only the planned things: upgrading the distro every couple of years, and updating major versions of the things running in containers maybe once every year or two. And occasionally some unplanned updates if a particularly bad vulnerability gets disclosed in popular software or a library. I am pretty sure I don't spend more than a few days per year managing it.
If I had opted for a cloud vendor managed alternative, it would have been so much more expensive. I have definitely saved thousands or tens of thousands over the last 10 years.
But then again, I know how to manage it and I planned it out so it would not cause too much trouble for me. Prior to this setup I endured many painful moments, and that "wasted time" allowed me to think of a better way to manage it and avoid certain problems along the way. Also, the available tooling has improved a lot.
Then again - this is for my personal projects and I would do it somewhat differently for large projects.
> I always hear about the easy setups, but never about total (man-hours included) cost of ownership through a couple release cycles on each component.
I run about half a dozen web apps on a single node on Hetzner with Docker swarm mode + traefik ingress + whatever the web apps need.
Any app I have is deployed in seconds as a docker stack. I treat my Docker swarm node as cattle, and I have an Ansible script to be used in case of emergencies that deploys everything from scratch. The Ansible script takes, from start to finish, only a couple of minutes to get everything up and running. I can do this with zero downtime as I have an elastic IP I can point at any node at will.
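For anyone curious, the swarm side of that is genuinely short; a sketch with placeholder stack and file names:

```sh
docker swarm init                                  # once, on the single node
docker stack deploy -c docker-compose.yml myapp    # deploy or update a stack
docker stack services myapp                        # confirm the services converged
```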
If I wanted, I could optimize everything even further, but it's already quite fast. In fact, I can get a new deployment on my Hetzner setup up and running faster than I can get an EC2 instance available in AWS.
Proponents of big cloud providers as the only viable option typically have absolutely no idea what they are talking about regarding availability, redundancy, and disaster recovery. It's mostly resume-driven development seasoned with a dash of "you don't get fired for picking IBM".
Easy in comparison to building everything yourself and configuring every little service. That requires understanding each service and how it works, and it takes a lot of time.
After 20 years of doing this, my posted example is stupid simple, and it works for 80% of all requirements.
For the remaining 20% you need a good admin, or to book a service from a good admin.
It’s super easy. Like, literally, it would take someone who’s worked in infra an evening to set this all up, and then a Sunday morning to have it automated in Ansible.
It’s a single server running a few containers with config files. The complexity comes when you outgrow a single machine or need stronger availability guarantees, but none of that matters for a single-ish-user setup.
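As a rough idea of what that Sunday-morning Ansible run looks like (module names are real; hosts, paths, and package names are placeholders assuming stock Debian/Ubuntu repos):

```yaml
- hosts: homeserver
  become: true
  tasks:
    - name: Install Docker and compose
      ansible.builtin.apt:
        name: [docker.io, docker-compose]
        state: present
        update_cache: true

    - name: Copy the compose files for all services
      ansible.builtin.copy:
        src: stacks/
        dest: /opt/stacks/

    - name: Bring a stack up
      ansible.builtin.command: docker-compose up -d
      args:
        chdir: /opt/stacks/myapp
```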
Wow so easy, only 9 different services. Then there's the underlying OS, managing hardware & network setup. Also need to make sure your network provider actually allows you to host (commercially?) in your own home. And ensuring that you have a static ip address for your server.
> need to make sure your network provider actually allows you to host (commercially?) in your own home
If you're hosting something commercially, you should get a commercial ISP plan. If you can get it at home, why would the provider not allow you to host your services that way?
That said, why would you do that? It would be very hard to scale this operation, so unless you're planning to be a tiny (and likely loss-making) operation forever, get yourself a cheap VPS to start with, then migrate as needed.
This post is about self-hosting services for yourself, and perhaps for a few close relatives and friends. Many of us do that (have a look at r/selfhosted and check out the Self Hosted podcast), and OP's setup is one of the simplest around.
> ensuring that you have a static ip address
There are many ways to access your services without one. A mesh network like Tailscale, ZeroTier, or Nebula is my favourite, but a regular VPN also works, and so does dynamic DNS.
I think I’ve had the same IP address from my cable company for more than a decade now. (Regular old cable modem user with a dynamic, but unchanging, IP address.)
Yeah but once you figure out Traefik, it’s just 3 extra lines in your deployment files for every new service.
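Roughly what those three lines look like on a compose service (the hostname and resolver name are placeholders):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
```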
And I inevitably have to redeploy again, and I hate doing the same boring thing twice, so it’s nice being able to bake complete orchestration into a repo.
(And it’s also nice for being able to try things because your repo has a complete full snapshot of your setup that you can diff.)
The thing I like about nginxproxymanager is that it's easy to add non-Docker hosts. There are some services that I route that aren't in the same Docker cluster as everything else. That requires static file changes for Traefik itself somewhere.
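For reference, the Traefik file-provider fragment for an external host is short too; the names and IP below are placeholders:

```yaml
http:
  routers:
    nas:
      rule: "Host(`nas.example.com`)"
      service: nas
  services:
    nas:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:5000"
```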
A few months ago, I made an offer of $100 on one of the freelancing websites for someone to set up something like your configuration on one of my Digital Ocean instances. I asked for a few more apps to be installed (git, svn, etc). There were no takers :-)
I think a website/service which lets you choose "apps" and spawns a VPS instance would be very useful and profitable (think "Ninite for VPS"). I started to work on this but never had the time to continue. With an Ansible/Chef/Puppet recipe, this should be relatively easy to do.
Probably not; Sandstorm is abandoned, most of the software is broken and unmaintained, the GitLab version is from 2016 and full of vulnerabilities, and the same goes for the WordPress version from 2018.
The project is dead. I think the guy behind it was hired by Cloudflare a few years ago.
I know that there was some work on reviving sandstorm after Kenton Varda joined Cloudflare, see e.g. https://sandstorm.io/news/2020-02-03-reviving-sandstorm, but it is very possible it never got anywhere. Sad but understandable.
A lot of VPS providers have a catalog of apps to install for you and dump you in a web management console. Sometimes it's handy, but usually the security is awful. Back in the day, Webmin was the go-to way to do that and configure your server in general from a web interface.
Although not exactly user-friendly, I created my first proper bash script the other day for setting up a Postfix server on a VPS. You have to create the vanilla VPS first, but then the script does all of the apt install stuff and then uses awk/sed/whatever to modify the config files.
The nice thing is that it is mostly what you would do manually anyway, and the commands are unlikely to change much or often, since they will install the latest versions of Postfix etc. when you run the script.
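The core of such a script is surprisingly small; a sketch of the pattern, using `postconf -e` for the config edits rather than the raw sed/awk mentioned above (the hostname and domain are placeholders):

```sh
#!/usr/bin/env bash
set -euo pipefail

apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y postfix

# Equivalent to hand-editing /etc/postfix/main.cf
postconf -e "myhostname = mail.example.com"
postconf -e "mydestination = example.com, localhost"

systemctl restart postfix
```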
I think this might be more doable since the scripts are easy to create and maintain so perhaps just a site with bash scripts for e.g. "Postfix + Sendmail" or "PHP7 + nginx"
Whether or not you classify it as “easy” isn’t relevant. You hit the nail on the head in the prior sentence: “time” is the issue, and you’re asking someone to trade their time for a pittance.
FWIW I’d charge way more than $100/hr for this work.
I think they mean that if you already have the setup as a provisioning script all you would need to do is to modify it a little, run it and get cash in return.
Would something like CapRover not work for you? Although it's not very up to date, and the one-click collection of apps is a bit outdated too. You'd need to reference your custom Docker images.
Also see Cloudron. Not cheap, but I’ve heard that people are very happy with their service. Basically you self-host their “app as a service” platform, so to speak.
Kind of like a super polished Sandstorm, but with totally different sandboxing technologies (I believe Cloudron uses Docker, though I'm not sure if they still do, and I believe Sandstorm used Kenton Varda’s Cap'n Proto technology, which I'd imagine allowed for even greater sandboxing/protection than Docker).
• Backups don't go to some cloud service I cannot control, but to a small server at home with a NAS attached in the basement.
• Some smaller services during development I simply run from home on a small, old ALIX board based on an AMD Geode. The only problem here: I need to upgrade at some point, because some SSE2 instructions are not supported, which now causes problems with newer packages, including some self-compiled ones.
- traefik (reverse proxy with automatic Let's Encrypt)
- portainer (docker container management)
- fail2ban (basic security)
- logwatch (server / security stats by mail)
- munin (server stats)
- restic (cloud backup)
- unattended-upgrades (auto install security updates)
- apticron (weekly info)
- n8n (automation, e.g. a quick notification via Telegram if something doesn't work)
Then run every app that you want in its own container.
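For a picture of how a couple of those pieces sit next to each other in one compose file (image tags and volume names are just examples):

```yaml
services:
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  portainer:
    image: portainer/portainer-ce
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data:
```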