To those who are pointing out that compose isn't meant for production, keep in mind that the product they're selling appears to be designed for the small-time family self-hoster [0]. They're not targeting production web apps in a corporate setting; they seem to really be targeting people who wish they could self-host something on their own network but don't have the technical background to use the docker compose files that most self-hostable apps provide as the 'easy' option.
I'm quite skeptical that adding a layer of abstraction and switching to TOML instead of YAML will suddenly enable those scared away by compose to start self-hosting, but kubernetes and docker swarm were never in the cards.
Yeah, we're very early building this, the blog post is just a way for me to organize my thoughts and start fights online. It's, uh, embarrassingly useful to yell semi-coherent thoughts into the void and have experts yell back with a decade or more of experience and information about tools I haven't heard of.
> I'm quite skeptical that adding a layer of abstraction and switching to TOML instead of YAML will suddenly enable those scared away by compose to start self-hosting, but kubernetes and docker swarm were never in the cards.
Yes, this is an excellent point. I did not articulate it well anywhere, but the goal is for users to have something more like Sandstorm, with a UI to install things. The TOML is for application developers, not end users. It'll either go in a separate database or, ideally, in the source code of the applications to be installed similar to a Dockerfile. I haven't started yet, but eventually we need to work with application developers to support things they want and to make it easier to treat Tealok as the "easy option" rather than docker compose.
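For illustration only, here's a hypothetical sketch of what such a developer-facing manifest might look like, shipped in an application's repo the way a Dockerfile is. Every field name here is invented; nothing below is an actual Tealok format:

```toml
# Hypothetical app manifest, shipped alongside the source like a Dockerfile.
# All field names are invented for illustration.
name = "exampleapp"
image = "example/exampleapp:1.2.3"

[ports]
http = 8080

[volumes]
# Where the app keeps persistent state, so the platform knows what to back up.
data = "/var/lib/exampleapp"

[env]
TZ = "UTC"
```

The idea would be that an installer UI reads this and handles ports, volumes, and backups itself, so the end user never touches it.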
Oh, that makes way more sense! Yeah, that actually sounds like it could work well if you can get buy-in from application devs.
The trickiest thing doing it this late in the game is going to be that docker compose has truly become the standard at this point. I self-host a ton of different apps and I almost never have to write my own docker compose file because there's always one provided for me. At this point even if your file format is objectively better for the purpose, it's going to be hard to overcome the inertia.
Yeah, I agree, we're going to need a really compelling use case not just for the end users who run the application, but for the application developers as well. Nobody wants to maintain 3+ extra deployment files for the various also-rans competing with docker-compose.
What do you use to manage all those compose files? Do you have off-site backups? I'm constantly reading and re-writing docker-compose and bash scripting everything to fit in with the rest of my infrastructure, so it'd be good to hear about someone with a better way.
I have a single GitHub repo that contains all the compose files for my main server. Each application gets a folder with the compose file and any version-controllable configuration (which gets bound to volumes in the docker containers).
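As a sketch, a per-app folder in that kind of repo might contain a compose file along these lines (image name, ports, and paths are illustrative, not the actual setup):

```yaml
# infra-repo/exampleapp/docker-compose.yml (illustrative)
services:
  exampleapp:
    image: example/exampleapp:1.2.3
    ports:
      - "127.0.0.1:8080:8080"   # firewalled; only the reverse proxy reaches it
    volumes:
      - ./config:/etc/exampleapp       # version-controlled config, lives in the repo
      - ~/data/exampleapp:/var/lib/exampleapp  # runtime data, backed up separately
```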
I periodically run Renovate [0], which submits PRs against the infrastructure repo on my local Forgejo to update all my applications. I have a script in the repo which pulls the git changes onto the server and pulls and restarts the updated apps.
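A minimal sketch of what the "pull and restart only what changed" script might look like, assuming each app lives in its own top-level folder of the repo (this is a guess at the shape, not the actual script; app names are illustrative):

```shell
# changed_apps: given changed file paths (one per line) from git,
# print the unique top-level app directories that need a restart.
changed_apps() {
  sed -n 's|^\([^/]*\)/.*|\1|p' | sort -u
}

# Hypothetical update flow (commented out so it can be read safely):
# git pull
# git diff --name-only 'HEAD@{1}' HEAD | changed_apps | while read -r app; do
#   (cd "$app" && docker compose pull && docker compose up -d)
# done
```

Keying the restarts off `git diff` means untouched apps keep running through an update.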
Data is all stored in volumes that are mapped to subfolders in a ~/data directory. Each application has a Borgmatic [1] config that tells Borgmatic which folder to back up for that app and tells it to stop the compose services before backup and resume them afterwards. They all go to the same BorgBase repository, but I give each app its own config (with its own retention/consistency prefix) because I don't want to have network-wide downtime during backups.
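A per-app borgmatic config along those lines might look roughly like the following (repository URL, paths, and app name are placeholders, and exact option names vary between borgmatic versions):

```yaml
# Sketch of a per-app borgmatic config (flattened 1.8+ style).
source_directories:
    - /home/user/data/exampleapp

repositories:
    - path: ssh://xyz123@xyz123.repo.borgbase.com/./repo
      label: borgbase

# Per-app archive prefix so each config keeps its own retention scope.
archive_name_format: 'exampleapp-{now}'

# Stop only this app during its backup; everything else keeps running.
before_backup:
    - docker compose --project-directory /home/user/infra/exampleapp stop
after_backup:
    - docker compose --project-directory /home/user/infra/exampleapp start
```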
At the moment I run the backup command by hand, with BorgBase configured to email me if I forget to do it for a week. Eventually that will be a cron job, but for now it takes less time to just do it myself, and I don't change my data often enough for a week of lost work to hurt much.
All the applications bind to ports which are firewalled, with Caddy and Pihole being the only applications that run on exposed ports (53, 80, 443). Caddy has a wildcard DNS cert from LetsEncrypt for HTTPS and directs traffic from a bunch of local domain names to the correct applications. I just use Pihole to define my local DNS names (custom.list, which is where Pihole keeps the local DNS definitions, is a volume that's committed to the repo).
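The Caddy side of that setup might look roughly like this Caddyfile fragment (domain, ports, and DNS provider are placeholders; the wildcard cert requires a DNS-challenge plugin for your provider):

```
*.home.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@jellyfin host jellyfin.home.example.com
	handle @jellyfin {
		reverse_proxy localhost:8096
	}

	@nextcloud host nextcloud.home.example.com
	handle @nextcloud {
		reverse_proxy localhost:8080
	}

	# Drop anything that doesn't match a known subdomain.
	handle {
		abort
	}
}
```

With a single wildcard cert, local hostnames never appear in certificate transparency logs, which is one common reason self-hosters pick this pattern.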
[0] https://tealok.tech/