Harbormaster: Anti-Kubernetes for your personal server (gitlab.com/stavros)
411 points by stavros 3 months ago | 160 comments



Hey everyone! I have a home server that runs some apps, and I've been installing them directly, but they kept breaking on every update. I wanted to Dockerize them, but I needed something that would manage all the containers without me having to ever log into the machine.

This also worked very well for work, where we have some simple services and scripts that run constantly on a micro AWS server. It's made deployments completely automated and works really well, and now people can deploy their own services just by adding a line to a config instead of having to learn a whole complicated system or SSH in and make changes manually.

I thought I'd share this with you, in case it was useful to you too.


> I needed something that would manage all the containers without me having to ever log into the machine.

Not saying this would at all replace Harbormaster, but with DOCKER_HOST or `docker context` one can easily run docker and docker-compose commands without "ever logging in to the machine". Well, it does use SSH under the hood but this here seems more of a UX issue so there you go.

Discovering the DOCKER_HOST env var (changes the daemon socket) has made my usage of docker stuff much more powerful. Think "spawn a container on the machine with bad data" à la Bryan Cantrill at Joyent.
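Concretely (host and user here are placeholders), either of these points the local Docker CLI at a remote daemon over SSH:

```shell
# One-off, via the environment variable:
DOCKER_HOST=ssh://deploy@myserver.example.com docker ps

# Or as a named, persistent context:
docker context create myserver --docker "host=ssh://deploy@myserver.example.com"
docker context use myserver
docker compose up -d    # now runs against the remote daemon
```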


Hmm, doesn't that connect your local Docker client to the remote Docker daemon? My goal isn't "don't SSH to the machine" specifically, but "don't have state on the machine that isn't tracked in a repo somewhere", and this seems like it would fail that requirement.


What do you think isn't getting tracked?

You could put your SSH server configuration in a repo. You could put your SSH authorization key in a repo. You could even put your private key in a repo if you really wanted.


How do you track what's supposed to run and what's not, for example? Or the environment variables, or anything else you can set through the cli.


For me, I don't define any variables via the CLI; I put them all in the docker-compose.yml or an accompanying .env file, so that deploying is a simple `docker-compose up`. Then I can track these files via git, and deploy to remote Docker hosts using docker-machine, which effectively sets the DOCKER_HOST env var.
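As a minimal sketch of that layout (the image name and port are placeholders), everything `docker-compose up` needs lives in two tracked files:

```yaml
# docker-compose.yml
services:
  app:
    image: ghcr.io/example/app:1.4.2
    restart: always
    env_file: .env        # variables live here, not on the CLI
    ports:
      - "8080:8080"

# .env (same directory, one KEY=value per line), e.g.:
# APP_SECRET=change-me
```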

While I haven't used it personally, there is [0] Watchtower which aims to automate updating docker containers.

[0] https://github.com/containrrr/watchtower


Docker Compose is designed for this.


The killer feature of harbormaster is watching the remote repository. Can docker-compose do that? If it can, I should just leverage that feature instead of harbormaster!

The nicety with Harbormaster seems to be that there are ways to use the same code as a template, with specific differences dynamically inserted by Harbormaster. I'm not aware of how you could accomplish this with docker-compose (without Swarm), unless you start doing a lot of bash stuff.

I also appreciate that harbormaster offers opinions on secrets management.


Yep, that's why Harbormaster uses it.


What do you mean?

You run what's supposed to run the same way you would anything else. It's the same for the environment variables.

How would you track what's supposed to run and what's not for Docker? Using the `DOCKER_HOST` environment variable to connect over SSH is the exact same way.


I wouldn't. That's why I wrote Harbormaster, so I can track what's running and what isn't.


> "don't have state on the machine that isn't tracked in a repo somewhere"

https://docs.chef.io/chef_solo/


> chef-solo is a command that executes Chef Infra Client in a way that does not require the Chef Infra Server in order to converge cookbooks.

I have never used Chef. This is babble to me.


Chef is a configuration management system. It lets you define lists of things to do called "cookbooks" (analogous to Ansible "playbooks" etc.).

To "converge" is to run something until it is stable. This terminology, I think, comes from the early configuration management system CFEngine, where you write your configuration in declarative(-ish) "make it so this is true" steps, instead of imperative "perform this change" steps the way that a shell script would. See e.g. https://www.usenix.org/legacy/publications/library/proceedin...

chef-solo is a command that executes Chef's client - the thing that actually makes configuration changes, that is to say, "converges cookbooks" - in a way that does not require a server component. The normal way of deploying Chef is that a server runs things on clients, the machines being configured, but chef-solo is appropriate for the case where there is no such distinction and there's just one machine where you wish to run Chef.
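For the unfamiliar, a sketch of what that looks like in practice. The cookbook name and resources here are made up for illustration; `package`, `service`, and `file` are standard Chef DSL resources:

```ruby
# cookbooks/base/recipes/default.rb - a minimal recipe
package 'nginx'

service 'nginx' do
  action [:enable, :start]
end

file '/etc/motd' do
  content "managed by chef-solo\n"
end
```

Then, on the machine itself, `chef-solo -c solo.rb -o 'recipe[base]'` converges that recipe with no server involved.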


> Chef Infra is a powerful automation platform that transforms infrastructure into code. Whether you’re operating in the cloud, on-premises, or in a hybrid environment, Chef Infra automates how infrastructure is configured, deployed, and managed across your network, no matter its size.

In an imprecise nutshell: You specify what needs to exist on the target system using Chef's DSL and Chef client will converge the state of the target to the desired one.


chef-solo is a command that applies configuration statements on a host without using a separate metadata server.


Have you tried NixOS?


I have, and it's really good, but it needs some investment in creating packages (if they don't exist) and has some annoyances (eg you can't talk to the network to preserve determinism). It felt a bit too heavy-handed for a few processes. We also used to use it at work extensively for all our production but migrated off it after various difficulties (not bugs, just things like having its own language).


You can talk to the network, either through the escape hatch or provided fetch utilities, which tend to require checksums. But you do have to keep the result deterministic.

Agreed on it being a bit too heavy-handed, and the tooling isn't very helpful for dealing with it unless you're neck-deep into the ecosystem already.


What is the "Bryan Cantrill at Joyent" you're referring to?


Not (I think) the exact talk/blog post GP was thinking of - but worth watching IMNHO:

"Debugging Under Fire: Keep your Head when Systems have Lost their Mind • Bryan Cantrill • GOTO 2017" https://youtu.be/30jNsCVLpAE

Ed: oh, here we go I think?

> "Running Aground: Debugging Docker in Production" - Bryan Cantrill, 16 Jan 2018. Talk originally given at DockerCon '15, which (despite being a popular presentation and still broadly current) Docker Inc. has elected to delist.

https://www.youtube.com/watch?v=AdMqCUhvRz8


awesome, thanks!


The technology analogy is Manta, which Bryan covers in at least one if not several popular talks on YouTube, in particular about contanerization.

He has a lot to say about zones and jails and chroot predating docker, and why docker and co. "won" so to speak.


Interesting case. But did you look at other systems before this? I myself use caprover[1] for a small server deployment. 1: https://caprover.com/


I have used Dokku, Kubernetes, a bit of Nomad, some Dokku-alikes, etc, but none of them did things exactly like I wanted (the single configuration file per server was a big requirement, as I want to know exactly what's running on a server).


I use caprover on my DO instance and it works great. Web apps, twitter/reddit bots, even ZNC.


I have been knee-deep in the deployment space for the past 4 years. It is a pretty hard problem to solve to the n-th level. Here's my 2 cents.

Single-machine deployments are generally easy; you can DIY. The complexity arises the moment you have another machine in the setup: scheduling workloads, networking, and setup, to name a few, start becoming complicated.

From my perspective, Kubernetes was designed for multiple teams working on multiple services and jobs, making operations somewhat self-service. So I can understand the anti-Kubernetes sentiment.

There is a gap in the market between VM-oriented simple deployments and Kubernetes-based setups.


IMO the big draw of running K8S on my home server is the unified API. I can take my Helm chart and move it to whatever cloud super easily, and tweak it for scaling in seconds. The solution from the post is yet another config system to learn, which is fine, but it's sort of the antithesis of why I like K8S. I could see it being theoretically useful for someone who will never use K8S (e.g. not a software engineer by trade, so will never work a job that uses K8S), but IMO those people are probably running VMs on their home servers instead; how many non-software-engineers are going to learn and use docker-compose but not K8S?

Anecdotal, but anyone I know running home lab setups that aren’t software guys are doing vSphere or Proxmox or whatever equivalent for their home usecases. But I know a lot of old school sysadmin guys, so YMMV.


I agree with you. It is an antithesis; that is why it is marketed as an anti-Kubernetes toolset.

You cannot avoid learning k8s; you will end up encountering it everywhere, whether you like it or not. It has been the tech buzzword for the past few years, followed by cloud native and devops.

I really think that if you wish to be a great engineer and truly appreciate newer tools, you have to go through the route of setting up a Proxmox cluster, loading images, building those VM templates, etc. Jumping directly to containers and cloud, you kind of skip steps. That is not bad, but you do miss out on a few foundational concepts around networking, operating systems, etc.

The way I would put it: a chef who also farms their own vegetables (setting up your own clusters and deploying your apps) vs. a chef who goes to a high-end wholesaler to buy premium vegetables and does not care how they are grown (developers using Kubernetes, container orchestration, PaaS).


I’ve been working on using k3s for my home cluster for this exact reason. I run it in a vm on top of proxmox, using packer, terraform, and ansible to deploy. My thought process here is that if I ever want to introduce more nodes or switch to a public cloud I could do so somewhat easily (either with a managed k8s offer, or just by migrating my VMs). I’ve also toyed with the idea of running some services on public cloud and some more sensitive services on my own infra.


I have been doing k3s on a Digital Ocean droplet and I would say k3s has really given me an opportunity to learn some k8s basics without truly having to install and stand up every single component of a usable k8s cluster (ingress provider, etc) on my own.

It took a bit to figure out setting up an https cert provider but then it was pretty much off to the races


I use kind with Podman running rootless; it only works on systems with cgroup2 enabled, but it's very cool. Conventional k8s with Docker has a number of security gotchas that stem from it effectively running the containers as root. With rootless Podman k8s, it is easy to provide all your devs with local k8s setups without handing them root/sudo access to run it. This is something that has only recently started working right, as more container components and runtimes started to support cgroup2.


Agreed, but I made this because I couldn't find a simple orchestrator that used some best practices even for a single machine. I agree the problem is not hard (Harbormaster is around 550 lines), but Harbormaster's value is more in the opinions/decisions than the code.

The single-file YAML config (so it's easy to discover exactly what's running on the server), the separated data/cache/archive directories, the easy updates, the fact that it doesn't need built images but builds them on-the-fly, those are the big advantages, rather than the actual `docker-compose up`.


What is your perspective on multiple Docker Compose files, where you can do `docker-compose -f <file name> up`? You could organize it so that all the files are in the same directory. Just wondering.


That's good too, but I really like having the separate data/cache directories. Another issue I had with the multiple Compose files is that I never knew which ones I had running and which ones I decided against running (because I shut services down but never removed the files). With the single YAML file, there's an explicit `enabled: false` line with a commit message explaining why I stopped running that service.
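To make that concrete, the server's single config might look something like this sketch (field names are approximate; check the Harbormaster README for the real schema):

```yaml
# harbormaster.yml - one entry per app, the whole server in one file
apps:
  homeassistant:
    url: https://gitlab.com/example/homeassistant-compose.git
  grafana:
    enabled: false   # commit message explains why this was shut down
    url: https://gitlab.com/example/grafana-compose.git
```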


Might be I'm missing something, but I often go the route of using multiple Compose files, and haven't had any issue with using different data directories; I just mount the directory I want for each service, e.g. `/opt/acme/widget-builder/var/data`


Harbormaster doesn't do anything you can't otherwise do, it just makes stuff easy for you.


I understand your problem. I have seen people solve that with docker_compose_$ENV.yaml. You could set the ENV variable and then the appropriate file would be used.


Hmm, what did you set the variable to? Prod/staging/etc? I'm not sure how that documents whether you want to keep running the service or not.


> There is gap in the market between VM oriented simple deployments and kubernetes based setup.

In my experience, there are actually two platforms that do this pretty well.

First, there's Docker Swarm ( https://docs.docker.com/engine/swarm/ ) - it comes preinstalled with Docker, can handle either single machine deployments or clusters, even multi-master deployments. Furthermore, it just adds a few values to Docker Compose YAML format ( https://docs.docker.com/compose/compose-file/compose-file-v3... ) , so it's incredibly easy to launch containers with it. And there are lovely web interfaces, such as Portainer ( https://www.portainer.io/ ) or Swarmpit ( https://swarmpit.io/ ) for simpler management.

Secondly, there's also Hashicorp Nomad ( https://www.nomadproject.io/ ) - it's a single executable package, which allows similar setups to Docker Swarm, integrates nicely with service meshes like Consul ( https://www.consul.io/ ), and also allows non-containerized deployments to be managed, such as Java applications and others ( https://www.nomadproject.io/docs/drivers ). The only serious downsides are having to use the HCL DSL ( https://github.com/hashicorp/hcl ) and their web UI being read-only in the last versions that I checked.

There are also some other tools, like CapRover ( https://caprover.com/ ) available, but many of those use Docker Swarm under the hood and I personally haven't used them. Of course, if you still want Kubernetes but implemented in a slightly simpler way, then there's also the Rancher K3s project ( https://k3s.io/ ), which packages the core of Kubernetes into a smaller executable and uses SQLite by default for storage, if I recall correctly. I've used it briefly and the resource usage was indeed far more reasonable than that of full Kubernetes clusters (like RKE).
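To illustrate the "just adds a few values" point about Swarm: a plain Compose file simply grows a `deploy:` section (the image name here is a placeholder):

```yaml
services:
  web:
    image: example/web:latest
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker
      restart_policy:
        condition: on-failure
```

The same file still works with plain `docker-compose up`; the `deploy:` keys only take effect under Swarm.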


Wanted to second that Docker Swarm has been an excellent "middle step" for two different teams I've worked on. IMO too many people disregard it right away, not realizing that it is a significant effort for the average dev to learn containerization+k8s at the same time, and it's impossible to do that on a large dev team without drastically slowing your dev cycles for a period.

When migrating from a non-containerized deployment process to a containerized one, there are a lot of new skills the employees have to learn. We've had 40+ employees, all of whom are basically full of work, and the mandate comes down to containerize, and all of these old-school RPM/DEB folks suddenly need to start doing Docker. No big deal, right? Except half the stuff does not dockerize easily and requires some slightly-more-than-beginner Docker skills. People will struggle and be frustrated.

Folks start by running one container manually, and quickly outgrow that to use Compose. They almost always eventually use Compose to run stuff in prod at some point, which works, but eventually that one server is full. This is the value of Swarm: letting people expand to multi-server and get a taste of orchestration, without needing them to install new tools or learn new languages. Swarm adds just one or two small new concepts (stack and service) on top of everything they have already learned. It's a godsend to tell a team they can just run swarm init, use their existing YAML files, and add a worker to the cluster.

Most folks then start to learn about placement constraints, deployment strategies, dynamic infrastructure like reverse proxies or service meshes, etc. After a bit of comfort and growth, a switch to k8s is manageable, and the team is excited about learning it instead of overwhelmed. A lot (all?) of the concepts in Swarm are readily present in k8s, so the transition is much simpler.
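That migration path is, concretely, just a handful of commands (the stack name is a placeholder):

```shell
docker swarm init                                 # make this node a manager
docker swarm join-token worker                    # prints the join command for new workers
docker stack deploy -c docker-compose.yml myapp   # reuse the existing Compose file
docker service ls                                 # see what's running cluster-wide
```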


We currently have one foot in Docker swarm (and single node compose), and considering k8s. One thing I'm uncertain of, is the state of shared storage/volumes in swarm - none of the options seem well supported or stable. I'm leaning towards trying nfs based volumes, but it feels like it might be fragile.


Sure. Our solution so far has been both simple and pragmatic: the main DBs do not live inside containers. It's a bit of manual ops, but it works for us. All the 'media' in the stacks I am dealing with is minor enough to serve over a custom API, e.g. no massive image/audio/etc. datasets where files need to be first-class citizens.

We generally avoid mounting volumes at all costs. The challenge of mapping host uid:gid to container uid:gid (and keeping that mapping from breaking) proved painful and not worth the effort


Nomad also scales really well. In my experience, Swarm had a lot of issues going above 10 machines in a cluster: stuck containers, containers that are there but Swarm can't see, and more. But still, I loved using Swarm with my 5-node ARM cluster; it is a good place to start when you hit the limits of a single node.

> The only serious downsides are having to use the HCL DSL ( https://github.com/hashicorp/hcl ) and their web UI being read-only in the last versions that I checked.

1. IIRC you can run jobs directly from the UI now, but IMO this is kinda useless. Running a job is as simple as 'nomad run jobspec.nomad'. You can also run a great alternative UI ( https://github.com/jippi/hashi-ui ).

2. IMO HCL > YAML for job definitions. I've used both extensively and HCL always felt much more human-friendly. The way k8s uses YAML looks to me like stretching it to its limits, and it's barely readable at times with templates.

One thing that makes nomad a go-to for me is that it is able to run workloads pretty much anywhere. Linux, Windows, FreeBSD, OpenBSD, Illumos and ofc Mac.


Recently I've been moving both my personal stuff and work stuff to Nomad and HCL is just SO much nicer than the convoluted YAML files that Kubernetes needs.

I know HCL gets a lot of hate around here, but I find it just dandy, and I'd rather write HCL over YAML any day of the week.

Also, just to put another thumbs up for Nomad: it's been absolutely fantastic for us and so much easier to manage and deploy. We've begun testing edge deployments with it as well, and so far it looks very promising.

Consul and Vault are also two HashiCorp products I can't live without at this point. Vault is probably one of the first things I deploy now.

Anyway rambling, I think a lot of people just immediately jump to Kubernetes without really ever giving Nomad a look first.


Hmm, what are your thoughts on the Compose format (that's used in Docker Compose and Docker Swarm)?

I agree that the format Kubernetes uses is probably a bit overcomplicated, which is why I've also sometimes used the aforementioned Compose format with tools like Kompose to save some of my time.

In that regard, it's perhaps not the fault of YAML itself (even though the format has its own complexity problems), but rather of the tool using it.

What personally keeps me away from HCL for the most part is the fact that a lot of the time you can find Compose files for various pieces of software online and only have to change some parameters for the most part to get stuff running vs having to rewrite all of it in a different format.

Are there any Compose to HCL tools out there yet?


Yeah, you are right. It is not exactly YAML's fault but how it is used. I am OK with using it with Ansible/Salt/Compose, but k8s is stretching it to its limits and makes it cumbersome to use.

And yeah, there are not many examples of HCL files for different software, but most of the time there is not much to adapt. Usually when I need to run something, I just take some random HCL file I have and substitute the Docker image, edit the amount of resources, and add storage and config via templates. And done. It might be confusing/intimidating at first, but like with anything, after some time you should be OK.
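For instance, the "substitute the image and resources" workflow operates on a job file like this minimal sketch (names and numbers are placeholders):

```hcl
job "whoami" {
  datacenters = ["dc1"]

  group "web" {
    network {
      port "http" { to = 80 }
    }

    task "server" {
      driver = "docker"

      config {
        image = "traefik/whoami"   # swap this for your own image
        ports = ["http"]
      }

      resources {
        cpu    = 200   # MHz
        memory = 128   # MB
      }
    }
  }
}
```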


> There is gap in the market between VM oriented simple deployments and kubernetes based setup.

What's wrong with Ansible? You can deploy docker containers using a very similar configuration to docker-compose.


Team Ansible as well, though our 100-some-odd servers probably don't warrant much else.


I have been toying with the notion of extending Piku (https://github.com/piku) to support multiple (i.e., a reasonable number) of machines behind the initial deploy target.

Right now I have a deployment hook that can propagate an app to more machines also running Piku after the deployment finishes correctly on the first one, but stuff like green/blue and database migrations is a major pain and requires more logic.


I'd ask if you know Nomad (I didn't use it), but co-workers say it was easier to deploy.


Yes, I did look into Nomad. Again, the specification of the application to deploy is much simpler than Kubernetes. But I think from an operational point of view you still have the complexity: it has similar concepts and abstractions to Kubernetes when you operate a Nomad cluster.


For a single machine, you don't need to operate a nomad cluster: `nomad agent -dev` instantly gives you a 1-node cluster ready to go.

If you decide to grow past 1 node, it's a little more complex, but not by a lot, unlike k8s.
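For reference, the single-node path is just (assuming Nomad is installed and `example.nomad` is a job file you've written):

```shell
nomad agent -dev              # throw-away single-node server+client, in-memory state
nomad job run example.nomad   # submit a job to it
nomad job status example      # watch the allocations
```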


Juju perhaps?


I think Juju (and Charms) really shine more with bare-metal or VM management. We looked into trying to use this for multi-tenant deployment scenarios a while ago (when it was still quite popular in the Ubuntu ecosystem) and found it lacking.

At this point, I think Juju is most likely used in place of other metal or VM provisioning tools (like chef or Ansible) so that you can automatically provision and scale a system as you bring new machines online.


Sadly there is very little activity aiming at bare metal and VMs nowadays. If you look at features presented during couple of past months, you will find mainly kubernetes. Switching from charms to operators. But kudos to openstack charmers holding on and doing great work.


Are you talking about this[1]?

[1] https://juju.is/


My anti-kubernetes setup for small single servers is docker swarm, portainer & traefik. It's a setup that works well on low powered machines, gives you TLS (letsencrypt) and traefik takes care of the complicated network routing.

I created a shell script to easily set this up: https://github.com/badsyntax/docker-box


I have a similar setup, but with Nomad (in single server mode) instead of docker swarm and portainer. It works great.


What does Nomad do for you, exactly? I've always wanted to try it out, but I never really got how it works. It runs containers, right? Does it also do networking, volumes, and the other things Compose does?


What I like about Nomad is that it allows scheduling non-containerized workloads too. What it "does" for me is that it gives me a declarative language to specify the workloads, has a nice web UI to keep track of the workloads and allows such handy features as looking at the logs or exec'ing into the container from the web UI, amongst other things. Haven't used advanced networking or volumes yet though.


So do you use it just for scheduling commands to run? I.e. do you use `docker-compose up` as the "payload"?


You send a job-specification to the Nomad API.

There are different kinds of workloads. I use Docker containers the most, but jobs can also run on a system level, and there are also different operating modes: some jobs can be scheduled like cron, while other jobs just expose a port and want to be registered in Consul's service mesh.

A job can also consist of multiple subtasks, an example could be nginx + django/rails subtasks that will be deployed together.

You can see an example of a Docker job here: https://www.nomadproject.io/docs/job-specification#example

With a few modifications you can easily allow for blue/green-deployments.


This is very interesting, thanks! I'll give it a go.


Nomad is so perfect for this. I've been meaning to blog about it somewhere.


Don't suppose you're able to point to a simple Nomad config for a dockerised web app, with a proxy and Let's Encrypt?


I will see if I can write up a simple example, do you have anywhere I can ping you?


That would be great, thanks!

I'm at: gordon dot stewart 333 at gmail dot com


I would also love to read this! kevinl at hey dot com


This is exactly how I deployed my last few projects, and it works great!

The only things I'd change are switching to Caddy instead of Traefik (because Traefik 2.x config is just so bewilderingly complex!), and I'm not convinced Portainer is really adding any value.

Appreciate you sharing your setup script too.


Agree the Traefik config is a little complex, but otherwise it works great for me. As for Portainer, it's useful for a holistic view of your containers and stacks, but I also use it for remote deployment of services (e.g. as part of CI/CD): I'll push a new Docker image version, then use the Portainer webhooks to redeploy the service, and Docker Swarm takes over.


Ah, I wasn't aware of the web hooks, that sounds useful :)


Here's an example using GitHub Actions: https://github.com/badsyntax/docker-box/tree/master/examples...


Absolutely agree, I switched to Caddy recently and the configuration is considerably easier than Traefik. Very simple TLS setup (including self signed certificates).


After some struggle I've managed to set up traefik with tags/docker socket so that services can "expose themselves" via tags in their service definitions - is there anything similar for caddy?


That’s still a bit more than I feel is required.

My problem is in the two-to-eight-server space, where networking is already externally managed and I have a load balancer. It's in this space that I feel we're lacking a good solution. The size is too small to justify taking out nodes for a control plane, but big enough that Ansible feels weird.


This looks great. But if you don’t need containers or are using tiny hardware, consider trying out Piku:

https://github.com/piku

(You can use docker-compose with it as well, but as a deployment step — I might bake in something nicer if there is enough interest)


+1 for Piku, which is one of my favorite examples of "right abstraction, simple, just works, doesn't re-invent the architecture every 6 months".

Thanks for that, Rui!


Well, I am thinking of reinventing around 12 lines of it to add explicit Docker/Compose support, but it's been a year or so since any major changes other than minor tweaks :)

It has also been deployed on all top 5 cloud providers via cloud-init (and I'm going back to AWS plain non-Ubuntu AMIs whenever I can figure out the right packages).


That looks nice, isn't it kind of like Dokku? It's a nice option but not a very good fit if you don't need ingress/aren't running web services (most of my services were daemons that connect to MQTT).


You can have services without any kind of ingress. It’s completely optional to use nginx, it just gets set up automatically if you want to expose a website.

My original use case was _exactly_ that (MQTT services).


Oh huh, that's interesting, thanks! Is it geared towards worker/DB too, like Dokku? Unfortunately with Dokku it's a bit hard to run multiple containers together (I remember having to hack stuff to enable app deployments where one app was the frontend and another was the API).


I guess I'm one of those people mentioned in the rationale who keeps little servers ($5-10 Droplets) and runs a few apps on them (like a couple of Node/Go apps, a CouchDB, a Verdaccio server). I also haven't had issues with things breaking as I do OS updates. It seems like it would be nice, though, just to have a collection of Dockerfiles that could be used to deploy a new server automatically. My current "old-fashioned" way has been very doable for me, but my big question before jumping to some Docker-based setup is: does running everything on Docker take a huge hit on the performance/memory/capabilities of the machine? Could I still comfortably run 4-5 apps on a $5 Droplet, assuming I would have separate containers for each app? I'm having trouble finding info about this.


"Docker containers" are Linux processes with maybe a filesystem, cpu/memory limits, and a special network; applied through cgroups. You can do all of those things without Docker, and there is really not much overhead.

systemd has "slice units" that are implemented very similarly to Docker containers, and it's basically the default on every Linux system from the last few years. It's underdocumented but you can read a little about it here: https://opensource.com/article/20/10/cgroups
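As a sketch of that, assuming a hypothetical `myapp` service, the slice and a service that joins it might look like:

```ini
# /etc/systemd/system/myapp.slice - a resource-limited cgroup
[Slice]
MemoryMax=512M
CPUQuota=50%

# /etc/systemd/system/myapp.service - opt the service into that slice
[Service]
Slice=myapp.slice
ExecStart=/usr/local/bin/myapp
```

`MemoryMax=` and `CPUQuota=` are the cgroup2 limits Docker would otherwise apply via its own flags.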


Slice units only help with resource allocation, but the main draw of Docker isn't that; it's that you can be running a completely different distro with a completely different stack (since a container is basically a glorified chroot), and that the image can be built from an easy-to-understand script and then shared with others.

Sure, you could publish your app as a .tar.gz of the filesystem root, and then users could extract it and have a script to bind-mount everything and chroot into it. You could then set up systemd services for that script and set up a slice. But then you're just reinventing Docker from scratch for absolutely no reason. You could also use systemd-nspawn, but then you're losing out on a bunch of really useful features that the devs omitted for philosophical reasons (as usual with systemd, they think they know better than you).


I'm not really telling people to not use Docker. I'm telling them how common cgroups are -- every Linux system is already using them.


Cool stuff with the "slice units." I use systemd to keep apps running but didn't know all this. And yes, I understand the basics of what Docker containers are; it just seems logical to me that running all that overhead would be more taxing on the system. Is it significantly harder to fit the same number of apps on a Droplet if they're all containerized? Or is it still easy to run 4-5 modest containerized apps on a $5 Droplet?


I haven't noticed any performance degradation (though granted, these are small apps), and my home server is 10 years old (and was slow even then).


So far I've found that "restart: always" in the compose.yml is enough for my home server apps. In the rare case that one of the services is down, I can SSH in and have a quick look - after all it's one of my home servers, not a production pod on GKE :p

That said, the project looks pretty good! I'll have a tinker and maybe I'll be converted


Agreed about restarting, but I hated two things: Having to SSH in to make changes, and having a bunch of state in unknowable places that made it extremely hard to change anything or migrate to another machine if something happened.

With Harbormaster, I just copy one YAML file and the `data/` directory and I'm done. It's extremely convenient.


Just to add: It's definitely a bad practice to never update your images, because the docker images and their base images will accumulate security holes. There aren't many solutions around for automatically pulling and running new images.


>There aren't many solutions around for automatically pulling and running new images.

Isn't that exactly what watchtower does?

https://github.com/containrrr/watchtower

It works great on my mediacenter server running deluge, plex, sonarr, radarr, jackett and OpenVPN in docker.


Well, yes. Curiously enough, watchtower (IIRC) started out automatically pulling new images when available. Then the maintainers found that approach to be worse than proper orchestration and disabled the pulling. Perhaps it's different now.


My experience with watchtower is that it kept breaking stuff (or maybe just pulling broken images?)

My server was much more stable after it didn’t try to update all the time any more.

I wonder if I can set a minimum timeout.


Watchtower runs the images if they update, but AFAIK it doesn't pull if the base image changes.

Then again, Harbormaster doesn't do that either unless the upstream git repo changes.


I've heard about Watchtower (auto update) and Diun (Docker image update notifier), but I haven't quite found something that will "just tell me what updates are available, on a static site".

I want to "read all available updates" at my convenience, not get alerts reminding me to update my server.

Maybe I need to write some sort of plugin for Diun that appends to a text file or web page or SQLite db... Hm.


Looks like https://crazymax.dev/diun/notif/script/ would be useful for that.

Personally, since I’m a big fan of RSS, I’d set up email in Diun and send it to an email generated by https://kill-the-newsletter.com/


Other uncomplicated pieces of software that manage dockerized workloads:

- https://dokku.com

- https://caprover.com


I really wish Dokku would embrace docker swarm like caprover. Currently they have a scheduler for kubernetes but the docker swarm scheduler is indefinitely delayed [0]. It’s like the missing piece to making Dokku a real power tool for small teams.

Currently, if you want to scale Dokku horizontally and aren’t ready to take the Kubernetes plunge, you have to put a load balancer in front of your multiple VMs running Dokku, and that comes with its own headaches.

[0] https://github.com/dokku/dokku/projects/1#card-59170273


you should give nomad a try. Dokku has a nomad backend. https://github.com/dokku/dokku-scheduler-nomad.


I use Dokku and love it! The use case is a bit different, as it's mostly about running web apps (it does ingress as well and is rather opinionated about the setup of its containers), but Harbormaster is just a thin management layer over Compose.


I use CapRover and it mostly works.

My biggest complaint would be the downtime when the docker script runs after each deployment.


Beware that harbormaster is also the name of a program for adding RBAC to docker: https://github.com/kassisol/hbm

It's kind of abandonware because it was the developer's PhD project and he graduated, but it is rather unfortunately widely used in one of the largest GEOINT programs in the US government right now because it was the only thing that offered this capability 5 years ago. Raytheon developers have been begging to fork it for a long time so they can update and make bug fixes, but Raytheon legal won't let them fork a GPL-licensed project.


Yeah, there were a few projects named that :/ I figured none of them were too popular, so I just went ahead with the name.


One of them should fork it on their personal account and work on it during business hours. No liability and all the benefits. Don't tell legal, obviously.

"Someone forked it so now our fixes can get merged! :D"


I've honestly considered this since leaving. Why not do my old coworkers a solid and fix something for them, but then I consider I'd be doing free labor for a company not willing to let its own workers contribute to a project if they can't monopolize the returns from it.


> I consider I'd be doing free labor for a company not willing to let its own workers contribute to a project if they can't monopolize the returns from it

I don't think that is the reason. When Raytheon or other contractors perform software work under a DOD contract (i.e., they charge the labor to a contract) the government generally gets certain exclusive rights to the software created. Raytheon is technically still the copyright holder, but effectively is required to grant the US government an irrevocable license to do whatever they want with the source in support of government missions if the code is delivered to the government. Depending on the contract, such code may also fall under blanket non-disclosure agreements. I believe both of these are incompatible with the GPL, and the latter with having a public fork at all.

The company could work this out with the government, but it would be an expensive and time-consuming process because government program offices are slow, bureaucratic, and hate dealing with small exceptions on large contracts. They might even still refuse to make the contract mods required at the end simply because they don't understand it or they are too risk averse. Legal is likely of the opinion that it isn't worth trying, and the Raytheon program office likely won't push them unless they can show a significant benefit for the company.


It's also the CI component of (the now unmaintained) Phabricator


Looks nice. I did something similar not so much time ago https://github.com/reddec/git-pipe


Wow this has a lot of great features baked in.

Especially the backup and Let's encrypt elements are great. And it handles docker networks, which makes it very flexible.

Will definitely check it out.


Is there software like Compose in terms of simplicity, that supports multiple nodes? I use k8s for an application that really needs to use multiple physical nodes to run containerized jobs but k8s feels like overkill for the task and I spend more time fixing k8s fuckups than working on the application. Is there anything in between compose and k8s?


Docker Swarm?


I hear Nomad mentioned a lot for this, yeah.


Hashicorp Nomad


Interestingly this seems like a pretty popular problem to solve.

I made a similar thing recently as well, although with the goal to handle ingress and monitoring out the box as well, whilst still able to run comfortably on a small box.

I took a fairly similar approach, leveraging docker-compose files, and using a single data directory for ease of backup (although it's on my to-do list to split out conf/data).

If there was a way to get a truly slim and easy-to-set-up k8s-compatible environment I'd probably prefer that, but I couldn't find anything that wouldn't eat most of my small server's RAM.

https://github.com/mnahkies/shoe-string-server if you're interested


Huh, nice! I think the main problem yours and my project have is that they're difficult to explain, because it's more about the opinions they have rather than about what they do.

I'll try to rework the README to hopefully make it more understandable, but looking at your project's README I get as overwhelmed as I imagine you get looking at mine. It's a lot of stuff to explain in a short page.


It is quite easy to set up a slim k8s environment these days, thanks to MicroK8s and k3s. MicroK8s comes with newer versions of Ubuntu; k3s is a single-binary installation.


Last I checked k3s required a min of 512mb of ram, 1gb recommended. Is this not the case?


Yes, it is. Docker's minimum requirement is 512 MB with 2 GB recommended; containerd + k8s has almost the same requirements.


Interesting, I wasn't actually aware of that. Checking my running server with 1024mb of RAM, docker appears to be consuming ~50mb of that. This is running 17 containers of nginx, postgres, nodejs, haproxy etc.

I'm ok with this level of overhead for the benefits of containerization - several years ago I was running a bunch of PHP sites on Apache, one was compromised and this took down the rest as well. In addition to the isolation it also simplifies my deployment story significantly.

If K3s has a similar level of overhead then I should probably move to this, something for me to look into, however my understanding was that components like etcd require a fairly significant amount of resource even for tiny one node "clusters".


I unironically solved this problem by running docker-compose in Docker. You can build an image that's just the official docker/compose image with your docker-compose.yml on top, mount /var/run/docker.sock into it, and then when you start the docker-compose container, it starts all your dependencies. If you run Watchtower as well, everything auto-updates.

Instead of deploying changes as git commits, you deploy them as container image updates. I'm not going to call it a good solution, exactly, but it meant I could just use one kind of thing to solve my problem, which is a real treat if you've spent much time in the dockerverse.
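Roughly, the idea looks like this (the image tag and file names are assumptions; the official docker/compose image has `docker-compose` as its entrypoint, so the CMD is just its arguments):

```dockerfile
# Sketch: bake a compose file into the official docker/compose image.
FROM docker/compose:1.29.2
WORKDIR /app
COPY docker-compose.yml .
# With the host's docker.sock mounted in, this starts the stack
# on the *host* daemon, not inside this container.
CMD ["up", "-d"]
```

Then run it with the socket mounted, something like `docker run -v /var/run/docker.sock:/var/run/docker.sock mystack`.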


This is legitimately a fantastic idea. Would you be willing to publish a bit more detail about it? Even just a gist of the Dockerfile would be great.


Hmm, that's interesting, do you run Docker in Docker, or do you expose the control socket?


At a previous place I worked, someone set up something similar with `git pull && ansible-playbook` on a cron.

It was using GitHub so just needed a read-only key and could be bootstrapped by connecting to the server directly and running the playbook once

In addition, it didn't need any special privileges or permissions. The playbook setup remote logging (shipping to CloudWatch Logs since we used AWS heavily) along with some basic metrics so the whole thing could be monitored. Plus, you can get a cron email as basic monitoring to know if it failed

Imo it was a pretty clever way to do continuous deploy/updates without complicated orchestrators, management servers, etc
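A sketch of what that cron entry might have looked like (paths, user, and playbook name are all assumptions; MAILTO gives you the failure email mentioned above):

```cron
# /etc/cron.d/deploy — pull the repo and converge every 5 minutes.
MAILTO=ops@example.com
*/5 * * * * deploy cd /opt/infra && git pull -q --ff-only && ansible-playbook -i localhost, -c local site.yml >/dev/null
```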


This looks awesome!

What I couldn't immediately see from skimming the repo is:

How hard would it be to use a docker-based automatic https proxy such as this [1] with all projects?

I've had a handfull of docker-based services running for many years and love the convenience. What I'm doing now is simply wrap the images in a bash script that stops the containers, snapshots the ZFS volume, pulls newer versions and re-launches everything. That's then run via cron once a day. Zero issues across at least five years.

[1] https://github.com/SteveLTN/https-portal


Under the hood, all Harbormaster does is run `docker-compose up` on a bunch of directories. I'm not familiar with the HTTPS proxy, but it looks like you could just add it to the config and it'd auto-deploy and run.

Sounds like a very good ingress solution, I'll try it for myself too, thanks! I use Caddy now but configuration is a bit too manual.


Thanks!

One thing to note is that you'll need to make sure that all the compose bundles are on the same network.

I.e. add this to all of them:

  networks:
    default:
      external:
        name: nginx-proxy


Ah yep, thanks! One thing that's possible (and I'd like to do) with Harbormaster is add configuration to the upstream apps themselves, so to deploy, say, Plex, all you need to do is add the Plex repo URL to your config (and add a few env vars) and that's it!

I already added a config for Plex in the Harbormaster repo, but obviously it's better if the upstream app itself has it:

https://gitlab.com/stavros/harbormaster/-/blob/master/apps/p...


FWIW Traefik is pretty easy to get running and configured based on container labels, which you can set in compose files.

Traefik can be a bit hairy in some ways, but for anything you'd run Harbormaster for it should be a good fit.

Right now I have some Frankenstein situation with all of Traefik, Nginx, HAProxy, Envoy (though this is inherited from Consul Connect) at different points... I keep thinking about replacing Traefik with Envoy, but the docs and complexity are a bit daunting.
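For anyone curious, a hedged sketch of the label-based approach (Traefik v2 syntax; the router name and hostname are placeholders, and the `letsencrypt` resolver is assumed to be defined in Traefik's static config):

```yaml
# Sketch: route a container via Traefik v2 labels in a compose file.
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
```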


I have a similar DIY solution with an Ansible playbook that automatically installs or restarts docker-compose files. I am considering switching to Harbormaster, since it's much closer to what I wanted from the start.


How am I supposed to know whether to jump on the kubernetes bandwagon when all these alternatives keep popping up? Kidding/not kidding


Depends upon which job interview you are going to.

If it's a startup, use some buzzwords like cloud native, devops, etc. Check their sentiment towards Kubernetes.

On a serious note, you might have to jump on the Kubernetes bandwagon whether you like it or not, as many companies are seriously investing their resources in it. That said, having spoken to various companies from Series A to enterprise, I see that Kubernetes adoption is actually not as widespread as I would have imagined based on the hype.

P.S discussion of kubernetes or not kubernetes was recently accelerated by a post from Ably [1]

[1] https://ably.com/blog/no-we-dont-use-kubernetes


What's missing from the conversation is that said blog post can be summarised as "we have money to burn".


This is not an alternative, just a small personal project. Learn Docker, the basics of Kubernetes, and maybe Nomad.


Any chance this will get packaged up as a container instead of “pipx install”, then all the timers can just be in the container, and it can control via Docker socket exposed to the container?

Simple one-time setup and then everything is a container?

If that interesting to OP then I might look into that one weekend soon.


Oh yeah, that's very interesting! That would be great, I forgot that you can expose the socket to the container. I'd definitely be interested in that, thanks!


+1 for this! One of the things that I like most about my Docker setup is that I am basically agnostic to the setup of the host machine.


For users who are fine with the single-host scope, this looks great. Definitely easier than working with systemd+$CI, if you don't need it (and for all the flame it gets, systemd is very powerful if you just spend the time to get into it, but then again if you don't need it you don't)

I could also see this being great for a personal lab/playground server. Or for learning/workshops/hackathons. Super easy to get people running from 0.

If I ever run a class or workshop that has some server-side aspect to it, I'll keep this in mind for sure.


this is nice, this should help a lot of people in that in-between space

i just recently decided to graduate from just `docker-compose up` running inside tmux to a more fully fledged system myself...

since i know Chef quite well i just decided to use Chef in local mode with the docker community cookbook

i also get the nice tooling around testing changes to the infrastructure in test kitchen

if this would have existed before i made that switch, i may have considered it, nice work!


Cool name. Reminds me of Dockmaster, which was some ancient NSA system (it was mentioned in Clifford Stoll's excellent book "The Cuckoo's Egg"). It was the one the German hacker he caught, who was working for the KGB, was trying to get into.

It sounds like a good option too, I don't want all the complexity of Kubernetes at home. If I worked for the cloud team in work I might use it at home but I don't.


Do people actually use k8s on a personal server? What’s the point? Surely just Docker with restart policies (and probably even just systemd services if it’s your thing) is enough?

K8s seems way overused in spheres without a real need for it. That would explain the "k8s is overcomplicated" I keep reading everywhere. It’s not overcomplicated, you just don’t need it.


My simple solution for smaller projects is to SSH with a port forward to a docker registry - I wrote a blog post on that topic:

https://wickedmoocode.blogspot.com/2020/09/simple-way-to-dep...
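The gist, if I understand the approach correctly (hostnames and ports are placeholders; a registry bound to localhost on the server avoids exposing it, and "localhost" keeps Docker from demanding TLS):

```shell
# Forward local :5000 to a registry listening on the server's localhost.
ssh -N -L 5000:localhost:5000 user@myserver &

# Push through the tunnel from the dev machine.
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp

# On the server, pull and run from its own local registry.
docker pull localhost:5000/myapp
```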


I think that's the same idea as Ubuntu's snaps. A different implementation, of course. However, snaps are conceptually more secure. Then again, there's rootless Docker now.

inb4: Flatpak is not as powerful as snap. E.g., you can't use a different confinement level with Flatpak.


This seems like a neat project! I run a homelab and my container host runs Portainer & Caddy, which is a really clean and simple docker-compose deployment stack. This tool seems like it does less than Portainer, so I am not clear on why it would be preferable - just because it is even simpler?


Curious how things like database migrations are handled in setups like this? Is the expectation that schema updates are built into startup processes of services and happen automatically? Or is it outside the scope?


That's outside the scope, yes. Anything below a `docker-compose up` is for your code to handle as you like.


Python is sort of a non-starter for me. If I had a reliable way of running Python on a VPS, I wouldn't need Docker, now would I?


What do you mean? Python comes pre-installed with every distro I know, virtual environment support is built in, alternative version packages are readily available on distros (but backwards compatibility is pretty good in Python anyways).

If anything, Python in Docker is more of a pain than bare-metal since the distro (usually alpine or debian) ships its own Python version and packages that are almost entirely disconnected from the one the image provides.


Docker basically only exists as a popular product because of how terrible Python installation is, but okay good luck with that.


If this ever gets expanded to handle clustering, it'd be perfect for me. I use k8s on my homelab across multiple raspberry pis.


This looks super, I'll try it on my NAS.


If it can pull from git, why not have the YAML in a git repo, too?


That is, in fact, the recommended way to deploy it! If you look at the systemd service/timer files, that's what it does, except Harbormaster itself isn't aware of the repo.

I kind of punted on the decision of how to run the top layer (ie have Harbormaster be a daemon that auto-pulls its config), but it's very simple to add a cronjob to `git pull; harbormaster` (and is more composable) so I didn't do any more work in that direction.
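i.e. something along these lines (the config-repo path is an assumption):

```cron
# User crontab: pull the config repo, then let Harbormaster reconcile.
*/5 * * * * cd /opt/harbormaster-config && git pull -q && harbormaster
```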


so useful. I have the same use case as the author

also kind of want RDS for my machine -- backup/restore + upgrade for databases hosted locally


anti-Kubernetes is rpm/yum/apt/dpkg/pkg and all the other old-school package managers.


you should check k3s or k0s - single machine kubernetes


I did, but even that was a bit too much when I don't really need to be K8s-compatible. Harbormaster doesn't run any extra daemons at all, so that was a better fit for what I wanted to do (I also want to run stuff on Raspberry Pis and other computers with low resources).


fair point. I have generally had a very cool experience running these single daemon kubernetes distros.


They look very very interesting for development and things like that, and I'm going to set one up locally to play with, they just seemed like overkill for running a bunch of Python scripts, Plex, etc.


I found k3s still consumes ~5% of my little CPU constantly, and later learnt it was because Kubernetes polls processes in the background.



