
The Docker part seems thoroughly unnecessary.

Hey there - BalenaOS is built to run containerised workloads on small devices and is quite stripped down otherwise, kinda like CoreOS. So in that sense our architecture allows you to have less stuff you don't need floating around, saving you overhead. We focused on Docker containers since balenaCloud is built for fleets. It's important that our stack supports not just one but many copies of the same device running the same code, which can then be updated in production, etc., just like a set of servers. If one is optimizing for a single device, just naked Raspbian will do fine of course. Our approach has a bit of overhead up front in terms of setup, but you get a production-ready setup from day one in return.

If you do not want to need 10 different RPis for running 10 different services (Pi-hole, RetroPie, remote access, cloud, side projects, air quality monitoring, etc.), Docker is a good solution for easily deploying several services on a single RPi, and it prevents dependency hell.
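As a sketch of what that looks like in practice, a minimal docker-compose file could run two of those services side by side. The images, ports, and service names here are illustrative assumptions, not from the thread; check that ARM-compatible tags exist before using them.

```yaml
# Hypothetical compose file for several services on one Pi.
version: "3"
services:
  pihole:
    image: pihole/pihole:latest   # DNS ad-blocker
    ports:
      - "53:53/udp"
      - "8080:80"                 # admin UI moved off port 80
    restart: unless-stopped
  grafana:
    image: grafana/grafana:latest # dashboards
    ports:
      - "3000:3000"
    restart: unless-stopped
```

A single `docker-compose up -d` then starts both, with each service's dependencies isolated inside its own image.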

I can't understand why they would add any more overhead on such a low-powered device.

Zero and original Raspberries can be low-powered. The 3+ is a quad core with 1 GB of memory. Docker doesn't even show up on the CPU monitor and takes 2% shared memory in my case. The overhead of a namespace is not really noticeable either, unless you're doing lots of processing. (My influx+grafana+monitors sit under 5% CPU almost all the time.)

Ease of setup and reproducibility probably. And docker doesn't add much overhead, especially when running natively on Linux.

What overhead?

"Docker induces no significant overhead on CPU nor memory usage, compared to a native execution (worse observation: -4%; 0% on all others)" [1]

[1]: https://roudier.io/2015/08/docker-vs-kvm-vs-native-performan...

Think about how the stack works. I'm not saying this measurement is definitely wrong, but if you're finding exactly 0 overhead then you have to suspect there is something weird going on with how you're measuring it.

The processes are still running natively. The most common overheads would be due to network and storage driver and those can be mitigated with some simple settings. The Docker daemon is more or less a process supervisor at this point.
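To make the "simple settings" concrete: a sketch of the kind of flags meant here (the image and paths are illustrative assumptions). Host networking skips the bridge/NAT hop, and a bind mount keeps write-heavy data off the copy-on-write storage driver.

```shell
# Illustrative only: avoid the two most common Docker overheads.
docker run -d \
  --network host \                      # no bridge/NAT, native network stack
  -v /srv/influxdb:/var/lib/influxdb \  # bind mount bypasses overlayfs writes
  influxdb
```

With those two settings, the contained process is doing native syscalls against the native network stack and filesystem.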

I think we have a fundamentally different view of what "natively" means and what it means to be "more or less a process supervisor". That's fine, but it also means we won't get to an agreement that we are both at peace with in this case.

What feature of Docker makes you consider it significantly more than a process supervisor?

It gives you easy upgrades and nice dependency separation. It's not required in this case, but it sure makes operations easier. (Doing the same for multiple services on rpi)
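The upgrade story being referred to can be sketched in three commands (the service and image names are hypothetical): the new version ships as an image, so nothing on the host OS has to change.

```shell
# Sketch: upgrade one service without touching the host's packages.
docker pull grafana/grafana:latest
docker stop grafana && docker rm grafana
docker run -d --name grafana -p 3000:3000 grafana/grafana:latest
```

Rolling back is the same dance with the previous tag, which is most of the operational win over host-installed packages.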

Agreed, an Ansible playbook would have easily sufficed.

One downside of Ansible on the Pi is that a lot of stuff involves building from scratch, because there aren't armhf packages available, or what's available in apt is hopelessly outdated. I have a stack at work that takes a day to build on the hardware. With Docker I can build once and pull the image later. I guess you could do similar with Ansible and copy build artefacts over, but Docker is a simpler solution.
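One way to get that "build once, pull later" workflow is to cross-build the ARM image on a faster x86 machine with buildx and push it to a registry the Pi can reach. The registry name and tag below are assumptions for illustration.

```shell
# Cross-build an armhf image on a fast machine and publish it:
docker buildx build \
  --platform linux/arm/v7 \
  -t registry.example.com/mystack:latest \
  --push .

# On the Pi, pulling the prebuilt image replaces the day-long build:
docker pull registry.example.com/mystack:latest
```

The same Dockerfile then serves both the build machine and the fleet of Pis.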

While I do like Ansible, it's not really the right tool for a tutorial about sensors.

It's a lot easier to explain that you need to pull & run some image than it is to explain installing Python, getting your Ansible host configuration right, and then running a playbook. There's just a lot more variance (and therefore margin for error) that you don't want to deal with when you're explaining something else entirely.
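Put another way, the entire setup section of such a tutorial can collapse to a single command (the image name here is a placeholder, not a real published image):

```shell
# Everything the reader needs, in one line:
docker run -d --name sensors some/sensor-image:latest
```

Compared with a host-by-host Ansible inventory, there is almost nothing that can vary between readers' machines.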
