Either way, my conclusion was that Kubernetes is really meant for more scale and attention than I like to give my home infrastructure, so although I enjoyed the experiment, I would encourage people not to run important services on something like this.
It also works fine for production in my experience: I have a mildly popular website running on a 4GB VPS with it and haven't had any issues related to k3s itself.
We're even looking at starting to migrate most of our edge deployments to k3s as well.
Really really awesome piece of work by Rancher.
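For anyone curious how little setup that involves, the documented quick-start is basically one line (from memory; check get.k3s.io for the current flags):

    # installs and starts the k3s server as a systemd service
    curl -sfL https://get.k3s.io | sh -
    # kubectl is bundled, so this works immediately
    sudo k3s kubectl get nodes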
Are Helm charts easier to use or more prevalent than distribution packages?
> Bro, I heard you like virtualization, so we use virtualization inside virtualization to virtualize while we virtualize!
Hard pass. I hope this can be a nice alternative to managing docker-compose.
What Unix tools are an alternative to Docker? I install an app, I download a VM image, and I have a dynamically sized VM with a nice simple UI to control it. VirtualBox is fine too but more heavyweight for multiple VMs at once, and Docker is where the community and premade VM images are.
Works on any platform.
k8s is probably overkill if you aren't prototyping a production cluster.
 In some ways this is easier than running traditional daemons, as storage locations for dockerized services tend to be well-documented, prominently, all in one place, and don't change from, say, one package manager to another. You can very easily get all your dockerized services storing their important stuff under one isolated tree, and back up the whole thing. That plus your run scripts are all you need to restore, no matter which host platform you use.
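For what it's worth, a minimal sketch of what I mean (the paths, port, and the Gitea image are just examples; adjust for whatever you actually run):

    # keep every service's state under one tree via bind mounts
    docker run -d --name gitea \
      -v /srv/appdata/gitea:/data \
      -p 3000:3000 gitea/gitea

    # then backing everything up is one tar of that tree plus your run scripts
    tar czf /backups/appdata-$(date +%F).tar.gz /srv/appdata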
[EDIT] in fact, getting Samba set up for my super simple and surely common use case of "I want these folders shared read-only for everyone, and these others writable for this one user" was much easier with Docker than it's been since back when I used Gentoo. The fancier distros all seem to make it a big pain in the ass to do anything other than sharing user directories through the GUI, and change Samba configs between seemingly every major release, and they're always a mess. With the Dockerized version it was one short and sweet arcane magical line per directory I wanted to share, so no clearer, but very short and worked on the first try.
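For reference, with the popular dperson/samba image the whole thing can look something like this (flag syntax from memory, so double-check that image's README; each -s flag is one share in the form "name;/path;browseable;read-only;guest-ok;users", and the user and paths here are made up):

    docker run -d --name samba -p 139:139 -p 445:445 \
      -v /srv/media:/media \
      -v /srv/docs:/docs \
      dperson/samba \
      -u "alice;secretpassword" \
      -s "media;/media;yes;yes;yes" \
      -s "docs;/docs;yes;no;no;alice"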
Kubernetes is a tool to set up and manage computer clusters composed of COTS hardware, and also to manage how processes are deployed and run on them. That's not exactly your typical hobbyist's scale.
I mostly use it for random experimentation/learning and like not having to think about starting up/shutting down Minikube on my main laptop!
I'm kind of in this weird position where I understand the benefits and use of k8s, but I:
a) Can't think of any cutesy distributed systems/microservices type thing that I could or would want to run on a low-power machine locally (lack of processing power, or the ISP getting pissed off at the massive amount of traffic if you're e.g. scraping a ton of data and doing stream processing on it in your little cluster)
b) Don't really understand the point in investing time in it, as it feels like one of those things you learn on the job as it comes up. And for a lot of people (the majority, probably?) it'll probably never even come up unless they're just hunting for new tech to introduce at work, regardless of whether the business actually needs it. And IMO, most businesses don't even have a compelling reason to switch from the old 3-tier monolith architecture.
Kubernetes knowledge is becoming an important job skill, and if your current employer does not use Kubernetes, you'll need to learn it on your own.
> IMO, most businesses don't even have a compelling reason to switch from the old 3-tier monolith architecture.
Compelling reasons include self-healing, autoscaling, and official support from all major cloud providers. In my experience, it's actually easier to adopt Kubernetes at a smaller company than a larger one.
It's also becoming harder to hire developers who are willing to work on monolithic codebases. It's been a while since those were the state of the art, and a lot of people with 5+ years in the industry have never seen them before.
We adopted Kubernetes at our startup, and it solved most of the CI/CD and DevOps issues for the team. We didn't need the scaling either; we have at most maybe 2 containers of a service, but we know we are ready.
There are plenty of companies out there that have old COTS hardware lying around and require some sort of IT infrastructure. Being able to set up and manage a working cluster of COTS hardware is a great way to add computational resources to a company without spending any additional cash or having to rubber-stamp permits.
I personally have done this in a previous job, and a small POC with minikube turned into a 3-node kubeadm cluster that deployed and managed two company-wide intranet services like a breeze. Zero cash was spent, the only resource used was a few hours of my time, everyone benefitted, and managers were very happy with the result.
I also baked a 25+ service AI platform onto 4 virtual machines running Kubernetes, for deployment in an air-gapped system without a knowledgeable operator. It was an excellent choice for that project because of the auto-healing capabilities.
I have also run it at a small startup where we had a combination of static nginx sites, Ruby on Rails sites, Elixir sites, a Node.js app, and even a C++ app (it was at a crypto company, if you are wondering why so many disparate languages). Having a single deploy pipeline for 5+ different languages and architectures was awesome. I would have killed myself if I needed to support all of those in their native environments at the same time.
There are lots of good use cases for k8s, and honestly it's not that hard if you already have sysadmin skills, because you understand the problems it solves and how it works. Most of the folks I have seen struggle with it are developers (and likewise, I struggle with OOP sometimes - I don't mean to diminish developers' skills).
b) IMO, self-hosting and less centralization of the digital services we rely on is highly desirable for society. (Whether k8s is the right solution for any particular individual to orchestrate their stuff is a different story.) I think for most people who do this, it's a hobby and something they enjoy. Why would you have a vegetable patch when there's food in the store and your employer has a complimentary lunch cafeteria?
Hackers gonna hack, you know.
I don't need to spend my time developing skills for the job I've already got; I need to develop skills for the next job I'll have.
> most businesses don't even have a compelling reason to switch from the old 3-tier monolith architecture.
...thus showing I can't rely on my employer to keep my skills up to date for me.
Not that there's no market for specialists in older technology - back in 1999 I heard rumours that COBOL experts were commanding huge salaries to work on millennium bug mitigation in banks. But people following that career path should be choosing it consciously, not by accident :)
Being in demand means close to nothing. Once I was contacted to work on a tool that was developed in Delphi and I would hardly suggest anyone should pivot from their career to jump on that gravy train.
You should build up the skills required for your next job, not your current offers.
If the price of a stock or house or investment has been rising for two years, it's risen in cost to get into, and you might never see the gains people have seen in the past; I doubt you'll ever get a bitcoin for a dollar again!
But if job adverts for a new tech have been rising for two years, it doesn't cost any more to learn than on the day it was released. Maybe less, in fact, as there will be more tutorials and more experts to learn from.
Kubernetes is pretty lean. It does require a significant mental load to get up and running, but that's mostly due to how it forces developers to venture into the old and largely unfamiliar sysadmin territory, where you need to pay attention to more than just the compiler finishing a build job.
This is speaking from experience. I love my k3s RPi cluster, it's fantastic once you get things working. I just had to augment with some x86 nodes in order to _also_ run some software that just wouldn't run anywhere else.
I was just about to start using this. Have you seen this article: https://vocon-it.com/2018/12/20/kubernetes-local-persistent-...
The strategy seems to be to create a storage class per app and make sure each persistent volume claim binds a distinct storage class. It sounds like a lot of heavy lifting, but it's just a few YAML files. The alternative seems to be https://github.com/rancher/local-path-provisioner, which uses the same Local Persistent Volume strategy under the hood to fulfill PVCs as they come online, but does not require the per-app storage class arrangement I understood from the tutorial linked above.
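For reference, once local-path-provisioner is installed, each app's claim really does shrink to a few lines. A sketch, assuming its default storage class is still named local-path and the deploy manifest hasn't moved (the claim name and size here are made up):

    # install the provisioner (check the repo for the current manifest location)
    kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

    # then a PVC only needs to name the shared storage class
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-app-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 2Gi
    EOF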
If you want to use an SSD attached to only one special/master node, I don't think that's the wild wild west; I think that's actually closer to what you might see in a traditional network storage architecture. It should be possible to use something like Portworx or Rook to make that work, if they are supported on RPi. That's not the same as the local storage provisioner, though.
If you know more about it than I do I'm happy to hear more about where you got stuck, since I'll be trying to implement this strategy for myself on a non-RPi cluster with a stable set of nodes soon (stable as in, pets not cattle).
As far as I understand, the local storage provisioner is for node-local storage; the storage doesn't follow the workload to whatever node the pod is scheduled on, but rather the pod gets scheduled to the node containing the storage device. It doesn't allow pods to access local storage outside their own node.
So for worker nodes using storage on the master node, isn't it better to use either iSCSI or NFS?
If you want to have a 'storage node' in a simple way, the NFS storage provider is the way to go. You install the NFS client libs on each node, set up an NFS share, and configure and run the provisioner.
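Roughly what that can look like these days, assuming the nfs-subdir-external-provisioner Helm chart (the successor to the external-storage provisioner linked below); the server address and export path are placeholders:

    # on every node (Debian/Ubuntu): the NFS client libs
    sudo apt-get install -y nfs-common

    # then deploy the provisioner pointing at your share
    helm repo add nfs-subdir-external-provisioner \
      https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    helm install nfs-provisioner \
      nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
      --set nfs.server=192.168.1.10 \
      --set nfs.path=/srv/nfs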
My experience with iSCSI is to stay the heck away from it. It is not what you want. iSCSI is really meant for people who already have iSCSI SANs, not people who have a disk they want to share. The more I learned about it, the more I learned that I should have picked something else for every use. It's not that it's bad; it's that it solves a much different problem than I expected, given the networked nature of it.
 https://github.com/kubernetes-incubator/external-storage/tre... (I think this is the right one, been a while).
This article does the NFS approach justice, I think; I was pleased to find it has been a working strategy for a while!
Source: my company has to build software for Arm on a regular basis.
I like it! ^^
If the services are required to be online, it's likely you'd want your network online as well, so you'll need a battery backup for your router. Then you run the Raspberry Pi on the same battery backup you have your router on.
It might be fun to implement some sort of service that checks battery life once per day and turns off a smart plug (which the Macbook power adapter is plugged into) and lets the Macbook drain once per day before turning it back on, though.
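A dumb sketch of that, run from cron on the Macbook itself; pmset is the stock macOS tool, while the smart-plug URLs are placeholders for whatever your plug's actual API is (Kasa, Home Assistant, etc.):

    #!/bin/sh
    # battery percentage, parsed out of pmset's "... 85%; charging; ..." output
    PCT=$(pmset -g batt | grep -Eo '[0-9]+%' | head -1 | tr -d '%')

    PLUG_OFF="http://plug.local/off"   # placeholder endpoints
    PLUG_ON="http://plug.local/on"

    if [ "$PCT" -ge 95 ]; then
        curl -s "$PLUG_OFF"    # full enough: cut mains power and let the battery cycle
    elif [ "$PCT" -le 40 ]; then
        curl -s "$PLUG_ON"     # drained enough: restore power
    fi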
The fans would start whirring, and the device was immediately unusable for anything else. I switched to using microk8s, which is slightly better, but still makes the device crawl. The MBA also only has 4GB RAM, which is at the very low end of what you might count as k8s-ready.
I have an old T460S; I ran VirtualBox on it for a short while, and it was so slow it felt like something was wrong.
I think databases (for production use) are better off managed as services? So typically on physical hw next to the hw that runs k8s?
The alternative (as mentioned here) is to guarantee local storage on the same hw as the pod running the DBs - and you'd typically want to dedicate CPU, IO, and RAM to the db - probably dedicate physical hw to the db pod(s).
Maybe k8s can take care of failing over to a follower - but I don't think that's likely to work without a "plug-in" in k8s for your db of choice?
Maybe someone running Redshift, Spanner, or the Azure SQL service has some insight?
Edit: for development and testing, that's another matter. But even then I think I'd prefer "provision a new db/schema on my beefy db service" to spinning up a pod that just happens to run a db daemon.
In terms of the point about taking this into account in your application: as long as you have all the db replicas under one umbrella as a deployment/service, then having one endpoint for the db is fine, and it is no concern of the application.
Keep in mind I am still learning Kubernetes, but this is what I have done to scale up separate back end components. Are there any objectionable/wrong practices being done?
First, ReadWriteMany implementations (which depend on your cluster) might not guarantee the sort of POSIX filesystem consistency that databases expect.
Second, does Postgres in read-only replica expect to be run on a read-only, possibly-changing volume? What's the consistency model then?
The standard way of doing this is to run a single postgres instance on a single PVC/PV (that replicates across the cluster anyway), letting the cluster move the pod if it dies. In addition, you can run read-only postgres replicas for some semblance of read-only HA while the master reschedules on failure. You can also go deeper into faster failover mechanisms (without having the k8s scheduler in the hot path of that) using any of the tons of postgres HA systems.
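A bare-bones sketch of that standard setup (single replica, one PVC via a volumeClaimTemplate; the names, password handling, and storage class are placeholders):

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: postgres
    spec:
      serviceName: postgres
      replicas: 1              # single writer; on failure the pod reschedules and reattaches the PVC
      selector:
        matchLabels: { app: postgres }
      template:
        metadata:
          labels: { app: postgres }
        spec:
          containers:
          - name: postgres
            image: postgres:13
            env:
            - name: POSTGRES_PASSWORD
              value: changeme  # use a Secret for anything real
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: replicated-storage   # placeholder: whatever class your cluster replicates
          resources:
            requests:
              storage: 5Gi
    EOF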
I've used kind successfully on WSL for experimenting locally, and even found a script to open up ports on the firewall and set up a port "forwarding" of sorts using the netsh utility, which let me access a program bound to a port within WSL. Though I suspect there would be additional hurdles, depending on how the networking for kind works.
Edit: the part about how annoying it is to set up an embedded dev environment may not be true anymore with Windows 10’s WSL (I haven’t tried it)
WSL is great although I often prefer to SSH into a VM or run a docker container (in a VM). There are still some lingering performance problems with filesystems that they haven't solved yet though (partially solved by using VMWare for my own VMs).
But yeah, it's just a shell script that hits the API every 10 mins.
If it's instead three etcd processes in the same VM it's still a cluster.
If you evacuate two of the processes and allow a single node to maintain quorum it's still a cluster.
"I always get bugged when people use the word array to refer to an array of length 1."
It already had MacOS installed and there was no strong reason not to use it.
But MacOS does handle low memory conditions much better than Windows does.
MacOS (on apple hardware) does have the benefit of optimised fan curves and undervolted CPU profiles which would be hard to replicate in Linux though.
Catering for edge cases like: when closing the lid, it should not go to sleep; with an external drive connected and mounted, it won't boot if that drive somehow disappears; the Apple remote cannot easily be disabled (there's one used in the room, and it's picked up by the laptop constantly).
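For anyone fighting the same lid-close problem, the stock tools get you most of the way there; a rough sketch (I believe disablesleep exists on newer macOS, but verify against pmset's man page on your version):

    # never idle-sleep, even on battery
    sudo pmset -a sleep 0 disksleep 0
    # reportedly also covers the closed-lid case on newer macOS (verify first)
    sudo pmset -a disablesleep 1
    # alternative: hold sleep assertions for as long as this command runs
    caffeinate -d -i -m -s &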
I also have occasional issues where the trackpad stops working after a period, and requires a reboot to fix. More of an X issue I believe.
Couple ideas, for what little they may be worth. Grub might be installed on the wrong drive. Or, if you’re getting past grub, look up systemd’s nofail option for /etc/fstab.
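If it's the fstab route, the relevant bit is just the nofail option (plus, optionally, a short device timeout) on the external drive's line; the UUID here is a placeholder:

    # /etc/fstab: keep booting even if the external drive is absent,
    # and don't let systemd wait the default 90s for it
    UUID=xxxx-xxxx  /mnt/external  ext4  defaults,nofail,x-systemd.device-timeout=5s  0  2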
Interesting, my work machine runs a Linux VM on a Windows 10 host. It regularly uses 95 to 96% of my 16GB RAM. When this happens, Linux grinds to a halt, yet programs running on Windows are still usable. MacOS must be amazing.
I found no mention of this in the article, which I think is dangerous. Unfortunately I didn't find any way to contact the author on his site (except an unused comment plugin).
Thanks for the word of warning. I'll take a look at removing the battery to avoid any issues.
For instance, my MacBooks tend to be in use 5 - 10 years because they get handed down, and I essentially only run MacBooks while they're plugged in (e.g., off power less than one day a week), and have never had that happen.
I certainly see battery capacities drop after 3 years or more, and simply buy a new battery.
OWC "MacSales" batteries: https://eshop.macsales.com/shop/Apple/Laptop/Batteries
I've run several laptops as you describe without issue, however I've also run two macbooks at two different times plugged in 24/7 as "servers" and both had this issue within 3 years. The first of these two shattered the glass trackpad which was a safety issue in itself. Apple agreed and fixed it for free even though the warranty had expired!
This happened to one of my Dell laptops last week since it had been docked for a year and already had a worn battery. So that's why it was fresh on my mind when I commented. Luckily I caught it because the plastic case bent upwards...
For the past two weeks though, I haven't had any issues with this setup on the Macbook Air!
I thought Kubernetes was all about containers, not VMs?
Why not just boot proper Linux on that thing?
Even on Linux, minikube uses a separate VM. It's just cleaner than having the kubelet running on your workstation directly. (microk8s takes a different approach and runs on your machine directly. The last time I interacted with it, it destroyed my coworker's workstation and we had to reinstall the machine completely. k8s is pretty invasive and really wants an entire machine at its disposal. VMs are just perfect for that.)
Thus my question: If the aim is to use the machine for a Kubernetes "cluster", why not boot proper Linux on it, so Kubernetes can run at full speed, without any VM overhead?
I think right now if you want to have a single-node testing cluster, you will be very happy with minikube and the VM it creates. If you want multiple nodes, you will be very happy with VMs; you can create, destroy, and inject errors right from the command-line without having to walk over to physical machines and manipulate them.
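Concretely, newer minikube even handles the multi-node case itself, VMs and all (flags from memory; check minikube start --help on your version):

    # single-node playground in its own VM
    minikube start --driver=virtualbox

    # multi-node test cluster under a separate profile
    minikube start --nodes 3 -p lab --driver=virtualbox
    minikube node list -p lab
    # "unplug" a node from the command line (use a name from the list above)
    minikube node delete <node-name> -p lab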
When I tried with minikube, the VirtualBox VM would boot successfully, but there was an issue with the networking between the VM and MacOS.
I wish the person who wrote this article described what he did next with this K8s cluster. Probably nothing.
Something like this? https://www.imore.com/mac-mini-mame-arcade-cabinet-project (Looks cool! Some time down the line I'll have to try it out)
The first thing I did with the setup was to learn about the differences between Helm 2 and Helm 3. I had used Helm 2 in my previous job and wanted to get some hands-on experience with the latest version by installing and modifying some helm charts.
This is certainly something that could be accomplished with a similar setup running locally on my primary computer, but I like the reduced (mental) activation energy of always having it ready to go.
You might be able to install Linux directly and then use something like https://microk8s.io/ instead...
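If the machine does end up on bare Linux, microk8s is roughly this much work to stand up (snap-based; add-on names have shifted between releases, so check microk8s.io):

    sudo snap install microk8s --classic
    sudo microk8s status --wait-ready
    sudo microk8s enable dns storage
    sudo microk8s kubectl get nodes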
There are many technologies I can see people wanting to hack on.
But Kube, to me, is going way down the rabbit hole - a tech to support another tech, to support another tech, to support another tech, to do maybe something at scale, which few will ever do.
I feel as though Kube is one of those almost entirely arbitrary forms of complexity that pulls our nerdy attention into the netherworld.
I feel lately that tech people are creating a full-on dystopia of total complexity: more than any one individual can grasp in a lifetime, and a situation in which it's nigh impossible to know even which direction someone should head to be a 'pragmatic contributor' who also has 'some semblance of a life' without woefully falling behind.
It's one thing to have enthusiasts; it's another to have a situation wherein only kids who've been coding since 18 and running their own 'kube clusters' and 10-layer stacks at home have the chops to do what's necessary.
Shitting it up and wearing it down with Kubernetes is a guaranteed wrong move.