Converting an old MacBook into an always-on personal Kubernetes cluster (devopsdirective.com)
189 points by spalas 11 months ago | hide | past | favorite | 125 comments

For what it's worth, I built one like this on an old Linux laptop. I then moved my home lighting controller onto it as a test. Some months later my house lights stopped working, and I had to spend a few hours applying snippets semi-randomly from Stack Overflow. The problem may have been expired Kubernetes certificates, or it may have been something else.

Either way, my conclusion was that Kubernetes is really meant for more scale and attention than I like to give my home infrastructure, so although I enjoyed the experiment, I would encourage people not to run important services on something like this.

FYI, if you're looking to deploy a hobbyist/side-project Kubernetes cluster I'd highly recommend k3s [1] - it's a lightweight certified distribution, and is ridiculously easy to set up. I haven't needed to do any maintenance besides the periodic update.

It also works fine for production in my experience, I have a mildly popular website running on a 4GB VPS with it and haven't had any issues related to k3s itself.

[1]: https://k3s.io/

Amen to that. After avoiding Kubernetes at home for so long (we run quite a lot of clusters in production at work, and while it's not a nightmare, it is a lot of work), I've now moved most of my self-hosted stuff to k3s. It's been a joy to work with.

We're even looking at starting to migrate most of our edge deployments to k3s as well.

Really really awesome piece of work by Rancher.

What is the advantage compared to the traditional method of getting a single Linux system or VM, installing the needed packages via apt/yum, rsyncing over your code (if any) and installing systemd units for anything that needs to run all the time or scheduled?

Are Helm charts easier to use or more prevalent than distribution packages?
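For concreteness, the "traditional method" described above might look like this. This is a hedged sketch; the unit name, user, and paths are hypothetical:

```ini
# /etc/systemd/system/myapp.service -- hypothetical unit for an always-on service
[Unit]
Description=My app
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/opt/myapp/bin/myapp --port 8080
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

You'd then enable it with `systemctl enable --now myapp` and get logs via `journalctl -u myapp`, which covers much of what people use container orchestration for on a single box.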

Similar, and in my experience probably slightly easier, plus it supports things like clustering and easy service management, is microk8s[1] from Canonical.

[1]: https://microk8s.org

Interesting, I'll have to look more into that and the differences to k3s. But k3s does support clustering BTW.

it's actually https://microk8s.io

Thank you very much. I was really not looking forward to running docker in a VM. It seems too much like a case of inception.

> Bro, I heard you like virtualization, so we use virtualization inside virtualization to virtualize while we virtualize!

Hard pass. I hope this can be a nice alternative to managing docker-compose.

I've had great luck with moving my container stack stuff to Portainer (portainer.io), which will do containers and stacks, but is really user-friendly and low-overhead. Combine it with Ouroboros to download and auto-deploy the latest versions of containers and it's really low-touch.

I have a strong feeling that the added complexity of Kubernetes does not give most use cases a net payoff for hobbyists. A lot of the quirks and workarounds with both Docker and Kubernetes seem like problems that were, in many cases, solved in traditional *NIX systems decades ago. I'm sure there are benefits to throwing some of that architecture out for some scenarios, but people shouldn't treat it as the new paradigm by default.

It's not helpful to say some unspecified thing is better than a concrete thing people use.

What Unix tools are an alternative for Docker? I install an app, I download a VM image, and I have a dynamically sized VM with a nice simple UI to control it. VirtualBox is fine too, but more heavyweight for multiple VMs at once, and Docker is where the community and premade VM images are.

Works on any platform.

k8s is probably overkill if you aren't prototyping a production cluster.

Packaging an app in Docker is work up front and then ongoing overhead to manage. If something I write is only going to ever run on one server, it may be easier for me just to install it on the base system, not in a container.

Docker's been great for hobbyist stuff for me. But I don't really use anything fancy on it, just map some folders for storage I want to survive rebuilds[1] and forward necessary ports to the host machine's interfaces. I don't rely on swarm or any of that. I mostly use it to avoid having to worry about distro-specific crap, since I no longer care to (re-)learn any of that. I don't even use docker-compose since almost none of my personal services depend on one another, just simple shell scripts that all look pretty similar.

[1] In some ways this is easier than running traditional daemons, as storage locations for dockerized services tend to be well-documented, prominently, all in one place, and don't change from, say, one package manager to another. You can very easily get all your dockerized services storing their important stuff under one isolated tree, and back up the whole thing. That plus your run scripts are all you need to restore, no matter which host platform you use.

[EDIT] in fact, getting Samba set up for my super simple and surely common use case of "I want these folders shared read-only for everyone, and these others writable for this one user" was much easier with Docker than it's been since back when I used Gentoo. The fancier distros all seem to make it a big pain in the ass to do anything other than sharing user directories through the GUI, and change Samba configs between seemingly every major release, and they're always a mess. With the Dockerized version it was one short and sweet arcane magical line per directory I wanted to share, so no clearer, but very short and worked on the first try.
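The "simple shell scripts that all look pretty similar" pattern the parent describes might be sketched like this. Everything here is hypothetical (service name, ports, image); the point is one parameterized run script per service, with all persistent data under a single tree:

```shell
#!/bin/sh
# run-myapp.sh -- hypothetical per-service launcher sketch.
# All persistent data lives under $DATA_ROOT so backup is a single rsync/tar.
DATA_ROOT="${DATA_ROOT:-$HOME/docker-data}"

docker_args() {
  # $1=name  $2=host port  $3=container port  $4=image
  printf 'run -d --name %s --restart unless-stopped -v %s/%s:/data -p %s:%s %s' \
    "$1" "$DATA_ROOT" "$1" "$2" "$3" "$4"
}

# Uncomment to actually launch the container:
# docker $(docker_args myapp 8080 80 myapp:latest)
```

Each service's script differs only in the arguments, which is why they all "look pretty similar".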

> I have a strong feeling that the added complexity of Kubernetes does not give most use cases a net payoff for hobbyists.

Kubernetes is a tool to set up and manage computer clusters built from COTS hardware, and also to manage how processes are deployed and run on them. That's not exactly your typical hobbyist's scale.

Interesting, and I agree that this is not a suitable location for running anything important!

I mostly use it for random experimentation/learning and like not having to think about starting up/shutting down Minikube on my main laptop!

What sort of fun projects/experiments do people do with a super low powered k8s cluster locally?

I'm kind of in this weird position where I understand the benefits and use of k8s, but I:

a) Can't think of any cutesy distributed systems/microservices type thing that I could or would want to run on a low-power machine locally (lack of processing power, or the ISP getting pissed off at the massive amount of traffic if you're e.g. scraping a ton of data and doing stream processing on it in your little cluster)

b) Don't really understand the point of investing time in it, as it feels like one of those things you learn on the job as it comes up. And for a lot of people (the majority, probably?) it'll probably never even come up unless they're hunting for new tech to introduce at work regardless of whether the business actually needs it. Which IMO, most businesses don't even have a compelling reason to switch from the old 3 tier monolith architecture.

One benefit is to learn how to admin a Kubernetes cluster.

Kubernetes knowledge is becoming an important job skill, and if your current employer does not use Kubernetes, you'll need to learn it on your own.

> Which IMO, most businesses don't even have a compelling reason to switch from the old 3 tier monolith architecture.

Compelling reasons include self-healing, autoscaling, and official support from all major cloud providers. In my experience, it's actually easier to adopt Kubernetes at a smaller company than a larger one.

It's also becoming harder to hire developers who are willing to work on monolithic codebases. It's been a while since those were the state of the art, and a lot of people with 5+ years in the industry have never seen them before.

This sounds a lot like CV-driven development, rather than any actual use cases. That's not to say it's a bad idea for an individual dev, but might not say great things about our industry.

We have adopted Kubernetes at our startup. It's solved a couple of problems but created a whole lot more. We aren't at the stage that we need the scaling yet.

We evaluated Kubernetes and chose Nomad instead. It works really well (although it's not as feature-rich as K8s) and it allows smaller teams to understand how the whole setup works. It lowers key-man risk, IMO.

What problems did it create?

We adopted Kubernetes at our startup, and it solved most CI/CD and DevOps issues for the team. We didn't need the scaling either (we have at most maybe 2 containers of a service), but we know we are ready.

Out-of-memory issues, because we use multiple small machines instead of one big one. There are others, but we haven't worked out what they are yet.

> This sounds a lot like CV-driven development, rather than any actual use cases.

There are plenty of companies out there that have old COTS hardware lying around and require some sort of IT infrastructure. Being able to set up and manage a working cluster of COTS hardware is a great way to add computational resources to a company without spending any additional cash or having to get purchases rubber-stamped.

I personally have done this in a previous job, and a small POC with minikube turned into a 3-node kubeadm cluster that deployed and managed two company-wide intranet services like a breeze. Zero cash was spent, the only resource used was a few hours of my time, everyone benefitted, and managers were very happy with the result.

Kubernetes is the 2020s equivalent of the 1980s' "nobody gets fired for buying IBM".

Sort of, but that's only considering the worst possible outcome...

This subthread is about playing with it at home.

Why do you think people write these blog posts?

I've been running K8s since its first stable release. It worked wonderfully for a 30-microservice stack of Node.js and Scala applications for a financial services company. I would never want to manage that many services on EC2 or ECS instances.

I also baked a 25+ service AI platform onto 4 virtual machines, running Kubernetes, for deployment in an air-gapped system without a knowledgeable operator. It was an excellent choice for that project because of the auto-healing capabilities.

I have also run it at a small startup where we had a combination of static nginx sites, Ruby on Rails sites, Elixir sites, a Node.js app, and even a C++ app (it was at a crypto company, if you're wondering why so many disparate languages). Having a single deploy pipeline for 5+ different languages and architectures was awesome. I would have killed myself if I needed to support all of those in their native environments at the same time.

There are lots of good use cases for k8s, and honestly it's not that hard if you already have sysadmin skills, because you understand the problems it solves and how it works. Most of the folks I have seen struggle with it are developers (and likewise, I struggle with OOP sometimes - I don't mean to diminish developers' skills).

a) I'd advise starting with something fun. Many people do a usenet/torrent stack (jellyfin/plex, radarr, sonarr, jackett, transmission, ombi, NZB*, etc). And honestly most things you'd want to self-host are reasonable to run in containers. E-mail. Huginn. Cryptocurrency node. CalDAV, file hosting (Nextcloud?). Personal web site and whatever side-projects you have. GitLab. Docker image repository.

b) IMO, self-hosting and less centralization of the digital services we rely on is highly desirable for society. (Whether k8s is the right solution for any particular individual to orchestrate their stuff is a different story.) I think for most people who do this, it's a hobby and something they enjoy. Why would you have a vegetable patch when there's food in the store and your employer has a complimentary lunch cafeteria?

Hackers gonna hack, you know.

> Don't really understand the point in investing time in it, as it feels like one of those things you learn on the job as it comes up

I don't need to spend my time developing skills for the job I've already got; I need to develop skills for the next job I'll have.

> most businesses don't even have a compelling reason to switch from the old 3 tier monolith architecture.

...thus showing I can't rely on my employer to keep my skills up to date for me.

Not that there's no market for specialists in older technology - back in 1999 I heard rumours that COBOL experts were commanding huge salaries to work on millennium bug mitigation in banks. But people following that career path should be choosing it consciously, not by accident :)

COBOL is still in demand and highly paid.

> COBOL is still in demand and highly paid.

Being in demand means close to nothing. Once I was contacted to work on a tool that was developed in Delphi and I would hardly suggest anyone should pivot from their career to jump on that gravy train.

You should build up the skills required for your next job, not your current offers.

You're betting on the technology demand moving in a certain direction. To me it feels like investing in stocks, but instead of betting with your money you are betting with your time and brain cycles. What makes you so confident that this piece of technology will flourish compared to so many others?

Not the best analogy, IMHO

If the price of a stock or house or investment has been rising for two years, it's risen in cost to get into, and you might never see the gains people have seen in the past; I doubt you'll ever get a bitcoin for a dollar again!

But if job adverts for a new tech have been rising for two years, it doesn't cost any more to learn than on the day it was released. Maybe less, in fact, as there will be more tutorials and more experts to learn from.

Kubernetes is lightweight, extensible and based on open standards, which is the recipe for a long-term solution in this space. It also has first-class support from all of the major cloud providers and has an established tooling ecosystem around it.

I'll agree with everything here besides 'lightweight'.

> I'll agree with everything here besides 'lightweight'.

Kubernetes is pretty lean. It does require a significant mental load to get up and running, but that's mostly due to how it forces developers to venture into old and largely unfamiliar sysadmin territory, where you need to pay attention to more than just the compiler finishing a build job.

I also take some exception to that, but to be fair I've heard that it fits into a single 40MB binary...

Especially as the labor market is about to be flooded with devops k8s folks, looking to apply their skills.

I use mine to run the CI/CD for my projects. I used minikube to set up a qemu-based k8s cluster.

You could sell the MacBook and build a decent Raspberry Pi cluster with the money you make on it.

A big problem is that the local storage provisioner for Kubernetes is still not GA. I want to use an SSD attached to one master, and use it as storage for the other RPis. Doing this is still undocumented/the Wild West.

Another frustration with the RPi cluster is the ARM requirement. If you're only building/running your own stuff, that's fine, but finding containers and Helm charts that support ARM can be frustratingly difficult.

This is speaking from experience. I love my k3s RPi cluster, it's fantastic once you get things working. I just had to augment with some x86 nodes in order to _also_ run some software that just wouldn't run anywhere else.

Local Persistent Volumes, though, have been GA since 1.14 (and I think entered beta in 1.10 or 1.12), so they should be usable now.

I was just about to try to start using this, have you seen this article: https://vocon-it.com/2018/12/20/kubernetes-local-persistent-...

The strategy seems to be to create a storage class per app and make sure each persistent volume claim binds a distinct storage class. It sounds like a lot of heavy lifting, but it's just a few YAML files... The alternative seems to be https://github.com/rancher/local-path-provisioner which uses the same Local Persistent Volume strategy under the hood to fulfill PVCs as they come online, but does not require the per-app storage class arrangement I understood from the tutorial linked before.
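A rough sketch of that per-app arrangement, for anyone curious. Names, the path, and the hostname are hypothetical; `no-provisioner` with `WaitForFirstConsumer` is the standard Local Persistent Volume setup:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myapp-local            # one class per app so its PVC binds this PV
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: myapp-local
  local:
    path: /mnt/ssd/myapp       # hypothetical directory on the attached SSD
  nodeAffinity:                # pin the volume to the node that owns the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-with-ssd"]
```

The node affinity is what makes the scheduler place the consuming pod on the node with the disk, rather than moving the storage to the pod.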

If you want to use an SSD attached to only one special/master node, I don't think that's the wild wild west; I think that's actually closer to what you might see in a traditional network storage architecture. It should be possible to use something like Portworx or Rook to make that work, if they are supported on RPi. That's not the same as a local storage provisioner, though.

If you know more about it than I do I'm happy to hear more about where you got stuck, since I'll be trying to implement this strategy for myself on a non-RPi cluster with a stable set of nodes soon (stable as in, pets not cattle).

Disclaimer: k8s beginner here.

As far as I understand, the local storage provisioner is for node-local storage; the storage doesn't follow the workload to whatever node the pod is scheduled on, but rather the pod is scheduled to the node containing the storage device. It doesn't allow pods to access local storage outside their node.

So for worker nodes using storage on the master node, isn't it better to use either iSCSI or NFS?

This is exactly right. Local storage is conceptually the persistent volume equivalent of an emptyDir volume mount.

If you want a 'storage node' in a simple way, the NFS storage provider is the way to go. You install the NFS client libs on each node, set up an NFS share, and configure and run the provisioner[1].
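Even without the dynamic provisioner, a static NFS-backed volume pointing at the storage node is only a few lines. The server address and export path here are hypothetical placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteMany"]   # NFS allows many pods to mount it at once
  nfs:
    server: 192.168.1.10           # hypothetical: the node exporting the share
    path: /srv/exports/k8s
```

Pods on any node can then claim this volume, with the storage staying put on the one machine with the disk.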

My experience with iSCSI is to stay the heck away from it. It is not what you want. iSCSI is really meant for people who already have iSCSI SANs and not people who have a disk they want to share. The more I learned about it, the more I learned that I should have picked something else for every use. It's not that it's bad, it's that it solves a much different problem than I expected given the networked nature of it.

[1] https://github.com/kubernetes-incubator/external-storage/tre... (I think this is the right one, been a while).

NFS-Provisioner and NFS-Client have been around for a while. They're about to be promoted from Incubator actually.

That is fantastic, I remember when they were new ideas without any implementation but just a few people trying some things on GitHub issues.


This article does the NFS approach justice I think, I was pleased to find it has been a working strategy for a while!

I am not a kubernetes expert so I have no idea. Sounds like a fun challenge. I remember seeing something about network mounts for linux in general. Maybe go around kubernetes? If you solve it I'd be interested in how you did it, so an update would be greatly appreciated.

Kubernetes supports pretty much any type of storage through API Extensions so network mounts are supported. People often use the NFS-Client & NFS-Provisioner projects for this.

Have you used OpenEBS?

Would you not want to stay with an Intel-based CPU?



Because 99% of the time, building images for ARM processors is a pain in the ass. It's not just about making sure you have all the requirements lined up; the processors themselves are pretty slow compared with a relatively modern Intel CPU, so builds take way longer. Compiling TensorFlow, for example, can take days on a Raspberry Pi.

Source: my company has to build software for Arm on a regular basis.

Only a clam seller would come up with that ;-)

I like it! ^^

It took me a lot longer than I'd like to admit to get what you mean ;)

except it may not easily survive a power outage

Power outages aren't common enough here that this is a big concern. And since the network is likely to be down, there is likely zero point in keeping your server online. (Your services can survive a reboot cycle right?)

If the services are required to be online, it's likely you'd want your network online as well so you'll need a battery backup for your router. So you run the Raspberry Pi on the same battery backup you have your router on.

I believe running a MacBook as an always on server will pretty rapidly destroy the battery anyway.

The MacBook appears to be pretty intelligent about its battery. My 2016 is still over 90% healthy and regularly gets over 5 hours on a single charge (load dependent, of course). It's pretty much on 24/7.

The issue is that the battery never gets a chance to discharge if you’re using it as a server. Unless you’re unplugging it and monitoring the battery life and plugging it back in every day.

Hmm, most of my work laptops never get used unplugged and I've never had issues.

It might be fun to implement some sort of service that checks battery life once per day and turns off a smart plug (which the Macbook power adapter is plugged into) and lets the Macbook drain once per day before turning it back on, though.

For me there's a very good reason for running a home k8s cluster: the dogfood factor. I run my smart home and home surveillance (ZoneMinder) stuff, along with a UniFi controller, on k8s, and I must say it helps me a great deal with the SRE part of my job. I do learn some important things before I encounter them in a work setting, such as: https://twitter.com/ivan4th/status/1236481744477532171

An HP or Dell USFF PC would be a great alternative. Something like an OptiPlex 7010 or 800 G2. You can pick them up for $50 used, if you need more power you can add as large of an SSD as you want plus 16GB of RAM, they're small and quiet and can run Linux.

Yes, but this cost the author $0.

I used to run minikube on my MBA (running Arch) until last year. It was very underpowered for running a cluster, as I guess they'll soon find out. This was a 2015 MBA, so the OP's 2012 MBA is even less powerful.

The fans would start whirring, and the device was immediately unusable for anything else. I switched to using microk8s, which is slightly better, but it still makes the device crawl. The MBA also only has 4GB RAM, which is very low for anything you might count as k8s-ready.

Installing Linux on the MacBook Air would be a huge improvement. Less overhead and not having to use VirtualBox to run Linux anyway...

I have an old T460S, I ran VirtualBox on it for a short while, it was so slow it felt like something was wrong.

From my experience running a 3 node cluster (as VMs) etcd was a beast in terms of CPU and disk use.

There are things I don't understand with Kubernetes and the "spin up containers as needed" model. How do you spin up additional database containers? They won't sync unless your code takes that into account... Say I have a simple WP site running with a web, a PHP, and a DB container. What's the scheme for adding 2 or 3 more database containers while visitors still see the same content (and the databases stay consistent)? Set up replication?

For complex services (maybe defined as simply as needing low-latency, durable, high-write-rate storage), I'm not even sure you should run them on a k8s pod. K8s offers some "services", notably storage, that AFAIK don't typically run "on" k8s, but "as part of" k8s.

I think databases (for production use) are better off managed as services? So typically on physical hw next to the hw that runs k8s?

The alternative (as mentioned here) is to guarantee local storage on same hw next to the pod running the DBs - and you'd typically want to dedicate cpu, io and ram to the db - probably dedicate physical hw to the db pod(s).

Maybe k8s can take care of failing over to a follower - but I don't think that's likely to work without a "plug in" in k8s for your db of choice?

Maybe someone running Redshift, Spanner, or the Azure SQL service has some insight?

Ed: for development and testing, that's another matter. But even then I think I'd prefer "provision a new db/schema on my beefy db service" to spinning up a pod that just happens to run a db daemon.

To tackle this issue in Kubernetes (specifically for a DB), I have created persistent volume claims that are mounted across replicas in a DB deployment, permitting multiple readers but only one writer. If we know that Postgres stores all its data in /var/x, then we can mount /var/x in all replicas as a shared volume.

In terms of the point related to taking into account in your application, as long as you have all the db replicas under one umbrella as a deployment/service, then having one endpoint for the db is fine and it is no concern of the application.

Keep in mind I am still learning Kubernetes, but this is what I have done to scale up separate back end components. Are there any objectionable/wrong practices being done?

This is not the right approach.

First, ReadWriteMany implementations (which depend on your cluster) might not guarantee the sort of POSIX filesystem consistency that databases expect.

Second, does Postgres in read-only replica expect to be run on a read-only, possibly-changing volume? What's the consistency model then?

The standard way of doing this is to run a single postgres instance on a single PVC/PV (that replicates across the cluster anyway), letting the cluster move the pod if it dies. In addition, you can run read-only postgres replicas for some semblance of read-only HA while the master reschedules on failure. You can also go deeper into faster failover mechanisms (without having the k8s scheduler in the hot path of that) using any of the tons of postgres HA systems.
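The "single instance on a single PVC" arrangement described above might be sketched as follows. Names and sizes are hypothetical, and the storage class is whatever your cluster provides:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1                  # exactly one writer; if the node dies,
  selector:                    # the scheduler reschedules the pod elsewhere
    matchLabels: { app: postgres }
  template:
    metadata:
      labels: { app: postgres }
    spec:
      containers:
        - name: postgres
          image: postgres:12
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]   # deliberately not ReadWriteMany
        resources:
          requests:
            storage: 20Gi
```

The `ReadWriteOnce` access mode is the point: only one pod writes the data directory, sidestepping the filesystem-consistency problems of sharing it.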

From what I understand, this is what you need Kubernetes database operators for. They configure the database cluster for you.

Or how to elegantly waste a perfectly fine piece of hardware in a neatly over-engineered way ;)

I wonder what it would take to do something like this with kind[0] so you can have something resembling an actual multi-node cluster, if it's even possible.

I've used kind successfully on WSL for experimenting locally, and even found a script to open up ports on the firewall and set up a port "forwarding" of sorts using the netsh utility, which let me access a program bound to a port within WSL. Though I suspect additional hurdles considering however the networking for kind works.

[0] https://kind.sigs.k8s.io/

Why run windows at all?

Dual booting/VMs might not be practical or worth the effort if you're a gamer or work in one of these fields: mechanical/electrical engineering, firmware, lab or factory automation, architecture, etc.

I work in a field related to mechanical, electrical engineering and firmware, and I not only think it’s definitely worth the effort to dual boot (I only use Windows for SolidWorks and some other instrumentation programs), I think it’s completely necessary for the more firmware-y parts of my job. Setting up an embedded firmware development environment where you have a decent level of control (so vendor-supplied IDEs are out) is about as annoying as setting up dual boot in the first place, with the obvious benefit that you get a proper UNIX-like system which in my experience increases my productivity. Also using a tiling WM is kind of a must for me when programming and cross-referencing various documents.

Edit: the part about how annoying it is to set up an embedded dev environment may not be true anymore with Windows 10’s WSL (I haven’t tried it)

To each his own - I work in web development but use Altium, Solidworks, Xilinx, and some esoteric embedded compilers that are Windows only for hardware side projects and some Linux infrastructure on k8s clusters for trading. All of my hardware including laptops is powerful enough that I just choose the base OS depending on the context and run VMs to provide any apps I need from other operating systems. Usually the base is Mac OS X for work to avoid deviating from the designers, Windows on personal machines because virtualized 3D still sucks on a per app basis, and Linux on headless machines at home and servers. The context switch is simply too costly for my workflow, especially since I don't always do a good job of firewalling my work from my personal machines.

WSL is great although I often prefer to SSH into a VM or run a docker container (in a VM). There are still some lingering performance problems with filesystems that they haven't solved yet though (partially solved by using VMWare for my own VMs).

I guess the only limiting factor in dual booting would be a lack of the skill needed to set it up. It's quite intimidating at first. I have been dual-booting for ~8 years and I have had my fair share of blunders, but nowadays setting up a dual-boot PC is very easy, especially for tech-savvy people (people who are running a k8s cluster). I can't even recall a time when I couldn't at least restore the Windows part of the OS.

I don't think it was setting up the dual boot that GP was saying was impractical, more having to constantly reboot to switch back & forth between OSes.

I don't think it's just a skill thing; multibooting is more surface for "interesting" bugs and annoyances as you force interactions between systems that weren't designed to work together. Sure, you might know how to fix Windows overwriting the bootloader for the nth time, but it's still a hassle. Yes, you know how to get the UEFI settings the way you want them, but if you were single-booting the system would have done it for you. I get that it's not that bad once you're used to it, but there is a cost to these things.

It starts with messed up clocks and ends up with a bricked SecureBoot.

If you want to use your own domain, you can turn Cloudflare into a DDNS by frequently checking your public IP for changes and updating the records on Cloudflare using their API if it changes (this is free, except for the domain cost): https://github.com/punnerud/cloudflare-ddns

I'm doing exactly this now on an RPi running Docker Swarm. Only difference is that I pay Namecheap for the domain, so Cloudflare is free...I know Swarm is outdated, but it was super easy to start being useful on just one node, now it's running all sorts of things for me.

But yeah, it's just a shell script that hits the API every 10 mins.
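That kind of script might look roughly like this minimal sketch. The zone/record IDs, token, and record name are placeholders you must supply; the endpoints are Cloudflare's public v4 API and a public what-is-my-IP service:

```shell
#!/bin/sh
# Hypothetical DDNS updater sketch: ZONE_ID, RECORD_ID, CF_TOKEN, and
# RECORD_NAME must be supplied by you; nothing here is the linked project's code.
STATE_FILE="${STATE_FILE:-/var/tmp/ddns-last-ip}"

current_ip() {
  curl -fsS https://api.ipify.org
}

ip_changed() {
  # $1 = freshly observed IP; succeeds when the DNS record needs updating
  [ -f "$STATE_FILE" ] || return 0
  [ "$1" != "$(cat "$STATE_FILE")" ]
}

update_record() {
  curl -fsS -X PUT \
    "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
    -H "Authorization: Bearer $CF_TOKEN" \
    -H "Content-Type: application/json" \
    --data "{\"type\":\"A\",\"name\":\"$RECORD_NAME\",\"content\":\"$1\"}"
}

main() {
  ip=$(current_ip) || exit 1
  if ip_changed "$ip"; then
    update_record "$ip" && printf '%s' "$ip" > "$STATE_FILE"
  fi
}

# Only runs when invoked explicitly, e.g. from cron:
#   */10 * * * * DDNS_RUN=1 /usr/local/bin/ddns.sh
[ -n "${DDNS_RUN:-}" ] && main || :
```

Caching the last-seen IP means the Cloudflare API is only hit when the address actually changes, not every 10 minutes.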

You can also use ddclient, it is available on most Linux distributions.

Am I the only person that gets bugged when people use the word cluster for a single computer?

If I take a physical machine, install ESXi on it, spin up three VMs, install etcd on them, and join them together I have a cluster.

If it's instead three etcd processes in the same VM it's still a cluster.

If you evacuate two of the processes and allow a single node to maintain quorum it's still a cluster.

"I always get bugged when people use the word array to refer to an array of length 1."

Same with groups, no?

Anyone have a theory why they're running OS X on it?

> I happen to have a 2012 MacBook Air sitting around unused now that it is no longer my daily driver.

It already had MacOS installed and there was no strong reason not to use it.

Does it run the latest OS X OK?

Probably for convenience and stability. Linux on the desktop (or on a laptop) has its share of bugs and odd behavior still in 2020.

Definitely no more bugs than its contemporaries.

But MacOS does handle low memory conditions much better than Windows does.

MacOS (on apple hardware) does have the benefit of optimised fan curves and undervolted CPU profiles which would be hard to replicate in Linux though.

Well, I used the word convenience for a reason. I have a MBP running Linux as a server, and it was fiddly to set up correctly. I didn't mind too much, but on macOS you've got none of that.

Catering for edge cases like: when closing the lid, it should not go to sleep; with an external drive connected and mounted, it won't boot if that somehow disappears; the Apple Remote cannot easily be disabled (there's one used in the room, and it's picked up by the laptop constantly).

I also have occasional issues where the trackpad stops working after a period, and requires a reboot to fix. More of an X issue I believe.
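For the lid-close case specifically, on systemd-based distros you can tell logind to ignore the lid switch. A config sketch (restart systemd-logind or reboot afterwards):

```ini
# /etc/systemd/logind.conf -- keep the laptop running with the lid closed
[Login]
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
```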

> with external drive connected and mounted, it won't boot if that somehow disappears

Couple ideas, for what little they may be worth. Grub might be installed on the wrong drive. Or, if you’re getting past grub, look up systemd’s nofail option for /etc/fstab.

I remember what it was now: I had mounted it in fstab using the device path (e.g. /dev/sdb1) instead of the UUID of the disk. The device path seems to occasionally change (which is bizarre, as it is the only device connected). Using a UUID does the trick. But it was just an example of the many minor details that needed to be fiddled with in order to create a 24/7 laptop server. I don't mind - it's part of the fun I guess, but I'm just waiting for the next edge case.
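For anyone hitting the same thing, the difference in /etc/fstab looks roughly like this (the UUID is a placeholder; `blkid` prints the real one, and `nofail` lets the boot proceed if the drive is absent):

```
# fragile: /dev/sdb1 can change between boots
#/dev/sdb1                                  /mnt/external  ext4  defaults,nofail  0  2

# stable: identify the partition by UUID instead
UUID=0a1b2c3d-4e5f-6a7b-8c9d-0e1f2a3b4c5d  /mnt/external  ext4  defaults,nofail  0  2
```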

> But MacOS does handle low memory conditions much better than Windows does.

Interesting, my work machine runs a Linux VM on a Windows 10 host. It regularly uses 95 to 96% of my 16GB RAM. When this happens, Linux grinds to a halt, yet programs running on Windows are still usable. MacOS must be amazing.

I've had bad luck keeping laptops powered 24/7 for months or years. The battery always swells and fails.

Yes, this is inevitable if you're going to run this plugged in over a significant amount of time. Should remove the battery.

I found no mention of this in the article, which I think is dangerous. Unfortunately I didn't find any way to contact the author on his site (except an unused comment plugin).

Hi -- Author here.

Thanks for the word of warning. I'll take a look at removing the battery to avoid any issues.

I wouldn't change your setup based on anecdata.

For instance, my MacBooks tend to be in use 5 - 10 years because they get handed down, and I essentially only run MacBooks while they're plugged in (e.g., off power less than one day a week), and have never had that happen.

I certainly see battery capacities drop after 3 years or more, and simply buy a new battery.

OWC "MacSales" batteries: https://eshop.macsales.com/shop/Apple/Laptop/Batteries

I'm glad you haven't had this issue before, but I'm willing to bet that the (admittedly not often) time you spend unplugged has prevented it.

I've run several laptops as you describe without issue, however I've also run two macbooks at two different times plugged in 24/7 as "servers" and both had this issue within 3 years. The first of these two shattered the glass trackpad which was a safety issue in itself. Apple agreed and fixed it for free even though the warranty had expired!

This happened to one of my Dell laptops last week since it had been docked for a year and already had a worn battery. So that's why it was fresh on my mind when I commented. Luckily I caught it because the plastic case bent upwards...

Thanks Sid!

Initially, I tried doing this with a 2010 Mac Mini (which claims to support VMX), but couldn't get Minikube running on it...

For the past two weeks though, I haven't had any issues with this setup on the Macbook Air!

> which claims to support VMX

I thought Kubernetes was all about containers, not VMs?

Why not just boot proper Linux on that thing?

Your Kubernetes node itself is Linux, so requires virtualization if your host machine is not Linux. Docker for Mac also works this way; when you run a container, it's running on a Linux VM that Docker sets up, not on your Mac directly.

Even on Linux, minikube uses a separate VM. It's just cleaner than having the kubelet running on your workstation directly. (microk8s takes a different approach and runs on your machine directly. The last time I interacted with it, it destroyed my coworker's workstation and we had to reinstall the machine completely. k8s is pretty invasive and really wants an entire machine at its disposal. VMs are just perfect for that.)

> Your Kubernetes node itself is Linux, so requires virtualization if your host machine is not Linux.

Thus my question: If the aim is to use the machine for a Kubernetes "cluster", why not boot proper Linux on it, so Kubernetes can run at full speed, without any VM overhead?

With processor support for virtualization, the overhead is minimal. I personally use a Windows machine at home (for games, sigh) and run VMs for a Linux development machine and a few k8s nodes for testing. The performance inside the VMs is excellent.

I think right now if you want to have a single-node testing cluster, you will be very happy with minikube and the VM it creates. If you want multiple nodes, you will be very happy with VMs; you can create, destroy, and inject errors right from the command-line without having to walk over to physical machines and manipulate them.


I run Linux (latest Kali Linux) on a similar MacBook Air and it works great for any of the light container tasks I've thrown at it (Docker, firecracker-vm, etc.)

You could try k3s on the Mini. k3s uses a less resource-hungry setup; parity isn't quite there yet, as some resource definitions won't work out of the box.
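For reference, the k3s quickstart really is a one-liner (straight from k3s.io; inspect the script before piping it to sh if you're cautious):

```shell
# Installs k3s as a systemd service; kubeconfig lands in /etc/rancher/k3s/k3s.yaml
curl -sfL https://get.k3s.io | sh -

# Then, on the same machine:
sudo k3s kubectl get nodes
```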

Interesting, maybe I'll give that a try! (would be nice since I have 16GB of RAM on the Mini vs 4GB on the MBA and it is better suited to leaving on all the time)

When I tried with minikube, the VirtualBox VM would boot successfully, but there was an issue with the networking between the VM and macOS.

Works just as well on any sufficiently powerful old laptop. I find that any damaged laptop or one with a battery that doesn't hold a charge converts nicely into a home or lab server.

A better thing to do with an old MacBook is using it for an Arcade cabinet.

I wish the person who wrote this article described what he did next with this K8s cluster. Probably nothing.

Author here --

Something like this? https://www.imore.com/mac-mini-mame-arcade-cabinet-project (Looks cool! Some time down the line I'll have to try it out)

The first thing I did with the setup was to learn about the differences between Helm 2 and Helm 3. I had used Helm 2 in my previous job and wanted to get some hands-on experience with the latest version by installing and modifying some helm charts.

This is certainly something that could be accomplished with a similar setup running locally on my primary computer, but I like the reduced (mental) activation energy of always having it ready to go.
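As a concrete example of the Helm 3 workflow the author describes (repo and chart names here are just illustrative): there is no more Tiller, and releases are namespaced, so an install looks something like this in recent Helm 3 releases:

```shell
# Add a chart repo and install a named release into its own namespace (Helm 3)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx --namespace web --create-namespace
helm list --namespace web
```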

Should I do this to my 2009 macbook?

I don't think the CPU will support virtualization to use this approach.

You might be able to install Linux directly and then use something like https://microk8s.io/ instead...

But we are always at home nowadays…

Why would one want a 'personal kube cluster'?

You're on Hacker News. Playing with fun new technology is 100% a valid "use case" here :)

Yes, of course! Which is why it's a rhetorical question.

Many technologies, I can see how people would want to hack on.

But Kube, to me, is going way down the rabbit hole - a tech to support another tech, to support another tech, to support another tech, to do maybe something at scale, which few will ever do.

I feel like Kube is one of those almost entirely arbitrary forms of complexity that pulls our nerdy attention into the netherworld.

I feel lately that tech people are creating a full-on dystopia of total complexity: more than any one individual can grasp in a lifetime, and a situation in which it's nigh impossible to know even which direction someone should head to be a 'pragmatic contributor' who also has 'some semblance of a life' without woefully falling behind.

It's one thing to have enthusiasts, it's another to have a situation wherein only kids coding since 18 and running their own 'kube clusters' and 10-layer stacks at home have the chops to do what's necessary.

For learning purposes, I guess.

Or if you want to self-host. I'm not a huge fan of ceding my digital life to the likes of FAANG, even though I use their services a lot. Conversely, running my own infrastructure (DNS, email, etc.) is kind of a pain, and it'd be nice to apply the same kinds of personal force multipliers to my home setup that I use at work (config management, security benchmarks, automatic provisioning, version control, etc.)

macbook air is a good computer. put it in a drawer. it might still come in handy one day.

shitting it up and wearing it down with kubernetes is a guaranteed wrong move.
