
Turing Pi: Kubernetes Cluster on Your Desk - jcamou
https://turingpi.com/
======
rubyn00bie
While Raspberry Pis are awesome, and the power consumption is nothing to
scoff at (when considering a cluster), you can accomplish the same thing for
a lot less, and have quite a lot more compute power, by purchasing a used
server or even something like an AMD 3600X...

A single 3600X will grossly outperform this cluster (and cost less) with
fewer headaches (you don't have N physical machines) by using KVM to deploy a
few virtual machines and using Kubernetes to orchestrate and allocate within
those VMs. You'll also have a lot less latency between nodes running in VMs
on the same physical host.

Another thing that unfortunately sucks about Raspberry Pis (less with Pi 4,
but still mostly applies) is really shitty I/O performance...

I spent a large amount of time over the past summer and fall trying out
various ideas to have a "cluster" at home that was both practical and useful.
While the Pis were nice, they never really amounted to much more than a demo.
Latency and I/O become real problems for a lot of useful interconnected
services and applications.

Honestly, if Ryzen 3000 hadn't come out, I'd still think Pis were a solid
choice for cheaper cluster builds (~$300-400), but... Ryzen 3000 is just so
fucking fast, with a lot of cores, that it's truly hard to beat.

Addendum: to touch on used servers, yes, your power bill will go way up, no
joke, but for some applications, like large storage arrays, it's hands down
the cheapest/easiest route. Search by case, not by processor; it sounds weird,
but the case is likely the most valuable part of an old server (like ones
with 20+ SAS2 slots for $500), as are PCI-E slots that GPUs can fit into.

~~~
hinkley
The other part of 'on your desk' is hearing damage.

Server hardware vendors have traditionally not given two shits about their
servers being north of 90 decibels, and I'm pretty sure I've witnessed a few
that were pushing 100.

That Raspberry Pi is probably going to absorb more noise than it makes.

~~~
dahfizz
Is there any reason you need a kubernetes cluster literally on your desk?

Isn't the whole point of using these layers of abstraction over hardware that
you pay someone else to manage it?

The only times I can imagine needing hardware on my desk are when latency is
super important or when I constantly need to manipulate the hardware (change
hardware, play in bios, etc). In either case, I would not use k8s to run my
software.

~~~
gclawes
It's fun to play around with and learn on. If you can build a cluster from
scratch with Pi hardware, you get a lot of knowledge of how things work under
the hood for a real cluster.

------
closeparen
There's something amusingly cyclical about a blade server architecture for
Kubernetes. The tech comes out of a whole movement towards combining commodity
machines using clever software instead of buying specialist hardware, but then
adds the specialist hardware back in.

Some deeper integration between Kubernetes and the hardware
(acceleration/offload ASICs maybe), branding of k8s + this hardware as a
unified product, and this would literally just be a mainframe. Which is not a
terrible idea! Maybe Kubernetes is the mainframe operating system of the
future.

~~~
hinkley
Some mumblings have been heard accusing us all of trying to reimplement
mainframes, badly.

For a long time I have been watching the ebb and flow between peer to peer and
client server and it’s gotten quite a bit fuzzy lately. I suppose if you treat
cloud providers as a large amorphous server, it sort of still fits the mold.

------
LargoLasskhyfv
There is also the
[https://www.pine64.org/clusterboard/](https://www.pine64.org/clusterboard/)
for $99 which takes these modules
[https://www.pine64.org/sopine/](https://www.pine64.org/sopine/) at $29 a
piece, which are quad-core ARM Cortex A53 with 2GB LPDDR3. This is their wiki:
[https://wiki.pine64.org/index.php/PINE_A64-LTS/SOPine](https://wiki.pine64.org/index.php/PINE_A64-LTS/SOPine)

~~~
Already__Taken
75W 5V bricks can't be that easy to come by, whereas 12V or even 19V ones
we're usually tripping over.

~~~
LargoLasskhyfv
I'd use either this
[https://www.mini-box.com/picoPSU-80](https://www.mini-box.com/picoPSU-80)
and repurpose a spare 12V brick for it, _or_ get something for powering LED
strips. There are gazillions of them rated 5V 15A at about 30 bucks.
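
A quick sanity check on that sizing; the ~1.5 A peak draw per node is an
assumption (a common rule of thumb for small ARM boards), so adjust for your
actual modules:

```python
# Rough power-budget check for a 7-node cluster on a single 5V supply.
NODES = 7
VOLTS = 5.0
AMPS_PER_NODE = 1.5  # assumed peak draw per node (check your boards)

peak_watts = NODES * VOLTS * AMPS_PER_NODE
print(f"peak draw: {peak_watts:.1f} W")          # → peak draw: 52.5 W

# A 5V/15A LED supply provides:
supply_watts = 5.0 * 15.0
print(f"supply capacity: {supply_watts:.0f} W")  # → supply capacity: 75 W

print("ok" if supply_watts >= peak_watts else "undersized")  # → ok
```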

------
tlrobinson
Note the $189 price tag doesn't include the Raspberry Pi compute modules,
which are about $30-40 each.

It's a neat form factor but you could just buy some regular Raspberry Pis and
an Ethernet switch.

~~~
sgt
That's without eMMC though. Having a bunch of normal Pis running SD cards
would end up in tears at some point.

~~~
a012
Isn't there eMMC on each RPi board? No?

~~~
liamdiprose
The compute modules come with onboard eMMC

------
aivarsk
If you're looking for a cheaper alternative then
[https://clusterhat.com/](https://clusterhat.com/) is worth taking a look at.
I have one sitting on my desk (4 Pi Zero nodes and a Raspberry Pi 2
controller).

~~~
cwiggs
Very cool, thanks for the link. Are you also running k8s on it?

------
nsky-world
Hey guys, I am a co-founder of Turing Pi. I see a lot of comments around
Raspberry Pi, performance and VMs. I just want to shed some light on a few
things here. Turing Pi is not about performance, it's about cluster
architecture. If you look at Turing Pi as a homelab project, then yes, you
can get more performance with some cheap used servers. Of course, if you get
approval from your homies to occupy a closet. You can even run apps in
containers with Kubernetes orchestration using VMs and so on.

The main idea behind Turing Pi is to deliver compute to the edge. If we look
at cases where some compute will run low-latency, highly available and
internet-independent apps to automate processes, often in hard-to-reach
places, then classic servers are not a solution. Turing Pi is an early
version of an edge computer with cloud-native architecture. Why is that
important? Because if you are a business with services running in the cloud
and you want your edge computing to coexist organically with your cloud
stack, then edge clusters could be a great choice. The speed to innovate and
deploy your code into production in both cloud and edge environments could be
a critical component.

The existing Turing Pi model is more oriented toward forward-thinking
developers who want to learn and push cloud-native to the edge. Why Raspberry
Pi computers? They are not the most powerful computers, but they can
definitely lower the entry point for developers by offering a huge and
well-documented software ecosystem.

------
moondev
Looking at the specs, it seems almost dishonest to promote this for
kubernetes.

> The nodes interconnected with the onboard 1 Gbps switch. However, each node
> is limited with 100 Mbps USB speed.

Not only that, but the Compute Module 3+ is limited to 1GB RAM; is it really
expected that someone could run a realistic workload? How stable is the
control plane node with such limited resources?
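
To put the 100 Mbps per-node limit in perspective, here's a rough sketch of
container image pull times; the 500 MB image size is an illustrative
assumption, and protocol overhead is ignored:

```python
# Transfer time for an image over a given link, ignoring overhead.
def pull_seconds(image_mb: float, link_mbps: float) -> float:
    """Seconds to move image_mb megabytes over a link_mbps link."""
    return image_mb * 8 / link_mbps

IMAGE_MB = 500  # assumed image size

print(f"100 Mbps: {pull_seconds(IMAGE_MB, 100):.0f} s")   # → 100 Mbps: 40 s
print(f"1 Gbps:   {pull_seconds(IMAGE_MB, 1000):.0f} s")  # → 1 Gbps:   4 s
```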

It seems like picking up 3 Raspberry Pi 4s (4GB RAM each) and powering them
via PoE would be a much better result.

~~~
wedn3sday
I agree, using the USB bus for inter-node communication seems like a poor
design choice. Anyone got any insight on why they wouldn't use the much
faster Ethernet connection?

~~~
cconstantine
My understanding is that the Ethernet module on the Raspberry Pi is a USB
device, not a PCI device.

------
hinkley
Someone had one of these 'backplane' style boards a few years ago, but they
only used it for flashing Pis for distribution. I don't think they ever made
it commercially available.

This looks fairly similar.

And can we all just pause for a moment and look at that heat sink on the
ethernet controller? Holy cats, what's goin' on there?

------
kube-system
This is neat, but I'm really more interested to hear about potential use
cases. I'm guessing this is mostly useful for ARM workloads? Maybe some
situations with low power requirements?

Personally, for my multi-node test clusters, I just run VMs on cheap x86
hardware.

~~~
nsky-world
What's the practical point of running containers on VMs?

~~~
kube-system
To expand on my personal use-case, I don't want or need a whole stack of
physical servers sitting around just for a test k8s environment. There are
some things that you can only really test properly in a real multi-node
environment rather than single-node solutions like minikube: failovers, shared
storage, some networking particularities, etc.
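
As one sketch of such a disposable multi-node test environment, a tool like
kind can declare the node layout (kind is my assumption here, not the
commenter's setup; a VM-based cluster follows the same shape):

```yaml
# kind cluster config: one control-plane node plus two workers,
# enough to exercise failovers and rescheduling.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```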

But there are reasons to run containers on VMs in production too. Hypervisors
and container orchestration tools solve very different problems. Depending on
what problems you are trying to solve, it might be useful to leverage both.

~~~
nsky-world
Can you please elaborate a little bit more on the reason to run containers on
VMs in production?

~~~
moondev
Here's a reason if your production platform is Kubernetes:

\---

The default pod limit per node is 110. This is by design and considered a
reasonable upper limit for the kubelet to reliably monitor and manage
everything without falling over into NotReady/PLEG status.

If your node has a ton of cpu and memory, then 110 pods will not come close to
utilizing all the metal. You can go down the path of tuning and increasing the
pod limit, but this is risky and often triggers other issues on components
that are designed for more sane defaults.
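
For reference, that limit lives in the kubelet configuration; the risky
tuning amounts to raising `maxPods` (250 here is an arbitrary example value):

```yaml
# KubeletConfiguration fragment; maxPods defaults to 110.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250  # raising this past the default is the risky part
```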

It also means that if your node goes NotReady (not a hardware failure), it's
now a much bigger deal because you have fewer total nodes and many more pods
to re-schedule at once.

This is solved by splitting up these massive nodes into smaller nodes via
virtualization.

It's also nice having an api-driven layer to manage and upgrade the vms versus
shoehorning a bare-metal solution. I would argue it also encourages immutable
infrastructure by making it much more accessible.

There are bare-metal solutions but it's often more complicated and slower than
launching/destroying vms.

------
tuananh
For this use case, I would rather use something like Simply NUC[0].

It's basically just NUC machines with rack mounts. They're a lot more
powerful than RPis, quite power-efficient (compared to an actual server
rack), and dead silent.

More importantly, this setup can handle some actual workload.

[0]: [https://simplynuc.com/server-shelf-solution/](https://simplynuc.com/server-shelf-solution/)

~~~
jaxn
That takes it from $300 to $3000.

~~~
tuananh
This Turing Pi costs $189; 7x Raspberry Pi 3 compute modules cost $210, and
we still need to buy SD cards or eMMC and a power plug or USB power hub. The
total cost is not cheap either, compared with a used server rack.
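
Tallying the figures mentioned in the thread ($189 board, $30-40 per module,
seven slots; SD cards/eMMC, power supply and so on not included):

```python
# Back-of-envelope cost for a fully populated Turing Pi,
# using the per-module price range quoted in the thread.
BOARD = 189
MODULE_LOW, MODULE_HIGH = 30, 40
SLOTS = 7

low = BOARD + SLOTS * MODULE_LOW    # 189 + 210
high = BOARD + SLOTS * MODULE_HIGH  # 189 + 280
print(f"fully populated: ${low}-${high}")  # → fully populated: $399-$469
```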

------
Hippocrates
Finding and running Docker containers on ARM is unfortunately a pain.

