
Raspberry Pi on Raspberry Pi - KirinDave
https://blog.mythic-beasts.com/2019/06/22/raspberry-pi-on-raspberry-pi/
======
linguae
Something that just dawned on me is how far we have come regarding the compute
resources that are available to the average person living in the developed
world. Consider the hardware that Larry Page and Sergey Brin used to start
Google in 1998. According to this page (https://blog.codinghorror.com/google-hardware-circa-1999/), they had 10 processor cores running at 200-333 MHz, 1.7 GB of RAM, and 366 GB of distributed hard disk storage. That configuration probably cost at least $10,000 to build. Now consider the Raspberry Pi 4: spend $45 on the 2 GB RAM variant plus a little more for a 512 GB drive, and for roughly $100 you have the same compute and storage resources Larry Page and Sergey Brin used in 1998 to start one of the world's most successful Web companies. In fact, our smartphones could drive 1998-era Google if configured with enough storage.
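
As a crude back-of-the-envelope (aggregate clock speed is a very rough proxy for compute; the 1998 figures are from the page above, and the Pi 4's 4x 1.5 GHz cores are its published spec):

    # Rough aggregate-clock comparison of 1998 Google vs. one Pi 4.
    google_1998_ghz = 10 * 0.3   # ~10 cores x ~300 MHz -> ~3 GHz total
    pi4_ghz = 4 * 1.5            # 4 Cortex-A72 cores x 1.5 GHz -> 6 GHz total
    print(f"1998 Google: ~{google_1998_ghz:.0f} GHz aggregate, 1.7 GB RAM, 366 GB disk")
    print(f"Pi 4 (2 GB): ~{pi4_ghz:.0f} GHz aggregate, 2 GB RAM, disk of your choice")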

From a software standpoint, imagine the possibilities of millions of people walking around with devices as powerful as the compute resources Google had in 1998. It's realizations such as this that make me excited.

What I love about the Raspberry Pi is the possibilities it brings at
affordable prices. For example, students learning about how distributed
systems work can build a cluster of Raspberry Pis for just a few hundred
dollars. They have access to the same open source software that major tech
companies use for their infrastructure, like Linux and various distributed
software projects such as Apache Spark.

In an age where sometimes I'm cynical about the direction of tech and our
industry, it's realizations such as this and product announcements like this
new Raspberry Pi that make me remember why I love computing.

~~~
AnIdiotOnTheNet
On the other hand, look at how ridiculously powerful our computers are _and they're still so slow_. We've matched orders of magnitude more computing power with orders of magnitude slower software.

~~~
bigiain
I was amused to read this morning that Microsoft's new Terminal client has a "GPU accelerated text rendering engine". WT actual F??? You can't run a terminal window without a few gigs of video RAM and a couple of teraflops of GPU horsepower??? _Boggle!_

https://www.theregister.co.uk/2019/06/24/microsoft_round_up/

~~~
comex
If you're on, say, a 4K 10-bit display, there's quite a bit more pixel data to
push than there used to be. You still don't _need_ a GPU just to draw text,
but since you already have one, using it will provide better performance and
likely consume less power.
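
As a rough sketch of the difference (my own numbers, assuming 10-bit color packs into 4 bytes per pixel, same as 8-bit RGBA, at a 60 Hz refresh):

    # Raw bytes/second needed to repaint the whole screen, assuming
    # 4 bytes/pixel (8-bit RGBA, or 10-bit color as 10-10-10-2) at 60 Hz.
    def throughput_gb_s(width, height, bytes_per_pixel=4, hz=60):
        return width * height * bytes_per_pixel * hz / 1e9

    print(f"1080p: {throughput_gb_s(1920, 1080):.2f} GB/s")  # ~0.50 GB/s
    print(f"4K:    {throughput_gb_s(3840, 2160):.2f} GB/s")  # ~1.99 GB/s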

~~~
Crinus
> but since you already have one,

Since I already have one, I might want to use it for _other_ things. The reason computers are slow is this sort of "since you already have that resource, might as well use it" thinking - which makes sense if only _one_ program does it, but breaks down quickly when almost _all_ programs do it.

~~~
skybrian
The same could be said for the CPU, and this helps free up the CPU for other
things. It's a trade-off.

But the underlying problem here is screens with higher resolution than people need. Most people don't actually need a 4K display. Often they can't really see small print that easily anyway, and what they need is UIs designed with large print in mind.

~~~
Crinus
CPUs are more generic, though, and pretty much every GPU-accelerated text drawing operation I've seen allocates permanent GPU resources (textures, mainly). It isn't _impossible_ to avoid that, but if you only allocate GPU resources on an as-needed basis and release them once you're done, you introduce latency that invalidates any gains you may have made. The alacritty terminal linked elsewhere in this post, for example, keeps a bunch of atlases around holding hundreds of glyphs. These are local to each instance of the program (thus using both CPU and GPU resources), and on macOS and Windows they ignore any system-wide glyph caching the APIs it uses may already have (caches that will be created - and their resources allocated - anyway when it rasterizes those glyphs for its own use).
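
To make the pattern concrete, here is a minimal sketch of that allocate-once-and-never-release atlas (hypothetical names and sizes, not alacritty's actual code):

    # Per-process glyph atlas: a persistent buffer standing in for a GPU
    # texture, filled on first use and held for the life of the process.
    # Every terminal window running this duplicates the whole allocation.
    class GlyphAtlas:
        GLYPH_W, GLYPH_H, CAPACITY = 20, 20, 512  # 8-bit RGBA slots

        def __init__(self):
            self.slot_bytes = self.GLYPH_W * self.GLYPH_H * 4
            self.texture = bytearray(self.slot_bytes * self.CAPACITY)
            self.slots = {}  # glyph -> slot index

        def slot_for(self, glyph, rasterize):
            if glyph not in self.slots:  # rasterize on first use...
                idx = len(self.slots)
                if idx >= self.CAPACITY:
                    raise MemoryError("atlas full; real renderers add another")
                off = idx * self.slot_bytes
                self.texture[off:off + self.slot_bytes] = rasterize(glyph)
                self.slots[glyph] = idx
            return self.slots[glyph]     # ...then cached forever

    # usage: GlyphAtlas().slot_for("a", lambda g: bytes(20 * 20 * 4))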

FWIW yeah, i agree that most people do not really need 4K displays but that is
another matter.

~~~
comex
> CPUs are more generic though

Which is exactly why you want to save the GPU for what it does best – drawing
pixels.

> and pretty much every single GPU accelerated text drawing operation i've
> seen allocates permanent GPU resources (textures mainly).

Makes sense as a concern, and it's not something I've looked into. (I don't
use alacritty.) On the other hand, how much memory are we actually talking
about? On my system (total screen resolution 2880x1800), a typical terminal
glyph has a roughly 14x16 bounding box; let's bump that up to 20x20 to account
for padding. Stored as 8-bit RGBA, that would take 1600 bytes. An atlas of
"hundreds" of glyphs would then be expected to take up on the order of
hundreds of KB... which seems pretty negligible? A larger font, multiple
atlases, or more characters per atlas would require more memory, but I still
don't see how you get to an amount worth worrying about. I could be missing
something.
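
Running those numbers (just checking the arithmetic above):

    # 20x20 glyph, 8-bit RGBA = 4 bytes/pixel; "hundreds" of glyphs.
    glyph_bytes = 20 * 20 * 4         # 1600 bytes per glyph
    atlas_bytes = 500 * glyph_bytes   # 800,000 bytes
    print(f"{glyph_bytes} B/glyph, ~{atlas_bytes / 1024:.0f} KiB per atlas")
    # -> 1600 B/glyph, ~781 KiB per atlas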

~~~
Crinus
One important thing you are missing is that you are focusing on a single instance of a single program doing that. These caches are not shared among programs, and unless you only ever run one program at a time in your OS, if every program commits this sort of resource abuse (not necessarily _this_ particular type of abuse, but not caring about resources in general) then you get a slow computer.

Computers feel as slow as ever (which was the topic a few comments above) despite being much faster in theory, not because of a single program but because all (or, well, the overwhelming majority of) the programs on your computer abuse resources - even if only a little. It is death from a thousand little abuses.

------
ChuckMcM
_netboot on the Pi 4 is only going to be added in a future firmware update.
Netboot is critical to the operation of our cloud, as it prevents customers
from bricking the servers. Our dreams were shattered._

This is unfortunate; it's something I use a lot. Guess I'll wait to get a Pi 4.

~~~
KirinDave
The big news is that from a hardware perspective netboot is possible. That has
not always been the case.

~~~
stedaniels
But has been the case since the Raspberry Pi 3 Model B IIRC.

~~~
KirinDave
Yeah, we just had to wait for firmware.

------
pathartl
I get that they're using the Pi in a production environment for small-sized-discrete-hardware hosting, but given the nature of the Pi and its community, the tone of this article is confusing to me.

The Pi is great for things like DIY HTPCs, kiosk displays, IoT controllers, education, etc... but using it the way this service does seems wrong--or misused--for some reason. I feel like an offensive stance is being taken against the Pi 4 for not being production-ready for clients, when the Foundation's attitude seems to be: if you want to go full client production, use the Compute Module.

~~~
m463
Isn't it a cellphone chip being misused for DIY HTPCs, kiosk displays, IoT controllers, education, etc...? :)

More seriously, perf per watt is king in datacenters, no matter what the
source.

~~~
detaro
If I remember correctly, it's originally a chip for set-top boxes.

~~~
w0mbat
The original purpose of the original ARM chip was to drive the Acorn Archimedes desktop computer. While the team was aiming to design a power-efficient device, the power consumption turned out, quite by accident, to be far lower than intended, which has been a big reason for ARM's continued success in so many uses.

~~~
w0mbat
The team's design goal was 1 watt, but the chip ended up needing only a tenth
of a watt. In fact on the original testbed they forgot to wire up the power
lines to the chip, but the processor still worked, appearing to be running on
no power at all! It turned out that it needed so little power that it could
run on just the leakage from the data lines.

https://www.theregister.co.uk/Print/2012/05/03/unsung_heroes_of_tech_arm_creators_sophie_wilson_and_steve_furber/

------
tracker1
Wonder how K8s would do on the 4GB RPi models... With a netboot controller,
even better I'd suspect.

~~~
geerlingguy
I've been running a K8s cluster on 3B+ boards for over a year and it works, just barely (http://pidramble.com). The one major constraint, and almost always the cause of control plane outages, was the 1 GB of RAM on the master. Now that I can get a Pi with 4 GB, I think it will be a lot more resilient!

Note that the other Pis did just fine running typical workloads, as long as I kept the overall deployment memory constraints in check.
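
For a sense of why the 1 GB master was the choke point, a rough headroom calculation (the overhead figures below are my own illustrative assumptions, not measurements from the pidramble cluster):

    # Memory left for pods after the OS, kubelet, and control plane.
    # All overhead numbers are illustrative assumptions.
    def schedulable_mib(total_mib, os_mib=150, kubelet_mib=100,
                        control_plane_mib=500):
        return total_mib - os_mib - kubelet_mib - control_plane_mib

    print(f"Pi 3B+ (1 GB): ~{schedulable_mib(1024)} MiB for pods")  # ~274
    print(f"Pi 4   (4 GB): ~{schedulable_mib(4096)} MiB for pods")  # ~3346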

~~~
tracker1
That seems to be the general complaint... running K8s in less than 1 GB of RAM seems mostly unmanageable... with the new models, it might actually be a good idea to try a few clusters of these.

------
bsder
The real question is:

Can I buy the chips? Can I get the technical documentation?

If I can't build my own RPi in volume, this is _STILL_ a problem.

~~~
m463
I too wish you could get Pis in quantity.

For similar systems in quantity, a guy I know used Toradex.

~~~
jdietrich
I don't know about your local distributor, but RS Components will sell you as
many as you like; they've currently got 47,400 3B+ boards in stock, no maximum
order and a price break at full box quantities (150).

