
Using multi-arch Docker images to support apps on any architecture - pdsouza
https://mirailabs.io/blog/multiarch-docker-with-buildx/
======
steventhedev
As an aside, is this the new "pipe curl to sudo bash"?

        docker run --rm --privileged docker/binfmt:66f9012c56a8316f9244ffd7622d7c21c1f6f28d

~~~
jraph
Except you cannot even look at the contents of the file being piped beforehand
and hope that the same file is downloaded when you actually pipe it. It's more
like running setup.exe using the administrator account.

~~~
viraptor
Sure you can: first pull, then there are many tools to help you. For example
[https://github.com/larsks/undocker/](https://github.com/larsks/undocker/)
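For instance, a rough flow (the tag is copied from upthread; plain tar works
for listing the layers if you don't want another tool):

    # pull without running anything, then inspect the layers offline
    docker pull docker/binfmt:66f9012c56a8316f9244ffd7622d7c21c1f6f28d
    docker save docker/binfmt:66f9012c56a8316f9244ffd7622d7c21c1f6f28d | tar -tvf -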

~~~
ownagefool
Indeed.

You can also pull by the sha256 digest rather than the tag, which gives you
significant extra assurance.

    docker pull docker/binfmt@sha256:5a9ad88945dff7dc1af2ef7c351fe3dd9f7c874eb2c912c202ced088d21c178a

Once you've confirmed you're happy with the script, I don't believe there is
any issue with automating this.

    docker run --rm --privileged docker/binfmt@sha256:5a9ad88945dff7dc1af2ef7c351fe3dd9f7c874eb2c912c202ced088d21c178a

In theory the underlying image cannot be changed out from under you, which
addresses most of the issues with piping curl into bash.

------
jmb12686
I've been leveraging docker buildx to create multi-architecture images for a
few months. It's quite nice and simple, and I've even been able to automate
the multi-arch builds with GitHub Actions. See an example repo of mine here:
[https://github.com/jmb12686/docker-cadvisor](https://github.com/jmb12686/docker-cadvisor)
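For anyone curious, the core step such a workflow ends up running is roughly
this (the platform list and tag here are illustrative):

    # one-time: create and select a buildx builder backed by QEMU
    docker buildx create --use

    # build for several architectures in one go and push the manifest list
    docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
      -t jmb12686/cadvisor:latest --push .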

~~~
pdsouza
Neat! I'd like to get some automated multi-arch image builds going too. I'll
give GitHub Actions a go using your repo as reference.

~~~
jmb12686
Here's another repo of mine doing (essentially) the same thing: a multi-arch
image build using GitHub Actions. There is a bit more documentation in this
repo regarding local builds, though:
[https://github.com/jmb12686/node-exporter](https://github.com/jmb12686/node-exporter)

~~~
pdsouza
Great, thanks!

------
gravypod
I wish there was something like Bazel, Buck, Pants, or Please built on top of
docker/crio.

The docker build cache and the dockerization of tools have made building,
testing, and deploying software so much simpler. Unfortunately, the next step
in build systems (in my opinion) still has the mentality that people want
mutable state on their host systems.

I hope someone extends BuildKit into a system like Bazel. All rules can be
executed in containers, all software and deps can be managed in build
contexts, you automatically get distributed test runners/builds/cache by
talking to multiple docker daemons, etc.

~~~
koolba
The docker build cache alone feels like magic for long (from scratch) builds.
It feels tedious breaking out individual steps but I’ve yet to regret the
extra effort.
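A minimal sketch of that kind of step-splitting, assuming a Node app (file
names and base image are illustrative):

    FROM node:18
    WORKDIR /app

    # dependency layers change rarely, so they stay cached across code edits
    COPY package.json package-lock.json ./
    RUN npm ci

    # application code changes often; only the layers from here on rebuild
    COPY . .
    RUN npm run build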

~~~
marmaduke
In other contexts, CoW snapshots such as those provided by LVM, ZFS, etc. can
provide a similar effect by naming the snapshot after the hash of the code
that produced it.
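A hypothetical ZFS flavor of that idea (the dataset name is made up):

    # name the snapshot after the hash of the tree that produced the build
    zfs snapshot tank/build@$(git rev-parse --short HEAD)

    # reuse that cached state later by cloning the snapshot for its hash
    zfs clone tank/build@abc1234 tank/build-abc1234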

------
fortran77
Most programmers originally set out to write a program that solved an
immediate technical problem or business requirement. The programmer was not
concerned with the "technical" (though still important) question: what
architecture is this CPU going to use? The first time a programmer ran into
that question, the reaction was simply to try to compile a program that would
run on the CPU at hand.

I used to think of this process as a sort of reverse-engineering exercise. To
figure out what a CPU was doing, you needed to understand the architecture
chosen by the people who designed it. It was as though you were trying to
reverse engineer a car engine using a hand-held computer: to understand how
the engine worked, you needed to understand how the engine was built.

------
javagram
Interesting article, but I can't understand why cross-compilation is
dismissed.

It could have been improved by some benchmarks comparing cross-compilation
performance with this emulation-based solution. I find it hard to believe it
makes sense to emulate when native performance is available.

~~~
jcoffland
Cross compiling is a lot more difficult to set up. Emulation let's you use
much of the target system's tools as is. Cross compiling means you have to
build all of those tools for the host system.

For example, with the RaspberryPi I can grab a Raspbain image, add binfmt and
qemu on my host and with a few small changes to the image chroot in to a ready
made build environment for the Pi that's faster and more convenient than
compiling on the Pi. Setting up a cross compile environment for the Pi is much
harder.
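A sketch of that setup (the image path and mount point are my own
assumptions):

    # expose the image's partitions as loop devices and mount the rootfs
    sudo losetup -fP --show raspbian.img     # prints e.g. /dev/loop0
    sudo mount /dev/loop0p2 /mnt/pi

    # copy the static qemu binary in so binfmt can run ARM binaries inside
    sudo cp /usr/bin/qemu-arm-static /mnt/pi/usr/bin/

    # drop into an emulated ARM build environment
    sudo chroot /mnt/pi /bin/bash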

Docker is totally unnecessary BTW.

~~~
jrockway
I've honestly never had a problem cross-compiling for the Raspberry Pi. I
learned my lesson back when it first came out; I needed a new ntpd and it took
24 hours to build on the Pi. I then realized the settings were wrong and spent
about 20 minutes setting up the cross-compilation machinery on my x86 Linux
box instead of waiting another day. "apt-get install
crossbuild-essential-armhf" and some configuration, and your build is done in
seconds. Well worth it.

Modern languages are even easier. I can build a Go binary for the Raspberry Pi
by setting one or two environment variables; "GOARCH=arm GOOS=linux go build
./whatever". Wonderful.
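Both flavors, concretely (the GOARM value and package name depend on your
target):

    # C/C++: install the cross toolchain, then use the prefixed compiler
    sudo apt-get install crossbuild-essential-armhf
    arm-linux-gnueabihf-gcc -o hello hello.c

    # Go: the target is just environment variables
    GOOS=linux GOARCH=arm GOARM=7 go build ./whatever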

The Raspberry Pi has improved since the original. I recently needed LLVM
compiled from source and it only took on the order of hours on a Pi 4. (GCC
was unusable, though; it uses too much memory. I had to use clang from the
package manager to build the new LLVM. The efficiency was impressive.)

~~~
jcoffland
Finding the right parameters to cross-compile each dependency you need is
tedious. Sure, some packages are easy to cross-compile, but it's much easier
to set up an emulated build environment and then compile everything normally.

------
justicezyx
I helped support a similar thing for Borg inside Google. It's quite simple
to support in a cluster manager, and highly powerful.

------
neuromute
This looks interesting. I’ve been looking into getting Keybase running on a
Raspberry Pi 4. This might help.

------
ericb
Anyone know how to get smaller docker images? I thought if I had all the
previous layers in the docker registry that an upload would just be the size
of the diff of the new layer, but this seems to _never_ work.

~~~
gravypod
Some docker registries isolate the layer cache per account to prevent cache
poisoning attacks and data leaks. This means you might only take advantage of
the registry caching if you have already pushed the first version of a tagged
image.

If you want to get extremely small docker images, you might also want to take
a look at Google's distroless images and multistage builds.
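Roughly like this, assuming a Go app (the distroless base image is real; the
paths are illustrative):

    # stage 1: build with the full toolchain image
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app ./cmd/server

    # stage 2: ship only the static binary on a minimal distroless base
    FROM gcr.io/distroless/static
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]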

~~~
ericb
> Some docker registries isolate the layer cache per account to prevent cache
> poisoning attacks and data leaks.

Are there ones that _don't_ that you know of? This will be for an intranet
registry, so the isolation doesn't buy much.

Thanks! Re: the other suggestions, I will look into those.

~~~
gravypod
GCR boasts it has a global layer cache. Also, this is something that should
only ever affect your very first push. You should only be seeing it twice if:

1. None of your image layers are the same between builds (ADD as one of the
first instructions, for instance), or

2. You are distributing your code to many people and they are building it into
entirely separate accounts (you send me your code and I build, tag, and push
it to my Docker Hub account).

~~~
ericb
Unfortunately, choice 2 sounds like us. I was looking for some way to short-
circuit that (maybe by shipping the repo already loaded), as the product runs
in the customer's cloud and the images are built on their machines.

~~~
gravypod
If that's the case, you could give them a pre-built copy of your containers
(if your builds take a very long time, this might be worth it).

There are two commands: `docker save` and `docker load`. They tar the history,
layers, etc. into a single file. You can further compress it for distribution.
I've had a lot of luck with it.
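For example (the image name and file paths are made up):

    # on your side: export the built image, compressed for shipping
    docker save myapp:1.0 | gzip > myapp-1.0.tar.gz

    # on the customer's side: load it to warm the local layer cache
    gunzip -c myapp-1.0.tar.gz | docker load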

Your client would then download your source, `docker load` your prebuilt
copies to warm their cache, make their modifications, and further builds would
be much faster.

They'd still pay that first penalty for pushing to their internal registry but
that shouldn't take too long since that's essentially just a file copy.

~~~
ericb
I'll look into it. Thanks a bunch!

