
Fully static, unprivileged, self-contained, containers as executable binaries - ingve
https://github.com/genuinetools/binctr
======
detaro
previously (11 days ago, 111 comments):
[https://news.ycombinator.com/item?id=18745182](https://news.ycombinator.com/item?id=18745182)

------
asadlionpk
On a similar note, is there a way to package a container into an executable
for all platforms (including macOS and Windows)?

Like Electron but for containers?

~~~
voltagex_
Interesting idea. Hyper-V is sometimes available on Windows,
Hypervisor.framework on OS X, bhyve on FreeBSD and fallback to qemu if all
else fails?

~~~
techntoke
All much slower than native containers. My Docker daemon on Linux doesn't take
15-plus seconds to start, and it dynamically utilizes the host's CPUs and
memory.

~~~
nwmcsween
Same with a hypervisor? A hypervisor could start as fast as a container; the
only issue is having two kernels and the overhead that entails.

~~~
techntoke
No it can't, unless they build Docker containers into the Windows and Mac
kernels with the Linux subsystem.

------
ElijahLynn
What are the use cases for this? I was expecting more in the readme around use
cases or real world examples.

------
darren0
Somewhat related: rootless Docker and Kubernetes [https://github.com/rootless-containers/usernetes](https://github.com/rootless-containers/usernetes)

~~~
gravypod
It would be interesting to use this to get Docker-in-Docker running for CI.

~~~
oso2k
I don’t know any of the details (apologies!) but some OpenShift devs I’ve
talked to and work with at Red Hat do this for their work. It seems some of
the details are here [0].

[0]
[https://github.com/openshift/origin/blob/master/CONTRIBUTING...](https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc#develop-and-test-using-a-docker-in-docker-cluster)

------
fwip
How does this differ in goals or intent from Singularity?

~~~
TheDong
Singularity requires suid components; it does not provide truly unprivileged
containers.

Singularity must be installed by an administrator of the system; binctr
doesn't require any privileged components to be installed.

binctr is a lot smaller and more grass-roots as well, which could be good or
bad.

It's possible Singularity also wants truly unprivileged containers, but if
that is one of its goals, it currently fails at it.
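
To make "truly unprivileged" concrete: on Linux, an ordinary user can create
a user namespace and become "root" inside it with no suid helper at all.
Here's a minimal sketch in C of that mechanism (this is not binctr's actual
code, just the kernel feature underneath this whole class of tools):

    /* mini_container.c -- a truly unprivileged "container" skeleton.
     * Build and run as a normal user:
     *   cc mini_container.c -o mini_container && ./mini_container */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void write_file(const char *path, const char *data) {
        int fd = open(path, O_WRONLY);
        if (fd < 0 || write(fd, data, strlen(data)) < 0) {
            perror(path);
            exit(1);
        }
        close(fd);
    }

    int main(void) {
        char map[64];
        int uid = (int)getuid(), gid = (int)getgid();

        /* CLONE_NEWUSER requires no privileges; once we're "root" inside
         * the user namespace, mount and UTS namespaces come along free. */
        if (unshare(CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWUTS) < 0) {
            perror("unshare");
            return 1;
        }

        /* Map our real uid/gid to root inside the namespace. */
        snprintf(map, sizeof map, "0 %d 1", uid);
        write_file("/proc/self/uid_map", map);
        write_file("/proc/self/setgroups", "deny"); /* needed before gid_map */
        snprintf(map, sizeof map, "0 %d 1", gid);
        write_file("/proc/self/gid_map", map);

        /* id(1) now reports uid=0 -- but only inside the namespace. */
        execlp("sh", "sh", "-c", "id", (char *)NULL);
        perror("execlp");
        return 1;
    }

A real tool still has to set up a root filesystem, pid namespace, and so on
on top of this, but the suid-free entry point is this user-namespace trick.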

~~~
pinko
My understanding is that with newer kernels, Singularity no longer requires
setuid and is truly unprivileged.

~~~
dnautics
However, you do need root to build images, though there are build services out there.

------
da_chicken
Every time I see things about containers, I can't escape the feeling that
they're a tool for developer convenience that will result in nightmares for
security, maintenance, and life cycle system administration.

~~~
testvox
Containers are usually pushed from the ops side. I don't see how they make
things more convenient for developers at all.

~~~
empath75
Developing in containers is a massive improvement, if only in terms of having
standardized, repeatable build environments.

~~~
mikekchar
When making this statement, it would be helpful to add the "in comparison to"
part. People who can't see the benefit of containers may actually live in a
world where they have standardised repeatable build environments _without_
containers (it _is_ possible after all).

For me, containers are convenient for specifying repeatable build
environments. They are also useful for caching those build environments and
distributing the cached artefacts in a versioned way.

To be honest, if you are working with only 1 or 2 build environments, I don't
actually find it particularly more convenient than setting up something
without containers. Often I find it even wasteful because you usually select
fairly large base images (hundreds of megs) even when you only need a few
things. You can build a really streamlined image, but it's a fair amount of
work.

However, if I have to coordinate many "microservices" that are talking
together via TCP/IP and implemented in several different versions of several
different technologies, it's a life saver. Normally I try to _avoid_ this
circumstance for reasons I wish were obvious to others. But of course we know
that it is _not_ obvious to others and so I appreciate being able to use
containers ;-)

------
throwaway2048
we might even call them processes! _ducks_

------
ph0rque
One step closer for my totally pointless, no real use case dream of running a
container in a browser tab.

~~~
rytill
No, that would be awesome.

~~~
drugme
Awesome or not - know of any use cases?

~~~
webmaven
Perhaps... a sandboxed webmail[0] app capable of sending and receiving
encrypted (or cryptographically signed) emails without being vulnerable to
client-side attacks stealing your private key(s) or the cleartext?

There may be similar use-cases for tamper-proof (or at least tamper-evident)
browser-based games.

[0] I'm specifying a _webmail_ client in the browser rather than a desktop
email client.

------
devereaux
Then, once one of the static libraries has a major bug, a major catastrophe
happens and hilarity ensues.

Use static linking with parsimony and intent. It is a double-edged sword;
don't make it your default. Learn from past mistakes.

That said, the concept is interesting. I would just use dynamic libraries
inside. Space and speed issues are premature optimizations.

~~~
AnIdiotOnTheNet
> Learn from past mistakes

You mean like DLL hell, and the Linux distro fragmentation mess? History has
shown that the trade-offs of dynamic linking are often worse. Case in point:
the reason containers are so popular in the first place.

~~~
wahern
DLL hell is mostly an issue of incompatible CRTs (aka libc in Unix parlance)
on Windows. Historically (until circa 2015) Windows programs (particularly
those compiled with Visual Studio) often loaded a multiplicity of incompatible
CRTs. It was often the case that malloc'd memory from one DLL could not be
free'd from another DLL because each linked to a different CRT. Compounding
matters, applications would often globally install popular DLLs which, even if
compiled from the exact same source code, would break apps if linked against a
different CRT version at build time. Which would invariably be the case as
Visual Studio _by default_ links in its own version-specific CRT rather than
the system CRT, the Linux equivalent of GCC shipping and statically linking in
a new version of musl libc, making mixing-and-matching of shared libraries
compiled in different environments a risky endeavor indeed.
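
To make the cross-CRT malloc/free hazard concrete, here's a small C sketch of
the convention that avoids it (lib_strdup/lib_free are hypothetical names;
imagine them living in a DLL that statically links its own CRT):

    #include <stdlib.h>
    #include <string.h>

    /* Imagine these two functions exported from a DLL with its own CRT:
     * its malloc/free operate on that DLL's private heap. */
    char *lib_strdup(const char *s) {
        char *p = malloc(strlen(s) + 1);  /* allocated on the DLL's heap */
        if (p) strcpy(p, s);
        return p;
    }

    void lib_free(char *p) {
        free(p);                          /* released on that same heap */
    }

    int main(void) {
        char *s = lib_strdup("hello");
        /* Calling free(s) directly would be undefined behavior across a
         * CRT boundary: the application's free() knows nothing about the
         * DLL's heap.  Routing the release through the owning module is
         * always safe: */
        lib_free(s);
        return 0;
    }

This "free where you malloc" discipline is why so many Windows APIs export
their own release functions (LocalFree, CoTaskMemFree, and so on).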

On Unix there's almost always a _single_, system-wide CRT. Moreover,
commercial environments like Solaris as well as Linux/glibc have maintained
strong backwards compatibility (mostly a matter of ABI stability) so that
compiled programs continue to work correctly with future libc versions.
Believe it or not, third-party dependencies aside, compiling a C program on
Red Hat and successfully (and correctly) running on Debian is the norm, not
the exception, assuming you never execute on a system with an _older_
version of glibc than during compilation.[1] (Even though musl libc doesn't do
symbol versioning, I assume the story is similar as it rigorously sticks to
exposing only POSIX interfaces and opaque structures as much as it can.)

Therefore, DLL hell is a much less pronounced issue in Linux land. A typical
example involves the loading of incompatible versions of third-party
libraries. But most widely used third-party libraries like libz or libxml2
have maintained strong backward compatibility. The only culprit I've
repeatedly encountered in this regard is OpenSSL, which until OpenSSL 1.1
never committed to a stable ABI, largely because it relied so heavily on
exposed, preprocessor generated structures as opposed to idiomatic opaque
structure pointers. But IME this was more an issue on macOS as the norm in
Linux is to simply use the system OpenSSL version. (Who here bothers to
download, build, and install OpenSSL in their containers rather than using the
system version of their Debian- or RedHat-derived container base image? It's
common enough at large companies but hardly so common that it was a
substantial factor in the adoption of containers.)

The ELF toolchain on modern Linux systems is actually sufficiently advanced
that you _could_ theoretically support the loading of multiple versions of a
library like OpenSSL (as long as you didn't pass objects between them), using
mechanisms such as the SONAME linker tag and the far more powerful but
little-known ELF symbol versioning. See, e.g.,
[https://www.berrange.com/posts/2011/01/13/versioning-in-the-libvirt-library/](https://www.berrange.com/posts/2011/01/13/versioning-in-the-libvirt-library/)

Alas, SONAME is used inconsistently and ELF symbol versioning almost not at
all--the only symbol versioning users of note, AFAIK, are glibc and libvirt.
It's at this point that executing in containers becomes the path of least
resistance.
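
For the curious, ELF symbol versioning looks roughly like this, using a
hypothetical libfoo (the @ marks an old version of a symbol, @@ the default
one that new links resolve to):

    /* libfoo.c -- two ABI versions of foo_init() in one shared object.
     * Build:
     *   cc -shared -fPIC -Wl,--version-script=foo.map libfoo.c -o libfoo.so
     * where foo.map contains:
     *   LIBFOO_1.0 { global: foo_init; local: *; };
     *   LIBFOO_2.0 { global: foo_init; } LIBFOO_1.0;
     */

    /* Old ABI: binaries linked against 1.0 keep resolving to this. */
    int foo_init_v1(void) { return 0; }
    __asm__(".symver foo_init_v1,foo_init@LIBFOO_1.0");

    /* New ABI (the "@@" default): newly linked programs get this one. */
    int foo_init_v2(int flags) { return flags; }
    __asm__(".symver foo_init_v2,foo_init@@LIBFOO_2.0");

This is the same mechanism glibc uses to keep decades-old binaries running
against current libc versions.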

It's true that installing scripting language packages or building complex C++
programs whose dependency trees have grown as nightmarish as Node.js/npm's is
a huge motivator for containerization, but that's not the kind of thing that
is meant by DLL hell.
sufficiently addressed by packaging systems. It's no coincidence Ubuntu is the
most popular container base image--the Ubuntu package repository is multiple
times the size of Red Hat's repository, and much more up-to-date. (It's larger
even than CentOS' or Fedora's repositories, even when including third-party
repositories like EPEL.) In other words, even in the era of containers people
still largely rely on the older packaging infrastructure.

[1] Maintaining this "never older than" invariant is an important caveat, but
I've run into more issues with kernel versions (e.g. unsupported syscalls)
than with libc versions. Containers don't solve the kernel version issue.

~~~
AnIdiotOnTheNet
You say all these words, and yet distributing an application on Windows or
macOS is trivial, but doing the same for Linux is widely regarded as a giant
pain in the ass for anyone who isn't just scattering their source into the
wind for some maintainer to deal with (often multiple maintainers).

~~~
wahern
As someone who has principally programmed for Linux and Unix, I find the
reverse to be true. Building and packaging for Linux and traditional Unix
environments feels straightforward and transparent to me, while building
and packaging for Windows and (to a lesser extent) macOS seems like a dark
art. And I think that's largely because "properly" built and packaged
applications (particularly GUI applications) for those systems should use
Visual Studio or Xcode, which require learning and applying proprietary and
restrictive (to me) build and link processes. And those processes are geared
toward building and packaging dependencies together, whether linking
dynamically or statically[1], which in the Unix universe was an anti-pattern
to be avoided as much as possible.

More to the point, the phrase DLL Hell was literally coined by and for Windows
developers to describe Windows-specific headaches. This much is a fact. Few
people in the Unix universe used the term DLL, certainly in the 1990s when
DLL Hell became a meme; the term shared library was and remains more common.
The usage of DLL Hell in the general sense of declaring linking and packaging
in a particular environment too complex and brittle is quite recent and
uncommon, though increasing.

I did say a lot of words, though :) It was late....

[1] Notably the Debian/Ubuntu universe rather strictly requires the packaging
of statically linkable libraries. With rare exceptions the foo-dev package
should always install a usable libfoo.a and, if supported, foo.a for module-
based frameworks that permit static embedding, even if the upstream project
only builds a shared library. This makes static linking trivial without
having to bundle your dependencies into your build, and makes it easier to
stick to a policy of using system-installed dependencies, minimizing the risk
of transitive dependency conflicts, even when statically linking, as the
maintainers do the hard work of ensuring version compatibility holistically.
Red Hat has a similar policy but IME it is more poorly executed. On multiple
occasions I've found RPM-packaged static libraries (including those built by
Red Hat, such as liblua.a) to have been built with the wrong build flags,
causing unnecessary namespace and linking headaches, sometimes forcing me to
bundle the dependency directly into my build or to create a bespoke RPM
package. More generally I've found the RPM universe to be much more
inconsistent and problematic, and instead of fixing these technical and
functional issues Red Hat expends most of their effort attempting to bypass,
rather than complement, the RPM ecosystem. By contrast Debian and Ubuntu
package maintainers do a better job of fixing broken upstream builds, which
makes life easier for everybody. If I had the choice, I'd stick to supporting
only Debian-based operating systems precisely for this reason.
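
As a concrete illustration, statically linking a Debian-packaged library is a
one-line affair (using zlib here, packaged as zlib1g-dev, since it's one of
the libraries mentioned above):

    /* app.c -- statically link the distro's libz.a while libc stays
     * dynamic; no bundling or vendoring of zlib sources required:
     *   cc -o app app.c -Wl,-Bstatic -lz -Wl,-Bdynamic */
    #include <stdio.h>
    #include <zlib.h>

    int main(void) {
        /* zlibVersion() is resolved from the statically linked libz.a. */
        printf("linked against zlib %s\n", zlibVersion());
        return 0;
    }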

~~~
AnIdiotOnTheNet
In the Windows universe though, that sort of bundling is not really necessary
for large parts of what would constitute a GUI application in UNIX land,
because the OS provides a guaranteed base set of libraries you can use.

I've been working with Zig a lot lately, and writing a GUI Windows app with no
support from any MS tool was a simple matter of some DLL calls. I don't have
to recompile for every version of Windows because they might have a different
version of libfoo, or it might be under a different name, or any of that
garbage.
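
In C rather than Zig, the same point looks like this; the only dependency is
user32.dll, which every Windows installation guarantees:

    /* hello.c -- a bare Windows GUI program with no bundled libraries.
     * Cross-compile from Linux, e.g.:
     *   x86_64-w64-mingw32-gcc hello.c -o hello.exe */
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show) {
        /* MessageBoxA lives in user32.dll, part of the OS's guaranteed
         * base library set -- nothing to ship alongside the .exe. */
        MessageBoxA(NULL, "No vendored libraries required.", "Hello", MB_OK);
        return 0;
    }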

Even if I did have a complicated set of library dependencies on top of the OS,
I can just zip them up in a folder with my application and call it a day. In
UNIX land, I'm expected to jump through a bunch of packaging hoops and make
the user jump through hoops to get my package to avoid conflicts that may or
may not exist. Even if I want to distribute my application as a straight
AppDir, or with something like AppImage, I have to include a hell of a lot to
cover things that any given distribution might screw me on, or make the user
jump through hoops getting the right dependencies.

> More to the point, the phrase DLL Hell was literally coined by and for
Windows developers to describe Windows-specific headaches

Yes, when everyone tried to copy their DLLs into the system folder and share
them all UNIX style. There's a reason they don't really do that any more.

