
Why would anyone choose Docker over fat binaries? - signa11
http://www.smashcompany.com/technology/why-would-anyone-choose-docker-over-fat-binaries
======
friend-monoid
* If my colleagues don't have to understand how to deploy applications properly, their work is simplified greatly. If I can just take their work and put it in a container, no matter the language or style, my work is greatly simplified. I don't have to worry about how to handle crashes, infinite loops, or other bad code they write.

* We have a whole lot of HTTP services in a range of languages. Managing them all with fat binaries would be a chore - the author would have to give me a way to set port and listen address, and I have to keep track of every way to set a port. With a net namespace and clever iptables routing, docker can do that for me.

* Sometimes, I have to deploy an insecure app. Usually, it's a badly configured memcache or similar. With net namespaces, I can make sure only a certain server has access to that service, and that the service cannot ruin my host server.

* It's possible for me to namespace everything myself with unshare(1) and "ip netns" and cgroups and chroot and iptables... but that would consume all my available time. Docker can do that for me.

* When you reach more than 20 or so services to keep track of, you need tools to help you out.

* Load balancing. Luckily, I don't have to deal with extreme network loads that would require hand-made solutions, but even modest ad-hoc load balancing becomes a lot easier with these tools.
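
To make the manual-namespacing point concrete, here is a rough sketch of the per-service setup a container runtime automates. The namespace name, addresses, and port are illustrative, and all of this requires root:

```shell
# Create an isolated network namespace for one service.
ip netns add app1

# Wire it to the host with a veth pair.
ip link add veth-host type veth peer name veth-app1
ip link set veth-app1 netns app1
ip netns exec app1 ip addr add 10.0.0.2/24 dev veth-app1
ip netns exec app1 ip link set veth-app1 up

# Forward a host port to the namespaced service.
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j DNAT --to-destination 10.0.0.2:8080

# Run the service confined to its namespace.
ip netns exec app1 ./my-service
```

Multiply that by every service (plus cgroups, chroots, and cleanup on every restart), and delegating it to a container runtime starts to look attractive.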

~~~
notyourday
> * If my colleagues don't have to understand how to deploy applications
> properly, their work is simplified greatly. If I can just take their work,
> put it in a container, no matter the language or style, my work is greatly
> simplified. I don't have to worry about how to handle crashes, infinite
> loops, or other bad code they write.

Of course you do, you just moved the logic into the "orchestration" and
"management" layer. You still need to write the code to correctly handle it.
Throwing K8S at it is putting lipstick on a pig. It is still a pig.

> * We have a whole lot of HTTP services in a range of languages. Managing
> them all with fat binaries would be a chore - the author would have to give
> me a way to set port and listen address, and I have to keep track of every
> way to set a port. With a net namespace and clever iptables routing, docker
> can do that for me.

Nope, you wrote a set of rules, and as long as everyone adheres to those rules
things kind of work (in a clever way). Of course, if you had the same kind of
rules written and followed in any other setup, you would arrive at exactly the
same place. In fact, you would probably arrive at a better place, because you
would stop thinking that your application works because of some clever
namespace and iptables trick.

> * Sometimes, I have to deploy an insecure app. Usually, it's a badly
> configured memcache or similar. With net namespaces, I can make sure only a
> certain server has access to that service, and that the service cannot ruin
> my host server.

You may be able to guarantee this with a VM but you certainly cannot guarantee
it with a container.

> load balancing. Luckily, I don't have to deal with extreme network loads
> which would require hand-made solutions, but just pushing up that number a
> little to do ad-hoc load balance makes things a lot easier

The time to build a template and instrumentation for your
haproxy/nginx/varnish layer is precisely when you don't yet have much traffic
or complexity.

~~~
anilakar
I think that the main point was that docker skills are transferable, i.e. you
can expect a new hire to be productive in less time. Too many companies still
have in-house build/deploy systems that are probably great for their purpose
but don't offer valuable experience that would be usable outside that company.

~~~
notyourday
In my observation, Docker skills are the modern-day ability to type "make"
without being able to actually write or debug a Makefile.

~~~
sigjuice
Yes. Each time you start a container, you are basically saying “make clean” as
well.

~~~
notyourday
Except that people who write software to be put in containers cannot write
that Makefile to save their lives, so for something that should be a ten-line
dependency they manage to pull in an entire Linux userland, all of X, and
every unneeded toolchain.

------
Lazare
I admit: I find the argument for "fat binaries"[1] over containers compelling.
There's just one problem...

...let's say I'm working on an existing code base that has been built in the
old-style scripting paradigm using a scripting language like Ruby, PHP, or
(god help us) node.js. Let's say we're well aware of the shortcomings and are
looking for a migration path to move into the future.

I can just about see how we can package up all our existing code into docker
containers, sprinkle some magic orchestration all over the top, and ship that.

I can also see, as per the article, an argument that we'd be much better off
with fat binaries. But here's the thing: You can dockerise a PHP app. How am I
meant to make a fat binary out of one? And if your answer is "rewrite your
entire codebase in golang", then you clearly don't understand the question; we
don't have the resources to do that, we wouldn't want to spend them on a big
bang rewrite even if we did, and in any case, we don't really like golang.

All of which makes this article seem oddly academic to me. There's a lot of
value to something that can be layered on top of your existing solution for
those of us who aren't starting greenfield projects.

[1]: Also, while "fat binaries" is a great term, it's one that already exists
and means something totally different.

~~~
awirth
_But here's the thing: You can dockerise a PHP app. How am I meant to make a
fat binary out of one?_

For years Facebook (reportedly) did this with HPHPc. Not a very good idea
these days, but it's certainly within the realm of possibility for large
companies.

~~~
hennsen
Yes, and we all know every shop in town has resources comparable to
Facebook's...

~~~
tyingq
You can compile a statically linked PHP interpreter. Probably a bit of messing
around with configure and makefiles, but doable.
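
A hypothetical sketch of what that messing around might look like; the exact configure flags vary by PHP version and by which extensions you need, so treat this as a starting point rather than a recipe:

```shell
# From an extracted PHP source tree: build only the CLI SAPI, no
# shared extensions, and ask the linker for a fully static binary.
./configure --disable-all --enable-cli LDFLAGS="-static"
make -j"$(nproc)"

# If it worked, the result carries no runtime library dependencies.
file sapi/cli/php    # hope for: "statically linked"
```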

~~~
jblow
More to the point, if this became the commonly-accepted sane way to do things,
php would support it directly and it would be easy.

------
btown
> Docker is a tool that allows you to continue to use tools that were perfect
> for the 1990s and early 2000s. You can create a web site using Ruby On Rails,
> and you’ll have tens of thousands of files, spread across hundreds of
> directories, and you’ll be dependent on various environmental variables.
> What ports are in use? What sockets do you use to talk to other apps? How
> does the application server talk to the web server? How could you possibly
> easily port this to a new server? That is where Docker comes in. It will
> create an artificial universe where your Rails app has everything it needs.
> And then you can hand around the Docker image in the same way a Golang
> programmer might hand around a fat binary.

Anyone who thinks that all modern web applications are made in Golang or on
the JVM is in a pretty weird echo chamber. Server-side rendering with React or
Angular Universal practically requires you to be running Javascript on the
server, and while you can theoretically statically link V8 to Golang, it's
much easier to use Node and have native code-sharing. And the reference
implementation for GraphQL is in Javascript as well. Web application servers
never stopped moving towards scripting languages (though arguably the
microservices split from them did move towards compiled languages). So there
will continue to be, for the foreseeable future, an industry-wide need for
deployment of non-fat binaries with these types of dependencies. And in that
case, Docker is significantly more sane than trying to synchronize all those
files.

~~~
andybak
I don't see where he claims anything like that. The strongest statement seems
to be "move to languages that support fat binaries" - which doesn't preclude
a loftier aim: "your favourite language should start supporting fat binaries".

(I'm replying to clarify - not sure if I agree. I'm a Python guy - I think
everyone should switch to languages with significant white space ;-) )

------
tytso
Fat binaries solve _one_ of the problems that Docker containers solve. They
don't solve the security isolation problem or the resource control issue.

That being said, there are many people who are using Docker primarily to solve
the DLL-hell problem of shared libraries. And containers don't really provide
that great a solution as far as security is concerned; VMs will always be
more secure.

And the statement that the Go language is the pioneer of fat binaries is,
well, just wrong. People were using static binaries (in some cases with
built-in data resources) to solve that problem for literally decades. MacOS,
for one.

I also remember working with one of the eventual co-founders of Red Hat back
when MIT was interested in a proprietary math analysis tool for SCO, but which
we were planning on running on Linux. MIT had purchased a site license, which
he was going to implement by checking the network address to see if it was
18.x.x.x. He was going to provide a statically linked SCO binary that had this
check. The funny thing was that his development system was Linux (it was more
developer friendly), and he was cross-compiling to create a statically-linked
SCO binary --- which we were then going to run on Linux using SCO emulation.
But that statically-linked binary was basically a "fat binary", which would
work on any system (including any Linux distribution) that supported the set
of system calls SCO used. (Ah, the early '90s, before Red Hat was founded and
before IBM had "discovered" Linux, were simpler times...)

~~~
crdoconnor
The "DLL hell" problem is not the only problem it solves, but it is the only
problem that docker solves reasonably well.

~~~
muxator
That's true. The other dependencies of an application are handled as well, but
the abstractions are leaky.

Storage (and file-system uids) and networking are all elements of the outer
system that inevitably leak into the container. Using just Docker does not
save you from having to deal with those, and IMHO those are the really hard
problems.

------
mwfunk
The article would make sense if portability were the only reason to use
containers. Fat binaries solve the portability problem in a way that may or
may not make sense for a given project, but fat binaries don't address any of
the other issues that container technologies provide solutions for
(sandboxing, resource management, etc.). Either the author isn't aware of
these other issues, or is purposefully sweeping them under the rug, because
arguably they are more important than simple binary compatibility. I'd be
pretty down on containers too if I thought they only existed to distribute
portable binaries.

~~~
SOLAR_FIELDS
Precisely. As a real-world example, I have a repository that is basically a
large collection of code modules running in three different runtime
environments. One of these runtimes is a Hadoop cluster, where we can happily
shadowJar all of our massive dependencies (like Spark) into a giant
multi-hundred-MB jar file and have no issues. Another is a client desktop
environment that is shipped to thousands of desktop users with each version
release. Most of the code is shared between the two runtimes, so it makes no
sense to separate them. Therefore we build the project without the fat
dependencies in the binary, so we can ship the slim version to the client
runtime without forcing a huge download every release.

------
snarfy
"A fat binary (or multiarchitecture binary) is a computer executable program
which has been expanded (or "fattened") with code native to multiple
instruction sets which can consequently be run on multiple processor types.
This results in a file larger than a normal one-architecture binary file, thus
the name." [1]

What does having an x86 and x64 binary in the same executable have to do with
dependency management? You can have a fat binary that is dynamically linked.
If you have an x64 OS then the 64 bits run, but they still call OS libraries
if the app is dynamically linked.

Static linking [2] is what compiles all the dependencies into one big
executable, but every compiled language has that. It is not exclusive to Go.

[1] -
[https://en.wikipedia.org/wiki/Fat_binary](https://en.wikipedia.org/wiki/Fat_binary)

[2] -
[https://en.wikipedia.org/wiki/Static_library](https://en.wikipedia.org/wiki/Static_library)

~~~
Lazare
I assume you knew this and were just making a point, but: OP is using the term
"fat binary" in a totally different and incompatible way to how your wikipedia
link defines it.

> every compiled language has that. It is not exclusive to Go.

I don't think the article claimed it was. But it _is_ a lot harder with C than
Golang for various technical reasons, and it's much, much harder still with
most scripting languages.

~~~
pjmlp
It is super easy to do static binaries with C; I was already doing it in 1992,
the first time I actually used C in my life.

It's only hard if one conflates glibc, a specific implementation of the ANSI C
standard library, with the only implementation of it.

~~~
IshKebab
Hahaha, that is soooo wrong. You might think you make a static binary with C
by passing -static, but trust me, you do not. I believe it is still actually
impossible to link with glibc statically. It's only been possible to make a
truly static binary on Linux since musl was written, and that was not in 1992.
In the 2000s there was an attempt to make true static binaries possible on
Linux (I think it was called Autopackage), but they eventually gave up due to
the effort required.

The story on Windows is much better of course, but even there making a proper
dependency-free (as much as possible) binary is _much_ easier with Go than any
other language. It's possible in Go because they don't depend on the C
standard library and wrote their own linker.

~~~
rkeene2
It is not impossible to link glibc statically, and in older versions of libc
this did occur. However, the Linux community at the time decided that the
tradeoff of giving up a modular system NSS was too great -- Go developers
disregard this and don't believe anyone does anything other than the defaults.

Even still, it's possible to compile glibc such that those NSS modules are
statically linkable. See here for further information:
[https://stackoverflow.com/questions/3430400/linux-static-linking-is-dead](https://stackoverflow.com/questions/3430400/linux-static-linking-is-dead)

However, in 1992 it was entirely possible to statically link an application to
libc4 and have it have no run-time dependencies other than the kernel ABI.

~~~
dullgiulio
Go doesn't disregard anything. It tries to parse the NSS configuration; if
there is no Go implementation available for the configured source, it falls
back to glibc.

~~~
rkeene2
Only if you use "cgo", otherwise it doesn't have access to the run-time
linker.

~~~
dullgiulio
No, by default.

------
papaf
This works for one Go or Rust executable, but any real system is a mix of
technologies, e.g. a Go service talking to a Java application server proxied
by a C web server that was configured with a bash script.

Once you have a mix of different technologies, it's easier to just say
"everything in Docker".

The ship-container analogy on the old Docker website was a great illustration
of this. It's a shame the new website is not so clear about the benefits and
is full of empty phrases such as "accelerating innovation".

~~~
timthelion
Does it even work well for Go or Rust executables? I seem to recall that
Docker itself was written in golang because "fat binaries!!11", but now Docker
is one of the hardest-to-install applications that I know of, and it certainly
isn't simply one executable file to be copied onto the system.

Managing resource files in fat binaries is really problematic.

~~~
throwanem
I've never had trouble installing Docker anywhere, whether from package
managers or installers. Where have you, and what kind of trouble? If I might
run into something similar down the road, better I should know about it now.

~~~
timthelion
It used to be that docker could be installed literally by downloading the
docker binary; then it was a single curl command. Now the documentation looks
like this: [https://www.docker.com/get-docker](https://www.docker.com/get-docker)

You press "get docker community image". Then you find your distro. Go to
"download from the docker store", discover that there is no download link
([https://store.docker.com/editions/community/docker-ce-server-ubuntu](https://store.docker.com/editions/community/docker-ce-server-ubuntu)),
then you click on the long-form link to the docs, which gets you here:
[https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce-1](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce-1)

And then that is not simple at all.

So much for fat binaries making things easy. Good grief.

~~~
jsnathan
If you simply want the latest stable docker version, try:

    curl -o /tmp/get-docker.sh https://get.docker.com && bash /tmp/get-docker.sh

or even just

    curl https://get.docker.com | bash

You can also use this script to update to the latest version, if you
previously used it to install.

------
luord
This article failed to mention the most important thing: an actual advantage
of fat binaries. The closest he comes is saying that he feels the docker way
is "old"... as compared with fat binaries, which is how desktop applications
have been shipped for decades. Hell, size, the only clear would-be advantage,
is only mentioned in a throwaway sentence.

> Yes, the network can be very powerful, but trying to use it for everything
> is a royal pain.

He says this after saying docker is unsuited for the massive scale of
microservices... where everything communicates through the network anyway.

Also, I have a feeling he doesn't know the distinction between kubernetes and
docker. They don't actually compete with each other; I don't even understand
the comparison. Kubernetes _uses_ docker.

All in all, I'm _not_ convinced.

PS: At least in python, there are ways to create fat binaries anyway. And I
have no doubt that this guy is very good at operations, as he seems to have
not found problems with go's dependency management and even implicitly
compares it favorably with those of the scripting languages.

~~~
pmoriarty
_" I have a feeling he doesn't know the distinction between kubernetes and
docker. They don't actually compete with each other, I don't even understand
the comparison. Kubernetes uses docker."_

Docker (the company) is itself responsible for this confusion. They've
redefined what "docker" means a whole bunch of times.
~~~
curun1r
I had this discussion [0] with their CTO a while back and, suffice it to say,
there's a profound lack of understanding of what early adopters like me had to
do to lobby for the use of their products in enterprises.

[0]
[https://news.ycombinator.com/item?id=13775732](https://news.ycombinator.com/item?id=13775732)

------
jchw
Well, the author is definitely missing the point.

In scenario 1, I put a Go binary onto a server and make a systemd unit file.
In scenario 2, I put a Go binary in a docker container and launch it on a
Kubernetes cluster. Scenario 2 is wasting a ton more cycles and RAM, but other
than that, what's the difference?

* With containers I can put a Python app right alongside my binary but with total isolation. No need to futz around with chroots or making a static build of Python and embedding my scripts into it.

* What if I need libc, for example to link to SQLite or something? Suddenly my Go binary requires libc. The isolation is broken!

* Systemd can use Cgroups to limit RAM or CPU, but schedulers like Kubernetes can also use your CPU and RAM limits to schedule containers. As far as I know, there's no equivalent tooling for doing this with fat binaries.

* Without Docker, I have to manually manage each container port. Can't run two apps with pprof servers on the same box. With Docker, I only have to care about public ports, and can port-forward into debug ports manually, and with Kubernetes I never have to care about port conflicts.

* Kubernetes, with enough hackery to work around bugs, can actually do seamless deployments where it checks your internal health endpoint or command before making the container available. As far as I know, the alternative to this is writing crappy scripts that try to do this without declarative logic to back it.
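
To make the resource-limit and health-check bullets concrete, here is a minimal sketch of a Kubernetes Deployment; the manifest fields are standard Kubernetes API fields, but the names, image, and port are illustrative:

```shell
cat > deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:1.2.3
        resources:
          requests:          # used by the scheduler to place the pod
            cpu: 100m
            memory: 128Mi
          limits:            # enforced at runtime via cgroups
            cpu: 500m
            memory: 256Mi
        readinessProbe:      # pod receives traffic only once this passes
          httpGet:
            path: /healthz
            port: 8080
EOF
kubectl apply -f deploy.yaml
```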

You could go on and on. It doesn't have to be Docker. Could use rkt as well,
or really any container engine. The point is that the container engine +
scheduler pattern is immensely useful, and if it weren't, Google would already
be on the next thing.

If you're just setting up a single server with a single program, fine. Drop a
binary on it and call it a day. But when you want to implement CI/CD and
schedule applications across multiple servers and do load balancing and so on,
you might feel like you're reinventing the wheel a bit considering those
problems were all already solved by Kubernetes.

~~~
falcolas
> but other than that, what's the difference?

The need to configure, manage, monitor and maintain a whole extra set of
programs. Also, systemd can just as easily start the binary in a cgroup,
eliminating most of the following concerns.
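
As a sketch of that systemd route: MemoryMax and CPUQuota are standard systemd resource-control properties, while the unit name and binary path here are illustrative:

```shell
# Run a plain binary as a transient unit with cgroup limits applied by
# systemd itself; no container runtime involved (requires root).
systemd-run --unit=my-service \
  -p MemoryMax=512M \
  -p CPUQuota=50% \
  /usr/local/bin/my-service

# The same limits in a permanent unit file ([Service] section):
#   MemoryMax=512M
#   CPUQuota=50%
```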

> The isolation is broken

Worth noting that behind the scenes, Linux is using the same in-memory
versions of libc, even for containers. That isolation you speak of already
doesn't exist.

> [...] scheduling [...]

Ansible, Chef, Saltstack, etc.

> I have to manually manage each container port

You have some 65,000 ports available to you. I don't think this is as big an
issue as it's made out to be - people are lazy, which is the only reason there
are so many conflicts over 8000 and 8443.

> Kubernetes, with enough hackery to work around bugs

Given the context that I am someone currently attempting that hackery - this
shit just kinda works, some of the time. And when it fails, it's a morass of
conflicting documentation, and responses of "well, you're not using the
default distribution for your environment" (no shit, it's no longer supported)
and "works on my single node minikube cluster". Give me ansible/chef/saltstack
any day of the week. Hell, I'll even take cfengine at this point.

Kubernetes was built for Google's use case, in Google's environment.
Attempting to use it with other restrictions and requirements is a nightmare.

~~~
markbnj
> Kubernetes was built for Google's use case, in Google's environment.
> Attempting to use it with other restrictions and requirements is a
> nightmare.

That's a tremendous overstatement. The first part is just wrong, as was
pointed out in earlier responses. The second part just smacks of your having
been soured by a poor experience. We started with kubernetes in production at
very small scale, just a few supporting services to begin with since there was
a lot to learn. Currently we're running five or six of our core production
apps on the GKE version of the platform and the experience has been very
positive.

------
pjmlp
> The Go language is the pioneer for fat binaries.

I had to laugh at this one.

~~~
rkeene2
Especially since it's nowhere near where Tcl was with Starpacks nearly a
decade ago.

------
open-source-ux
Golang didn't pioneer 'fat' binaries as the article claims. It may have made
them popular again, but we have had compiled, self-contained, dependency-free
executables for decades. If people are not aware of that, perhaps it's because
scripting languages (or languages that use a VM) have come to so thoroughly
dominate programming language discussions?

~~~
pjmlp
I guess so.

Usually anyone who thinks Go's compilation model is innovative has never used
anything beyond scripting languages, and possibly C in addition to them.

MS-DOS used to call them "XCopy installs", NeXTSTEP had its fat binaries with
a directory structure, Windows and MacOS (pre-OS X) can store all dependencies
inside the .exe file, and so on.

In any case, fat binaries don't cover dependencies on the file system, other
running servers, or sandboxing.

~~~
colejohnson66
> MacOS (pre-OS X) can store all dependencies inside the .exe file and so on.

On macOS / OS X, it’s actually a .app _folder_

~~~
pjmlp
Yes, but on Mac OS pre-OS X you could use resource forks for that.

------
mwcampbell
> It seems sad that so much effort should be made to keep old technologies
> going,

I strongly disagree with this part. To make progress as a technological
civilization, without constantly wasting time reinventing things, we need to
keep old technologies working. So, if Docker keeps that Rails app from 2007
running, that's great. And maybe we should still develop new apps in Rails,
Django, PHP, and the like. It's good to use mature platforms and tools, even
if they're not fashionable.

That word "fashionable" brings me to something that really rubs me the wrong
way about this piece, and our field in general. Can we stop being so fashion-
driven? It's tempting to conflate technology with pop culture, to assume that
anything developed during the reign of grunge music, for example, must not be
good now. But good technology isn't like popular music; something that was a
good idea and well executed in 1993 is probably still good today.

Besides all that, as others have explained, platforms like Kubernetes have
other advantages over just dropping a self-contained binary on a Linux server.

------
supermatt
It's INCREDIBLY naive of the author to slam docker usage because they think
everything should be a fat binary - and what are they even calling a fat
binary? How do I make my node app a fat binary? Would they be happier if we
wrapped up docker + images in an executable and called that a fat binary?

Also, I think the author is simply confused about what docker itself is,
complaining about its lack of network orchestration - and comparing it
directly to kubernetes!

~~~
SOLAR_FIELDS
This is not intended to diminish your point, because I agree with you, but you
can in fact make a fat binary with Node:
[https://github.com/nexe/nexe](https://github.com/nexe/nexe)

I have used it when I cannot guarantee that Node.js is installed on the
location I am deploying the app to. It works pretty well, but the C
compilation of all the Node bindings took quite a while when I was using it
(about 2 years ago). I think a build of a small project took around 10 minutes
at the time.

~~~
derimagia
This would copy the runtime for every app you need. I'm pretty sure that if
you used docker this wouldn't be the case, but I'm not 100% sure. Obviously
space isn't the biggest of deals, just mentioning it.

------
jondubois
Not all developer tools and languages let you compile to fat binaries. Also I
don't think that they were pioneered by Go. Java .jar files have been around
long before Go even existed. Jar files are probably even better in fact
because they can run on any operating system without any sort of
virtualization.

Docker standardizes containerization. Kubernetes relies on a standardized
container platform like Docker in order to be able to automate the running of
different systems in a consistent and repeatable way.

Also, from the point of view of someone who has to provide support for
software for multiple customers, it helps to know that the Linux environment
between all customers is consistent... That way you don't get strange issues
that only happen to one customer and not the others because of some slight OS
level differences.

~~~
DC-3
> Jar files are probably even better in fact because they can run on any
> operating system without any sort of virtualization.

What did you think the V in JVM stood for?

~~~
titanix2
Harsh tone for a comment that’s confusing OS virtualization with virtual ISA
and its runtime.

~~~
DC-3
I'm aware of the difference, I just don't think it matters here - in either
case there's an additional level of indirection when compared to Golang fat
binaries.

------
lmilcin
With fat binaries you are looking to isolate your application from the
environment by providing the dependencies (library code) along with your base
application code. This way you don't have to make sure the user of your
application provides the correct libraries in the correct versions.

A Docker container is an extension of that idea. If you can distribute some of
the libraries, why not distribute the entire execution environment? Why not
isolate your application from the external environment even further?

The more you control the environment, the less work you have to do to make
sure the application will work.

From my point of view, there are two types of applications with diametrically
different development processes:

1. Applications where you have to invest a lot of effort into making them
resistant to the environment: they have to work regardless of differences in
user environments. Think, for example, of a PC game that must run on different
GPUs, with different driver versions, on different OS versions, with different
applications installed.

2. Applications where it is enough to show that there is a set of
circumstances under which the application works correctly: think of a typical
enterprise application, where the devs would place it on a server and the
environment would then be religiously preserved so as not to risk breaking the
application.

Developing type 2 applications is much, much, much less effort.

By distributing type 1 applications along with their environment, we basically
reduce the cost of developing them, because now it is enough to develop them
to the type 2 standard.

For example: you may create a simple bash script to do something. If you
distribute it as just a script file, you need to make the script work on every
bash on every mainstream Linux distribution.

If you distribute the script as a docker image it is enough for you to make
that script work on that particular image (for example Ubuntu x.x) and you
don't need to worry about other possibilities.
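
Concretely, pinning the script to one known environment might look like this (the image tag and file names are illustrative):

```shell
# Build an image that always runs the script against Ubuntu 16.04's
# bash, regardless of what the host is running.
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
COPY myscript.sh /usr/local/bin/myscript.sh
ENTRYPOINT ["/bin/bash", "/usr/local/bin/myscript.sh"]
EOF
docker build -t myscript .
docker run --rm myscript
```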

This is the essence and the value of distributing applications as Docker
images.

------
neilwilson
Just moves the problem rather than solving it. You now have a Go dependency
management problem and all the fun that causes.

It won't be long before we see Go library distributions with versions in
packages and security fixes rolled out to them...

What we've missed over the years is an agreed improvement to the Unix process
abstraction that the kernel manages. IPC was never solved, so now we're trying
to do IPC over HTTP with JSON.

------
dingo_bat
Equating docker containers to fat binaries is naive and shows that you don't
really understand what docker does. At my workplace, we use containers instead
of VMs or emulators. The entire system of a virtual host runs inside a
container (multiple apps, multiple namespaces, multiple network interfaces).
Please tell me how to do that as easily as "docker run <some options>".

------
pebers
It's a bit idealistic to think people can "just" use fat binaries; sure, Go
mostly handles that (not completely: by default most nontrivial Go apps are
dynamically linked, which you can fix, but not _that_ easily), but that's not
exactly any comfort if I've got 30,000 lines of Python which I'm not keen on
rewriting. Technologies like pex can help, but in practice they require some
work, especially with a codebase that hasn't been written with them in mind,
or with third-party libraries that aren't expecting such an environment - and
that still doesn't solve what you're doing with the interpreter itself.

I'm far from a big fan of Docker, but can definitely see the appeal for
providing a solution to those problems, even if some see it as unprincipled.
In our case we'd already put a lot of effort into building mostly self-
contained binaries and Docker is still useful (admittedly mostly just as a
component of Kubernetes; we'd cheerfully replace it with rkt or something if
there was a good option).

~~~
SOLAR_FIELDS
We bundle our apps in binaries and stick them in docker as well. This lets our
devops team take advantage of unified deployment strategies and configuration
management. For example, app A is a Node app and app B is a fat Scala Play
binary. Both can use the same Docker deployment and environment configuration
strategies, as well as the same pluggable pieces for Kubernetes.

Why take an all-or-nothing approach when you could pick the best of both
approaches and use them in tandem? :)

------
shrimpx
A typical service has many types of dependencies including filesystem layout,
the presence of specific executables at specific versions, rpm/deb packages at
specific versions. Different services make different assumptions and evolve at
incompatible paces. Docker can isolate an assumption about FS layout as well
as an assumption that a deb is at a specific version.

This article is confusing in its focus on Docker. It really proposes a much
more restrictive way to build apps, in which all assumptions except linking-
level dependencies must be portable. Its answer to filesystem isolation:
don't rely on the filesystem being a certain way. Or, rely on it being a
certain way and push the complexity into your deployment layer.

------
ealexhudson
I developed stowage.org to get a closer binary-like experience with Docker.
What is great is being able to use containers like a generic any-language fat
binary compiler.

However, there is a bunch more stuff in Docker (or equivalent runtimes) that
this article doesn't touch on. It's similar in some ways, for some use cases.
It could be interesting to see what a fat binary orchestration tool would look
like, but I would bet you end up reinventing 99% of what Docker already does.

------
throwanem
Because choosing Docker requires boiling fewer oceans, and whether those
oceans should or should not be boiled has no bearing on whether I can afford
to boil them right now.

------
elnygren
Fat binaries don’t solve the same problems as Docker does (there is some
overlap though!).

The author is right about k8s. And what do you need to run stuff in k8s?
Containers. And what is a pretty good container format? Oh yeah, Docker.

Although other containers exist, let’s be honest, people running k8s are
almost always running Docker containers in them. And it all works amazingly
well.

How do I run fat binaries in k8s? In a production-ready and battle-tested way,
please.

------
siliconc0w
Even if you build fat binaries, it's helpful for them to run in a container
for a bunch of reasons:

    
    
      0) security
      1) container overhead <<<< vm overhead
      2) better bin-packing because of 1
      3) enforce resource constraints (I know what your app requires)
      4) enforce 12-factor app patterns; build apps that don't rely on local state
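Points 3 and 4 map directly onto standard `docker run` flags; a sketch (the
image name is hypothetical):

```shell
# --memory/--cpus enforce cgroup resource limits (point 3);
# --read-only plus environment-based config pushes toward
# 12-factor apps that keep no local state (point 4).
docker run -d \
  --memory=512m --cpus=1.0 \
  --read-only \
  -e DATABASE_URL="$DATABASE_URL" \
  myorg/myapp:1.2.3
```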

~~~
wnoise
Where does he say you should run fat binaries in a VM?

------
cobookman
Fat binaries only solve dependencies. Containers also deal with isolating
environment variables and the filesystem.

Each program doesn't have to worry about what else is running on the machine.
It also helps you deal with the differences between, say, Ubuntu and Red Hat.
Not all *nix OSes have the same filesystem layout.

~~~
Yeroc
Containers are overkill if you're just trying to isolate environment variables
and filesystems. I mean really, if that was your problem and you came up with
containers as the solution, I'd say you over-engineered it.
Environment variables -- trivial -- set them in the wrapper script used to
launch the binary and you're done. Filesystem -- use separate _nix accounts
for your binaries and set appropriate permissions; for additional security use
tools like SELinux if needed. Somehow, somewhere along the line there's grown
this huge echo chamber of 'containers for all things', and we have thrown out
all we ever knew about basic _nix.

Contrarian views like the one in the article are healthy!
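The wrapper-script approach described above might be sketched like this
(paths, variables, and the service account are all hypothetical):

```shell
#!/bin/sh
# Per-service environment, set at launch time rather than baked into an image
export APP_PORT=8080
export APP_CONFIG=/etc/myservice/config.ini
# Run under a dedicated unprivileged account; filesystem access is then
# limited by ordinary Unix permissions on that account (setpriv is util-linux)
exec setpriv --reuid=myservice --regid=myservice --init-groups \
    /opt/myservice/bin/myservice
```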

------
manigandham
I really don't see how this is an argument - Docker containers are like zip-
files for deploying these very same app binaries, whether fat or lean - except
containers are very easy to define, are able to run anything inside, can
easily be uploaded/downloaded over HTTP, and abstract away all the
systemd/init/background service nonsense into a clean API.

That alone is worth it, and it's what makes Kubernetes so powerful, by taking
that basic runtime abstraction and further lifting it up beyond the VMs/nodes
themselves.

~~~
merb
> and abstract away all the systemd/init/background service nonsense into a
> clean API.

hm?! what?!

> That alone is worth it, and it's what makes Kubernetes so powerful, by
> taking that basic runtime abstraction and further lifting it up beyond the
> VMs/nodes themselves.

how is a damn yaml file over 40 lines simpler than a 7-line systemd unit? (i
didn't even count the setup of kubernetes, since you probably never did that)
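A 7-line unit of the sort in question might look like this (service name and
binary path are hypothetical):

```ini
[Unit]
Description=My service
[Service]
ExecStart=/usr/local/bin/myservice
Restart=always
[Install]
WantedBy=multi-user.target
```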

also, nothing you said makes sense... since you compared things that aren't
actually comparable...

edit: yes, both have pros and cons, but your points still make no sense

~~~
manigandham
Is that really the argument? Because "40 lines" is more than "7 lines"?

Both are simple to write, but K8S services for me are much easier to define.
They're also far more configurable and usable across machines. Write it once
and you're done, and add in the ability to use service-oriented storage and
networking policies and it provides far greater power and flexibility than
ever before.

My point is that there is no point comparing these things since a docker
container can hold all the fat binaries you want inside... so it's not
either/or at all.

------
danmaz74
The post starts from the assumption we should all use microservices because
web scale. I'm curious, how many people here are actually using that "new
style", and enjoying it?

------
peterwwillis
It's sad when people have a myopic view of the use of technology.

Docker is a way to distribute an executable environment, very similar to
distributing an application platform.

A "fat binary" is literally just a single monolithic executable application.
It isn't an environment. It isn't a platform. It's rigid and difficult to
work with and, as the name implies, BIG.

There are many cases where a "fat binary" simply can not, does not, will not
work. It's a very limiting way to distribute and run an application.
Suggesting that a "fat binary" is somehow superior to a Docker environment is
not only missing the forest for the trees, it's forgetting that there's a
forest. The binary is just one tree.

Docker isn't more complex than what we had before. People just haven't been
trained on the different patterns of deploying environments and platforms, in
addition to applications. We used to build Docker-esque application platforms
all the frigging time before Linux containers even existed, and they still
exist without Docker. If you have a problem with Docker, that's fine, but it
doesn't mean you have to box yourself into a totally different restrictive
model.

People in this industry need to wake up and admit there are no simple
solutions to complex problems. Don't believe the hype.

------
philipov
Is pyinstaller a sufficient solution for creating fat binaries in python? I'm
not interested in using go, and I also have C++ binary dependencies that need
to be deployed but which come to me already built.

EDIT: "Or rather, if the problem is resource and dependency and configuration
management, we should solve the problem by moving to those languages that
support fat binaries."

Oh, so this article is really just an advertisement for Go. That sure spoils
it.

------
chmike
Containers are not only a solution for dependencies. They're also a protection
boundary.

~~~
neilwilson
It's just a process with a fancy chroot. Don't believe all the docker hype.
Sensible admins have been doing something similar for years. We just didn't
have a massive PR budget

~~~
mschuster91
> It's just a process with a fancy chroot. Don't believe all the docker hype.
> Sensible admins have been doing something similar for years.

You can easily break out of a chroot jail:
[http://pentestmonkey.net/blog/chroot-breakout-
perl](http://pentestmonkey.net/blog/chroot-breakout-perl)

That's not possible in Docker (okay, it is if you're running a container in
privileged mode, but that's another can of worms and you shouldn't do it unless
absolutely necessary). Also, Docker gives you networking isolation and
especially RAM/CPU usage limits, which are a real headache to set up with
chroot.

~~~
xorcist
> Thats not possible in Docker

Believe this at your own peril.

~~~
mschuster91
At least right now, I do not know of a way to break out of a Docker container
and the last bug I know of was fixed in 2014
([https://blog.docker.com/2014/06/docker-container-breakout-
pr...](https://blog.docker.com/2014/06/docker-container-breakout-proof-of-
concept-exploit/)).

The only ways I know of that can be used to jailbreak are, as documented on
[https://security.stackexchange.com/a/153016](https://security.stackexchange.com/a/153016):
kernel vulns (which are not inherent to Docker, and can hit you on any kind of
Linux environment as soon as you achieve RCE in a hosted application), running
a container with --privileged, and being careless with bindmounts - either by
bind-mounting /dev, /proc and friends which you shouldn't do in any case
unless you're aware of the pitfalls or by bind-mounting the Docker socket
file.

The latter is something that I see far too often (especially with docker-in-
docker setups for Jenkins slaves), but if you're avoiding this, you're safe
from that vector.

~~~
xorcist
Docker is comparable to a chroot in this regard. You cannot break out of a
chroot either, unless you are being careless with superuser privileges or
bind mounts. Which (unsurprisingly) people are, and that's why it's generally
not recommended to use chroot alone for security. Not because chroot somehow
is insecure or doesn't work as intended.

Docker for the longest time didn't even try to restrict the root user. Later
versions do, but the results will vary depending on your configuration.
That's why the Docker documentation states it should not be depended on for
security. It is wise to heed that recommendation.

------
rmetzler
> I don’t know if Kubernetes is better at orchestrating network resources than
> a system built around etcd or consul

Isn't Kubernetes built around etcd?

I really don't get this blog post; it reads like "I don't have this problem,
so nobody else does."

------
tlrobinson
I can create a Docker image for just about any app written in any language
that runs on Linux with a few lines of Dockerfile, whereas you need fat binary
support from each programming language compiler/interpreter/linker/whatever.

Correct?

------
tomohawk
This echoes our experience, which is that deploying go binaries is easier and
more reliable than using docker, but that docker is useful for legacy python
and other things that are difficult to turn into FRUs (field replaceable
units).

------
XorNot
What I've been wanting to do lately is find some way of packing up
conventional apps which I would otherwise dockerize.

It feels like the Linux Kernel Library project would be a way to do this -
redirect all the filesystem calls into mapped portions of the binary to
abstract over the filesystem which I think would get you 90% of the way there
practically?

------
senthilnayagam
but why did a class of developers use inferior open source languages like
ruby, python and php over the enterprise fat-binary java and dotnet stacks? if
you did not get that change, you won't get it this time either.

docker helped alleviate fears of vendor lock-in with cloud providers.

wait some more time and you will see unikernel applications as the new thin
binaries

------
e40
I use Docker mainly in situations where the required dependencies (Java, GTK,
etc) conflict with the versions on my system. In this case, the service
running in the container is required, unless I want to upgrade the operating
system, and I'm on CentOS 7 for a reason (stability and security).

~~~
pmoriarty
You could also use guix or nix for this.

------
fulafel
It's hard to say what Docker means now since the company has lots of products
called Docker, doing wildly different things, but for Mac and Windows users
it's a nice virtual machine frontend for running/developing containerized
Linux apps.

------
w8rbt
I like Go a lot, but it doesn't always build fat binaries.

go build xyz.go

xyz: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2, not stripped

~~~
brobinson
Try: CGO_ENABLED=0 go build xyz.go
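Disabling cgo forces the pure-Go toolchain, so the result no longer needs a
dynamic interpreter; `file` can confirm it (a sketch, using the same
hypothetical xyz.go):

```shell
# With cgo off, net/os fall back to pure-Go implementations and the
# binary is linked statically
CGO_ENABLED=0 go build xyz.go
file xyz   # should now report "statically linked"
```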

------
BLanen
Docker is not only used to solve dependency issues.

------
digi_owl
I would perhaps agree if this was about desktop rather than web...

------
timthelion
"Or rather, if the problem is resource and dependency and configuration
management, we should solve the problem by moving to those languages that
support fat binaries. Because fat binaries solve the main problem, without
creating additional problems." This is the same attitude as the Javascript
attitude. "Javascript is portable, so therefore all other languages are
irrelevant."

This attitude is wrong. It is wrong because programming is hard. We have spent
many decades trying to figure out how to program effectively. To produce
simple maintainable code. Solving the hard problems, the interesting problems,
is still too hard for us. The idea that "any language will do so long as it is
portable" might as well lead us to write in LLVM-IR!

