Why would anyone choose Docker over fat binaries? (smashcompany.com)
297 points by signa11 on Oct 29, 2017 | 173 comments



* If my colleagues don't have to understand how to deploy applications properly, their work is simplified greatly. If I can just take their work and put it in a container, no matter the language or style, my work is greatly simplified. I don't have to worry about how to handle crashes, infinite loops, or other bad code they write.

* We have a whole lot of HTTP services in a range of languages. Managing them all with fat binaries would be a chore - the author would have to give me a way to set the port and listen address, and I'd have to keep track of every app's way of setting them. With a net namespace and clever iptables routing, Docker can do that for me.

* sometimes, I have to deploy an insecure app. Usually, it's a badly configured memcache or similar. With net namespaces, I can make sure only a certain server has access to that service, and that the service cannot ruin my host server.

* It's possible for me to namespace everything myself with unshare(1) and "ip netns" and cgroups and chroot and iptables... but that would consume all my available time. Docker can do that for me (a sketch of the by-hand version follows this list).

* When you reach more than 20 or so services to keep track of, you need tools to help you out.

* load balancing. Luckily, I don't have to deal with extreme network loads that would require hand-made solutions, but being able to push up the instance count a little for ad-hoc load balancing makes things a lot easier.
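For a taste of what that by-hand work looks like, here's a rough and deliberately incomplete sketch - every name and address below is made up:

  # carve out an isolated network namespace with its own veth pair
  ip netns add myapp
  ip link add veth0 type veth peer name veth1
  ip link set veth1 netns myapp
  ip netns exec myapp ip addr add 10.0.0.2/24 dev veth1
  # route traffic from a public port to the namespaced service
  iptables -t nat -A PREROUTING -p tcp --dport 8080 \
    -j DNAT --to-destination 10.0.0.2:80
  # run the app in its own mount/pid namespaces inside a chroot
  unshare --mount --pid --fork chroot /srv/myapp-rootfs /app

And that's before cleanup, persistence across reboots, and repeating it identically on every host - all things a single "docker run" handles.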


> * If my colleagues don't have to understand how to deploy applications properly, their work is simplified greatly. If I can just take their work and put it in a container, no matter the language or style, my work is greatly simplified. I don't have to worry about how to handle crashes, infinite loops, or other bad code they write.

Of course you do, you just moved the logic into the "orchestration" and "management" layer. You still need to write the code to correctly handle it. Throwing K8S at it is putting lipstick on a pig. It is still a pig.

> * We have a whole lot of HTTP services in a range of languages. Managing them all with fat binaries would be a chore - the author would have to give me a way to set the port and listen address, and I'd have to keep track of every app's way of setting them. With a net namespace and clever iptables routing, Docker can do that for me.

Nope, you wrote a set of rules, and as long as everyone adheres to those rules things kind of work (in a clever way). Of course, if you had the same kind of rules written down and followed in any other system, you would arrive at exactly the same place. In fact, you would probably arrive at a better place, because you would stop thinking that your application works because of some clever namespace-and-iptables trick.

> * sometimes, I have to deploy an insecure app. Usually, it's a badly configured memcache or similar. With net namespaces, I can make sure only a certain server has access to that service, and that the service cannot ruin my host server.

You may be able to guarantee this with a VM but you certainly cannot guarantee it with a container.

> load balancing. Luckily, I don't have to deal with extreme network loads that would require hand-made solutions, but being able to push up the instance count a little for ad-hoc load balancing makes things a lot easier

The time to build a template and instrumentation for your haproxy/nginx/varnish layer is precisely when you don't yet have heavy traffic or much complexity.


I think that the main point was that docker skills are transferable, i.e. you can expect a new hire to be productive in less time. Too many companies still have in-house build/deploy systems that are probably great for their purpose but don't offer valuable experience that would be usable outside that company.


In my observation, Docker skills are the modern-day equivalent of being able to type "make" without being able to actually write or debug a Makefile.


Writing a Dockerfile successfully is maybe that. But learning to make a Dockerfile should take someone a day to figure out. What the entire movement is really about is creating an ecosystem around a common standard for DevOps.


I use Makefiles for creating Docker images; there is an advantage to Docker over fat binaries. Both tools exist for a reason; used wisely, both can save you time and effort.
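A minimal sketch of what I mean - all target and image names are illustrative, and recipe lines must be tab-indented:

  IMAGE = myorg/myapp:latest

  binary:
  	go build -o bin/myapp .

  image: binary
  	docker build -t $(IMAGE) .

  push: image
  	docker push $(IMAGE)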


Yes. Each time you start a container, you are basically saying “make clean” as well.


Except that people who write software to be put in containers cannot write that Makefile to save their lives, so for something that should be a ten-line dependency they manage to build an entire Linux, an entire X, and all sorts of unneeded toolchains.


Your observation is incorrect.


If a company has existing in-house systems that are working well, why should they care about "valuable experience that would be usable outside the company"? Unless you think it's their responsibility to train people so those people can then leave and take the benefit of that training to some other, possibly competing, business?


The company may not care about building skills for employees that are transferable elsewhere (though good companies do), but they do care about being able to hire people with skills that are applicable to their systems.

While a very large company can get away with building a custom in-house ecosystem that takes months for a new employee to come up to speed on, most companies find it more palatable to use industry standard products/systems that a new employee is already familiar with and can come up to speed on quickly.


The company may not care about building skills for employees that are transferable elsewhere (though good companies do), but they do care about being able to hire people with skills that are applicable to their systems.

Of course, but who would you rather hire, someone with solid fundamentals who can get up to speed quickly with anything, or someone who job-hops to use the latest buzzword library/framework/language that may have a useful lifetime of just a few years or even less? Given an either/or decision with other things being equal, I would almost always prefer to hire the former, and then use whatever tools worked best for the job regardless of whether they were built in-house or imported. A job-hopping resume full of trendy buzzwords is a big red flag for me.

While a very large company can get away with building a custom in-house ecosystem that takes months for a new employee to come up to speed on

This argument gets made all the time in discussions about frameworks, but I just don't see it. In my entire programming career, I have yet to encounter this hypothetical nightmare architecture that takes months before you can become productive and has no useful documentation or examples even though the entire application is built around it. (YMMV, obviously.)

Being able to get up to speed quickly on a new framework or library is an essential skill for a working professional developer. Whether that is the latest shiny Web World buzzword or something internal in your new employer's environment makes little difference, as long as the internal design is also sensible and reasonably documented -- and if it's not, you have much bigger problems than who you're thinking about hiring next.


Companies don't have a moral responsibility to do that, but it's a smart retention strategy. If good devs feel like they're learning useless, non-transferable skills, they'll be more likely to leave for the sake of their career. If they feel like the business is investing in them, they'll be more likely to stay.


Resume Driven Development can be just as bad for everyone involved.


That's a slanted take. Skills and experience that are only useful inside of one specific company are more likely to be less meaningful and effective (almost by definition).

Many highly competent individuals like to understand the fundamentals and first principles of how systems work and how different problems can be solved, and thus find work that provides no transferable knowledge/experience to be less appealing.

Of course, there is a balance here. Sometimes frustrating tasks must be done. In that case, ideally the end goal is sufficiently motivating.


Many highly competent individuals like to understand the fundamentals and first principles of how systems work and how different problems can be solved, and thus find work that provides no transferable knowledge/experience to be less appealing.

Right, but to those people, whether you use Docker or Kubernetes or custom scripting or a unicorn whose horn magically creates new instances when you need them is mostly just an implementation detail. It's no more interesting than exactly which compiler or DB you use; these are just tools, means to an end. If the differences are significant for your use case, sure, you evaluate and choose accordingly, but then you use the tools to get on with the job. The deep, interesting stuff is almost always in what you can build once you're into that territory.


This is definitely true for many people.

However, I do think it's likely there are some fundamental principles at play that make tool 1 better than tool 2 in specific contexts, and I think it is valuable and interesting to explore that.

Perhaps we are getting into the scientist vs. engineer mindset. I would guess, though, that the best engineers undoubtedly have some scientific curiosity helping them excel in their craft.


I think the conflict can be that engineers know what they're interested in, scratching their own itch, but that's not always aligned with what's best for the company. The balance is finding both.

One extreme is that I see companies hold on to toxic employees because they're brilliant, even if they are bad for the company long term.


(it does look like we're saying the same thing) Catering to a developer's whim as a retention strategy is not going to end well for anybody. In an ideal situation it can be mutually beneficial (that's what good management is--but that's not only relevant to your tech stack), in the worst case the developer leaves and there's new hotness from 3 years ago that doesn't work very well and nobody wants to touch.


> You may be able to guarantee this with a VM but you certainly cannot guarantee it with a container.

I can guarantee it pretty well on my RHEL and CentOS systems, SELinux prevents containers from accessing things they shouldn’t just like it prevents Apache from dumping /etc/passwd.


"I can guarantee it pretty well on my RHEL and CentOS systems"

Slashdot tried that with CentOS and failed miserably. I have my doubts regarding your guarantee.


Link? I want to see what they actually tried.


Your argument seems to be "why use Docker when you can do all of these things by hand?". It's kind of the DevOps corollary of "why use Java when you could write everything in C?".


No, my argument is "if you are using docker without understanding what you are doing, you are not only not solving your problem but complicating your stack"

Don't get me wrong I love people doing this. It makes my skills in uncomplicating needlessly complicated systems extremely valuable.


> You may be able to guarantee this with a VM but you certainly cannot guarantee it with a container.

Don't group all container implementations into one category. You could certainly use FreeBSD Jails and Illumos Zones for isolating apps.


Those solutions still allow the host and all the other containers to be compromised through a kernel exploit, though, don't they?

The most secure isolation is provided with physical separation. Second best is VMs. Containers are a distant third.


> Those solutions still allow the host and all the other containers to be compromised through a kernel exploit, though, don't they?

If I keep my boxes up to date I only need to worry about zero day kernel exploits. So if someone comes along and uses a zero day kernel exploit to escape my FreeBSD jail then they were going to get on the box anyway, because any attacker using a zero day is highly skilled and highly targeted.

> The most secure isolation is provided with physical separation. Second best is VMs. Containers are a distant third.

While I agree that physical isolation is the safest, I don't think it's so cut and dried between containers and VMs. Virtual machines have significantly more attack surface than either Jails or Zones; think of all the emulated devices.


"Virtual machines have significantly more attack surface than either Jails or Zones, think of all the emulated devices."

Nothing prevents you from using jails or zones in your VM.


> Nothing prevents you from using jails or zones in your VM.

I get that, but then it's no longer VMs vs containers; that's VM plus container vs container.


> Second best is VMs. Containers are a distant third.

I will grant you that containers are harder to isolate, since it's the same problem as isolating individual processes. But there isn't really a meaningful difference between a kernel exploit and a hypervisor exploit.


Attack surface of a minimal hypervisor is way smaller than that of an entire kernel.

Take a look at Clear Containers (despite the name, actually using VMs) and their qemu-lite minimal machine type.


What makes you say that? VMs get exploited as well (http://venom.crowdstrike.com), so what makes you think VMs are better than e.g. Zones?


It pushes thinking up to the app/middleware stack and away from server instances.


The odds are you are running on VMs in AWS, GCS, Azure or even DO. If that's the case, why on earth is your building block not a VM?


* Multiple machines are a larger attack surface and a larger surface area to get things wrong.

* Overhead. It might be getting smaller but it's certainly still there. Not just the application but all the OS services that are duplicated are extremely wasteful when you're billed for your resources directly.

* Your vendor distributes applications as docker containers.

* Your developers are building and testing in docker.

* Clustered resources are dead simple with swarm. It can obviously be done with VMs but it's work.

* Devs and sysadmins alike enjoy using off-the-shelf services that are maintained by someone else.


I can run a dozen containers on my laptop a lot more easily than I can run a dozen VMs.


No idea. I’m not advocating that position.

There are enterprise use cases where docker makes sense if the app support people are less impossible to deal with than the SAs. But you don’t have that issue in cloud.


clearly you never had to re-engineer some crappy code while keeping the current live system operating.

Docker nowadays, at least for me, is essential for encapsulating st


Extra scenarios:

* Not all our libraries are available in the same ecosystem. We need to mix JVM with C++ with a sprinkle of Python [via jep].

* We run a lot of small variations over the same code base. We'd rather not wait to re-deploy 99% of a fat binary for each run.


Another good point is that Docker's layered filesystem allows for caching that greatly benefits the use case you're describing.

We have an old .NET app we pushed to servers via zip files, and the sheer size of the zip ended up being a burden after many deployments. Docker's caching and "docker prune" make things a lot easier.
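Roughly, the layer ordering that makes the caching pay off - the image name and paths here are illustrative:

  FROM microsoft/dotnet:2.0-runtime
  WORKDIR /app
  # dependencies change rarely, so this layer stays cached across deploys
  COPY packages/ ./packages/
  # app code changes every release; only this small layer is re-pushed
  COPY bin/ ./bin/
  ENTRYPOINT ["dotnet", "bin/MyApp.dll"]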


This argument appears to be: when the application is crap, putting it in a container lets me deal with it without getting my hands dirty.

This is probably true, but it isn't good.


Why is it not good? It is in line with the robustness / Worse-is-Better ethos, and importantly, it lets you get on with delivering business value, which is what we're supposed to be doing. (At least in a business context. If you want to rewrite code to perfection on weekends, for fun, so do I, I am all for that, but that's almost never what my employer needs.)


"Why is it not good?"

Because it still allows the base problem of crap coding practice to proliferate.


and don't forget that crap code == insecure code.

but nobody has to worry about it. just start the container the last guy left. /sarcasm


> and don't forget that crap code == insecure code.

I don't think that equality is nearly as neat as you'd hope. Unless we're willing to call all code crap code, a lot of projects with well-respected code and development practices have had serious zero days, and we've also found that sandboxing software is highly effective. Insecure code in a secure, well-written sandbox can get you a secure deployment, and you probably want to have your "good" code in a well-written sandbox anyway.


zero days and privilege escalation are a luxury.

bad code is often plagued with injection/SQL injection, which a sandbox does absolutely NOTHING to protect you from. After all, it is all within the expected side effects of your code.


What does this have to do with Docker then?


because a flawed argument in defense of containers is that they are safer, since it's harder to escalate privileges.

my point is that this false sense of security makes it even easier for an attacker to use the readily available privileges in the badly coded application.


This seems to be the main point of the article to me. If your language's toolchain makes it easy to make sufficiently* self-contained 'fat binaries' (like Go or Java), then Docker doesn't add much. If it doesn't, like Ruby or Python, then Docker is very useful.

* 'sufficiently' is doing a lot of work here. Go binaries might still need libc; Java tars-of-jars (or fat jars, if you're confused) still need a JVM, which needs a handful of shared libraries. But those things are relatively stable, generally backward-compatible, and one single big thing (libc or JVM), so it's fairly easy to manage having multiple versions installed anyway. More than zero work? Yes. Less work than managing a Kubernetes or Docker installation? Also yes.


Spot on. Both fat binaries and containers very often translate into: "Nobody here cares about security. We deploy opaque blobs that will be unmaintained."


I admit: I find the argument for "fat binaries"[1] over containers compelling. There's just one problem...

...let's say I'm working on an existing code base that has been built in the old-style scripting paradigm using a scripting language like Ruby, PHP, or (god help us) node.js. Let's say we're well aware of the shortcomings and are looking for a migration path to move into the future.

I can just about see how we can package up all our existing code into docker containers, sprinkle some magic orchestration all over the top, and ship that.

I can also see, as per the article, an argument that we'd be much better off with fat binaries. But here's the thing: You can dockerise a PHP app. How am I meant to make a fat binary out of one? And if your answer is "rewrite your entire codebase in golang", then you clearly don't understand the question; we don't have the resources to do that, we wouldn't want to spend them on a big bang rewrite even if we did, and in any case, we don't really like golang.

All of which makes this article seem oddly academic to me. There's a lot of value to something that can be layered on top of your existing solution for those of us who aren't starting greenfield projects.

[1]: Also, while "fat binaries" is a great term, it's one that already exists and means something totally different.


Even more, the "solution" to create static binary scales linearly with the number of technologies you use. If you create a tool to create a static binary out of your wordpress blog, but you have also Python, node.js, ruby and Perl running, you still lost.

Containers offer a standard where you can deploy your applications in a language-independent way, so they avoid this scaling problem.

The same is true for most OS-level packages (like rpm and deb), except that they provide much less isolation.


> built in the old-style scripting paradigm using a scripting language like Ruby, PHP, or (god help us) node.js

I feel like I didn't get the memo... is the new-style we're all supposed to use microservices?


No. Containers give you value regardless of the language or amount of "services" you use.

Having an easy way to wrap and deploy, publish/download over HTTP, and use a simple API to run and keep running is new and valuable, hence all the popularity.


Microservices have won so hard that the fortune 500 are making it standard.


Any data about this?


But here's the thing: You can dockerise a PHP app. How am I meant to make a fat binary out of one?

For years Facebook (reportedly) did this with HPHPc. Not a very good idea these days, but it's certainly within the realm of possibility for large companies.


It seems you've misunderstood the problem.


I think the author's retort would be that PHP is one of these "outdated" languages like Python and Ruby. I tend to disagree, but overall enjoyed their opinion anyway. I came to criticize the article, but I think this comment already does that well: https://news.ycombinator.com/item?id=15578147


Yes, and we all know every shop in town has resources comparable to Facebook‘s...


You can compile a statically linked PHP interpreter. Probably a bit of messing around with configure and makefiles, but doable.
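Something along these lines - a hypothetical sketch, since the exact flags vary by PHP version and the extensions you need:

  ./configure --disable-all --enable-cli LDFLAGS="-static"
  make -j4
  file sapi/cli/php   # ideally reports "statically linked"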


More to the point, if this became the commonly-accepted sane way to do things, php would support it directly and it would be easy.


> Docker is a tool that allows you to continue to use tools that were perfect for the 1990s and early 2000s. You can create a web site using Ruby On Rails, and you’ll have tens of thousands of files, spread across hundreds of directories, and you’ll be dependent on various environmental variables. What ports are in use? What sockets do you use to talk to other apps? How does the application server talk to the web server? How could you possibly easily port this to a new server? That is where Docker comes in. It will create an artificial universe where your Rails app has everything it needs. And then you can hand around the Docker image in the same way a Golang programmer might hand around a fat binary.

Anyone who thinks that all modern web applications are made in Golang or on the JVM is in a pretty weird echo chamber. Server-side rendering with React or Angular Universal practically requires you to be running Javascript on the server, and while you can theoretically statically link V8 to Golang, it's much easier to use Node and have native code-sharing. And the reference implementation for GraphQL is in Javascript as well. Web application servers never stopped moving towards scripting languages (though arguably the microservices split from them did move towards compiled languages). So there will continue to be, for the foreseeable future, an industry-wide need for deployment of non-fat binaries with these types of dependencies. And in that case, Docker is significantly more sane than trying to synchronize all those files.


I don't see where he claims anything like that. The strongest statement seems to be "move to languages that support fat binaries" - which doesn't preclude a loftier aim "your favourite language should start supporting fat binaries".

(I'm replying to clarify - not sure if I agree. I'm a Python guy - I think everyone should switch to languages with significant white space ;-) )


Yep, trying to synchronize those files for an interpreted language is a lot of work. At GitLab we have 5 people maintaining our fat binary based on Omnibus. Docker is much simpler.


Fat binaries solve one of the problems which Docker containers solve. They don't solve the security isolation problem, or the resource control issue.

That being said, there are many people who are using Docker primarily to solve the DLL-hell problem of shared libraries. And containers don't really provide that great a solution as far as security is concerned; VMs will also be more secure.

And the statement that the Go language is the pioneer for fat binaries is, well, just wrong. People were using static binaries (with, in some cases, built-in data resources) to solve that problem for literally decades. MacOS, for one.

I also remember working with one of the eventual co-founders of Red Hat back when MIT was interested in a proprietary math analysis tool for SCO, but which we were planning on running on Linux. MIT had purchased a site license, which he was going to implement by checking the network address to see if it was 18.x.x.x. He was going to provide a statically linked SCO binary that had this check. The funny thing was that his development system was Linux (it was more developer friendly), and he was cross-compiling to create a statically-linked SCO binary --- which we were then going to run on Linux using SCO emulation. But that statically-linked binary was basically a "fat binary", which would work on any system (including any Linux distribution) that supported the set of system calls SCO used. (Ah, the early-90's, before Red Hat was founded and before IBM had "discovered" Linux, were simpler times....)


The "DLL hell" problem is not the only problem it solves, but it is the only problem that docker solves reasonably well.


That's true. The other dependencies of an application are handled as well, but abstractions are leaky.

Storage (and file system uids) and networking are all elements of the outer system that inevitably leak into the container. Using just Docker does not save you from having to deal with those, and IMHO, those are the really hard problems.


Just put your fat binaries in a Docker container for double the win!


The article makes sense if portability were the only reason to use containers. Fat binaries solve the portability problem in a way that may or may not make sense for a given project, but fat binaries don't address any of the other issues that container technologies provide solutions for (sandboxing, resource management, etc.). Either the author isn't aware of these other issues, or is purposefully sweeping them under the rug, because arguably they are more important than simple binary compatibility. I'd be pretty down on containers too if I thought they only existed in order to distribute portable binaries.


Precisely. As a real-world example, I have a repository that is basically a large collection of code modules that runs in three different runtime environments. One of these runtimes is a Hadoop cluster, where we can happily shadowJar all of our massive dependencies (like Spark) into a giant multi-hundred-MB jar file and have no issues. Another is a client desktop environment that is shipped to thousands of desktop users with each version release. Most of the code is reused between the two runtimes, so it makes no sense to separate them. Therefore we build the project without the fat dependencies in the binary so we can ship the slim version to the client runtime without forcing a huge download every release.


"A fat binary (or multiarchitecture binary) is a computer executable program which has been expanded (or "fattened") with code native to multiple instruction sets which can consequently be run on multiple processor types. This results in a file larger than a normal one-architecture binary file, thus the name." [1]

What does having an x86 and x64 binary in the same executable have to do with dependency management? You can have a fat binary that is dynamically linked. If you have an x64 OS then the 64 bits run, but they still call OS libraries if the app is dynamically linked.

Static linking [2] is what compiles all the dependencies into one big executable, but every compiled language has that. It is not exclusive to Go.

[1] - https://en.wikipedia.org/wiki/Fat_binary

[2] - https://en.wikipedia.org/wiki/Static_library


I assume you knew this and were just making a point, but: OP is using the term "fat binary" in a totally different and incompatible way to how your wikipedia link defines it.

> every compiled language has that. It is not exclusive to Go.

I don't think the article claimed it was. But it is a lot harder with C than Golang for various technical reasons, and it's much, much harder still with most scripting languages.


It is super easy to do static binaries with C; I was already doing it in 1992, the first time I actually used C in my life.

It is only hard if one conflates glibc, a specific implementation of the ANSI C standard library, with every implementation of it.


Hahaha, that is soooo wrong. You might think you make a static binary with C by passing -static, but trust me, you do not. I believe it is still actually impossible to link with glibc statically. It's only been possible to do a truly static binary on Linux since musl was written, and that was not in 1992. In the 2000s there was an attempt to make true static binaries possible on Linux (I think it was called Autopackage), but they eventually gave up due to the effort required.

The story on Windows is much better of course, but even there making a proper dependency-free (as much as possible) binary is much easier with Go than any other language. It's possible in Go because they don't depend on the C standard library and wrote their own linker.
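The classic symptom, if memory serves - take any C program that resolves hostnames (dns_lookup.c here is a made-up example) and link it statically against glibc:

  $ gcc -static dns_lookup.c -o dns_lookup
  warning: Using 'getaddrinfo' in statically linked applications
  requires at runtime the shared libraries from the glibc version
  used for linking

So the "static" binary still silently depends on the host's glibc NSS modules.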


It is not impossible to link glibc statically, and in older versions of libc this did occur. However, the Linux community at the time decided that the tradeoff of disallowing the system NSS to be modular was too great -- Go developers disregard this and don't believe anyone does anything other than the defaults.

Even still, it's possible to compile glibc such that those NSS modules are statically linkable. See here for further information: https://stackoverflow.com/questions/3430400/linux-static-lin...

However, in 1992 it was entirely possible to statically link an application to libc4 and have it have no run-time dependencies other than the kernel ABI.


Go doesn't disregard anything. It tries to parse the NSS configuration; if there is no Go implementation available for the configuration it parsed, it falls back to glibc.


Only if you use "cgo", otherwise it doesn't have access to the run-time linker.


No, by default.


It is now with Musl libc


Not only was it possible to do static linking back in those days, it was easy and it was common.

The glibc nss breakage only came later. I'm still unclear why it hasn't been fixed to once again allow easy static linking after all these years.


Pretty sure pjmlp knows how hard life is with glibc. Possibly even knows a bit about other systems that aren't linux.


In 1992, we were using Xenix, Aix, HP-UX, Tru64, SGI, Solaris, DG/UX, MS-DOS, AmigaOS, Atari, OS/2....

All of them allowed for static linking, and only a few supported dynamic linking.

Regarding Linux, the very first version to properly support dynamic linking was kernel 1.0.9, which introduced ELF support. Slackware 2.0 was one of the first distributions supporting it.

Before that, binutils had a set of patching tools to make dynamic loading work in the a.out format.

GNU/Linux is just one OS among many supported by C compilers.


This works for one Go or Rust executable, but any real system is a mix of technologies, e.g. a Go service talking to a Java application server proxied by a C webserver that was configured with a bash script.

Once you have a mix of different technologies it's easier to just say "everything in Docker".

The ship-container analogy on the old Docker website was a great illustration of this. It's a shame the new website is not so clear about the benefits and is full of empty phrases such as "accelerating innovation".


Does it even work well for Go or Rust executables? I seem to recall that Docker itself was written in Golang because "fat binaries1111", but now Docker is one of the hardest-to-install applications that I know of, and it certainly isn't simply one executable file to be copied onto the system.

Managing resource files in fat binaries is really problematic.


I've never had trouble installing Docker anywhere, whether from package managers or installers. Where have you, and what kind of trouble? If I might run into something similar down the road, better I should know about it now.


It used to be that docker could be installed literally by downloading the docker binary, then it was a single curl command. Now the documentation looks like this: https://www.docker.com/get-docker

You press "get docker community image". Then you find your distro. Go to "download from the docker store", discover that there is no dowload link https://store.docker.com/editions/community/docker-ce-server... then you click on the long form link to the docs. Which gets you here: https://docs.docker.com/engine/installation/linux/docker-ce/...

And then that is not simple at all.

So much for fat binaries making things easy. Good-ness grief.


If you simply want the latest stable docker version, try:

  curl -o /tmp/get-docker.sh https://get.docker.com && bash /tmp/get-docker.sh
or even just

  curl https://get.docker.com | bash
You can also use this script to update to the latest version, if you previously used it to install.


Adding a package repository and installing some packages doesn't seem that hard, and the major distros - and at least some minor ones; I see mention of Mint there - are supported. I've never had to do more than that, and it hasn't seemed especially objectionable; I have to confess I'm still not seeing where the additional burden arises.


This article failed to mention the most important thing: an actual advantage of fat binaries. The closest he comes is saying that he feels the Docker way is "old"... as compared with fat binaries, which is how desktop applications have been shipped for decades. Hell, size, the only clear would-be advantage, is only mentioned in a throwaway sentence.

> Yes, the network can be very powerful, but trying to use it for everything is a royal pain.

He says this after saying docker is unsuited for the massive scale of microservices... Where everything communicates through the network anyways.

Also I have a feeling he doesn't know the distinction between kubernetes and docker. They don't actually compete with each other, I don't even understand the comparison. Kubernetes uses docker.

All in all, I'm not convinced.

PS: At least in python, there are ways to create fat binaries anyway. And I have no doubt that this guy is very good at operations, as he seems to have not found problems with go's dependency management and even implicitly compares it favorably with those of the scripting languages.


"I have a feeling he doesn't know the distinction between kubernetes and docker. They don't actually compete with each other, I don't even understand the comparison. Kubernetes uses docker."

Docker (the company) is itself responsible for this confusion. They've redefined what "docker" means a whole bunch of times.


I had this discussion [0] with their CTO a while back and, suffice it to say, there's a profound lack of understanding of what early adopters like me had to do to lobby for the use of their products in enterprises.

[0] https://news.ycombinator.com/item?id=13775732


> They don't actually compete with each other, I don't even understand the comparison. Kubernetes uses docker.

Kubernetes is scheduling and orchestration. It does compete with Docker, the company - it's a direct competitor of Swarm.

Docker, the container technology, is the de facto standard container at this time. That may not always be the case. Kubernetes is not Docker-specific, thus his mentioning of the open container initiative.

They are absolutely competitors.


Well, the author is definitely missing the point.

In scenario 1, I put a Go binary onto a server and make a systemd unit file. In scenario 2, I put a Go binary in a docker container and launch it on a Kubernetes cluster. Scenario 2 is wasting a ton more cycles and RAM, but other than that, what's the difference?

* With containers I can put a Python app right alongside my binary but with total isolation. No need to futz around with chroots or making a static build of Python and embedding my scripts into it.

* What if I need libc, for example to link to SQLite or something? Suddenly my Go binary requires libc. The isolation is broken!

* Systemd can use Cgroups to limit RAM or CPU, but schedulers like Kubernetes can also use your CPU and RAM limits to schedule containers. As far as I know, there's no equivalent tooling for doing this with fat binaries.

* Without Docker, I have to manually manage each container port. Can't run two apps with pprof servers on the same box. With Docker, I only have to care about public ports, and can port-forward into debug ports manually, and with Kubernetes I never have to care about port conflicts.

* Kubernetes, with enough hackery to work around bugs, can actually do seamless deployments where it checks your internal health endpoint or command before making the container available. As far as I know, the alternative to this is writing crappy scripts that try to do this without declarative logic to back it.

You could go on and on. It doesn't have to be Docker. Could use rkt as well, or really any container engine. The point is that the container engine + scheduler pattern is immensely useful, and if it weren't, Google would already be on the next thing.

If you're just setting up a single server with a single program, fine. Drop a binary on it and call it a day. But when you want to implement CI/CD and schedule applications across multiple servers and do load balancing and so on, you might feel like you're reinventing the wheel a bit considering those problems were all already solved by Kubernetes.


> but other than that, what's the difference?

The need to configure, manage, monitor and maintain a whole extra set of programs. Also, systemd can just as easily start the binary in a cgroup, eliminating most of the following concerns.
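For instance, a minimal sketch of a unit with cgroup-backed limits - names and numbers are illustrative:

  [Unit]
  Description=my fat-binary service

  [Service]
  ExecStart=/usr/local/bin/myapp
  # cgroup-backed resource limits
  MemoryMax=512M
  CPUQuota=50%
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target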

> The isolation is broken

Worth noting that behind the scenes, Linux is using the same in-memory versions of libc, even for containers. That isolation you speak of already doesn't exist.

> [...] scheduling [...]

Ansible, Chef, Saltstack, etc.

> I have to manually manage each container port

You have 65,000 ports available to you. I don't think this is as big an issue as made out to be - people are lazy, which is the only reason there are so many conflicts over 8000 and 8443.

> Kubernetes, with enough hackery to work around bugs

Given the context that I am someone currently attempting that hackery - this shit just kinda works, some of the time. And when it fails, it's a morass of conflicting documentation, and responses of "well, you're not using the default distribution for your environment" (no shit, it's no longer supported) and "works on my single node minikube cluster". Give me ansible/chef/saltstack any day of the week. Hell, I'll even take cfengine at this point.

Kubernetes was built for Google's use case, in Google's environment. Attempting to use it with other restrictions and requirements is a nightmare.


"Given the context that I am someone currently attempting that hackery - this shit just kinda works, some of the time. And when it fails, it's a morass of conflicting documentation, and responses of "well, you're not using the default distribution for your environment" (no shit, it's no longer supported) and "works on my single node minikube cluster"."

You could make pretty much the exact same set of complaints against all those configuration management tools (ansible/chef/salt/cfengine/puppet). They're all a huge mess of spaghetti and hackery that works when they work and can be a nightmare otherwise.

All these tools need at least a couple of decades more to mature.


> Kubernetes was built for Google's use case, in Google's environment. Attempting to use it with other restrictions and requirements is a nightmare.

That's a tremendous overstatement. The first part is just wrong, as was pointed out in earlier responses. The second part just smacks of your having been soured by a poor experience. We started with kubernetes in production at very small scale, just a few supporting services to begin with since there was a lot to learn. Currently we're running five or six of our core production apps on the GKE version of the platform and the experience has been very positive.


>The need to configure, manage, monitor and maintain a whole extra set of programs. Also, systemd can just as easily start the binary in a cgroup, eliminating most of the following concerns.

In the former case, I need to start a (for example) Ubuntu image on my machine or VM. In the latter case, I need to start a Kubernetes image. It's not as different as you think. Kops can do the heavy lifting and even generate terraform manifests, which will put you far ahead of the manual solution in terms of controlling your infra.

Also, cgroups alone aren't enough, and are only a small part of Docker's isolation, although I did actually acknowledge that systemd supports them...

>Worth noting that behind the scenes, Linux is using the same in-memory versions of libc, even for containers. That isolation you speak of already doesn't exist.

No it doesn't. Assuming ELF binaries, the ELF binfmt will call into the ld-linux shared object when you hit a dynamic binary, which then does the runtime linking that links libc dynamically. You can confirm that, inside a container, it is the container's ld-linux that gets called. Compare how printf acts inside of Alpine containers versus on its Debian host. Or try hosting a CGO_ENABLED=1 Go binary in a container with no libc - it won't load, because the kernel won't find the linker. If you put it in an Alpine container and it was built on an Ubuntu host, you might find it cryptically fails to load!
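You can check which runtime linker a binary asks for yourself - the paths below are the usual defaults:

  $ readelf -l ./myapp | grep interpreter
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]

An Alpine container ships /lib/ld-musl-x86_64.so.1 instead, so a glibc-linked binary finds no linker there and fails to start.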

>Ansible, Chef, Saltstack, etc.

This is not scheduling. That is automation, configuration. SaltStack does not calculate how much CPU and RAM is available on a box to balance applications that are running. Before I used Kubernetes, I used SaltStack to manage fat Go binaries and even Docker containers for a moment, so I have a pretty good idea of what the differences are.

>You have 65,000 ports available to you. I don't think this is as big an issue as made out to be - people are lazy, which is the only reason there are so many conflicts over 8000 and 8443.

Nothing to do with laziness. Why would I want to have to pick a new pprof port every single time then remember what app has which? Why would I want to then manage a firewall for all of that? If you're doing this all by hand eventually you're going to screw up. I'd rather have isolated networking deny-by-default.

>Given the context that I am someone currently attempting that hackery - this shit just kinda works, some of the time. And when it fails, it's a morass of conflicting documentation, and responses of "well, you're not using the default distribution for your environment" (no shit, it's no longer supported) and "works on my single node minikube cluster". Give me ansible/chef/saltstack any day of the week. Hell, I'll even take cfengine at this point.

That is an absolutely inaccurate characterization of the very professional and helpful Kubernetes community. I've only had a few run-ins with them, but the experiences were absolutely completely the opposite and I've literally never heard "works on my single node minikube cluster" and I doubt you have either.

>Kubernetes was built for Google's use case, in Google's environment. Attempting to use it with other restrictions and requirements is a nightmare.

And finally, no it wasn't. It was built by the Google Cloud Platform team to help Google Cloud Platform users better utilize their VMs. There's a pretty detailed history that's all publicly available. As far as we know, Google is yet to use Kubernetes itself, other than their spin-off using it for Pokemon Go. Google internally uses some progression of Borg.


We moved our infrastructure from scheduling containers with configuration management to scheduling containers with Nomad... and it is so much better.

We no longer have the concern of defining static ports for each app, configuring App A with the port for App B, autoscaling when traffic increases 3x as someone attempts an L7 DDoS... I could go on!


> schedulers like Kubernetes can also use your CPU and RAM limits to schedule containers. As far as I know, there's no equivalent tooling for doing this with fat binaries.

I much prefer the Docker and Kubernetes world, but you could actually do this (scheduling and bin-packing fat binaries) with Nomad's exec driver:

https://www.nomadproject.io/docs/drivers/exec.html
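A hypothetical job definition, just to show the shape of it:

  job "myapp" {
    datacenters = ["dc1"]
    group "app" {
      task "server" {
        driver = "exec"
        config {
          command = "/usr/local/bin/myapp"
        }
        # the scheduler bin-packs hosts using these declared limits
        resources {
          cpu    = 500  # MHz
          memory = 256  # MB
        }
      }
    }
  }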


"Systemd can use Cgroups to limit RAM or CPU, but schedulers like Kubernetes can also use your CPU and RAM limits to schedule containers. As far as I know, there's no equivalent tooling for doing this with fat binaries."

They're called cgroups. You don't need systemd, Docker or Kubernetes to use them.


Yeah, you also don't need iptables. You could just write a program to talk to netfilter directly.


I called them Cgroups. If you're being pedantic about the capitalization, take that up with RedHat as well.


> The Go language is the pioneer for fat binaries.

I had to laugh on this one.


Especially since it's nowhere near where Tcl was with Starpacks nearly a decade ago.


Go, bravely exploring uncharted territories.


+1. Years ago... financial industry... all fat binaries or tarballs (you literally sent the entire code repository over!)


Golang didn't pioneer 'fat' binaries as the article claims. It may have made them popular again, but we have had compiled, self-contained, dependency-free executables for decades. If people are not aware of that, perhaps it's because scripting languages (or languages that use a VM) have come to so thoroughly dominate programming language discussions?


I guess so.

Usually everyone who somehow thinks Go's compilation model is innovative has never used anything beyond scripting languages, and possibly C in addition to them.

MS-DOS used to call them "XCopy installs", NeXTSTEP had its fat binaries with a directory structure, Windows and MacOS (pre OS X) can store all dependencies inside the .exe file, and so on.

In any case, fat binaries don't cover dependencies on the file system, other running servers, or sandboxing.


> MacOS(pre OS X) can store all dependencies inside the .exe file and so on.

On macOS / OS X, it’s actually a .app folder


Yes, but on Mac OS pre-OS X you could use resource forks for that.


> It seems sad that so much effort should be made to keep old technologies going,

I strongly disagree with this part. To make progress as a technological civilization, without constantly wasting time reinventing things, we need to keep old technologies working. So, if Docker keeps that Rails app from 2007 running, that's great. And maybe we should still develop new apps in Rails, Django, PHP, and the like. It's good to use mature platforms and tools, even if they're not fashionable.

That word "fashionable" brings me to something that really rubs me the wrong way about this piece, and our field in general. Can we stop being so fashion-driven? It's tempting to conflate technology with pop culture, to assume that anything developed during the reign of grunge music, for example, must not be good now. But good technology isn't like popular music; something that was a good idea and well executed in 1993 is probably still good today.

Besides all that, as others have explained, platforms like Kubernetes have other advantages over just dropping a self-contained binary on a Linux server.


It's INCREDIBLY naive of the author to be slamming Docker usage because they think everything should be a fat binary - and what are they even calling a fat binary? How do I make my Node app a fat binary? Would they be happier if we wrapped up Docker + images in an executable and called that a fat binary?

Also, I think the author is simply confused about what docker itself is, complaining about its lack of network orchestration - and comparing it directly to kubernetes!


This is not intending to diminish your point because I agree with you, but you can in fact make a fat binary with Node: https://github.com/nexe/nexe

I have used it when I cannot guarantee Node.js is installed on the machine I am deploying the app to. It works pretty well, but the C compilation of all the Node bindings took quite a while when I was using it (about 2 years ago). I think a build of a small project took around 10 minutes at the time.
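From memory, the invocation was roughly this - flags may have changed between nexe versions:

  nexe -i index.js -o myapp && ./myapp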


This would copy the runtime for every app you need. Pretty sure if you used Docker this wouldn't be the case, but I'm not 100% sure. Obviously space isn't the biggest of deals, just mentioning it.


it's really not that naive; for the longest while that's how Google ran all its stuff through Borg.

This left the resource management to the scheduler, where it belongs.


Google ran all their stuff as fat binaries through Borg? Why would they change to containers with kubernetes if the borg approach is better?


Google runs things internally as fat binaries inside cgroups, which is a Linux kernel feature that implements resource limits like "only give this thing 2GB of memory"

Docker combines cgroups with a layered filesystem and a tarfile to copy everything around in. It's the total package that is meaningful; the article argues against just one aspect and thus fails to cover the whole picture.
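With the raw cgroup filesystem that limit looks roughly like this (cgroup v1 layout; paths vary by distro):

  mkdir /sys/fs/cgroup/memory/mything
  echo $((2*1024*1024*1024)) > /sys/fs/cgroup/memory/mything/memory.limit_in_bytes
  echo $$ > /sys/fs/cgroup/memory/mything/tasks   # move this shell into the group
  ./mything                                       # children inherit the limit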


They aren't changing to Kubernetes internally. It's more of an open-source reimplementation (and modernisation) of Borg, because they don't think it's practical to open-source Borg itself.


because in Borg land, everything apart from the binary is controlled by the scheduler.

why is that good? because it means that the thing that knows about resource utilisation has control of what resources it gives out. quiet patch on the farm? give the low-priority jobs more memory/cpu.

You also have to remember that Borg is part of the secret sauce that makes Google tick. Having a single interface to a smart, efficient, fast and scalable scheduler is invaluable at scale.

Kubernetes is nice, but it's nowhere near as advanced or fully featured as slurm/gridengine/tractor etc. It's also not that efficient or fast. (I worked in VFX with various schedulers; some were able to dispatch 100k execution instructions a second.)



ooo this is smashing. Adding to the list.


Not all developer tools and languages let you compile to fat binaries. Also I don't think that they were pioneered by Go. Java .jar files have been around long before Go even existed. Jar files are probably even better in fact because they can run on any operating system without any sort of virtualization.

Docker standardizes containerization. Kubernetes relies on a standardized container platform like Docker in order to be able to automate the running of different systems in a consistent and repeatable way.

Also, from the point of view of someone who has to provide support for software for multiple customers, it helps to know that the Linux environment between all customers is consistent... That way you don't get strange issues that only happen to one customer and not the others because of some slight OS level differences.


> Jar files are probably even better in fact because they can run on any operating system without any sort of virtualization.

What did you think the V in JVM stood for?


Harsh tone for a comment that’s confusing OS virtualization with virtual ISA and its runtime.


I'm aware of the difference, I just don't think it matters here - in either case there's an additional level of indirection when compared to Golang fat binaries.


Regarding Jar files, wasn't it JVM that coined the term "compile once, debug everywhere"?


With fat binaries you are looking to isolate your application from the environment -- by providing the dependencies (library code) along with your base application code. This way you don't have to make sure the user of your application provides the correct libraries in the correct versions.

A Docker container is an extension of that idea. If you can distribute some of the libraries, why not distribute the entire execution environment? Why not isolate your application from the external environment even further?

The more you control the environment the less work you have to do to make sure the application will work.

My point of view: there are two types of applications, with diametrically different development processes:

1. applications where you have to invest a lot of effort into making it resistant to the environment -- it has to work regardless of the differences in user environment. Think for example in terms of a PC game that must run on different GPUs, with different driver versions, on different OS versions, with different applications installed.

2. applications where it is enough to show that there is a set of circumstances under which the application works correctly -- think in terms of a typical enterprise application where devs place it on the server and the environment is then religiously preserved so as not to risk breaking the application.

Developing type 2 applications is much, much, much less effort.

By distributing type 1 applications along with their environment, we basically reduce the cost of developing them, because now it is enough to develop them to the standard of type 2 applications.

For example: you may create a simple bash script to do something. If you distribute it as just a script file, you need to make the script work with every bash on every mainstream Linux distribution.

If you distribute the script as a docker image it is enough for you to make that script work on that particular image (for example Ubuntu x.x) and you don't need to worry about other possibilities.
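Concretely, a hypothetical wrapper for such a script:

  FROM ubuntu:16.04
  # the script now only ever runs against this one known environment
  COPY myscript.sh /usr/local/bin/myscript.sh
  RUN chmod +x /usr/local/bin/myscript.sh
  ENTRYPOINT ["/usr/local/bin/myscript.sh"]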

This is the essence and value of distributing applications as Docker images.


Just moves the problem rather than solving it. You now have a Go dependency management problem and all the fun that causes.

It won't be long before we see Go library distributions with versions in packages and security fixes rolled out to them...

What we've missed over the years is an agreed improvement to the Unix process abstraction that the kernel manages. IPC was never solved. So now we're trying to do IPC over HTTP with JSON.


Equating docker containers to fat binaries is naive and shows that you don't really understand what docker does. At my workplace, we use containers instead of VMs or emulators. The entire system of a virtual host runs inside a container (multiple apps, multiple namespaces, multiple network interfaces). Please tell me how to do that as easy as "docker run <some options>".


It's a bit idealistic to think people can "just" use fat binaries; sure, Go mostly handles that (not completely; by default most nontrivial Go apps are dynamic, which you can fix but not that easily) but that's not exactly any comfort if I've got 30,000 lines of Python which I'm not keen on rewriting. Technologies like pex can help but in practice require some work, especially with a codebase that hasn't been written with it in mind or when dealing with third-party libraries that aren't expecting such an environment - and that still doesn't solve what you're doing with the interpreter itself.
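For what it's worth, the happy path with pex looks something like this - the console-script name is made up:

  # bundle the project and its requirements into one executable zip
  pex . -r requirements.txt -c myapp -o myapp.pex
  ./myapp.pex   # still needs a system Python interpreter, though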

I'm far from a big fan of Docker, but can definitely see the appeal for providing a solution to those problems, even if some see it as unprincipled. In our case we'd already put a lot of effort into building mostly self-contained binaries and Docker is still useful (admittedly mostly just as a component of Kubernetes; we'd cheerfully replace it with rkt or something if there was a good option).


We bundle our apps in binaries and stick them in docker as well. This lets our devops team take advantage of unified deployment strategies and configuration management. For example app A is a node app and app B is a fat Scala Play binary. They can both use same docker deployment and environment configuration strategies through Docker as well as make pluggable items for Kubernetes.

Why take an all-or-nothing approach when you could pick the best of both approaches and use them in tandem? :)


A typical service has many types of dependencies including filesystem layout, the presence of specific executables at specific versions, rpm/deb packages at specific versions. Different services make different assumptions and evolve at incompatible paces. Docker can isolate an assumption about FS layout as well as an assumption that a deb is at a specific version.

This article is confusing in its focus on docker. It really proposes a much more restricting way to build apps, in which all assumptions except linking-level dependencies should be portable. Its answer to filesystem isolation: don't rely on the filesystem being a certain way. Or, rely on it being a certain way and push the complexity into your deployment layer.


I developed stowage.org to get a more binary-like experience with Docker. What's great is being able to use containers as a generic, any-language fat-binary compiler.

However, there is a bunch more stuff in Docker (or equivalent runtimes) that this article doesn't touch on. It's similar in some ways, for some use cases. It could be interesting to see what a fat binary orchestration tool would look like, but I would bet you end up reinventing 99% of what Docker already does.


Because choosing Docker requires boiling fewer oceans, and whether those oceans should or should not be boiled has no bearing on whether I can afford to boil them right now.


Fat binaries don’t solve the same problems as Docker does (there is some overlap though!).

The author is right about k8s. And what do you need to run stuff in k8s? Containers. And what is a pretty good container format? Oh yeah, Docker.

Although other containers exist, let’s be honest, people running k8s are almost always running Docker containers in them. And it all works amazingly well.

How do I run fat binaries in k8s? In a production-ready and battle-tested way, please.


Even if you build fat binaries, it's helpful for them to run in a container for a bunch of reasons:

  0) security
  1) container overhead <<<< vm overhead
  2) better bin-packing because of 1
  3) enforce resource constraints (I know what your app requires; see the sketch below)
  4) enforce 12-factor app patterns, build apps that don't rely on local state
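
As a sketch of points 0, 3, and 4 with plain docker run flags (the image name and limits are hypothetical):

  # cap memory and CPU, and make the container filesystem read-only
  docker run --memory=512m --cpus=1.5 --read-only myapp:1.0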


Where does he say you should run fat binaries in a VM?


Fat binaries only solve the dependency problem. Containers also deal with isolating environment variables and the filesystem.

Each program doesn't have to worry about what else is running on the machine. It also helps you deal with the differences between, say, Ubuntu and Red Hat - not all *nix OSes have the same filesystem layout.


Containers are overkill if you're just trying to isolate environment variables and filesystems. I mean really, if that was your problem and you came up with containers as the solution, I'd say you over-engineered it. Environment variables -- trivial -- set them in the wrapper script used to launch the binary and you're done. Filesystem -- use separate *nix accounts for your binaries and set appropriate permissions; for additional security use tools like SELinux if needed. Somehow, somewhere along the line, it seems like there's this huge echo chamber of 'containers for all things' and we have thrown out everything we ever knew about basic *nix.
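
A sketch of that approach, assuming util-linux setpriv and a dedicated myapp account (all names, ports, and paths are hypothetical):

  #!/bin/sh
  # launcher: pin the app's environment, then drop to an unprivileged account
  export MYAPP_PORT=8080
  export MYAPP_HOME=/srv/myapp
  exec setpriv --reuid=myapp --regid=myapp --init-groups "$MYAPP_HOME/bin/myapp"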

Contrarian views like the one in the article are healthy!


I really don't see how this is an argument - Docker containers are like zip-files for deploying these very same app binaries, whether fat or lean - except containers are very easy to define, are able to run anything inside, can easily be uploaded/downloaded over HTTP, and abstract away all the systemd/init/background service nonsense into a clean API.
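
e.g. (the registry host and tag are hypothetical):

  docker build -t registry.example.com/myapp:1.0 .
  docker push registry.example.com/myapp:1.0   # upload
  docker pull registry.example.com/myapp:1.0   # download anywhere else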

That alone is worth it, and it's what makes Kubernetes so powerful, by taking that basic runtime abstraction and further lifting it up beyond the VMs/nodes themselves.


> and abstract away all the systemd/init/background service nonsense into a clean API.

hm?! what?!

> That alone is worth it, and it's what makes Kubernetes so powerful, by taking that basic runtime abstraction and further lifting it up beyond the VMs/nodes themselves.

how is a damn yaml file of over 40 lines simpler than a 7-line systemd unit? (I didn't even count the setup of Kubernetes, since you probably never did that.)
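
For reference, the kind of unit the commenter presumably has in mind, give or take a line (the name and path are hypothetical):

  [Unit]
  Description=myapp

  [Service]
  ExecStart=/usr/local/bin/myapp
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target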

also, nothing you said makes sense, since you're comparing things that aren't comparable...

edit: yes, both have pros and cons, but your points still make no sense


Is that really the argument? Because "40 lines" is more than "7 lines"?

Both are simple to write, but K8S services are, for me, much easier to define. They're also far more configurable and usable across machines. Write it once and you're done; add in the ability to use service-oriented storage and networking policies, and it provides far greater power and flexibility than ever before.

My point is that there is no point comparing these things since a docker container can hold all the fat binaries you want inside... so it's not either/or at all.


The post starts from the assumption we should all use microservices because web scale. I'm curious, how many people here are actually using that "new style", and enjoying it?


It's sad when people have a myopic view of the use of technology.

Docker is a way to distribute an executable environment, very similar to distributing an application platform.

A "fat binary" is literally just a single monolithic executable application. It isn't an environment. It isn't a platform. It's ridgid and difficult to work with and, as the name implies, BIG.

There are many cases where a "fat binary" simply can not, does not, will not work. It's a very limiting way to distribute and run an application. Suggesting that a "fat binary" is somehow superior to a Docker environment is not only missing the forest for the trees, it's forgetting that there's a forest. The binary is just one tree.

Docker isn't more complex than what we had before. People just haven't been trained on the different patterns of deploying environments and platforms, in addition to applications. We used to build Docker-esque application platforms all the frigging time before Linux containers even existed, and they still exist without Docker. If you have a problem with Docker, that's fine, but it doesn't mean you have to box yourself into a totally different restrictive model.

People in this industry need to wake up and admit there are no simple solutions to complex problems. Don't believe the hype.


Is pyinstaller a sufficient solution for creating fat binaries in python? I'm not interested in using go, and I also have C++ binary dependencies that need to be deployed but which come to me already built.

EDIT: "Or rather, if the problem is resource and dependency and configuration management, we should solve the problem by moving to those languages that support fat binaries."

Oh, so this article is really just an advertisement for Go. That sure spoils it.


Containers are not only a solution for dependencies. They're also a protection boundary.


It's just a process with a fancy chroot. Don't believe all the docker hype. Sensible admins have been doing something similar for years. We just didn't have a massive PR budget.


> It's just a process with a fancy chroot. Don't believe all the docker hype. Sensible admins have been doing something similar for years.

You can easily break out of a chroot jail: http://pentestmonkey.net/blog/chroot-breakout-perl

That's not possible in Docker (okay, it is if you're running a container in privileged mode, but that's another can of worms and you shouldn't do it unless absolutely necessary). Also, Docker gives you network isolation and, especially, RAM/CPU usage limits, which are a real headache to set up with chroot.


> That's not possible in Docker

Believe this at your own peril.


At least right now, I do not know of a way to break out of a Docker container and the last bug I know of was fixed in 2014 (https://blog.docker.com/2014/06/docker-container-breakout-pr...).

The only ways I know of that can be used to jailbreak are, as documented in https://security.stackexchange.com/a/153016: kernel vulns (which are not inherent to Docker and can hit you in any kind of Linux environment as soon as an attacker achieves RCE in a hosted application), running a container with --privileged, and being careless with bind mounts - either by bind-mounting /dev, /proc and friends, which you shouldn't do in any case unless you're aware of the pitfalls, or by bind-mounting the Docker socket file.

The latter is something that I see far too often (especially with docker-in-docker setups for Jenkins slaves), but if you're avoiding this, you're safe from that vector.
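
The pattern in question looks like this - a sketch of what not to do (the image name is hypothetical):

  # risky: the container gains full control of the host's Docker daemon
  docker run -v /var/run/docker.sock:/var/run/docker.sock my-jenkins-slave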


Docker is comparable to a chroot in this regard. You cannot break out of a chroot either, unless you are careless with superuser privileges or bind mounts. Which (unsurprisingly) people are, and that's why it's generally not recommended to use chroot alone for security. Not because chroot is somehow insecure or doesn't work as intended.

Docker for the longest time didn't even try to constrain the root user. Later versions do, but the results will vary depending on your configuration. That's why the Docker documentation states it should not be depended on for security. It is wise to heed that recommendation.


> It's just a process with a fancy chroot.

and also namespaces for the filesystem, network, etc.


chroot only protects the filesystem; sandboxes are much more than that.


BSD Jails and Solaris Zones then.

Everything old is new again, this time with an orchestration layer.


Some would say the orchestration layer is quite important


And people were running cfengine inside FreeBSD jails and Linux vservers in the early 2000s already.


Wow, and I thought I was the only one who remembers Linux vservers. Whenever I tell people I was using them 15 years ago, people look at me like I'm crazy. I was beginning to think I dreamed it all.

Now here's Docker getting mountains of press for seemingly reinventing the wheel. But, to be fair, cgroups didn't exist back in the days of Linux vservers, so there was no easy way to constrain per-vserver resource use as there is now, the isolation wasn't as complete as with Docker, and there was no use of layered filesystems so you couldn't build up a Linux vserver layer by layer as you can a Docker container.

It was still very useful back then, though. Way ahead of its time, and it's a shame most people never knew it even ever existed.


Sure, but at some point it goes from niche to mainstream and that matters. Plus the docker registry model is a big deal that we didn't have before. Having a universally accessible way to just download a container and run it is arguably the secret to success this time around.


That I agree with, but then the best examples are mainframe virtualization mechanisms. :)

In the UNIX world, older examples than the ones you provided would be HP-UX vaults and Tru64.


> I don’t know if Kubernetes is better at orchestrating network resources than a system built around etcd or consul

Isn't Kubernetes built around etcd?

I really don't get this blog post; it reads like "I don't have this problem, so nobody else does."


I can create a Docker image for just about any app written in any language that runs on Linux with a few lines of Dockerfile, whereas you need fat binary support from each programming language compiler/interpreter/linker/whatever.

Correct?
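
Something like this, presumably - a sketch for an interpreted app (the file and image names are hypothetical):

  FROM python:3.6
  COPY app.py /app.py
  CMD ["python", "/app.py"]

The equivalent for, say, Ruby or PHP only changes the base image, whereas fat binaries need per-language toolchain support.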


This echoes our experience, which is that deploying Go binaries is easier and more reliable than using Docker, but that Docker is useful for legacy Python and other things that are difficult to turn into FRUs (field-replaceable units).


What I've been wanting to do lately is find some way of packing up conventional apps which I would otherwise dockerize.

It feels like the Linux Kernel Library project would be a way to do this - redirect all the filesystem calls into mapped portions of the binary to abstract over the filesystem, which I think would get you 90% of the way there in practice.


But why did a class of developers use inferior open source languages like Ruby, Python, and PHP over the enterprise fat-binary Java and .NET stacks? If you didn't get that shift, you won't get this one either.

Docker helped alleviate fears of vendor lock-in with cloud providers.

Wait a while longer and you will see unikernel applications as the new thin binaries.


I use Docker mainly in situations where the required dependencies (Java, GTK, etc.) conflict with the versions on my system. In those cases, running the service in a container is required unless I want to upgrade the operating system, and I'm on CentOS 7 for a reason (stability and security).


You could also use guix or nix for this.


It's hard to say what Docker means now since the company has lots of products called Docker, doing wildly different things, but for Mac and Windows users it's a nice virtual machine frontend for running/developing containerized Linux apps.


I like Go a lot, but it doesn't always build fat binaries.

  $ go build xyz.go
  $ file xyz
  xyz: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, not stripped


Try: CGO_ENABLED=0 go build xyz.go
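
Setting CGO_ENABLED=0 disables cgo, so packages such as net fall back to their pure-Go implementations and the result is statically linked. A quick check (the file output below is from one Linux system and will vary):

  $ CGO_ENABLED=0 go build xyz.go
  $ file xyz
  xyz: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped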


Docker is not only used to solve dependency issues.


I would perhaps agree if this was about desktop rather than web...


"Or rather, if the problem is resource and dependency and configuration management, we should solve the problem by moving to those languages that support fat binaries. Because fat binaries solve the main problem, without creating additional problems." This is the same attitude as the Javascript attitude. "Javascript is portable, so therefore all other languages are irrelevant."

This attitude is wrong. It is wrong because programming is hard. We have spent many decades trying to figure out how to program effectively. To produce simple maintainable code. Solving the hard problems, the interesting problems, is still too hard for us. The idea that "any language will do so long as it is portable" might as well lead us to write in LLVM-IR!



