
I have mixed feelings about Docker. I've found three major use cases so far:

(1) Testing.

(2) Build environments -- it's helpful to build distribution Linux binaries on older Linux versions like CentOS 6 so that they'll work on a wider range of production systems (a quick sketch follows this list).

(3) Installing and running "big ball of mud" applications that want to drag in forty libraries, three different databases, memcached, and require a custom Apache configuration (and only Apache, thank you very much).
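
For (2), a minimal sketch of what I mean -- the image tag and the build command are just placeholders for whatever your project actually needs:

```
# build inside a CentOS 6 container so the binary links against an old glibc
# and therefore runs on pretty much any newer distro
docker run --rm -v "$PWD":/src -w /src centos:6 \
    sh -c 'yum install -y gcc make && make clean all'
```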

#3 is really the killer app.

This has led me to conclude that Docker is a stopgap anesthetic solution to a deeper source of pain: the Rube Goldberg Machine development anti-pattern.

More specifically, Docker is a far better solution than the abomination known as the "omnibus package," namely the gigantic RPM or DEB file that barfs thousands of libraries and other crap all over your system (that may conflict with what you have).

Well-written software that minimizes dependencies and sprawl and abides by good development and deployment practices doesn't need Docker the way big lumps of finely woven angel hair spaghetti do.

Docker might still be nice for perfect reproducibility, the ability to manage deployments like git repos, and other neat features, but it's less of a requirement. It becomes a nice-to-have, maybe, but not a must-have.

But... if my software is not a sprawling mess that demands that I mangle and pollute the entire system to install it, why not just coordinate development and deployment with 'git'? Release: git tag. Deploy: git pull X, git checkout tag, restart.
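
In shell terms the whole "pipeline" is roughly this (remote, tag, and service names are made up):

```
# cut a release
git tag -a v1.4.2 -m "release 1.4.2"
git push origin v1.4.2

# deploy, on the server
git fetch --tags origin
git checkout v1.4.2
sudo systemctl restart myapp
```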

Finally, Docker has a bit of systemd disease. It tries to do too much in one package/binary. This made the rounds on HN a while back:

https://github.com/p8952/bocker

It demonstrates that at least some of Docker's core functionality does not require a monster application but can be achieved by using modern filesystems and Linux features more directly.
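
For the curious, the primitives it leans on look roughly like this (paths and names are illustrative, not bocker's actual code):

```
# copy-on-write "image" layer via a btrfs snapshot
btrfs subvolume snapshot /var/images/base /var/containers/c1

# new PID, hostname, and network namespaces around a plain chroot
unshare --pid --fork --uts --net \
    chroot /var/containers/c1 /bin/sh
```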

So honestly I am a bit "meh" about Docker right now. But hey, it's the hype. Reading devops stuff these days makes me wonder if "Docker docker docker docker docker docker docker" is a grammatically correct sentence like "Buffalo buffalo buffalo buffalo buffalo buffalo."




>Docker might still be nice for perfect reproducibility

Docker actually doesn't help reproducibility at all, because the underlying reproducibility problems in the distros and build systems it relies on are still there. See GNU Guix, Nix, and Debian's Reproducible Builds project for efforts to make builds truly reproducible.
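
For what it's worth, both Guix and Nix can actually check this: rebuild an already-built package a second time and compare the result bit for bit (flags from memory, so treat as approximate):

```
# rebuild the package and error out if the output differs from the first build
guix build --check hello
nix-build --check '<nixpkgs>' -A hello
```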

I had a good laugh when I read "the Rube Goldberg Machine development anti-pattern". This describes "modern" web development perfectly. I'll add that such software typically requires three or more different package managers just to install all of its dependencies. And yes, Omnibus is an abomination and Docker is much better.

I think Docker is papering over issues with another abstraction layer. It's like static linking an entire operating system for each application. Rather than solving the problem with traditional package management, Docker masks the problem by allowing you to make a disk image per application. That's great and all, but now you have an application that can only reasonably be run from within a Linux container managed by Docker. Solving this problem at the systems level, which tools like GNU Guix do, allows even complex, big ball of mud software to run in any environment, whether that is unvirtualized "bare metal", a virtual machine, or a container.
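
To make that concrete, the same Guix package definitions can be turned into a plain tarball for bare metal or a VM, or into a Docker image -- the package name here is just an example:

```
guix pack hello                # tarball of the package plus all its dependencies
guix pack -f docker hello      # the very same packages as a Docker image
```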


> It's like static linking an entire operating system for each application.

You say it like it's a problem, but that's the most concise description of Docker I've yet read. It rhymes with the way all the fed-up oldies using Go like its static linking.


This is pretty much how I view Docker as well. Except it's not really the entire operating system. A VM image is the ultimate static linking.


Nothing wrong with linking together a couple things to build an app... I call it Rube Goldberg (a.k.a. ball of mud, pile of crap, etc.) when it's dozens of things that all have to be tweaked in exactly the right way or everything assplodes.

I simply will not run apps like that unless I have no choice. If I see that, plonk it goes into the trash.

... and yes, the whole package management situation is comical. Every language has its own package management system, and so does the OS, and sometimes people use both at the same time. It's ridiculous.


> It demonstrates that at least some of Docker's core functionality does not require a monster application

```
The following packages are needed to run bocker:

  btrfs-progs, curl, iproute2, iptables, libcgroup-tools,
  util-linux >= 2.25.2, coreutils >= 7.5

Because most distributions do not ship a new enough version of util-linux,
you will probably need to grab the sources from here and compile it yourself.

Additionally your system will need to be configured with the following:

  * A btrfs filesystem mounted under /var/bocker
  * A network bridge called bridge0 and an IP of 10.0.0.1/24
  * IP forwarding enabled in /proc/sys/net/ipv4/ip_forward
  * A firewall routing traffic from bridge0 to a physical interface
  * A base-image which contains the filesystem to seed your container with
```
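
Spelled out, that host configuration is roughly the following (the outbound interface is my guess):

```
# assumes a btrfs volume is already mounted at /var/bocker
ip link add bridge0 type bridge
ip addr add 10.0.0.1/24 dev bridge0
ip link set bridge0 up
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE   # eth0 is a guess
```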

Is this the "well-written software" pattern that you're talking about? Because to me, this looks like a "big ball of mud" - i.e. dependence on an eclectic combination of libraries, co-programs, and environment configuration - and indeed, if for some perverse reason I felt like I wanted to deploy this in production, it's exactly the kind of thing I'd wind up writing a Dockerfile for. (Which, I notice, is functionality "Bocker" doesn't attempt to replicate.)


A few packages are needed, but in their standard configurations. Bocker does not require you to install hundreds of packages and specially tinker with each one the way many web stacks do.



