While there are some important criticisms in here, there are some misunderstandings of OS-based virtualization with respect to HW-based virtualization in terms of both history and implementation. Certainly, anyone who thinks that "we have reached the point that [hardware virtualized] performance is almost as fast as bare metal" is either cooking their numbers or deliberately limiting the scope of what they are evaluating to purely CPU-bound work; when it comes to anything interacting with the outside world (which is to say, I/O), OS-based virtualization crushes HW-based virtualization.[1] That said, the security criticisms are entirely fair (though they are criticisms of Linux more than of Docker); taken together with the performance win of OS-based virtualization, they form our motivation for Docker containers running as LX-branded zones.[2][3]
Thank you for the detailed feedback and links, much appreciated. Your comments regarding performance are absolutely correct and I'll update the article to reflect this. LX-branded zones, and indeed Solaris in general, are things I haven't looked into, so I'll certainly watch that video/deck tomorrow. Thanks again.
Some of your points seem valid (although the writing style is hostile), but calling it "unnecessary" is an overreach. You think Docker is only used in deployment, but you are wrong. I use it to grade programming assignments, I distribute Dockerfiles to students, and it is very useful. I believe it is very useful in other respects as well; just because you are unhappy does not make it unnecessary. I ran almost 30,000 containers on a single machine and it saved me many hours.
I also had an incident with Docker 0.6: when creating a new container it used to traverse all the containers' metadata, and once the container count reached the thousands it froze my server and I had to manually reboot it at 3 AM. But hey, that's okay, it can be fixed; I reported it on GitHub and it does not happen anymore. It is still useful to me, and still necessary.
Thank you for the feedback, that's actually quite an interesting use case of the Docker ecosystem. I've already updated the article to reflect this, as you raise the same point that many others made.
(copy) "However it's important to note that this article focuses on the daily, long term usage of Docker, both locally and in production"
> I use it to grade programming assignments, I distribute Dockerfiles to students, and it is very useful.
You should consider doing your students a favor and not tying them to a single for-profit company's technology (most will go on to use whatever you taught them, since they will have grown comfortable with it during their studies).
Consider using one of the App Container Specification's implementations -- there are already 3 prominent ones, all adhering to the standard, which means their images are interchangeable and compatible with each other.
The standard is community-driven, and stable at this time. It could even become a school project to write your own implementation, teaching your students not only a ton about Linux, but the principles of containerization in the process.
Flawed? Sure. Useless hype? Yeah, well, are all the people loving it just stupid, or is it just placebo?
The author makes a lot of conjecture with very, very little backing.
I love docker. I'm a programmer more than a systems engineer. I've used Linux as my sole computing environment for 6+ years. I've deployed countless LAMP stacks; and countable Haskell/postgres stacks.
For both having an extremely portable development/building environment, and dead-simple distribution of binaries-with-prereqs, Docker has been INCREDIBLY useful to me.
For actual deployment of single-server apps, it might be a bit more trouble than it's worth in some cases. I have a couple of places where I develop and build in Docker, but actually deploy "raw", because it's easier and works fine.
But when you start considering CoreOS clusters and Docker containers to utilize them, deployment again is made cognitively simpler (to us mere mortals) by thinking in terms of containers.
I guess this is click-bait, but even as a passive user of Docker, I find it quite offensive, and not well-grounded.
I opened the article with an open mind, thinking someone smarter than me knew something terrible about Docker that was going to bite me in the ass some day... only to instead come away with the impression that I was either being trolled, or that the author is my favorite type of neck-beard elitist.
I'd be inclined to agree that my story was incomplete, something which I addressed in another comment [2], and I have made some amendments over the last few hours which give fair reference to the positives of the Docker ecosystem. However, I would disagree that I have failed to provide sufficient backing for the argument being presented, which has also been addressed in another comment [2].
I'm sorry to hear the article gave you the impression of being trolled, as I certainly did not intend this. If there are specific areas which you feel I did not justify, rather than an overall dismissal of the argument being put forward, then please call me out on it. If you read my other comments, you will see I am a reasonable person and happy to admit when I'm wrong. Though despite these corrections, I'm standing strongly by my conclusion.
PS) I have recently grown a neck beard, but it's going soon, itching the crap out of me.
I agree with everything here. Docker is amazing for developing things which need servers because it completely removes the game of trying to run multiple servers on your workstation.
For me, Docker "clicked" when I took a legacy PHP webapp, and was able to get the whole thing running in a Dockerized dev environment by setting some links and pulling down a Postgres and PHP container - which I can now launch into with a tmux script which lets me see the debug output of both while I'm developing, live updating while I edit and tinker in Webstorm.
That simply wasn't possible before - not nearly as easily.
> the author is my favorite type of neck-beard elitist.
Again, seriously? The guy has an issue with Docker, which is grounds for you to go all ad hominem on him? What is this, Reddit?
I can only speak for myself. I find docker to be immensely useful. Sure, there were always VMs, but I was very frustrated with multi-VM setups before discovering docker.
At the very least it let me iterate on system configuration / installation procedures more quickly. I have learned a lot just due to tweaking Dockerfiles and rebuilding them. Virtual machines are much slower in this regard.
In data science, Docker is becoming revolutionary. People tried distributing virtual machines to let others reproduce their work. Dockerfiles are much more reproducible, and don't need a few gigabytes of separately hosted VM images. Also, these VM images usually contain tons of undocumented "state", whereas a Dockerfile is easier to reverse engineer and ultimately very reproducible.
You can include Dockerfiles in all your projects. Other developers can then go in and get started with a minimal amount of guesswork. It turns out "sane" development environments, which probably means one supposedly optimal configuration/setup/framework for all your projects, are the exception rather than the norm.
With all due respect, you could get these exact same benefits by using Vagrant with a CM (configuration management) tool. Vagrant removes the need to manually manage the VMs, and the CM tool will handle the iteration on "system configuration / installation procedures".
It's my opinion that Docker images from Docker Hub are less repeatable than your average VM; as pointed out in the article, there's nothing stopping someone from re-using a tag, and the contents of the image depend entirely upon what the upstream image creators felt was necessary (which can include things like grabbing config files from the internet in order to run a service).
> Also these VM images usually contain tons of undocumented "state"
Although I agree this approach is absolutely flawed, and anyone doing so needs to re-evaluate their workflow, I feel this comment is more an argument for doing things properly than a positive of Docker. Everything you mentioned above can be easily achieved by using Vagrant and a provisioner. But as discussed in other comments, the Docker ecosystem helps you get started quicker.
I have used Vagrant and I don't think that it is in any way better, except when bridging operating systems. Typically, Vagrant-based workflows are slow and resource hungry compared to Docker. I have Python projects where I can run test suites in multiple separate, fresh containers, on Python 2 and 3, in several combinations of dependencies, in a few seconds on a weak netbook. Try that with Vagrant sometime...
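To illustrate, roughly what that looks like (the image tags and test command here are just examples):

    # Run the suite against throwaway Python 2 and 3 containers;
    # each run starts from a fresh image, so no state leaks between runs.
    for tag in 2.7 3.4; do
        docker run --rm -v "$PWD":/app -w /app python:$tag \
            sh -c "pip install -r requirements.txt && python -m pytest"
    done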
You seem to assume that everyone should be doing everything the "right way". Turns out they don't. I found it easier to deal with stuff not being done "perfectly" than to try and argue other developers into doing stuff "right".
Vagrant certainly has its own problems and frustrating bugs; in fact, I outright refused to use it in the early days. There are differing opinions on "the right way", depending on what your priorities are. Mine are based on precision, perfection and beauty.
I'm glad you replied, as it wasn't intended to be arrogant. Some people care about the end result, not worrying about how it gets done, as long as it's done. Others care about the journey, perfecting their art at every stage, with little consideration for the end goal. And some people are able to balance these things.
Have you ever looked at the source code of a dependency, followed by an overpowering and compelling urge to write your own from scratch? Suppressing the urge to rage-quit because your tools are fighting against you, not with you? For me, it is a constant daily fight to force myself not to do things "properly", for it's mostly an unrealistic and unachievable goal.
Believe me when I say that being a perfectionist is nothing to be arrogant about, and despite it being one of my major strengths in some ways, it's also one of my biggest flaws.
(I'm taking it on faith that you're not trolling, and have taken time to give an honest reply)
> I can only speak for myself. I find docker to be immensely useful
It seems you are making an argument not for Docker itself, but for containers on Linux in general. To that point, something like the App Container Specification standard and one of its implementations would suit your needs just as well, if not better. There are already 3 prominent implementations of the App Container Specification, each doing their thing differently but all compatible and interchangeable with each other and their images.
No, it is an argument for Docker's workflow. It seems a lot of the criticism calling Docker unnecessary overlooks how it makes distributing Linux containers as easy as git.
If your argument is that it is too easy to share then you are going to have a hard time winning over users.
But honestly, how hard do you think it is to integrate secure builds into a docker system? I would just stand up a private docker registry and lock the build system to that, problem solved. Or docker could roll out an update to leverage the existing namespaces and combine that with a user controlled whitelist of public key & namespace pairs. The reason docker has enjoyed this much success is because they understand sharing is #1.
> Or docker could roll out an update to leverage the existing namespaces and combine that with a user controlled whitelist of public key & namespace pairs.
Could, but haven't.
> The reason docker has enjoyed this much success is because they understand sharing is #1.
Except they have fought tooth and nail against a standardized specification until App Container Specification was released. Docker isn't about "sharing", it's about vendor lock-in.
Thanks for sharing, I've seen appc mentioned in a few comments, so I'll check it out properly tomorrow. You also raise a good point about blindly trusting containers, which, despite having had concerns about it in the past, I regrettably didn't touch on.
"If you expect anything positive from Docker, or its maintainers, then you're shit outta luck.[..]If your development workflow is sane, then you will already understand that Docker is unnecessary[..]Docker would have been a cute idea 8 years ago, but it's pretty much useless today"
Umm, ok? Judgement, jumping to conclusions. Take out some of that kind of tone and you have some legitimate criticisms of Docker that might be taken more seriously.
There are some real warts in the docker core & community, and a high barrier of entry for multi-node deployments and isolated networks, but on the flipside, deterministic image build & deployment is a big win compared to many other ways of doing this stuff (at scale).
IMO, one of the biggest positives that Docker-like provisioning encourages is clustered-by-default architectures. When you build around failure by assuming nodes come and go (as easily as Docker makes it) your platform availability is likely to be more resilient to some types of failure.
Thank you for your honesty. Although I stand by my comments of Docker being unnecessary, I would agree that my "If you expect anything positive" comment is wrong. This has been pointed out by other people as well, and I'm going to be amending the article shortly to reflect this.
> high barrier of entry for multi-node deployments and isolated networks
Have you tried Weave? [1] It gives you an isolated multi-node network for your containers. I'd be interested to know if you still think it has a high barrier of entry.
I would agree that Docker has some flawed fundamentals, that it has many implementation flaws, and that it is over-hyped. Out of the box, it is almost useless for real-world production usage.
But - despite agreeing with your words - I wouldn't put them together to reach the same conclusion. The container model with managed images seems to be a big step forwards vs the VM model: it encourages having immutable 'machines' that are disposable, which works very well for cloud architectures.
I would check out the Kubernetes project and where it is heading: containers orchestrated across a cluster of machines, with seamless/automatic networking between the containers.
Thanks for the feedback, I'll check out Kubernetes. Although I agree that containerisation is having an impact in terms of encouraging immutable infrastructure, I would argue that this is easily achieved using virtualisation.
Other people have raised a similar argument about the Docker ecosystem, which is a fair point, and I'll be amending the article to reflect this.
I would have to strongly disagree with this, not just because AWS has been capable of doing this for quite some time, but also because of my own success stories. I'm due to post a follow-up article, explaining the pros/cons of the alternative solutions, within a month or so.
Well, you could have managed images in a VM; many do that, including people just using EC2.
You could also do that using LXC or whatnot, not necessarily with Docker.
So basically, Docker adds nothing to these other solutions except complication, bugs, etc. (which is also my experience).
Moreover, and to push it a little further: if anything, it's a better idea to call some container library so that you use namespaces for your very use case, inside a VM. That VM being an image that you manage and deploy, of course (i.e. one that you don't tweak while it's running, unless it's for debugging).
For me the biggest win with Docker is how composable it makes containers. Building new images only takes moments because of how fast containers can be downloaded, started and modified.
It's certainly possible to do all of the same things with virtual machines, but the difference is that the process is really cumbersome. No one has made an equivalent of Docker Hub for virtual machine images and snapshot deltas.
Exactly this. Also, in my experience Docker wraps LXC much more nicely than raw LXC, which on my current work machine with Ubuntu 14.04 doesn't work at all (whereas I've got a great chain of Docker images set up for web dev right now, which gives me a complete overview of what I'm building).
Docker doesn't wrap LXC any more; they re-implemented that stuff in libcontainer. Well, the code to call LXC is in there somewhere as a backwards-compatibility option, but you won't be using it unless you really try.
Although I haven't yet reviewed LXD, I agree that plain LXC is an absolute PITA to use; at least it was in 2012 (the last time I used it in prod, aside from Docker).
Orchestrating things across a cluster of machines with seamless networking isn't new with kube and the Docker ecosystem. Check out Mesos some time, which has enabled that kind of thing for quite a while.
Well, if you want to play that game, you can look further back to HTCondor which has been doing this since 1989. I'm sure earlier examples exist as well, given that the concept itself isn't all that esoteric.
Interesting, I hadn't heard of HTCondor before but have recently become more interested in bioinformatics, which seems to use this quite a bit (at least in Cambridge). I'll have a read, thanks for the heads up :)
I strongly prefer hardware-assisted virtualization wherever possible. I'd like to see more super lightweight OSes like MirageOS, as well as sane trusted computing functionality all the way up the stack, but that might be asking for a bit much.
I'll check out MirageOS, I've also heard some interesting things around CoreOS, which I'm hoping to get some lab time for soon. Finding time to review everything properly is difficult, this article was the summary of 6 months work and 18 hours writing.
The author is viewing/reviewing Docker through a very myopic lens.
I work at a company that leverages Hadoop, and on each developer machine we have a baby Hadoop cluster (3 nodes) running in VMs at the moment, in addition to a master controller. Hence, at any given time, my machine is actively running 5 OSes (if we include the native OS) and all their associated background tasks. My computer generally idles at 25% CPU usage. Keeps my legs warm in the winter.
We're finally going to Dockerize the project and I couldn't be happier. Granted, Mac OS X doesn't natively support the Linux containers that Docker uses (c'mon Apple!), so we'll still be running 1 VM, but that's a huge improvement over our existing implementation. My CPU won't be idling nearly as high, I'm going to recover a ton of RAM, and the boot sequence is going to be substantially faster.
Being that these are developer machines, and simply understanding our use case, we're not worried about malicious tenants breaking out of their environments. It's just not a factor in our situation.
If anything, I feel the "article" was a waste of time. I thought I was going to read a great summary on the state of Docker, and instead I got this unexpectedly aggressive piece from a developer who couldn't conceive how to fit this particular tool into his toolbelt.
Thank you for your honest feedback, I do really appreciate it. You've raised the same point many others raised, and it's important to note that my argument of uselessness is from the specific standpoint of daily, long-term usage of Docker, both locally and in production. I have since updated the article to reflect some more of the positives, and you and others are right to call me out on this.
If you look further into the article, you will see that I do give fair mention that Docker/LXC can result in faster startup times and lower memory usage.
The tone of the article is an accurate representation of my feelings towards Docker, writing anything else would have been a lie. I'm sorry you feel like the article was a waste of time, but I appreciate your honesty! Despite these corrections, I still stand by my conclusion, however it's been quite insightful to have peer review from a wide audience and so far 99.9% of the feedback has been constructive.
I typically run 3 VMs, identical OSes on each, to replicate a scaled-down production environment. My laptop idles at near-zero CPU usage, and KVM nicely deduplicates memory between the VMs. I'd have a look at your setup configuration before ripping it out; something is off there. A poor hypervisor, mixed OSes, or possibly some base load on the VMs could explain the behaviour you're seeing.
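(The deduplication is KSM, kernel samepage merging, for anyone wanting to try it; a minimal sketch using the standard sysfs knobs, as root:)

    # Turn on kernel samepage merging; KVM/QEMU guests opt in automatically.
    echo 1 > /sys/kernel/mm/ksm/run
    # See how many identical pages are currently being shared:
    cat /sys/kernel/mm/ksm/pages_sharing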
I partially agree with you, except for one very important point: Despite its flaws and hype, it is still incredibly useful.
I just wish it were easier for people to read the contents on the label. Right now everyone thinks it is good for everyone. It isn't, and there are some sharp edges that require RTFM'ing.
I am really enjoying it for daemons that don't need to save any state locally (via volumes). For us, it's a great fit for web app servers, background workers, and other things that don't need any guarantee of local persistence.
I feel like these sorts of use cases are the sweet spot right now. I have zero desire to run something like Postgres in Docker in a production environment. On the flipside, a Postgres Docker container is great for a local dev environment.
Thanks for the feedback, someone else made a similar comment on Reddit, and it's a fair point.
(paste from reddit) I agree that the Docker ecosystem makes getting started easier, and my argument of uselessness is focused on the long-term usage of Docker, rather than the immediate "quick start" gains. Nevertheless, I'll amend the article with reference to this, as you make a fair comment.
I like it for quickly spawning fresh places to test during development, or for a fresh environment, since so many package managers (gems, bundler, npm, etc.) fuck themselves up and install conflicting packages.
So I find this article to be the same as the hype around Docker: useful information surrounded by hyperbole that ruins it.
I don't usually need a full VM, and being just local dev, security isn't an issue; I used Vagrant before Docker for this.
Thanks for your honesty, I'm drafting a follow-up article which explains the pros/cons of each alternative that I've reviewed so far (packer.io, Heroku and EC2). However, people have been posting some interesting alternatives which I didn't know about, so it'll be a month or so.
Look, if you're going to submit your own blog posts to Hacker News --- I assume sleepycal is the same Cal as the Cal who wrote the blog post --- then take the time to polish them up a little, and tone down the flameshow.
Would you like it if I told people your blog post was useless because you buried your thesis in the third-last paragraph and led off with an either astounding or false assertion that you tested Docker (which you had previously disliked) by putting it in production for six months? That sort of thing needs some explanation and storytelling. If Docker induced so much bile in you, then how and why did you end up stuck using it in production for six months?
Thank you for your honest thoughts, I respect that. I've updated the article to reflect the comments that yourself and many others have made, and you're right to call me out on it.
You are right that I've failed to justify my testing, other than stating that it was used in prod for 6 months, and in hindsight this was a mistake. I'll amend the article to reflect this, naturally it will be of limited use without reproducible code snippets, but it will at least complete the story.
As for polishing, I'd spent around 18 hours writing this article, with 4 full cleanups. I had considered putting a job out on Fiverr to perfect the grammar, but that felt wrong.
I came across a few of the issues mentioned in the article while using Docker over the last 4 months, but I have to say I will never look back.
What I like the most personally is how easy it is to install and experiment with 3rd-party tools. You want an Elasticsearch stack? In one command you have a webserver hosting Kibana, with Elasticsearch and Logstash properly configured, on your local machine. Jenkins, Elasticsearch, Redis, Postgres, etc.: they all have their Dockerfiles and can be installed as one-liners. Removing them is equally easy.
Oh, and I don't know why it is written that running a Docker registry is "extremely complex". Just like any Dockerized app, it is a one-liner.
This new ease of install, just by itself, is worthy of my gratitude to the guys that built it.
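To make the one-liner point concrete, a rough sketch using the official Hub images (names as published there):

    # Pull and start services in one line each:
    docker run -d --name redis -p 6379:6379 redis
    docker run -d --name pg -p 5432:5432 postgres
    # Removing one again is just as quick:
    docker rm -f redis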
Can you show me how you set up Kibana with ES & Logstash? I've tried setting them up separately before but it got too convoluted for me. With Docker, I might actually get it going finally. I administer a few servers and checking logs has always been an issue for me; usually I end up doing it after the fact, because I wasn't warned of issues.
If you're renting VMs from a provider like DigitalOcean or Linode, a VM per service can get expensive compared to running multiple containers within a single VM. So, if not Docker, some system for deploying containers in production can still be useful.
Although I would agree that services such as EC2 are not cost-effective for 1:1 replacement, this argument is somewhat invalidated when you consider $5 instances from DigitalOcean: $100/month for 20 concurrent VMs is enough to replicate even the most complex of deployments. But if you're running on a shoestring, in which case it's probably not production-critical, then sure, containerisation might be useful too.
Also remember that Docker is not a container deployment system, and AFAIK they don't advertise themselves as such (though they don't make much effort to explain this explicitly, either).
Thanks for the feedback and alternatives, I hadn't seen Spoon or LXD yet so I'll check those out. Namespaces are sometimes supported directly by the application itself, for example uWSGI, and anything lacking native support you can use firejail [1]. There is also an article [2] by an OVH employee about using namespaces directly in C.
I'm looking forward to LXD in particular. I really like how Docker feels like a convenient application deployment platform, but there is no isolation between the container and the host OS. One thing that bothers me is that a lot of application images on the Docker repository, including official ones, run as root. Users and groups are mapped one-to-one to the host; running as root in the container means you are root on the host. I really hope LXD maintains the convenience of Docker with the added isolation a hypervisor brings.
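Until then, a partial mitigation is to drop root inside the image yourself; a minimal Dockerfile sketch (the user name and app path are made up):

    FROM debian:jessie
    # Create an unprivileged user; running as uid 0 in the container
    # means uid 0 (root) on the host too.
    RUN useradd --create-home appuser
    COPY app /home/appuser/app
    # Everything from here on runs unprivileged.
    USER appuser
    CMD ["/home/appuser/app"]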
Rocket is an implementation of the App Container Specification, which is an open standard. There's several other implementations as well, and Apache Mesos is working on another.
Each will have its own workflow, but they will be interchangeable and compatible with each other's images.
Just wanted to say a huge thank you to everyone who has taken time to reply, the response has been overwhelming! There's still many comments I'm yet to reply to, and will finish replying to these tomorrow evening, but it's now 5am and sleep is required.
I think Docker is really successful in terms of marketing (no offense); I guess they've learnt a lot from the Node.js/Golang camps :)
Just like a few years ago, everyone here on the front page was talking about NoSQL, and it seemed like every day there was a new webscale NoSQL database being born... but now I often see more people here inclined toward traditional technologies such as Postgres, as it has been greatly improved. (Yes, a lot of aged software is still improving, e.g. MySQL, Apache, PHP or even Perl 5.)
I am not saying Docker is just hype, and I don't think it is completely unnecessary either, but it is not what I need to solve my immediate problems, so I would rather review Docker again... maybe a few years later.
There's a lot of helpful information in the article, much of which I didn't know about. I'm glad that you took the time to link to specific issues, that's very helpful. However, the article is so incredibly negative in tone, and so thoroughly dismissive of Docker that I have a hard time taking it to heart.
Docker is not a silver bullet, and it is definitely being over-hyped right now. But it does have some great use cases, and it has got a lot of people working together on re-thinking how we build and deploy applications. I think you'd be more likely to bring about positive change by adjusting your tone and trying to point out some use cases where Docker excels. I can't speak for everyone else, but I'm a lot more likely to take to heart someone who shows some balance while criticizing.
Unless you just wanted to get lots of page views with a hyper-critical (though well-researched) article. If that is the case, carry on and disregard this.
Thanks for the reply, it actually made me smile. Typically my articles are a representation of the emotions that the subject makes me feel, in this case 6 months of rage inducing frustration. I do try and balance out my criticisms with positive thinking, which can be seen in a recent review I did of swampdragon [1].
However, you're quite right to point out that this article fails to represent the positive impact of the Docker ecosystem, something which several other people have also commented on. I'm going to amend the article to reflect this.
Ok, my honest takeaway is that you have too negative an attitude. I'm not a docker cheerleader, and there are likely some good points and issues raised but I couldn't help thinking this came across as just being mad at the world for liking docker.
> This did not solve the problem of slow speeds, but the only alternative was to use our own Docker Registry, which is ridiculously complex.
I feel like if you're going to criticize running your own registry you really need to devote more time to it. You can get an S3-backed, distributed private registry running in about 5 minutes using their provided image. It's really, really easy.
Thank you for the feedback. Although the README looks slightly easier than it did previously, it still makes for difficult reading, and I'd argue that they need to document this better. Unless you have a link to somewhere this has already been done? (A quick 5-minute look on Google didn't give any definitive answer.)
I don't have a link to any improved docs, no. I did set up a private registry at my company though, and I remember it being quite painless. We just run the stock image, pass our credentials to it via environment variables, and it Just Works™. I think the README is verbose because there are a lot of options when it comes to running the thing. If you know off the bat that S3 is what you want, though, I think you can read it in a pretty targeted fashion.
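From memory it was something along these lines (the env var names are from the old docker-registry README, so double-check them before relying on this):

    # Stock registry image, backed by S3, credentials via env vars:
    docker run -d -p 5000:5000 \
        -e SETTINGS_FLAVOR=s3 \
        -e AWS_BUCKET=my-registry-bucket \
        -e AWS_KEY=... \
        -e AWS_SECRET=... \
        registry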
True, I finally broke down and just added "--insecure-registry" (it's just a cert verification issue, the connection is secure regardless). Will try again later, sounds like they're planning major registry changes anyway.
Anyway, while the S3 backed private registry is kinda slow, it works and it's literally a single docker run. I'm not sure how it could be simpler.
https://github.com/docker/docker-registry/blob/master/ADVANC...
Great post with some very useful insights, thanks also for the links to the issues.
I do like the approach of the CoreOS team of working collaboratively on a spec to include every platform.
https://github.com/appc/spec
My favourite gotcha for Docker right now is the lack of ssh agent forwarding support with Dockerfile. The only solution appears to be to give the Dockerfile a passwordless build key, which is sorta okay for CI but a total hassle for individual developers.
I find it bewildering that you ran Docker in production for 6 months and yet found running your own Docker registry a "complex" operation. Docker has its flaws, like any other technology, but running a registry took me all of 2 minutes on an EC2 instance. Kinda makes me wonder how much you really understand about a real Docker workflow.
Although this comment seems to be more focused on baiting than on constructive criticism, it points out that I have failed to justify my comments about the Docker registry being overly complex, though I addressed this in another comment [1]. Either way, thanks for taking the time to give your thoughts.
I've been using Docker in an end-to-end development to production environment for over 6 months now and was hoping to get some valid insight from your review. However mentioning something like that makes the article seem less credible (even though the rest of the article might have some valid points).
Ignoring the obviously opinionated cruft and hyper-aggressive uber geek disdain, which appears to make up about 70% of this post, there are still one or two actual statements worth examining. Fwiw I run a small site, fifteen or so instances, and we've been using Docker in our deployment for about a year now.
> Lets say you want to build multiple images of a single repo, for example a second image which contains debugging tools, but both using the same base requirements. Docker does not support this.
Of course it does. It appears that it doesn't support it the way you think it should, but to say that you can't do it is misleading. A base image, and two images that pull from it with the different requirements will solve the problem. You apparently don't like that solution, but that is not the same thing as not having a solution.
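Concretely, something like this (the image names are hypothetical):

    # base/Dockerfile -- the shared requirements, built once as myorg/base:
    FROM debian:jessie
    RUN apt-get update && apt-get install -y python

    # debug/Dockerfile -- everything above, plus debugging tools:
    FROM myorg/base
    RUN apt-get update && apt-get install -y strace gdb

    # Build order: docker build -t myorg/base base/
    #              docker build -t myorg/debug debug/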
> there is no ability to extend a Dockerfile
Yeah, this would be nice. Maybe they will add it. But it is hardly... not even close to... a make-or-break feature. Honestly I think you might just need to refactor your stuff, or perhaps Docker just isn't a fit for what you're doing.
> using sub directories will break build context and prevent you using ADD/COPY
You mean if you include a bunch of stuff in subdirectories that you don't want uploaded to the daemon. Again, man, not even close to make or break. You really need to log gigabytes to a subdirectory in your build context? There's _no other way_ you could set that up? We create gigs of logs too, but most of them are events that go to logstash and get indexed into ES. Our file-based logs go to mount points outside the container. We do have images we build using context, where we ADD or COPY multi-gigabyte static data files. Seems to work fine.
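And for junk you can't relocate, there's .dockerignore (supported since around Docker 1.0; the patterns below are examples):

    # .dockerignore, kept next to the Dockerfile; anything matched here
    # is never sent to the daemon as part of the build context.
    .git
    logs
    *.log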
> and you cannot use env vars at build time to conditionally change instructions
No, you can't. I'm not sure I would want to. I like the fact that the Dockerfile is a declarative and static description of the dependencies for a deployment. I don't think I want to have to debug conditional evaluation at build time. There are other ways to solve those problems, like refactoring your images.
> Our hacky workaround was to create a base image, two environment specific images and some Makefile automation which involved renaming and sed replacement. There are also some unexpected "features" which lead to env $HOME disappearing, resulting in unhelpful error messages. Absolutely disgusting.
First of all, what exactly is hacky about having a base image and two environment-specific images? I don't know what sort of Makefile automation you're talking about, but we do some environment-specific sed manipulation of configs at build time, and in some cases at container launch time. Sometimes that makes more sense than having two different versions of the container just to make a very slight change to the config.
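The launch-time variant is just an entrypoint script along these lines (the config path and TIER variable are invented for the example):

    #!/bin/sh
    # entrypoint.sh: rewrite the config for this environment,
    # then hand off to the container's real command.
    sed -i "s/@TIER@/${TIER:-production}/" /etc/myapp/app.conf
    exec "$@"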
Secondly... absolutely disgusting? Is that the sort of language you regularly use in technical writing? Oh, hey, look at the third paragraph: "If you expect anything positive from Docker, or its maintainers, then you're shit outta luck." I guess it is. The strike-out font was a nice touch, man. "I don't really mean this, but you can't help reading it!" Nobody's ever done that before.
> These problems are caused by the poor architectural design of Docker as a whole, enforcing linear instruction execution even in situations where it is entirely inappropriate
You're not talking about linear instruction execution. You're talking about grouping instructions into committed layers. I would much prefer the proposed LAYER command to conditional execution or branching, which is what I assume you mean by non-linear in your comment. But I don't find this to be a serious problem either. That seems to be a pattern with this post: in a year of using Docker to containerize all our services - in-house python code, Django, redis, logstash, elasticsearch, postgresql - I haven't run into these issues that are deal breakers for you. Again, you might want to try to refactor and simplify some of your image builds. It's better to have a few simpler containers talking to each other than to try to cram a complex multi-service deployment into one. But then, I don't know what you're doing, and maybe it's just not suited for containers. You seem to have a strong preference for VMs anyway, so do that.
> However the Docker Hub implementation is flawed for several reasons. Dockerfile does not support multiple FROM instructions (per #3378, #5714 and #5726), meaning you can only inherit from a single image.
This whole post is like a laundry list of Absolutely Critical Things Nobody Ever Needed. I can't imagine a situation in which you'd absolutely have to be able to inherit from multiple images. If you have that situation I would agree it's an indicator Docker won't work the way you currently want to do things. I do agree with you about the occasional speed issues on the hub. But they're giving it to lots of people for free, and to me for a ridiculously low price. If I need better performance I can always run my own registry.
> There are some specific use cases in which containerisation is the correct approach, but unless you can explain precisely why in your use case, then you should probably be using a hypervisor instead.
There are some specific use cases in which virtualization is the correct approach, but unless you can explain precisely why in your use case, then you should probably be using containers instead.
See what I did there?
> If your development workflow is sane, then you will already understand that Docker is unnecessary.
I do like to read even-handed, unbiased reviews of technologies like Docker, even when I already use them. I like to have my world view challenged with an exposition of solid critical points. Maybe someone will write an article like that.
Thank you for the honest feedback; this is the first post I've seen arguing the technical aspects of the article.
> The strike-out font was a nice touch, man
I've made it clear in other replies [1] that the comment was in poor taste; the strikethrough was intended to show that I was wrong to make the statement, while not attempting to hide it from history. I have since moved it to the bottom of the article to draw attention away. To be absolutely clear: I admit I was wrong to make that comment.
You hold the opinion that most of the technical frustrations I mentioned are not deal breakers, for you these things don't matter, but to me they do. As I explained in other comments [2], my priorities are based on precision, perfection and beauty, not "getting the job done".
In my use case, Docker worked in production for 6 months and got the job done, but it left me feeling frustrated and dirty. As such, I'll refrain from responding to your individual technical arguments, because we have fundamentally different views on priorities and importance, and it would result in both of us wasting our time.
However I've added a link to this reply in the article, as other users might find this rebuttal helpful. (If you would rather I remove your handle, let me know).
Thank you again for spending time writing this detailed reply.
[1] http://dtrace.org/blogs/brendan/2013/01/11/virtualization-pe...
[2] http://www.slideshare.net/bcantrill/docker-and-the-future-of...
[3] https://www.joyent.com/developers/videos/docker-and-the-futu...