I think it's worked pretty well so far, though we have had a few difficulties:
- Particularly in the early stages of development, we've been changing dependencies a lot, and each change requires an image rebuild. The rebuilds are very consistent, but also a bit tedious.
- For those of us on Macs, using docker-machine for development hasn't been all that great, because inotify events don't reach the container, so automatic code reloading (watchify, nodemon, etc.) doesn't work. However, they're hard at work on that with the new Docker for Mac. In the meantime, a polling-based workaround is sketched after this list.
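Here's that workaround as a minimal sketch: nodemon's --legacy-watch (-L) flag switches it to polling for file changes instead of relying on inotify. The image and file names are just placeholders.

```
# Run a dev container with the source mounted from the host
# (my-node-image and server.js are hypothetical). --legacy-watch makes
# nodemon poll for changes, which works even when inotify events don't
# cross the docker-machine VM boundary.
docker run -v "$(pwd)":/usr/src/app -w /usr/src/app my-node-image \
  nodemon --legacy-watch server.js
```

Polling burns a bit more CPU than inotify, but for a development container that's usually an acceptable trade.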
I'm hopeful that technologies like Kubernetes will make it easier to deploy these containers, too, but I haven't really gotten there yet. Maybe another article some day!
- OS packaging is tedious, to say the least, and "git clone and pull dependencies on the production systems" processes are generally considered messy (if not evil, especially when pulling from repositories hosted on the internet). Docker solves both of these issues by offering a generic interface for shipping and deploying isolated instances of your application. You don't get that with just namespaces and AppArmor; you need an API for it, and that API is really what makes Docker so useful.
- Docker (potentially with the help of its ecosystem) can provide a uniform interface to a couple of the most important operational aspects of an application: logging and monitoring. Especially for heterogeneous or simply large environments, this is a big win.
- Once container deployment orchestration matures, it will be much easier to manage and auto-scale your application in a large-scale setting, since you won't need to reinvent that wheel for every specific stack out there. It will come.
- It makes development environments easier to set up and understand. Similar to Vagrant, Docker Compose lets you describe your architecture in a config file and then sets up the full stack for you (a rough sketch follows this list). That's especially useful in companies supporting or developing against a complicated stack, and it probably makes on-boarding new developers a lot easier, too.
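To illustrate that last point, here's a minimal docker-compose.yml for a hypothetical two-service stack; the service names, images, and ports are made up.

```
# docker-compose.yml (v1 format): a web app linked to a Postgres
# database. Everything here is illustrative, not from a real project.
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
  environment:
    DATABASE_URL: postgres://postgres@db/app
db:
  image: postgres:9.5
```

With this in place, `docker-compose up` brings up the whole stack, and `docker-compose logs` gives you the kind of uniform logging view mentioned above.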
Still, you're making a valid point: you need to maintain those containers, just like you need to maintain your application's dependencies. Let's be honest, though: in many ecosystems that problem already exists. Take your typical JEE application that, once built, hardly ever upgrades its dependency list anymore (in many cases, no one even monitors what security holes are being found in all those jars). But yes, you should, and that problem is really not solved by Docker containers. Most images/containers will be as thin as possible (as will the host OS, preferably), but the concern remains valid.
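On the "thin" point, for illustration (my sketch, not anything from the parent comment), a thin image is basically a minimal base plus only the runtime your app needs:

```
# Hypothetical minimal image: an Alpine base plus just the Node runtime
# and the app, shrinking both the attack surface and what needs patching.
FROM alpine:3.4
RUN apk add --no-cache nodejs
COPY . /app
WORKDIR /app
CMD ["node", "server.js"]
```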
Running as a non-root user inside the container adds an extra layer of protection and follows the principle of least privilege.
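In a Dockerfile that looks something like this; the user name and paths are placeholders, and the base image is just an example:

```
# Hypothetical Dockerfile: create an unprivileged user and switch to it,
# so the app process doesn't run as root inside the container.
FROM node:4
RUN useradd --create-home --shell /bin/false appuser
COPY . /home/appuser/app
RUN chown -R appuser:appuser /home/appuser/app
USER appuser
WORKDIR /home/appuser/app
CMD ["node", "server.js"]
```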
RUN npm config set registry https://registry.npmjs.org/
I don't know exactly why it works (presumably it forces npm to talk to the official HTTPS registry instead of whatever default the build was resolving), but it does.
With newer Docker (1.2 onwards, see https://docs.docker.com/engine/admin/host_integration/) you can set a container restart policy, which handles a lot of the simple cases where previously one might have used supervisor.
Baking restart behavior into the container (which was considered standard practice before restart policies existed) can be convenient, but it has the downside of being less flexible in how your container behaves across different environments.
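For reference, the flag looks something like this (the image name is a placeholder):

```
# Restart the container automatically if it exits with a non-zero
# status, retrying at most 5 times; --restart=always is the other
# common policy.
docker run --restart=on-failure:5 my-app-image
```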
But that "links break when containers restart or are replaced" aspect was a deal-breaker for me when I started using Docker near production. I just use --net=host, so I don't use Docker's network-related functionality at all. For my purposes, server-level firewall settings are fine.
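Concretely, that means running something like this (the image name is hypothetical):

```
# Share the host's network stack with the container: the app binds
# directly to host ports, so container links and -p port mappings are
# no longer involved.
docker run --net=host my-app-image
```

The trade-off is that you give up network isolation between containers, which is why host-level firewall rules have to pick up the slack.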