Docker is a tool to run processes with some isolation and (this is the big selling point) nicely packaged with "all" their dependencies as images.
To understand "all" their dependencies, think C dependencies for e.g. a Python or Ruby app. Those aren't the kind of dependencies that e.g. virtualenv can handle properly. Think also of assets, or configuration files.
So instead of running `./app.py` freshly downloaded from some Git <repo>, you would run `docker run <repo> ./app.py`. In the former case, you would need to take care of, say, the C dependencies. In the latter case, they are packaged in the image that Docker will download from <repo> prior to running the ./app.py process in it. (Note that the two <repo> are not the same thing: one is a Git repo, the other is a Docker repository.)
So really, at this point, that's what Docker is about: running processes. Docker also offers a fairly rich API around running those processes: share volumes (directories) between containers (i.e. running images), forward ports from the host to the container, display logs, and so on.
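For example (a rough sketch; the image, paths and ports are made up, only the Docker flags are real):

    # run a process from an image, publishing a port and sharing a host directory
    docker run -d -p 8080:5000 -v /srv/data:/data <repo> ./app.py
    # inspect its output later
    docker logs <container-id>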
But that's it: Docker, as of now, remains at the process level. While it provides options to orchestrate multiple containers to create a single "app", it doesn't address the management of such a group of containers as a single entity.
And that's where tools such as Fig come in: talking about a group of containers as a single entity. Think "run an app" (i.e. "run an orchestrated cluster of containers") instead of "run a container".
Now I think that Fig falls short of that goal (I haven't played with it, that's just from a glance at its documentation). Abstracting over the command-line arguments of Docker by wrapping them in a YAML file is the easy part (i.e. launching a few containers). The hard part is managing the cluster the way Docker manages containers: display aggregated logs, replace a particular container with a new version, move a container to a different host (and thus abstract the networking between hosts), and so on.
This is not a negative critique of Fig. Many people are working on that problem; for instance, I currently solve that very problem with ad hoc bash scripts. In doing so we are all just exploring the design space.
I believe that Docker itself will provide that next level in the future; it is just that people need the features quickly.
Docker -> processes
Fig (and certainly Docker in the future) -> clusters (or formations) of processes
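To make the distinction concrete, here is a hedged sketch of the "run an app" level with Fig, assuming a fig.yml that declares, say, a "web" and a "db" service (the service names are made up):

    fig up -d     # start the whole group of containers in one command
    fig logs      # aggregated logs from every container in the group
    fig stop      # stop the whole group
    # versus one docker run / docker logs / docker stop per container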
The question is whether it's close enough to production to be useful; any testing environment simulates some things well and others poorly. Load testing would be right out, I'd presume, but it might be useful for testing some machine failures.
That's been built into [Open]Solaris for years. You can define the network topology too.
FreeBSD's Capsicum also looks promising at the OS API level, as a similar initiative to write code with minimal privileges, but AFAICT you can't use it on the command line to run unmodified code with restricted privileges, at least not yet.
To expand on the "dependencies" idea of my previous post, although you technically can put a process supervisor, a web server, an application server, and a database in the same container, this is not the best practice. It makes your app simpler to distribute (a single image, no orchestration) but harder to evolve (e.g. move the database to its own physical server, or replicate it and put them behind a connection pooler).
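As a rough illustration (the image names are placeholders), the split version looks something like this, and the database container is the one you could later move or replicate independently:

    # the database in its own container...
    docker run -d --name db postgres
    # ...and the app server in another container, linked to it
    docker run -d -p 80:8000 --link db:db myorg/appserver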
For instance, if you have a tool to manage a cluster of containers, you will be able to manage the different processes'/containers' logs in a repeatable way.
But sure, if you know you don't need the added flexibility, you can put everything you want in the same image.
Seems like the "docker way" is the one-process image. But one use case that I find entertaining is to use Docker as a super simple way of trying out software. For example, I ran Wordpress for 10 minutes just to check it out. In that case it makes sense to have everything in one container as it makes it much easier to run. But in production it might not be a good idea, especially if the app is not totally self-contained.
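Something like this, roughly (the image name is purely illustrative; any all-in-one image bundling the web server and the database would do):

    docker run -d -p 8080:80 someuser/wordpress-all-in-one
    # browse to http://localhost:8080, poke around for 10 minutes...
    docker stop <container-id> && docker rm <container-id>
    # ...and it's gone, with nothing installed on the host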
What this needs is the ability to be pointed at a working VM - let's say, Ubuntu 13.10 server - and then just figure out what's different about it, compared to the distro release.
Something like the blueprint tool, in fact.
Disclaimer: I work at Docker and previously worked at Puppet Labs.
I'm a docker newbie, though.
I too am just learning this stuff, but that should hopefully help you out!
It sounds like you want Blueprint (https://github.com/devstructure/blueprint). But careful what you ask for... I found this to not actually be a very useful approach in practice.
Could you expand on this, please? I'm curious to know what the problems were (just so I know what I'm letting myself in for)
It turns out that installing and configuring services on a server touches many files and only some of them are important. Even the basic assumption that Ubuntu is the same everywhere wasn't quite right. Linode has some of its own packages installed and I think they tweaked the kernel. Running it in VMWare, you probably have the guest additions installed, etc. These things aren't important, but Blueprint doesn't know that. So I ended up with this massive number of changed files and the tooling for filtering through them to get just the important bits wasn't so hot (or at least it wasn't a year ago).
- System configuration is managed for you using Docker (you don't need to figure out how to hook up Puppet/Chef/shell scripts)
- You can aggregate log output from all of your containers
- You can model your application as a collection of services - starting, stopping, scaling them etc
- You can ship exactly the same Docker image you use in development to production
More importantly, all this stuff works out of the box. Some of these things are possible with Vagrant, but you need to learn and piece together other tools (Puppet, Foreman, etc.) to get it all working.
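A hedged sketch of the day-to-day workflow, assuming a fig.yml with a "web" service (the service name is made up):

    fig up -d            # start every service fig.yml describes
    fig logs             # one aggregated log stream for all of them
    fig scale web=3      # run three containers for the "web" service
    fig stop             # stop the whole application

Shipping the same image to production is then just a matter of pushing the image the "web" service was built from to a registry and pulling it on the production host.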
Docker isn't supposed to replace Vagrant, it's supposed to supplement it, at least with development machines.
So I put those files in VCS so the next guy could just clone the repo, run make devel and get the app running, ready to code on.
So unless you want to use Docker at deployment, don't split the app into multiple containers - you get more moving parts to integrate and no gain. Instead, use supervisord and run all processes in a single container.
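A minimal sketch of that setup (the program names and paths are invented; only the supervisord config keys and flags are real):

    cat > supervisord.conf <<'EOF'
    [supervisord]
    ; keep supervisord in the foreground, as Docker requires
    nodaemon=true

    [program:web]
    command=python /srv/app/app.py

    [program:worker]
    command=python /srv/app/worker.py
    EOF
    # make supervisord the single process the container runs
    docker run -d -v $(pwd)/supervisord.conf:/etc/supervisord.conf myimage \
        supervisord -c /etc/supervisord.conf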
There are a few hacky parts (how to inject SSH keys into the container) but so far it's really cool.
I too wrote a few wrapper scripts around lxc-attach so I can run ./build/container/command.sh tail -n 20 -f /path/to/log/somewhere/on/container
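Mine looks roughly like this (a sketch only: it assumes Docker's LXC backend, where the LXC name is the full container ID, and the exact docker inspect flag and field spelling has shifted between Docker versions):

    #!/bin/sh
    # usage: ./in-container.sh <container> <command> [args...]
    CONTAINER="$1"; shift
    # resolve the short name/ID to the full ID that LXC knows about
    FULL_ID=$(docker inspect --format '{{.Id}}' "$CONTAINER")
    exec sudo lxc-attach -n "$FULL_ID" -- "$@"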
I can't share any code but I'm happy to answer questions at [HNusername]@gmail.com
I must say that this is great. I've been advocating this sort of usage of Docker for a while as most still think of Docker or containers as individual units. I'm happy to see others adopting the viewpoint of using container groups.
However, it is something I do hope to eventually see supported within Docker itself.
Also, I've recently been telling people that since October you've been able to do this exact same thing using OpenStack Heat. Using Heat and Docker is similar to Fig (the configuration syntax is quite similar, even), but it requires the heavyweight Heat service and an OpenStack cloud. That means that for most people it isn't even an option. It's great that Fig now provides a solid lightweight alternative.
As 'thu' has said already, people want and need these features quickly and I expect in the next year we'll see serious interest growing around using these solutions and solving these problems.
Services can also be controlled as if they were whole units – you can say "start my database" instead of "start this image with these ports, these volumes, etc".
I'm not sure fig totally solves this, but I'm pretty sure it's in the ballpark.
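In other words, roughly this (the port, volume and image are illustrative):

    # the raw docker way: repeat every flag each time
    docker run -d --name db -p 5432:5432 -v /srv/pgdata:/var/lib/postgresql/data postgres
    # the fig way: fig.yml remembers the flags, you just name the service
    fig up -d db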
The idea is to make sure your app runs in an environment that's actually a fully functioning, valid operating system. This means that besides your app's process there are also the supporting processes of the operating system (Ubuntu in this case).
This enables, for example, in-container build scripts and monitoring.
I haven't actually deployed anything with Docker yet, just used it on my local machine (so I could run neo4j in isolation). It's a slightly different way of thinking from how we've traditionally managed these things, so it's going to take a little time to find the best patterns to work with them. I'm sure some people have already figured it out - I just haven't had the time to dedicate to it yet.
For example, EC2 disabled some of their extended instruction sets to ensure uniformity, but I am not sure how long this will last. Then we will have to deal with Docker deployment problems.
I propose we dig deep into our Gentoo roots and build the dependencies on demand.
One application I thought of is deploying to clients. You just get them to use the instance and there's zero configuration needed. But then, what if you need to make updates to the code base? How do you push the code changes to all the deployed fig/docker instances that are already running?