I'm not involved with this project, but there is some confusion in this thread, so maybe I can share my point of view:
Docker is a tool for running processes with some isolation and (that's the big selling point) nicely packaged with "all" their dependencies as images.
To understand "all" their dependencies, think of the C dependencies of, say, a Python or Ruby app: that's not the kind of dependency a tool like virtualenv can handle properly. Think also of assets, or configuration files.
So instead of running `./app.py` freshly downloaded from some Git <repo>, you would run `docker run <repo> ./app.py`. In the former case, you would have to take care of, say, the C dependencies yourself. In the latter case, they are packaged in the image that Docker downloads from <repo> before running the ./app.py process in it. (Note that the two <repo> are not the same thing: one is a Git repo, the other is a Docker repo.)
So really, at this point, that's what Docker is about: running processes. Now, Docker offers a fairly rich API around those processes: sharing volumes (directories) between containers (i.e. running images), forwarding ports from the host to a container, displaying logs, and so on.
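For a rough idea, here's a sketch of that per-container API (the image name, paths and ports are made up for the example):

```
# run a hypothetical image in the background, with a shared
# directory and a port forwarded from the host
docker run -d --name web -p 8080:80 -v /srv/data:/data example/webapp ./app.py

docker logs web   # display the process's logs
docker stop web   # stop the process
```

Note that everything there operates on one container at a time.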
But that's it: Docker, as of now, remains at the process level. While it provides options to orchestrate multiple containers to create a single "app", it doesn't address the management of such a group of containers as a single entity.
And that's where tools such as Fig come in: talking about a group of containers as a single entity. Think "run an app" (i.e. "run an orchestrated cluster of containers") instead of "run a container".
Now, I think that Fig falls short of that goal (I haven't played with it; that's just from a glance at its documentation). Abstracting over Docker's command-line arguments by wrapping them in a YAML file is the easy part (i.e. launching a few containers). The hard part is managing the cluster the way Docker manages containers: displaying aggregated logs, replacing a particular container with a new version, moving a container to a different host (and thus abstracting the networking between hosts), and so on.
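For illustration, a minimal configuration in the spirit of the examples in Fig's documentation (service names and values are made up; this is a sketch from a quick read of the docs, not a tested file):

```
web:
  build: .
  command: python app.py
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres
```

A single `fig up` then starts both containers, which is exactly the easy, "launching a few containers" part; none of the harder cluster-management features above come with it.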
This is not meant as a negative critique of Fig: many people are working on that problem (I solve that very problem myself with ad-hoc Bash scripts, for instance), and in doing so we are all just exploring the design space.
I believe that Docker itself will provide that next level in the future; it's just that people need these features sooner than that.
tl;dr:
Docker -> processes
Fig (and certainly Docker in the future) -> clusters (or formations) of processes
So, simulate a cluster of machines instead of a single machine? Seems like a good thing. "Tiny datacenter in a box."
The question is whether it's close enough to production to be useful; any testing environment simulates some things well and others poorly. Load testing would be right out, I'd presume, but it might be useful for simulating some machine failures.
I've only recently been playing around with Solaris derivatives, and I'm pretty impressed by how far along they are with some of this stuff. My recent favorite is discovering 'ppriv', which lets you drop privileges on a process-by-process basis without even starting up a new container/zone to encapsulate them. E.g. you can run a process with no network access, no ability to fork, or no ability to read/write files (or all of the above). Super handy for running untrusted code as a stdin->stdout filter without worrying about it causing other mischief, and without having to encapsulate it in a zone/jail/container just to run one process.
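For example, something along these lines (the filter path is illustrative; privilege names are as documented in Solaris's privileges(5)):

```
# run an untrusted filter with the basic privilege set minus
# fork and exec; other privileges can be subtracted the same way
ppriv -e -s A=basic,!proc_fork,!proc_exec ./untrusted-filter < input > output
```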
FreeBSD's 'capsicum' [1] also looks promising at the OS API level as a similar initiative to write code with minimal privileges, but afaict you can't use it on the command line to run unmodified code with restricted privileges, at least not yet.
Writing a command-line wrapper for capsicum should be relatively simple; designing the interface might need some work. I think the idea has mainly been for code to sandbox itself, but I can see the use case.
Yeah, for the base system that approach makes sense to me (build privilege-dropping into the code), but sometimes I just want to sandbox an existing binary. One recent example where it's come up is a student AI competition, where submissions aren't supposed to do anything but read/write stdin/stdout, and it'd be nice to enforce that externally by just lowering the process's privileges.
Yes, you can, but I don't think that's the philosophy behind Docker.
To expand on the "dependencies" idea of my previous post: although you technically can put a process supervisor, a web server, an application server, and a database in the same container, it is not the best practice. It makes your app simpler to distribute (a single image, no orchestration) but harder to evolve (e.g. moving the database to its own physical server, or replicating it and putting the replicas behind a connection pooler).
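To make that concrete, here's a sketch of the split version (image and container names are hypothetical):

```
# database in its own container...
docker run -d --name db postgres
# ...and the app in another, linked to it
docker run -d --name app --link db:db example/app
```

Moving the database to another machine later then means changing how `app` finds `db`, not rebuilding a monolithic image.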
For instance, if you have a tool to manage a cluster of containers, you will be able to manage the logs of the different processes/containers in a repeatable way.
But sure, if you know you don't need the added flexibility, you can put everything you want in the same image.
There are several use cases for Docker, and they call for different kinds of images and containers. Right now there's no easy way to distinguish an all-in-one image from a one-process image.
Seems like the "Docker way" is the one-process image. But one use case that I find entertaining is using Docker as a super simple way of trying out software. For example, I ran WordPress for 10 minutes just to check it out. In that case it makes sense to have everything in one container, as it makes it much easier to run. But in production it might not be a good idea, especially if the app is not totally self-contained.
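As a sketch of that throwaway workflow (the all-in-one image name is hypothetical):

```
# start an all-in-one container and poke at it on port 8080
docker run -d --name wp -p 8080:80 example/wordpress-all-in-one
# ...ten minutes later, throw it away
docker rm -f wp
```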