Clearly, Swarm is the lightweight "cloud" thing (yay!), but if I'm deploying to Swarm, do I use Machine or Compose? Etc.
Possibly the glut of nouns could be simplified by a single command-line tool with subcommands that were less skeuomorphic and instead named after what they do.
Just my two cents, but the documentation needs to explain a bit more of the basics at a high level before going into the weeds, IMHO.
The other gotcha is the "if I'm on Amazon" question: when would I want to use Swarm, if at all, given that I have ECS, and how much of this remains relevant? I'm guessing not much still applies, but perhaps I'm wrong.
Another good article would cover which parts of these new tools remain relevant if I'm using, say, CoreOS (fleet), Kubernetes, Mesos, or OpenStack with Docker. I'm not saying the others are better; it's just difficult to visualize how they interplay.
[not affiliated with tutum, just a happy user]
In a saturated market, that might be how I've ended up with everything on DigitalOcean.
All three actually seem to work quite nicely together.
I'd benefit hugely by having a list of the CLI commands, without the text filler, showing a few use cases end to end.
I have a hard time keeping a whole video in my brain all at once. Just seeing the bash history alone, with some light text commentary, would work far better for me than the video. I know different people learn differently. Show all three tools delivering a "pet store"-type stack, maybe. Tell me, as quickly as possible, when I want Compose and when I can skip it.
I know these are things you can figure out in several hours/days by trying them all, but for those of us that don't have time, it needs to be presented at a higher level showing how they connect, rather than the marketing speak "it works great together" and then pages that contradict that by saying the integration is minimal.
The current result is everyone's Docker topology is VERY different. I know it's not proscriptive, but it's a bit of a mess. I've encountered a lot of people doing lightweight cloud with Docker (basically no-cloud) with Ansible and manual instance management, for instance, and there SHOULD be better options - but I think people go that route because they are confused about the complexity of options they have.
Back down to the technical bits: I see the line "we're working with AWS to make Swarm usable on ECS" as confusing and a bit marketing-speak (as to the reason for the integration). Why do I need Swarm on ECS? Make the case. These already appear to be container orchestrators. What does Swarm add on top of these other systems? Is Compose trying to be vApp or not? (vApp itself is not widely deployed.)
So a matrix of what ECS can do versus Swarm, etc, would also help.
Ultimately, from an outside perspective, it looks like other parties are doing Docker management a lot better than Docker itself, and these are moves to catch up; but it's unclear where they fit, and whether they are needed in those contexts.
Note: I wrote Ansible, if it's not clear to me, it's a problem -- and this is a common perception around a lot of folks I talk to :)
Consider presenting things for different learning styles if you can, especially as the technology doesn't really fit into traditional management patterns - I'm genuinely interested, and probably not the only one having this problem.
Again, genuinely interested in wanting to keep up.
Btw, Ansible is the best automation tool I have seen so far; I just ported our entire product line to use it, but that's off-topic in this thread. Just saying, and thank you!!
There is super high demand for this type of content. I created a couple of screencasts about containers and Docker, and still receive a few requests a week for more. It works wonders when you can actually show people how it works, show them how it will save time, show them how it will improve their workflow.
Screencast + transcript + code has worked really well for me. Here are some examples:
I'd prefer a non-screencast though, so it would be easier to go at your own pace, and jump back and review things easily. Maybe it's content WITH a screencast, but I'm finding the whole trend of tech-videos to be waaaay too time consuming compared to good reference material (things that are searchable and skimmable). Different people may have different levels of time to get immersed in it though.
All being said, I loved the docker "simulator" CLI. This is a good thing.
Aside: I don't really love the Docker CLI in general, because of the whole SHA-generating, edit-an-existing-VM type of use case, though there are good parts. In prod you can skip most of that, since you're building from Dockerfiles and it gets a lot simpler; but the CLI almost seems to tell you to build images interactively, when Dockerfiles are a better way that skips those parts. It feels much better to me in a workflow where it's used as an image build system.
But using the CLI in the way that generates new SHAs and uploads them is weird, and (maybe this has changed) being able to accidentally upload your image to the public so easily is sketchy :) That's probably changed since I last looked at it. Another reason for more docs on workflow, showing how to put it all together into a production environment.
So maybe don't be proscriptive, but show people who are deploying from scratch, including those that have never even used AWS, how to do that, and what the (I shudder to say the word) "best practice" workflow for using Dockerfiles and Docker Hub might be, with all of these components used in concert.
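As a sketch of that non-interactive workflow (file contents, package choices, and image names here are purely hypothetical, not from any official guide):

```dockerfile
# Dockerfile - describes the image declaratively, no interactive commits
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Then `docker build -t myuser/myapp .` produces the image reproducibly, and `docker push myuser/myapp` publishes it (to a private repository, if you want to avoid the accidental-public-upload problem).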
I think you nailed it.
ps. Haha, I actually just did a screencast series on Ansible! Didn't know you were on HN. Thank you -- it's been really fun to work with ;) The Ansible docs are some of the best I have ever seen, and I commented on that heavily; the clear examples make it a real joy.
OT: Proscriptive is a word and will pass spell check but it means the opposite of what you want. Prescriptive is the one you want. (Same with prescribe, proscribe)
One of the things I really like about the Ansible documentation is that there are quite a few different, non-trivial examples for each command, often covering gotchas like wildcards or escaping. Thank you for that.
If anyone here is curious to get a quick summary, the general gist of the three orchestration tools goes like this:
1.) Compose (née Fig) emerged out of a need for a standard way to specify the run-time properties of a related group of containers. If you've played with Docker, you may have noticed that the flags on run commands (and controlling the order of things like linked containers) spiral out of control fast, so Compose lets you write them all down in a file and manipulate your container groups with a little "shorthand".
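For instance, a minimal docker-compose.yml in the format of that era might look like this (the service names, build context, and ports are made up for illustration):

```yaml
# Two linked services: a web app and the Redis it depends on
web:
  build: .
  ports:
    - "5000:5000"
  links:
    - redis
redis:
  image: redis
```

With that in place, `docker-compose up -d` replaces a pair of carefully ordered `docker run` invocations and all their flags.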
2.) Machine is a tool for creating and managing hosts which run the Docker daemon (the bit which does the actual heavy lifting with containers). When they are created, you can run Docker commands against them remotely using a local Docker client binary! I got really interested in this project for two reasons.
One is that boot2docker-cli (a small Go program that helps bootstrap and manage a VirtualBox VM with Docker installed, mostly for OS X and Windows users) was a pretty nice little tool, but I wanted to be able to have multiple VMs, not just one! Machine is great at this.
Likewise, Docker Machine now does the same type of trick for cloud VMs as well - so if I just want to kick up a Digital Ocean server for a few hours, run some containers, and then remove it, I can easily do so without leaving the command line. The interface for other cloud providers to do this is pretty similar too, so the workflow of using Docker across all supported clouds is the same.
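The whole round trip looks roughly like this (a sketch, not a guaranteed recipe: the access-token variable and machine name are placeholders, and these commands need real cloud credentials and a docker-machine install to actually run):

```shell
# Create a DigitalOcean droplet with the Docker daemon installed
docker-machine create --driver digitalocean \
  --digitalocean-access-token=$DO_TOKEN dev-box

# Point the local Docker client at the remote daemon
eval "$(docker-machine env dev-box)"

# Containers now run on the droplet, driven from your laptop
docker run -d nginx

# Destroy the droplet when you're done paying for it
docker-machine rm dev-box
```

Swap the driver name and its flags to target another provider; the rest of the workflow stays identical.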
I think it serves dev/test a lot better right now than prod, but like I said, the project is still feeling things out.
3.) It's easier to think of Swarm in terms of the end game IMO - and the end game is a Docker API endpoint (just one!) which works across an arbitrary number of hosts. Therefore, the user doesn't have to reason about multiple hosts on the backend - they just treat it like they're running containers on one big computer. If there aren't enough resources to run all the containers that you want to, you could just kick up a few more hosts, add them to the Swarm, and Swarm will fill them in as it goes (kind of like a load balancer can do for requests). It can do cool things like set anti-affinity on containers so that they never end up on the same host together, or never end up on a Red Hat host even if one is in your cluster, or pack them in as tightly as possible on each host.
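Concretely, the classic Swarm of this era expressed those placement rules as environment variables on a plain `docker run` (a sketch; the image and container names are arbitrary, and this needs a running Swarm endpoint):

```shell
# Constraint filter: never schedule onto a Red Hat host
docker run -d -e constraint:operatingsystem!=redhat nginx

# Anti-affinity: keep this container off whatever host runs "frontend"
docker run -d --name frontend nginx
docker run -d -e affinity:container!=frontend nginx
```

The nice part is that these are ordinary Docker commands; Swarm intercepts the hints and does the scheduling.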
I don't think the orchestration tools address all needs directly (and IMO they shouldn't): logging, monitoring, and HTTP load balancing are good examples of things which don't really fall into their province.
I'd prefer to see the direction that they evolve in be one where they integrate really well with other tools and/or provide standard interfaces to platform-specific tech. For instance, it would be awesome for docker-machine to be able to import instances created with Ansible, Terraform, etc. and function as kind of a "REPL" to their "compiler" by allowing users to introspect the state of their running containers. Or for Swarm to act as a front-end for ECS. Speaking of, to answer your question:
> Why do I need Swarm on ECS?
Lots of tools "speak Docker API", so being able to point them at a Docker API and not care that it is ECS on the backend could be really useful. Additionally, ideally it would free you from being tied to the Amazon API or tools if you either don't want to learn them (since that knowledge is only useful for doing AWS work) or if you decide down the line that using, say, Mesos on baremetal as a back-end would suit your needs better instead.
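That "speaks Docker API" property is literal: the standard client just needs DOCKER_HOST pointed at the Swarm manager instead of a local daemon (the address here is a placeholder; 3376 was a conventional Swarm port, not a requirement):

```shell
# Target the Swarm manager; every subsequent docker command is cluster-wide
export DOCKER_HOST=tcp://swarm-manager.example.com:3376

docker ps         # lists containers across all hosts in the cluster
docker run -d redis   # Swarm's scheduler picks a host for you
```

Any third-party tool that honors DOCKER_HOST gets the same multi-host view for free.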
Anyway, hope that all helps :) I could go all day about this stuff.
With these updates I will have to update my own howto on how to use all 3 tools (4 with the 'Engine') http://flurdy.com/docs/docker/docker_compose_machine_swarm_c...
I should probably write a tiny tool that wraps these tools and my common usage into a simpler shell script, maybe dockgod or something. I already got too many aliases in my dotfiles...
Take a look at this issue: https://github.com/docker/compose/issues/495
Back in January I went through the effort of outlining a solution that was approved by the maintainers before implementing it. After I provided a PR, I responded to revision requests by the maintainers, and still haven't seen this change go into the project.
It's a simple change. If this feature isn't the architectural direction Docker wants, they need to close the issue and reject the pull request, instead of changing the project over and over again so that I have to maintain a PR that's over three months old.
Only happens in us-east-1 due to it having eventual consistency (whereas all other S3 regions have read-after-write consistency). Use another region and the false 404s will go away.
Docker has a bad security reputation; this is one more step in the right direction.
Other people were attempting to solve this problem too - https://registry.hub.docker.com/u/kiyoto/docker-fluentd/
Except for ulimit, there is literally NO security improvement in this release.
Logging merely gives you visibility - right, you get hacked, but at least it's not painful to dig for information. I would really prefer not to get hacked in the first place, given the existing insecurities...
Yes, outright prevention is important. Yet proper centralized log collection and intelligence helps with all three missions, including prevention.
Proper logging allows you to identify known-good behavior patterns and outlying anomalies. With profiles in place, one can automate blocking of reconnaissance and probes, not just blocking known vulnerabilities.
- Docker registry, the old pre-2.0 version: I hate it. It's incredibly slow, and it raises lots of errors.
- Docker 1.5: Mostly stable and usable if you're on the right kernel, occasionally does something weird.
- docker-machine (from git): Very nice for provisioning basic docker hosts locally or on AWS. Nice feature: it's capable of automatically regenerating TLS certificates when Elastic IP addresses get remapped.
- docker-compose 1.1.0: Kind of a toy, but a fun toy, and it generally did what it advertised.
- docker swarm: With docker-compose 1.1.0 and docker 1.5, it was pretty much unusable. Simply running "docker-compose up -d" twice in a row was enough to make it fail with random errors.
I'm going to re-evaluate swarm with docker 1.6 and the new docker-compose.
My workflow is pretty much a Samba share to my account directory on an Ubuntu server, and a couple of SSH shells on said server... easy enough to edit/run that way. (Though my VM image started crashing, so I'm now just remoting to an actual hardware server.)
As the person who wrote the PR for its solution and have been waiting for it to get merged for months, this is super, super frustrating.
If you guys don't want to put the logic in, reject the PR and close the issue as won't fix. Quit stringing the community along.
Get it here:
Ambassador containers, I guess? I dunno. There's just no easy answer.
Here's some use-cases for each (and in one case both!):
Did you mean to write something else? I would love to know where you got that idea.
Note: I work for Weaveworks.
> Weave seems super neat
no, thank you!
If you're looking for a minimal windows host for VMs and containers, that's "nanoserver" in the works:
Run Docker with ECS, or with Machine on EC2, or on plain native Linux VMs. Adding another abstraction layer seems pointless - unless it is for Azure.