Docker 1.6: Engine and Orchestration Updates, Registry 2.0, and Windows Client (docker.com)
258 points by bfirsh on Apr 16, 2015 | 62 comments



I'm finding that the distinction between Swarm, Compose, and Machine is not immediately obvious. Docker would probably benefit from some "you should use this when..." guidance distinguishing them.

Clearly, Swarm is the lightweight "cloud" thing (yay!), but if I'm deploying to Swarm, do I use Machine or Compose? Etc.

Possibly too many nouns; things would be made simpler by a single command line tool with subcommands that were less skeuomorphic and instead named after what they do.

Just my two cents, but the documentation needs to explain a bit more of the basics at a high level before going into the weeds, IMHO.

The other gotcha page is "if I'm on Amazon": when would I want to use Swarm, if at all, when I have ECS, and how much remains relevant? I'm guessing not much still applies, but perhaps I'm wrong.

Another good article would be what parts of these new tools remain relevant if I'm using, say, CoreOS (fleet), Kubernetes, or Mesos, or OpenStack with Docker. I'm not saying the others are better, it's just difficult to visualize how they interplay.


Maybe they should adopt the same content marketing strategy as DigitalOcean: pay $100 per article for "how to" tutorials targeting specific use cases.


I can't tell if you are snarking or not, but I find DigitalOcean's support for documentation to be helpful and haven't yet really noticed a quality problem.


It's a real content marketing campaign DO has been doing for years and it's made their website slightly more popular than Hacker News according to Alexa. Lots of startups could learn from this.

https://www.digitalocean.com/community/get-paid-to-write


Add an 'e' to the end, and it makes a lot more sense! https://www.digitalocean.com/community/get-paid-to-write


Thanks!


No, I'm being serious; I agree with you.


DigitalOcean has a tutorial series on Docker[0] with a discussion on the components[1].

[0]: https://www.digitalocean.com/community/tutorial_series/the-d...

[1]: https://www.digitalocean.com/community/tutorials/the-docker-...


Tutum (a docker platform) offers this: https://www.tutum.co/writing/

[not affiliated with tutum, just a happy user]


I find a lot of value in those articles and have followed enough of them!

In a saturated market, it might be how I've ended up with everything on DigitalOcean.


> if I'm deploying to Swarm, do I use Machine or Compose?

All three actually seem to work quite nicely together[0]

[0]: https://blog.docker.com/2015/02/orchestrating-docker-with-ma...


Yeah, I read that back in the day, and I'm still finding it hard to follow. I'm mostly trying to pass on some marketing advice here. Leaving it to the reader to assimilate a whole video into their brain -- when some people (particularly visual learners) have problems with a wall of text -- is probably not a good thing. I work best off examples and pictures.

I'd benefit hugely from having a list of the CLI commands, without the text filler, showing a few use cases end to end. I have a hard time keeping a whole video in my brain at once. Just seeing the bash history alone, with some light text commentary, would work better for me than the video -- by a lot. I know different people learn differently. Show all three tools delivering a "pet store" type stack, maybe. Tell me when I want Compose, and when I can skip it, as quickly as possible.

I know these are things you can figure out in several hours/days by trying them all, but for those of us that don't have time, it needs to be presented at a higher level showing how they connect, rather than the marketing speak "it works great together" and then pages that contradict that by saying the integration is minimal.

The current result is everyone's Docker topology is VERY different. I know it's not proscriptive, but it's a bit of a mess. I've encountered a lot of people doing lightweight cloud with Docker (basically no-cloud) with Ansible and manual instance management, for instance, and there SHOULD be better options - but I think people go that route because they are confused about the complexity of options they have.

Back down to technical bits, I see the line "we're working with AWS to make Swarm usable on ECS", etc, as confusing and a bit market-y (as to the reason for the integration). Why do I need Swarm on ECS? Make the case: these already appear to be container orchestrators. What does Swarm add on top of these other systems? Is Compose trying to be vApp or not? (vApp itself not being widely deployed.)

So a matrix of what ECS can do versus Swarm, etc, would also help.

Ultimately, from an outside perspective, it looks like other parties are doing Docker management a lot better than Docker itself, and these are moves to catch up; but it's unclear where the new tools fit, and whether they are needed in those contexts.

Note: I wrote Ansible, if it's not clear to me, it's a problem -- and this is a common perception around a lot of folks I talk to :)

Consider presenting things for different learning styles if you can, especially as the technology doesn't really fit into traditional management patterns - I'm genuinely interested, and probably not the only one having this problem.

Again, genuinely interested in wanting to keep up.


Could not agree more with you on this subject. It took me almost a week to understand what is what in the Docker world. We had to trash the original documentation and write our own from scratch for our team, because nobody really understood the original docs. I also had to go a few extra miles on the docker registry, because it is not trivial to set up either, especially in a corporate environment.

Most of the use cases cover only scenarios where you have an absolutely greenfield project and there is nothing there yet (no AAA framework, etc.). This makes it hard for newcomers to integrate with Docker, and you pretty much need a Docker gal or pal around just for the sake of keeping the users of the Docker infra happy. Some of the developers hated Docker because of the quality of the documentation and a lot of counterintuitive workflows, so I had to go an extra mile for those and massage them into accepting it.

Btw, Ansible is the best automation tool I have seen so far; I just ported our entire product line to use it, but that's off-topic in this thread. Just saying, and thank you!!


Would you consider sharing your rewritten documentation?


I wish I could. I might write it again as a public blog post or something and let you know.


What I think needs to happen is this: they need to define some example problems, then work through how you would solve them with Docker. A high-level overview with diagrams, workflow diagrams, tools, commands, supporting docs. When you have AWS, Google, DigitalOcean, etc. all supporting Docker, there need to be tons of docs, with extremely clear use cases and implementation guides. Unfortunately, this takes time... I think they are starting to do something like that with a few screencasts I have seen. They almost need a weekly screencast or something ;)

There is super high demand for this type of content. I created a couple of screencasts about containers and docker, and still receive a few requests a week for more. It works wonders when you can actually show people how it works, show them how it will save time, show them how it will improve their workflow.

Screencast + transcript + code has worked really well for me. Here are some examples:

Docker: https://sysadmincasts.com/episodes/31-introduction-to-docker

Vagrant: https://sysadmincasts.com/episodes/42-crash-course-on-vagran...

Ansible: https://sysadmincasts.com/episodes/43-19-minutes-with-ansibl...


That with posted solutions might be good.

I'd prefer a non-screencast though, so it would be easier to go at your own pace, and jump back and review things easily. Maybe it's content WITH a screencast, but I'm finding the whole trend of tech-videos to be waaaay too time consuming compared to good reference material (things that are searchable and skimmable). Different people may have different levels of time to get immersed in it though.

All being said, I loved the docker "simulator" CLI. This is a good thing.

Aside: I don't really love the docker CLI in general, because of the whole new-SHA, edit-an-existing-VM type of use case, but there are good parts. In prod you can skip most of that, as you're building from Dockerfiles and it gets a lot simpler; but the CLI almost seems to tell you to build images interactively, when Dockerfiles are a better way that skips those parts. It really feels better to me in a workflow where it's used as an image build system.
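To make the contrast concrete (image and repo names below are placeholders; this is just a sketch):

    # interactive: mutate a container, then snapshot it into a brand new image SHA
    docker run -it ubuntu bash          # apt-get install things, tweak configs, exit...
    docker commit <container-id> me/myimage
    docker push me/myimage              # one typo away from a public repo

    # declarative: the same image, reproducibly, from a Dockerfile
    docker build -t me/myimage .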

But using the CLI in the way that generates new SHAs and uploads is weird, and (maybe this has changed) being able to accidentally upload your image to the public so easily is sketchy :) That's probably changed since I've looked at it. Another reason for more docs on workflow and showing how to put it all together into a production env.

So maybe don't be proscriptive, but show people who are deploying from scratch, and maybe those that have never even used AWS, how to do that, and what the (I shudder to say the word) "best practice" workflow for using Dockerfiles and Dockerhub might be, with all of these components used in concert.


> So maybe don't be proscriptive, but show people who are deploying from scratch, and maybe those that have never even used AWS, how to do that, and what the (I shudder to say the word) "best practice" workflow for using Dockerfiles and Dockerhub might be, with all of these components used in concert.

I think you nailed it.

ps. Haha, I actually just did a screencast series on Ansible! Didn't know you were on HN. Thank you -- it's been really fun to work with ;) The Ansible docs are some of the best I have ever seen, and I commented on that heavily; it makes it a real joy to have clear examples.


> proscriptive

OT: Proscriptive is a word and will pass spell check, but it means the opposite of what you want. Prescriptive is the one you want. (Same with prescribe/proscribe.)


What's so wrong with the "best practices" phrase? It's simply a general phrase to describe the way that most of the community is doing things in a way that fully realizes the potential of a given product.


In a shameless self-plug: my book walks through setting up the Docker development workflow, showing how to develop, test, and deploy with containers. You can get it on early release now, but the current version doesn't have the deployment chapter yet. http://shop.oreilly.com/product/0636920035671.do


> Note: I wrote Ansible, if it's not clear to me, it's a problem --

One of the things I really like about the Ansible documentation is that there are quite a few different, non-trivial examples for each command, often covering gotchas like wildcards or escaping. Thank you for that.


Thanks!


Hi, thanks for this. It can be really hard to get a feel for how other people are thinking about things when one is really zoomed in on all of the little details. I'd love to start a conversation - nathan@docker.com if you're interested. In my opinion the tools are still feeling out where their niches and boundaries are, so that may contribute to the confusion. Networking is a big missing piece here, for instance, and I'm excited for the work that people like Socketplane (now part of Docker) and Weave are doing to help in this area.

If anyone here is curious to get a quick summary, the general gist of the three orchestration tools goes like this:

1.) Compose (née Fig) emerged out of a need for a standard way to specify the run-time properties of a related group of containers. If you've played with Docker, you may have noticed that flags on run commands (and controlling the order of things like linked containers) spiral out of control fast, so Compose lets you write them all down in a file and manipulate your container groups with a little "shorthand".
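For the curious, a minimal v1-era Compose file for the classic web-plus-Redis pair looks something like this (service names are arbitrary; a sketch, not an official example):

    web:
      build: .
      ports:
        - "5000:5000"
      links:
        - redis
    redis:
      image: redis

With that in docker-compose.yml, "docker-compose up -d" starts the whole group, and "docker-compose stop" / "docker-compose logs" operate on it as a unit.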

2.) Machine is a tool for creating and managing hosts which run the Docker daemon (the bit which does the actual heavy lifting with containers). When they are created, you can run Docker commands against them remotely using a local Docker client binary! I got really interested in this project for two reasons.

One is that boot2docker-cli (a small Go program to help bootstrap and manage a VirtualBox VM with Docker installed, mostly for OS X and Windows users) was a pretty nice little tool, but I wanted to be able to have multiple VMs and not just one! Machine is great at this.

Likewise, Docker Machine now does the same type of trick for cloud VMs as well - so if I just want to kick up a DigitalOcean server for a few hours, run some containers, and then remove it, I can easily do so without leaving the command line. The interface for other cloud providers is pretty similar too, so the workflow of using Docker across all supported clouds is the same.

I think it serves a lot better for dev/test right now than for prod but like I said the project is still feeling things out.
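The throwaway-cloud-box flow above looks roughly like this (flag names follow the 0.2-era CLI and may differ between versions; $DO_TOKEN stands in for your API token):

    docker-machine create --driver digitalocean \
        --digitalocean-access-token=$DO_TOKEN scratch
    eval "$(docker-machine env scratch)"   # point the local client at the new host
    docker run -d redis                    # actually runs on the droplet
    docker-machine rm scratch              # tear it all down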

3.) It's easier to think of Swarm in terms of the end game IMO - and the end game is a Docker API endpoint (just one!) which works across an arbitrary number of hosts. Therefore, the user doesn't have to reason about multiple hosts on the backend - they just treat it like they're running containers on one big computer. If there aren't enough resources to run all the containers that you want to, you could just kick up a few more hosts, add them to the Swarm, and Swarm will fill them in as it goes (kind of like a load balancer can do for requests). It can do cool things like set anti-affinity on containers so that they never end up on the same host together, or never end up on a Red Hat host even if one is in your cluster, or pack them in as tightly as possible on each host.
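Those scheduling hints are expressed as environment variables on ordinary run commands pointed at the Swarm endpoint. A sketch, with made-up names:

    # anti-affinity: never land next to another frontend container
    docker run -d -e 'affinity:container!=frontend*' --name frontend2 myorg/frontend

    # constraint: only land on hosts whose daemon was started with --label storage=ssd
    docker run -d -e 'constraint:storage==ssd' mysql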

I don't think the orchestration tools address all needs directly (and IMO they shouldn't) - logging, monitoring, and HTTP load balancing are good examples of things which don't really fall into their province.

I'd prefer to see the direction that they evolve in be one where they integrate really well with other tools and/or provide standard interfaces to platform-specific tech. For instance, it would be awesome for docker-machine to be able to import instances created with Ansible, Terraform, etc. and function as kind of a "REPL" to their "compiler" by allowing users to introspect the state of their running containers. Or for Swarm to act as a front-end for ECS. Speaking of, to answer your question:

> Why do I need Swarm on ECS?

Lots of tools "speak Docker API", so being able to point them at a Docker API and not care that it is ECS on the backend could be really useful. Additionally, ideally it would free you from being tied to the Amazon API or tools if you either don't want to learn them (since that knowledge is only useful for doing AWS work) or if you decide down the line that using, say, Mesos on baremetal as a back-end would suit your needs better instead.

Anyway, hope that all helps :) I could go all day about this stuff.


I'd like to discuss this more with you. May I contact you directly? I work at GiantSwarm, but in the Bay. No relation to the swarm command in docker! ;)


yep! shoot me an email (see profile)


I think the distinction is clear. How you would swap in other tools, not so clear.

With these updates I will have to update my own howto on how to use all 3 tools (4 with the 'Engine'): http://flurdy.com/docs/docker/docker_compose_machine_swarm_c...

I should probably write a tiny tool that wraps these tools and my common usage into a simpler shell script, maybe dockgod or something. I already got too many aliases in my dotfiles...


I've got a bone to pick with the new Docker Compose.

Take a look at this issue: https://github.com/docker/compose/issues/495

Back in January I went through the effort of outlining a solution that was approved by the maintainers before implementing it. After I provided a PR, I responded to revision requests by the maintainers, and still haven't seen this change go into the project.

It's a simple change. If this feature isn't the architectural direction Docker wants, they need to close the issue and reject the pull request, instead of changing the project over and over again so that I have to maintain a PR that's over three months old.

Very uncool.


The fact that it's still open probably means it is under some level of serious consideration. It was opened before they released Machine (and Swarm?) so maybe they just didn't know how/when it should fit in until the dust settles. Agree that they could have said something to this effect though.


So I'm not sure I like Docker so much anymore. In most ways the systemd-nspawn system seems a lot easier to use practically and to move into normal host deployment. The docker model shines when it comes to image setup, but the runtime and management aspects leave a lot to be desired.
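For anyone curious, the core nspawn workflow is roughly this on a systemd distro (assuming debootstrap is available):

    # build a minimal Debian tree, then boot it as a container
    sudo debootstrap --arch=amd64 stable /var/lib/machines/debian
    sudo systemd-nspawn -bD /var/lib/machines/debian

    machinectl list   # see and manage running machines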


So I work in this space, and I'll be honest: I'd never seen systemd-nspawn before. I'm interested. Thanks for the heads up.


So excited for Registry 2.0, the slowness of the current registry is a real pain point. Anyone who writes an article benchmarking the two against each other will receive my upvote.


We put our registries behind a caching Nginx, which also has the nice side-effect of HA if you push to all of them.
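A rough sketch of that shape, with made-up hostnames and sizing (TLS and auth details omitted for brevity) - pulls get load-balanced and cached, while for the HA part you push to each registry directly:

    # http context of nginx.conf
    proxy_cache_path /var/cache/nginx/registry keys_zone=registry:16m max_size=10g;

    upstream registries {
        server registry-a:5000;
        server registry-b:5000;
    }

    server {
        listen 443 ssl;
        location / {
            proxy_pass        http://registries;
            proxy_cache       registry;
            proxy_cache_valid 200 1h;
        }
    }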


Behind ELBs with S3 as the backend store is where it's at :)


That is actually how mine is deployed right now. Still pretty slow, and S3 has its own downsides (mostly that you can get false 404s after you've pushed an image but before it's fully propagated).


> and S3 has its own downsides (mostly you can get false 404s after you've pushed an image but before it's fully propagated).

Only happens in us-east-1 due to it having eventual consistency (whereas all other S3 regions have read-after-write consistency). Use another region and the false 404s will go away.

http://shlomoswidler.com/2009/12/read-after-write-consistenc...


The logging drivers reduce a major production pain point - standardized centralized logging that doesn't require modifying the underlying image.
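Concretely, 1.6 adds a --log-driver flag to docker run; the built-in options in this release are json-file (the old default), syslog, and none. Image names here are just examples:

    # ship container stdout/stderr to the host's syslog
    docker run -d --log-driver=syslog nginx

    # or drop the output entirely for chatty throwaway containers
    docker run -d --log-driver=none my-noisy-batch-job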

Docker has a bad security reputation; this is one more step in the right direction.


It's crazy that (until now) docker always logged stdout/stderr to a file, and never rolled it. Without a separately configured logrotate (in copy-truncate mode), these log files will grow without bound, until the container is removed (usually replaced).
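If you're stuck with the default json-file behavior, a logrotate stanza along these lines keeps things bounded (the path assumes a stock install; copytruncate matters because the daemon keeps the file handle open):

    /var/lib/docker/containers/*/*.log {
        daily
        rotate 7
        compress
        copytruncate
        missingok
    }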


Reminds me of the day I foolishly did "docker run -d debian yes" so I could play with some of the inspection commands. I forgot about it and an hour later it had eaten nearly all of my hard disk space...


It may be critically important, but they can't do everything at once. They're moving incredibly fast as it is.

Other people were attempting to solve this problem too - https://registry.hub.docker.com/u/kiyoto/docker-fluentd/


"Docker has a bad security reputation"

Except for ulimit, there is literally NO security improvement in this release.

Logging merely gives you visibility - right, you get hacked, but at least it's not painful to look for information. I would really rather prefer not to get hacked in the first place, given the existing insecurities...


The SecOps mission is: prevent, detect, respond.

Yes, outright prevention is important. Yet proper centralized log collection and intelligence helps with all three missions, including prevention.

Proper logging allows you to identify known-good behavior patterns and outlying anomalies. With profiles in place, one can automate blocking of reconnaissance and probes, not just blocking known vulnerabilities.


I've been using several docker components heavily, on a real system. This was the state of play just prior to the Docker 1.6 announcement:

- Docker registry, the old pre-2.0 version: I hate it. It's incredibly slow, and it raises lots of errors.

- Docker 1.5: Mostly stable and usable if you're on the right kernel, occasionally does something weird.

- docker-machine (from git): Very nice for provisioning basic docker hosts locally or on AWS. Nice feature: it's capable of automatically regenerating TLS certificates when Elastic IP addresses get remapped.

- docker-compose 1.1.0: Kind of a toy, but a fun toy, and it generally did what it advertised.

- docker swarm: With docker-compose 1.1.0 and docker 1.5, it was pretty much unusable. Simply running "docker-compose up -d" twice in a row was enough to make it fail with random errors.

I'm going to re-evaluate swarm with docker 1.6 and the new docker-compose.


Yay for a Windows docker client... though I'm already just SSHing to a server with docker on it.

My workflow is pretty much a samba share to my account directory on an Ubuntu server, and a couple of SSH shells on said server... easy enough to edit/run that way. (Though my VM image started crashing; I'm now just remoting to an actual hardware server.)


Oh cool, another change to fig/docker-compose that doesn't include the most requested feature: https://github.com/docker/compose/issues/495

As the person who wrote the PR for its solution and has been waiting months for it to get merged, I find this super, super frustrating.

If you guys don't want to put the logic in, reject the PR and close the issue as won't fix. Quit stringing the community along.


Did you really need to grind your axe in two separate comments on the same story? The first was sufficient.


Whoops, sent it when HN went down and didn't think it had posted.


Everybody who is interested in test-driving Docker 1.6 in a really easy and fast way and has a Raspberry Pi lying around should have a look at our prepared Docker SD card image.

Get it here: http://blog.hypriot.com/post/docker-1-6-is-finally-released-...


Not sure if images on the registry also support the new labels functionality, but that might be a way to avoid the obvious image architecture (x86 vs ARM) issues.
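For reference, labels are baked in at build time and readable via inspect - whether the registry/Hub will index them is the open question. The key name here is made up:

    # Dockerfile
    FROM debian
    LABEL architecture="armv7"

    # read it back after building
    docker inspect -f '{{.Config.Labels}}' myimage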


See Docker 1.6, Compose 1.2, and Machine 0.2 in action at https://realpython.com/blog/python/dockerizing-flask-with-co...

Cheers!


I still don't understand multi-host networking. I'm not sure if I'm missing something super obvious, or if it's just complicated. I like the openvswitch approach, but it's a pretty traditional approach and won't really work on AWS/GCE. Weave seems super neat, but I still want the option to run on AWS/GCE, and Weave (afaict) precludes that.

Ambassador containers, I guess? I dunno. There's just no easy answer.
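For reference, the ambassador pattern from the Docker docs amounts to a tiny forwarding container on each side of the host boundary, so apps only ever see links. The IP below is hypothetical, and my-app is a placeholder:

    # on the host actually running redis
    docker run -d --name redis redis
    docker run -d --link redis:redis -p 6379:6379 \
        --name redis-ambassador svendowideit/ambassador

    # on the consuming host: an ambassador that forwards to the first host
    docker run -d --name redis-ambassador \
        -e REDIS_PORT_6379_TCP=tcp://10.0.0.5:6379 svendowideit/ambassador
    docker run -it --link redis-ambassador:redis my-app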


Weave Network works great on AWS, GCE, Azure, your laptop, ... Pretty much anywhere you can run a (privileged) container, you can run Weave.

Here are some use cases for each (and in one case both!):

http://weaveblog.com/tag/gce/

http://weaveblog.com/tag/aws/

Did you mean to write something else? I would love to know where you got that idea.

Note: I work for Weaveworks.

> Weave seems super neat

Thanks!


Wow, that looks really great. I'll check it out this weekend.

> Thanks!

no, thank you!



Any possibility of a boot2docker equivalent for Windows, i.e. running docker in a minimal linux VM? Seems like this would open new possibilities for distributing client-side apps via docker containers. Cross-platform apps with an http frontend would be viable with easy tooling around the VM and docker.


Looks like boot2docker does have a Windows installer at https://github.com/boot2docker/windows-installer/releases


Using docker-machine as a superset of boot2docker is my favorite on OS X, instead of the "plain old" boot2docker command (due to simple local machine management and VMware Fusion support). Doesn't it run on Windows too, since it also has a Hyper-V driver[0]?

[0]: https://github.com/docker/machine/tree/master/drivers


You won't run Windows docker images on a Linux host. That's not how docker works.

If you're looking for a minimal Windows host for VMs and containers, that's "Nano Server", which is in the works:

http://www.techradar.com/news/software/operating-systems/mic...

http://www.infoworld.com/article/2909650/devops/microsoft-na...


Doesn't boot2docker run images on a linux host? It boots a minimal Linux VM in VirtualBox. How would that be different on Windows?


Boot2Docker also comes with a docker client for Windows. In that case you don't have to run or install VirtualBox at all. You can just point it to your existing Docker host.
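i.e. something like this, where the host name is an example and 2376 is the conventional TLS port (or set DOCKER_HOST once and use plain docker commands):

    docker -H tcp://mydockerhost:2376 --tlsverify ps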


Does anyone know if/when AWS will support docker on Windows?


I'm sure there may be one, but I don't quite see any reason for running Docker inside a Windows VM in AWS.

Run Docker with ECS, or with Machine on EC2, or on any native Linux VMs. Adding another abstraction layer seems pointless. Unless it is for Azure.



