Docker 1.0 (docker.com)
709 points by dominotw on June 9, 2014 | 130 comments



Docker is an amazing tool, but there are quite a few misconceptions about it.

Nearly all the articles and examples seem super easy, but it's quite a challenge to use in an actual multi-host production environment supporting mission-critical systems.

This can send the wrong message and lead newbies down the wrong path, which ends up as a disservice to Docker when they hit the complexities and become disillusioned with it.

To address those misconceptions, I've just posted the article "Docker Misconceptions": https://devopsu.com/blog/docker-misconceptions/

I spent about a month working on setting up Docker for production scenarios and reading everything I could on it (all the docs, courses, and well over 100 articles - see Docker Weekly for a wealth of resources). My post covers the main misconceptions I see popping up and also some advice for simplifying Docker use if you do decide to use it for multi-host production.

I'm sure I've missed some things, so please chime in if there's anything important I've overlooked.


Thanks, that's exactly the sort of thing I was hoping to learn from the HN comments.

In the post you link above, it seems like you're mainly comparing Docker to traditional, non-virtualized setups. Do you (or does anybody) have thoughts on comparing it to virtualized environments? I recently experimented with Xen as a way to isolate services, and it seemed awfully heavy; does Docker end up being a lighter burden?


Thanks. Yeah, most of the misconceptions seem to come from folks without previous experience with self-managed virtualized setups. Those that have had that experience usually already understand the complexities involved.

I'd also love to hear more stories about Docker vs the other virtualized setups. It generally seems to be a dramatic improvement in usability, performance, etc, but I'd love to hear more about the specifics from those who have made the switch.

Here are a few production users: http://www.docker.com/resources/usecases/ http://blog.docker.com/2013/12/baidu-using-docker-for-its-pa...


We used Docker for the Hello World Open coding competition.

We provided CI for the 2500 teams and also ran the competition using Docker (both on top of AWS EC2 instances). I think that without Docker the implementation would have been a lot more painful and almost impossible given the time constraints we had for the project.

Fast starting and fast shutdown of build and test processes allowed us to run CI builds efficiently on a fairly small amount of nodes. This saved us money and also helped to keep the infrastructure manageable.

Also the additional level of security was really nice since we were running possibly malicious code from 2500 sources. The malicious part became even more evident when Kevin Mitnick tweeted that he might participate: https://twitter.com/kevinmitnick/status/446359450614378496

The Docker image repository was also really handy when we ran the races. We needed to build the bots only once, and then we could just use the prebuilt images on the race nodes.

Building the actual base image was fairly painful. We support about 20 different programming languages and also package in quite a few libraries. Once the base image build was fully automated and we had the Docker-related processes fully working, it was almost a revelation how Docker could improve things over the classical virtual machine approach. I can't wait to try it out on another project :)

By the way, the competition finals are today (Tuesday) and streamed live @ https://helloworldopen.com/ (in English and Finnish)


That sounds really interesting. Please document it! I'd be especially interested to read about the revelations.


My gripes:

* half git-syntax/half package manager-syntax is cumbersome to me.

* Inability to compile Docker from source without using the Docker Project's provided "black box" binary Docker.

* No formal explanation of how to create your own base image from scratch (ie. not using a Docker Project provided base image).

* Defaults to storing images in the Docker "cloud" repo. (would have preferred a git-like setup where pushing my images to a repo of my choice was more "normal")

* GO Lang is still pretty obscure for most developers, and an interesting choice given the language's youth and likelihood to change rapidly as it matures.

* Docker is built on top of existing Linux technologies, mainly LXC, making it more or less disposable in the future (as someone else figures out how to abstract/manage LXC better)

* Docker is mainly built by Dotcloud - a for-profit company. Dotcloud has been very generous in their effort... but, being for-profit, what is their take out of it? (It can't be "just for the goodness of Linux").


> Inability to compile Docker from source without using the Docker Project's provided "black box" binary Docker.

You can compile it from source (assuming you have a go environment and set GOPATH properly):

    $ sudo apt-get install libdevmapper-dev libsqlite3-dev btrfs-tools
    $ go get -u github.com/dotcloud/docker/docker

docker will be in your $GOPATH/bin.

> GO Lang is still pretty obscure for most developers, and an interesting choice given the language's youth and likelihood to change rapidly as it matures.

The Go 1 compatibility promise will be kept by the Go team [0].

> Docker is built on top of existing Linux technologies, mainly LXC, making it more or less disposable in the future (as someone else figures out how to abstract/manage LXC better)

No, docker right now is built on top of libcontainer[1].

0. http://golang.org/doc/go1compat

1. http://blog.docker.com/2014/03/docker-0-9-introducing-execut...

Edit 1: Format


> * GO Lang is still pretty obscure for most developers, and an interesting choice given the language's youth and likelihood to change rapidly as it matures.

The go authors have committed to keeping major versions stable since v1 and are unlikely to massively change anything.

I'm sure much of Google's infrastructure is coming to rely on Go (and Docker!), so they would be as unhappy about such a drastic change as anyone.


Fair enough point about GO Lang; however, it should be noted that Docker as a project was started way back when GO's future was... perhaps iffy. Maybe good foresight on the Docker project regarding GO Lang.

However, this doesn't excuse the remaining gripes listed above.


Fair enough point about GO Lang; however, it should be noted that Docker as a project was started way back when GO's future was... perhaps iffy.

You mean 14 months ago?


> Inability to compile Docker from source without using the Docker Project's provided "black box" binary Docker.

I also found this particularly annoying.

To build it from source, make sure to install all dependencies (check the provided Dockerfile for details), add both docker and docker/vendor to your GOPATH and then execute `bash hack/make.sh dynbinary`
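
For reference, here's roughly what that looks like as commands. This is a sketch only: the directory layout is illustrative, the dependency list is borrowed from the sibling comment above, and I'm just following the GOPATH steps as described.

    # sketch: build the docker binary without the pre-built dev container
    sudo apt-get install -y libdevmapper-dev libsqlite3-dev btrfs-tools
    git clone https://github.com/dotcloud/docker.git && cd docker
    export GOPATH="$PWD:$PWD/vendor:$GOPATH"   # add docker and docker/vendor, as described above
    bash hack/make.sh dynbinary                # output should land under ./bundles/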


"half git-syntax/half package manager-syntax is cumbersome to me."

Offer them some suggestions on how to clean up the UI.

"Inability to compile Docker from source without using the Docker Project's provided "black box" binary Docker."

Addressed in a different comment.

"No formal explanation of how to create your own base image from scratch (ie. not using a Docker Project provided base image)."

As usual, documentation rarely keeps pace with the rest of the project, despite best intentions. Mind you, despite some holes, their documentation is quite impressive.

"Defaults to storing images in the Docker "cloud" repo."

This will probably change now that Google is embracing it.

"GO Lang is still pretty obscure for most developers, and an interesting choice given the language's youth and likely-hood to change rapidly as it matures."

Addressed in a different comment.

"Docker is built on-top of existing Linux technologies, mainly LXC, making it more-or-less disposable in the future (as someone else figures out how to abstract/manage LXC better)"

This is a benefit of Docker! I'd rather see them build it with existing components wherever they can.

"Docker is mainly built by Dotcloud - a for-profit company. Dotcloud has been very generous in their effort... but, being for-profit, what is their take out of it? (It can't be "just for the goodness of Linux")."

They're following a tried-and-true pattern for generating revenue off an open source model (giving away the product for free and making money on support and consulting). Companies that try to throw out a crappy product and charge for support quickly find themselves with no customers. Also, now that Google is on-board, you can expect the product to improve significantly as they contribute to the repo even more.


> No formal explanation of how to create your own base image from scratch (ie. not using a Docker Project provided base image).

http://docs.docker.com/articles/baseimages/

Some links there for examples. Ultimately the base image is a tarball of a filesystem.
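
To make that concrete, here's a minimal sketch along the lines of the linked docs (the distro, mirror, and image name are just illustrative):

    # build a root filesystem with your distro's bootstrap tool, then import it
    sudo debootstrap trusty rootfs http://archive.ubuntu.com/ubuntu/
    sudo tar --numeric-owner -c -C rootfs . | docker import - mybase
    docker run mybase cat /etc/lsb-release   # sanity check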


> half git-syntax/half package manager-syntax is cumbersome to me.

For what it does, the syntax seems pretty natural, but that's all subjective, so of course YMM(and clearly does)V.

> Inability to compile Docker from source without using the Docker Project's provided "black box" binary Docker.

I think this is more accurately phrased as "no simple documentation of the dependencies to build docker from source without using the pre-built dev container".

> No formal explanation of how to create your own base image from scratch (ie. not using a Docker Project provided base image).

Well, there's an explanation-by-example @ http://docs.docker.com/articles/baseimages/ which seems to indicate that the process is "take the root file system of the Linux system you want to make into an image, tar it up with specified options, and then import the tar."

Admittedly, this documentation could be improved.

> Defaults to storing images in the Docker "cloud" repo. (would have preferred a git-like setup where pushing my images to a repo of my choice was more "normal")

I agree.

> GO Lang is still pretty obscure for most developers, and an interesting choice given the language's youth and likelihood to change rapidly as it matures.

Its "Go" not "GO". And why is this a gripe? If it was true (and it doesn't seem to be, Go has a stable 1.0 release with a strong backward-compatibility commitment), that might be an added dev cost for the Docker team, but unless there was a reason to think that the cost didn't have sufficient benefits, shouldn't be a reason for a potential user to complain.

> Docker is built on top of existing Linux technologies, mainly LXC, making it more or less disposable in the future (as someone else figures out how to abstract/manage LXC better)

Docker is first and has momentum, and it doesn't seem like it's going to stop development just because it hit 1.0. So, yeah, something better might come along, but it's not like Docker is a stationary target.

> Docker is mainly built by Dotcloud - a for-profit company. Dotcloud has been very generous in their effort... but, being for-profit, what is their take out of it? (It can't be "just for the goodness of Linux").

The whole list of services (training, consulting, and support) that they announced today [1], which were previewed in the earlier blog post about Docker 1.0 [2], makes that pretty clear.

[1] http://blog.docker.com/2014/06/docker-announces-new-enterpri... [2] http://blog.docker.com/2014/06/its-here-docker-1-0/ : "In addition, to provide a full solution for using Docker in production we’re also delivering complete documentation, training programs, professional services, and enterprise support."


You're at that stage of new technology adoption where you think you know enough to criticize an idea but don't know enough to answer your own questions. Give it a few more hours of experience.


It's lighter in the sense that it should require less resources to keep each container running, but comparable in the sense that you still need to orchestrate how services connect and how each container is configured.


This, exactly. I've heard many people talking about how Docker will put an end to a thousand different problems, especially regarding configuration management... But it really won't.

You still need to be able to configure everything inside the container (thus the need for a CM tool), and then also the environment and infrastructure inside which the containers live (thus the need for infra management tools).

Docker really doesn't make infrastructure management markedly easier—just faster, more flexible, and slightly more powerful. It's one layer of abstraction over a VM, IMO, and that's a good thing, but it carries with it its own set of complexities.


> You still need to be able to configure everything inside the container (thus the need for a CM tool)

Maybe I'm missing something, but in what scenarios would you need to do something more complex than what can be expressed in a few RUN and ADD commands in a Dockerfile?

My limited experience is that indeed, orchestration is still completely needed. We underestimated that and now we have a bunch of hacky shell scripts that build and start and stop docker containers at various moments. But the docker containers themselves are pretty simple.
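
For what it's worth, the kind of simple container I mean looks roughly like this (package names and paths are made up):

    # Dockerfile sketch for a single-purpose service
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y python
    ADD . /app
    EXPOSE 8000
    CMD ["python", "/app/server.py"]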


It depends on what you're doing inside the containers themselves. If you're simply pulling from a git repo to update a code checkout, and installing one or two things via apt/yum, shell script/RUN isn't so bad. If you're approaching containers more like 'lightweight VMs', the string of RUN commands gets unmaintainable, fast.

That can indicate some deeper problems, but in some circumstances (and especially when making updates to your container images), using configuration management for individual containers can be helpful for the same reasons even CM-ifying a small single-purpose standalone VM can.


Great article! Is it general consensus that Phusion's base image is a good way to go? It's what I've used so far, but I've yet to hear any consensus.

I agree with the Docker Misconceptions article for the most part, but that might just be because it's what makes the most sense to me (and I'm still likely ignorant of some important configuration to consider :D)


Thanks for a great summary.

>For logs, you can either use a shared directory with the host or use a remote log collection service like logstash or papertrail.

I would not put Logstash (software) and Papertrail (SaaS) in the same bucket though. With Logstash (or any other data collectors like Flume, Fluentd, etc.) you need a backend system to go with it.

Sorry to nitpick, but logging is something I work on/think about all day, so I thought to point it out.


I think CoreOS solves a number of these problems. It's no walk in the park to set up but it's a really nice stack.


Thank you for writing that article! I've been telling similar things to people over and over, but now I can just link to your article.


I'm glad to see Docker stabilizing, but it's very disappointing not to see any changes addressing the logging support that was promised for 1.0 (https://news.ycombinator.com/item?id=7712138, reply by shykes). The ever-growing, unconfigurable logfiles generated by containers in Docker are basically a dealbreaker unless you bake your own logging solution into each container and avoid stdout/stderr.


Yeah, we spent some time with logging providers to get feedback on the design, and had to go back to the drawing board a little bit.

We're going to show the result of that tomorrow :)

We decided to ship 1.0 anyway, just for the sheer volume of bugfixes. But don't worry, we are still on a monthly release schedule. We are far from finished shipping stuff!


This sounds like a good fit for LoggerFS[1] - if at all possible I'd try to avoid logs being local to a given host, and instead pipe them off to a central log processing host.

[1] http://developer.rackspace.com/blog/introducing-loggerfs.htm...


I've found starting docker containers using something like systemd works really well. Basically you do something like this:

  # systemd unit file
  [Unit]
  Description=My Service
  After=docker.service
  Requires=docker.service

  [Service]
  ExecStart=/usr/bin/docker run username/app
Notice I'm not using the '-d' flag in the docker run command. That forces all output to stdout/stderr, which will be routed to the systemd journal. Sadly this only works for apps that can log to the console instead of files.

More examples here:

https://coreos.com/docs/launching-containers/launching/getti...


I wonder if a very simple solution would be enough: with a one-line change, the logs could be created as named pipes instead of files. Then you could just write some upstart scripts/systemd units that do

    cat /run/docker/mypipe >/dev/log
And you'd be good.

---

I'm not sure this is the right approach, though. In my opinion, docker shouldn't log at all—containers are unix processes, and Unix processes log to ephemeral stream-sink endpoints (think AWS SNS topics) called stdout and stderr. To collect and persist those logs, the Unix Way is to use shell redirection to "attach" those endpoints to subscribers.

Now, in Docker's daemonized run mode, these pipes are just hanging loose. But docker provides the "docker attach" command precisely so you can reattach them to something (or as many somethings as you like). All you have to do to get sensible logging, if you're using e.g. Upstart, is to create a service that runs "docker attach <mycontainer>": the logs will flow from the container's stdout/stderr to the service's stdout/stderr, and Upstart will catch them and handle them sensibly from there.
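
A minimal Upstart sketch of that idea (the job and container names are made up, and it assumes the Docker daemon itself runs as an Upstart job called "docker"):

    # /etc/init/mycontainer-logs.conf
    description "collect mycontainer's stdout/stderr"
    start on started docker
    stop on stopping docker
    respawn
    exec /usr/bin/docker attach mycontainer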

So, having done something like that, the important bit would actually be making sure docker doesn't do its own broken logging at all, except in the case of test containers that will never be attached to. Actually, this also cuts across the use (or lack thereof) of the --rm flag: it'd make sense for containers started with --rm to not log. (The generalization of the --rm semantics would seem to put such containers in the same category as e.g. EC2 instance-store instances.)


In general you ought to just be able to "docker run" in the foreground from your systemd script, and then stdout/stderr will go into the journal where you can handle things like any other file. This works now, except that the logs are also duplicated into /var/lib/docker.


FWIW, I've found running supervisord for all containerized apps with a log redirect to a host-mounted path to be a perfectly reasonable solution.

I would hate to see docker go down a path where it folds in its own logging "framework", which, I feel, would be going too far (poor separation of concerns, etc).


Well from a 12-factor app perspective, having each container dump its output to stderr/stdout is basically perfect as long as docker has a pluggable mechanism for dealing with that output. The "docker attach" API is all that you need, since you can attach prior to starting the container. Then you just need a way to tell docker not to save the logs via some "--disable-logging" option to "docker run" and "docker start".


Using a log shipper is probably the best workaround in this scenario

1. Logrotate at the end of the day (rough sketch at the end of this comment).

2. Have your log shipper watch the Docker log folder for each container.

3. The log shipper (or collector) ships files as they are updated to a central server, and you are free to archive or delete them as you wish.

Many of the central logging systems can detect rolled logs, so this set up is not much of a stretch.

I do agree more configurable logging is a bit of an oversight, especially for something like Docker.
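
For step 1, a rough logrotate sketch (the glob matches where Docker writes its per-container JSON logs; copytruncate avoids having to signal anything after rotation):

    /var/lib/docker/containers/*/*-json.log {
        daily
        rotate 7
        compress
        missingok
        copytruncate
    }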


Is a Linux tool like logrotate acceptable here? Wouldn't it be able to rotate the ever-growing stdout log files? Yes, this isn't built into Docker, but it does seem like a reasonable solution.


I've never seen a recipe documenting whether docker can handle a logrotate. Most daemons require you to HUP them after rotation. In general logrotate is a complete hack in the way it interacts with daemons.


Agreed, but there are workarounds: https://github.com/progrium/logspout


How is this different from using a regular non-docker setup?


There are docker commands (docker logs) that are dependent on this format. The primary issue is that if you deal with logs yourself via attaching to the container and piping them somewhere, the logs are still duplicated to these json files. It's unexpected, a waste of space and insecure.

That being said, logrotate would sort of work with caveats since the json files are one entry per line last I checked.


Unfortunately, installing the Docker client is currently broken, at least via brew: https://github.com/dotcloud/docker/issues/6256 — This made it into 1.0

The CHANGELOG states Production support as the 1.0 feature, but how am I supposed to be confident using Docker 1.0 in production if I can't use the client from my own machine? Or is there some other preferred install method?


This page, https://docs.docker.com/installation/mac/, says to use a download and doesn't mention Homebrew. If that doesn't work then I'd understand not feeling confident in their release, so you should try it!

TLDR: Homebrew does not appear to be an official install method.


Thanks a bunch for that link; I wasn't aware that they had also released a very handy .pkg installer for boot2docker + docker. The installer works perfectly and, since it's the officially supported install method, I've opened a pull request to get the docker and boot2docker formulae removed from Homebrew to avoid confusion: https://github.com/Homebrew/homebrew/pull/30013


In fairness, nobody is using brew to install something on their production environment.


Really? I've never read about/worked in an OS X server environment, but it seems like brew would be a good package manager for automating installs...


Hardly anyone uses OS X server.


I do, however, like using the Docker OS X client to interface with my staging and production environments.


The very first thing I clicked on was "Try It!"[0] in the header. I was then promptly greeted with a 404 error.

I'm with you on this one. Not exactly confident about using this in a production environment quite yet.

[0]https://www.docker.com/resources/tutorial/


You need to use "docker pull learn/tutorial", not "docker pull tutorial".


Homebrew is not an officially supported target. We build an official Darwin client as part of every release, on a Linux server using go cross-compile. Build failure of that target (or any target) would atomically roll back the entire release.

You can check out our build and release tooling in the ./hack subdirectory.

Fwiw I use the Darwin build every day and it works great.


Anecdote: I only very recently tried to install Discourse via Docker on CentOS 6.5 (the only recommended way to install Discourse now) and it failed.

edit: yes I realise that the docs tell you to use a 64-bit version of a recent Ubuntu


The Discourse Docker image doesn't seem to be very well maintained. I tried it and it ran for ages before failing. Don't judge docker by it. I couldn't get it to work but I'm using Docker to handle my XMPP server (as a test of the tools) and quite happy with it.


I was hoping that Docker 1.0 would have a 'docker login' or 'docker shell' command for spawning a shell within the container. In baseimage-docker (https://github.com/phusion/baseimage-docker), we run an SSH daemon so that people can log in to the container for administrative purposes. In the past, that approach was criticized because it was also possible to use lxc-attach, but since Docker 0.8 or 0.9 lxc-attach is not usable anymore because the default Docker backend no longer uses LXC. The Docker authors promised they would have a solution in the future, but sadly it didn't make it into 1.0. I guess we'll continue to use SSH until they have an official solution.


https://github.com/polydawn/siphon-cli might address your needs for now. It's just a tiny little wrapper around terminals that shuttles the connection over a unix socket... which in the case of docker, you can put on a mount to access from the host. Author here; I built it to replace that use case of sshd with something that doesn't have the overhead of key management. Of course, if you're doing something with network gaps involved, sshd might still be the right answer for you.


I still use this on 0.11:

    docker ps -l --no-trunc
    lxc-attach -n xxxxxxxxxx


For those wondering what docker is:

http://www.docker.com/whatisdocker/


On that, I was at DrupalCon last week and the most mind expanding session I went to all week was about container-ization in general. I'm a front end dev, but I'm really interested in knowing more about Docker.

Here's the video -- https://austin2014.drupal.org/session/mo-servers-mo-problems


And here's "narrated slides" from my WordCamp Philly talk on containerization, Docker, and Dokku this past Saturday. The video isn't ready yet, and my talk was interrupted by a pride parade and some technical difficulties, so this is probably the best way to experience it.

https://www.youtube.com/watch?v=ajVHyqeGjV4


Is there a very specific use case someone could share so I can visualize what docker really is?

Something more than a hello world.


Let me try: Say you're going to deploy an open source application that requires a whole pile of dependencies. Normally you would have two choices: ship the source with a bunch of readmes and let the recipient figure out how to install your software and all its dependencies and resolve any conflicts with other software already on the machine (aka version hell).

Or you could ship a relatively large fully installed VM image of the OS+all the application parts and dependencies (this may be problematic if your target is a licensed OS).

Docker gives you something in between.

Think of a chroot'd environment (not that that's what it is, but it shares some concepts) with all the application code, libs and dependencies pre-packaged that should run theoretically out of the box.

That way you can have a 'war'-like file that contains everything required to run your app, and you're 100% sure that it will just work, rather than having to field a million questions about how to make your app run on 'obscure distro x' or 'obsolete version y'.

Another use case: if you're deploying something distributed across a large cluster, you can build your Docker container on your desktop box, test it until you're happy, and then deploy it to your cluster without having to re-test everything. Since the container wasn't changed, it should work.

It's a way of isolating dependencies between various pieces of software using one single container to hold them all.

Of course this approach has its own unique set of problems, but it solves certain things quite elegantly.

I hope that explains it and that if it is wrong in some aspect that someone will correct me.
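
As a concrete sketch of that cluster workflow (the registry, image name, test script, and port are all made up): build and test once on your desktop, push the image, then run the exact same image on each node.

    # on your desktop
    docker build -t myregistry.example.com/myapp:1.0 .
    docker run --rm myregistry.example.com/myapp:1.0 ./run-tests
    docker push myregistry.example.com/myapp:1.0

    # on each cluster node
    docker pull myregistry.example.com/myapp:1.0
    docker run -d -p 8080:8080 myregistry.example.com/myapp:1.0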


Normally you would have two choices: ship the source with a bunch of readmes and let the recipient figure out how to install your software and all its dependencies and resolve any conflicts with other software already on the machine (aka version hell). Or you could ship a VM image.

Option 3: use a package manager.

Now, I realize this doesn't solve the existing software conflict problem you mention. Nor is it mutually exclusive with shipping an image: you could always construct an image by installing packages into it.

I've always been a little wary of the "ship a golden image" idea. How do Docker images get updated? What if there's a security issue with software in the Docker image -- can my package manager automatically apply security updates? How do I push other upgrades to the client once they have a Docker instance -- do they need a whole new image from me? Package managers have solved these problems, and I hope we're not reinventing those solutions in the Docker world.

If you're familiar with Docker, please enlighten me on these points.


My understanding: If you provide an image then, yes, you're supposed to build a new image every time there's an OS security update. Users of docker containers aren't supposed to run 'apt-get upgrade' themselves inside containers. In fact, people who build app images on top of base images aren't even supposed to run 'apt-get upgrade' in their Dockerfile---it's the responsibility of the base image to be up-to-date. See:

http://crosbymichael.com/dockerfile-best-practices-take-2.ht...

This does seem to get a bit cumbersome. I'm at DockerCon today and Fabio Kung mentioned in his talk that this is one difference from Heroku's container platform---they provide the base image and can update it without requiring you to rebuild your application slug. He said there's been some discussion of a possible "docker rebase" command that would produce new images by replacing lower-level layers while keeping higher-level layers the same.


docker rebase sounds like an awesome idea. Is there a mailing list thread or GitHub issue?


If your container has a whole OS image in it, then you can run the package manager for that OS to update it.

If you just have the application running in the container then you just replace it with an updated container.


> How do Docker images get updated?

I rebuild the image. At the moment, with the giant stupid image that we use for RHEL land, it takes about 5 minutes (it builds VMware, VirtualBox, AWS, QEMU, and Docker images; packer.io is kinda cool).

Then redeploy


Option 4: Statically link your binaries.

By the way, I do have a basic idea of what Docker is, as well as some inclination with OS-level virtualization (FreeBSD jails and LXC), but OP's example wasn't particularly inspiring.


This. Or, similarly, just put all your dependencies in a path relative to your executable. Problem solved.

I'm an enormous fan of docker for solving linux sysad woes (stacking up magic numbers in /etc/passwd, etc), and it's also great as an asylum for badly designed or badly packaged applications, but let's not forget the utter magic of paths beginning in "./".


In many cases, package managers are insufficient to solve the problem. A lot of times they don't handle "Service A must use dependency version X.* but Service B must use version Y.*"


I'll point out that Nix, which is used by NixOS but also runs perfectly on other distros, does handle this wonderfully.


So, it's a general-purpose version of python's virtualenv. Neat.


The comments here do a pretty good job explaining the technical details and common use cases. If, however, you'd like a more narrative-based explanation, you might want to check out my PyData conference presentation (http://bit.ly/1l1PPSe). It's a bit long (about 20 mins), but I try to explain the circumstances in which Docker makes sense.


Here is the CHANGELOG: https://github.com/dotcloud/docker/blob/master/CHANGELOG.md

    1.0.0 (2014-06-09)
    Notable features since 0.12.0
    * Production support


Note that Docker 1.0 (and the brief 0.12.0) changed its TCP port from 4243 to 2375. If you use a remote client (like `brew install docker`) and have `DOCKER_HOST` pointing at port 4243 in your environment, you'll have to update it.

To ease this pain across a large team, I'm considering using netfilter/iptables or a TCP proxy to make the old port available in our VM environment during transition.

https://github.com/dotcloud/docker/commit/5febba93babcf8c4b0...
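
The proxy hack I have in mind is something like this on the Docker host (a sketch only; rule placement will depend on your setup):

    # redirect the legacy client port 4243 to the new default 2375
    sudo iptables -t nat -A PREROUTING -p tcp --dport 4243 -j REDIRECT --to-ports 2375
    # and for connections made from the host itself
    sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 4243 -j REDIRECT --to-ports 2375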


Do we know yet if mounting a directory from the host as a data volume is available on Mac OS X now? I think something along these lines was promised (I'm not able to find the source now) - it's the only thing blocking me from using Docker as my everyday dev environment.


I've been working on getting a dev environment set up with docker over the weekend, and I finally have something I'm happy with.

I don't have a nice blog post to show you, but the basic idea is:

- Use the CoreOS Vagrantfile (+ extra files like user_data and any service unit files)

- Share your local directory with the vagrant box with config.vm.synced_folder (just uncomment a line in the Vagrantfile)

- set the DOCKER_HOST and FLEETCTL_TUNNEL env variables so that the local tools work transparently with the vagrant box.

- Then you have the option of running a docker image with -v and sharing the vagrant directory with the docker container, or

- You can dive into CoreOS stuff and write a unit file for your service, and that will share the directory with -v

Now local files sync into the docker container, and if the server is configured to reload on file changes you're good to go.
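
Roughly, the env-variable part looks like this (the IP is the default the CoreOS Vagrantfile gave me, and it assumes your user_data enables Docker's TCP socket on 2375; yours may differ):

    # after `vagrant up`, point the local tools at the CoreOS VM
    export DOCKER_HOST=tcp://172.17.8.101:2375
    export FLEETCTL_TUNNEL=172.17.8.101
    docker ps             # talks to the daemon inside the VM
    fleetctl list-units   # schedules/inspects units on the VM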


Thanks a lot, I'll give this a shot tonight. Not too familiar with CoreOS yet; I was hoping that I'd simply be able to mount a volume similar to how I might do it in Linux. I understand the problems with doing that though...


Still a work in progress. See https://github.com/dotcloud/docker/issues/4023


This is a huge undertaking and we are determined to get it right. Thanks again to Brad Fitzpatrick and everyone else contributing this building block!


Currently docker can only mount folders from the host VM into your container. On OS X your host will most likely be VirtualBox. Thus you need to sync the folders from your mac into the VM, and from there into docker.

The hack that docker-osx (https://github.com/noplay/docker-osx) uses has been very effective for me. Basically it mounts your user directory into the vagrant VM at the same path. From there, you can bind-mount any folder in your home directory easily via something like "docker run -i -t --rm -v $(pwd):/srv ubuntu /bin/bash".


See here: https://github.com/dotcloud/docker/blob/v1.0.0/CHANGELOG.md

Not sure what "Production support" really means.

But, Docker is awesome software!


Quote from the blog post:

> First, while many organizations have cheerfully ignored our “Do not run in production!” warnings, others have been waiting for a level of product maturity before deploying production workloads. This release’s “1.0” label signifies a level of quality, feature completeness, backward compatibility and API stability to meet enterprise IT standards. In addition, to provide a full solution for using Docker in production we’re also delivering complete documentation, training programs, professional services, and enterprise support.


A release that seems somewhat rushed, and a Red Hat media event tomorrow. I wonder if the two are related.


Today is the first day of DockerCon [1]; that is surely the reason for the announcement.

1. http://www.dockercon.com/


Congrats to the team - I'm wrapping up a deployment on 0.11 now; and, though there are definitely some pain points, docker wasn't even near the most painful part of the process.


The version was changed to 1.0.0-dev on GitHub. I think it was accidental that the version said 1.0.0.


Docker 0.11 was also RC 1 for 1.0, and 0.12 was RC 2. So I think it may not be accidental.


Yep, I was wrong, http://blog.docker.com/2014/06/its-here-docker-1-0/

I was just a little skeptical since there wasn't an official blog post to go with it


This is awesome! Now CoreOS just needs to follow suit and hit 1.0...


In their last article they said they're about a week behind, so by next week we should be production ready on all fronts. Cannot wait.


You'd think IPv6 connectivity with containers would be high up the priority list to get working: https://github.com/dotcloud/docker/issues/2174


It looks like it is pretty high on the priority list, the last update in that thread is on Friday.


Docker has a lot of potential and I recently really dug into it. Seems like there are still some rough edges though.

For example, as you work you end up with a lot of old containers and layers hanging around, and it's not clear how to remove or collapse them. I've seen posts complaining about this.

Also, some commands output the ID as the first column, and others as the fourth or some other column. This will make script pipelines harder to write.

I've not gotten far enough to worry about logs, but I hear there is an issue with them.

It is very true that I may not be using it correctly... or maybe there was lots of stuff still to be done for a 0.12 release?
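
For the cleanup gripe, the workaround I've seen suggested is roughly this (a blunt sketch: the first line removes all stopped containers, so use with care):

    # remove all stopped containers (running ones are skipped with an error)
    docker rm $(docker ps -a -q)
    # remove untagged leftover image layers
    docker rmi $(docker images | awk '/^<none>/ {print $3}')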


I think most (all?) of the commands can be performed over the API with JSON coming back, so the order of output shouldn't matter too much.


JSON over HTTP is not an alternative to a good CLI interface. As such, your comment, although true and happily so, is a non-sequitur.


Can someone explain to me the practical differences between vagrant and Docker? I know they're implemented in quite different ways, but in my utterly non-devops mind they seem to achieve the same thing. Wrong?


Scroll down to the bottom of this page to see Docker vs. VMs.

https://www.docker.com/whatisdocker/


At last! It was already production-ready, mostly bug-free, and fast enough, but having a 1.0 release makes one more comfortable when depending heavily on it.


Am I right in thinking that Docker can virtualize any environment, as long as that environment is Linux? Because I have an Office plugin I need to test against various different versions of Office, content management repositories, and so on, and it feels like this would be ideal.

Possibly I'm reaching here - but I thought I'd ask anyway, I don't know much about Docker.


The Microsoft Azure team is talking at DockerCon tomorrow. It's just the tip of the iceberg in terms of our collaboration. More to come. I'm extremely optimistic.


There's not a single mention of Linux on the http://www.docker.com/whatisdocker/ page. Yet it looks like it's currently an (almost) Linux-only framework, and it's highly dependent on OS-specific features. The comparison with VMs seems a bit unfair to me, because it makes it sound as though Docker can emulate any architecture too. They even say "'Dockerized' apps are completely portable and can run anywhere - colleagues' OS X and Windows laptops", which adds confusion, because it sounds like it's a cross-platform format, which it apparently isn't. I don't deny this is immensely useful software for people using the clouds, but the way it's advertised is a bit misleading to me.


I don't have any experience with Docker either, but from the little I've read it's built on top of Linux Containers (http://en.wikipedia.org/wiki/Linux_Containers).

Windows has App-V (http://en.wikipedia.org/wiki/Microsoft_Application_Virtualiz...) which seems like a similar technology (I believe Office 365 uses this). Note that Office plug-ins are specifically called out as something that is problematic with this tech.


What is the difference between this and rpm+puppet? With rpm+puppet I have no problems installing my apps with my dependencies on any server, and best of all, since it's the developers who create the RPMs and Puppet manifests, they can hand them over as building blocks to production, who can install them in a few seconds on any machine.


Gripe:

When you start a container and forget to forward a port into it you can't do it at a later time. This is braindead.


Sadly there is nothing new to support FreeBSD jails, a competing (and much more mature) container technology compared to Docker's default LXC (despite earlier mentions of plans for supporting FreeBSD in version 1.0)...


I think that what you're asking for (a Jails execution driver) is really _not_ so far out of bounds, especially given that FreeBSD has really good support for binary compatibility and Linux emulation, so FreeBSD is actually the _one_ good platform to try this with (or one of the best candidate platforms, anyway). But one thing makes me grumpy: people keep repeating this claim about LXC...

The LXC driver is _not_ the default anymore as of ~0.9: Docker switched to pluggable execution and storage drivers at that time and, shortly after, started using the new "native" driver for execution (which does not use LXC at all, but uses the underlying kernel services directly, like namespacing and cgroups, which are also used by LXC).

The LXC execution driver provides the same experience as before the pluggable driver model happened, and it's still available, but it's not the default anymore.

Am I being pedantic, is my information wrong, or has this just not been heavily publicized before (maybe it's actually a very small distinction and I should stop repeating it?)


I don't think the binary compatibility and Linux emulation will be of much use - there is no emulation of containers, and no one would want to run a Linux process in their container.

But yes, some framework is there.


I think the point would be not to use linux containers or any abstraction around the parts that fuel LXC, but to use BSD Jails on the idea that they are "battle tested"

Just as you might not trust Docker's native driver more than LXC, if you most cynically regarded LXC as an "example of a failed marketing experiment" and then Docker as "the first notable consumer of cgroups and linux kernel namespacing, and certainly too new yet to trust with anything important" -- but there are plenty of high-profile consumers of the FreeBSD jails and it is trusted by many to keep processes separate.

I mentioned emulation and binary compatibility, maybe because it's the thing that's missing in many other cases that wind up being responsible for alternative platform support getting back-burnered and second-classed in Docker community.

While there might be lots of people who want to use Docker on i386, it's not supported, probably most of all because it would lead to a bad user experience and fragmentation. Binary images are all built for amd64 Linux at this point.

I bypassed the warnings, I use it, I'm not bothered that I have to rebuild for my specific architecture. The docker users at large can still use my images, and maybe I'm a bad citizen for pushing images compiled this way, but it works.

Similarly there are probably a lot of people testing Docker on ARM, but they can't use any of the pre-built containers that are provided by the majority docker community, tested and supported. Their architecture is even farther out from being supported. They have the drawback that they can neither receive blood from "O" nor give to "AB" if you'll indulge the analogy of images to blood types, they have one of those weird blood types that doesn't even register in the statistics with a reasonably sized sample. Anemic, maybe.

If you had a jails execution driver, you certainly don't have to use Linux processes or binary emulation, you can build your containers with a BSD base and BSD toolchain. The point is that in theory, you probably could skip that step and get working with existing binary images built from Linux distributions.

It would quickly be a better "alternative platform" than any of the other existing contenders (different architectures) who you could already see in the docker ecosystem at this point; it could probably run many existing container images out of the box, on a FreeBSD host with amd64 architecture.

The contained software really shouldn't care what execution driver keeps it isolated, at least in theory. In practice someone with more knowledge than me can probably say what a bad idea this is, maybe because either "congrats, you just invented Virtualization" or "wrong, jails won't keep those processes separated."


Isn't the FreeBSD Linux emulation still 32-bit only? I can't find any reference saying that changed, in which case there isn't any way compat would work. I think Docker will give in and support multiple repositories at some point, as all these OS X people are going to want native stuff, the ARM (and arm64) people will want something, IBM will have a big Docker on Power8 push, and so on... Linux amd64 will no doubt be the biggest platform.

(someone should look at porting the netbsd amd64 linux emulation to freebsd...)


Some of the core abstractions are there. Get started. We'll follow :)


Join #docker on IRC for anybody struggling with Docker. Everybody is very helpful on there. And I have learned a lot just by supporting others, and asking questions myself.


Looks very promising. Too bad their jobs page says they only want engineers with Go experience. Why does language matter?


Probably because they're developing very fast presently, and a fashionable-enough company that they get a lot of good applicants and can afford to be choosy with hiring.


It seems perfectly reasonable if that's the implementation language they've chosen.


Great news! I just wonder why Docker is getting so much attention and Google's lmctfy doesn't get any?


Docker and lmctfy have largely joined forces - Rohit and Victor are now core maintainers of Docker and kicking ass contributing all sorts of low-level system awesomeness :)

https://github.com/dotcloud/docker/blob/f4b60a385cbaae045674...


That explains it, thanks!


IMO the goodness of Docker is the build system, images, layers, and the index. lmctfy has none of those features.


I haven't given docker a try yet, but I think I might have to give it a go.


Congrats on the 1.0 Docker!


Anyone know if there is a livestream or videos available from Dockercon?


No livestream but they'll be posting videos after the conference.


Congrats to the Docker team on 1.0!

Every time I use Docker it leaves a grin on my face.


Thank YOU. Users are amazing. Use more. Keep it up!


Is the devicemapper storage driver ready to use in production?


Why wouldn't it be?


When I was working on a docker-based build/test/deploy solution a few months ago, its performance was so anemic I wanted to tear (what's left of) my hair out, especially during a development/testing cycle.

I got sidetracked by other things and haven't gotten back to it, but if I do and it's still as bad as it was, I don't think I'll want to continue on that path.


I haven't seen the code but have played with LVM2 LVs / device mapper myself on my own containers-based infrastructure and also seen performance issues in some situations.

Fundamentally (1) LVM2 LVs are capable of single-depth snapshots only, forcing hacks like an actual full blockstore copy if code expects duplication. (2) Docker was written primarily against aufs which provides rapid arbitrary-depth snapshots.

I suspect this is the heart of the issue... ie. Docker expects features this storage subsystem wasn't designed to support.


Is there a way to add a Docker Webhook programmatically?


Congrats! I would love to see user namespace support.


"It's complicated" - crosbymichael, my favorite maintainer of all time.

Rest assured we're working on it and it's a priority.


Yes, user namespaces are coming soon.


Great work!


Url changed from https://github.com/dotcloud/docker/releases/tag/v1.0.0, since there were two threads on the front page and this one has the more extensive discussion.



