Honestly, I think it's fine. Sometimes an up arrow just isn't expressive enough.
However, I agree, sometimes it's nice to have something positive to read in the midst of constructive criticism.
Do you guys have any plans to offer an image library? Pre-built versions of common open-source tools, like official AMIs or the VMware virtual appliance directory?
I understand it doesn't exactly match your primary goal, but I think there is a large untapped demand for lightweight appliances for use at home or inside the firewall that juju really isn't set up to satisfy. I love the similar feature on my Synology NAS, although those are really just packages. There is often a large gulf between knowing how to set something up and being willing to do it.
You can already publish your own images with 'docker push'!
(maybe not the fancy deployment, but the container technology)
That, together with ZFS, turns FreeBSD into one hell of a server OS. ZFS seriously needs to come to Linux. And don't say it's a license problem: "there is nothing in either license that prevents distributing it in the form of a binary module or in the form of source code."
And you are quite right. DTrace is super.
OpenIndiana is another Illumos distro that can be used as a desktop. You get all of the core benefits of Illumos (zones, zfs, dtrace). The big downside is that you don't get the nice tools for managing zones/vms that SmartOS provides (vmadm/imgadm).
FreeBSD has ported a lot of the best features from Illumos so it could also work for you.
Just remember that an OS is only worth running if it has DTrace.
I personally haven't used it so far, but I hear good things :)
I agree with your statement though: ZFS + Jails is a killer pro-BSD argument when it comes to servers.
Is this "docker registry server" the final thought on how people will "ship" their containers? ... I'd much rather have the docker CLI be configurable to use some private repository of images. Maybe I missed something.
* A free, public mirror (comparable to pypi or the ubuntu mirrors) makes docker instantly more useful. You're 1 command away from sharing a ready-to-use image with the whole world, or trying someone else's image.
* Docker definitely also needs a mechanism for moving images in a peer-to-peer fashion as you see fit - à la git pull. This is actually more work to get right, but we are working on it (and pull requests are always welcome :). Any docker will be able to target any other docker for push/pull.
Hope this helps.
I can kind of see your point about the "instant gratification" of a public repository... but I'd love to see docker become some kind of standard *NIX-type tool for packing up blocks of software, ready to move them around, and you don't see many simple tools tied to some third-party infrastructure. (don't get me wrong - I'm not totally against it.. I just want this to work, and work forever, without being tied to anything)
Also, as a lover of Go's idea to do away with a centralized repo for pkg management, it would be nice to see a similar approach taken for the handling of Standard Containers.
I'm very excited by all this. It feels like such a step in the right direction for all kinds of deployment problems.
Always happy to discuss further on #docker@freenode!
Another problem is that in our particular use case (tracking changes to a root filesystem) the space and bandwidth savings of binary diffs are not worth the overhead in execution time. Downloading an image is as easy as extracting a tar stream. 'docker run' on a vanilla ubuntu image clocks in at 60ms. 'docker commit' is as simple as tar-ing the rw/ layer created by aufs. How long would a git checkout of that same image be? What about committing? Not worth it.
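To make the point concrete, the commit/download path described above is essentially "tar a directory, untar a stream". Here's a toy Python sketch of that idea (the `rw/` layout is illustrative; docker's actual on-disk layout is an internal detail):

```python
import os
import tarfile
import tempfile

def commit_layer(rw_dir, out_path):
    """Archive a container's writable layer as a tar file,
    roughly what 'docker commit' does with the aufs rw/ branch."""
    with tarfile.open(out_path, "w") as tar:
        tar.add(rw_dir, arcname=".")

def extract_layer(tar_path, dest_dir):
    """Applying an image layer is just extracting a tar stream."""
    with tarfile.open(tar_path, "r") as tar:
        tar.extractall(dest_dir)

# demo: fake a tiny rw layer, commit it, then re-extract it
rw = tempfile.mkdtemp()
os.makedirs(os.path.join(rw, "etc"))
with open(os.path.join(rw, "etc", "motd"), "w") as f:
    f.write("hello from the rw layer\n")

layer = os.path.join(tempfile.mkdtemp(), "layer.tar")
commit_layer(rw, layer)

dest = tempfile.mkdtemp()
extract_layer(layer, dest)
print(open(os.path.join(dest, "etc", "motd")).read().strip())
```

No content-addressed deltas, no packfiles - which is exactly why it's fast, and why git-style binary diffing would be overhead here.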
But: what about this security issue regarding lxc? Does docker take measures to prevent this?
The issue itself appears to be fixed if you use Linux 3.8 with compiled-in namespace support (which breaks NFS and other filesystems at the moment).
(For anyone who's not the submitter, when this was mentioned about a week ago, @shykes was hinting that there would be some sort of community images repository made before launch.)
We welcome all pull requests and issues, so if you notice something missing in the docs, feel free to let us know, or add :)
$ docker pull base
$ CONTAINER=$(docker run -d base apt-get install -y curl)
$ docker commit -m "Installed curl" $CONTAINER yebyen/betterbase
$ docker push yebyen/betterbase
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
The account-creation flow does not echo Username or Email, which is unusual, and I think the Password prompt should at least echo the \n even if the password itself is entered blind, as is customary.
You're right, the docs are indeed poor.
For example, this is the image I use to automatically update docker's binary downloads:
$ docker pull shykes/dockerbuilder # Be patient, it's a big one
$ docker history shykes/dockerbuilder
$ docker run -i -t shykes/dockerbuilder /usr/local/bin/dockerbuilder $REVISION $S3_ID $S3_KEY
I did not get all of this from your first reply. I guess I should have been wondering where the images are hosted, what's stopping other people from publishing images as yebyen, etc...
The docker registry is a community server where anyone can add their own images; they just need to log in first (docker login) and can then push/pull the images they want to share.
Nothing is stopping you, so go ahead and try it today.
Docker isn't tied to aufs; if unionfs or any other union filesystem comes along and is better, we are more than happy to switch to it.
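For anyone unfamiliar with what aufs/unionfs actually do here: a union mount stacks a writable upper branch over read-only lower branches, with lookups falling through top-down. Purely as a conceptual sketch (nothing like the real kernel implementation), Python's ChainMap captures the semantics:

```python
from collections import ChainMap

# lower, read-only image layers (base layer last)
base_layer = {"/bin/sh": "v1", "/etc/motd": "welcome"}
update_layer = {"/bin/sh": "v2"}  # shadows the base copy

# upper, writable container layer - all writes land here
rw_layer = {}
fs = ChainMap(rw_layer, update_layer, base_layer)

print(fs["/bin/sh"])            # lookup falls through layers: v2
fs["/etc/motd"] = "edited"      # write goes to the rw layer only
print(base_layer["/etc/motd"])  # lower layers stay untouched: welcome
```

This is also why 'docker commit' is cheap: the rw layer alone captures everything the container changed.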
I'm using unionfs and while I'm not taxing it very much I haven't seen any issues so far. I'd love to know what to look out for. Is it easy to setup aufs?
We've been playing around with docker internally at SendHub for the past few weeks, and so far it looks promising.
I'm /really/ looking forward to seeing and hearing about cool applications people find for docker containers!
It's not an either-or, it's a symbiosis.
A few questions regarding performance compared to running the processes directly on Linux (I guess these apply to LXC in general; forgive me if they stem from a lack of knowledge about how LXC works):
- How much extra memory and CPU does each Docker process take?
- Is there any performance hit with respect to CPU, I/O or memory?
- Are there any benchmarks from testing available?
Again, kudos to all the people at dotCloud behind Docker and extra props for open sourcing!!
I know https://github.com/synack has been experimenting with docker + openvswitch, which opens many exciting possibilities.
Docker appears to run each process in isolation. So you would have 2 isolated processes (Apache/PHP, plus MySQL) which can only talk to each other over a local network pipe (which should be quite fast, with minimal overhead).
(this is my understanding).
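To illustrate that "local network pipe" point, here's a minimal sketch of two processes' worth of logic talking over a loopback TCP socket. Threads stand in for the two containers; the point is just that localhost networking is the channel, not shared memory or a shared filesystem:

```python
import socket
import threading

def db_stub(srv):
    """Stands in for the MySQL container: answer one request."""
    conn, _ = srv.accept()
    conn.sendall(b"42 rows")
    conn.close()

# "database" side: listen on loopback
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=db_stub, args=(srv,))
t.start()

# "web app" side: connects over the local pipe
cli = socket.create_connection(("127.0.0.1", port))
reply = cli.recv(1024).decode()
cli.close()
t.join()
srv.close()
print(reply)
```

Loopback traffic never touches a NIC, so the per-request overhead versus an in-process call is small for most web workloads.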
Basically, it records all the dependencies an application touches while running and creates an environment you can tar up and use on basically any system of the same arch. It even works cross-distro. It wouldn't really work for 'the cloud', as you don't get the same security and isolation as LXC, and you have to make sure all the execution paths are triggered, but for a lot of apps it works great, and it's ridiculously easy to use with no setup.
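As a toy illustration of that dependency-recording idea: trace a run with strace and harvest the files it successfully opened. This parser is a gross simplification (real tools handle many more syscalls, fd inheritance, relative paths, etc.), but it shows the principle:

```python
import re

# matches successful open()/openat() lines in strace output
OPEN_RE = re.compile(
    r'open(?:at)?\((?:AT_FDCWD, )?"([^"]+)"[^)]*\)\s*=\s*(\d+)'
)

def opened_files(strace_log):
    """Return the set of paths a traced program successfully opened
    (failed opens, like ENOENT probes, are skipped)."""
    return {m.group(1) for m in OPEN_RE.finditer(strace_log)}

# sample strace output; the paths are just illustrative
sample = '''\
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/missing.conf", O_RDONLY) = -1 ENOENT (No such file)
'''
print(sorted(opened_files(sample)))
```

Everything the set contains would then get copied into the tarball alongside the binary - which is also why untriggered execution paths are the weak spot: a file never opened during the traced run never makes it into the archive.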
$ wget http://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-master.tgz
--2013-03-26 15:43:39-- http://get.docker.io/builds/Linux/i686/docker-master.tgz
Resolving get.docker.io... 18.104.22.168
Connecting to get.docker.io|22.214.171.124|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2013-03-26 15:43:40 ERROR 404: Not Found.
I also read earlier today that Go's GC on i686 (or really any 32-bit platform?) is not suitable for a daemon like docker; it leaks. This may be fixed in tip, I have no idea.
We do plan to add cross-arch support in the future, though - it's just a lot of work to get right.
I'll try it on Windows with vagrant then.
Wouldn't it be better to simply describe it as something similar to a VM snapshot/export? If I export, say, a VirtualBox image, I can move it around and run it on other VM players.
I think the shipping container analogy is simply bad :)
Even though I do have some issues with Go, it is nice to see safer languages being used for this type of work.
Maybe not the right article for that.