
Docker: Lightweight Linux containers for consistent development and deployment - Isofarro
http://www.linuxjournal.com/content/docker-lightweight-linux-containers-consistent-development-and-deployment
======
quaunaut
This seems to be a fairly outdated article, despite the displayed date - it
says the 'newest' version is 0.7, but we've been on 0.11 for two weeks now:
[http://blog.docker.io/2014/05/docker-0-11-release-candidate-...](http://blog.docker.io/2014/05/docker-0-11-release-candidate-for-1-0/)

It's some pretty amazing tech, and I love using it with Vagrant for my local
dev environment and for full deploys. I'm not such a big fan of Dokku - it's a
great 'Heroku replacement', but one of Docker's best strengths is that you can
just upload your code to a private repo (or use a trusted build on a private
repo!), then pull that down and run it to deploy. Dokku going through the
buildpack process, while nice from a quick git-push angle, sacrifices one of
Docker's strongest features (consistency).

~~~
binocarlos
This is a great point and has bugged me for the past few months - I'm sat
there watching my 'npm install' whir by and thinking - 'I reckon this is not
the "docker way"'

Having said that - I'm still trying to work out the combination of source repo
and docker repo - i.e. one dev makes a code change - they commit the code to
GitHub - they then push the docker image to the registry.

Now - does developer number 2 'git pull' or 'docker pull'?

What I really like is how docker does not really enforce anything upon you -
leaving the above question to be answered any which way.
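To make the two options concrete, here's a sketch of both workflows, assuming the repo carries a Dockerfile and the built image is pushed to a registry under a hypothetical name `myorg/app`:

```shell
# Option A: developer 2 pulls source and rebuilds locally
# (relies on the Dockerfile checked into the repo)
git pull origin master
docker build -t myorg/app .

# Option B: developer 2 pulls the already-built image from the registry
docker pull myorg/app
docker run -d myorg/app
```

Option A keeps the registry out of the inner dev loop; Option B guarantees everyone runs the exact same bits.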

Docker ftw!

~~~
jafaku
I don't think sharing the image is necessary during development. I just add
the Dockerfile to the repository and each dev builds their own image. If they
are methodical they can pull the repo and rebuild the image at the beginning
of their work day. Most times it will be rebuilt in a second because there
have been no changes. If they don't rebuild the image, it will be the main
suspect as soon as something breaks anyway, so I can't see this causing much
trouble.
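A minimal sketch of that setup - a Dockerfile checked into the app repo so each dev builds their own image; the base image, paths, and entrypoint here are illustrative:

```dockerfile
# Checked into the app repo alongside the source
FROM ubuntu:14.04

# Dependency layers come first, so they stay cached and a rebuild
# is near-instant when only application code has changed
RUN apt-get update && apt-get install -y nodejs npm
ADD package.json /app/package.json
RUN cd /app && npm install

# Application code last - the only layer invalidated by day-to-day edits
ADD . /app
CMD ["nodejs", "/app/server.js"]
```

Because unchanged instructions hit the layer cache, a `docker build` right after a `git pull` usually finishes in about a second, as described above.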

~~~
quaunaut
This is exactly what I do, just using Trusted Builds.

Right now one of my biggest curiosities is doing mass deploys and unified
logs. I've not tried it yet, so it hasn't mattered, but it could be fun.

------
peterwwillis
I like how they're saying that Docker's strength is in letting you never learn
how to package applications. Which is a bit like saying that a Google self-
driving car's strength is that you never have to learn how to drive.

The useful features of Docker are resource isolation, image layering, and
staged deployment. The idea that you can run two versions of PHP or move your
files between distros has been a solved problem for at least 30 years.

~~~
ninkendo
I'd go even farther and say that the main useful feature of docker is the
image layering. Even the resource isolation has been available with LXC and
cgroups for a while now.

But docker certainly has a nice way of letting you use all these features in a
friendly package. It's often not the most original things that are the most
groundbreaking; sometimes it's when something comes along, takes all the parts
you already have, and puts them together in a useful way.

------
zoner
You can combine it with Vagrant as well. I used to use Vagrant with
VirtualBox, but with the Docker provider, development is much faster. The
config files we are using are public and open source:
[https://github.com/czettnersandor/vagrant-docker-lamp](https://github.com/czettnersandor/vagrant-docker-lamp)

However, if you don't have Linux, Vagrant can start a VirtualBox image with
Linux and run Docker in it, with a small modification to the Vagrantfile.
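For reference, the Docker provider takes roughly this shape in a Vagrantfile (the image name and port mapping here are hypothetical, not taken from the linked repo):

```ruby
# Vagrantfile - a sketch of Vagrant's Docker provider
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "ubuntu:14.04"     # or d.build_dir = "." to build a Dockerfile
    d.ports = ["8080:80"]        # host:container port mapping
    # On non-Linux hosts, Vagrant can boot a proxy VM to run the containers:
    # d.vagrant_vagrantfile = "host_vm/Vagrantfile"
  end
end
```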

~~~
iqster
How do you turn off the VM that is hosting the Docker containers, i.e. when
none are running?

Btw, I found the vagrant documentation for the docker provider to be a bit
lacking. I probably wasted a day getting it to work :(

~~~
jhorey
If you're using Vagrant, you'll have to manually shut off the VM via `vagrant
halt`.

If you're just trying to get started with Docker, Boot2Docker
([https://github.com/boot2docker/boot2docker](https://github.com/boot2docker/boot2docker))
is probably the way to go.

If you're looking for an Ubuntu-based Vagrant box, you can try out my Vagrant
box
([http://ferry.opencore.io/en/latest/install.html#os-x](http://ferry.opencore.io/en/latest/install.html#os-x)).
It's based off of 14.04. Be warned though, my box is very large (~4GB) as it
contains many pre-baked images (Hadoop, Cassandra, etc.).

~~~
iqster
Thanks. Where do I issue vagrant halt (i.e. which folder)? The workaround I
found was to go to VirtualBox directly and issue a poweroff. Not clean at all :(
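For what it's worth, `vagrant halt` is issued from the directory containing the Vagrantfile; from anywhere else, recent Vagrant versions can halt a machine by id (the paths and id below are illustrative):

```shell
# From the project directory that holds the Vagrantfile:
cd ~/projects/myapp && vagrant halt

# Or from anywhere: list all known machines, then halt one by its id
vagrant global-status
vagrant halt a1b2c3d
```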

------
jmnicolas
I guess not, but can you run graphical applications in Docker?

~~~
MrUnderhill
It's no problem to run an X server in the container and use VNC/X
client/openssh to access it from your machine. See for example this blog post:
[http://blog.docker.io/2013/07/docker-desktop-your-desktop-ov...](http://blog.docker.io/2013/07/docker-desktop-your-desktop-over-ssh-running-inside-of-a-docker-container/)
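As a sketch, running such a container and attaching to it might look like this (the image name and VNC port are hypothetical, not taken from the linked post):

```shell
# Run a container whose entrypoint starts an X server plus a VNC server,
# publishing the VNC port to the host
docker run -d -p 5901:5901 myorg/desktop

# Then connect from the host (or anywhere that can reach it) with a VNC client
vncviewer localhost:5901
```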

~~~
arethuza
Don't you mean run the X server locally and an X client in the container?

~~~
vidarh
You can do it either way depending on what you're trying to achieve. I have a
dev container set up that runs Xvnc and/or Xpra in a container so I can
connect to it from anywhere.

~~~
arethuza
Possibly going a bit off-topic here - but isn't Xvnc arguably a "headless" X
server, in that it is both an X server and a VNC server - but as X
client/server terminology is the other way round from VNC (and pretty much
everything else), it isn't really a "graphical" application in the sense of
jmnicolas's original question...

------
tachion
I get that this is a 'hacker' site, but in what way is this 'news'?

~~~
mike-cardwell
[http://ycombinator.com/newsguidelines.html](http://ycombinator.com/newsguidelines.html)
-

"On-Topic: Anything that good hackers would find interesting."

------
KaiserPro
And a massive pain for security.

~~~
thu
Assuming you are arguing in favour of VMs: the benefits of Docker stand, and
you can perfectly well run Docker containers within a VM.

The feature sets/sweet spots of OS-level package managers, language-specific
package managers, VMs, containers, distinct/same hosts, ... are both
overlapping and different enough that you need judgement to choose which one
you want, but they are certainly not exclusive.

If your use case means you prefer VMs over containers and you don't need to
combine them, fine, but every situation is different.

~~~
meatmanek
There are valid concerns.

Someone with access to the docker control socket effectively has root on your
machine: e.g. `docker run -v /etc:/external_etc ubuntu visudo -f
/external_etc/sudoers`.

Don't let untrusted users (or scripts) run docker.
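The underlying reason is that the daemon runs as root and will happily bind-mount any host path into a container. A sketch of the same class of escalation, minus the `visudo` step:

```shell
# Anyone who can talk to the Docker daemon can mount the host's
# filesystem into a container they control...
docker run -v /:/host -it ubuntu /bin/bash

# ...and inside the container, /host is the host's root filesystem,
# writable as root: edit /host/etc/sudoers, read /host/etc/shadow, etc.
```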

~~~
icebraining
Which is exactly what the Docker docs say: _"First of all, only trusted users
should be allowed to control your Docker daemon."_

[http://docs.docker.io/articles/security/#docker-daemon-attac...](http://docs.docker.io/articles/security/#docker-daemon-attack-surface)

