
Ask HN: Do you use Vagrant or Docker for active development? - dnt404-1
I think I understand the use case of Docker for deployment, but does Docker hold its sway at the moment for active development. Data persistence support is not out-of-the-box. Vagrant with its own isolated and shared volume support seems like something that would be very good in active development of software artifacts. What is the view of the HN mass, and what would you recommend?<p>Apart from these two, are there any others that looks promising?
======
brightball
On a Mac or Windows machine you're using Docker with Vagrant via Boot2Docker
anyway.

A lot of people use both:
[http://docs.vagrantup.com/v2/provisioning/docker.html](http://docs.vagrantup.com/v2/provisioning/docker.html)

That's actually the only thing that got me to hold off on Docker the last 2
times I've evaluated it. I was able to get everything running for a 1 monolith
+ 7 microservice system that I work with but the local developer workflow felt
very clunky even with Fig. That was 6 months ago and it's my understanding
there have been a lot of improvements.

That project was for a Ruby team and there are so many Ruby based tools that
make the local development workflow a smooth operation that shoehorning Docker
in locally would have been a step back, so we held off on it.

It's an area that I think will see major improvement though. Heroku's even
gotten in on it.

[https://devcenter.heroku.com/articles/introduction-local-development-with-docker](https://devcenter.heroku.com/articles/introduction-local-development-with-docker)

Which is really impressive to me. If anybody in the space can polish up the
user experience, it's Heroku.

------
dcosson
I recently switched to using both, docker running in a Vagrant VM. I've had
several frustrating issues with boot2docker on OSX, it's generally just been
less stable for me than Vagrant.

In terms of using docker, IMO it's the best development experience I've come
across once you get everything set up. It can be confusing to get your
workflow set up at first, and it seems like everyone does it a little
differently; I'm hoping that best practices will standardize a bit as docker
continues to mature.

I love having every part of an app (app code, split into a few microservices
if you wish, plus postgres, redis, rabbitmq, etc.) completely isolated, and
docker-compose is a great system for linking things together. I also don't
currently have any puppet/chef/etc code and love not having to maintain it.
In my mind, a large part of the need for configuration management tools is
dealing with the complexity of diffing two arbitrary states of
infrastructure, and with the immutable approach of docker containers all that
complexity disappears.

~~~
vdaniuk
Did you manage to get automatic code reload set up with Vagrant/Docker? Last
time, I tried the following config: host folders shared with the Vagrant VM,
and the Vagrant folders mounted inside the docker containers. Unfortunately,
file change events on the host didn't propagate to the docker containers. As
far as I remember, this was a limitation of the Vagrant shared filesystem.

~~~
NathanKP
I'm using the same setup and it works great. Vagrant shares a host folder into
the guest VM, and then the docker containers mount subfolders of that shared
folder into the container.

So you change a file on the host computer, it is reflected automatically
inside the guest VM (which is the docker host), and it appears automatically
inside the containers, since that same location is mounted by the containers.

I just checked the Vagrantfile and the Ansible playbook we use to launch the
docker containers, and I don't see any special magic required to get the
syncing working with the Vagrant filesystem; it just works out of the box.
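The two-level mount chain described here can be sketched as follows (the paths and image name are hypothetical):

```shell
# On the host: vagrant up shares the project folder (next to the
# Vagrantfile) into the guest VM at /vagrant by default.
vagrant up

# Inside the guest VM (the docker host): bind-mount a subfolder of that
# shared folder into the container, so edits made on the host machine
# flow all the way through.
docker run -d --name app -v /vagrant/app:/usr/src/app my-app-image
```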

~~~
kalmi10
Don't you lose inotify this way? Most autoreload solutions trigger on inotify
events.

~~~
NathanKP
No, it works fine for me. The node.js livereload code running inside the
container can detect the file changes just fine, and it just alerts the
browser running on my host computer via a port that has been forwarded from
inside the container to the docker host, and from the docker host to the VM
host.

And by the way this setup is bidirectional. If code inside my container, or an
SSH session running inside my container makes a disk change, that change is
also reflected on the version of the folder that runs on the host.

So if you don't want to go through the effort of doing the port forwarding,
you can run live reload on your host machine, and reload the browser
automatically in response to changes that were made by the container.
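The two hops of forwarding described above can be sketched like this (35729 is just the common livereload default; the image name is hypothetical):

```shell
# Hop 1, container -> docker host: publish the livereload port when
# starting the container.
docker run -d --name app -p 35729:35729 my-app-image

# Hop 2, guest VM -> host machine: forward the same port in the
# Vagrantfile, e.g. with a line like:
#   config.vm.network "forwarded_port", guest: 35729, host: 35729
```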

------
nahiluhmot
> Data persistence support is not out-of-the-box.

This is actually not the case. Although containers do not share any persistent
volumes with the host by default, you can use the --volume option[0] to do so.

To answer your question, I've used Docker for local development to run MySQL,
Postgres, and Redis inside containers. Using the aforementioned --volume
option, you can share the unix socket opened by any of these services from
the container to the host. Alternatively, you can use the --port option[1] to
share ports between the container and the host.
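For instance, a sketch of both approaches with Postgres (the socket directory depends on the image's configuration, so treat the paths as assumptions):

```shell
# Share the unix socket directory between the container and the host:
docker run -d --name pg -v /var/run/postgresql:/var/run/postgresql postgres

# Or publish the port instead:
docker run -d --name pg -p 5432:5432 postgres
```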

I've had a generally pleasant experience using Docker for this use case and
would recommend it. It's nice being able to start using a new service by
pulling and running an image. Similarly, it's nice to have the ability to
clear the state by removing the container, assuming you choose not to mount
volumes between the container and the host.

The only frustration I've run into is running out of disk because I have too
many images, but it takes a while to get to that point and those can easily be
deleted.

[0] [https://docs.docker.com/reference/run/#volume-shared-filesystems](https://docs.docker.com/reference/run/#volume-shared-filesystems)
[1] [https://docs.docker.com/reference/run/#expose-incoming-ports](https://docs.docker.com/reference/run/#expose-incoming-ports)

~~~
dnt404-1
Thank you for clearing that up. So, does this work like Vagrant's synced
folders? Does the container then see the host's development artifacts at the
mounted path?

~~~
nahiluhmot
You have to manually specify the folders you want shared in the arguments to
the command. For example, running this:

    docker run -v /tmp/:/host-tmp/ ubuntu:14.04 bash -lc "echo 'TEST123' > /host-tmp/test"

Should create a file on your host's filesystem called "/tmp/test" with the
text "TEST123".

------
noir_lord
I use Vagrant for everything; even tiny projects go in a Vagrant VM
(isolation is the primary win, along with the ability to do a git clone and a
vagrant up and be away).

I don't use any kind of provisioning with Vagrant, just a straight
bootstrap.sh, as honestly I don't like the provisioners.

------
JimmaDaRustla
I don't use Docker, but then I don't have a need for it.

As a solo coder, I love vagrant. The fact that you can use a configuration
file with a script or two to build out an entire VM has so many benefits:
less time to build the VM, easily destroy the entire VM, easily rebuild the
entire VM, save drive space by destroying the VM when you don't need it, keep
the VM configuration in a git repo, distribute the configuration to someone
else to use, and, best of all, every step used to configure the VM is
documented in the config file and scripts.

~~~
explorer666
Everything you said can be done with Docker, just faster.

~~~
stephenr
If he's not on a Linux host that can run Docker to start with, it's not going
to be faster, _and_ it's adding complexity.

~~~
austinpray
I was able to get a somewhat serviceable environment up on my Mac. However,
after all the effort involved, I ended up sticking with Vagrant. There is no
way I could explain the process to someone I would potentially work with.

I check back in about once every two months to see if there have been any
breakthroughs.

~~~
justizin
I have typically created a Vagrantfile to spin up an Ubuntu host with a
static IP, configure its Docker daemon to listen on a network port, and 'brew
install docker' to get the client on the Mac. Set DOCKER_HOST to the Vagrant
VM's IP and Docker port, and it's easy to distribute to a team.
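A sketch of that client-side setup (the IP and port are assumptions and must match whatever the Vagrantfile assigns):

```shell
# On the Mac: install only the Docker client.
brew install docker

# Point the client at the daemon listening inside the Vagrant VM.
export DOCKER_HOST=tcp://192.168.33.10:2375

# From here, normal docker commands run against the VM.
docker ps
```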

I wish the Docker team hadn't more or less given up on boot2docker, since it
completely abandons the vagrant advantage of a folder shared with the OS.
People do weird shit like trying to mount their OS X home dir over NFS and,
well, Vagrant solved this a long time ago.

------
zoner
Vagrant with Docker provisioner:

[https://github.com/czettnersandor/vagrant-docker-lamp](https://github.com/czettnersandor/vagrant-docker-lamp)

Much faster than the VirtualBox provisioner, so it's not an "or" decision;
the two things work well together :)

------
parshimers
Docker is really great for developing things IMO. I use it in a few ways
actually. One thing I've found it really useful for is isolating build slaves
in Jenkins (using the docker-cloud plugin in Jenkins).

I also like to use it to create test deployments for debugging or evaluating
things, for example it's a lot easier to run Hadoop in pseudo-distributed mode
inside a Docker container with host networking, than it is to fiddle with
running it in a VM and either getting NAT or DNS working just right, or
installing it locally. With the Docker container, if anything goes awry, it's
just so easy to get back to initial state by killing the container and
starting again.
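That pattern might look like the following sketch (the image name is hypothetical):

```shell
# Host networking sidesteps the NAT/DNS fiddling entirely.
docker run -d --net=host --name hadoop-test hadoop-pseudo-distributed

# If anything goes awry, reset to the initial state:
docker rm -f hadoop-test
docker run -d --net=host --name hadoop-test hadoop-pseudo-distributed
```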

As for Vagrant, I like it a lot too, but for different reasons. You can define
a set of actions that is a lot closer to installing whatever it is you are
developing, instead of baking everything together like you do with Docker,
which can be desirable. I have used it in the past for creating virtualized
cluster environments for integration testing of distributed systems. So far
I've used the VirtualBox provider, but I'm thinking of reworking some of my
past uses of it that don't strictly require a VM to use the Docker provider.

------
lzlarryli
I use docker for the development of FEniCS, an open source scientific
computing package written in a mix of python and c++. FEniCS has a lot of
dependencies which can be hard to compile (like PETSc) or need to be held at
specific versions (like Boost). Docker helps keep the environment constant.
We currently plan to have docker-based build bots as well, to streamline
build testing.

When I write code inside docker, I always push to a git repo like Bitbucket,
so data persistence is easy. Besides, you can always use --volume, which
works out of the box on Linux.

Vagrant requires some basic shared environment, which is not realistic in my
case. For example, I use Arch Linux myself and am forced to use an old
Scientific Linux at work, while many other FEniCS developers use Ubuntu,
Fedora, or Macs. It is too painful to write and maintain a Vagrant script for
all of these (different compilers, boost, blas, lapack, and some 10+ other
numerics-specific dependencies). I even tried Vagrant+docker, but in the end,
with docker maturing, I switched to docker plus a bash script instead. It is
just more convenient and has fewer dependencies.

So I'd endorse a docker only approach if you mostly use Linux and your project
has a diverse group of people.

------
garethsprice
Working in a consulting capacity, mainly doing LAMP development with a small
team. We use a standardized Vagrant image
([https://github.com/readysetrocket/vagrant-lamp](https://github.com/readysetrocket/vagrant-lamp)) which has cut down on a
lot of local environment issues for our dev team.

Previously all devs had their own environment (some MAMP/WAMP, some homebrew,
some remote, etc) which led to onboarding and support issues. Setting up a
standardized recommended dev environment has helped with that a lot - both in
terms of reducing project onboarding and getting junior developers up and
running.

Would love a day where we can build projects as Docker containers and hand
them off to our clients' IT teams, but that seems to be a way off.

SO thread where the authors of Vagrant and Docker weigh in:
[http://stackoverflow.com/questions/16647069/should-i-use-vagrant-or-docker-io-for-creating-an-isolated-environment](http://stackoverflow.com/questions/16647069/should-i-use-vagrant-or-docker-io-for-creating-an-isolated-environment)

------
hevalon
I use chef and test-kitchen to bootstrap my dev VMs (vagrant is used by
test-kitchen). Having written my cookbooks, I converge (spin up) VMs
depending only on the cookbooks that a project needs; e.g. no java will be
installed if the project requires only node.js. The main gain with this is
that my dev VMs are totally disposable and own nothing: everything is synced
into the VM from my host machine, data included.

Lately I am trying to "dockerize" my backends, so that if a workspace project
needs a mongoDB or some other backend from my architecture, I pull those
containers up on converging. That will make my life easier when writing
cookbooks for the backend dependencies.

I believe you can achieve the same using ansible; chef was a personal taste.

------
danwakefield
Vagrant with ansible to set it up: 1 build VM, 6 'deployed' VMs, and 1
'deployer'.

Needs a minimum of 42G of RAM and 150G of disk space, and fills its logs at
2G/h. Not great when you are running on a 256G SSD.

Building takes 2h+ with ~10% random failure rate due to dependency mirrors and
timeouts.

The python code is deployed to the hosts as gzipped virtualenvs. This
actually works pretty nicely, as it means you can't just import stuff and
have to build things in a 12-factor-like style (we don't use env
vars/stdout logging though).

TBH I still don't really see the point of docker. I'm sure it will 'just
click' at some point, but it hasn't happened yet.

~~~
dnt404-1
At this point, I too am trying to find the "it will 'just click'" moment for
docker, though I have only been looking at the two for a few days.

------
Tomdarkness
Why not both? We use vagrant to create our docker environment: a 3-machine
CoreOS cluster. This is so we accurately represent our production environment.

We then use our production docker image(s) with some more
development-appropriate configuration options. Vagrant mounts the user's home
directory at /Users/<username>/ inside the CoreOS machines. Then we mount the
appropriate folder inside the docker container at the path where the
container would normally expect to find the app's code. This way the
developers get live updates without having to rebuild the docker image or
anything.
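The inner mount might look something like this sketch (paths and image name are hypothetical):

```shell
# Inside a CoreOS machine: the user's home directory is already mounted
# by Vagrant, so bind-mount the project folder at the path where the
# production image expects to find the app's code.
docker run -d --name myapp \
  -v /Users/jane/projects/myapp:/app \
  myapp-production-image
```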

~~~
dnt404-1
I have been reading that Vagrant provides out-of-the-box support for docker
after version 1.7. So it seems this will be easier, with an obvious case for
Windows or Mac.

I am already on Linux, so my question is: why create an extra layer of
abstraction? Isn't the point of docker to provide the same isolation as
Vagrant but without the extra overhead of a VM? As far as I can gather, one
prominent use case of Docker is its cheapness. Wouldn't running Docker on top
of Vagrant defeat that purpose?

------
wodzu
I would put it the other way around. Docker+Vagrant is best used for
deployment and hopefully it will be stable and battle-tested enough so I can
use it in production.

I love the fact that once I've configured the dev environment on my PC and I
hit the road the next day, I can have exactly the same environment on my
laptop by running a single line: "vagrant up". Not to mention that any dev
working on the same project saves himself a ton of time by not having to
configure everything from scratch.

I have not taken the leap of faith yet and am not using docker in production,
but hopefully that will happen soon.

~~~
pantulis
I think you wanted to say "Docker+Vagrant is best used for development" ;)

~~~
wodzu
Yes! Thanks for the correction!

------
eli
How would you provision the Vagrant box? I would think you'd want to avoid
having some Dockerfiles for setting up production servers and some completely
different provisioner for setting up development in Vagrant.

~~~
justizin
The idea is that you build a Dockerfile while you are developing, and then
you can push an identical environment to production; one of the common causes
of production issues is differences from the developers' environments.

~~~
eli
Yes, sorry, that was my point actually: once you commit to Docker in
production, it probably doesn't make sense to use anything else for
development environments.

------
ajdlinux
I've used Vagrant for a few projects over the past few years - mostly small
things like hackathons and such. Haven't used it much in the past year or so
though.

At my last job we used Docker extensively for developing our main software
product, based on a Django + PostgreSQL + RabbitMQ + Celery stack. It's
definitely a bit tricky to get your head around at first, but after that, it's
very nice being able to just type "docker-compose start" and have a working
application with consistent configuration ten seconds later.
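With a stack like that, the daily workflow reduces to a couple of commands (the service name "web" is an assumption about how the compose file might be laid out):

```shell
# Bring the whole stack up in the background:
docker-compose up -d

# Tail one service's logs while developing:
docker-compose logs web

# Tear everything down again:
docker-compose stop
```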

~~~
dnt404-1
Did you use a separate docker container for each part of the stack? From what
I can gather, the docker way is to reduce each piece of complexity to its own
container.

And I agree that the _docker_ way of doing things is a bit tricky at first,
since we are not used to doing things _that way_

------
geerlingguy
Vagrant + Ansible for most things. I have used Docker to rebuild some of my
environments, and there's a lot of promise, but some hard issues (especially
w/r/t more complicated applications with multiple dependencies) that I'm still
hoping to make less-hard before switching more to a container-based workflow.

One of my main day-to-day Vagrant configs is encapsulated in Drupal VM
([http://www.drupalvm.com/](http://www.drupalvm.com/)).

------
awongh
I've used vagrant for a big rails 3 app with a lot of dependencies and
services, i.e., solr, a redis-backed delayed_job queue, etc.: stuff that
would have been difficult or impossible to manage on a mac.

The vm environment was also as close as possible to the production env, with
the same os version, etc.

It also greatly streamlined onboarding of new devs. The dev environment setup
was a couple of hours instead of a day or two.

------
zapper59
My team is currently using the gradle cargo plugin
([https://github.com/bmuschko/gradle-cargo-plugin](https://github.com/bmuschko/gradle-cargo-plugin))
to deploy to our remote docker machines for testing. This is my first time
hearing about Vagrant. What are its advantages for our use case of active
development?

------
mulander
The only use case I had for docker so far was to set up a cross-compiler
toolchain to produce binaries for an armv7 IGEP board.

It was significantly easier to tell my co-workers to install docker and type
`make local` for local binaries, or `make igep` to produce an IGEP armv7
binary by running a docker container.
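The `make igep` target might hide something like the following sketch (the image and compiler names are hypothetical):

```shell
# Run the cross toolchain in a throwaway container, mounting the source
# tree so the armv7 binary lands back on the host.
docker run --rm -v "$(pwd)":/src -w /src armv7-cross-toolchain \
    make CC=arm-linux-gnueabihf-gcc
```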

------
dolel13
I use docker extensively for both development and production, using it to
mirror the production environment as much as possible. Write code on the
host, run it in the container. It took me some time to adjust to the concept,
but once I did, it was pretty cool.

~~~
dnt404-1
How much time did it take you to adjust? I am still finding it difficult to
wrap my head around the concept of dockerization.

------
edude03
I use Ansible and Vagrant for active development of client projects. It's a
great combo because it ensures my local environment matches production as
closely as possible, and I can go from nothing to a running environment with
a vagrant up.

------
adamjin
I use vagrant for all my development. It was easy for me to set up and play
around with some new tools, such as saltstack (configuring masters and
minions), and to reuse the same bash scripts to set up the dev env.

------
jlu
Not to hijack the thread, but I'm just wondering if anyone has experience
with zero-downtime deployment of a multi-container app with cross-container
communication?

~~~
InTheArena
I would definitely check out a number of docker-related technologies. To
answer your question, we don't have our software out in Docker yet, but it's
coming, and it will make a splash when it lands.

Vagrant is a great technology, but I recommend taking a look at Docker compose
([https://docs.docker.com/compose/](https://docs.docker.com/compose/))
previously known as Fig. One of the great advantages of Compose is that if
you combine it with Swarm
([https://github.com/docker/swarm](https://github.com/docker/swarm)) you have
a very robust distributed deployment system. Docker Machine is the direct
competitor to vagrant but, to be honest, I don't use it. I spin up my docker
containers via some internal service APIs we already built for proprietary
reasons.

If you want to get really robust, you can also look at Kubernetes (and its
zero-downtime deployments) and Mesos. Both add a huge amount of complexity to
the deployment, but also grant a robust distributed system for managing
downtime and deployment. Red Hat also has OpenShift.

~~~
jlu
Yep, machine/compose/swarm are great tools from Docker and I'm already using
them, but compose is more of a "dev" tool, so it restarts all containers with
every new deploy.

What I'm looking for is a robust and systematic zero-downtime approach to
update some of the containers (say in a loadBalancer → web servers → db
architecture).
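Without an orchestrator, one hand-rolled approach is a rolling swap behind the load balancer; a rough sketch (container names, ports, and the health endpoint are all hypothetical):

```shell
# Start the new version alongside the old one on a different port.
docker run -d --name web-v2 -p 8081:8080 myapp:v2

# Wait until the new container answers its health check.
until curl -sf http://localhost:8081/health; do sleep 1; done

# Repoint the load balancer's upstream from 8080 to 8081 and reload it
# (details depend on the LB), then retire the old container.
docker rm -f web-v1
```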

------
chrisgoman
Yes, Vagrant only (it is awesome); I'm still not sure what docker does. After
setting up a Vagrant VM, I run fabric scripts to build the box for its role.

------
buster
I've been using Vagrant with docker inside for development for some time now,
and it's been the biggest productivity boost ever. Give it a try.

------
mrbig4545
Vagrant and puppet. And it's the same puppet we use for production, so we're
as close as we can get.

------
fs111
I use vagrant to run a multi-vm hadoop cluster for testing.

------
betaby
systemd-nspawn and linux-vserver environments, with dependencies (library
versions, compilers, even python virtualenvs) guaranteed by cfengine promises.

------
ThrowThrow2
Isn't it more like Docker with Vagrant?

------
programminggeek
vagrant is okay, but it's kind of a PITA.

