There are plenty of other great examples of Vagrant usage around the web, too, from Laravel's Homestead to (disclosure, I maintain it) Drupal VM.
It seems a lot of older applications / communities gravitate towards Vagrant because some of the things they do are harder (or at least not as straightforward yet) to implement in containers.
That's only a recent trend. Up until Docker became the standard a couple of years ago (or whenever Docker for Mac was made stable), Vagrant was the standard for dev environments. The primary reason Docker succeeded Vagrant for dev environments is speed: Docker can have my dev environment up from scratch in seconds, whereas with Vagrant it took minutes.
Nowadays, Docker is really miles ahead in terms of usage in development environments. Vagrant was pretty easy to set up and use back then, but now Docker is much, much easier.
I think that VirtualBox and VMware are the only two providers that work everywhere, unless Docker runs on Windows now.
Running on Windows, how do you share files between the Docker image and the host machine? With Vagrant I use NFS.
So there were good reasons to avoid it before Docker too.
The only problem I had was updating base boxes, but that was self-inflicted because it was easier to write a function to delete them from the libvirt cache than to maintain proper versioning.
/edit: I just realized from your other comments that you're probably constraining yourself to prebuilt base boxes. You really shouldn't. It's trivial to build them with Packer, and there are lots of config files on GitHub to do just that. This makes it possible to really tweak them for full integration right after 'vagrant up'.
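(For what it's worth, the whole loop is roughly: build a box from one of those templates, then register it locally. The template and output names below are hypothetical.)

    packer build centos7-libvirt.json          # hypothetical template pulled from GitHub
    vagrant box add --name local/centos7 builds/centos7.box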
Every time I had to set it up from scratch, it took me -- and I was not a novice, and knew the stack well -- a day or more and many, many failures to get running. Thankfully, I think the project has since abandoned Vagrant.
(and Docker isn't much better in my experience)
The point of Vagrant is that you type 'vagrant up' and you have a working environment.
All of my projects use Vagrant to ensure compatibility. You can 'git clone' and 'vagrant up' and have a working environment as soon as the provisioning task completes.
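A minimal sketch of what that looks like in practice (box name and script path are just examples, not from any particular project):

    # Vagrantfile - everything a new contributor needs is 'git clone' + 'vagrant up'
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"
      # runs on the first 'vagrant up' to build the working environment
      config.vm.provision "shell", path: "scripts/bootstrap.sh"
    end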
Vagrant encounter 1: it would always exit immediately after barfing some garbage that messed up the line discipline. It wouldn't even print help menus. Reinstalling, switching between 32- and 64-bit, trying slightly different binary versions, etc. didn't seem to affect this behavior.
Vagrant encounter 2: on a nearly virgin Windows box, "vagrant up" on a bog-standard CentOS image stalled out for an entire work day. No stdout, no stderr, no logs, no exit status; it just sat there.
On a scale of 0 to 'flake', it's at full flake.
Vagrant is very bare-bones, so you need plugins, but those are ... picky.
Also, handling Linux / Windows / macOS with the same Vagrantfile results in interesting things. (Let's say you want to use NFS for Linux, so you put an if in there; and if you want to set up bridging for a local interface, you have to guess the interface name - or do shell and cmd.exe wizardry.)
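The usual pattern I've ended up with is branching on the host OS inside the Vagrantfile - a rough sketch, assuming VirtualBox shared folders are acceptable on Windows hosts:

    # sketch: pick a synced-folder mechanism per host OS
    Vagrant.configure("2") do |config|
      config.vm.box = "centos/7"
      if Vagrant::Util::Platform.windows?
        # default VirtualBox shared folders (or type: "smb") on Windows hosts
        config.vm.synced_folder ".", "/vagrant"
      else
        # NFS on Linux / macOS hosts
        config.vm.synced_folder ".", "/vagrant", type: "nfs"
      end
    end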
I trust that the people who provided whatever configuration it used knew what they were doing, but it was still incredibly flaky.
You should never trust that people know what they are doing in this industry.
Vagrant is probably the best way to go about learning automated configuration management with Ansible, and especially Puppet. And I've never tried it myself, but I hear of people setting up local OpenStacks with Vagrant, too. Not a bad way to get your feet wet.
It doesn't have to be one or the other.
It is a little different from some other Vagrant boxes in that it uses Ansible for provisioning. This means you can reasonably easily re-use the Ansible roles (perhaps with some minor modifications) elsewhere too, like locally, or on some cloud image.
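Wiring Ansible in is only a few lines in the Vagrantfile; the box name and playbook path here are hypothetical:

    # sketch: the same playbook/roles can later be pointed at a cloud host
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"
      config.vm.provision "ansible" do |ansible|
        ansible.playbook = "provisioning/playbook.yml"
      end
    end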
We will probably research Docker in 2018.
However, I want to be clear it's not a marketing tactic in any way (re: hosh below, with an excellent comment!). It ends up being that implicitly, but we waited and developed Vagrant 1.x for 5 years prior to calling it a 2.0 because we had a lot of goals we wanted to achieve: multi-provider, fantastic Windows support, stable installers, etc. We feel we've now achieved that in a very stable way, so it's time to call it 2.0.
This breakpoint for us allows us to begin planning and executing on larger changes. Of course, we'll do all of this thoughtfully since Vagrant is definitely a tool you want to "just work" today and not think about breaking your envs. I admit this does happen from time to time though and I'm sorry about that, but we're getting better.
It's a less-important issue and just a convention, not a law, but normally v1.36.12 tells me "focused on stability and just working--boring but rock-solid", while 2.0.0 tells me "first release of great, new features--amazing but don't put too much weight on it yet". I wouldn't ordinarily think of 2.0.0 as the most-stable version of 1.x with 2.0.1 being the less-stable introduction of the great, new features.
The reason being that Docker for Mac uses a VM anyway (an xhyve machine) - it does try to hide/abstract this away, but inevitably this leaks. The xhyve VM has the usual parameters: memory, disk space, CPUs, and not least a kernel. There are limited options to fiddle with these parameters, though you can log into it and poke around there. I thus find it easier to just set up Vagrant machines with Docker - then I have better control over those things.
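By "better control" I mean being able to pin things like this in the Vagrantfile (box name and sizes are just examples):

    # sketch: a Vagrant-managed VM used as the Docker host
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 4096   # MB - pick what the project needs
        vb.cpus   = 2
      end
      # Vagrant's built-in docker provisioner installs the engine in the guest
      config.vm.provision "docker"
    end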
If I were on a Linux distro though, I'd probably use Vagrant a lot less.
edit: I should also mention that vagrant-libvirt doesn't even work on Vagrant 2.0.0, due to https://github.com/vagrant-libvirt/vagrant-libvirt/issues/76...
Also, since the rise of virtio (thank goodness), you can easily make boxes that work on VB/VMware/KVM, and probably others too, and indeed, many boxes work like that.
Seems like this issue is finally addressed: https://github.com/mitchellh/vagrant/issues/8468
Vagrant is a fantastic tool because of its flexibility, but that flexibility comes at cost: there are sometimes bugs and performance issues where the different Vagrant components don’t quite mesh perfectly.
It's been such a source of frustration that there is no better shared folder alternative. VirtualBox is the only usable cross-platform backend, and vbox shared folders are the only way to have two-way syncing between guest and host. I don't understand why it's so poorly supported :/
We saw a 30% speedup in one of our apps by switching from NFS to two-way syncing between "native" filesystems using Unison.
See mitchellh's blog post on the subject:
The technical term for this is "doing it wrong"
ln -s /home/vagrant/node_modules /vagrant/node_modules
For example, when I started with Vagrant, after a few days of getting tired of the slow throughput of VirtualBox shared folders, I added NFS sharing. But CIFS works as well.
Instead of targeting local Virtualbox VMs, we use AWS boxes created by the internal tool we use to manage our production fleet.
Sounds like _some_ of the mission for Otto lives on...
It's a shame so many "core" developer tools are not code-signed. It makes life hard in companies where binary whitelisting is used.
The application would still have to be audited, signed or not, prior to whitelisting.
You could still get owned, of course, but the benefit here is that you're excluding everything not explicitly whitelisted, including drive-by downloads, crap on portable devices or random programs downloaded off the internet that someone thinks will solve their problem of the day.
When people do not code-sign their software, every software update is painful. At work, where we run https://github.com/google/santa, it frequently happens that companies with code-signed software forget to code-sign their auto-updater, or random binaries that run during installation. Most of the time the application crashes or hangs during the update (because some pieces weren't allowed to run), only to remind you to update the software again when you restart the application.
Though they de-emphasised that tool in favour of Docker for Mac and Docker for Windows, which interact directly with the platform hypervisor to create a Linux VM.
Vagrant also claims to provide a "good workflow for writing Dockerfiles"; it can provide a nicer user abstraction over `docker [many, many args]` for running your biz.
You can then run Docker inside a Vagrant VM.
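If it helps, the built-in docker provisioner makes the "Docker inside a Vagrant VM" setup fairly painless. A sketch of the relevant Vagrantfile fragment; the image name and port mapping are just examples:

    # sketch: install Docker in the guest and start an example container on 'vagrant up'
    config.vm.provision "docker" do |d|
      d.pull_images "redis"
      d.run "redis", args: "-p 6379:6379"
    end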
(Vagrant also has a Docker provider, but I can't think of a good reason to use it.)
VirtualBox lets you forward ports from within the VM to your localhost directly, bypassing iptables and any default routes.
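In Vagrantfile terms that's a one-liner (the ports here are just an example):

    # sketch: guest port 80 reachable on the host at localhost:8080
    config.vm.network "forwarded_port", guest: 80, host: 8080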
Our network uses a "no-split-tunneling" VPN, so most of the Docker networking solutions are completely unusable for me.
Kubernetes fortunately provides easy ways to enumerate the services you intended to expose (via ingress, or similar), so it's trivial to script forwarding every exposed service or ingress to the localhost IP. I'm still editing the /etc/hosts file whenever I need a host-based route, and I have some interesting issues with SSL certificates that sometimes don't have the server name I expected on them, but for the most part this works great for me.
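The script boils down to something like this rough sketch - it naively grabs each service's first port, and assumes a kubectl recent enough to port-forward services:

    # sketch: forward the first port of every service to localhost
    for svc in $(kubectl get svc -o name); do
      port=$(kubectl get "$svc" -o jsonpath='{.spec.ports[0].port}')
      kubectl port-forward "$svc" "$port:$port" &
    done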
I am a Mac user and showed this to my coworker, who is a Windows user; we tried to do the same thing on his machine and it was even easier because there is no notion of privileged ports below 1024. So, it works the same way but with one less workaround.
TBH they are not at all unusual. They are best-practice networking requirements.
Vagrant is filling the void for some of those projects since it just works with no fuss on Mac/Windows/Linux without forcing me to use Hyper-V.
Or, as tmzt mentioned, minikube and minishift will also let you set --vm-driver=virtualbox on start. Those are nice even if you don't want to use Kubernetes (but there are plenty of options.)
Regarding your second point - you can either attach a shell for testing (e.g. I often build a Dockerfile first interactively via /bin/bash inside the container) or use the hammer (a web UI) and connect your localhost to the Docker network interface.
Data exchange between host and container is also simply done via bind mounts - which might be more elaborate in production, however.
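e.g. something along these lines (the image name and paths are just examples):

    # sketch: interactive shell in a throwaway container, current directory bind-mounted
    docker run -it --rm -v "$PWD":/src -w /src ubuntu:16.04 /bin/bash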
My team has no such requirements and IMHO uses Vagrant solely because of inertia. We've always used Vagrant; it's what most people have installed on their machines, and there is a Vagrant box with some of the moderately-difficult-to-configure things already done, so we can all use the same configuration - things like the Oracle Client libraries and the nginx frontend with a self-signed localhost certificate (required so your local development can talk to our auth server).
There's absolutely no reason we couldn't do the same thing with Docker. We just haven't.
I would argue that if you aren't using Packer or writing your own customizations into the Vagrantfile, you aren't really using Vagrant, and it becomes a somewhat harmful black box for us. Those steps are baked into the box file, not done in the Vagrantfile as provisioning steps and not able to be inspected inside an Ansible playbook, so the knowledge of how to do these things could easily be lost and it would be a headache to reproduce. Packer is roughly what we need to make it better. For my team, Vagrant is just a thin wrapper over VirtualBox, so the team does not need to know that they are using VirtualBox.
The second reason you might want to use Vagrant instead of Docker is if some leadership in your org has declared that you still may not use Docker for anything. This is the case here; you may use Docker but not without a good reason and not without having your usage reviewed by a panel of experts on various subjects (it's the Design Review Board.)
We got our usage of Docker approved so that we can manage Jenkins via Helm. The kubernetes-plugin for Jenkins creates pods as build slaves, and when they complete their jobs they go away. You want your builds to run in a clean environment and you want your slaves to be disposable; pods are ephemeral, can group containers together, and disappear when the job completes. That's exactly the problem this tech was meant to solve.
The DRB thought that was a great justification and approved it. As far as I'm aware, though, we're still the only place across the entire institution where Docker is used in an approved way.
Vagrant is also great for learning about clustering technology. In minutes you can have dozens of VMs running on a single machine.
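A multi-machine Vagrantfile is all it takes - a hypothetical three-node sketch:

    # sketch: three identical nodes on a private network for cluster experiments
    Vagrant.configure("2") do |config|
      config.vm.box = "centos/7"
      (1..3).each do |i|
        config.vm.define "node#{i}" do |node|
          node.vm.hostname = "node#{i}"
          node.vm.network "private_network", ip: "192.168.50.#{10 + i}"
        end
      end
    end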
That said, if you don't have specific OS requirements, then http://labs.play-with-docker.com/ works well for simulating a multitude of machines.
Both Docker and Vagrant rock for the same reason: image distribution. You don't have to know anything about building a VM or container image to be able to benefit from the tools.
Of course they both make it a breeze to manage the respective runtime environment too.
So in my mind Vagrant is to VMs what Docker is to Containers. Of course the use cases overlap, nonetheless they both are indispensable tools.
I'm used to liberally using crontab, iptables rules, multiple languages, and not deploying separate containers / VMs if I need something like Redis. For some of them I'd end up with 6-8 containers if I went that route.
I recently got to program again, set up a box, and was pleasantly surprised that it does self-provisioning too.
I wrote up an Ansible script and self-provisioned my box.
They also killed off the gem-distributed version--which is still, to this day, a huge pain in the rear, to the point where I build my own Vagrant so it doesn't use its own weird out-of-the-way Ruby.