Just an additional data point: NFS does not work when you use Vagrant with Windows as your host OS. Things like git status, when run from within the Ubuntu VM, were something like 25 to 50 times slower (IIRC 7 seconds vs. 5 minutes) than when I ran them from Git Bash in Windows.
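(For anyone curious why git status in particular hurts: it lstat()s every tracked file, so the per-file overhead of the shared filesystem multiplies out. A hypothetical micro-benchmark you can run on both sides of the share to see the difference yourself - names and structure are mine, not from any article:)

```python
import os
import time

def stat_sweep(root):
    """lstat() every file under root, roughly what git status does
    for the working tree, and report how long it took."""
    count = 0
    start = time.perf_counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            os.lstat(os.path.join(dirpath, name))
            count += 1
    return count, time.perf_counter() - start

files, seconds = stat_sweep(".")
print(f"lstat'd {files} files in {seconds:.3f}s")
```

Run it inside the VM on the shared folder and again on a native directory; the ratio tracks the git status slowdown pretty closely.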
In my experience, OS X's implementation of NFS is buggy as well. It's not uncommon for my NFS server to simply give up and stop responding to requests from the VM.
Trying out Fusion's share has been on my todo list for a while. I'm a little sad to hear its performance is poor.
I've also found that processes like Guard which watch files for changes aren't properly notified.
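(The usual workaround - and roughly what Guard's polling mode does - is to fall back to comparing mtimes, since inotify events for changes made on the host never reach the guest over a shared folder. A rough sketch, function names are mine:)

```python
import os

def snapshot(paths):
    """Record the current mtime of each path."""
    return {p: os.stat(p).st_mtime for p in paths}

def changed_since(snap):
    """Return the paths whose mtime differs from the snapshot --
    a polling stand-in for inotify, which shared folders often break."""
    return [p for p, mtime in snap.items() if os.stat(p).st_mtime != mtime]
```

A watcher would call changed_since in a loop every second or so. Slower and more CPU-hungry than inotify, but it actually fires inside the VM.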
Even with these issues, moving all my development to VMs created by Vagrant has been incredibly useful.
Thanks, I just created it a couple months ago. Found I was constantly researching/learning new things as a sysadmin and thought I would share my notes. Helps me remember too ;) If you have any episode ideas please shoot me an email [check my profile].
I've been using a similar workflow for years. My host OS is Windows 7 x64, which gives me all the strengths that Windows is known for with regard to multimedia, gaming, and high-end software compatibility (e.g., Photoshop). For dev work I run Ubuntu using VMware Workstation. It's very fast and I get to have the best of both worlds.
I run Win 8 on a Surface Pro now with the rest of the setup same as yours. Running Windows gets you the benefit of near-perfect hardware compatibility. Run Linux on a new laptop, and there are always issues. I just don't have the patience to fix it all. It's not always the usual stuff like wifi drivers, now it's more stuff like power management. In a VM, everything just works perfectly.
The Core i5 is a ULV part, and it's dual-core, not quad-core. Comparatively speaking it is a little bit "dinky" versus desktop parts.
I run a Mac Mini with a quad-core i7 and 16GB of RAM. Beyond that consideration, desktop parts are almost always under less thermal stress than those in a laptop and can typically run at higher clocks.
As a life-long FreeBSD user and a laptop owner I have to run my development environment in a VM. FreeBSD is a great piece of software, but its hardware support is a bit... lacking.
From the article I got the impression that the main bottleneck is sharing files between the VM and the host system. I can't know for sure, since I don't do this: instead I keep all my files and software on the VM, including my IDE of choice. I have an X server set up on the Windows host (Xming), and when I really need to transfer files between host and guest I do it over s(f|c)p.
It probably helps that my IDE is Emacs :) (Before that - gVIM.) And I don't see any problems with it. Moreover, the only development-related things on the host system are Xming and VirtualBox, so when I change the computer I'm working on I just need to copy the image and install those two programs, and I'm ready to go - with exactly the same environment everywhere. I used this setup from Linux, too; the only issue I had was that xhost alone didn't work for allowing external X connections, so I had to edit the startx script and remove the 'no tcp' (or similar) flag from there.
At work, where I have a proper desktop with Nvidia graphics, I have FreeBSD installed as the main OS. The difference in speed between my environment in a VM on a laptop (with 1GB RAM allocated) and on a host system on better hardware (with 8GB RAM available) is not that big, which makes me believe I've found an optimal way of working. I don't know how much virtualization adds to the cost of disk IO, but I think that compared to the general slowness of disk IO the difference is negligible (I don't have an SSD yet, so my opinion may change in the near future!).
Very interesting reading. Some of my colleagues work with Vagrant or just-plain-old VMs for development and I have heard quite a bit of grousing about performance. It's nice to have some actual data.
For whatever reason, what I found most fascinating was the application start-up time.
I chiefly work with Java web applications deployed on Resin. Some ten-odd years ago, when I first selected Resin as a web container, I did so because of its relatively quick start-up time. It's around 3 seconds to boot a modest application on modern hardware. Since then, I had lost sight of the importance of application start-up time. Resin and many competing but similar platforms (e.g., Play) support dynamic reloading during development, so a full container restart is fairly uncommon.
But your article makes me wonder if it would be of interest for us to include container and application start-up time in our framework benchmarks project. I think that's something we could fairly easily measure.
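(It really is easy to measure: launch the container and poll its port until it accepts a connection. A rough sketch - the command and port below are placeholders, swap in the real container launch:)

```python
import socket
import subprocess
import sys
import time

def measure_startup(cmd, port, timeout=30.0):
    """Start a server process and return the seconds elapsed until
    its TCP port accepts a connection."""
    start = time.perf_counter()
    proc = subprocess.Popen(cmd)
    try:
        while time.perf_counter() - start < timeout:
            try:
                with socket.create_connection(("127.0.0.1", port), timeout=0.25):
                    return time.perf_counter() - start
            except OSError:
                time.sleep(0.05)  # not listening yet; keep polling
        raise TimeoutError(f"no answer on port {port} after {timeout}s")
    finally:
        proc.terminate()  # measurement only; tear the server down afterwards
```

For a toy run, measure_startup([sys.executable, "-m", "http.server", "8000"], 8000) works; for Resin or Play you'd pass whatever starts the container.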
The more appropriate way, as described before, is to take any real-world public data set, import it into Postgres, then make an app with your favorite stack to look up and display a set of entries on a page. For a more real-world example, add authentication and provide lookups only for authorized users. Then run a simple web-server benchmark on these pages.
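(Even without ab or wrk, the "simple web-server benchmark" part can be a timed request loop. A minimal sketch against a stand-in server - the handler just fakes the page of entries; point bench() at the real app's URL instead:)

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EntriesHandler(BaseHTTPRequestHandler):
    """Stand-in for the real app: returns a fixed 'page of entries'."""
    def do_GET(self):
        body = b"<ul><li>entry 1</li><li>entry 2</li></ul>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the benchmark output clean
        pass

def bench(url, n=100):
    """Time n sequential GETs and return requests per second."""
    start = time.perf_counter()
    for _ in range(n):
        urllib.request.urlopen(url).read()
    return n / (time.perf_counter() - start)

server = HTTPServer(("127.0.0.1", 0), EntriesHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
rps = bench(f"http://127.0.0.1:{server.server_port}/entries")
print(f"{rps:.0f} requests/sec")
```

Sequential requests understate what a real load generator would show, but it's enough to compare VM vs. native.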
The improvement with NFS is due to redirecting lots of read operations for OS components from the HDD to the NIC, which means a different PCI device, a different interrupt, and a different set of ports. Of course that helps.
Cheap shared-tenant setups with one cheap SATA HDD for everyone will never perform well, no matter what virtualization one is using.
Thoughtful partitioning of I/O requests between different PCI devices will help, as old-school system administrators always knew, but what is the point of having virtualization then? :)
Author here. Great suggestions, however I'm not interested in measuring performance in the conventional page-load sense, but instead the impact of running typical developer tasks in a VM, namely app boot time and tests.
I was poking around the website and can't seem to understand what they're selling/offering. It seems like online hosting for VagrantFiles. That can't possibly be it. Anyone know what it is they're offering?
Author here. We're building a service to make dev/prod parity and deployments easy. Thanks for the feedback - our landing page needs an overhaul :P We're still adding features, right now all you can do is generate Vagrantfiles and AMIs.
You can contact me at email@example.com if you'd like more details.
But the best thing about Docker is that it's not a VM. It's a containerization layer built on Linux-only kernel features (namespaces and cgroups). It's not fair to compare it to a VM, or even to put it in the same class of programs.
But you will only need 1 VM :) For realistic testing of a multi-component stack, performance gains start adding up very quickly. There's also the start/teardown time if you want to run tests from a clean environment repeatedly.
In short: the right way to use VMs is as dumb chunks of hardware. Using them as logical units of software makes increasingly less sense as container-native tools get better.
I run Win7 in a virtual machine on OS X for .NET development; since I don't need anything hardware-specific, it works great. If I were building applications for the desktop I likely wouldn't do it this way, but for web dev it's great.
I installed them as a dual boot at first, but all the driver issues started getting annoying. So I created a VM and it works great now; plus I dropped a couple of Linux distros on there as well.
I run Windows 7 using VMWare Fusion on my 15" MBP Retina. It's extremely responsive and I can't tell it's running in the background most of the time. I love to use open source but VMWare Fusion is a great product and these numbers support it.