config.vm.share_folder "/vagrant", "/vagrant", "./", :nfs => true
Also, tweaking the DNS settings seems to help:
vb.customize ["modifyvm", :id, "--natdnshostresolver1", "off"]
vb.customize ["modifyvm", :id, "--natdnsproxy1", "off"]
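For reference, the two tweaks can live together in one Vagrantfile. This is a sketch using the newer Vagrant 1.1+ configuration style (the `synced_folder`/provider-block syntax and the private-network IP are assumptions; adjust for your Vagrant version):

```ruby
# Sketch: NFS shared folder plus the VirtualBox DNS tweaks, Vagrant 1.1+ syntax assumed
Vagrant.configure("2") do |config|
  # NFS synced folders require a host-only network; the IP here is arbitrary
  config.vm.network "private_network", ip: "192.168.33.10"

  # Share the project directory over NFS instead of vboxsf
  config.vm.synced_folder "./", "/vagrant", nfs: true

  config.vm.provider "virtualbox" do |vb|
    # Turn off NAT DNS proxying, which can slow name resolution
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "off"]
    vb.customize ["modifyvm", :id, "--natdnsproxy1", "off"]
  end
end
```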
EDIT: Updated the post; NFS erases most of the difference between VirtualBox and VMware Fusion. Nice!
Slightly OT, but since when is 8 GB of RAM plus an SSD plus a quad-core machine "dinky"?
I run a Mac Mini with a quad-core i7 and 16 GB of RAM. Beyond that consideration, desktop parts are almost always under less thermal stress than those in a laptop and can typically run at higher clocks.
From the article I got the impression that the main bottleneck is sharing files between the VM and the host system. I can't know for sure, though, because I don't do this. Instead I keep all my files and software on the VM, including my IDE of choice. I have an X server (Xming) set up on the Windows host, and when I really need to transfer files between the host and the guest I do it over s(f|c)p.
It probably helps that my IDE is Emacs :) (before that, gVim), and I don't see any problems with it. Moreover, the only development-related things on the host system are Xming and VirtualBox, so when I change the computer I'm working on I just copy the image, install those two programs, and I'm ready to go, with exactly the same environment everywhere. I used this setup from Linux, too; the only issue I had was that xhost alone didn't work for allowing external X connections, so I had to edit the startx script and remove the 'no tcp' (or similar) flag from there.
At work, where I have a proper desktop with Nvidia graphics, I have FreeBSD installed as the main OS. The difference in speed between my environment in a VM on a laptop (with 1 GB of RAM allocated) and on the host system on better hardware (with 8 GB of RAM available) is not that big, which makes me believe I've found an optimal way of working. I don't know how much more costly virtualization makes disk I/O, but I think that compared to the general slowness of disk I/O the difference is negligible (I don't have an SSD yet, so my opinion may change in the near future!).
For whatever reason, the thing I found most fascinating is application start-up time.
I chiefly work with Java web applications deployed on Resin. Some ten-odd years ago, when I first selected Resin as a web container, I did so because of its relatively quick start-up time: around 3 seconds to boot a modest application on modern hardware. Since then, I have lost sight of the importance of application start-up time. Resin and many competing but similar platforms (e.g., Play) support dynamic reloading during development, so a full container restart is fairly uncommon.
But your article makes me wonder if it would be of interest for us to include container and application start-up time in our framework benchmarks project. I think that's something we could fairly easily measure.
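Measuring this is indeed fairly easy to automate. A minimal Ruby sketch, where a child process stands in for the container (the port number and the half-second "boot" delay are arbitrary assumptions; a real run would launch the actual container and poll its HTTP port):

```ruby
require "rbconfig"
require "socket"

PORT = 18080  # arbitrary port; an assumption
# Stand-in for the application: a child process that "boots" for half a
# second before it starts listening on the port.
child = spawn(RbConfig.ruby, "-rsocket", "-e",
              "sleep 0.5; srv = TCPServer.new('127.0.0.1', #{PORT}); srv.accept; sleep 1")

# Wall-clock time from launch until the port accepts a connection.
started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
begin
  TCPSocket.new("127.0.0.1", PORT).close
rescue Errno::ECONNREFUSED
  sleep 0.05
  retry
end
elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
puts format("start-up time: %.2f s", elapsed)
Process.wait(child)
```

The same launch-then-poll structure works for any container: replace the `spawn` command with the container's start command and poll its actual HTTP port.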
Thanks again for putting this together!
For Ruby/Rails developers it's been especially bad, more so in JRuby. Some major patches have landed in Ruby (MRI) recently to help make this less of a problem:
The more appropriate way, as was described before, is to take any real-world public data set, import it into Postgres, then make an app with your favorite stack to look up and display a set of entries on a page. For a more realistic example, add authentication and allow lookups only for authorized users. Then run a simple web-server benchmark on these pages.
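As a rough illustration of the last step, a sequential timed request loop is enough to get a first number. Here a trivial in-process HTTP responder stands in for the real app (which would sit in front of Postgres), and the `/entries` endpoint is hypothetical:

```ruby
require "socket"
require "net/http"

# Trivial stand-in for the app under test; in a real run you would point the
# benchmark loop below at your actual app serving entries out of Postgres.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]
Thread.new do
  loop do
    client = server.accept
    while (line = client.gets) && line != "\r\n"; end  # consume request headers
    client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok"
    client.close
  end
end

# Crude sequential benchmark: requests per second over n page lookups
uri = URI("http://127.0.0.1:#{port}/entries?page=1")  # hypothetical endpoint
n = 100
t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
n.times { Net::HTTP.get_response(uri) }
rps = n / (Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0)
puts format("%.0f requests/s", rps)
```

For real measurements you'd use a proper load generator (ab, wrk, siege) with concurrency, but the structure is the point: real data in Postgres, a real page, repeated timed requests.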
The improvement with NFS is due to redirecting lots of read operations for OS components from the HDD to the NIC, which means a different PCI device, a different interrupt, and a different set of ports. Of course that helps.
Cheap shared-tenant setups with one cheap SATA HDD for everything will never perform well, no matter what virtualization one is using.
Thoughtful partitioning of I/O requests between different PCI devices will help, as old-school system administrators have always known, but then what is the point of having virtualization? :)
You can contact me at firstname.lastname@example.org if you'd like more details.
(Docker is Linux only. To run a Docker container on OSX, you need a Linux VM.)
In short: the right way to use VMs is as dumb chunks of hardware. Using them as logical units of software makes increasingly less sense as container-native tools get better.
(disclaimer: I'm the Docker guy :)
I installed them as a dual boot at first, but all the driver issues started getting annoying. So I created a VM and it works great now; plus I dropped a couple of Linux distros on there as well.