Because I usually do it once every 5 years :-)
My setup: Linux, Kubuntu LTS. The entire system is backed up with rsync, so I have daily snapshots for the last 6 months. There is also a weekly backup to the cloud and an external HDD.
I have a primary 250GB SSD which usually sits in my workstation. When I travel I physically move the SSD to my laptop (takes 10 minutes).
If something goes wrong and my primary SSD dies, I just boot from a live CD, copy 200GB, set up the bootloader, and reboot. No need to 'reinstall' the system from scratch.
Every six months I take one afternoon and update all my tools (IDE, OS, apps...). I don't use automated updates except for critical stuff such as the browser.
If it's just you, absolutely, don't bother. But it gets much more difficult to support a variety of platforms and configurations as your team size grows. This is where vagrant+chef/puppet really shine.
Absolutely. At my previous job we had a 'standard' VM with all the necessary tools, such as Visual Studio 97. I just don't see it bringing that much benefit if all you need is a different version of Ruby.
Then again, maybe if I had the ability to quickly set up a new development-ready VM, I'd come to find all sorts of unexpected uses for it? I've certainly found duplicating Parallels VMs handy a couple of times. But I have to say that I'm still skeptical that the time spent learning Chef and trying everything out would be worth it for me.
If I were looking after 5 (10, 20...) programmers then my back of the envelope calculations would look a little different.
Have you ever had to install multiple versions of some package on the same machine? Have you ever had to help a colleague debug a weird issue that was particular to their setup? Have you ever had to work on the same project from multiple platforms and some component didn't work on one of them? Have you ever had a bug only appear in production because of a platform subtlety?
It might not look like it right now, but skills in using these tools are moving from cutting edge to essential job/contracting skills in the near future.
I don't know much about the Puppet community, but I do know that within the Chef community, the learning curve is flattening. The big development of 2012 was that we're very close to getting the VM+Chef+Berkshelf setup to the point where you can get onboarded onto an existing open-source application without knowing you're using Chef under the covers. How deep you want to dive in is up to you from that point.
I have a feeling this is going to devolve into a "well, just use vim and a tiling WM or X other thing", but the point is that just replacing Mac OS with Linux isn't always going to be good for your whole team.
If everyone's on Linux, the dev and production systems are more similar, so fewer scripting issues can arise. I personally use Arch, other coworkers use Mint or Ubuntu, and only one developer uses Mac OS X (the others switched because of the above problems; he couldn't switch because of EFI problems). Even devs new to Linux (from Windows) love Mint, and all is well and good in the world. The servers run Debian or Ubuntu, so I test my changes in a VM because Arch has noticeably newer software in its repos - but that's because I can't stand Debian as a desktop system (the package manager is too slow).
Also, the Mac OS X user uses his Mac like a Linux box. He only ever touches his trackpad to use non-keyboard friendly apps (uses Vimium for Chromium, tmux/vim for terminal/editing, etc). Our devs not familiar with the keyboard generally use Sublime on Linux, which is a fantastic editor as GUI editors go.
The trackpad drivers aren't that bad, but may require some configuration in the xorg conf (I got multitouch working, but disabled it because I find it annoying).
We don't standardize on a single dev platform because each developer is different. Developers understand that using Linux is going to be easier in the long run, but they must also understand that their choice of OS must not negatively impact productivity. We've had a few developers try starting on Windows, but we refuse to support any dev environment besides Ubuntu Linux, and when they ran into problems, they switched to Mint/Ubuntu.
I've never seen a developer be more productive on Mac OS X than on Linux. In fact, those gestures usually end up slowing them down compared to what should be easy with the keyboard (and all that switching between trackpad and keyboard is not productive).
The VMs provide isolation for testing and developing the server components. (I'm assuming you are developing server code). Through synced folders, you can still use your favorite $EDITOR on the host machine -- vim, emacs, sublime text, whatever. Onboarding is faster, and you can manage multiple node setups.
Further, you have the additional advantage of being able to replicate staging or production systems on your local box. This reduces time spent round-tripping to a staging server.
This lets you standardize the server code without requiring all developers to standardize on editors/ide/etc.
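As a sketch, a minimal Vagrantfile along these lines covers the synced folder and port forwarding described above. This uses Vagrant 1.0-era syntax; the box name and paths are placeholders, not anything from the thread:

```ruby
# Vagrantfile -- a minimal sketch, assuming a Vagrant 1.0-era setup.
# "precise64" and the folder paths are placeholders; adjust to your project.
Vagrant::Config.run do |config|
  config.vm.box = "precise64"                  # assumed base box

  # Browser on the host can still hit localhost:8000 as usual.
  config.vm.forward_port 8000, 8000

  # Edit with your favorite $EDITOR on the host; the code appears in the guest.
  config.vm.share_folder "app", "/home/vagrant/app", "."
end
```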
Personally, I am more productive on OSX than on Linux. But that's because my actual dev environment is tmux+vim. I let OSX and Mac hardware handle the stuff that it does much better than Linux: being able to talk to anyone, anywhere, with anything (networking), and being able to reliably suspend and wake.
Completely anecdotal and not at all useful in the context of this discussion. I've seen some of the top minds in the Python community using Mac OS with extraordinary productivity, as well as some of my fellow developers at Pathwright. I myself run Linux, but think it's foolish to suggest that I'm automatically more productive on it just by virtue of it being Linux.
I feel like you're missing part of the point, though. Even between different Linux distros (or versions of Linux distros) there exist differences in the packages. Some of these differences can lead to unexpected breakages. If you've got a smaller team, one or two guys are going to end up fixing everyone's disparate environments as things break unexpectedly (or you need to bump a package version or install something new).
The Vagrant (or VM) setup in general lets everyone use their preferred OS/distro while having the same. exact. environment to run the product you're working on. It also cuts down on support time, and your dev environment won't be just "kind of close" across your team, it'll be "really close".
Even so, ThinkPads have pretty good battery life, and on some models the CD drive can be replaced with an extra battery. I've heard of some that can go nearly two working days before recharging.
You can also easily turn off ports you aren't using, at least in Linux.
My solution is to set up my perfect dev environment in Linux (I use Fedora) on a partitioned USB pen drive. Then I either boot from it (if I'm on a crappy computer) or boot a VM directly from the USB if the computer is more capable. It's also quite nice if there's an emergency and you're away from your computer -- you can just rush into an internet cafe, plead/bribe them to let you boot from your USB, and then have everything, SSH keys included, ready for urgent repairs!
It shouldn't matter where your environment is (a VM, a laptop, etc), you should be able to recreate it easily and reliably. This is what the automation tools are for. I prefer Salt Stack and have a salt config for my work laptop. If something happens to the laptop or I get a new one, I just run salt, wander off for an hour, and when I get back my entire environment is set up. I also use it with Vagrant, so that when a new dev joins the team they can run one command and have a copy of the project set up in a Vagrant box, databases and all, ready to go.
You could set up a similar thing so that if you lose that USB key you can have your perfect setup reinstalled with minimal fuss.
This implies you are running a terminal emulator in X on the VM and doing your coding there?
I use VMs for development all the time, and a headless VM with SSH access is perfect. It's almost identical to doing development locally.
You really can make it a near exact match for a local development environment. Bridge your ports to your expected ones on the host machine, share the code folders, and SSH in if you need to punt a dev server when you make changes (or set your Editor/IDE up to do that for you). Browser can still point at local:8000 or whatever you expect.
Vim should be able to do everything that GVim can do, if you add a few lines to your configs. I'm not sure what specifically though.
I bring my entire development setup with me when using a server, and preparing it is as simple as cloning Homeshick (or Homesick if you have Ruby), cloning your dotfiles repository, and installing any applications that you want to use.
What I wanted to point out is that everything works perfectly and I can instantly switch between applications and in/out of the VM just fine.
Backing up the VM is of course as simple as copying ~40GB to an external drive or the desktop at home (gigabit ethernet ftw) and the nice thing is that I can have multiple versions of it on various machines.
Yep, you read that right. I deploy to Debian servers but develop on Linux Mint or OS X. The key is that the environments are very similar but not identical. The payoff comes when you break yourself of the habit of relying on accidents of deployment instead of building a general-case solution that works in multiple environments.
You scan /proc to look at running processes? That's great, until you're on a machine without /proc. Better to spend an hour learning how your dev platform (Python, Java, whatever) abstracts that away for you. Trying to send an email by shelling out to /usr/bin/sendmail? Oops! That's broken in lots of places; better learn how your dev platform handles it!
The big win comes when you upgrade your stage and prod environments to a newer distro - or a different one altogether - and your stuff keeps working because you've relentlessly whittled away all the OS-specific dependencies.
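To make the "use your platform's abstractions" point concrete, here's a small Ruby sketch (Ruby only because that's the stack elsewhere in the thread; the same idea applies in Python, Java, etc.). Instead of hardcoding "/tmp" or scanning /proc, lean on what the standard library already provides:

```ruby
require 'tmpdir'     # portable temp-dir lookup
require 'rbconfig'   # portable platform identification

# Each of these works unchanged across Debian, OS X, and Windows,
# so nothing here depends on an accident of the deployment OS.
scratch = Dir.tmpdir                    # the platform's temp dir, wherever it lives
host_os = RbConfig::CONFIG['host_os']   # e.g. "linux-gnu" or "darwin12.2.0"
pid     = Process.pid                   # your own PID, no /proc required

puts "scratch=#{scratch} os=#{host_os} pid=#{pid}"
```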
And really, how trustworthy is that abstraction? You've basically outsourced responsibility for your server environment to your language community. What if they create a bug that is exposed on Debian, but not on OS X?
I do all my development now using Vagrant running Ubuntu VMs but I still do all my editing with the same Windows editor I've used for years running on the host machine.
I suppose there's always RDP or VNC, if you can stomach their limitations.
One is the tried and true Xwindow forwarding. Usually conveniently handled by ssh now, but in the olden days (I'm talking early/mid 90s) we used alternatives.
The other is VNC in. This has been extensively discussed by the "I use my ipad as my development machine" crowd. I've had pretty good results with VNC over the ... decades.
Default CarrierWave (Rails gem) settings try to clean up the tmp files after file uploads. This is a good thing normally, but to Ruby I am on a Linux box, so it tries to `unlink`, which fails catastrophically because the mounted drive was NTFS.
Then there are NPM modules. It's kind of funny, because Node.js and NPM are actually pretty good on Windows for development purposes anyway, but if I run them from my VM, they detect my OS as Linux and try file system operations like symlinks. I couldn't even install Express.
These are just a few of the things I ran into.
# to allow symlinks to be created
config.vm.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/vagrant", "1"]
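For the CarrierWave tmp-file issue mentioned above, one possible workaround (a sketch, assuming a standard CarrierWave uploader; the class name and path here are made up, not from the original comment) is to move the upload cache off the NTFS-mounted shared folder so `unlink` happens on a native filesystem:

```ruby
# Hypothetical uploader: override cache_dir so CarrierWave's tmp files live
# on the guest's native filesystem instead of the vboxsf/NTFS mount.
class AvatarUploader < CarrierWave::Uploader::Base
  def cache_dir
    "/tmp/carrierwave-cache"   # assumed path; anywhere outside the shared folder works
  end
end
```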
That being said - don't! As others said, the X protocol is network transparent and lets you display windows from a server anywhere. That means you can open a GUI tool on the VM and see its window(s) on your desktop, with your window manager decorations, basically indistinguishable from an app on your main system. It's possible on Windows thanks to Xming (http://sourceforge.net/projects/xming/), which is a lightweight X server for Windows.
If you find yourself using a big IDE it can sometimes be a pain. Most IDEs will let you do remote debugging etc., but will still build on the local machine and use the local JVM to enable some features.
<insert standard disclaimer about how I don't work for/with Jetbrains, just a happy customer, etc>
Also, I can SSH/VNC/X-forward/whatever into a server machine with performance stats far beyond any currently imaginable laptop. It's like owning a laptop from 2023 today in 2013.
Easier and more reliable to have everything working locally (With local dev environments using vagrant/virtualbox) and use the internet connection for syncing git changes or doing research.
How Mosh works
Remote-shell protocols traditionally work by conveying a byte-stream from the server to the client, to be interpreted by the client's terminal. (This includes TELNET, RLOGIN, and SSH.) Mosh works differently and at a different layer. With Mosh, the server and client both maintain a snapshot of the current screen state. The problem becomes one of state-synchronization: getting the client to the most recent server-side screen as efficiently as possible.
This is accomplished using a new protocol called the State Synchronization Protocol, for which Mosh is the first application. SSP runs over UDP, synchronizing the state of any object from one host to another. Datagrams are encrypted and authenticated using AES-128 in OCB mode. While SSP takes care of the networking protocol, it is the implementation of the object being synchronized that defines the ultimate semantics of the protocol.
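As a toy illustration of the state-synchronization idea (this is not Mosh's actual SSP, just a sketch of the concept in Ruby): both sides keep a snapshot of the screen, and only the changed rows cross the wire.

```ruby
# Toy state sync: assumes both snapshots have the same number of rows.
# Returns only the rows that differ, as [row_index, new_line] pairs.
def diff(old_screen, new_screen)
  new_screen.each_with_index
            .select { |line, i| old_screen[i] != line }
            .map    { |line, i| [i, line] }
end

# Applies a patch in place, bringing the client snapshot up to date.
def apply_patch(screen, patch)
  patch.each { |i, line| screen[i] = line }
  screen
end

client = ["$ ls", "foo  bar", ""]
server = ["$ ls", "foo  bar  baz", "$ "]

patch  = diff(client, server)   # only the two changed rows are sent
client = apply_patch(client, patch)
# client now matches the server's snapshot
```

Mosh's real protocol synchronizes richer objects than an array of strings, and over lossy UDP, but the "send the delta between two snapshots" shape is the same.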
It's nice to have at least semi-parity with the production environment. This is possible because we can utilize most (not all, due to no central Puppet server) of the same Puppet modules which we use (or I built), in the VM.
Essentially what you gain is not worrying about whether a developer will break their machine/VM and go 'Whoops, can you fixor it?'. Additionally, you no longer have to worry about things like PHP/Node/Ruby version mismatches between dev and production. We've gone from sometimes having issues with certain code not running the same as in development, to just throwing it up on the staging environment through deployment tools and it just runs!
(I haven't used it, but I have used -- and love using -- Salt Stack.)
I think given a Linux host environment this might work better, or a smaller code base, but at least in these two situations it was a failure.
I realize that this isn't an issue for 99.9% of startups, but in the non-startup corporate world where there are security implications to keeping everything in a publicly accessible code repo (healthcare, government, education in some states) then you've got another complication.
The only thing I've been able to work up so far is sharing a folder on my desktop through to the VM and running commits from my desktop.
Vagrant + Chef + Berkshelf is better!
You can define the cookbook dependencies in the Berksfile and version it along with Vagrantfile. Berkshelf is like bundler for your cookbooks. It assembles cookbooks from a variety of sources and loads them into your Vagrant-managed vm. Check it out at http://berkshelf.com
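For reference, a minimal Berksfile might look like this (Berkshelf 1.x-era syntax; the cookbook names and versions are illustrative, and 'myapp' is a hypothetical local cookbook, not anything from the thread):

```ruby
# Berksfile -- a minimal sketch, versioned alongside the Vagrantfile.
site :opscode                  # pull community cookbooks from the Opscode site

cookbook 'nginx'
cookbook 'postgresql', '~> 2.0'

# A hypothetical application cookbook kept in the project itself:
cookbook 'myapp', path: './site-cookbooks/myapp'
```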
To help get things going in that end, I've been working on Kamino (https://github.com/hosh/kamino). It's intended to generate a Vagrantfile+Berkshelf file for use in your project. Right now, it only supports a basic Rails stack.
Pull requests welcomed :-)
The second, and more specific, reason is that it's the location of one of my most intense love/hate dilemmas: Xerox PARC, which pushed the first 'graphical user interface' (GUI) instead of what came before, usually typing text into a command line. Command lines are mostly easy to automate. GUIs are mostly a total PAIN to automate. I'm wanting to automate, willing to automate, waiting to automate.
Librarian or Berkshelf seem to be the two main contenders to make updating cookbooks similar to updating gems with bundler.
Where they differ is in where they put your cookbooks, and what additional features they offer. Librarian-chef puts your cookbooks in a "cookbooks" folder in the project root, while Berkshelf keeps a global cache of all the versions of each cookbook you've installed outside of the project. Both provide support for uploading your cookbooks to a Chef server, librarian-chef through plain knife and Berkshelf through its own upload command.
Berkshelf also features scaffolding support for Vagrantfiles and new cookbooks, and generally has extra features specifically for rapidly iterating on cookbooks themselves. It also provides a vagrant plugin to manage bundling/uploading cookbooks to a Vagrant VM, since they need to be copied out of the central repo and made available.
Overall, I've found that I like Berkshelf better. It feels a little more idiomatic, and I like having my cookbook versions shared between projects rather than duplicated everywhere. If you have any specific questions about either, I'd be happy to answer them.
I will note that the librarian-chef also has a vagrant plugin: vagrant-librarian-chef.
No hard numbers, but the main thing is that Berkshelf could not really have happened without enough standardized community cookbooks. By putting the cookbooks into a global directory, it works more like Ruby gems and Bundler: instead of focusing on your customized (and probably divergent) cookbooks loaded into your project directory, you focus only on the few cookbooks that are specific to your needs. It comes out of a bigger theme in the 2012 Summit, that of creating "library cookbooks" that get assembled by your "application cookbook". If it's a library cookbook, you don't need it in your project directory any more than you need to import a Ruby gem into your Rails directory during development.
Test-kitchen is also worth looking into, though as far as I know, that still uses Librarian under the covers. Berkshelf support for test-kitchen is pending (http://tickets.opscode.com/browse/KITCHEN-9).
In summary: Berkshelf is part of the overall trend in modularizing these cookbooks and developing tighter engineering discipline.
Haven't tried Librarian - only because I saw Berkshelf first and it integrated nicely with Vagrant.
I'd be interested to hear how people were setting up their local chef environments.
If you're looking to use knife solo for setting up production environments, perhaps one thing to look at is Vagrant 1.1+ support for EC2 provisioning (assuming you're using AWS).
You can also have Berkshelf install the declared cookbooks into a directory with `berks install -c $cookbook_dir` and then use knife-solo. It seems awkward, but I'd rather have dependencies managed in some way than resolve them by hand.
We got our dev setup partially automated like that. Even the partial setup was better than what I'd heard it used to be -- a week to set up a laptop with the tools and the whole platform. When I was onboarded, it took about a day, more or less hands-off.
It's particularly good when different clients have different OSes. And you can even do hardware development - I have tested USB drivers in a VM client that talk to hardware connected to the host.
The only drawbacks are the initial startup time (particularly pulling the latest updates after install) and archiving the VMs (they're large, so they fill up a laptop SSD). I export to OVAs on my backup system and then wipe the VM. Another worry is that VirtualBox has been flaky recently (http://www.acooke.org/cute/VirtualBox1.html http://www.acooke.org/cute/UbuntuonVi0.html) - but OVAs can be imported into other hosts...
Boxen is about getting your MacOS environment set up with tooling.
The author of the original blog posting is talking about how to automate the creation of a production-like environment in which to do your development.
The creation of that environment requires a few tools installed on your physical Mac, which could be managed via Boxen. For example, you might use Boxen to automate the installation/upgrade of Vagrant or your IDE of choice.