In our projects' README.md we have a section for installation. It's about 7-12 terminal commands. It takes us less than an hour to get someone set up. I guess what I'm trying to say is that it's easy enough at small scale. We also get some good knowledge transfer out of this process. I'm put off by the whole "let's throw Puppet at it", but GitHub is not small-scale.
A toolchain like this is needed for Apple to move widely into the enterprise. As a sysadmin, it is a major pain to worry about unpatched, outdated Apple laptops that have to conform to a security policy. Without automated tools like this, you are left running around patching the latest Java/Flash/PDF issue!
I still don't understand why Apple hasn't built something like this for the SMB market. Maybe they don't want to play in the true enterprise space, but even if a company has 5 workers, having a central cloud service to handle system configuration, baselines for security, backups, vpn, software push, hardware replacement, etc. would be essential.
On mobile there are a variety of MDM services, so you can almost do this for phones, but there isn't much for the SMB market for desktops/laptops. In the Microsoft world there are some tools, but even those are generally too much work for a small business to set up (even a non-tech business with 100 employees isn't likely to do it; at best they'll have Ghost or something to image new machines -- a tech business might after 25-50 employees).
They don't see themselves in that market...yet, and the trends don't look good in that direction either. As we all know, Apple has continued to shy away from actual computing in recent years. Now if you could do something like this for Linux, businesses that don't yet have an MS infrastructure could find this very compelling, especially if the rumors of Linux MS Office come true. I work in IT in a large nationwide law firm and we spend so much of our days dealing with viruses and general Windows headaches when what people do is mostly either 1) through Citrix, where we manage the Windows instances, or 2) on the web. If MS Office for Linux dropped, I think businesses like ours would find this a very compelling option.
"we spend so much time of our days dealing with viruses and general Windows headaches"
Rather than maintaining all this enterprise stuff to fight off the Windows tar-pit of viruses and dreaming of a day of MS Office on Linux, why don't you switch to Macs which require no such special effort to stay virus-free and already have MS Office available?
As does OS X Server, but it doesn't "just work" out of the box the same way Macs do in general, or even iCloud for individuals.
For a 5-25 person SMB, there's not a full-time IT guy who could set it up. At best, there's a desktop support/printer/office manager type. At a tech company, you might get lucky (or unlucky) and one of your devs or site-sysadmins spends much of his time setting stuff like this up.
ARD is pretty weak compared to MS SCCM, and super-weak compared to something like AirWatch or Zenprise (MDMs).
Exactly. I have been fighting to allow people to choose Mac or PC when they are due for new hardware, and now that 80% of people are choosing Macs, it is getting to be a pain to manage them.
I am so glad to see this as an alternative to the tools that Apple has been neglecting since they stopped trying to sell server hardware.
For enterprise Mac management I've found the Casper Suite to be awesome: http://www.jamfsoftware.com/ (no affiliation)
It can easily create thin images for setting up new machines (make a small base install and then give the computer applications X Y and Z), package software and remotely deploy it to computers, create rules to push out new versions of software to computers, etc. You can set up self service so that users can browse a general store of applications but you can also assign applications to specific computers. They handle iOS devices now too, and their community is great. It also has SCCM and Altiris plugins, but I haven't used either of those features.
Ultimately someone is responsible for that machine; I think that role should fall to the sysadmins, if they exist yet.
GitHub is large enough that they should have in-house IT looking after the desktops/internal network. They might even have two teams, one looking after employees, and one looking after the server infrastructure.
Start-ups grow organically, so there might not be "someone" at first, other than the developer. This quickly turns into a burden when you have 50+ employees running around and warranties start to lapse, there is no current inventory list, you have a mish-mash of machine configs. Then comes the stage of getting a consistent config on each machine, so that person X and person Y can collaborate, using the same dev tools, at the same rev level. Looks like Puppet/Boxen is a tool to solve that problem.
This problem is compounded when you are in a hiring frenzy, because new people almost appear out of thin air, and you are supposed to have a machine for them. Having a quick deploy script will save your hairline considerably.
Not really. What it gives you is easily configurable, absolute control over memory usage. I can ensure that my local dev environment doesn't use more than 2GB of RAM. From one setting. In one place. In about 20 seconds flat.
Bzzzt! Jumped to conclusions & put words in my mouth! Thanks for playing. BTW, I'm an iOS developer and I've been using OS X as almost my exclusive dev environment since 2003.
Sorry, forgot that I have to keep quiet about shortcomings of OS X, or else.
> I disagree. That doesn't belong in a consumer OS.
I applaud your brave stance against the straw man you just conjured. [slow clap]
I share your opinion on Vagrant, but it still stands that there are lots of tools you use on your host machine that you might want to make simple to share and configure organizationally. Look into how Pivotal Labs (the consultancy) works and it's a little mind-blowing and inspiring -- they all share the same configs and many of the same defaults and dotfiles and shortcuts (which they decide through deliberation and consensus), they reformat their MacBooks from scratch between each project (for infosec), and they all know how to write Chef recipes, so if they customize their system in a way that they want back after they wipe, they write a recipe for it and share it back via pivotal_workstation :)
It's pretty cool. And even if not everyone can put the time into building that sort of solution from scratch, pretty much everyone can ride their coattails.
Vagrant is basically a headless box running on a local directory.
With Vagrant you have your production toolchain open in a terminal (via SSH), where you can run a dev version of what you are building by forwarding ports to localhost.
You can then run browsers and IDE consoles across as many monitors as you want.
I'm not trying to flamebait, but are macs so popular that it's just assumed that all new hires want one as their tool? At my company we're still asking new employees which platform they want to work on, is this falling out of fashion?
Everyone technical I see in silicon valley, with maybe 5 exceptions, is on Mac laptops. A fair number of serious devs have Linux workstations to go along with their Mac laptops.
People who do some other stuff (heavy email users, some video/audio people) do sometimes use Windows (it's weird, but I think the Windows audio stuff is better for realtime now than Mac).
(the exceptions are people with FreeBSD, NetBSD, and Linux laptops)
I'd be pretty comfortable as a Silicon Valley employer only supporting Macs for office automation, and then either Mac or Linux for development workstations. If someone really wants Windows, s/he can support it independently.
The harder problem is phones -- there are people who are religiously attached to iOS and to Android, and you basically need to support both. There are pretty good MDM tools to cover both at the same time, but it does mean you can't push enterprise apps unless you do cross-platform development.
At my company (a small CMS company bought a few years ago by an older, larger media company, recruiting in Stockholm) we're offering new employees a Mac or Linux laptop, but there is a lot of hinting that "you'd probably be best off picking a Mac", and a few new hires have opted for the latter for reasons that to me sound like they think it's the "company DNA" or something.
I'm not religious about platform choice, but I'd hate to see a future where developers are pigeonholed with regard to their tools.
Yeah -- by OA I mean machines for handling email, spreadsheets, etc. Cross-platforming documents kind of sucks still.
A reasonable compromise is Mac laptop for office tasks, and a VM for development, and then a desktop development machine. If someone is super mobile as a developer, I could see a Linux laptop as an option.
The annoying thing is that if you really want security, you are basically stuck with Windows 7 or Windows 8 now, at least for desktop/laptops, and iOS or BB for phones. (Windows and platform management has gotten better -- OS X is actually the least secure OS in a major corporate environment today, due to lack of security and management tools. It's still decent for unmanaged use vs. Windows or Linux.)
In an environment processing highly sensitive information (say, a law firm working on M&As, or a print shop handling annual reports), where the tools aren't that essential to work, you could have a legitimate "you must use only our locked down systems" argument. I wouldn't really want to work in a place like that, though.
The long-term solution in high security environments is probably a mobile-based OS for desktop/tablet/mobile use, and then virtual desktop into either a super locked down existing desktop OS, or some new environment. For a lot of stuff, locked-down tablet/mobile (or ChromeOS) connecting to SaaS apps could probably do it.
Likewise. In fact, I like the Windows (7) UI better.
I'm a full time desktop Linux user, but that's not for the desktop. I'd be happy with Windows as UI on top for "desktop stuff".
Note: I've been using macs daily, for hours from OS 6 to OSX 10.3, before I started using Windows and Linux more.
Fair enough. I was super into XMonad when I used Linux, but never really found the need on Mac OS. I remember reading up on this topic back then, maybe that's why I didn't make the switch.
We don't assume anything. New employees here at GitHub can choose whichever platform they prefer—it just so happens that most people here go for Macs of their own accord :)
I think you hit the nail on the head. The reason so many devs go for Macs nowadays is to have a good laptop that you don't have to babysit/configure much yourself and that's well integrated with a Unix toolchain. On a desktop, many of these advantages go away when you compare to a Linux workstation, so many go for that instead.
I had to babysit OS X a lot more than I had to babysit Linux. Ubuntu might take one or two tweaks when installing on a MacBook, but twenty minutes later it runs perfectly forever (or six months, whichever comes first).
I had to babysit OS X all the time, due to its lack of a good package manager. This was years ago, I don't know if it has improved, but it grated on me greatly.
> This was years ago, I don't know if it has improved, but it grated on me greatly.
Well of course, OS X has heavily improved since the early days (I'd call 'early' everything below 10.4). The surge of developers wanting to use OS X has also increased the demand for package managers, and so they came: first Fink, then MacPorts, and nowadays most use Homebrew. You install them once using the standard OS X pkg installation facility and then you're good to go - the range of 'backend software' packages in brew is comparable to apt-get, I'd say. For everything from Linux that needs a GUI I have a VM, for everything Windows-only I have another VM. There's never even a question whether I can run something locally - once you have that it's hard to give it up again, really.
That being said, the best OS X release is probably 10.6; since then I haven't liked the direction very much - but on Macs you're basically forced to use the newest OS (Xcode compatibility, hardware compatibility once you upgrade your MacBook). The iOSification hasn't been a dealbreaker to me so far; it's IMO still a better all-round experience than any other notebook, especially considering the service quality, which is second to none where I live. A friend of mine bought a $2.5k Lenovo - the board went dead after 1 month, they picked it up, and he hasn't seen it in the six weeks since, no replacement. I had a similar issue with my rMBP - got it back, fixed, after 2 days. Similar stories about Dell. HP might be better, but their hardware is crap IMO. I just can't trust any other laptop manufacturer at the moment, which makes me sad.
I somewhat disagree with homebrew being thrown in with Fink and Macports. The benefit to homebrew is that it doesn't require having package maintainers to babysit packages and make sure the binaries work and are updated. You basically just take the raw source and someone throws a patch on it. It's actually a thousand times better because you don't have to worry about compiling binaries that work on everyone else's systems no matter what crazy config they have.
What I was trying to get at is that those systems were unsustainable and I can see homebrew developing into a legitimate "default" package manager that is infinitely maintainable rather than Fink/Macports which were simply shims that were doomed to fail by their architecture.
Well for one, you probably don't want them to be apt-get in terms of installing binaries. We had that with Fink, etc. and it sucked and was outdated quickly. Homebrew is pretty solid though, probably better than what Apple would come up with. They actually helped make what you would like a reality partially by releasing the Command Line Tools so you don't have to bother with Xcode.
Why not? Binaries are so much better for a platform like the Mac (basically just 32 bit and 64 bit Intel architectures at this point, with fairly homogenous OS versions across the installed base). Why on earth make everybody waste their local machine cycles compiling something that should end up the same for everyone anyway?
Fink was only out of date because they weren't keeping it up-to-date (perhaps their builds weren't automated? That's pure speculation, I have no idea). Debian manages to keep things more or less up-to-date with more packages (and architectures) than Fink ever had.
Incidentally, if you put your Homebrew in /usr/local then it will often install using the "pour" technique which is pure binary distribution.
I loved Fink until it started languishing, and I love homebrew now: its "everything in git" philosophy and very-open-to-pull-request attitude of the maintainer make me optimistic that it won't slow down and become irrelevant like Fink did. I think that's the real difference between the projects.
We offer new employees Apple hardware, on which they are free to use OSX or Linux. Apple hardware is the best hardware out there, so that's a no-brainer. We avoid Windows for security and compatibility reasons.
Yes, sadly. At my interview they said I could have a Linux machine, and I believed them. I got a freaking Mac. It's assumed that everybody secretly yearns for a Macintosh.
Honest question; I've never understood why people have such a fixation on puppet-style tools.
Small-scale? Run a simple script.
Large-scale? Use a network-hosted configuration (optimally read-only root and network booting so the entire system is known-good) to avoid the entire class of configuration drift / migration / state-accrual problems associated with the above.
I just see puppet as sort of trying to provide the latter and failing, resulting in a complex version of the former.
> Honest question; I've never understood why people have such a fixation on puppet-style tools.
There are two things I want:
1. To describe the desired correct configuration as a directed, acyclic graph.
2. To have some automatic compare-and-repair mechanism regularly bring my systems to such a state.
If you don't have a good tool for those two requirements, you wind up reinventing it anyway.
Your small shell scripts start being littered with lots of checks for this and that file, if-thens and cases. Then they break when script 27 silently gets out of sync with script 42.
You realise one day that system configurations can be expressed as DAGs (possibly it's a new insight, perhaps you were reading ITIL documentation). And you begin to dream about a tool that can take a descriptive DAG and generate the correct shell scripts. You have now half reinvented puppet.
But the systems still get out of sync. So you start tinkering with a tool that periodically checks each system and reruns the correct script. Now you have to ensure that your scripts are all idempotent. All those if-thens and checks creep back in.
So now you begin to dream about a system that only generates the steps needed to close the gap between the DAG and the current state of the system.
Congratulations. You just reinvented the other half of puppet.
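To make that concrete, here's roughly the kind of end state you end up describing (a minimal Puppet sketch; the package, path, and service names are just placeholders):

    # Puppet builds the dependency graph from this and, on every run, only
    # performs the steps needed to reach (or restore) the described state.
    package { 'nginx':
      ensure => installed,
    }

    file { '/etc/nginx/nginx.conf':
      ensure  => file,
      source  => 'puppet:///modules/nginx/nginx.conf',
      require => Package['nginx'],                 # an edge in the DAG
    }

    service { 'nginx':
      ensure    => running,
      enable    => true,
      subscribe => File['/etc/nginx/nginx.conf'],  # restart only if the config changed
    }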
> 1. To describe the desired correct configuration as a directed, acyclic graph.
If you have already decided on your solution before you have analyzed the problem then there's nothing to discuss.
> 2. To have some automatic compare-and-repair mechanism regularly bring my systems to such a state.
Right. My fundamental point is that puppet-esque solutions only describe certain aspects of the system state, ultimately failing here and resulting in 'configuration drift'. Entire system images are far more elegant.
Most of the use cases I have seen for puppet-style stuff are legacy-situation based and only make sense within that context.
So what is the alternative to puppet-style solutions? Personally I use full system images on cloud and cluster solutions (corosync/pacemaker) to maintain server state.
> If you have already decided on your solution before you have analyzed the problem then there's nothing to discuss.
You can see how I arrived at that solution.
> My fundamental point is that puppet-esque solutions only describe certain aspects of the system state, ultimately failing here and resulting in 'configuration drift'.
Particularly with runtime / process supervision. One of my pet peeves.
> Entire system images are far more elegant.
Yes and no. It really depends on what your cost/benefit tradeoffs are. I like system images for startup, but systems still drift from their initial configuration no matter how that configuration is established (DAG or blob).
Even with system images you'll need to detect divergence from ... what, exactly? Doing a byte-for-byte comparison is going to suck.
Do you just kill and relaunch periodically? I can see that being stochastically effective.
> Do you just kill and relaunch periodically? I can see that being stochastically effective.
Personally no, but you easily could.
> It really depends on what your cost/benefit tradeoffs are.
Well, realistically, to facilitate automated deployment and testing of multi-machine systems you really do need to be operating at this level of abstraction. (ie. to declare your desired state). Once you reach this point, a configuration file for some version of a package within some node is really a distraction rather than a help; it should have been abstracted to some known-good state and taken out of the equation if you are to have any hope of staying sane. Many people use entire system images coupled with service monitoring as the de-facto segregation point for management purposes.
> Do you just kill and relaunch periodically? I can see that being stochastically effective.
While that's entirely possible and probably a reasonable approach in some cases, the corosync/pacemaker de-facto/spiritual approach is to detect issues with a given resource (roughly: 'service instance'), automatically destroy it and fail over to another instance thereof, and then potentially start another instance to replace it automatically on the same, or some other cluster node. To facilitate rapid failover, the master/slave paradigm allows you to have live backup nodes running and promote them as masters easily. Any type of hardware or software can be scripted as a managed resource. The type and frequency of resource health monitoring checks can be custom defined.
I still don't quite understand how you manage version control with entire system images. I guess you could just write a changelog, but that requires a large amount of self-discipline to ensure that the changelog exactly matches system state. Let's say you discover some minor instability, and discover that it was introduced 8 months ago, but the changelog for that image was completely innocuous. Can you do anything other than throw out all 8 months' worth of images and try to work your way back to present via the changelogs?
Puppet-like systems have the same issue in that they don't attempt to specify the entire system, but at least with puppet if there is an issue due to system drift you can start with a clean base installation and re-run it.
> I still don't quite understand how you manage version control with entire system images.
You just uniquely name the environment, for example with a version number, and/or use a snapshot-capable datastore.
> I guess you could just write a changelog, but that requires a large amount of self-discipline to ensure that the changelog exactly matches system state.
Definitely don't do this.
> Let's say you discover some minor instability, and discover that it was introduced 8 months ago, but the changelog for that image was completely innocuous. Can you do anything other than throw out all 8 months' worth of images and try to work your way back to present via the changelogs?
If you write a test that can trigger the issue and replay the test against past images ("regression test") then you will identify exactly where the issue was introduced. The key thing is: don't have manually configured what-not in production. Keep it versioned, keep it solid, keep it known.
> Puppet-like systems have the same issue...
Exactly. This is my point. They don't really deliver on the promise of decent automation, because they are inherently patchwork/partial-scope in approach.
I still feel you're ultimately defining a DAG ("I want 20 web servers, 3 database servers and 2 load balancers") and relying on some compare-and-repair (health checks and replacement).
But you've moved your unit of management from packages etc to machines. I've previously argued that this is the key thing that will change web hosting economics. I was sorta wrong, but I can see now where you were coming from.
Snoop-doggy move over; Nerd-DAGgy defines the season!
In an interview with MTV on his new formal collection, taglined 'S-eXpression', Nerd-DAGgy was asked what the secret of the hot new look was. In characteristic brevity, he replied "It's declarative."
-- This Season's Assembly, Fashion World, February 16, 2013.
The feeling that I want to write something quick or easy shouldn't depend on the size of my infrastructure, it should be easy either way -- 500 servers, 50, 1, doesn't matter.
That being said, our modules do need some patch-love for things like launchd and OS X groups, we've been a lot more Linux/Unix focused to date. (Patches are welcome of course!)
Thanks for your reply Michael. I had a look at Ansible and your personal website and you definitely have been doing work with a fair bunch of serious systems in this space, so I really value your perspective. However, looking at the ansible news and documentation - "playbooks, configuration management, deployment, and orchestration" - I really couldn't see how it differs significantly in paradigm to puppet. Is it just the ability to converge on some defined configuration automatically? It's unclear to me. Would you be so kind as to summarise the differences in brief? Thanks.
Sure, multi-tier has been a focus from day 1, so a common example is a workflow where you want to hop between tiers of a 3-tier application, touch load balancers and monitoring along the way, and so on... that's all trivially possible primarily because it's push-based, not pull-based. Ansible can both code up and execute those kinds of workflows very quickly. Another core aspect is not requiring any agents on the remote nodes, not even having a server, and not needing an additional PKI. Because it's using SSH, it's using your OS credentials, so if your boxes are, for example, using Kerberized SSH, that just works out of the box.
The resource model is still very puppet-like (state=absent/present, idempotence, etc), in that we have states and convergence, but I have tried in many places to not make this a programming language, and keep the amount of syntax down. A key focus was making the content readable, so you didn't have to think about dependency ordering and could tell by skimming the playbooks top to bottom exactly what they will do.
Network-hosted configurations are the scourge of most admins' existence due to the emphasis they place on network reliability and performance, along with the SPOF of the network storage.
The other aspect is that many admins really do not like the bad habits associated with an image-level abstraction, which leads to many hidden dependencies and configuration settings. A package-level manifest like Puppet, Chef, or Ansible enables much more of a complete, explicit specification of the environment along with cross-machine dependencies. Such a description also allows easier reassembly of subsets of the system into different combinations.
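For the cross-machine part, a rough sketch of what that explicit specification can look like with Puppet's exported resources (this assumes storeconfigs/PuppetDB is enabled; the check names and commands are made up for illustration):

    # On the database node: export a record describing the service it provides.
    @@nagios_service { "check_postgres_${::fqdn}":
      host_name           => $::fqdn,
      service_description => 'PostgreSQL',
      check_command       => 'check_pgsql',
      use                 => 'generic-service',
    }

    # On the monitoring node: collect everything the other nodes exported.
    Nagios_service <<| |>>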
The trend towards image-level abstractions does work for many as an alternative: Netflix, for example, avoids Puppet and just reassembles AMI images on each deploy, with auto-discovery of cross-machine dependencies at runtime via (e.g.) ZooKeeper or other cluster facilities. But they have the discipline not to get into an image-sprawl situation.
If they are always on the network, it's trivial, you just boot from the network as with any other machine.
If they are often on the network, you can use occasional rsync, for example to the system partition either at boot or in the background. That way, boot is possible even if disconnected.
If they're not often on the network, and you are seriously considering doing this at all, I would reconsider which problem you are trying to solve.
Also: I don't know about anyone else here, but I would prefer not having a company manage my laptop. The use cases for such a scenario, in my view, are pretty hard to imagine.
That was my first thought as well. And then I started wondering why they are automating so much setup on OS X itself instead of using something like Vagrant for dev stuff.
I find OS X to be pretty frustrating to work with natively. I've long tried to keep things like Postgres and Python consistent, but even a 10.x.y update can break things. Even with homebrew and postgresapp, it's challenging to keep the system from breaking. I'm currently running 10.7 precisely because I didn't want to rebuild my setup on 10.8 (I've since started using Vagrant much more aggressively to help solve this).
I realize that part of the point of boxen is to help set a baseline but it seems like a Vagrant box that mimics the actual Github stack would make more sense.
Understood, but the original blog post begins “Boxen started nearly a year ago as a project called “The Setup” — a pipe dream to let anyone at GitHub run GitHub.com on their development machine with a single command.”
It was this line specifically which implied that Boxen is used to emulate the production environment in OS X and spurred my comment.
Isn't Puppet overkill for this? Seems like a bunch of Ruby to build command line arguments. Bash would have been easier. We have a workstation dotfiles repo which we clone, then execute an install script with plugins for oh-my-zsh.
To set up initially, possibly. But this is for maintenance too. Need to install a security patch? Puppet. Need to upgrade versions? Puppet. Want to tweak some setting somewhere? Puppet. Using Puppet makes it stupid easy for us to keep our setups (a) in sync, (b) up to date and secure, and (c) do it without a bunch of hacky scripts that will probably break or not work right in 3 months.
I'm not seeing how installing a security patch is any different. Security patches usually check to see if they need to be applied before doing anything. Updating a version is as simple as `brew upgrade node`, `brew upgrade mongodb`, etc. The bash you call hacky is what's being done here with an extra layer of Ruby.
Cool, well have fun next time you need to update a setting buried in a config file or verify the results of something you ran. You can write 200 lines of bash that might work (or might not if the user decides they like csh or zsh) or like 3 lines of Puppet and guarantee it's idempotent.
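For the "setting buried in a config file" case, something like this is all it takes (a sketch using file_line from the puppetlabs-stdlib module, so that's one extra dependency; the path and setting are just an example):

    file_line { 'disable password auth':
      path  => '/etc/ssh/sshd_config',
      line  => 'PasswordAuthentication no',
      match => '^#?PasswordAuthentication',
    }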
There's a lot more to this than just setup scripts, but if all you need is setup scripts with the occasional update, then by all means just stick with that. Our needs just happen to be beyond that.
Might seem like overkill, but Chef/Puppet are designed to be idempotent, which is exactly what you want when you're running configuration scripts that are expected to evolve and update, and hence must be run over and over without reformatting the workstation.
Disclosure: I'm a big pivotal_workstation and Chef fan, but excited to see what new angles boxen demonstrates :)
Puppet not only allows you to deploy changes but makes sure they stay. There is something called Puppet Dashboard that lets you see all the installed clients, who has a valid config, and who is outside that config. This allows you to quickly spot machines that need attention. There are some better screenshots here [2].
It's easy to install something originally from state X to state Y with bash, but Puppet makes it easy to continue ongoing because it's a declarative syntax.
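A toy illustration of the "makes sure they stay" part (the file and content here are arbitrary): if someone hand-edits the file, the next run simply puts it back, because the manifest describes the end state rather than the steps to get there.

    file { '/etc/motd':
      ensure  => file,
      content => "Managed by Puppet -- local edits will be reverted on the next run.\n",
    }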
How does this work for proprietary software, or stuff that requires installation keys or authorization? Say Office 2011 or sublime text 2 or some other paid app (whether in the app store or not)?
You can use Boxen to install Homebrew and then (with puppet modules which you may have to install yourself) you can use Boxen to manage/install Homebrew packages.
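If I remember right, the Homebrew module that Boxen pulls in provides a package provider, so your personal manifest can contain something like this (the formula name is just an example):

    # e.g. in your personal Boxen manifest
    package { 'wget':
      ensure   => present,
      provider => homebrew,
    }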
This looks pretty great. We do something slightly similar at Chartbeat, but using Puppet + Ubuntu server VMs on our Macs. It enables us to develop on the same hardware we'll be using in production. Our CTO wrote a blog post about it here if anyone is interested:
This looks very nice -- automated workflow setup for developers. It's essentially generalization across every "developer setup guide" you could possibly imagine.
Now the question is: can we get this for Linux users?
Boxen works with puppet too, but they don't fulfill the same role. Boxen is about the sharing and centralization of your puppet-like tools. Presumably not everyone wants to write a developer workflow from scratch, and Boxen could solve that without requiring a Mac, right?
Exactly -- no one wants to start from scratch. Puppet maintains something called Puppet Forge [1], where anyone can upload and share their Puppet recipes. This goes beyond Puppet Forge, since it is a bundle of scripts that work together, so there is much added value in something like Boxen.
Presumably, if you already have your own git server running elsewhere (and if you're considering this, you almost certainly have some solution for hosting git repos, GitHub or otherwise), there's no reason you couldn't just host your boxen repos there.
It's a great cross-sell, and one I think'll be successful for them, but as far as I can tell there's no explicit lock-in.
* Thoughtbot's laptop script - http://robots.thoughtbot.com/post/8700977975/2011-rubyists-g...
* Lunar Logic's Lunar Station - https://github.com/LunarLogicPolska/lunar-station
* Pivotal Labs's Pivotal Workstation - https://github.com/pivotal/pivotal_workstation
I personally liked Pivotal Workstation the best, as it had the best combination of robustness, pre-built recipes, and easy configurability. I'll be excited to take a look at Boxen the next time we bring someone into our team.