
Otto: Meet the Successor to Vagrant - clessg
https://ottoproject.io/
======
gcr
What do I do when one of Otto's magic commands breaks my website in
production?

Will I ever have to give Otto my root password?

I am deeply afraid of this tool.

The marketing heavily pushes the angle of "Don't worry, it's magic! It knows
everything! You won't have to break a sweat!" But statements like this, while
appealing to newcomers, sicken the people who understand the stack well enough
to also understand why this tool could help them.

If Otto wants to gain mindshare of the devops people who will be forced to
support it, I strongly suggest they reconsider their branding.

I really, really hope I never have to use this tool.

~~~
mitchellh
Good questions, and I'm happy to answer. I'm going to answer from a
philosophical point of view. I'm not trying to dodge the question; I'm
hoping it gives you some idea of how we're approaching the problem. If not,
just let me know and I can try to clarify.

The bigger philosophical idea is the centralization of knowledge. How does
Otto know how to deploy a PHP application? Because people who can be
considered experts in PHP encoded their knowledge of how to do it. You no
longer have to be that expert. (Note that 0.1 deploys PHP in an awful way, 0.2
will do a lot better, thanks to the "experts" coming in). Our goal is to
centralize dev/deploy knowledge so that we can focus on higher level problems
and teach Otto how to do the details.

The next question is: how do I know it's safe? Under the covers, Otto uses a
lot of production-proven tooling. On top of that, we have a really extensive
acceptance test suite that verifies behavior against real things prior to
release. We're doing the best we can, but Otto will need to earn a lot of
trust, and we're going to build that trust over time. Note that you can always
inspect the ".otto" directory after compiling your Appfile to see the
configurations it generates.
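For example, nothing stops you from reading everything before running any of
it (a hypothetical session; the exact contents of ".otto" vary by application
type and Otto version):

```
$ otto compile
$ find .otto -type f
```

Everything `otto dev`, `otto build`, and `otto deploy` later do is driven by
those generated files, so they can be reviewed like any other configuration.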

You can choose not to use Otto's deployment features. That is completely fine.
Our goal is that within a year it does what you'd want it to do (help us by
teaching it!), and that you'll come to trust it. Of course, we can't cover
every edge case, but we are going to do our best. This will be a
trust-building process.

And I don't blame you for being afraid. It is a very new tool that is trying
something that feels threatening to a lot of people. No worries. Just wait a
while; maybe it earns your trust, maybe it doesn't. And take a look at our
other tools; if you want absolute control, Otto might not be for you. :)

Finally, and this is not a negative thing: the HN crowd is generally not the
crowd Otto is aimed at. HN folks are tinkerers; they want to know how things
work. That is what makes HN great. They question things and want to know the
full details. Otto scares people like that because it _removes_ power from you
(much in the same way compilers removed power from assembly programmers). For
these folks, I ask you to take a look at the tools that are under Otto:
Consul, Nomad, Packer, Terraform, even Vagrant. They provide absolute control.
But Otto is already seeing great adoption in the group of people who think "I
don't care how, just deploy this application." Think folks who work with
well-known technologies like the LAMP stack, WordPress, Rails, etc. all day.
For them, Otto is fairly revolutionary, because this is a tool that actually
makes them more productive and simplifies things, vs. a lot of the other
tools, even ones we've built, which appear to add complexity.

And for the folks who want simplicity, what Otto is trying to do is give them
industry best practices in addition to simplicity. As an extreme example: we
want `otto deploy` to be easier than FTP-ing PHP files to a server, since that
isn't a best practice for many reasons. We want you to get an AWS
infrastructure designed by experts, deployed onto a server configured by
experts, with supporting tools for monitoring, logging, security, and more
that only experts would really know how to use and configure properly. These
are Otto's aspirations. I believe we can get there, but Otto is obviously new,
so we haven't proven it yet. But I'm going to try.

I hope that clarifies things!

~~~
twa927
It's OK to use heuristics and "magic" in many cases - when it's acceptable to
have a correctness rate lower than 100%. When an IDE generates some code, it's
OK that it's correct in only 90% of cases, because I can review the code and
correct it. But for a deployment tool, even 99.99% correctness is not
acceptable - it must be 100%. That's why I wouldn't use a deployment tool
built on heuristics. You can't make it 100% correct even with a huge number of
tests. (Unless the tool works only as a magic config generator whose output I
can review.)

~~~
mentat
I'm not sure what real-world deployment tools you're talking about, but 99.99%
would be pretty good. Distributed deployment is actually "a hard problem".

------
mike1o1
For those interested, The Changelog podcast recently had an interview with the
creator of Otto, specifically discussing Otto and Vagrant. I thought it was a
great listen. Basically, Otto is looking to do for deployment what Vagrant did
for setting up dev environments.

[https://changelog.com/180/](https://changelog.com/180/)

------
Shank
I absolutely adore HashiCorp's tools, but every time I read a sentence like
this, I'm disappointed:

> We'll deploy the application to AWS for the getting started guide since it
> is popular and generally well understood, but Otto can deploy to many
> different infrastructure providers.

Okay, great! I'll just go look at the infra types list and... oh, it's only
AWS right now [0].

[0] -
[https://ottoproject.io/docs/infra/index.html](https://ottoproject.io/docs/infra/index.html)

AWS isn't just easy to set up and use, it's also very expensive. On the free
tier, things are wonderful and the world is fine, but once you've exhausted
the free tier, you're paying through the nose for little to no benefit. I pay
Kimsufi less than $50/mo for more storage, unmetered bandwidth, and more CPU
on dedicated hardware, at the cost of losing the AWS API. That makes this tool
and many other HashiCorp tools impossible for me to use, because I'm not on
AWS infrastructure. I can't afford crazy high monthly bills for the overhead
AWS adds only for the convenience of one tool.

This is disappointing, and quite frankly, a major turnoff to these sorts of
tools.

~~~
josegonzalez
They only _just_ released version 0.1.

~~~
mtolan
Then perhaps it's inappropriate to claim that "Otto can deploy to many
different infrastructure providers" until "Otto can deploy to many different
infrastructure providers".

~~~
mstade
Right or wrong, it's pretty standard practice to sell the vision of the
product as opposed to the actuality of the product.

"Otto will some day hopefully be able to deploy to many different
infrastructure providers" isn't a particularly attractive statement. You could
try something like "Otto currently only supports deploying to AWS, but will
eventually support many more", but while perhaps technically correct, it'll
probably drive more people away than it gets involved. Those same people might
well contribute additional deploy targets if only they got nudged enough to
get involved in the first place.

You're not wrong, but getting people on board with the vision is much more
important for a "hive mind" project such as Otto, than being technically
correct as to the current state of things.

~~~
fphhotchips
No, it's not standard practice. It's false advertising.

~~~
mstade
I'm not saying it isn't; I'm just saying that people do this all the time.
Many times just to fake it till you make it, other times to be willfully
deceptive. I doubt it's the latter in this case.

------
manishsharan
Otto- and Vagrant-like technologies make me nervous, and here is why: I
recently took a contract assignment in a dev shop that used Vagrant. If
everything had worked as advertised, I should have just been able to run
`vagrant up`, and my JBoss appserver etc. would have been automatically built
and deployed for me so that I could focus on code. Instead, I had to spend a
couple of days trying to fix Vagrant/Salt issues --- that I had no clue about.
I could have easily set up a JBoss Linux VM without breaking a sweat, but
debugging Vagrant and Salt was like solving a mystery wrapped in an enigma.
Now that I am up to speed with Vagrant and Salt, it does not seem all that bad
-- but I do feel as though it has been overhyped.

I shudder to think that dev managers are now adding "must have xx years of
solid Vagrant experience" along with "must have 5 years of [some IDE]" in
their skills requirements.

~~~
geerlingguy
Vagrant just boots a VM and runs a provisioner on it (at a basic level). I'm
guessing that even if you started a VM and installed Linux on it by hand in
VirtualBox, the Salt config would've given you some trouble. A lot of
configurations I see in the wild that are used with Vagrant and/or with cloud
VMs are extremely brittle and have a lot of unwritten assumptions built in.

~~~
tonyarkles
I've been using Vagrant lately to spin up clean VMs to run Ansible against,
and that's definitely something you have to be very, very careful about. I'm
currently looking at setting up Jenkins to frequently re-provision from a
clean VM, just to keep an eye on me and make sure I don't accidentally
introduce a change that breaks things.
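The loop itself is trivial; roughly this hypothetical job script:

```
$ vagrant destroy -f      # throw away all accumulated state
$ vagrant up --provision  # run the playbooks against a pristine VM
```

If the run fails against a pristine VM, some unwritten assumption has crept
into the playbooks.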

So far, I think the only real assumption my users have to deal with is that
you need a full source tree checkout, not just the smaller part with the
Ansible playbooks. The project uses Subversion, so it's quite easy to check
out just a subtree of the project instead of the whole thing. Which users did,
and they immediately discovered how broken things are in that situation :)

------
geerlingguy
Previous discussion:
[https://news.ycombinator.com/item?id=10291778](https://news.ycombinator.com/item?id=10291778)

------
victorhooi
I'm still trying to understand how this fits in with Docker/Rocket, and
creating your own Dockerfiles (versus using Appfiles here), and using Swarm to
link containers etc.

Does it essentially supplant Docker/Dockerfiles, for all intents and purposes?

~~~
jacques_chester
It's a PaaS, in the vein of Cloud Foundry, OpenShift and Heroku.

Edit: I'm honestly surprised by the downvote. If I'm wrong, please correct me.

------
weitzj
I am always amazed by the HashiCorp tools. What I really like is that they
solve problems I had not thought about (yet). What I think is their biggest
advantage is their ability to integrate into a heterogeneous infrastructure -
you do not need to throw away your current infrastructure setup. You can
gradually integrate tools like Consul/consul-template. You do not have to
install, for example, Docker.

And I guess Otto is a nice tool that solves the problem of deploying AND
developing a microservice architecture. So maybe you already got the
production part right, where developers push a new microservice to production
and the production setup runs fine. There might still be the problem of how a
developer creates a local development environment with the multiple
microservice dependencies they want to have locally. The best solution for
that, in my opinion, is fig.sh (docker-compose) right now. But docker-compose
does not help you with deployment (and you have to depend on Docker).
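For the local multi-service case, a minimal docker-compose file along these
lines is usually all it takes (a sketch in the v1 format; the service names
and images are invented):

```yaml
# Hypothetical local dev environment: one app plus two dependencies.
web:
  build: .            # the service you're actively developing
  ports:
    - "3000:3000"
  links:              # v1-format dependency wiring
    - db
    - cache
db:
  image: postgres:9.4
cache:
  image: redis:2.8
```

One `docker-compose up` then starts the whole dependency graph locally, which
is exactly the part that doesn't carry over to deployment.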

~~~
krakensden
Really? I feel like half of my time working with HashiCorp tools is spent
cursing. Vagrant takes more time to parse its Ruby code than it takes to boot
a VM and its guest. It breaks my coworkers' routing on a regular basis.
Terraform breaks for almost everything I've tried it with - no, deleting my
infrastructure and rebuilding it from scratch is not acceptable in production.
Consul is a distributed downtime protocol, using a modern, peer-reviewed
consensus algorithm and a gossip protocol for auto-partitioning of your
network.

~~~
weitzj
I just use consul/consul-template and Vagrant (+ Puppet) right now. Vagrant
works just fine, I do not experience routing problems. But I will remember
your advice/experience for our future plans.

------
weitzj
I do not quite grasp how I would hook up an Nginx/HAProxy in front of my
application. Would this be another application to deploy? Where does the Nginx
config live? Do I use consul-template to update it depending on my other
deployed applications?

I get how you would combine a database and a web application, e.g. Rails. But
what about extra routes/legacy redirects?
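The consul-template route would presumably look something like this (a hedged
sketch; the Consul service name "web" and the file paths are invented):

```
# nginx.ctmpl - re-rendered by consul-template whenever the "web"
# service membership changes in Consul
upstream app {
{{range service "web"}}
  server {{.Address}}:{{.Port}};
{{end}}
}

server {
  listen 80;
  location / {
    proxy_pass http://app;
  }
}
```

Run with something like `consul-template -template
"nginx.ctmpl:/etc/nginx/conf.d/app.conf:service nginx reload"`, with
Nginx/HAProxy itself deployed as just another application.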

------
frik
An Ansible alternative written in Go would be perfect.

I like Docker. But some initial automation to configure a Linux distro with a
Go-based tool would be awesome. (No Python/Ruby stack.)

~~~
smoyer
"An Ansible alternative written in Go would be perfect."

Why?

I understand the allure of Go - I was an embedded systems engineer working
predominantly in assembly language and C for many years. Memory management was
a pain, and concurrency was practically non-existent on many microcontrollers.

But Ansible is pretty mature at this point, with a vibrant community, lots of
internal and third-party modules, and it's based on a ubiquitous scripting
language (Python).

Please help me understand the rationale for a rewrite - what are the goals,
what deficit would it fix, and would the time required be justified versus
enhancing Ansible as it is? At best I'm a Go newb, but I'm pretty proficient
with Python - if it's just that Ansible's "not in your language", I'd argue a
rewrite represents a false economy.

~~~
frik
Why not?

> ubiquitous scripting language

Third-party modules don't cover everything, and then you have to write Python.
Plus, Docker's config files are better. So I would welcome a modern Ansible
alternative that requires just one executable instead of a full Python stack.

------
ridiculous_fish
I love how Vagrant enables testing software on lots of different distributions
and OSes. The large box catalog sure beats going through RandomLinux's
installer. Unfortunately the successor to Vagrant doesn't seem to support this
use case.

------
Fizzadar
"Otto detects your application type and builds a development environment
tailored specifically for that application, with zero or minimal
configuration." That alone tells me to avoid this like the plague. Too much
"magic".

------
iamleppert
Abstraction != Automation. What benefit does this have over something like
Heroku?

~~~
jacques_chester
The ability to install and inspect it yourself. I work for the company that
donates the majority of engineering effort to Cloud Foundry. For a lot of
customers, that matters a lot.

------
odiroot
This is great but can we have it without Vagrant?

We already have fully capable systems for simulating a potential production
stack with Docker; no need for another layer of virtualization.

OTOH, I really love that there's no need to dig into the .otto directory.

~~~
dandandan
Vagrant itself isn't virtualization; it's just a wrapper around VirtualBox,
VMware Fusion, etc. It even has a Docker provider.
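Using it is a few lines in the Vagrantfile (a sketch; the example image is
arbitrary):

```ruby
# Vagrantfile: back the environment with a Docker container
# instead of a full VM
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "ubuntu:14.04"  # arbitrary example image
  end
end
```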

------
mattkrea
You are asking for trouble using a tool like this if you don't understand
every little thing that it is doing. Managing hundreds of instances on AWS is
not easy even using Amazon's own tooling.

~~~
Joeri
On the other hand, a Linux system involves hundreds of millions of lines of
code, which you couldn't fully grok in your lifetime even if you wanted to. I
would argue that nobody truly understands what their application is doing.
We're only discussing the degree of magic, not the presence of it. And yes,
when things break you have to go peeling back the layers of magic. This just
adds one more layer of magic to peel back.

------
Rapzid
How does Otto address app updates with new and stale assets? Would there be
pre-deploy steps allowing you to sync assets to S3 or shared storage?

------
zaroth
Also an entertaining children's series starring a funny little robot, which my
3-year-old is really enjoying these days. You get to make up most of the story
yourself as you go, which is definitely a feature.

[1] - [http://www.amazon.com/s/field-keywords=see+otto](http://www.amazon.com/s/field-keywords=see+otto)

------
mattexx
Sounds like Rails for Devops. Works great until it doesn't...

------
mstade
In my experience developing products and services across many stacks, in many
fields, and in organizations ranging from the small (5-20 engineers) to the
large (1000+ engineers), the same pattern always emerges:

1\. Setting up your development environment typically amounts to following
some perpetually out-of-date checklist of things you have to do

2\. Setting up the development environment specific to some project typically
amounts to following some perpetually out-of-date checklist of things you
have to do

3\. Painfully figuring out why 1) and 2) aren't really working out, and what's
missing from those checklists

4\. Not updating the checklists through all that ad-hoc stumbling, mostly
because your imposter syndrome makes you think the problem is _you_ and not
the lists

5\. Finally getting to work on things, making small incremental changes to
your environment as you go

6\. Realizing that the small incremental changes you make actually make
things break when you try to integrate or deploy your work, because the parity
between local dev and CI and live environments just isn't there

7\. Having some coffee and lamenting the state of affairs with your
co-workers, yet not really doing anything to change things; either because you
can't really get anywhere in due time (the 1000+ engineers kind of
organization) or because the law of the jungle dictates you have to keep
shipping(tm) (the 5-20 engineers kind of organization), so there's never any
time to deal with the debt that just, keeps, building

8\. Cry yourself to sleep

There are minute differences, but this is the general pattern I've seen
through my decade-long experience as a software engineer.

An important aspect of why this is – I think – stems from thinking your
project or setup is a unique snowflake. It's really not. What you're doing is
probably not earth-shatteringly new and exciting, and even if it is, most of
the things you do to get there aren't. We are all standing on the shoulders of
giants, but instead of realizing that and codifying all that knowledge into
tooling we can just rely on, we seem to keep thinking that if only we have
full control over things we'll be fine.

Then there's the other side of that coin, which is to say you don't ever want
to have control (for whatever reason), and so you hand everything over to
services that'll build and deploy things magically for you, so long as you
have the proper configurations. The promise of those offerings is alluring,
and when they work it's great, but inevitably you end up with configuration
that is no longer best practice, or formats change, or the service pivots; so
promises are broken.

My feeling is that the answer as usual probably lies somewhere in the middle
of having full control, and relinquishing most or all of it.

If I understand things correctly, straddling this divide is what Otto wants to
do: Just Work(tm) for the 80% (perhaps more like 99.99%?) of projects that
_aren't_ unique snowflakes. The others aren't the target market, and for them
the tooling already exists in various forms anyway.

It's an ambitious goal; sure to be fraught with edge cases and rabbit holes.

I hope it works out.

