I want to just give a few key notes, though many other folks around here are right.
* For Otto 0.1, we focused on developer experience. We don't recommend deploying for anything more than demos. Future versions of Otto _are_ going to make this production-ready though, and we already have plans in place to do so.
* As others have discovered, Otto is built on top of our other tools and executes them under the hood. We didn't reinvent the wheel, and this has huge benefits. We're dedicated to making all our tools better, and as we do, Otto naturally improves as well.
* Otto is a lot of magic, we're not trying to hide that. It is a magical tool. But we also give you access to the details (OS, memory, etc.) using "customizations." We'll improve customizations and increase the number of knobs over time.
* Vagrant development will continue, we have some major Vagrant releases planned. Please see this page for more info: https://ottoproject.io/intro/vagrant-successor.html
* Remember that Otto is 0.1 today and Vagrant is 6 years old. We have a long way to go to reach the maturity and toughness of Vagrant. We're committed to make that happen, but it will take time.
Thanks everyone, sorry for not being able to be more active in here. Have a great day!
While Otto may be replacing Vagrant as the preferred directly-used tool for most users, it's a higher-level tool in which Vagrant still exists and is used under the covers.
Since Vagrant still exists and is maintained and is used by Otto (as well as being usable independently), it would be extremely confusing if Otto was called Vagrant 2.0.
> It's confusing to be "abandoning" Vagrant for a new system "Otto" when really all that is happening is that Otto is fixing issues that have come up over time with Vagrant.
The main issue Otto seems to be fixing is that Vagrant, alone, isn't a complete solution; typical teams need a suite of other tools as well. Otto incorporates Vagrant and those other tools, puts an abstraction layer over them, and lets you use them together.
But that is the first thing anyone thinks of when you say "successor." If you then have to explain that it's not really a successor, well, you've made the point yourself.
Ruby on Rails, for instance, is a beautiful example of an internal DSL, but the Vagrant DSL is awful; you could write a much easier-to-read and easier-to-use internal DSL in Java.
Then there is the issue that Packer and Vagrant are two different tools. Why should you need to change anything about your provisioners AT ALL when you are trying to burn an image? Doesn't that just defeat the whole point of devops?
And then there is the issue that when Vagrant can't talk to the mothership, it doesn't work right.
It goes on and on. People are screaming out today that "devops is a big waste of time" and I think 80% of that is that Packer and Vagrant are so awful and putting another layer is going to make it 95% awfulness from Hashicorp.
I can't help but think that what the DevOps world really needs isn't another thin layer of magic trying to shellac over the issues of everything below it, but rather to have a hard think and rebuild everything from the ground up with more appropriate primitives. Sort of like what NixOS is trying to do (haven't used it, so can't attest to how successful it has been).
That's my two cents.
Sometimes as I am working with boto (the Python SDK for AWS), I wish I had the time to write some of the boto modules myself, because they are inconsistent and harder to use than other modules. I can easily create many AWS services with Ansible modules, but I find it easier to hack and integrate more tightly with my environment by writing my own Python code using boto directly. What's inside the machine remains Ansible, because it does a really good job.
Another classic example is logstash and AWS logs like CloudTrail, Flow Logs, and alarms. I can easily write a parser in Python or in C and get my job done, and over time the code becomes reusable. But with logstash I can't guarantee that existing filters and plugins will always work, and they get really, really messy no matter how good you are with logstash. And that's a layer I have to reinvent, for simplicity and total control.
I'd rather pipe to Elasticsearch myself than relying on logstash in cases like that.
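"Piping to Elasticsearch yourself" is mostly a matter of emitting the newline-delimited `_bulk` format. A minimal sketch using only the standard library (the index name and record shape here are made up for illustration):

```python
import json

def to_bulk_payload(records, index="cloudtrail"):
    """Build an Elasticsearch _bulk request body from parsed log records.

    Each record becomes an index action line followed by the document
    itself, newline-delimited, as the _bulk API expects.
    """
    lines = []
    for rec in records:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(rec))
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

# The payload can then be POSTed to http://localhost:9200/_bulk
# with any HTTP client; no logstash pipeline required.
payload = to_bulk_payload([{"eventName": "CreateBucket", "awsRegion": "us-east-1"}])
```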
Awful compared to what?
'And then there is the issue that when Vagrant can't talk to the mothership, it doesn't work right. It goes on and on.'
I'm curious, which package management tool have you seen that works when it can't talk to the mothership?
Hashicorp has a particular philosophy of how it builds and ships its ecosystem: small, standalone, composed tools. That leads to some overlap and inconsistent quality, as they evolve independently. Atlas is supposed to smooth over the overall experience, but, software is hard.
Alternative ecosystems tend to be large pills to swallow (e.g. a PaaS), though they might have a better overall experience.
I'm going to assume this is about not being able to detect/handle version upgrades of base boxes if they're not in Atlas (or if you're offline).
This is my issue with this ecosystem too - Atlas is a "free" service but to use/run an alternative "Atlas" would mean reverse engineering it (if there is a spec detailing the endpoints, calls, payloads expected/accepted I'd love to hear about it!).
As for "which package manager works it can't talk to the mothership":
Debian's Apt/Dpkg can work from a purely offline mirror on the local disk if you want. RPM can too. But more likely you'll want to use your own private repo. Possibly just a mirror of the upstream repo. Maybe your own packages. Maybe a mix of both.
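For concreteness, pointing apt at an offline on-disk mirror is a one-line sources entry (the mirror path and suite name below are illustrative; the mirror itself can be populated with tools like apt-mirror or debmirror):

```
# /etc/apt/sources.list (or a file under sources.list.d/)
# pointing apt at a local, fully offline mirror
deb file:///srv/mirror/debian jessie main
```

After an `apt-get update`, installs resolve entirely against the local disk.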
No official spec to my knowledge, but there are a couple of attempts at this
"Debian's Apt/Dpkg can work from a purely offline mirror on the local disk if you want"
Okay, but Vagrant caches boxes on the local disk too.... if one runs "box update --box --provider" in a scheduled job.
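The scheduled job mentioned above can be as small as a single crontab line (the project path and box/provider names are illustrative):

```
# crontab fragment: refresh cached boxes nightly so offline
# `vagrant up` runs keep working from ~/.vagrant.d/boxes
0 3 * * * cd /srv/projects/myapp && vagrant box update --box ubuntu/trusty64 --provider virtualbox
```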
Thanks for the references - I did find a reference to an environment variable in the vagrant source somewhere so maybe it can work reasonably well with a private reverse engineered "atlas"
These people have obviously never managed fleets of servers pre-devops. Forgive me if the opinions of some devs who have never managed a server in their lives rank lower than those of people who have been in the trenches for a while now.
> devops is a big waste of time
Devops is about culture, not about tooling.
Software managers are constantly bitching that their developers get it into their heads that they need reproducible build environments, and then three of them go screw off for two weeks trying to get Vagrant to work 100% right.
Anyway, in a more typical software development environment you'd provide such transition tools and carry your users along, so that the next version already starts with a huge base of users. Rather than throw out the old way and come in with a new completely incompatible way to do things.
They still work at different levels, with Otto more likely trying to satisfy needs for things like "Ruby" and "Redis" while Vagrant is still more explicit.
And I don't see why we need default stacks, don't we already have that with Vagrant? Isn't that the entire point of the Vagrant file and the setup file? You grab an image, you grab the post-setup file, and off you go? Is it really that much work to write a post-setup script to run apt-get and do some config work?
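The "grab an image, run a post-setup script" flow described above really is only a few lines of Vagrantfile. A minimal sketch (the box name and package list are illustrative):

```ruby
# Vagrantfile -- image plus a post-setup script, nothing more
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx redis-server
  SHELL
end
```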
It just seems like we're reinventing the wheel, again. And again, and again. And we invent all these new tools just to go through and spend another 6 years fixing them, improving them, debugging them, cajoling others into using them, etc., instead of just improving the tools we already have.
Or perhaps I'm just in a shitty mood, I dunno. All I know is that the only reason I can use Vagrant in my professional life is PRECISELY because it's not for production deployment, and that's awesome, because it fills a very specific and needful spot that was vacant before. Why do we always have to keep expanding? Can we not be happy with really good tools for a really specific purpose?
P.S. Thanks for the info.
Because that tool, along with the beer in their fridge, is paid for by VC money, which needs to recoup its investment. $99 Vagrant licenses for use with VMware aren't going to cut it.
What I ideally want is a virtual-machine-based (like Vagrant), immutable-configuration (like NixOS) approach, with reasonably simple configuration (like docker+fig) and file-based settings (as opposed to Docker, where your image is pretty much separate from the Dockerfile), plus one common repo of "best possible environment" config examples, where every somewhat important decision is explicitly listed and can be changed by the user. So, something similar (in some sense) to vim-pathogen: git clone, maybe run some other magic command, and your env is up and running in several minutes. If the contents of the config change, so does the virtual machine.
I understand that what I'd like to have is a bit utopian in today's reality. But nevertheless, Otto is pretty much the opposite of what I consider perfect; I cannot imagine anything farther from desirable than that.
Yes - sometimes you need something very specific.
But, please - it sounds like you've just damned us to repeat the same low-level tasks again and again.
80% of websites ARE the same. If you're in the 20% (or 10% or 1%), then good luck to you. But for those of us deploying another typical webapp, I'd really like to draw on community knowledge. I never wanted to learn devops, the same way I never wanted to learn cryptography, OAuth, SQL internals, how nginx works, etc. I just want to use tools that solve these problems for me.
The odd time we have some special requirement (a work queue perhaps?), but most of the time it's language + store + web server and we're off.
It actually matters... at scale. No project starts at scale. Most projects never need to scale; most projects die before they scale.
Magic software is for prototyping. Sensible defaults and convention-over-configuration mean trying (and failing, and trying again) quicker. Even though I have software in production with a million users, I still build my new experimental projects on Heroku, because relying on "magic software" is one less barrier in the way of getting to work. (Not a technical barrier, mind you; a barrier of choice paralysis about what my architecture is going to look like.)
At scale, meanwhile, you have a separate thing, a magic piece of strong-AI-equivalent software called a "dedicated ops team." When you get there, the task of refactoring your idiotic prototyping decisions becomes their (hopefully-well-paid) problem.
- __Right now__, Otto appears to be Vagrant++. I wouldn't use this for prod, at least not for a while.
- Otto is written in Go. Source is here: https://github.com/hashicorp/otto
- Otto uses a plugin model for different applications. Plugins aren't supported yet. https://ottoproject.io/docs/plugins/app.html
- Built-in plugins don't appear to be consuming a sane plugin interface; how the built-in plugins work is non-obvious.
- Under the hood, Otto appears to be using Packer, Terraform, and Vagrant.
- I would consider Otto, Nomad, and Terraform to all be "provisioner tools". They seem directly comparable to tools such as Ansible provisioning, Chef provisioning, Fog, or other direct management tooling, like the AWS CLI or the PowerShell CLI for VMware.
- Otto, Nomad, and Terraform all promise to solve the same problem in prod in different ways:
-- Otto is a one-off push to set up infrastructure and deploy to prod.
-- Nomad is for pushing jobs to help maintain long standing infrastructure in prod.
-- Terraform is for periodic pushes to prod to create idempotent infrastructure.
In other words, from least to most robust IMO:
Least Robust -------> Most Robust
Otto --> Terraform --> Nomad
- Terraform defines that you have X servers with certain specs.
- Otto defines that those servers are running Docker or whatever.
- Nomad defines that your application is running in X containers in your infrastructure.
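The "Terraform defines that you have X servers with certain specs" level can be sketched in a few lines of HCL (the provider, AMI, and instance type below are placeholders, not anything from the thread):

```hcl
# Terraform sketch: declare four identical web servers
resource "aws_instance" "web" {
  count         = 4
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}
```

Running `terraform apply` repeatedly converges the real infrastructure toward this declaration, which is what makes the pushes idempotent.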
Sounds like a big bag of "nope."
I love the idea of focusing on the development part and letting something / someone else worry about hosting.
There will come a time when you need to set up a more customized hosting solution, but hopefully by then you can hire someone who knows what they are doing to handle operations.
I don't think Heroku or Otto are designed for big projects anyway; they're a great way to get up and running, and they help you grow without having to worry about infrastructure upfront or for a couple of years.
If you're right about that in Otto's case, that's a bummer. It's frustrating that there is this seemingly intractable divide between things that are great to start with for a new project (Heroku, etc.) and things that scale well as a project grows huge (Kubernetes, etc.). Every time a devops tool comes out, I read about it in hopes that it has both the easy-start and but-scales-as-needed stories, and inevitably find people saying it's actually one or the other.
The "non-abstraction" part doesn't seem to be in any cloud provider's interest to sell, though; even with AWS, when you allocate a database, it doesn't result in a new EC2 instance for the DB being dropped into your bag of instances, such that you just get charged instance fees for the instance. Instead, it all gets packaged up so that you can be charged higher, separate, value-based database fees. It's a bit ridiculous.
cf scale my-app-name 4
In such cases, explicit and declarative is much better than implicit and hidden.
A little (but not quite) like this:
And now, with a new, complex, and broad product introduced...I just don't have a lot of confidence that the quality is going to be there or that it will ever fulfill the very large goals that are outlined.
I would prefer they focused on Vagrant and made it a really outstanding, polished tool.
To me, it seems like it's just a bit of word play to try to get more interest in a product that is less interesting to some people. Vagrant is generic enough and helpful enough that it has become one of the two or three preferred tools for building local environments for developers.
AFAIK (anecdotal evidence here, to be sure), other HashiCorp tools are nowhere near as dominant. So is the tagline just to try to get more people interested in the tool?
It wouldn't sound as interesting to _me_ if it were "Otto, something like Heroku but a Go app that uses a bunch of HashiCorp products to deploy apps locally and in the cloud".
In reality, it seems the "successor to Vagrant" line is more to attract attention, as it's not at all a _replacement_ for Vagrant, just a tool you can glom on top of Vagrant and a bunch of other HashiCorp tools.
This appears to address that.
OTOH, it says it's executing Vagrant and Packer under the covers, so I really need some of the Packer limitations I have (like https://github.com/mitchellh/packer/issues/409) addressed more than I want glue on top.
Anyway, if people want to hack on Strider, pull requests are welcome.
I'm not using it actively (yet), but it's a very very tiny amount of code to supply both. All the work gets done by boto.
Back to Otto: I am curious what Otto means when it says it's going to start to talk to infrastructure, and whether it's going to be more of a workflow engine that can also invoke Terraform, or what.
The DSL changes appear to maybe be a step in that direction?
It would probably benefit from more than one-liners on the homepage, to show more quickly what is really involved.
Ahhhh that Packer issue. I have been following that one for what seems like years too.
Then again, who am I to complain? Vagrant is a superb tool and the endless and pointless reinventing of standards every few years keeps like 70% of us employed.
Seems to me it might, if you view its "one job" as coordinating a bunch of lower level tools that each do their own "one job".
Well, and assuming it does it well.
Did they do any user research? Doesn't feel like it based on the above statement.
For example, a lot of Vagrant LAMP users don't require much provisioning at all. Of the top 10 most downloaded Vagrant boxes, two of them are pre-provisioned (or nearly) Vagrant boxes: Homestead (2,769,045 downloads) and Scotch Box (275,963 downloads).
The biggest problem with these setups is deployment. Vagrant Push requires a little bit too much overhead for this audience. The blog announcement even admits this. Hell, Laravel/Taylor Otwell (the Homestead guys) even built a full-on deployment service called Forge.
If I had to guess, Otto is basically a hybrid of all this: like an easy Vagrant and a basic Heroku all-in-one, to help push HashiCorp's Atlas product.
I'm definitely looking forward to testing it out. Personally loving Hashicorp.
1. Apache or Nginx+PHP-FPM
2. INI Configuration
3. PHP Extensions and their INI configuration
4. Vhost Configuration, especially rewriting rules
5. Docroot in app root, or dedicated directory
This can't possibly be defined in a common/generic way while still supporting more than 50% of the use-cases.
There is a reason why complex click-to-configure interfaces exist for PHP and Vagrant, like Phansible (http://phansible.com/) and PHPuppet.
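Point 4 in the list above (vhost rewrite rules) alone varies per framework. A typical nginx front-controller vhost looks something like this (the docroot and FPM socket path are illustrative, and plenty of apps need something entirely different):

```nginx
# one of many possible PHP vhost setups: all requests funneled
# through a single index.php front controller
server {
    server_name app.example.com;
    root /var/www/app/public;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```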
Besides, a lot of this variety exists because people don't follow best practices, and instead have a hodgepodge of this-and-that technologies with this-and-that settings.
edit: Or at least it should in theory.
And this choice is solved through a variety of mechanisms that people rely on for other parts of their infrastructure, and feel strongly about.
For my part, this is also a solution that is way too late: we have Docker/Rocket and a range of similar tools. Why do it yet another way when, if you instead build a Docker image, you can take that Docker image and deploy it without having to translate your dependencies to a different format and re-test everything?
But I take the point - the idea that anyone deploys stuff to Windows is just so foreign to me that it didn't even occur to me.
Otto is designed to automate the provisioning of local dev environments and production environments. While some use Vagrant to solve this problem, it's typically an un-standardized, home-grown solution. Otto is an attempt to standardize and automate the process.
FULL DISCLOSURE: I've been working on a project for the last year-ish that solves the exact same problems: Easily provisioning and configuring local dev environments and finding consistent parity between dev and production environments. I'm interested to really dig in and see the differences between Otto and the project I've been working on, Nanobox. Would love some outside feedback so feel free to take a look: https://nanobox.io
But I digress. I think Otto is less a "successor" to Vagrant, and more of a natural offshoot that solves a different problem. I don't ever see it replacing Vagrant, especially since it uses Vagrant behind the scenes.
Why the new custom config file format? I've mostly found that these homegrown formats (logstash? nginx?) suffer from inconsistency and lack of flexibility, and don't have any obvious benefits. Why not use one of the following like other Hashicorp tools?
- JSON/YAML, possibly with support for templating like Ansible
- A Ruby DSL
- A limited, but well-tested and understood format like .ini files
When you are writing templates for your configuration files, what you've wanted all along was a real programming language.
I used to really enjoy using Fabric. I only ever tolerated Ansible. I wonder if building on Fabric might have resulted in a better devops tool than Ansible provides.
It's JSON-compatible, at least (valid JSON is valid HCL).
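Roughly, the same block can be written either way (using a Terraform-style `variable` block purely as an example):

```hcl
# The same setting in native HCL...
variable "region" {
  default = "us-east-1"
}

# ...and its JSON equivalent, which HCL parsers also accept:
# { "variable": { "region": { "default": "us-east-1" } } }
```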
the following errors and try again:
* The host path of the shared folder is missing: C:\Projects\otto-playground
* The host path of the shared folder is missing: C:\Projects\otto-playground\.otto\compiled\app\foundation-consul\app-dev
Error building dev environment: Error executing Vagrant: exit status 1
The error messages from Vagrant are usually very informative.
Please read it carefully and fix any issues it mentions. If
the message isn't clear, please report this to the Otto project.
I hope Python gets added to that list as a first class environment in the future.
You set up a cronjob: did you code the script well? Who does it notify? How do we know when it fails? What's the recovery strategy?
That's why you use a virtual machine.
If you're wondering why to use a scripted provisioning system, it's to have that code under version control and shared between developers.
Then I read this, and it doesn't look like it's a Vagrant successor in any way that's useful to me. I don't want it to try and figure out service dependencies for me, because I can absolutely guarantee that it's unable to do that for the component I care about. I don't want it fiddling with DNS. I think using the same description but different commands for dev vs. production is a terrible idea. They say Otto does application-level instead of machine-level configuration, but machine-level is what I want. They say multi-VM is too heavyweight but that's also what I want. It's opinionated in all the wrong ways. Everything in https://ottoproject.io/intro/vagrant-successor.html makes it clear that Otto is fundamentally different from Vagrant, which totally belies their claim (at the end) that it will replace Vagrant in any significant way.
They should just come right out and say that they've created something different on top of Vagrant. Maybe it's cool, but it's not a successor. This brand hijacking just makes them seem fickle or shifty. Now I think I'll just leave Vagrant behind while my investment in it is still small, and learn one of the bazillion other tools that I could use to accomplish the same thing.
Furthermore, automatic dependency installation sounds like it will make reproducibility difficult, and I imagine automatic application type detection will fail in spectacular ways. What happens when the magic doesn't work? The Appfile looks like yet another ad-hoc domain-specific language to learn, with the usual major deficiencies compared with an existing general-purpose programming language.
I'm especially curious how to configure dependencies. You might need to create tables in a database which is set up as a dependency, but also need to support restoring from backups or setting up replication. Besides that, some of this configuration should belong to the owner of the service using the db (db names, etc.), and some to the owner of the db (global server settings, limits, etc.).
To add to that, some configuration needs to happen at runtime, so you can't just update the Appfile.
Anyway, this sounds like an awesome "UX" - let's see how it plays out in reality.
Edit: Otto appears to do a lot of things. But the part that everybody's complaining about is the "magic" part, the part that sets up a system automatically based on the language of the app. Heroku buildpacks also perform this magic, and have been open sourced and have a large community that helps maintain them. They're useful outside of Heroku -- for instance you can use them with docker via https://github.com/progrium/buildstep
It seems crazy to me that Hashicorp would try and reinvent this wheel. It's a problem that's fairly easy to do as a proof of concept, but it's the niggling details and number of combinations that can really explode.
Heroku's buildpacks are open, but they build them for their own purposes. In particular, a large part of my work involved recreating behaviours added to support this or that Heroku change (particularly STACK; god, what a mess that was), dealing with binaries being silently substituted, and the messy statefulness of their staging architecture.
Oh, and most importantly, we made it all work in a disconnected environment. Which Heroku never intended.
Speaking individually, not as a Pivot, if you didn't have to, I wouldn't recommend starting with Heroku's buildpacks. They solve a lot of problems, but no small part is solving Heroku's problems.
CF buildpacks explicitly state which binaries should be used with which stack in the manifest.yml file. If you had a breakage, feel free to report it on the relevant github repo.
There is already a pretty popular library for android named Otto http://square.github.io/otto/
Yeah, I agree. That's why there will be a time gap of maybe 5 years until Otto becomes mainstream. The ecosystem around Vagrant is so good that I even got it supported in my IDE (WebStorm). I don't plan to use Otto before I can enjoy all the benefits of Vagrant that exist today.
There, I will vote for this option against any other tool that claims ability to do things automagically.
I want to build apps, not worry about hosting and servers.
It is precisely what I have wanted for a long time and the ability to customize and override defaults where wanted seems to handle most of the complaints that people in this thread are mentioning.
I don't think I can overstate just how much I don't want to have to stay up to date with all the various best practices for deployments. I have zero interest in that, and it is currently a fairly expensive problem for many to fix.
My biggest concern is that it still relies on VirtualBox for local dev. VirtualBox is unreliable and slow, especially its file mounting driver.
I'd love to see this evolve to use a different virtualization solution, maybe something based on the OS X Hypervisor Framework (ala https://github.com/mist64/xhyve)
I'll definitely be keeping an eye on the project!
Eagerly waiting to see what goes there eventually - very exciting!
Also, will there be a way to create custom Infra Types?
Is Otto meant to replace Vagrant?
doesn't mean it's a bad name
Side note: I don't know why someone is downvoting people for saying this. I mean, yeah, they're not going to change the name, I guess, but it's still bad for the next big project to choose a name that is already taken by a popular project.
I'm not going to trade it for magic to be honest. Unless there is a compelling reason to do so that I'm not seeing/reading.
We've been using Packer and Vagrant with Windows builds for the better part of a year now and it's been rock solid.
We actually leverage it into our Windows Deployment Server to create automatically updated OS Images to be deployed to production machines.
I've partially written my own version of it in powershell and Hyper-V now.
Error building dev environment: Get https://checkpoint-api.hashicorp
certificate is valid for www.example.com, not checkpoint-api.hashicorp.com
edit: I just noticed your profile indicates that you work at Docker, Inc. I'm trying to figure out whether I feel that plays into your opinion or not but I would expect a bit more professionalism at least.
I've always felt that Vagrant was a solution in search of a problem, and no, being at Docker hasn't changed that either way. I have colleagues who love Vagrant, and they are certainly entitled to that opinion, as much as I'm entitled to think that it's horrible.
I honestly haven't seen any value brought from Vagrant or its ilk to anyone. There is temporary relief from some very real problems of transmitting development environments around the place. But the tool ends up creating pets that you have to care for over long periods of time. You have to deal with entire operating systems when you really want to abstract away the OS in favor of getting some real work done.
Configuration files as infrastructure aren't particularly interesting to me. I don't see how building annoying bits of configuration will fix anything in the long run.
Also note that while I think the HashiCorp infrastructure-as-code attempts are quite lacking, their Consul and Vault tools are really nice. :)
How is this any different than a Dockerfile? Vagrantfile is certainly more abstract I guess.