I don't see the point of downvotes here... I have several production deployments of NixOS, and can't imagine using any other OS / package manager (supplemental utilities are fine, of course).
I'm just starting to work with NixOps. Quite nice IMO, although I haven't gotten into the meat of it yet (i.e., real deployments). Nix solves the dependency problem really nicely, and I can spin up machines from scratch really fast. Also curious about comparisons here: if this is a tight implementation of something like NixOps, I'm stoked.
If any Terraform folks are watching this, I have a little feedback on the home page. When I first got there, it wasn't apparent that I could scroll, and I thought I had to sit through the flashy animation before I could see anything. It was a little off-putting.
Also, the skew/unskew on the #demo and #feature sections causes fonts to render poorly on some browser/OS combinations. It works fine in Firefox on Ubuntu. Chrome on Windows 7 is the worst, with Chrome on Ubuntu marginally better but still not rendering correctly. IE11 seemed to work fine on Windows 7.
Top image is fixed and bottom image is what I get when I first visit the site.
What's weird is that as soon as I remove the skew on the first feature bar (feature-auto), it clears up everything below it (the text within all skewed elements looks weird). But if you re-enable the skew, it distorts again. Likewise, removing the skew from the 2nd feature fixes everything below it but not the stuff above, and so on for the 3rd and the #demo.
One of the main strengths of something like CloudFormation is that we can use libraries in languages we're comfortable with to build a programmable DSL.
This gives me the full power of python, so I can build abstractions, use inheritance and encapsulation to specialize certain things.
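To make that concrete, here is a hedged sketch of the kind of abstraction a full language buys you (the class names and helper are entirely hypothetical, not troposphere's or any real library's API): a base class emits raw CloudFormation resource dicts, and subclasses use inheritance to encapsulate defaults.

```python
import json

# Hypothetical sketch: build CloudFormation JSON from plain Python objects.
class Resource:
    type = None  # overridden by subclasses with the CFN resource type

    def __init__(self, name, **props):
        self.name = name
        self.props = props

    def to_cfn(self):
        # Emit the raw CloudFormation resource dict for this object.
        return {self.name: {"Type": self.type, "Properties": self.props}}

class Instance(Resource):
    type = "AWS::EC2::Instance"

class WebServer(Instance):
    """Encapsulates our standard web-tier defaults via inheritance."""
    def __init__(self, name, **props):
        props.setdefault("InstanceType", "m1.small")
        super().__init__(name, **props)

template = {"Resources": {}}
for res in [WebServer("Web1", ImageId="ami-12345678")]:
    template["Resources"].update(res.to_cfn())

print(json.dumps(template, indent=2))
```

The payoff is exactly what the comment describes: specialization lives in code, and the JSON is just an artifact.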
We've done a lot of work to automate our infrastructure provisioning, but I'm interested in the abstraction layer Terraform provides -- especially for multiple providers.
How can we bridge the gap Terraform leaves by not having a fully complete programming language to define infrastructure (which has downsides but, in my opinion, more upsides)?
Sweet, thanks Mitchell. I'm always impressed at the level of polish your products have. I'll take this for a spin and I'll report back any results. Is there a mailing list or just #hashicorp on Freenode?
I've tried to use Troposphere, but found it a redundant layer on top of Python dicts. I ended up building my own macro language that compiles to raw CloudFormation JSON. What's funny is that I wanted to call this macro language exactly that: Terraform. I'll definitely look into Terraform; it looks to have a lot of the features I implemented, although most of mine were tailored exactly toward CFN, so it may not be practical to switch. Maybe I'll finally convince my employer to open-source it.
I use HashiCorp tools and I recommend them wherever I go. The reason I do that is because the tools are built with very specific use cases and are grounded in actual practices and backed by solid theory. None of their tools are something that was hacked up over the weekend. Looking forward to Terraform taking over the provisioning/deployment landscape.
Bummer, Terraform is also the engine that powers Harp & the Harp Platform. It doesn't appear the name was very well researched before it was chosen. Confusion seems inevitable.
How does it handle failures? E.g., in the first example it creates a server and then creates a DNS record. What if the DNS record creation fails? Does it roll back everything (i.e., destroy the server)? I'd probably want a system that automatically retries x times before rolling back in some situations. In other situations I'd probably want it not to roll back, or to roll back only some of the tasks. How flexible is it?
Terraform saves partial state as it creates resources for these exact scenarios.
In your example, Terraform would create and save the ID of the server to state before going along to create the DNS record. If the DNS record failed to create for some external reason, the next `terraform apply` you ran would simply refresh the server and go on to create the DNS record.
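For reference, the scenario looks roughly like this (resource names and attributes approximate the getting-started example; they may not match a given provider version exactly):

```hcl
resource "digitalocean_droplet" "web" {
    name  = "tf-web"
    image = "ubuntu-14-04-x64"
    size  = "512mb"
}

# Depends on the droplet via interpolation, so it is created second.
resource "dnsimple_record" "hello" {
    domain = "example.com"
    name   = "web"
    type   = "A"
    value  = "${digitalocean_droplet.web.ipv4_address}"
}
```

If the record creation fails, the droplet's ID is already in the state file, so the next apply only retries the record.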
Does it really provide most of the features? All I can find in its documentation is using Salt to create and destroy instances and volumes on various cloud providers. That just scratches the surface of what Terraform does.
Yes, it really does. SaltStack (with Salt Cloud) does cloud orchestration, configuration management, and provisioning; it works with containers and with cloud providers.
Provisioning and configuration management seem out of scope for Terraform; it calls other tools to perform them. What I was wondering is whether SaltStack's infrastructure management capabilities are comparable to Terraform's.
This is pretty awesome. My question is: how well does this integrate with already-set-up infrastructure? Or would I have to recreate the system to get going with Terraform?
This is a really important question and I'm glad you brought it up.
We're actively working on a way to bring existing infrastructure under Terraform management without having to recreate it from scratch. The process will actually be really easy (but is vaporware at the moment): Terraform only needs the TYPE of a resource and ID of that resource. From there, it can "refresh" the rest of the metadata in.
Our idea is that you'll be able to say "I have an `aws_instance`, its ID is `i-1234567`, and it satisfies the 'foo' resource" and it'll attach it.
Point being: we're thinking about this, and it is an important aspect of Terraform.
Another point is that you don't need to convert 100% of your infrastructure to Terraform to extract value from it. You can start by putting only specific services under management, and grow from there.
Hi, I think declarative resource management like this is the definitive future (or present as of now).
I was wondering what happens when you increase the number of nodes of a particular instance? I assume a new machine is started. But what happens if you remove the entire resource altogether: does it terminate the whole cluster? Similarly, how does scaling in a node work? And how would it affect dependencies?
How does this reconcile with the elasticity of a particular cluster? I.e., if an Amazon load balancing group destroys an instance or spins one up, do I simply refresh, and it would detect that this new instance is part of the cluster defined in Terraform?
Glad to hear you are thinking about how existing infrastructure could be integrated into Terraform.
I think this is especially important with elastic IPs that have been whitelisted somewhere, e.g. in a 3rd-party service provider's VPN. So a way to say: take EIP x.x.x.x, don't ever release it, and then indicate which instance to associate it with, would be great.
Another example of this feature is starting from an existing VPC and building from there, or even inheriting information from the current state and converting it to an "auto-generated plan" to build from. Thanks again, HashiCorp, for all the great software.
I have been developing an almost identical tool called 'ozone.io'. It leverages CM tools such as Puppet, Ansible, and Chef, not by writing plugins, but by having users write or extend scripts called 'runners' that install and execute the CM tool per node. You can check out a prototype chef-solo runner at https://github.com/ozone-io/runner-chef-solo.
Parallel deployment of multiple clusters is also covered; it too is handled by a directed acyclic graph based on dependencies between clusters. I am on my own, writing it for my thesis, which will come out pretty soon.
The whole thing works declaratively, so it converges your infrastructure to the desired state. By increasing the nodes for 'smallweb' it will undergo the steps defined in the cluster lifecycle. It will then also update the configuration of the nginx load balancer.
As you can see each cluster is pinned to a provider/instanceprofile, and one of the things I am adding are affinity rules so the cluster deploys to multiple locations/providers.
It is not ready to be open-sourced, but if anyone wants to see it, contribute, or learn more, I can give view access.
Actually, I've been reading and I might have jumped the gun. It's programmer panic, sorry.
You are right, it looks like a different type of system.
I actually support all the providers jclouds provides: OpenStack Nova, Rackspace, EC2, AWS-EC2, and more.
What's more, ozone does not store any state itself; rather, it stores it in a ZooKeeper/etcd cluster, so you are in control of your metadata.
Ansible author here! Hi Mitchell and crew! Our users are great fans of Vagrant and Packer, so +1 for those.
From a cursory overview, it looks like Terraform is basically a CloudFormation-type abstraction. Ansible contains declarative models for lots of cloud providers as well, but does not attempt to abstract out the different clouds. We generally prefer to show things in their natural state (figuring you will know when you want to use feature X or Y, and want the knobs/buttons exposed).
But yes, we have similar features for saying "X instances of this should be running now" and making it so, all without requiring CloudFormation.
For people who like Terraform's flavor, though, I can see users using these tools together. They could declare a cloud using either Ansible or Terraform, and then use Ansible for the final configuration and application deployment, plus full lifecycle management.
I know things like Packer like to walk people down a more imagey road, but you can also use config tools to describe the recipes that build your images, and that would include Ansible, Puppet, Chef, or even (if you so wished) bash.
I'm not a fan of bash though :)
I'm happy to see more efforts to make cloud provisioning accessible though, and like the idea that this would allow more easy migration between some cloud providers. Ansible has a lot of the same declarative thingies, but will probably appeal more to people who aren't looking for the DSL.
It doesn't look like Terraform is attempting to be a provisioner itself, so it wouldn't do the things that Ansible or other config/app deploy tools do once you have a running instance, but does some of the things various config tools do to help you GET a running instance.
One of the things shown in the Ansible examples is how to do a cloud deploy in one hop, i.e., request resources and also configure the stack all the way to the end, from one button press. It can also be used to orchestrate rolling updates of those machines, working with the cloud load balancers and so on, throughout their entire life cycle, all using just the one tool.
As for where the future of this tool is, I obviously can't speculate, nor should I. But I do welcome more attempts to simplify beasts like the AWS EC2 API space, because I think we both agree nobody wants to really keep all of that in their head at all times.
The best thing about this is even though Ansible and Terraform seem to do some similar things (Ansible, of course, has a few other tricks up its sleeve, but just comparing the tools in terms of infrastructure orchestration...), there's plenty of room in this space for multiple solutions.
Just like Chef and Puppet seem to have leapt off from a solid platform started by CFEngine et al. and made 'configuration as code' a thing, Ansible, Salt Cloud, and Terraform seem to be kicking off the 'infrastructure as code' movement (and are adapting to many different workflows: Docker, Chef, Puppet, etc. all play nicely in this sandbox).
Too many places rely on band-aids, shell scripts, and manual process for infrastructure, mostly because tools like Ansible and Terraform haven't existed until recently (or today, in Terraform's case).
+1 for "too many places rely on band-aids." The frustrating thing is that there's no need now. These tools are adoptable by almost anyone, and if you use them you shine a giant spotlight into the previously dimly lit area of your system configurations. You get documentation, testability, and version control of your infra. It's extremely liberating.
One of the most valuable things has turned out to be the ability to refactor infra as a project evolves. Some task that would have required a few hands and a project manager now becomes a few edits.
I lead a CFEngine team, and it never fails to amaze people when I demo end-to-end life cycle management "that used to take 3 guys 2 days," etc.
Is there a way to encrypt variables and provide a password to decrypt when executing a plan, so that I can commit my API keys, passwords, etc. to source control without fear? I'm thinking of something similar to Ansible and its 'vault' concept for variables (I'm sure Chef, Puppet, etc. have something similar).
There isn't at the moment, but it is definitely something we need to think through. We didn't want to ship anything broken though (illusion of security), so we deferred this feature until we can think it through more carefully.
I'd suggest looking into OpenPGP and seeing if one of its implementations will work for you (some implement ssh-agent protocol support and have a couple of other tools for doing on-demand crypto).
On the page about integration with Consul [1], I read: "Terraform can update the application's configuration directly by setting the ELB address into Consul." The question is whether I can do it somewhat the other way around, i.e., get information from Consul and point the ELB at it, somewhat like Synapse or SmartStack. Or maybe I don't need a service discovery tool for this yet and can just use TF without Consul, simply configuring the components of the infrastructure and the ELB? The point is to simplify the first step and avoid adding logic to support Consul lookups in the apps. What's the easiest way here?
> The ~ before the Droplet means that Terraform will update the resource in-place, instead of destroying or recreating.

Terraform is the first of the tools that can be considered similar to it to have this feature.
While not multi-platform, AWS's Cloud Formation does just this, it takes as its input a stateless JSON description of a set of AWS resources and their dependencies. Given a change in the desired state, it will do its best to update resources rather than creating them from scratch when possible.
Yes, but CloudFormation doesn't have Terraform's concept of an "execution plan"[1], which tells you what Terraform will do, before it does it. It is very easy with CloudFormation (especially as they get larger and more complex) to run into cases where CloudFormation unexpectedly destroys and recreates a resource, something that would never happen in Terraform because you would see it before it happened in the plan.
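For illustration, a plan for a case like that might look roughly like this (output format approximated, not copied from a specific release); the `-/+` marker is exactly the destroy-and-recreate surprise you want to catch before applying:

```
$ terraform plan

~ aws_instance.web
    instance_type: "m1.small" => "m1.large"

-/+ aws_instance.db
    ami: "ami-1234" => "ami-5678" (forces new resource)
```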
Agreed; having to dig through the AWS documentation to see whether updating a particular value requires replacing the resource it applies to is tedious and error-prone.
Another thing that excites me about separating the plan from the execution is that I can periodically run `terraform plan` to check for configuration drift without risking making a change. I've been bitten by the following:
1) Someone creates an ASG with the desired number of instances set to 1 via CloudFormation
2) Later, they increase the desired number of instances, e.g. to 5, via the EC2 console or API rather than CloudFormation
3) I come along and apply an update to the stack via CloudFormation. This resets their ASG to 1 instance, terminating the other 4.
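That drift check can be as simple as a cron job; a sketch (the "No changes" string and the mail command are assumptions, so adjust to your actual plan output and alerting):

```shell
#!/bin/sh
# Run a read-only plan on a schedule; alert if it reports pending changes.
terraform plan > /tmp/drift.out 2>&1
grep -q "No changes" /tmp/drift.out || \
    mail -s "infrastructure drift detected" ops@example.com < /tmp/drift.out
```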
If anyone from AWS is reading this, please steal this feature for CloudFormation!
This is a very nice feature, and I'm going to take a close look at Terraform. If you are already using CloudFormation, I'd recommend applying stack policies that deny permission to "Update:Replace" or "Update:Delete" any resources. When you need to, you can use a one-time policy that lets you blow away specific resources, by name or type.
Or to anything that is "kind of a biggie" (not just deletes) in VPC networking:
- AttachInternetGateway / DeleteInternetGateway / DeleteCustomerGateway
- AssociateRouteTable / DeleteRoute / DeleteRouteTable / CreateRoute / DisassociateRouteTable
- ReplaceNetworkAclAssociation
- And all the "Delete"s
I'm surprised that the CloudFormation team hasn't added a --dry-run option yet. It would make CF sooooo much more usable for a production stack. (I have a policy now to never update a stack once it's taking production traffic.)
This would help, but it's not enough. `--dry-run` tells you what it would do at a point in time, but isn't a guarantee that by the time you update the stack, that's what will actually happen.
Terraform's execution plans, on the other hand, can be saved and applied. This tells Terraform that it can _only_ apply what is in the plan. It _must not_ do anything else.
To match TF here, CloudFormation would really need "staged changes and applied changes" as separate steps.
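Concretely, the staged/applied split in Terraform is just two commands (flag spelling per the early docs; treat as a sketch):

```
# Stage: compute and record exactly what would change.
terraform plan -out=staged.tfplan

# ...review staged.tfplan, get sign-off...

# Apply: execute only what was staged, nothing else.
terraform apply staged.tfplan
```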
That sounds like a great feature; you are correct, one of the biggest pain points with CloudFormation is the need to divine which actions will be taken in response to changes.
It looks like Terraform is launching with decent coverage of AWS resources. Thinking of my own usage, the main ones missing are ElastiCache and CloudWatch. I'm not sure how you can setup a useful autoscaling group without the latter.
tl;dr: Terraform is modular virtual infrastructure automation. [I would say it's an orchestration tool, but that usually implies datacenter-wide resources, and this just seems to apply to cloud service providers]
"[..] Terraform combines resources from multiple services providers: we created a DigitalOcean Droplet and then used the IP address of that droplet to add a DNS record to DNSimple. This sort of infrastructure composition from code is new and extremely powerful."
Well, "new" in the sense of "we created another thing to automate infrastructure deployment and configuration". I have worked with various amalgamated solutions that do this for the past 12 years. Of course they mention that in the software comparison section, but it doesn't take away from the fact that this isn't new by a long shot.
"Terraform has a feature that is critical for safely iterating infrastructure: execution plans. Execution plans show you what changes Terraform plans on making to your infrastructure. [..] As a result, you know exactly what Terraform will do to your infrastructure to reach your desired state, and you can feel confident that Terraform won't surprise you in unexpected ways."
So it's declarative, and it has a dry-run mode.
The thing that really bugs me is the idea that you should be creating "code" to do rote tasks such as changing resources or deploying things. You know what the single most problematic thing about infrastructure changes is? Human error. It's a simple fact of user interface design that humans are less likely to fuck up a point-and-click interface than a command line program that you have to feed a hand-edited config to. And automated config generation can arguably be more error-prone.
Automation/orchestration should not simply make things happen automatically. It should make things work more reliably, and require less expertise to do so. To be frank, any code monkey with a few weeks of free time to kill can create a tool that does exactly what this one does, and that's why there are dozens of them that all do the same thing, yet we always need a new one.... because they all stink at actually making things work better.
Off-topic: it is interesting to me how different companies seem to dominate a space for a few years, and then recede. It's a common pattern. I can remember in 2009 when it seemed like RightScale (www.rightscale.com) was the dominant force creating tools to take advantage of AWS, but nowadays I never hear of them and never see anything interesting come from them. All the interesting stuff is happening elsewhere.
I think it's easy for dominant players to sit back and let revenue stream in when there's not much in the way of real competition, which can lead to devs leaving for greener pastures (where green is either more interesting work or better pay, because dev is de-prioritized at the current company). Then some real competition shows up, and the company is woefully unprepared. I think it's an important lesson: just because your competition doesn't appear to be forcing you to innovate and expand doesn't mean there isn't someone silently lurking, ready to upend your world.
Thanks everyone at Hashicorp! This tool looks awesome. I wish it had been around years ago so I might have a nice version-controlled set of configuration files instead of a bunch of wiki articles and post-it notes ;)
I have a quick question I didn't see covered in the docs. Is there a best-practice way to organise Terraform configuration files? Specifically when using it to manage different environments (e.g. staging, prod, qa). I'm thinking of some sort of folder structure like this:
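For example (purely illustrative; the names and layout are my own, not an official convention):

```
infrastructure/
├── modules/             # shared building blocks (app, db, lb)
│   └── web/
│       └── main.tf
├── staging/
│   ├── main.tf          # instantiates the shared pieces
│   └── terraform.tfvars # environment-specific values
├── qa/
│   └── main.tf
└── prod/
    └── main.tf
```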
We should've included a roadmap, but Terraform 0.2 will introduce first-class modules and a [basic] package manager. We'll also address environments then.
What granularity do you see Terraform acting at? Could it replace Puppet, say?
I would love to keep the complete codebase for our infrastructure's config in a single place, in a single language; I can see Terraform driving Puppet with variables, but there's an overlap here (declarativeness and modularity) that would benefit from being seamless.
Then again, with something like Docker you hardly need Puppet, and could simplify the recipes to the point where Terraform injected vars into a dockerfile before deploy.
I'm pretty sure by package manager, he means to install Terraform modules.
Docker still fits for your problem, so does Packer. Honestly, it's super nice to not have to worry about any external resources failing, where you only have 1 thing pulled in to deploy any type of application, and that would be the image.
The same is also true of packages, if you're packaging correctly and work in a sane environment. It's also a lot easier for devops to manage, because they then only have to apply access control, logging, resource management, and other integration requirements to the main OS image, instead of to every Docker image produced for deployment.
debs built with fpm[1] have been working for me. Unless you need something particularly complex, it should just be a matter of setting up a directory with the right layout and calling fpm with a couple of parameters.
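A minimal invocation looks something like this (paths and package metadata are made up; check fpm's README for current flags):

```shell
# Package the contents of ./build as /opt/myapp in a .deb
fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp -C build .
```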
It reads like you can use any provisioning software, use any server provider supported, use any DNS provider supported, you just need to write a bunch of configuration.
Off-topic, but I guess you HashiCorp guys would like to know: there is a typo in the geometric animation. It says "Build, Combine, and Launch Infrastucture_"; it should be "Infrastructure".
Would this be a good fit or are there any plans to include providers for hypervisors (VMWare, Virtualbox, Xen etc.. ) Or even containers (i.e, Docker)?
Anyone know if an API is planned? If I want to manage infrastructure from code, I would love for Terraform to be an option.
As a (predominantly) Node.js developer, I'd probably use pkgcloud for this sort of thing. Terraform supports a great range of providers and has some more advanced features, so I'd love to play with it as an alternative to pkgcloud.
This would be great for us. Various parts of our stack are spread around so many different platforms, and this could really take the grunt work out of that. Not to mention removing the need of dealing with fifteen various shoddy interfaces. Heck, AWS isn't even consistent with itself (just check out OpsWorks vs Route53).
Razor is for private cloud; the biggest problems it solves are bare-metal provisioning and OS-specific mass installs. One use case for Razor: pop a new server into the rack, provision ESXi on it with Razor, and then provision a number of CentOS and Debian VMs without having to care how different Kickstart is from Preseed. It's a great tool, but it's mostly for private cloud, while Terraform is for public cloud and therefore sits at a higher level.
I don't have any issues with the colors, but I find the slight blurriness (aliasing?) a bit painful to read. That said, I love the design of the site otherwise.
On another note, Terraform appears to be great. I cannot wait to try it out.
Mitchell, do you even sleep? Every time I see one of your tools, I feel like I need to pay you a hefty sum to teach me how to start coding productively, cuz Hashicorp output seems ferocious. Keep up the good work.
Well, this will seem silly, but my bad. After looking at all your tools, I never looked at the HashiCorp site. I had no idea it was not a one-man shop. I saw your last name and the branding on the project pages and made a judgement a few years back. I should have checked my facts.
Regardless, your software kicks ass. Thanks to you, and your team. Keep on trucking.
I don't think it seems silly... VentureBeat [1] seems to think the same thing:
    Mitchell Hashimoto, the guy who created the popular Vagrant tool for setting up development environments, has gone and built another useful tool for developers working on public-cloud platforms.
Every time HashiCorp releases one of these tools, I can't help but think I should beg you for a job. I mean, they're even being created in Go, which is great!
I funded the Reading Rainbow Kickstarter. I was shocked and dismayed to learn that no portion of the funding would be used to teach rainbows how to read.
It's the new thing. Pick a noun or verb that's only marginally-related (or sometimes not even remotely related) to your product and take ownership of it.
To be fair, Uber was originally UberCab, but dropped "Cab" from the name after the MTA in San Francisco charged Uber with running an unlicensed taxi service.
At least Uber connotes that it's better than the competition. I was shocked to find out the other day about a CSS framework called Inuit. The mascot is a little inuit guy in a parka. Terraform seems much better in comparison.
McDonald's was at least started by two guys with the surname McDonald. It is/was their company so they named it after themselves. That's no different than "Frank's Burger Stand", which is again completely different than writing a clone of GCC in javascript and then calling it Patriot or something.
It sounds like it would be possible to plug nix-based provisioning into Terraform, and use it to manage the high-level cluster structure.
Edit: downvotes? whatever for?