Wow, I have that sinking feeling that this OpsWorks stuff may have turned what was a coin-flip decision we made between Chef and Puppet (in which we chose Puppet) into one heavily weighted towards Chef, but it's hard to say without reading more.
My immediate question is how easy it would be to keep your Chef recipes in source control somewhere and have equally easy one-click deployment both to AWS and to a local Vagrant development box. We currently have a similar environment set up with Puppet and it makes testing sysadmin stuff pretty awesome.
Edit: already finding holes - can't deploy to cluster compute instances (a deal-breaker) and deployment seems rather slow (15 minutes to deploy from scratch versus about 8 with plain Puppet) :(
There's a vanilla Chef repo you can clone to store cookbooks, roles, and data bags in source control.
For testing on local VMs there's Opscode's bento, which automates creating Vagrant base boxes and running recipes. You can specify Chef Solo for this, which speeds up testing significantly since it avoids round-trips to a remote Chef server or Hosted Chef.
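To sketch what that local-VM workflow can look like: a Vagrantfile wired up to the Chef Solo provisioner, pointing at the same cookbooks you keep in source control. The box name, paths, role, and JSON attributes below are illustrative assumptions, not from any particular project.

```ruby
# Vagrantfile -- provision a local VM with Chef Solo; no Chef server needed.
# Box name, cookbook/role paths, and attributes are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provision "chef_solo" do |chef|
    # Reuse the cookbooks and roles you already version in git
    chef.cookbooks_path = "cookbooks"
    chef.roles_path     = "roles"
    chef.add_role "webserver"
    # Node attributes for this environment
    chef.json = { "myapp" => { "environment" => "development" } }
  end
end
```

With this in place, `vagrant up` and `vagrant provision` exercise the same recipes you'd push to production, which is what makes local testing of sysadmin changes so pleasant.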
Why would OpsWorks' choice of configuration and deployment management software affect your own choice, unless you want some sort of hybrid?
I think this will be a boon to AWS users who are currently deploying manually, but I don't think very many people who already use deployment/config management software will want to switch and rely on AWS.
I don't think it's a Puppet/CFEngine/Ansible/Salt killer.
I think the big advantage here is just how much you get out of the box without sacrificing flexibility -- if you're deploying on AWS in the first place. This would also save us from writing a bunch of management scripts (around scaling, etc.) which would be tied to AWS anyway (read: no inherent disadvantage, since any solution, in-house or not, would require the AWS platform). I'm always looking for ways to reduce our code/services footprint, and this seems to do so without a huge amount of vendor lock-in.
Basically if you get your car from the dealership (AWS is great esp for EC2 and S3), you don't have to buy the tires from them too.
What I saw from the video does look interesting, but in the end, if you're going to keep your configuration management recipes in git, you still have the same choices to make about which tool is best for you. Whether that's Chef or something like http://ansible.cc, you'll pick it because it's where you'll spend most of your automation time.
I also suspect Amazon will feel that people may want to bring their own tools and will add more options over time.
It's easy, but also a bit hacky, because OpsWorks triggers multiple lifecycle events and also distributes the Chef config. If you solve the config problem, maybe with a static example, you're pretty much done.
This is the most interesting news I've heard from AWS in a long time. I don't mean to suggest that they haven't produced any exciting news in a while, but that this service is so exciting because it will fundamentally change how a lot of people use AWS.
Of particular interest to me is how this will affect platforms like Heroku. After the recent routing mesh debacle a number of people contacted me to discuss migrating their application from Heroku to AWS.
My Heroku -> AWS migration procedure is already pretty straightforward:
- identify Heroku services
- write Puppet manifests to manage EC2 instances
- write CloudFormation stacks to control other aspects of the infrastructure, e.g. autoscaling and S3 buckets
- write deploy scripts so clients can continue using git-based deploys
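For the git-based deploys step, the heart of a server-side hook is just deciding which ref was pushed. A minimal sketch in Ruby (the deploy branch name and the checkout command are assumptions for illustration; a real hook would live in `hooks/post-receive` on the bare repo):

```ruby
# Sketch of git post-receive hook logic: git feeds lines of
# "<oldrev> <newrev> <refname>" on stdin, one per updated ref.
# We only want to deploy on pushes to the deploy branch (an assumption).
DEPLOY_REF = "refs/heads/master"

def refs_to_deploy(hook_input)
  hook_input.lines.map do |line|
    _oldrev, newrev, refname = line.split
    refname == DEPLOY_REF ? newrev : nil
  end.compact
end

# In the real hook you'd then check out each returned revision, e.g.:
#   system("git --work-tree=/srv/app checkout -f #{newrev}")
# and restart the app server.
```

Clients keep their familiar `git push production master` workflow, while everything after the push is plain scripting you control.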
OpsWorks is only going to make this migration easier, and will have the additional benefit of letting my clients manage more of their own infrastructure without my help. Training people to change autoscaling group configuration via a web interface is one thing; asking them to update CloudFormation stacks is rather less user-friendly.
From the clients I have spoken to, the key benefits of Heroku are a) git-based deploys and b) easy scaling configuration via a web interface. Both of these are possible with AWS, albeit a little more difficult to set up - especially scaling configuration - and OpsWorks is going to close the gap even further.
If AWS can provide most of these services directly, will people still be willing to pay the Heroku premium?
There is one downside to the OpsWorks announcement, at least from my personal perspective. I'm currently writing a book on AWS system administration, and I've almost finished the chapter describing various Puppet workflows. Like many other commenters I was undecided whether I should focus on Chef or Puppet, given their relatively similar feature sets and popularity. I'm not sure if I've backed the wrong horse here and should rewrite the chapter using Chef, or if having a Puppet perspective would be useful. Any thoughts?
This was previously known as "Scalarium", a third-party solution developed by Peritor that added several features on top of AWS, including management of instances using Chef, infrastructure planning, etc.
AWS has owned the base cloud market with EC2 and S3.
What I find impressive is that they're continuously lowering prices competitively and expanding their services in a way that makes their base offering more attractive.
OpsWorks starts to battle one of the biggest reasons to go with PaaS providers such as Heroku: the difficulty of managing these services yourself.
This benefits both Amazon and the consumer. Lower prices and continuous reductions improve customer confidence in and use of the AWS platform, while consumers are simply happy they're paying less. Heroku and Google App Engine (even VPS providers) rarely decrease prices, which makes AWS certain to be considered.
EDIT: Second nitpick: AWS OpsWorks is no Heroku. Amazon's earlier release, AWS Elastic Beanstalk, is the one supposed to replicate Heroku's functionality (there's a reason Heroku is still around, though).
Yeah, I noticed some of the similarities to Beanstalk myself and am kind of curious where they are going with this concept. It seems to me that instead of building a fully integrated Chef management platform with a fancy GUI (which was my initial thought), they just recreated Beanstalk with some major bonus features and perhaps a lot more control.
I wonder what the easiest way is to automatically snapshot EBS volumes. I don't want to run a script - scheduling, managing access keys, etc. are all too much hassle. Besides, running the script on the machine itself is unreliable - I want the setup to take one last snapshot after the instance dies.
Ideally I would right-click on an EBS volume, select "back up periodically" -> "every 10 minutes" -> "retain each for three days, then dailies for 30 days" and be done with it.
You can do this with ylastic's "scheduled tasks" - it's $25-50/mo (depending on the plan) for managing your whole AWS infrastructure, and is basically an alternate web UI that autodiscovers your AWS footprint. It has some nice ways to manage autoscaling configurations, as well as migrating AMIs between regions. I don't work for ylastic or anything, but I've used it for two different clients and used the "scheduled tasks" feature to do periodic backups of EBS volumes just as you've described.
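Whatever tool fires the snapshots, the retention policy described above ("keep everything for three days, then dailies for 30") is easy to express as a pruning function. A sketch in plain Ruby - the two windows come from the comment above, everything else (names, defaults) is illustrative:

```ruby
# Given snapshot timestamps, decide which to keep under a two-tier policy:
# keep every snapshot newer than `full_window`, plus the newest snapshot
# of each day back to `daily_window`. Windows are in seconds; the defaults
# encode "3 days of everything, 30 days of dailies".
def snapshots_to_keep(timestamps, now:, full_window: 3 * 86_400, daily_window: 30 * 86_400)
  keep = timestamps.select { |t| now - t <= full_window }
  dailies = timestamps
    .select { |t| now - t > full_window && now - t <= daily_window }
    .group_by { |t| t.strftime("%Y-%m-%d") }  # one bucket per calendar day
    .map { |_day, ts| ts.max }                # newest snapshot of that day
  (keep + dailies).sort
end
```

Anything not returned by the function is a candidate for deletion; the snapshot creation itself would be a separate scheduled call to the EC2 API.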
What has always bothered me is that AWS comes up with a service which is half-cooked and doesn't answer the true requirements of the application. I have used almost all of AWS's offerings in different contexts, and every time we really needed it to deliver, it failed miserably.
I believe AWS should concentrate on its core offering: being an awesome IaaS. It needs to fix its severe and lengthy downtimes, unpredictable network performance, and other such core issues.
I was hoping to do the same (albeit much easier to follow than their own walkthrough), but I'm going to sit back for a while and see what develops.
These could be early teething problems, so I'm not going to judge them so soon.
That said, I'm surprised by what's not included in the main interface (VPC, ELB, to start with).
You hit the nail on the head (conversely, I was surprised to see them deploy HAProxy by default). The lack of VPC makes this far less appealing. I've been working on a scaling-capable VPC configuration with quite a few moving parts and got giddy with excitement over the potential of OpsWorks; then I tested it out and realized it's nowhere near ready for prime time and is missing some major pieces.
Still... props to them, and I'm excited to see it mature.
I followed the guide exactly, so I could get a real feel for the system as I progressed.
It wasn't possible to follow the guide like this, as I had to SSH onto the boxes to find out what was going wrong.
I'm sure it'll be fixed eventually - I'm just letting people know that, right now, the walkthrough is unreliable.
This makes OpsWorks useless if you want to autoscale with any reasonable "nimbleness" - the time it takes for a non-custom AMI to be bootstrapped with the Chef client and then load and run all my standard and customized cookbooks is way too long. I need to be able to specify custom AMIs that are already largely prepared so they can boot fast. Also, the Chef bootstrap process is brittle: twice in as many weeks it has been broken by dependency failures (first a net-ssh dependency, then a JSON gem dependency).
So this service basically says - "Just because we don't have a Redis Amazon Service for you to snap into doesn't mean you should not use AWS or look to other PaaS, instead, easily integrate other apps with OpsWorks."
I don't see any press release from Opscode, which makes me think this is not a joint venture with them, which in turn makes me wonder how dangerous it is to be part of the AWS ecosystem...
If you prove a market around AWS, what are the odds of getting this market gobbled by AWS themselves? They have a level of access to their own infrastructure that you don't have, let alone the access to their own (huge) customer base, integrated billing and branding.
Amazon has been repeatedly accused of exactly what you describe with Amazon Marketplace. This is the first article I was able to find, but I've seen similar articles over the past few years. (One of my companies is an ecommerce site that considered selling through Amazon Marketplace.)
So it would not surprise me if they used this same tactic with AWS.
The lack of VPC support and inability to configure custom security groups would prevent me from using it for right now. It seems like a good alternative to PaaS out there though, and I'm sure it will be rounded out with more features in the coming months.
(Note: I wouldn't see us using it in the near future anyway, since we already manage our machines on AWS with Puppet.)
FWIW, if you do need ELB support right away (and keep in mind that you'll have to look at it from each server's point of view), you can probably use https://github.com/opscode-cookbooks/aws (and its elastic_lb resource) and hook it into your own cookbook for setting up whatever ELBs you want (then set up your custom cookbook repo for your stack).
(I'm trying to say that this platform is incredibly flexible, and you can reuse what's already out there. If you need support for X, Y, or Z, then you can likely write in support with Chef.)
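As an illustration of that approach - resource and attribute names here are from my reading of the opscode-cookbooks/aws README, so double-check against the current cookbook before relying on them, and the data bag and ELB names are made up - a custom recipe registering a node with an existing ELB might look roughly like:

```ruby
# Hypothetical wrapper recipe around the opscode-cookbooks/aws cookbook's
# elastic_lb resource. Data bag name, item keys, and ELB name are all
# illustrative assumptions.
include_recipe "aws"

creds = data_bag_item("aws", "main")

aws_elastic_lb "register with front-end ELB" do
  aws_access_key        creds["access_key_id"]
  aws_secret_access_key creds["secret_access_key"]
  name                  "frontend-elb"
  action                :register
end
```

Put a recipe like this in your custom cookbook repo for the stack and hang it off the appropriate lifecycle event, and each instance registers itself as it comes up.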
Not only that, but there's no allowance made for micro instances either.
Also, SSL on a per-app basis is confusing me - does that mean each box individually handles SSL termination, or is it done on the load balancer side?
I have a lot of reading to do before I can really understand what's happening here, I think.
OpsWorks looks quite impressive from my non-devops perspective. And most interesting of all is the fact that it's free! I understand that it's built to encourage more usage of AWS, but there are plenty of third-party services out there which do similar things but are not free.
This still has a long way to go before it is a threat to Scalr / RightScale, but as it matures, that is definitely going to happen. Where companies like Scalr and RightScale add value is the ability to deploy your cloud across multiple providers.
Sort of, but not exactly. It's not API-compatible with Chef server, and I believe you don't get the nice search syntax for grabbing node information across your fleet matching some criteria. But you still get info on your other machines, and you do get nice orchestration across your fleet.
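For reference, the Chef server search syntax being referred to looks roughly like this inside a recipe (the role name, environment, and port are illustrative):

```ruby
# On a full Chef server, recipes can query the search index for nodes
# matching criteria - e.g. all production nodes with the webserver role.
# Role name, environment, and port below are made-up examples.
web_nodes = search(:node, "role:webserver AND chef_environment:production")

# Then use their attributes, e.g. to build a load-balancer upstream list:
upstreams = web_nodes.map { |n| "#{n['ipaddress']}:8080" }
```

On OpsWorks you'd instead read the stack's layer/instance information out of the node attributes it distributes to every machine, rather than querying a central index.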
You get better orchestration than you would with Chef server, plus much faster interactions and better lifecycle handling. As for searching: you get all the information about every machine in your fleet. It's much better than Chef server.
OpsWorks gives you better features. Try starting a small stack, and also have a look at the default cookbooks to see what you can do with OpsWorks: https://github.com/aws/opsworks-cookbooks
I have a webapp that I built that uses Rackspace's API and Chef. I have a Chef server that I use for all my configuration management, and a Rails app that talks to my Chef server to manipulate machines, or to Rackspace's API to spin up machines, and I am building "triggering" into it (low disk space or high CPU -> do X). I am able to change IP addresses, spin up a machine with a specific stack, etc. Granted, this isn't the default chef-server, but the ability to do all this without being locked into AWS is there.