My immediate question is how easy it would be to keep your Chef recipes in source control somewhere and have equally easy one-click deployment to both AWS and a local Vagrant development box. We currently have a similar environment set up with Puppet, and it makes testing sysadmin stuff pretty awesome.
Edit: already finding holes - can't deploy to cluster compute instances (a deal-breaker), and deployment seems rather slow (15 minutes to deploy from scratch versus about 8 with plain Puppet) :(
For testing on local VMs there's Opscode's bento, which automates creating Vagrant baseboxes and running recipes. You can specify Chef Solo for this, which speeds up testing significantly because it avoids chatting with a remote Chef server or Hosted Chef.
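A Chef Solo setup like that can be sketched in a Vagrantfile (the box name and recipe here are hypothetical placeholders, not from the thread):

```ruby
# Vagrantfile - a minimal Chef Solo sketch; box name and recipe are assumptions
Vagrant.configure("2") do |config|
  # A basebox built with bento (name is a placeholder)
  config.vm.box = "opscode-ubuntu-12.04"

  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "cookbooks"   # the same cookbooks you'd push to production
    chef.add_recipe "myapp::default"    # hypothetical recipe
    chef.json = { "myapp" => { "env" => "development" } }
  end
end
```

Because Chef Solo runs entirely from the local cookbooks directory, the provision step never talks to a Chef server, which is where the speedup comes from.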
I think you shouldn't worry - I wouldn't be surprised if they support BOTH Chef and Puppet in the future; it's in their best interest to support both.
I think this will be a boon to AWS users who are currently deploying manually, but I don't think very many people who already use deployment/config management software will want to switch and rely on AWS.
I don't think it's a puppet/cfengine/ansible/salt killer. Basically, if you get your car from the dealership (AWS is great, especially for EC2 and S3), you don't have to buy the tires from them too.
What I saw from the video does look interesting, but in the end, if you're going to be editing your configuration management recipes in git, then you still face the same choices about which tool is best for you. Whether that's Chef or something like http://ansible.cc -- you'll pick it because it's where you'll spend most of your automation time.
I also suspect Amazon will feel that people may want to bring their own tools and will add more options over time.
Of particular interest to me is how this will affect platforms like Heroku. After the recent routing mesh debacle a number of people contacted me to discuss migrating their application from Heroku to AWS.
My Heroku -> AWS migration procedure is already pretty straightforward:
- identify Heroku services
- write Puppet manifests to manage EC2 instances
- write CloudFormation stacks to control other aspects of the infrastructure, e.g. autoscaling and S3 buckets
- write deploy scripts so clients can continue using git-based deploys
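As a sketch of the CloudFormation step above, a minimal stack covering an autoscaling group and an S3 bucket might look like this (AMI ID, instance type, and resource names are placeholders, not from an actual migration):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "AppLaunchConfig": {
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
        "ImageId": "ami-00000000",
        "InstanceType": "m1.small"
      }
    },
    "AppAutoScalingGroup": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "Properties": {
        "AvailabilityZones": { "Fn::GetAZs": "" },
        "LaunchConfigurationName": { "Ref": "AppLaunchConfig" },
        "MinSize": "1",
        "MaxSize": "4"
      }
    },
    "AssetsBucket": {
      "Type": "AWS::S3::Bucket"
    }
  }
}
```

The point is that once this is a template in version control, scaling limits become a one-line diff rather than a manual console change.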
OpsWorks is only going to make this migration easier, and it has the additional benefit of letting my clients manage more of their own infrastructure without my help. Training people to change autoscaling group configuration via a web interface is one thing; training them to update CloudFormation stacks is considerably less user-friendly.
From the clients I have spoken to, the key benefits of Heroku are a) git-based deploys and b) easy scaling configuration via a web interface. Both of these are possible with AWS, albeit a little more difficult to set up - especially the scaling configuration - and OpsWorks is going to close the gap even further.
If AWS can provide most of these services directly, will people still be willing to pay the Heroku premium?
There is one downside to the OpsWorks announcement, at least from my personal perspective. I'm currently writing a book on AWS System Administration, and I've almost finished the chapter describing various Puppet workflows. Like many other commenters, I was undecided whether to focus on Chef or Puppet, given their relatively similar feature sets and popularity. I'm not sure if I've backed the wrong horse here and should rewrite the chapter using Chef, or if a Puppet perspective will still be useful. Any thoughts?
I'll be doing a ShowHN as soon as it is published :)
If anyone wants to make suggestions about other content you'd like to see in the book, please let me know!
This was previously known as "Scalarium", a third-party solution developed by Peritor that added several features on top of AWS, including management of instances using Chef, infrastructure planning, and so on.
OpsWorks starts to battle one of the biggest reasons to go with PaaS providers such as Heroku: the difficulty of managing these services yourself.
This benefits both Amazon and the consumer. Lower prices and continuous reductions improve customer confidence in and use of the AWS platform, while consumers are simply happy to be paying less. Heroku and Google App Engine (even VPS providers) rarely decrease prices, making AWS an obvious contender.
Edit: accidentally wrote IaaS -- thanks Argonaut.
EDIT: Second nitpick: AWS OpsWorks is no Heroku. Amazon's already-released Elastic Beanstalk is the service that's supposed to replicate Heroku's functionality (there's a reason Heroku is still around, though).
Ideally I would right-click on an EBS volume, select "backup periodically" -> "every 10 minutes" -> "retain each for three days, then dailies for 30 days", and be done with it.
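AWS doesn't offer that as a right-click option, but the retention logic behind such a policy is easy to sketch. Below is a hypothetical helper that decides which snapshots are expendable; the actual create/delete calls against the EC2 API (e.g. via boto) are deliberately omitted:

```python
from datetime import timedelta

def snapshots_to_delete(snapshot_times, now):
    """Return the snapshot timestamps that can be deleted under the policy:
    keep every snapshot for 3 days, then one per day for 30 days."""
    keep = set()
    seen_days = set()
    for ts in sorted(snapshot_times):
        age = now - ts
        if age <= timedelta(days=3):
            keep.add(ts)                      # keep all recent snapshots
        elif age <= timedelta(days=30):
            day = ts.date()
            if day not in seen_days:          # keep the first snapshot of each day
                seen_days.add(day)
                keep.add(ts)
        # anything older than 30 days is never kept
    return [ts for ts in snapshot_times if ts not in keep]
```

Run from cron every 10 minutes, a script wrapping this would create a snapshot, then delete whatever this function returns.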
Chef is an excellent fit.
I believe AWS should concentrate on its core offering: being an awesome IaaS. It needs to fix its severe and long downtimes, unpredictable network performance, and other such core issues.
It needs some massaging on the web servers in order to actually respond without a 500 error
That said, I'm surprised by what's not included in the main interface (VPC, ELB, to start with).
Still... props to them, and I'm excited to see it mature.
I'm sure it'll be fixed eventually - I'm just letting people know that, right now, the walkthrough is unreliable.
This makes OpsWorks useless if you want to autoscale with any reasonable "nimbleness" - the amount of time it takes for a non-custom AMI to be bootstrapped with chef-client and then load and run all my standard and customized cookbooks is way too long. I need to be able to specify custom AMIs that are already largely prepared so they can boot fast. Also, the Chef bootstrap process is brittle: twice in as many weeks it has been broken by dependency failures (first a net-ssh dependency, then a JSON gem dependency).
There's no integration with Elastic Load Balancer, no integration with RDS or in fact any of the other AWS services except for deployment from S3.
I believe the general intent of this is to enable users of non-AWS services (yes, I know that's redundant) to work more efficiently with outside apps on Amazon - the example in their Layers guide uses Redis, for instance. (http://docs.aws.amazon.com/opsworks/latest/userguide/working...).
So this service basically says - "Just because we don't have a Redis Amazon Service for you to snap into doesn't mean you should not use AWS or look to other PaaS, instead, easily integrate other apps with OpsWorks."
Also, feels like a great move with Peritor/Scalarium and Chef.
I'm definitely grabbing the popcorn!
If you prove a market around AWS, what are the odds of getting this market gobbled by AWS themselves? They have a level of access to their own infrastructure that you don't have, let alone the access to their own (huge) customer base, integrated billing and branding.
So it would not surprise me if they used this same tactic with AWS.
Edit: here's a google news link to the article:
Edit: 1 or 2 of these were typed from memory - but it should be clear at a glance which one is which in the EC2 console.
(Note: I wouldn't see us using it in the near future anyway, since we already manage our machines on AWS with Puppet.)
Does anyone have anything positive to say about it?
I think this makes it easier to manage your own load balancers using instance storage without EBS.
(I'm trying to say that this platform is incredibly flexible, and you can reuse what's already out there. If you need support for X, Y, or Z, then you can likely write in support with Chef.)
Also, SSL on a per-app basis is confusing me - does that mean that each box individually is handling SSL termination, or is it done on the loadbalancer side?
I have a lot of reading to do before I can really understand what's happening here, I think.
I have a webapp I built that utilizes Rackspace's API and Chef. I have a Chef server that I use for all my configuration management, and a Rails app that talks to my Chef server to manipulate machines, or to Rackspace's API to spin up machines, and I am building "triggering" into it (low disk space or high CPU -> do X). I am able to change IP addresses, spin up a machine with a specific stack, etc. Granted, this isn't the default chef-server, but the ability to do all this stuff without being locked into AWS is there.
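The "triggering" idea above boils down to mapping metric thresholds to actions. A minimal sketch (all names and thresholds here are hypothetical, not from the actual app):

```python
# Hypothetical trigger rules: each maps a metric condition to an action name.
TRIGGERS = [
    {"metric": "disk_free_pct", "op": "lt", "threshold": 10, "action": "alert_low_disk"},
    {"metric": "cpu_pct",       "op": "gt", "threshold": 90, "action": "spin_up_instance"},
]

def fired_actions(metrics):
    """Return the actions whose trigger conditions the current metrics satisfy."""
    actions = []
    for t in TRIGGERS:
        value = metrics.get(t["metric"])
        if value is None:
            continue                          # metric not reported; skip the rule
        if (t["op"] == "lt" and value < t["threshold"]) or \
           (t["op"] == "gt" and value > t["threshold"]):
            actions.append(t["action"])
    return actions
```

The evaluation loop stays provider-agnostic; only the action handlers need to know whether they're talking to Rackspace's API or anything else, which is exactly the lock-in point being made.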
The implementation is open-source, too: http://github.com/dotcloud/hipache
Disclaimer: I work at dotCloud