AWS OpsWorks - Flexible Application Management in the Cloud Using Chef (aws.typepad.com)
208 points by jeffbarr 1386 days ago | hide | past | web | 77 comments | favorite



Wow, I have that sinking feeling that this OpsWorks stuff may have turned what was a coin-flip decision we made between Chef and Puppet (in which we chose Puppet) into one heavily weighted towards Chef, but it's hard to say without reading more.

My immediate question is how easy it would be to keep your Chef recipes in source control somewhere and have equally easy one click deployment to both AWS and to a local Vagrant development box. We currently have a similar environment set up with puppet and it makes testing sysadmin stuff pretty awesome.

Edit: already finding holes - can't deploy to cluster compute instances (a deal breaker) and deployment seems rather slow (15 minutes to deploy from scratch versus about 8 with normal puppet) :(


There's a vanilla Chef repo you can clone to store cookbooks, roles, and data bags in source control [0].

For testing on local VMs there's Opscode's bento [1], which automates creating Vagrant baseboxes and running recipes. You can specify Chef Solo for this, which speeds up testing significantly since it avoids round-trips to remote Chef or Hosted Chef servers.

[0] https://github.com/opscode/chef-repo

[1] https://github.com/opscode/bento


Previously been in your position. I chose Chef.

I think you shouldn't worry. I wouldn't be surprised if they end up supporting BOTH Chef and Puppet in the future - it's in their best interest to support both.


Why would OpsWorks' choice of configuration and deployment management software affect your own choice, unless you want some sort of hybrid?

I think this will be a boon to AWS users who are currently deploying manually, but I don't think very many people who already use deployment/config management software will want to switch and rely on AWS.

I don't think it's a puppet/cfengine/ansible/salt killer.


I think the big advantage here is really just how much you just get out of the box without sacrificing flexibility -- if you're deploying on AWS in the first place. This seems like it would also save us from writing a bunch of management scripts (around scaling, etc) which would be tied to AWS anyway (read: no inherent disadvantage since any solution in-house or not would require the AWS platform). I'm always looking for ways for us to reduce our code/services footprint and this would seem to do so without a huge amount of vendor lock-in.


I don't think it is.

Basically if you get your car from the dealership (AWS is great esp for EC2 and S3), you don't have to buy the tires from them too.

What I saw from the video does look interesting, but in the end, if you're going to be editing your configuration management recipes in git anyway, then you still have the same choice to make about which tool is best for you. Whether that's Chef or something like http://ansible.cc - you'll pick it because it's where you're going to spend most of your automation time.

I also suspect Amazon will feel that people may want to bring their own tools and will add more options over time.


We also decided to use Puppet, but I'm afraid this could generally change the configuration management world towards Chef.


It's easy, but also a bit hacky... because OpsWorks triggers multiple lifecycle events and also distributes the Chef config. If you solve the config problem, maybe with a static example, you're pretty much done.


This is the most interesting news I've heard from AWS in a long time. I don't mean to suggest that they have not produced any exciting news in a while, but this service is so exciting because it will fundamentally change how a lot of people use AWS.

Of particular interest to me is how this will affect platforms like Heroku. After the recent routing mesh debacle a number of people contacted me to discuss migrating their application from Heroku to AWS.

My Heroku -> AWS migration procedure is already pretty straightforward:

- identify Heroku services

- write Puppet manifests to manage EC2 instances

- write CloudFormation stacks to control other aspects of the infrastructure, e.g. autoscaling and S3 buckets

- write deploy scripts so clients can continue using git-based deploys
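For what it's worth, the CloudFormation step above can be sketched as a template built in code. Everything here (resource names, the placeholder AMI, the instance type, the sizes) is hypothetical and illustrative, not any client's real stack:

```python
def build_migration_template(bucket_name, min_size=1, max_size=4):
    """Build a minimal CloudFormation template covering the two
    resources mentioned above: an autoscaling group and an S3 bucket.
    All names and properties are placeholders."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppAutoScalingGroup": {
                "Type": "AWS::AutoScaling::AutoScalingGroup",
                "Properties": {
                    # CloudFormation expects these sizes as strings
                    "MinSize": str(min_size),
                    "MaxSize": str(max_size),
                    "AvailabilityZones": {"Fn::GetAZs": ""},
                    "LaunchConfigurationName": {"Ref": "AppLaunchConfig"},
                },
            },
            "AppLaunchConfig": {
                "Type": "AWS::AutoScaling::LaunchConfiguration",
                "Properties": {
                    "ImageId": "ami-12345678",  # placeholder AMI id
                    "InstanceType": "m1.small",
                },
            },
            "AssetsBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            },
        },
    }
```

You'd serialize the dict with `json.dumps` and feed it to CloudFormation's CreateStack call; the point is just that the whole stack lives in source control next to the deploy scripts.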

OpsWorks is only going to make this migration easier, and will have the additional benefit of letting my clients manage more of their own infrastructure without my help. Training people to change autoscaling group configuration via a web interface is one thing; training them to update CloudFormation stacks is rather less user-friendly.

From the clients I have spoken to, the key benefits of Heroku are a) git-based deploys and b) easy scaling configuration via a web interface. Both of these things are possible with AWS, albeit a little more difficult to set up - especially in the case of scaling configuration - and OpsWorks is going to close the gap even further.

If AWS can provide most of these services directly, will people still be willing to pay the Heroku premium?

There is one downside to the OpsWorks announcement, at least from my personal perspective. I'm currently writing a book on AWS System Administration, and I've almost finished the chapter describing various Puppet workflows. Like many other commenters I was undecided whether I should focus on Chef or Puppet, given their relatively similar feature set and popularity. I'm not sure if I've backed the wrong horse here and should rewrite the chapter using Chef, or if having a Puppet-perspective would be useful. Any thoughts?


I don't think you should throw out the stuff on Puppet. Perhaps expand and add how to do it with Chef too.


Thanks for the responses folks - it seems like a combined approach is preferable.

I'll be doing a ShowHN as soon as it is published :)

If anyone wants to make suggestions about other content you'd like to see in the book, please let me know!


totally agree. Even the CloudFormation docs have articles discussing both methods. Having both would be a benefit to all.


I would very much still want to read this book with puppet. Didn't really like chef all that much, far too many moving parts, though the convenience of this may sway me.


That's the exact same reason I prefer Puppet. Puppet seems to be very lean and simple.


excited to read the book! lemme know when it's available.


+1 for chef + puppet


The Amazon blog post is not very clear on this, but Werner Vogels' post is:

http://www.allthingsdistributed.com/2013/02/aws-opsworks.htm...

This was previously known as "Scalarium", a third-party solution developed by Peritor that added several features on top of AWS, including management of instances using Chef, infrastructure planning, etc.


AWS have owned the base cloud market with EC2 and S3. What I find impressive is that they're continuously lowering prices competitively[1] and expanding their services in a way that makes their base offering more attractive.

OpsWorks starts to battle one of the biggest reasons to go with PaaS[2] providers such as Heroku: the difficulty of managing these services yourself.

[1]: This benefits both Amazon and the consumer. Lower prices and continuous reductions improve customer confidence in and use of the AWS platform, whilst consumers are just happy they're paying less. Heroku and Google App Engine (even VPS providers) rarely decrease prices, making AWS certain to be considered.

[2]: Accidentally wrote IaaS -- thanks Argonaut.


Nitpick: Heroku is a PaaS.

EDIT: Second nitpick: AWS OpsWorks is no Heroku. AWS Elastic Beanstalk, which Amazon has already released, is supposed to replicate Heroku's functionality (there's a reason Heroku is still around, though).


Yeah, I noticed some of the similarities to Beanstalk myself and am kind of curious where they are going with this concept. It seems to me that instead of building a fully integrated Chef management platform with a fancy GUI (which was my initial thought), they just recreated Beanstalk with some major bonus features and perhaps a lot more control.


I wonder what is the easiest way to automatically snapshot EBS volumes? I don't want to run a script - scheduling, managing access keys, etc. are all too much hassle. Besides, running the script on the machine itself is unreliable - I want the setup to take one last snapshot after the instance dies.

Ideally I would right-click on an EBS volume, select "backup periodically" -> "every 10 minutes" -> "retain each for three days, then dailies for 30 days" and be done with it.
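The retention half of that wish is easy to express as a pure function. Here's a sketch of one way to decide which snapshots to keep under that policy; the function name and the exact tie-breaking rule (keep the newest snapshot of each older day) are my own assumptions, and the actual create/delete API calls are left out:

```python
from datetime import datetime, timedelta

def snapshots_to_keep(snapshot_times, now,
                      keep_all_for=timedelta(days=3),
                      keep_dailies_for=timedelta(days=30)):
    """Return the subset of snapshot timestamps to retain.

    Policy (as described above): keep every snapshot taken within
    `keep_all_for`; for older snapshots up to `keep_dailies_for`,
    keep only the newest snapshot of each calendar day.
    """
    keep = set()
    newest_per_day = {}
    for t in snapshot_times:
        age = now - t
        if age <= keep_all_for:
            keep.add(t)
        elif age <= keep_dailies_for:
            day = t.date()
            if day not in newest_per_day or t > newest_per_day[day]:
                newest_per_day[day] = t
        # anything older than keep_dailies_for is dropped
    keep.update(newest_per_day.values())
    return keep
```

A cron job (or, as the comment notes, ideally something running outside the instance) would then delete every snapshot not in the returned set.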


You can do this with ylastic - "scheduled tasks" - it's $25-50/mo (depending on the plan) for managing your whole AWS infrastructure, and it's basically an alternate web UI that autodiscovers your AWS footprint. It has some nice ways to manage autoscaling configurations, as well as migrating AMIs between regions. I do not work for ylastic or anything, but I have used it for 2 different clients and used the "scheduled tasks" feature to do periodic backups of EBS volumes just as you've described you'd like to do.


You can already do this with Bitnami Cloud Hosting http://bitnami.org/cloud (disclaimer: I work on the project)


Amazon is still committed to solving the wrong problem, which is "how do I run a lot of virtual machines", when the actual problem is "how do I deploy/scale/maintain complicated applications"

Chef is an excellent fit.


Amazon AWS is a low-level platform, not necessarily a full solution. They do solve the "right" problem, but not necessarily yours. That's where Heroku and (the now obviously acquired) Scalarium come in.


But OpsWorks is the answer for the problem.


Bad news for heroku...


What has always bothered me is that AWS comes up with a service which is half-baked and doesn't answer the true requirements of the application. I have used almost all of AWS's offerings in different contexts, and every time, when we really needed the action, it failed miserably.

I believe AWS should concentrate on its core offering: being an awesome IaaS. It needs to fix its severe and long downtimes, unpredictable network performance, and many such core issues.


Word of warning - the step-by-step deployment of a PHP application is full of errors, and the application itself doesn't work as expected from the document.

http://docs.aws.amazon.com/opsworks/latest/userguide/getting...

It needs some massaging on the web servers in order to actually respond without a 500 error.


I'm currently writing a blog post about how to spin up a simple Rails stack. I've got a broken instance in a boot-stop-terminate-boot loop. It's very beta.


I was hoping to do the same (albeit much easier to follow than their own walkthrough), but I'm going to sit back for a while and see what develops. These could be early teething problems, so I'm not going to judge them so soon.

That said, I'm surprised by what's not included in the main interface (VPC, ELB, to start with).


You hit the nail on the head (conversely, I was surprised to see them deploy HAProxy by default). The lack of VPC makes this far less appealing. I've been working on a scaling-capable VPC configuration with quite a few moving parts and got so giddy with excitement over the potential of OpsWorks, then I tested it out and realized it's nowhere near ready for prime time and is missing some major players.

Still... props to them, and I'm excited to see it mature.



Surely that's only because of the chef cookbooks used. As long as your cookbooks are working this should work.


I followed the guide exactly, so I could get a real feel for the system as I progressed. It wasn't possible to follow the guide like this, as I had to SSH onto the boxes to find out what was going wrong.

I'm sure it'll be fixed eventually - I'm just letting people know that, right now, the walkthrough is unreliable.


Hi Daniel, we'd like to address the issue you're having with the walk-through. Can you send feedback using the console's feedback button and I'll contact you directly?


I received your email so I'll ping you back directly. Thanks for reaching out!


https://aws.amazon.com/opsworks/faqs/#amis : "Q: Can I use my own AMIs? No, however you can customize the AMIs OpsWorks supports using Chef scripts to install agents and other software that you require."

This makes OpsWorks useless if you want to autoscale with any reasonable "nimbleness" - the amount of time it takes to bootstrap a non-custom AMI with the Chef client and then load and run all my standard and customized cookbooks is way too long. I need to be able to specify custom AMIs that are already largely prepared so they can boot fast. Also, the Chef bootstrap process is brittle: twice in as many weeks it has been broken by dependency failures (first a net-ssh dependency, then a JSON gem dependency).


Looks interesting, but it seems strange to me that it's something of an island.

There's no integration with Elastic Load Balancer, no integration with RDS or in fact any of the other AWS services except for deployment from S3.


Yeah, I spotted that too; it's interesting that this service puts the focus on coordination + orchestration with additional app servers versus leveraging existing AWS services.

I believe the general intent is to enable users of non-AWS services (yes, I know that's redundant) to more efficiently work with outside apps on Amazon - the example in their Layers guide is specific to Redis, for example. (http://docs.aws.amazon.com/opsworks/latest/userguide/working...).

So this service basically says: "Just because we don't have a Redis Amazon Service for you to snap into doesn't mean you should not use AWS or look to other PaaS - instead, easily integrate other apps with OpsWorks."


let's see how the service grows


Something in my gut tells me this is a game changer, and we may yet see a great PaaS/IaaS consumer show in the coming weeks/months.

Also, feels like a great move with Peritor/Scalarium and Chef.

I'm definitely grabbing the popcorn!


I don't see any press release from OpsCode, which makes me think this is not a joint venture with them, which in turn makes me wonder how dangerous it is to be part of the AWS ecosystem...

If you prove a market around AWS, what are the odds of getting this market gobbled by AWS themselves? They have a level of access to their own infrastructure that you don't have, let alone the access to their own (huge) customer base, integrated billing and branding.


Amazon has been repeatedly accused of exactly what you describe with Amazon Marketplace. This is the first article I was able to find, but I've seen similar articles over the past few years (One of my companies is an ecommerce site that considered selling through Amazon marketplace) http://online.wsj.com/article/SB1000142405270230444140457748...

So it would not surprise me if they used this same tactic with AWS.

Edit: here's a google news link to the article: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&...


I doubt OpsCode is getting much revenue from OpsWorks, or any, since AWS is giving it out for free. It's hard to compete with free. This tweet from OpsCode reads like they were blindsided and are trying to catch up: https://twitter.com/opscode/status/303941706888409089


If you want to undo your setup and wait for a while for this system to mature a bit, you need to delete the security groups that are auto-created in your account in this order:

AWS-OpsWorks-Monitoring-Master-Server

AWS-OpsWorks-DB-Master-Server

AWS-OpsWorks-MemcacheD-Server

AWS-OpsWorks-Custom-Server

AWS-OpsWorks-Blank-Server

AWS-OpsWorks-PHP-App-Server

AWS-OpsWorks-Default-Server

AWS-OpsWorks-Rails-App-Server

AWS-OpsWorks-nodejs-App-Server

AWS-OpsWorks-Web-Server

AWS-OpsWorks-LB-Server

Edit: 1 or 2 of these were typed from memory - but it should be clear at a glance which one is which in the EC2 console.
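A scripted version of that cleanup might look like the following; the names and ordering are copied from the list above (including the one or two typed from memory), and I haven't run this against a real account, so treat it as a sketch:

```python
# Group names copied from the list above, in the stated deletion order.
OPSWORKS_SECURITY_GROUPS = [
    "AWS-OpsWorks-Monitoring-Master-Server",
    "AWS-OpsWorks-DB-Master-Server",
    "AWS-OpsWorks-MemcacheD-Server",
    "AWS-OpsWorks-Custom-Server",
    "AWS-OpsWorks-Blank-Server",
    "AWS-OpsWorks-PHP-App-Server",
    "AWS-OpsWorks-Default-Server",
    "AWS-OpsWorks-Rails-App-Server",
    "AWS-OpsWorks-nodejs-App-Server",
    "AWS-OpsWorks-Web-Server",
    "AWS-OpsWorks-LB-Server",
]

def delete_groups(ec2, names=OPSWORKS_SECURITY_GROUPS):
    """Delete the auto-created OpsWorks security groups, preserving
    the order given above. `ec2` is any object with a boto3-style
    delete_security_group(GroupName=...) method."""
    for name in names:
        ec2.delete_security_group(GroupName=name)
```

With boto3 and credentials configured, you'd invoke it as `delete_groups(boto3.client("ec2"))` - though note the follow-up comment below warns against doing this at all right now.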


For the record - don't do this. The system doesn't seem to like it at the moment.


The lack of VPC support and inability to configure custom security groups would prevent me from using it for right now. It seems like a good alternative to PaaS out there though, and I'm sure it will be rounded out with more features in the coming months.

(Note: I wouldn't see us using it in the near future anyway, since we already manage our machines on AWS with Puppet.)


Thanks for the feedback on the need for VPC support. We're listening to customer input to prioritize our roadmap. You can add custom security groups to each layer in the layer configuration.


I find it very odd that they've built in the HAProxy load balancer option, but there's no mention of ELB at all.


I dunno. I find it more distressing that everyone that starts with ELB tends to move off of it at some point (most recently -- filepicker.io).

Does anyone have anything positive to say about it?


I think this provides a lower level mechanism for managing nodes. Both ELB and RDS use EBS and have been hit in the past by the knock on effects of EBS outages.

I think this makes it easier to manage your own load balancers using instance storage without EBS.


I had to look into the docs to find the "ELB is not supported at this time" statement.


FWIW, if you do need ELB support right away (and keep in mind that you'll have to look at it from each server's point of view), you can probably use https://github.com/opscode-cookbooks/aws (and use the elastic_lb resource), and hook it into your own cookbook for setting up whatever ELBs you want (then set up your custom cookbook repo for your stack).

(I'm trying to say that this platform is incredibly flexible, and you can reuse what's already out there. If you need support for X, Y, or Z, then you can likely write in support with Chef.)


Not only that, but there's no allowance made for micros either.

Also, SSL on a per-app basis is confusing me - does that mean that each box individually is handling SSL termination, or is it done on the load balancer side? I have a lot of reading to do before I can really understand what's happening here, I think.


The lack of micro instances is a real shame. I suspect that's because the time taken to do a full Chef install run on a micro might be vast.


I've used t1.micros extensively with chef -- it's not as bad as you might think.


SSL is handled on a per-app basis, not in the load balancer. And by the way, you can hook into everything by rewriting or overriding the recipes.


That's what I figured. It seems odd that they'd enforce that approach from the start, though.


OpsWorks looks quite impressive from my non-devops perspective. And most interesting of all is the fact that it's free! I understand that it's built to encourage more usage of AWS, but there are plenty of 3rd-party services out there which do similar things but are not free.


Do AWS Elastic Beanstalk and OpsWorks integrate in some way, or is OpsWorks just more flexible?


There is currently no integration between AWS Elastic Beanstalk and AWS OpsWorks. You can see the differences between the services here: https://aws.amazon.com/application-management/


second


This still has a long way to go before it is a threat to Scalr / RightScale, but as it matures, that is definitely going to happen. Where companies like Scalr and RightScale add value is the ability to deploy your cloud across multiple providers.


Confused... is this an alternative for chef server?


Sort of, but not exactly. It's not API-compatible with Chef server, and I believe you don't get the nice search syntax for grabbing node information across your fleet matching some criteria. But you still get info on your other machines, and you do get nice orchestration across your fleet.


You get much better orchestration than you could with Chef server. You also have much faster interactions and better lifecycles. As for searching... you get every piece of information about every machine in your fleet. It is much, much better than Chef server.


sorry, do you mean chef server is better, or this looks better?


OpsWorks gives you better features. Try starting a small stack, and then also have a look at the default cookbooks - there you can see what you can do with OpsWorks: https://github.com/aws/opsworks-cookbooks


I'm not clear on why you get better features.

I have a webapp that I built that utilizes Rackspace's API and Chef. I have a Chef server that I use for all my configuration management, a Rails app that I use to talk to my Chef server to manipulate machines (or Rackspace's API to spin up machines), and I am building "triggering" into it (low disk space or high CPU -> do X). I am able to change IP addresses, spin up a machine with a specific stack, etc. Granted, this isn't the default chef-server, but the ability to do all this stuff, without being locked into AWS, is there.


You are right about this; as it is now, it looks more like a vendor lock-in gimmick.


Does anyone know if this will support web sockets? Major issue (for me) in regards to using heroku is the lack of web sockets.


Given that this doesn't support ELBs yet and you'll need to roll your own load balancer with HAProxy, there's nothing stopping websockets from working.


dotCloud is a PaaS which supports websockets: http://docs.dotcloud.com/0.9/guides/websockets/

The implementation is open-source, too: http://github.com/dotcloud/hipache

Disclaimer: I work at dotCloud


If you customize an nginx layer - newer nginx supports websockets... I don't know if this answers your question, but look: http://nginx.com/news/nginx-websockets.html


I would be more excited if it supported micro instances, but as it stands, I'm still very tempted.


Heroku, GAE and Appharbor, brace yourselves



